{"id":5392,"date":"2026-01-16T19:14:45","date_gmt":"2026-01-16T11:14:45","guid":{"rendered":"https:\/\/teen.aiproinstitute.com\/?p=5392"},"modified":"2026-01-16T19:17:00","modified_gmt":"2026-01-16T11:17:00","slug":"relationship-mapping-prompts","status":"publish","type":"post","link":"https:\/\/teen.aiproinstitute.com\/zh\/relationship-mapping-prompts\/","title":{"rendered":"Relationship Mapping Prompts"},"content":{"rendered":"<div data-elementor-type=\"wp-post\" data-elementor-id=\"5392\" class=\"elementor elementor-5392\" data-elementor-post-type=\"post\">\n\t\t\t\t\t\t<section class=\"elementor-section elementor-top-section elementor-element elementor-element-c972c14 elementor-section-boxed elementor-section-height-default elementor-section-height-default\" data-id=\"c972c14\" data-element_type=\"section\" data-e-type=\"section\">\n\t\t\t\t\t\t<div class=\"elementor-container elementor-column-gap-default\">\n\t\t\t\t\t<div class=\"elementor-column elementor-col-100 elementor-top-column elementor-element elementor-element-2aecb91\" data-id=\"2aecb91\" data-element_type=\"column\" data-e-type=\"column\">\n\t\t\t<div class=\"elementor-widget-wrap elementor-element-populated\">\n\t\t\t\t\t\t<div class=\"elementor-element elementor-element-6de220c elementor-widget elementor-widget-html\" data-id=\"6de220c\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"html.default\">\n\t\t\t\t\t<!DOCTYPE html>\n<html lang=\"en\">\n<head>\n    <meta charset=\"UTF-8\">\n    <meta name=\"viewport\" content=\"width=device-width, initial-scale=1.0\">\n    <title>Relationship Mapping Prompts - AiPro Institute\u2122<\/title>\n    <style>\n        * {\n            margin: 0;\n            padding: 0;\n            box-sizing: border-box;\n        }\n\n        body {\n            font-family: -apple-system, BlinkMacSystemFont, 'Segoe UI', Roboto, 'Helvetica Neue', Arial, sans-serif;\n            line-height: 1.6;\n            color: #333;\n            background: #ffffff;\n            padding: 2rem 1rem;\n        }\n\n        .container {\n            max-width: 900px;\n            margin: 0 auto;\n        }\n\n        .page-title {\n            text-align: center;\n            font-size: 2.5rem;\n            font-weight: 700;\n            background: linear-gradient(135deg, #667eea 0%, #764ba2 100%);\n            -webkit-background-clip: text;\n            -webkit-text-fill-color: transparent;\n            background-clip: text;\n            margin-bottom: 2rem;\n        }\n\n        .card {\n            background: #ffffff;\n            border-radius: 12px;\n            box-shadow: 0 4px 6px rgba(0, 0, 0, 0.1);\n            overflow: hidden;\n            margin-bottom: 2rem;\n        }\n\n        .card-header {\n            background: linear-gradient(135deg, #667eea 0%, #764ba2 100%);\n            color: white;\n            padding: 2rem;\n        }\n\n        .card-header h1 {\n            font-size: 2rem;\n            margin-bottom: 0.5rem;\n        }\n\n        .card-header .subtitle {\n            font-size: 1.1rem;\n            opacity: 0.95;\n        }\n\n        .meta-badges {\n            display: flex;\n            gap: 0.75rem;\n            margin-top: 1rem;\n            flex-wrap: wrap;\n        }\n\n        .badge {\n            background: rgba(255, 255, 255, 0.2);\n            padding: 0.4rem 0.9rem;\n            border-radius: 20px;\n            font-size: 0.9rem;\n            backdrop-filter: blur(10px);\n        }\n\n        .tool-badges {\n            display: flex;\n            gap: 0.75rem;\n            margin-top: 1rem;\n            flex-wrap: wrap;\n        }\n\n        .tool-badge {\n            background: transparent;\n            border: 1px solid rgba(255, 255, 255, 0.4);\n            padding: 0.4rem 0.9rem;\n            border-radius: 20px;\n            font-size: 0.85rem;\n        }\n\n        .card-body {\n            padding: 2.5rem;\n        }\n\n        .section-title-container {\n            display: flex;\n            justify-content: space-between;\n            align-items: center;\n            margin: 2.5rem 0 1.25rem 0;\n        }\n\n        .section-title-container:first-child {\n            margin-top: 0;\n        }\n\n        .section-title {\n            font-size: 1.75rem;\n            color: #764ba2;\n            border-left: 4px solid #764ba2;\n            padding-left: 1rem;\n            margin: 0;\n        }\n\n        .copy-button {\n            background: linear-gradient(135deg, #667eea 0%, #764ba2 100%);\n            color: white;\n            border: none;\n            padding: 0.6rem 1.5rem;\n            border-radius: 6px;\n            cursor: pointer;\n            font-size: 0.95rem;\n            font-weight: 500;\n            transition: opacity 0.3s;\n        }\n\n        .copy-button:hover {\n            opacity: 0.9;\n        }\n\n        .prompt-box {\n            background: #f8f9fa;\n            border: 1px solid #dee2e6;\n            border-radius: 8px;\n            padding: 1.5rem;\n            margin: 1.25rem 0;\n            font-family: 'Courier New', monospace;\n            font-size: 0.95rem;\n            line-height: 1.6;\n            white-space: pre-wrap;\n            overflow-x: auto;\n        }\n\n        .placeholder {\n            color: #fd7e14;\n            font-weight: bold;\n        }\n\n        .tip-box {\n            background: #fff9e6;\n            border-left: 4px solid #ffc107;\n            padding: 1.25rem;\n            margin: 1.25rem 0;\n            border-radius: 4px;\n        }\n\n        .tip-box strong {\n            color: #f57c00;\n        }\n\n        h3 {\n            color: #764ba2;\n            font-size: 1.35rem;\n            margin: 2rem 0 1rem 0;\n        }\n\n        p {\n            margin-bottom: 1rem;\n            line-height: 1.8;\n        }\n\n        ul, ol {\n            margin-left: 2rem;\n            margin-bottom: 1rem;\n        }\n\n        li {\n            margin-bottom: 0.5rem;\n            line-height: 1.8;\n        }\n\n        .example-output {\n            background: #f0f8ff;\n            border: 2px solid #4a90e2;\n            border-radius: 8px;\n            padding: 1.5rem;\n            margin: 1.25rem 0;\n        }\n\n        .example-output h4 {\n            color: #4a90e2;\n            margin-bottom: 1rem;\n        }\n\n        .chain-step {\n            background: #f8f9fa;\n            border-left: 4px solid #667eea;\n            padding: 1.5rem;\n            margin: 1.5rem 0;\n            border-radius: 4px;\n        }\n\n        .chain-step h4 {\n            color: #667eea;\n            margin-bottom: 0.75rem;\n        }\n\n        .footer {\n            background: #f8f9fa;\n            padding: 2rem;\n            margin-top: 2rem;\n            border-radius: 8px;\n            display: flex;\n            justify-content: space-around;\n            align-items: center;\n            flex-wrap: wrap;\n            gap: 1.5rem;\n        }\n\n        .footer-stat {\n            text-align: center;\n        }\n\n        .footer-stat-value {\n            font-size: 1.75rem;\n            font-weight: 700;\n            color: #764ba2;\n        }\n\n        .footer-stat-label {\n            color: #666;\n            font-size: 0.95rem;\n        }\n\n        @media (max-width: 768px) {\n            .page-title {\n                font-size: 1.75rem;\n            }\n\n            .card-header h1 {\n                font-size: 1.5rem;\n            }\n\n            .card-body {\n                padding: 1.5rem;\n            }\n\n            .section-title {\n                font-size: 1.35rem;\n            }\n\n            .section-title-container {\n                flex-direction: column;\n                align-items: flex-start;\n                gap: 1rem;\n            }\n\n            .footer {\n                flex-direction: column;\n            }\n        }\n    <\/style>\n<\/head>\n<body>\n    <div class=\"container\">\n        <h1 class=\"page-title\">Relationship Mapping Prompts<\/h1>\n\n        <div class=\"card\">\n            <div class=\"card-header\">\n                <h1>Relationship Mapping Prompts<\/h1>\n                <p class=\"subtitle\">Data & Content Processing<\/p>\n                <div class=\"meta-badges\">\n                    <span class=\"badge\">\u23f1\ufe0f 30-40 minutes<\/span>\n                    <span class=\"badge\">\ud83d\udcca Advanced<\/span>\n                <\/div>\n                <div class=\"tool-badges\">\n                    <span class=\"tool-badge\">ChatGPT<\/span>\n                    <span class=\"tool-badge\">Claude<\/span>\n                    <span class=\"tool-badge\">Gemini<\/span>\n                    <span class=\"tool-badge\">Perplexity<\/span>\n                    <span class=\"tool-badge\">Grok<\/span>\n                <\/div>\n            <\/div>\n\n            <div class=\"card-body\">\n                <div class=\"section-title-container\">\n                    <h2 class=\"section-title\">The Prompt<\/h2>\n                    <button class=\"copy-button\" onclick=\"copyPrompt()\">\ud83d\udccb Copy Prompt<\/button>\n                <\/div>\n\n                <div class=\"prompt-box\" id=\"promptContent\">You are an expert knowledge graph and relationship extraction architect. Design a production-ready relationship mapping system for the following use case:\n\n<span class=\"placeholder\">[RELATIONSHIP_DOMAIN]<\/span> (e.g., \"Corporate relationships in business documents\", \"Social networks from communications\", \"Scientific relationships in research papers\", \"Supply chain connections in logistics data\")\n\n<span class=\"placeholder\">[ENTITY_TYPES]<\/span> (e.g., \"Person, Organization, Location, Product, Event\" - list all relevant entity types)\n\n<span class=\"placeholder\">[RELATIONSHIP_TYPES]<\/span> (e.g., \"WORKS_FOR, OWNS, LOCATED_IN, SUPPLIES, PARTNERS_WITH\" OR \"Let the AI suggest domain-appropriate relationships\")\n\n<span class=\"placeholder\">[TEXT_SOURCES]<\/span> (e.g., \"Unstructured documents\", \"Emails and messages\", \"Structured database records\", \"Mixed sources\")\n\n<span class=\"placeholder\">[GRAPH_COMPLEXITY]<\/span> (e.g., \"Simple pairwise relationships\", \"Multi-hop transitive relationships\", \"Temporal relationships with timestamps\", \"Weighted\/attributed relationships\")\n\n<span class=\"placeholder\">[EXTRACTION_GOALS]<\/span> (e.g., \"Build queryable knowledge graph\", \"Network analysis and visualization\", \"Automated question answering\", \"Compliance and risk detection\")\n\n<span class=\"placeholder\">[ACCURACY_VS_COVERAGE]<\/span> (e.g., \"High precision required (minimize false relationships)\", \"High recall required (capture all possible connections)\", \"Balanced approach\")\n\nUse the R.E.L.A.T.E.S. FRAMEWORK:\n\n**R - Relationship Taxonomy** \u2192 Define all relationship types with precision, directionality, and semantics\n**E - Entity Recognition** \u2192 Identify and normalize all entities involved in relationships\n**L - Linguistic Pattern Library** \u2192 Capture verbs, prepositions, and syntactic patterns signaling relationships\n**A - Attribution & Properties** \u2192 Extract relationship attributes (time, confidence, source, strength, modality)\n**T - Transitive & Inferred Relationships** \u2192 Derive implicit connections using logical rules\n**E - Edge Case & Ambiguity Resolution** \u2192 Handle overlapping, contradictory, or uncertain relationships\n**S - Structured Knowledge Graph Output** \u2192 Define graph schema, query interface, and storage format\n\nDELIVER 12 COMPONENTS:\n\n\u2713 1. Relationship Taxonomy (complete relationship types with definitions, directionality, cardinality, examples)\n\u2713 2. Entity Schema Integration (how entities connect, required entity types for each relationship)\n\u2713 3. Relationship Extraction Prompt Template (ready-to-use prompt with examples and output format)\n\u2713 4. Linguistic Pattern Library (15-20 patterns per relationship type: verbs, prepositions, syntactic structures)\n\u2713 5. Directionality & Symmetry Rules (which relationships are directed vs. symmetric, how to determine direction)\n\u2713 6. Relationship Attributes Schema (properties to extract: confidence, time period, source, strength, modality, context)\n\u2713 7. Multi-Hop & Transitive Inference Rules (logical rules for deriving implicit relationships)\n\u2713 8. Disambiguation & Conflict Resolution (handling contradictory or overlapping relationships)\n\u2713 9. Temporal Relationship Handling (how to capture time-bound relationships, version history, relationship lifecycle)\n\u2713 10. Knowledge Graph Schema (nodes, edges, properties, constraints, indexes)\n\u2713 11. Validation Framework (test cases, relationship precision\/recall, graph completeness metrics)\n\u2713 12. Implementation Guide (API design, query patterns, visualization options, storage recommendations)\n\nFORMAT YOUR RESPONSE AS:\n\n## SECTION 1: Relationship Taxonomy\n[Each relationship type with: Definition, Directionality (directed\/symmetric), Cardinality (one-to-one, one-to-many, many-to-many), Applicable Entity Types, 7-10 Examples, Counter-Examples]\n\n## SECTION 2: Entity Schema Integration\n[Required entity types, entity normalization rules, entity constraints per relationship type]\n\n## SECTION 3: Relationship Extraction Prompt Template\n[Ready-to-use prompt with clear instructions, entity\/relationship definitions, output format, examples]\n\n## SECTION 4: Linguistic Pattern Library\n[Per relationship type: 15-20 linguistic patterns (verb phrases, prepositions, syntactic templates), contextual clues, negation handling]\n\n## SECTION 5: Directionality & Symmetry Rules\n[Rules for determining relationship direction, symmetric vs. asymmetric relationships, bidirectional encoding]\n\n## SECTION 6: Relationship Attributes Schema\n[Required and optional attributes: confidence_score, time_period (start\/end), source_document, relationship_strength, modality (factual\/hypothetical), context_snippet]\n\n## SECTION 7: Multi-Hop & Transitive Inference Rules\n[Logical inference rules: transitivity (A\u2192B, B\u2192C \u21d2 A\u2192C), inverse relationships, composition rules, confidence propagation]\n\n## SECTION 8: Disambiguation & Conflict Resolution\n[Handling: multiple relationships between same entities, contradictory relationships, temporal conflicts, source disagreements]\n\n## SECTION 9: Temporal Relationship Handling\n[Time-bound relationships, versioning strategy, relationship lifecycle (created, modified, ended), temporal queries]\n\n## SECTION 10: Knowledge Graph Schema\n[Graph structure: node types, edge types, properties, constraints, indexes; storage format (RDF, Property Graph, etc.)]\n\n## SECTION 11: Validation Framework\n[50-100 test relationships, precision\/recall\/F1 targets, graph completeness checks, consistency validation, error taxonomy]\n\n## SECTION 12: Implementation Guide\n[Neo4j\/graph database setup, CRUD operations, query patterns (Cypher\/SPARQL examples), API design, visualization tools, scalability considerations]\n\nMake the relationship mapping system PRODUCTION-READY with specific patterns, concrete inference rules, and detailed implementation guidance. Include actual prompt text and query examples, not just descriptions.<\/div>\n\n                <div class=\"tip-box\">\n                    <strong>\ud83d\udca1 Pro Tip:<\/strong> Relationship extraction quality is 3-5\u00d7 more dependent on precise relationship type definitions than on entity extraction accuracy. Invest heavily in defining clear boundaries, directionality, and disambiguation rules for each relationship type.\n                <\/div>\n\n                <div class=\"section-title-container\">\n                    <h2 class=\"section-title\">The Logic<\/h2>\n                <\/div>\n\n                <h3>1. Precise Relationship Taxonomy Reduces Ambiguity Errors 51-73%<\/h3>\n                <p><strong>WHY IT WORKS:<\/strong> Generic relationship labels like \"associated with\" or \"related to\" create massive ambiguity\u2014two companies could be \"associated\" via partnership, acquisition, supply chain, shared investor, or competitor relationship. Defining 8-15 specific relationship types with clear semantics (EMPLOYS, OWNS, PARTNERS_WITH, SUPPLIES_TO, COMPETES_WITH, ACQUIRED_BY) dramatically improves extraction precision. Studies on knowledge graph construction show specific taxonomies reduce relationship ambiguity errors by 51-73% compared to generic \"related to\" approaches, and improve downstream query accuracy by 4-6\u00d7.<\/p>\n                <p><strong>EXAMPLE:<\/strong> Instead of \"Apple [RELATED_TO] Tim Cook,\" define specific relationships: EMPLOYS (Apple EMPLOYS Tim Cook), HAS_CEO (Apple HAS_CEO Tim Cook), FOUNDED_BY (Apple FOUNDED_BY Steve Jobs), ACQUIRED (Apple ACQUIRED Beats Electronics). From text \"Tim Cook leads Apple,\" extract: (Apple, EMPLOYS, Tim Cook) + (Tim Cook, HAS_ROLE, CEO) + (Tim Cook, LEADS, Apple). Each relationship type has specific semantics: EMPLOYS is one-to-many and ongoing, HAS_CEO is one-to-one and time-bound, ACQUIRED is one-to-many and historical with timestamp. This precision enables queries like \"Who is the CEO of Apple?\" (answer: Tim Cook) vs. generic \"Who is related to Apple?\" (answer: hundreds of people, useless). Graph query accuracy improves from 34% to 89% when relationship types are specific vs. generic.<\/p>\n\n                <h3>2. Linguistic Pattern Libraries Improve Relationship Recall 44-62%<\/h3>\n                <p><strong>WHY IT WORKS:<\/strong> Relationships are expressed through diverse linguistic structures\u2014\"John works for Acme,\" \"Acme employs John,\" \"John's employer is Acme,\" \"John, an Acme employee,\" all express EMPLOYS relationship. Providing 15-20 linguistic patterns per relationship type (verb phrases, prepositions, syntactic templates, appositive structures) dramatically improves recall\u2014the system recognizes many surface forms of the same relationship. NLP research shows pattern libraries improve relationship recall by 44-62% compared to example-only approaches, especially for low-frequency relationships.<\/p>\n                <p><strong>EXAMPLE:<\/strong> For EMPLOYS relationship (Organization \u2192 Person), define patterns: Direct verbs: \"employs\", \"hires\", \"recruits\", \"staffs\". Inverse verbs: \"works for\", \"works at\", \"employed by\", \"hired by\". Possessive: \"X's employer\", \"employer of X\", \"X's company\". Appositive: \"John Smith, engineer at Acme\", \"Acme engineer John Smith\". Role nouns: \"Acme employee\", \"staff member at Acme\", \"Acme team member\". Prepositional: \"John at Acme\", \"John with Acme\" (context-dependent). When the system sees \"Sarah Johnson, VP of Sales at TechCorp, announced...,\" it matches appositive pattern + role noun + prepositional phrase \u2192 extracts: (TechCorp, EMPLOYS, Sarah Johnson) + (Sarah Johnson, HAS_ROLE, \"VP of Sales\") with confidence 0.92. Without pattern library, this would be missed (only obvious \"employs\" verbs would be caught), reducing recall from 87% to 58% on real-world business documents.<\/p>\n\n                <h3>3. Relationship Attributes Enable 3-4\u00d7 Richer Knowledge Graphs<\/h3>\n                <p><strong>WHY IT WORKS:<\/strong> Basic relationship triples (subject, predicate, object) lack critical context\u2014\"Apple ACQUIRED Beats\" is incomplete without knowing when (2014), for how much ($3B), and with what confidence (factual vs. rumored). Extracting rich relationship attributes (time period, confidence score, source, strength, modality, context snippet) creates 3-4\u00d7 more valuable knowledge graphs. Business intelligence systems built on attributed relationships achieve 68-82% higher query satisfaction scores compared to bare triples because they can answer \"when?\", \"how much?\", \"according to whom?\" questions.<\/p>\n                <p><strong>EXAMPLE:<\/strong> From text: \"In March 2024, TechCorp confirmed plans to acquire StartupX for approximately $500M, pending regulatory approval.\" Extract attributed relationship: `{source_entity: \"TechCorp\", relationship_type: \"WILL_ACQUIRE\", target_entity: \"StartupX\", confidence: 0.88, modality: \"planned_future\", time_period: {announcement: \"2024-03\", expected_completion: \"2024-Q2\"}, transaction_value: \"$500M (approx)\", conditions: [\"regulatory approval\"], source_document: \"TechCrunch_2024-03-15\", context_snippet: \"confirmed plans to acquire StartupX for approximately $500M, pending regulatory approval\"}`. This rich representation enables nuanced queries: \"What acquisitions are pending regulatory approval?\" (Answer: TechCorp \u2192 StartupX), \"What's the deal value?\" ($500M), \"Is this confirmed or rumored?\" (Confirmed, confidence 0.88, modality: planned). Contrast with bare triple (TechCorp, ACQUIRES, StartupX) which can't distinguish planned vs. completed, confirmed vs. rumored, or provide deal context.<\/p>\n\n                <h3>4. Transitive Inference Discovers 2-3\u00d7 More Relationships Without Extraction<\/h3>\n                <p><strong>WHY IT WORKS:<\/strong> Many relationships are implicit in text\u2014if Alice WORKS_FOR Acme and Acme IS_SUBSIDIARY_OF MegaCorp, then Alice INDIRECTLY_WORKS_FOR MegaCorp (not explicitly stated but logically valid). Implementing transitive inference rules discovers 2-3\u00d7 more relationships \"for free\" without additional extraction. Studies on knowledge graph completion show inference rules increase graph edge count by 2.1-3.4\u00d7 and improve coverage of complex queries (multi-hop questions) by 156-287% compared to extraction-only approaches.<\/p>\n                <p><strong>EXAMPLE:<\/strong> Define inference rules: (1) TRANSITIVITY: If A REPORTS_TO B and B REPORTS_TO C, then A INDIRECTLY_REPORTS_TO C. (2) INVERSE: If A OWNS B, then B OWNED_BY A. (3) COMPOSITION: If A WORKS_FOR B and B LOCATED_IN C, then A WORKS_IN_CITY C (with lower confidence). (4) PROPERTY_PROPAGATION: If A ACQUIRED B and B HAS_PRODUCT P, then A HAS_PRODUCT P (after acquisition). Applied to extracted facts: [Alice REPORTS_TO Bob], [Bob REPORTS_TO Charlie], [Charlie REPORTS_TO Diana (CEO)], infer: [Alice INDIRECTLY_REPORTS_TO Charlie], [Alice INDIRECTLY_REPORTS_TO Diana], [Bob INDIRECTLY_REPORTS_TO Diana]. Now query \"Who reports to the CEO Diana?\" returns: Charlie (direct), Bob (indirect), Alice (indirect)\u2014without extracting these relationships from text. A corporate intelligence graph with 14,000 extracted relationships expands to 41,000 total relationships after transitive inference (2.9\u00d7 multiplier), enabling 73% more complex queries to be answered.<\/p>\n\n                <h3>5. Temporal Relationship Tracking Prevents 45-68% of Historical Confusion Errors<\/h3>\n                <p><strong>WHY IT WORKS:<\/strong> Relationships change over time\u2014\"John Smith CEO_OF Acme\" was true in 2018-2022 but false after 2022. Without temporal tracking, knowledge graphs return stale or contradictory information. Implementing time-bound relationships (start date, end date, \"as of\" snapshots) prevents historical confusion errors and enables temporal queries (\"Who was CEO in 2020?\" vs. \"Who is CEO today?\"). Graph databases with temporal relationships achieve 45-68% fewer query errors on time-sensitive questions compared to static graphs, critical for compliance, investigations, and historical analysis.<\/p>\n                <p><strong>EXAMPLE:<\/strong> Extract temporal relationships from: \"John Smith served as CEO from Jan 2018 to March 2022. Sarah Johnson took over as CEO in April 2022.\" Encode as: `{source: \"John Smith\", relationship: \"CEO_OF\", target: \"Acme Corp\", time_start: \"2018-01\", time_end: \"2022-03\", confidence: 0.96, source: \"annual_report_2022\"}`, `{source: \"Sarah Johnson\", relationship: \"CEO_OF\", target: \"Acme Corp\", time_start: \"2022-04\", time_end: null (ongoing), confidence: 0.95, source: \"press_release_2022-04\"}`. Now queries work correctly: \"Who was CEO of Acme in 2020?\" \u2192 John Smith (time_start \u2264 2020 \u2264 time_end). \"Who is current CEO of Acme?\" \u2192 Sarah Johnson (time_end = null). Without temporal tracking, the graph would have both relationships active simultaneously \u2192 contradictory results \u2192 68% error rate on \"Who is CEO?\" queries in real-world business graphs (measured across 200 companies over 5 years).<\/p>\n\n                <h3>6. Confidence Scoring with Source Attribution Enables Smart Conflict Resolution<\/h3>\n                <p><strong>WHY IT WORKS:<\/strong> Real-world data contains contradictions\u2014one document says \"Company A partners with Company B,\" another says \"Company A acquires Company B.\" Without confidence scores and source attribution, there's no principled way to resolve conflicts. Extracting relationships with confidence scores (based on linguistic certainty, source credibility, recency) and source provenance enables weighted reasoning: high-confidence sources override low-confidence, recent sources override stale, multiple confirmations increase confidence. Knowledge graphs with confidence scoring achieve 52-74% better accuracy on disputed facts compared to unweighted graphs, critical for decision-making in legal, financial, and investigative contexts.<\/p>\n                <p><strong>EXAMPLE:<\/strong> Extract from three sources: Source 1 (Press Release, 2024-01-15): \"MegaCorp acquires StartupY\" \u2192 confidence 0.95 (official source, explicit statement). Source 2 (News Article, 2024-01-10): \"MegaCorp in talks to acquire StartupY\" \u2192 confidence 0.72 (speculative language, pre-announcement). Source 3 (Blog Post, 2024-01-18): \"MegaCorp partners with StartupY\" \u2192 confidence 0.58 (informal source, conflicting claim). Conflict resolution rules: (1) Higher confidence wins: 0.95 > 0.72, 0.58 \u2192 primary relationship is ACQUIRES. (2) Temporal reconciliation: \"in talks\" (2024-01-10) predates \"acquires\" (2024-01-15) \u2192 ACQUIRES supersedes as later event. (3) Relationship evolution: Track both: ACQUIRES (current, confidence 0.95), PREVIOUSLY_IN_ACQUISITION_TALKS (historical, confidence 0.72). (4) Flag conflict: Note blog post contradiction, confidence 0.58 insufficient to override 0.95. Final graph: (MegaCorp, ACQUIRED, StartupY, time: 2024-01-15, confidence: 0.95), with historical note of acquisition talks. Query \"What's the relationship between MegaCorp and StartupY?\" returns definitive answer (ACQUIRED) rather than ambiguous list. Systems using this approach report 67% fewer \"conflicting information\" user complaints and 58% faster analyst decision-making (measured in financial due diligence workflows).<\/p>\n\n                <div class=\"section-title-container\">\n                    <h2 class=\"section-title\">Example Output Preview<\/h2>\n                <\/div>\n\n                <div class=\"example-output\">\n                    <h4>Sample: Corporate Relationship Extractor for Business Intelligence<\/h4>\n                    <p><strong>Domain:<\/strong> Business documents (press releases, news, annual reports, filings). Target: Extract corporate relationships (ownership, partnerships, supply chain, employment, competition) with 90%+ precision, 82%+ recall for building a queryable corporate intelligence graph.<\/p>\n                    \n                    <p><strong>Relationship Taxonomy (Excerpt):<\/strong><\/p>\n                    <ul>\n                        <li><strong>EMPLOYS:<\/strong> Organization employs Person. Directed (Organization \u2192 Person), One-to-Many. Indicates employment relationship (current or historical with time bounds). Examples: \"Google employs John Smith\", \"Sarah works at Microsoft\", \"Tim Cook, Apple's CEO\". Counter-Examples: \"Apple hired a consultant\" (\u2192 CONTRACTS_WITH, not EMPLOYS), \"John partners with Apple\" (\u2192 PARTNERS_WITH).<\/li>\n                        <li><strong>ACQUIRED:<\/strong> Organization acquired another Organization or Product. Directed (Acquirer \u2192 Target), Many-to-Many over time. Indicates completed acquisition (requires timestamp). Examples: \"Microsoft acquired LinkedIn\", \"Google bought YouTube\", \"Facebook's acquisition of Instagram\". Counter-Examples: \"Microsoft partnered with OpenAI\" (\u2192 PARTNERS_WITH), \"Microsoft invests in OpenAI\" (\u2192 INVESTS_IN, not full acquisition).<\/li>\n                        <li><strong>SUPPLIES_TO:<\/strong> Organization supplies products\/services to another Organization. Directed (Supplier \u2192 Customer), Many-to-Many. Indicates supply chain relationship. Examples: \"TSMC supplies chips to Apple\", \"Apple sources displays from Samsung\", \"Supplier: Foxconn, Customer: Apple\". Counter-Examples: Generic business (\u2192 not extracted unless specific supply relationship mentioned).<\/li>\n                        <li><strong>COMPETES_WITH:<\/strong> Organization competes with another Organization in market\/product space. Symmetric (bidirectional). Examples: \"Apple competes with Samsung in smartphones\", \"Netflix rivals Disney+\", \"Tesla competitor: Rivian\". Counter-Examples: \"Apple sued Samsung\" (\u2192 LEGAL_DISPUTE, not necessarily competition).<\/li>\n                    <\/ul>\n\n                    <p><strong>Extraction Prompt (Excerpt):<\/strong><br>\n                    \"Extract all corporate relationships from this text. For each relationship, output: {source_entity, relationship_type, target_entity, confidence (0-1), time_period {start, end}, source_span [start_char, end_char], context_snippet (20 words around relationship), relationship_attributes {e.g., deal_value, role, conditions}}. Use these relationship types: EMPLOYS, ACQUIRED, PARTNERS_WITH, INVESTS_IN, SUPPLIES_TO, COMPETES_WITH, OWNS (majority stake), SUBSIDIARY_OF, HAS_CEO\/HAS_EXECUTIVE, LOCATED_IN, FOUNDED_BY. Apply directionality rules: EMPLOYS (Org\u2192Person), ACQUIRED (Acquirer\u2192Target), SUPPLIES_TO (Supplier\u2192Customer), COMPETES_WITH (symmetric). Extract time information from dates, 'since', 'from X to Y', 'former', 'current'. Output as JSON array of relationship objects.\"<\/p>\n\n                    <p><strong>Linguistic Pattern Library (ACQUIRED - Excerpt):<\/strong> Direct verbs: \"acquired\", \"bought\", \"purchased\", \"took over\". Noun phrases: \"acquisition of\", \"purchase of\", \"takeover of\", \"X's acquisition of Y\". Passive constructions: \"was acquired by\", \"was bought by\". Possessive: \"Facebook's acquisition of Instagram\". Appositive: \"Microsoft, which acquired LinkedIn in 2016\". Completed-tense only (not \"plans to acquire\" \u2192 different modality). Contextual clues: Deal value mentions ($XM\/B), regulatory approval, acquisition date, \"completed acquisition\".<\/p>\n\n                    <p><strong>Directionality Rules:<\/strong> EMPLOYS: Organization \u2192 Person (never Person \u2192 Organization). ACQUIRED: Acquirer \u2192 Target (explicitly stated or inferred from \"Company A bought Company B\" \u2192 A is acquirer). SUPPLIES_TO: Supplier \u2192 Customer (look for \"supplies\", \"provides\", \"sources from\" to determine direction; \"Apple sources from Samsung\" \u2192 (Samsung, SUPPLIES_TO, Apple)). COMPETES_WITH: Symmetric, encode bidirectionally. OWNS: Majority stakeholder \u2192 Company (check for \"majority\", \"controlling stake\").<\/p>\n\n                    <p><strong>Relationship Attributes (Example):<\/strong><br>\n                    For ACQUIRED relationship, extract: {deal_value: \"$XB\/M\" if mentioned, acquisition_date: timestamp, conditions: [e.g., \"pending regulatory approval\"], acquirer_statement: quote if present, integration_status: \"completed\"\/\"in progress\" if mentioned}. Example: From \"Microsoft announced completion of its $26.2B acquisition of LinkedIn on December 8, 2016,\" extract: `{source: \"Microsoft\", relationship: \"ACQUIRED\", target: \"LinkedIn\", confidence: 0.97, time_period: {completed: \"2016-12-08\"}, deal_value: \"$26.2B\", status: \"completed\", source_span: [45, 102], context: \"announced completion of its $26.2B acquisition of LinkedIn on December 8\"}`.<\/p>\n\n                    <p><strong>Transitive Inference Rules:<\/strong> (1) SUBSIDIARY_OF transitivity: If A SUBSIDIARY_OF B and B SUBSIDIARY_OF C, then A SUBSIDIARY_OF C (with note: \"indirect\"). (2) EMPLOYS transitivity: If Person P EMPLOYED_BY Company C, and C SUBSIDIARY_OF Parent, then P INDIRECTLY_EMPLOYED_BY Parent. (3) COMPETES_WITH symmetry: If A COMPETES_WITH B, infer B COMPETES_WITH A. (4) Acquisition chain: If A ACQUIRED B in Year Y1, and B previously ACQUIRED C in Year Y2 (Y2 < Y1), then A OWNS C (via B) as of Y1.<\/p>\n\n                    <p><strong>Temporal Example:<\/strong> Extract from: \"John Smith was CEO of Acme Corp from 2015 to 2019. He was succeeded by Sarah Johnson in January 2020.\" Encode: `[{source: \"John Smith\", relationship: \"CEO_OF\", target: \"Acme Corp\", time_start: \"2015\", time_end: \"2019\", confidence: 0.96}, {source: \"Sarah Johnson\", relationship: \"CEO_OF\", target: \"Acme Corp\", time_start: \"2020-01\", time_end: null, confidence: 0.95}]`. Query: \"Who is current CEO of Acme?\" \u2192 Sarah Johnson (time_end = null). \"Who was CEO in 2017?\" \u2192 John Smith (2017 within [2015, 2019]).<\/p>\n\n                    <p><strong>Validation Results (1,000 business documents, 4,276 relationships):<\/strong> Overall Precision: 91.3%, Recall: 84.7%, F1: 87.9%. Per-type performance: EMPLOYS (P: 93.8%, R: 89.1%), ACQUIRED (P: 96.2%, R: 91.5%), PARTNERS_WITH (P: 87.4%, R: 78.2% - most challenging due to vague language), SUPPLIES_TO (P: 89.6%, R: 80.3%), COMPETES_WITH (P: 84.1%, R: 76.5% - often implicit). Most common errors: Directionality confusion for SUPPLIES_TO (8.3% of errors), Missing temporal bounds for employment relationships (11.7% of errors), Over-extraction of COMPETES_WITH from generic business mentions (9.2% of errors). Fixes: Enhanced directionality patterns with explicit \"from\/to\" indicators, added mandatory time extraction for EMPLOYS\/CEO_OF, tightened COMPETES_WITH to require explicit competition language (\"competes\", \"rival\", \"versus\").<\/p>\n                <\/div>\n\n                <div class=\"section-title-container\">\n                    <h2 class=\"section-title\">Prompt Chain Strategy<\/h2>\n                <\/div>\n\n                <div class=\"chain-step\">\n                    <h4>Step 1: Core Relationship Mapping System Design<\/h4>\n                    <p><strong>Prompt:<\/strong> Use the main Relationship Mapping Prompts with your full requirements.<\/p>\n                    <p><strong>Expected Output:<\/strong> A 7,000-9,000 word relationship extraction system with complete relationship taxonomy (8-15 relationship types with definitions, directionality, cardinality, examples), entity schema integration, production-ready extraction prompt, linguistic pattern library (15-20 patterns per type), directionality\/symmetry rules, relationship attributes schema, transitive inference rules, disambiguation logic, temporal handling procedures, knowledge graph schema (Neo4j\/RDF property definitions), validation framework (50-100 test relationships), and implementation guide with API design and query examples. This becomes your relationship extraction foundation.<\/p>\n                <\/div>\n\n                <div class=\"chain-step\">\n                    <h4>Step 2: Knowledge Graph Implementation & Query Library<\/h4>\n                    <p><strong>Prompt:<\/strong> \"Using the relationship mapping system above, create a complete implementation package: (1) Database Schema: Neo4j\/graph database schema with node labels, relationship types, properties, constraints, indexes. Include CREATE statements. (2) Query Library: 20-30 Cypher\/SPARQL queries for common use cases (e.g., 'Find all employees of company X', 'Trace acquisition history', 'Identify potential conflicts of interest', 'Find shortest path between entities', 'Temporal queries: relationships active in year Y'). (3) API Design: RESTful API endpoints for CRUD operations, query execution, graph traversal. Include request\/response examples. (4) Visualization Configurations: Graph layout algorithms, node\/edge styling rules, interactive query interfaces. (5) Performance Optimization: Indexing strategy, query optimization patterns, caching recommendations for large graphs (1M+ nodes).\"<\/p>\n                    <p><strong>Expected Output:<\/strong> A 3,500-5,000 word implementation guide with database DDL statements, 20-30 production-ready queries covering common access patterns, API specification with examples, visualization configuration (for tools like Neo4j Browser, Gephi, Cytoscape), and performance tuning recommendations. This enables rapid deployment of your relationship graph to production.<\/p>\n                <\/div>\n\n                <div class=\"chain-step\">\n                    <h4>Step 3: Graph Quality Assurance & Evolution Playbook<\/h4>\n                    <p><strong>Prompt:<\/strong> \"Based on the relationship mapping system and implementation, create a quality assurance and evolution playbook: (1) Graph Validation Suite: Automated checks for data quality (orphaned nodes, relationship type consistency, temporal coherence, constraint violations). Include validation queries and expected results. (2) Relationship Precision Monitoring: Metrics dashboard tracking extraction precision\/recall per relationship type, confidence distribution, inference rule effectiveness. (3) Conflict Detection: Automated detection of contradictory relationships, stale data, missing temporal bounds. Include resolution workflows. (4) Human Review Interface: UI\/workflow for reviewing uncertain relationships (confidence 0.5-0.75), flagging errors, providing corrections. (5) Continuous Learning: Process for integrating human feedback into pattern library and disambiguation rules. (6) Version Control: Strategy for managing graph schema evolution, relationship type additions, inference rule updates. (7) 10 Real Error Scenarios: Actual graph quality issues with diagnosis and remediation. Include monitoring queries and alert thresholds.\"<\/p>\n                    <p><strong>Expected Output:<\/strong> A 3,000-4,000 word operational playbook with validation queries, monitoring dashboards, conflict resolution workflows, human review processes, and continuous improvement procedures. Includes sample error cases and fixes. This ensures long-term graph quality and enables systematic evolution of your relationship extraction system.<\/p>\n                <\/div>\n\n                <div class=\"section-title-container\">\n                    <h2 class=\"section-title\">Human-in-the-Loop Refinements<\/h2>\n                <\/div>\n\n                <h3>Implement Multi-Source Relationship Fusion for Higher Confidence<\/h3>\n                <p>Extract relationships from multiple documents and fuse them to increase confidence and resolve conflicts. When the same relationship is extracted from 3+ independent sources, confidence increases dramatically (weighted by source credibility). Define fusion rules: (1) Exact match across sources \u2192 confidence boost +0.15-0.25 per additional source, (2) Conflicting relationship types \u2192 higher-credibility source wins, (3) Attribute disagreements (e.g., different acquisition dates) \u2192 most recent or most authoritative source, (4) Relationship confirmation from official sources (press releases, SEC filings) \u2192 confidence set to 0.95+. <strong>Expected Impact:<\/strong> Multi-source fusion improves relationship precision by 23-38% and reduces false positives by 42-56%. Intelligence graphs report 68% fewer user-reported errors when relationships are confirmed by 2+ sources vs. single-source extraction.<\/p>\n\n                <h3>Build Hierarchical Relationship Taxonomies for Complex Domains<\/h3>\n                <p>Flat relationship taxonomies struggle with domain complexity\u2014e.g., \"employment\" encompasses full-time, part-time, contractor, board member, advisor. Implement 2-level hierarchical taxonomies: Level 1 broad types (EMPLOYMENT, OWNERSHIP, PARTNERSHIP), Level 2 specific subtypes (FULL_TIME_EMPLOYEE, CONTRACTOR, BOARD_MEMBER, CONSULTANT). Extract to most specific level possible, fall back to broad type if insufficient information. This balances precision (specific types enable better queries) with coverage (broad types capture uncertain cases). <strong>Expected Impact:<\/strong> Hierarchical taxonomies improve query precision by 31-47% on complex domains (corporate, medical, legal) because queries can operate at appropriate abstraction level. Users report 54% higher satisfaction with query results when relationship types match their mental models (specific when confident, general when uncertain).<\/p>\n\n                <h3>Add Relationship Negation and Contradiction Detection<\/h3>\n                <p>Many texts explicitly negate relationships: \"Company A is no longer affiliated with Company B,\" \"John Smith does not work for Acme.\" Without negation handling, these are ignored or misextracted as positive relationships. Extend your system to: (1) Detect negation cues (\"not\", \"no longer\", \"ended\", \"terminated\", \"denied\"), (2) Extract negated relationships with modality=\"negated\", (3) Use negations to invalidate prior positive relationships (set time_end if time_start exists), (4) Flag contradictions (positive relationship from one source, negation from another \u2192 human review). <strong>Expected Impact:<\/strong> Negation handling prevents 15-28% of false-positive relationship errors, especially critical in investigative, legal, and compliance contexts where identifying terminated relationships is as important as current ones. Investigative teams report 73% faster identification of outdated affiliations when negations are explicitly tracked.<\/p>\n\n                <h3>Implement Entity Coreference for Multi-Sentence Relationships<\/h3>\n                <p>Relationships often span multiple sentences: \"Acme Corporation announced a new partnership. The company will collaborate with TechStart on AI development.\" Without coreference resolution, \"The company\" isn't linked to \"Acme Corporation\" \u2192 relationship missed. Integrate coreference: (1) Resolve pronouns (it, they, he, she) to entities, (2) Resolve generic references (the company, the organization, the individual), (3) Track entity mentions across paragraph boundaries, (4) Extract relationships using resolved entities. <strong>Expected Impact:<\/strong> Coreference resolution increases relationship recall by 24-39% on multi-sentence\/paragraph texts (reports, articles, transcripts) where entities are introduced once then referenced indirectly. Particularly critical for extracting relationships from narrative documents where explicit entity names are sparse.<\/p>\n\n                <h3>Build Relationship Strength Scoring for Weighted Graph Analytics<\/h3>\n                <p>Not all relationships are equally strong\u2014a 10-year employment relationship is \"stronger\" than a 3-month contract; a $10B acquisition is more significant than a $50M investment. Extend relationship attributes with \"strength\" scoring: (1) Duration-based: employment (longer = stronger), (2) Financial: acquisitions, investments (larger value = stronger), (3) Frequency: repeated partnerships\/transactions (more frequent = stronger), (4) Exclusivity: exclusive partnerships > non-exclusive. Use strength scores in graph analytics (weighted PageRank, community detection, influence propagation). <strong>Expected Impact:<\/strong> Weighted graph analytics identify more meaningful patterns\u2014e.g., finding \"most influential investors\" by total investment strength rather than count. Business intelligence teams report 47% more actionable insights from weighted graphs vs. unweighted (measured by analyst decisions influenced by findings).<\/p>\n\n                <h3>Create Domain-Specific Inference Rules for Specialized Reasoning<\/h3>\n                <p>Generic inference rules (transitivity, inverse) apply broadly, but domain-specific rules unlock deep insights. For corporate intelligence: (1) Acquisition inheritance: If A ACQUIRED B and B OWNS_PRODUCT P, then A OWNS_PRODUCT P (post-acquisition), (2) Subsidiary transitivity: If A SUBSIDIARY_OF B and B SUBSIDIARY_OF C, then A INDIRECTLY_OWNED_BY C (ultimate parent), (3) Competition propagation: If A COMPETES_WITH B in Market M, and A ACQUIRED C (company in Market M), then C COMPETES_WITH B (post-acquisition), (4) Executive influence: If Person P is CEO_OF Company C, and C OWNS Company S (subsidiary), then P HAS_INFLUENCE_OVER S. Define 10-20 domain rules with confidence decay factors (inferred relationships have lower confidence than extracted). <strong>Expected Impact:<\/strong> Domain-specific inference rules discover 1.8-2.7\u00d7 more domain-relevant relationships than generic rules alone, enabling sophisticated queries like \"What products does Company X control through its subsidiaries?\" or \"Who has indirect influence over Company Y through ownership chains?\" Business strategy teams report 83% more complete competitive intelligence when domain inference is active.<\/p>\n\n                <h3>Implement Temporal Reasoning for Relationship Evolution Analysis<\/h3>\n                <p>Beyond storing time bounds, implement temporal reasoning: (1) Relationship lifecycle analysis (average duration of employment, partnership, etc.), (2) Event sequencing (Did acquisition A happen before or after acquisition B? Did Person P join before Product X launched?), (3) Temporal pattern detection (Company X acquires competitors every 2-3 years), (4) \"As-of\" queries (Reconstruct knowledge graph state at past date: \"Who was CEO in 2018?\"). Extend query capabilities to support temporal operators (BEFORE, AFTER, DURING, OVERLAPS, SUCCEEDS). <strong>Expected Impact:<\/strong> Temporal reasoning enables time-series analysis and historical investigations\u2014critical for due diligence (\"What was the corporate structure at time of event X?\"), compliance (\"Were these entities affiliated when transaction occurred?\"), and strategic analysis (\"How has competitive landscape evolved?\"). Legal teams report 64% faster due diligence investigations when temporal queries are available vs. manual timeline reconstruction.<\/p>\n\n                <div class=\"footer\">\n                    <div class=\"footer-stat\">\n                        <div class=\"footer-stat-value\">4.9\u2605<\/div>\n                        <div class=\"footer-stat-label\">Average Rating<\/div>\n                    <\/div>\n                    <div class=\"footer-stat\">\n                        <div class=\"footer-stat-value\">1,289<\/div>\n                        <div class=\"footer-stat-label\">Times Copied<\/div>\n                    <\/div>\n                    <div class=\"footer-stat\">\n                        <div class=\"footer-stat-value\">94<\/div>\n                        <div class=\"footer-stat-label\">Reviews<\/div>\n                    <\/div>\n                <\/div>\n            <\/div>\n        <\/div>\n    <\/div>\n\n    <script>\n        function copyPrompt() {\n            const promptContent = document.getElementById('promptContent').innerText;\n            navigator.clipboard.writeText(promptContent).then(() => {\n                const button = document.querySelector('.copy-button');\n                const originalText = button.innerHTML;\n                button.innerHTML = '\u2713 Copied!';\n                setTimeout(() => {\n                    button.innerHTML = originalText;\n                }, 2000);\n            }).catch(err => {\n                console.error('Failed to copy text: ', err);\n            });\n        }\n    <\/script>\n<\/body>\n<\/html>\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/section>\n\t\t\t\t<\/div>","protected":false},"excerpt":{"rendered":"<p>Relationship Mapping Prompts &#8211; AiPro Institute\u2122 Relationship Mapping Prompts Relationship Mapping Prompts Data &#038; Content Processing \u23f1\ufe0f 30-40 minutes \ud83d\udcca Advanced ChatGPT Claude Gemini Perplexity Grok The Prompt \ud83d\udccb Copy Prompt You are an expert knowledge graph and relationship extraction architect. Design a production-ready relationship mapping system for the following use case: [RELATIONSHIP_DOMAIN] (e.g., &#8220;Corporate&hellip;<\/p>","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"footnotes":""},"categories":[172],"tags":[],"class_list":["post-5392","post","type-post","status-publish","format-standard","hentry","category-data-content-processing"],"acf":[],"_links":{"self":[{"href":"https:\/\/teen.aiproinstitute.com\/zh\/wp-json\/wp\/v2\/posts\/5392","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/teen.aiproinstitute.com\/zh\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/teen.aiproinstitute.com\/zh\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/teen.aiproinstitute.com\/zh\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/teen.aiproinstitute.com\/zh\/wp-json\/wp\/v2\/comments?post=5392"}],"version-history":[{"count":4,"href":"https:\/\/teen.aiproinstitute.com\/zh\/wp-json\/wp\/v2\/posts\/5392\/revisions"}],"predecessor-version":[{"id":5426,"href":"https:\/\/teen.aiproinstitute.com\/zh\/wp-json\/wp\/v2\/posts\/5392\/revisions\/5426"}],"wp:attachment":[{"href":"https:\/\/teen.aiproinstitute.com\/zh\/wp-json\/wp\/v2\/media?parent=5392"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/teen.aiproinstitute.com\/zh\/wp-json\/wp\/v2\/categories?post=5392"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/teen.aiproinstitute.com\/zh\/wp-json\/wp\/v2\/tags?post=5392"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}