CritPost Analysis

Joshua Liu, MD

1h (at the time of analysis)


CSF Total: 63/100

Tier: Influencer

You have a sharp, authentic voice and a legitimate insight—that clinical pathway alignment is fundamentally a multiplayer problem AI cannot solve alone. This resonates because it's true. But you're *stating* rather than *proving* it. The piece reads like a well-delivered opinion from an insider, not a substantive argument grounded in evidence. Your biggest weakness is evidence quality (2/5 rubric score)—you assert without supporting why, making this influencer-tier content rather than thought leadership. The secondary issue is nuance: you've created a false binary (AI vs. humans) that ignores AI-assisted hybrid approaches, which weakens your credibility with sophisticated readers.

Dimension Breakdown

📊 How CSF Scoring Works

The Content Substance Framework (CSF) evaluates your content across 5 dimensions, each scored 0-20 points (100 points total).

Dimension Score Calculation:

Each dimension score (0-20) is calculated from 5 sub-dimension rubrics (0-5 each):

Dimension Score = (Sum of 5 rubrics ÷ 25) × 20

Example: If rubrics are [2, 1, 4, 3, 2], sum is 12.
Score = (12 ÷ 25) × 20 = 9.6 → rounds to 10/20

Why normalize? The 0-25 rubric range (5 rubrics × 5 max) is scaled to 0-20 so that all 5 dimensions carry equal weight in the 100-point CSF Total.
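
For concreteness, here is a minimal Python sketch of that calculation (illustrative only: the function name is invented here, and standard round() behavior is assumed for the "rounds to" step, since the report doesn't say how .5 ties are broken):

def dimension_score(rubrics):
    # Five sub-dimension rubric scores, each 0-5.
    assert len(rubrics) == 5 and all(0 <= r <= 5 for r in rubrics)
    # Normalize: (sum of rubrics / 25) * 20, rounded to a 0-20 dimension score.
    return round(sum(rubrics) / 25 * 20)

print(dimension_score([2, 1, 4, 3, 2]))  # 10 -- the worked example above: 9.6 rounds to 10/20
print(dimension_score([4, 5, 5, 4, 5]))  # 18 -- this report's Voice rubrics: 23/25 scales to 18/20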

14/20
Specificity

Quantitative data weakness (3/5 rubric). GitHub stats cited but unsourced; SeamlessMD timeline presented as benchmark without comparative data or methodology explanation.

12/20
Experience Depth

Evidence quality critical (2/5 rubric). Claims about AI limitations, stakeholder dynamics, and trust-building are asserted without empirical support or structured analysis of causal mechanisms.

14/20
Originality

Synthesis weak (3/5 rubric). Single-player vs. multiplayer framing is sound but underdeveloped. Lacks exploration of why this distinction matters mechanistically or whether it holds across contexts.

11/20
Nuance

Reasoning depth and nuance both low (3/5, 2/5 rubrics). False dichotomy: presents human facilitation and AI tools as mutually exclusive rather than exploring AI-assisted scenarios or complementary roles.

12/20
Integrity

Hedge avoidance moderate (4/5 rubric). Conversational voice is strong (5/5) but softening phrases ('might even draft,' 'could ask') dilute confidence. Minor AI patterns detected; strategic all-caps risks aggressive tone on rereads.

Rubric Score Breakdown

🎤 Voice

Cliché Density 4/5
Structural Variety 5/5
Human Markers 5/5
Hedge Avoidance 4/5
Conversational Authenticity 5/5
Sum: 23/25 → 18/20

🎯 Specificity

Concrete Examples 4/5
Quantitative Data 3/5
Named Entities 4/5
Actionability 3/5
Precision 4/5
Sum: 18/25 → 14/20

🧠 Depth

Reasoning Depth 3/5
Evidence Quality 2/5
Nuance 2/5
Insight Originality 4/5
Systems Thinking 3/5
Sum: 14/25 → 11/20

💡 Originality

Novelty 4/5
Contrarian Courage 4/5
Synthesis 3/5
Unexplored Angles 4/5
Thought Leadership 3/5
Sum: 18/25 → 14/20

Priority Fixes

Impact: 9/10
Experience Depth
⛔ Stop: Making unsupported claims about stakeholder dynamics. You write: 'Clinicians spend 6 months debating the nuances of a care pathway—while all an AI can do is document the debate, but not resolve it.' This is illustrative but not evidenced.
✅ Start: Add one concrete example from SeamlessMD's work showing what happened when you tried to automate or AI-assist alignment. Quantify it: 'In 3 pilots, AI-generated pathway content led to X% higher rejection rates because [specific reason]. When we switched to SME-facilitated customization, adoption rose to Y%.' Name the disciplines, the friction points, the resolution mechanism.
💡 Why: This shifts you from 'trust me, I know healthcare' to 'here's what we measured.' Evidence quality is your lowest rubric score (2/5). One well-researched example lifts the entire piece. It also creates a defensible claim rather than a rhetorical one.
⚡ Quick Win: Mine your own SeamlessMD implementation data for one failed AI experiment or one successful human-facilitated pathway. Write a 200-word narrative: What was tried, why it failed/succeeded, what metric changed. Insert it after the 'GOOD LUCK' paragraph. This takes 2 hours if the data exists; it's worth 15 CSF points.
Impact: 8/10
Nuance
⛔ Stop: Presenting this as an either/or problem. You imply: 'AI is useless at alignment, humans must do it.' This false dichotomy weakens your argument with anyone who thinks critically. It also makes you sound defensive—like you're protecting a turf rather than analyzing a problem.
✅ Start: Reframe as a complementarity question: 'AI can generate pathway drafts in minutes. Humans are irreplaceable at navigating power dynamics, building trust, and securing buy-in. The question isn't whether AI can replace humans—it's what AI should handle and what humans should own.' Then answer that question with evidence. Example: 'We found that AI handles content generation 10x faster, but human-led alignment reduces implementation friction by 60%. Neither alone works; together they're faster than either historically was.'
💡 Why: Nuance rubric is 2/5—critically low. Sophisticated readers (the ones who become thought leaders) immediately spot false binaries and discount the author. You lose credibility by oversimplifying. The contrarian move isn't 'AI can't do this'—it's 'here's exactly what AI and humans should each own.' That's more original and more defensible.
⚡ Quick Win: Replace the closing rhetorical question 'How could AI possibly do that?' with a statement: 'Here's how we figured out what AI should do: [one specific decision]. Here's what only humans can do: [one specific reason]. That clarity is what makes pathways implementable.' Takes 30 minutes; shows analytical maturity rather than rhetorical triumph.
Impact: 7/10
Specificity
⛔ Stop: Citing stats without sourcing them. You reference '4% of all GitHub public commits are written by AI' and '20% by end of 2026'—but don't say where this comes from. Readers can't verify, and you lose credibility even if it's true.
✅ Start: Add source attribution in parentheses: '(GitHub's 2024 State of AI report)' or 'per McKinsey analysis.' If you can't cite it, remove it. Then strengthen the 9-10 week SeamlessMD benchmark by adding context: 'compared to the historical 6-9 month process at most health systems' or 'for a 50-element pathway involving 6+ disciplines.' This specificity makes the claim meaningful, not just impressive-sounding.
💡 Why: Quantitative data rubric is 3/5 (moderate-low). Adding sources converts unverifiable claims into credible data. Precision on the benchmark (what's included, what's the baseline) prevents readers from dismissing it as marketing. This moves you from 'plausible' to 'trusted.'
⚡ Quick Win: Spend 15 minutes sourcing the GitHub stat (or removing it if unsourceable). Add 1-2 sentences of context around the 9-10 week timeline: 'For a typical 50-element pathway across medicine, nursing, pharmacy, and administration, implementation historically requires 6-9 months. Our facilitated process reduces that to 9-10 weeks by [specific methodology]. This 60-70% reduction in time-to-implementation is the gap where human expertise creates value AI cannot.' Instant specificity boost.

Transformation Examples

🧠 Deepen Your Thinking
❌ Before

AI can't read the room. It doesn't know how to navigate social politics, build genuine trust, or figure out how to tailor a strategy to the specific needs and resource limitations of a local community.

✅ After

Current AI models fail at pathway alignment for three reasons: First, they lack real-time access to institutional politics—the history of tension between disciplines, budget constraints, or past failed initiatives. Second, they can't dynamically adjust communication style based on stakeholder response; they generate static content, not interactive facilitation. Third, they can't hold accountability for outcomes—clinicians won't trust a system with no skin in the game. SeamlessMD solved this by assigning a human SME who brings institutional memory, reads emotional resistance, and owns implementation risk. The question isn't whether AI is metaphysically incapable of trust—it's whether current AI architecture can integrate the local context and accountability that clinical teams require. We believe the latter requires human judgment, at least for now.

How: Distinguish between technical constraints (current models can't integrate real-time local data, stakeholder preferences) and claimed ontological impossibilities (AI fundamentally cannot build trust). Test whether the problem is *what AI is* or *how we're using it*. Ask: What specifically about 'reading the room' is irreducible—emotional intelligence? Power dynamics recognition? Contextual knowledge gaps? Institutional memory? Then provide evidence for which are permanent vs. solvable constraints.

🎤 Add Authentic Voice
❌ Before

You could ask AI to create the perfect care pathway, and it might even draft something reasonable in minutes—BUT coming up with plausible pathway content is the easy part. The really hard part is getting stakeholder alignment.

✅ After

Ask AI to create a care pathway, and it will generate a plausible draft in minutes. The content is technically sound. But that's not the work. The work is stakeholder alignment—getting medicine, nursing, pharmacy, and administration to agree on 50 interdependent decisions while honoring local constraints, legacy workflows, and professional autonomy. That's where six months disappears.

  • Removed 'might' and 'even'—confidence suits this argument. Changed to 'will generate' and 'plausible draft,' which is factual without hedging.
  • Replaced 'reasonable' with 'technically sound'—specific and clinical, not vague.
  • Added concrete detail: '50 interdependent decisions,' 'local constraints, legacy workflows, professional autonomy'—shows you understand the actual problem, not just asserting one exists.
  • Tightened logic: 'But that's not the work. The work is...' creates narrative momentum instead of 'coming up with... is the easy part. The really hard part is...' which feels repetitive.
💡 Originality Challenge
❌ Before

Derivative Area: The core premise—'AI can generate content but can't build consensus'—is familiar territory in change management and organizational behavior literature. Your healthcare specificity is valuable, but the underlying insight (technical solutions fail without stakeholder buy-in) has been documented since at least the enterprise software implementations of the 2000s.

✅ After

The genuine contrarian move isn't 'AI can't do stakeholder alignment'—that's now conventional wisdom. The move is: 'AI-assisted facilitation could actually *accelerate* alignment if we redesigned the human-AI collaboration model.' This would require you to: (1) document where your current SME-led process bottlenecks, (2) test whether AI could remove those bottlenecks, (3) measure whether the hybrid model produces equivalent or better outcomes. If true, this is genuinely novel and positions you as a builder, not just a critic. If false, you have an interesting failure story ('We tried AI-assisted facilitation; here's why it didn't work'). Either way, you move from defensive positioning to research-backed insight.

  • Why does *clinical* stakeholder alignment specifically resist AI facilitation compared to other domains? Is it the stakes (patient safety), the power structures (physician authority), the lack of written rules (unspoken norms), or something else? Compare to software teams, where stakeholder alignment is also complex but arguably faster.
  • What if the problem isn't AI's limitation but our deployment strategy? Could AI work better if positioned as a *tool for humans* rather than a *replacement for humans*? What would facilitation look like if AI handled logistics, conflict tracking, and documentation while humans handled negotiation?
  • Is the 6-month debate actually a proxy for something else—like organizational dysfunction, turf protection, or lack of clinical leadership? Some health systems probably align pathways faster. What are they doing differently?
  • What role does pathway *design methodology* play? If stakeholders co-design with AI assistance vs. top-down AI generation, does outcome differ? Does the process matter as much as the content?

30-Day Action Plan

Week 1: Evidence Depth (Experience Depth dimension)

Audit SeamlessMD's project data for one clear example: either a failed AI pathway generation attempt or a successful SME-facilitated customization. Document: the disciplines involved, the specific friction point or resolution mechanism, and the measured outcome (time, adoption rate, implementation issues). Write a 250-word narrative of what happened and why it matters. This becomes your primary supporting evidence.

Success: You have a concrete, measurable example with timestamps and stakeholder details that could withstand scrutiny. It specifically illustrates why stakeholder facilitation matters more than content generation speed.

Week 2: Nuance (Nuance dimension)

Rewrite the argument as a *complementarity framework* rather than a critique. Draft an outline: (1) What AI does well—rapid pathway drafting, content generation, pattern recognition. (2) What humans must own—stakeholder negotiation, accountability, contextual judgment. (3) How you've structured the collaboration at SeamlessMD—specific process, decision points, roles. (4) Quantified outcome—time, adoption, or quality metric that proves the hybrid approach works. Aim for 400 words; this becomes your core narrative.

Success: The argument shifts from 'AI can't do this' to 'here's the right role for each.' A skeptical reader can follow the logic without feeling defensive. You reference your SeamlessMD data (from Week 1) as proof.

Week 3: Specificity (Specificity dimension)

Add sourcing and precision. (1) Find the source for the GitHub AI commit stat or remove it. (2) Quantify the 9-10 week benchmark: 'compared to the historical 6-9 month implementation across comparable health systems' or whatever your baseline is. (3) Define '50 elements'—is this care pathway tasks, decision points, stakeholder agreements? (4) Add 1-2 contextual details about SeamlessMD's facilitation methodology: 'Our SME-led model includes weekly alignment sessions, documented disagreements mapped to clinical literature, and explicit accountability roles.' Rewrite your core claims with these specifics.

Success: Every quantitative claim has a source or baseline comparison. Every vague term ('reasonable,' 'complex,' 'hard') is replaced with specific detail or metric. A reader can verify or challenge each claim.

Week 4: Integration and Polish (All dimensions)

Synthesize Weeks 1-3 into a revised piece: (1) Open with your Week 2 complementarity framing, anchored by your Week 1 case study. (2) Use Week 3 specifics throughout—source citations, quantified benchmarks, process details. (3) Close with an honest limitation: What *don't* you know about AI's potential in this space? What would need to be true for AI-assisted facilitation to work? This converts your piece from defensive opinion to grounded analysis. Aim for 1,200-1,500 words; at that point the piece operates as hybrid thought leadership rather than influencer commentary.

Success: The revised piece reads as research-backed analysis rather than opinion. A healthcare executive can extract actionable insight ('Here's how AI and humans should collaborate on pathways'). A researcher can identify gaps and next questions.

Before You Publish, Ask:

If AI could integrate real-time institutional context and be accountable for outcomes, could it facilitate stakeholder alignment effectively?

Filters for: Whether your argument is about AI's fundamental limitations or about current technical constraints. If you answer 'maybe,' your thinking is already more nuanced than your current piece suggests. If you answer 'no,' you need to explain why ontologically, not just pragmatically.

What percentage of your 9-10 week SeamlessMD process is the SME actually *facilitating negotiation* vs. *waiting for busy stakeholders* vs. *managing logistics*?

Filters for: Whether you've analyzed where the actual bottleneck is. If facilitation is only 30% of the time and logistics is 70%, your argument changes—AI could eliminate logistics burden and free humans for the real work. This insight strengthens your case.

Have you compared adoption/implementation success rates between health systems using SME-facilitated pathways vs. those using top-down AI-generated pathways?

Filters for: Whether you're operating from observed data or assumption. If you haven't run this comparison, your evidence is anecdotal. If you have, that's your strongest claim. This shapes your research agenda.

What would need to be true about AI for you to say 'yes, AI could do significant parts of stakeholder facilitation'?

Filters for: Whether you're open to being wrong or defensive about your position. Thought leaders can articulate the conditions under which they'd change their view. This makes you seem credible, not dogmatic.

Is the six-month debate actually about the complexity of clinical pathways, or is it a symptom of organizational dysfunction, turf protection, or leadership gaps?

Filters for: Whether you've done causal analysis. If you can't distinguish between 'this is inherently hard' and 'our organization does this poorly,' your recommendation might miss the root problem. This is the difference between diagnosis and treatment.

💪 Your Strengths

  • Authenticity (18/20 Voice dimension score): Your conversational tone, rhetorical questions, and opinionated stance feel genuine. No corporate jargon; this reads like someone who actually works in healthcare tech. This is rare and valuable.
  • Structural variety (5/5 rubric): You break conventional structure deliberately—mixing short punchy sentences with longer arguments, using all-caps for emphasis, rhetorical questions. It keeps readers engaged.
  • Contrarian positioning (4/5 rubric originality): You're not afraid to push back on the 'AI will solve everything' narrative. This courage to disagree is thought leadership fuel.
  • Domain specificity (4/5 rubric): You name disciplines, reference actual pathway work, mention SeamlessMD by name. This grounds your argument in reality, not abstraction.
  • Human insight (4/5 rubric): The single-player vs. multiplayer distinction is sound and properly applied to clinical contexts. It's a useful frame, even if underdeveloped.
Your Potential:

You're at the threshold between influencer and hybrid thought leadership. Your voice, insider knowledge, and willingness to challenge hype are genuine assets. What's missing is rigor: you need to back your arguments with evidence rather than assertion. If you add one solid case study (Week 1), reframe the argument as complementarity rather than critique (Week 2), and source your claims (Week 3), you'll lift your CSF total well above the current 63. That's the zone where editors listen, health systems adopt your framework, and you stop being 'the person who says AI can't do X' and start being 'the researcher who figured out what AI and humans should each own.' The research is within reach; you have the data and perspective. The question is whether you're willing to do the analytical work to convert opinion into insight.

Detailed Analysis

Voice Score: 18/20

Rubric Breakdown

Cliché Density 4/5 (Pervasive → None)
Structural Variety 5/5 (Repetitive → Varied)
Human Markers 5/5 (Generic → Strong Personality)
Hedge Avoidance 4/5 (Hedged → Confident)
Conversational Authenticity 5/5 (Stilted → Natural)

Overall Assessment

Exceptionally authentic voice with sharp personality. Author breaks conventional structure deliberately—using rhetorical questions, fragments, and sarcasm. Rare clichés and zero corporate jargon. Conversational, opinionated, and grounded in real experience. This reads like someone who actually works in healthcare tech, not a template.

Strengths:
  • Strong, unhedged opinions ('AI can't read the room'). Author commits to positions rather than softening with 'might' or 'arguably.'
  • Creative structural rule-breaking—fragments, caps, rhetorical questions, dashes—feels intentional and energetic, not accidental.
  • Insider expertise evident through specific domain knowledge (interprofessional teams, care pathway implementation, health system timelines). Reads like someone who's lived this.
Weaknesses:
  • Occasional softening ('might even draft something reasonable') dilutes confidence in an otherwise bold argument. Tighten language.
  • Heavy use of all-caps ('GOOD LUCK,' 'BUT') works for emphasis but risks feeling aggressive on rereads. Selective use would be more powerful.
  • Could lean deeper into personal anecdote—a 2-3 sentence story about a failed AI pathway attempt would strengthen credibility further.

Original Post

𝗧𝗲𝗰𝗵𝗻𝗼𝗹𝗼𝗴𝗶𝘀𝘁: “AI can code, so creating care pathways for patients must be easy.” 𝗥𝗲𝗮𝗹𝗶𝘁𝘆: Clinicians spend 6 months debating the nuances of a care pathway  - while all an AI can do is document the debate, but not resolve it. Today 4% of all GitHub public commits are written by AI - and it’s projected to reach 20% by the end of 2026. So if AI is taking over coding, one of the highest paying knowledge work activities of the last decade, isn’t it reasonable to assume AI will do the same for clinical pathways? The short answer is “No” - the big difference is that coding is largely a “single player” activity, while developing clinical pathways is a complex, “multiplayer” exercise in diplomacy. When you code, it can just be you and your AI coding agent going back and forth until it works. But if you’ve ever been part of an interprofessional care team, you know that implementing care pathways is anything but an individual activity. You could ask AI to create the perfect care pathway, and it might even draft something reasonable in minutes - BUT coming up with plausible pathway content is the easy part. The really hard part is getting stakeholder alignment. Oh, you think you could just create a voice AI persona to lead these complex change management activities? GOOD LUCK getting AI to facilitate medicine, nursing, pharmacy, OT/PT, administration, etc. agreeing on all 50 elements in a pathway. Because let's be real: AI can’t read the room. It doesn't know how to navigate social politics, build genuine trust, or figure out how to tailor a strategy to the specific needs and resource limitations of a local community. Those are the deeply human-to-human parts of the job - the stuff that requires real communication, collaboration, and most importantly, shared accountability. It’s why the best decision we ever made at SeamlessMD for digital care journeys (e.g. pre/post-surgery patient navigation) was not only to develop template care pathway content in-house, but to provide clinical subject matter experts who can facilitate pathway customization and alignment among care teams in 9-10 weeks (lighting fast for health systems!) How could AI possibly do that?

Source: LinkedIn (Chrome Extension)

Content ID: 8f756855-a6b7-4beb-a3b3-56f7a713e421

Processed: 2/8/2026, 5:21:59 PM