AI Doctor Replacement 'Impossible': Bias and Loss of Balance Crack Lee Jae-myung's Pragmatist Empire
AI's ultimate inability to replace doctors stems from subtle disease presentations and unrecorded information it cannot decipher, signaling cracks in Lee Jae-myung's AI-powered pragmatism.
The AI-supremacist belief that AI will replace doctors has typically been couched in pragmatism. Yet critics point out that "thoroughness and balanced judgment," which pragmatism rejects, are the true core of outpatient diagnosis and treatment, and that AI cannot decipher them.
President Lee Jae-myung implemented his campaign promises to replace conscripts with AI weapons while increasing military spending, to replace administration with AI-driven direct democracy in order to cut taxes and resolve polarization, and to build a KOSPI-5000 stock market as an AI powerhouse.
The Lee Jae-myung administration, built on political pragmatism and AI supremacy, erected a stock-market imperialism. Its core ideology of "pragmatism" was launched from the outset as a structure that rejected "thoroughness and balanced judgment."
This system hardened into an ideology that strictly suppressed diversity. It also revealed fascist tendencies by forcibly implanting AI as a means of reinforcing group bias. Its strategy of imitating US AI to "build an AI powerhouse" looks increasingly likely to fail, exposing cracks in the foundation of the "stock empire."
Dr. Robert Califf, a Duke University cardiologist and former FDA commissioner, told the New York Times that AI is taking over "some of the chores" doctors currently do, such as keeping patient records.
Dr. Califf worked at Alphabet for six years and advises a startup that uses AI to prescribe medications.
He said, "Even with all the medical knowledge on servers, it may not be enough for chatbots to completely take over patient care."
Dr. Lee Schwamm, a neurologist and vice dean for digital strategy and transformation at Yale School of Medicine, told the Times, "The sheer volume of medical information makes it complex to think critically about it. Doctors have learned to read subtle signals and synthesize information that is difficult to articulate and rarely recorded. Chatbots' strength lies in matching patterns and making predictions."
He explained the difference between AI and a physician: a chatbot "can rely on data received about a patient, but it has no way to extract that information on its own." Doctors, by contrast, can "use reasoning to select the most likely diagnosis and conduct further evaluations, even with limited or incomplete information, balancing thoroughness and pragmatism."
He offered an example: when a patient says, "I woke up yesterday feeling dizzy. My arm was dead, and I had trouble speaking," what does the patient actually mean by "dizzy"?
It could mean the patient is dizzy and about to faint, or it could mean the room is spinning.
The patient's "dead" arm might feel numb rather than weak. A person with a partially paralyzed arm might describe it as numb.
But when Dr. Schwamm pokes the patient's arm, the patient feels the pinprick.
Is this a stroke? Is it a medical emergency? Dr. Schwamm told the New York Times that his years of training help him determine who is sick, where it hurts, who shouldn't worry, and who needs to be admitted to the hospital.
He added that patients with serious illnesses need a human connection.
"Ultimately, you want to look someone in the eye," he said, referring to a time when a patient was told they had 10 years to live, or just six months.
But he's not dismissive of chatbots. He acknowledges that they have the potential to expand the influence of doctors and reshape our healthcare system.
Dr. Schwamm has already acknowledged that AI can outperform doctors in tasks like reading electrocardiograms, and chatbots can detect patterns that cardiologists cannot see, potentially revealing heart disease that would normally require expensive echocardiograms.
This could shift some of the cardiologist's work to general practitioners.
Chatbots can also ease the burden on medical professionals facing long wait times from "patient overcrowding," sparing patients who need specialized expertise from waiting weeks or months. This is a secondary benefit of "pragmatism."
When having difficult conversations with terminally ill and critically ill patients about whether to insert a feeding tube, Stanford internist Dr. Jonathan Chen first practices with a chatbot.
He asks the chatbot to play the doctor while he plays the patient.
He then switches roles.
He finds it uncomfortable.
"The chatbot is really good at figuring out how to communicate with patients," he told the New York Times. "The chatbot doctors are also very good at diagnosing, very good at reading scans and images—in fact, better than many doctors—and very good at answering questions on patient portals and writing appeals to insurance companies when medications or procedures are denied."
So what are doctors for? Dr. Chen said AI programs are becoming "existentially threatening" to doctors, saying, "They threaten your identity and your purpose."
Dr. Harlan Krumholz, a Yale University cardiologist and advisor to OpenEvidence, an AI program for doctors, told the New York Times that "AI's reasoning and diagnostic capabilities are already far beyond what doctors can do."
He is the co-founder of two startups that use AI to interpret medical scans and digital data.
"Many doctors who have thought deeply about the role of AI in medicine have also worked with AI companies," the New York Times reported. "On the other hand, researchers say, 'Dr. Chatbot isn't ready to see you yet.' But AI is beginning to change what some doctors do and the patients they see."
This dynamic is already playing out in the practice of Dr. John Eric Pandolfino, a gastroesophageal reflux disease (GERD) specialist at Northwestern University Feinberg School of Medicine.
Most patients concerned about GERD symptoms have had to wait weeks to get an appointment with him.
He said that "a significant portion" of his patients have less severe cases that don't require his care.
Dr. Pandolfino created an AI solution called GERDBot.
This solution triages patients and directs those who don't actually need his care to other providers.
His goal is to identify more concerning symptoms and provide prompt treatment for these patients.
The solution first asks patients to answer a bot's questions.
From there, patients with severe symptoms are seen immediately.
The rest receive a call within a week from a nurse practitioner or physician assistant, who calms their fears and, if necessary, prescribes medications that can help.
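The triage flow described above can be sketched in a few lines. This is a hypothetical illustration only; the symptom list, the rule for what counts as "severe," and the routing labels are assumptions, not details of Dr. Pandolfino's actual GERDBot.

```python
# Hypothetical sketch of a questionnaire-based triage flow like the one
# described above. The alarm-symptom list and routing rules are invented
# for illustration, not taken from the real GERDBot.

ALARM_SYMPTOMS = {"trouble swallowing", "weight loss", "vomiting blood", "anemia"}

def triage(answers: dict) -> str:
    """Route a patient based on their questionnaire answers."""
    symptoms = set(answers.get("symptoms", []))
    if symptoms & ALARM_SYMPTOMS:
        # Severe cases are seen by the specialist right away.
        return "see specialist immediately"
    # Everyone else gets a follow-up call within a week from a nurse
    # practitioner or physician assistant.
    return "nurse practitioner call within one week"

print(triage({"symptoms": ["heartburn"]}))
# → nurse practitioner call within one week
```

The key design choice such a system makes is asymmetric: false alarms cost only a specialist visit, while a missed alarm symptom delays care, so any ambiguous answer should err toward the specialist.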
Dr. Pandolfino has licensed another AI model he created to the medical device company Medtronic. As a result, he has a smaller patient population, but he can quickly identify and treat those who need his expertise.
"Most people appreciate the immediate access to information once they start treatment, and if treatment fails or warning signs appear, they see a doctor," he told the New York Times.
He acknowledges that "a small minority feel they're being relegated to second-rate care."
But the old way—with waits of up to six months for an appointment—was far more difficult for those seeking help and reassurance.
The next step is to hand off even patients with more severe GERD symptoms.
Dr. Pandolfino developed an artificial intelligence algorithm called "Eso-Instein" ("Eso" for esophagus), which helps less specialized gastroenterologists determine the most likely diagnosis based on a patient's symptoms, endoscopic scans, and physiological tests.
Then, the general practitioner is informed of the patient's treatment and prognosis. "Eventually, when algorithms outperform humans, we'll have to find other work," he told the Times. "AI will make people like me less and less valuable."
The Times noted, "Just as Dr. Pandolfino's AI algorithm has enabled specialists to refer many patients to general practitioners, there's some hope that the same strategy could make some general practitioner work more accessible by shifting it to nurses." "There's a severe shortage of primary care physicians, not just in rural areas but also in large cities with multiple hospitals and large medical schools," the Times reported.
Dr. Isaac Kohane, chairman of the Department of Biomedical Informatics at Harvard Medical School, told the Times that when a new faculty member asked him for a recommendation for a primary care physician in Boston, he couldn't find one that was accepting new patients. "Access is definitely an issue," Dr. Daniel Morgan, a professor of epidemiology, public health, and medicine at the University of Maryland School of Medicine, told the Times.
"I want to see patients more quickly, but I don't know a single doctor who says, 'Oh, yes.' It's going to take six months for them to see me."
Dr. Adam Rodman, an internist at Beth Israel Deaconess Medical Center, told the Times that in such cases (first-time urgent care visits), "AI can help patients."
The program "triages patients and frees up nurse practitioners to do more of the work of primary care physicians, allowing them to see more complex patients," the Times said. "Given the choice between a doctor who won't accept new patients and a doctor who will refer patients to a nurse practitioner or physician assistant, patients are more likely to accept that other health care professional."
However, Dr. Rodman and other researchers acknowledge that chatbots risk replicating existing biases in healthcare institutions. The New York Times reported, "Studies have shown that doctors may pay less attention to women or to people who make spelling or grammar mistakes," and that "these concerns have led some experts to caution against using AI as a panacea for the health care system."
"The real problem isn't AI itself," said Dr. Leo Anthony Celi, director of clinical research at MIT's Computational Physiology Laboratory. "It's that AI is being deployed to optimize a system that's seriously broken, not to reimagine it."
"Patients today may not realize how badly the current system is failing them," Dr. Sely told the Times.
His colleague, Dr. Marzyeh Ghassemi of MIT's Healthy ML Group, shares similar concerns.
She told the Times that while AI "has tremendous potential," it currently seems to be used primarily to "add value to the health care system" by "increasing billing, displacing frontline nurses for disadvantaged patients, and advertising drugs."
Dr. Rodman, an internist and former visiting researcher at Google, said, "Health care systems and patients need to be aware of these issues. But that's not a reason not to move forward with this technology. AI should allow researchers to document and mitigate bias."
He told the Times that the most frightening aspect, the creation of AI with "human-like biases," would be "really difficult to mitigate." Dr. Rodman said AI would likely outperform doctors at least in some tasks, but said it would be better used to "attend to screening guidelines and counsel patients about their sleep and eating habits."
Dr. Jeffrey A. Linder, an internist at Northwestern, said, "These are the boring parts of being a doctor. A lot of the work I do in primary care feels like checking boxes, but that's not why I was drawn to medicine."
Dr. Linder said he worries that some doctors will become overly reliant on AI.
"The last thing you want is a dumb, AI-dependent doctor," he said. "I turn my brain off and the AI is always telling me what to do."
The New York Times reported, "The problem is that while AI may not be ideal, today's health care system isn't either. It's becoming increasingly clear that the role of the doctor is going to change."
Dr. Pandolfino said, "Medicine is going to change. You can't deny it, but doctors still have a crucial role to play." “Internal medicine is a very people-focused profession,” he told the Times. “Over time, you get to know your patients, you know their values, you know their families.”
Dr. Joshua Steinberg, an attending physician at SUNY Upstate Medical School in Binghamton, New York, agreed.
"Even if AI read all the medical literature, I would still be the 'expert on my patients,'" Dr. Steinberg told the Times. "Our role as physicians might be a little different, but I would still be sitting in a little swivel chair talking to patients."
Reuters reported on the 10th that "foreign outflows from Asian stocks surged in the first week of February," as South Korea and Taiwan faced pressure from a global selloff in high-growth technology stocks amid concerns about large-scale AI-related capital spending.
According to LSEG data, foreign investors sold a net US$9.79 billion worth of stocks in the week ending February 6th across South Korea, Taiwan, Thailand, India, Indonesia, Vietnam, and the Philippines.
Foreigners sold approximately US$3.9 billion in January; that figure nearly tripled in the first week of February, driven by the decline in technology stocks.
Foreigners sold US$7.48 billion worth of South Korean stocks in the first week of February, reversing January's monthly inflow of US$446 million and beginning to dent South Korea's tech-heavy market. Foreign investors also turned sellers of Taiwanese stocks, a major chip market, flipping January's net inflow of $306 million to a net sale of $3.43 billion in the first week of February.
Foreign investors sold $3.98 billion worth of Indian stocks in January, the largest selling volume in five months.
Stocks in Thailand, Indonesia, and the Philippines, which are less heavily weighted in technology stocks, attracted foreign inflows of $332 million, $103 million, and $23 million, respectively, in the first week of February.
Conversely, foreign investors sold $236 million worth of Vietnamese stocks.
The tech-heavy Nasdaq Composite Index fell as much as 4.27% in the first week of February. Reuters reported on the 10th that Amazon's stock fell about 12.11% on concerns about its 2026 capital expenditure outlook, which has risen more than 50% this year, further fueling worries about AI-driven investment across the tech sector.
Nomura Securities said in a report that "this shift in sentiment also weighed on Asian tech stocks," adding, "Last week's stock moves reinforce the message of maintaining diversification and balance in portfolios, especially when positioning is mixed on popular themes."
See <US State Law Regulates AI, Fake Images, Doctor Impersonation, Criminal Punishment, Lee Jae-myung's 'Corporate Dominance,' January 18, 2026>
<Lee Jae-myung Violates Constitutional Direct Democracy with AI, 2021 Pledge: 'New Deal, 4% High Growth,' July 14, 2025>
<Health Insurance's Medical Technology Support System Reduces Coverage Rate, August 12, 2021>
<AI Racial Bias Cell Phone Facial Recognition Error Classifies Asians as 'Gorillas,' Black People, April 28, 2025>