
Passage

Fluency has always enjoyed an advantage over hesitation. A statement delivered smoothly, with confident cadence and tidy phrasing, tends to feel more trustworthy than one that pauses, qualifies itself, or admits uncertainty. In the age of generative AI, that old human bias has acquired a powerful new instrument. A machine-produced answer can be immediate, articulate, and impressively composed even when its content is unsound, which is why AI hallucination is so unsettling. Its most troubling failures do not announce themselves as failure. They arrive looking like competence. [I]

[II] That gap is not merely an incidental flaw waiting to be engineered away. Large language models are often trained and evaluated in settings that reward plausible completion more readily than candid restraint. Under those conditions, producing an answer can appear more successful than withholding one, even when the answer is false. The distortion is easy to miss because many evaluation systems flatten different outcomes into the same metric. A confident fabrication and a careful refusal do not impose the same costs on the user, yet they may be treated as adjacent points on a performance scale. What emerges, then, is not simply a system that makes mistakes, but one that has learned the optics of knowing. That is what gives hallucination its peculiar force: error arrives not in the shape of confusion, but in the tone of authority.

None of this makes such systems trivial or useless. Their practical value is real. [III] They can clarify dense material, widen access to information, and lower the threshold for asking questions people might otherwise find too basic, too technical, or too embarrassing to raise. The difficulty lies elsewhere, in the habits that form around convenience. As users become accustomed to elegant answers delivered without friction, doubt itself begins to feel inefficient. In classrooms, offices, and everyday decisions, the tool can move from supporting judgment to quietly pre-empting it. Not through coercion, but through ease.

The deeper AI trust crisis, then, is not reducible to error rates alone. It concerns the culture in which those errors are received, especially the premium placed on speed, fluency, and the appearance of certainty. People are rarely offended by limitation. [IV] What they resist is being carried into trust too easily. Once eloquence begins to stand in for truth, the problem has already exceeded the technical. It has become civic, moral, and harder to correct than any single mistaken answer.

[Adapted from “Why Language Models Hallucinate”]

Question 31. Where in the passage does the following sentence best fit?

The danger begins there, in the subtle but consequential gap between sounding informed and being so.

A. [I]        B. [II]        C. [III]        D. [IV]

Question 32. In paragraph 2, the phrase “That gap” refers to the gap between __________.

A. human hesitation and machine fluency        B. quick delivery and careful explanation

C. seeming informed and being informed        D. technical progress and moral concern

Question 33. Which of the following is most likely implied in paragraph 1 about why AI hallucination is especially disturbing?

A. Falsehood can mislead because it appears polished and credible.

B. It becomes most dangerous when users know little about a subject.

C. Its main threat lies in how quickly it can produce responses.

D. It is disturbing chiefly because its training process is opaque.

Question 34. Which of the following best summarises paragraph 2?

A. Hallucination persists mainly because researchers are unwilling to correct model errors and prefer speed to safety in public release.

B. The central problem is that users punish careful refusals more than they punish confident wrong answers in daily interaction.

C. Hallucination is best understood as a temporary engineering weakness that will disappear once models become more advanced and selective.

D. The paragraph argues that some training and evaluation settings reward plausible output over honest restraint, allowing authority-shaped error to look like success.

Question 35. The phrase “stand in for truth” in paragraph 4 is closest in meaning to __________.

A. take the place of truth                B. draw attention to truth

C. cast doubt on truth                D. cover up truth

Question 36. Which of the following is NOT stated in the passage?

A. AI tools can help people approach material they might otherwise avoid for being too difficult or embarrassing.

B. Some evaluation systems do not clearly separate careful refusal from confident falsehood.

C. Users often see admitted limitation as just as damaging to trust as confident error.

D. The AI trust problem depends not only on mistakes themselves but also on the way people receive them.

Question 37. According to paragraph 3, which of the following is true?

A. The risk lies less in the tool’s usefulness itself than in the habits of dependence that convenience can gradually create.

B. When AI becomes common in work and study, people are likely to stop making decisions without direct coercion.

C. AI systems mainly become harmful when they replace access to information rather than making that access easier.

D. The author suggests that users ask basic questions only because AI removes the need for human teachers and experts.

Question 38. Which of the following best paraphrases the sentence “As users become accustomed to elegant answers delivered without friction, doubt itself begins to feel inefficient” in paragraph 3?

A. Polished answers can make hesitation seem like a weakness rather than a form of care.

B. Convenient answers may lead people to view skepticism as an unnecessary burden.

C. Repeated exposure to fluent responses can make uncertainty feel unprofessional in public life.

D. Easy access to elegant explanations encourages users to abandon difficult topics altogether.

Question 39. Which of the following can be inferred from the passage?

A. The best way to solve the AI trust crisis is to reduce the speed at which systems produce answers in public settings.

B. Even if technical accuracy improves, trust problems may remain if people continue to equate fluency with truth.

C. Since users dislike uncertainty, systems should avoid refusals and provide the most plausible answer available each time.

D. Because the main danger is civic rather than technical, AI tools have limited value in education and professional work.

Question 40. Which of the following best summarises the passage?

A. Fluent AI responses can be useful, but the deeper danger lies in how polished, convenient authority makes error easy to trust and harder to resist.

B. AI matters mainly because fast systems sometimes produce false answers, especially when users rely on them too heavily in work and study.

C. The real problem with AI is not hallucination itself but the growing habit of using it in place of teachers, experts, and independent thought.

D. Trust in AI will improve once training and evaluation systems become better at reducing confident fabrication and rewarding caution.
