Task

Read the passage and mark the letter A, B, C or D on your answer sheet to indicate the best answer to each of the following questions from 31 to 40.

How AI News Summarisation Undermines Information Integrity

The growth of artificial intelligence-powered news summarisation services has brought a new level of convenience, but this progress also has a worrying downside: tools that aim to shorten complex information into easy summaries can create distortions that weaken public trust in facts people can check. [I] Recent BBC research was prompted by a major error from Apple Intelligence, which misread a headline about Luigi Mangione and falsely suggested the suspect had shot himself rather than being arrested for murder. The incident highlights how easily leading AI assistants can add false details, link quotes to the wrong people, and get the original article wrong. In its investigation, the BBC was given temporary access to ChatGPT, Copilot, Gemini, and Perplexity and asked them to summarise 100 news prompts. The results were troubling: 51% of responses showed significant issues, including 19% with factual errors and 13% that changed or fully invented quotations attributed to BBC reporting.

[II] Gemini, for example, wrongly claimed that the NHS discourages vaping, despite the NHS promoting its “swap to stop” initiative. Perplexity produced a false timeline that placed TV doctor Michael Mosley’s disappearance in October and his discovery in November, even though he died in June 2024. Taken together, these are not minor slips but a pattern of confident guessing that threatens the relationship between citizens and credible journalism. The risk is amplified because AI summaries often sound authoritative and certain, without the caution that human journalists typically use when details are unclear. BBC News CEO Deborah Turness also warned about a broader system effect: as generative AI becomes more common, information can enter a feedback loop in which people use AI to write messages and others use AI to process them, gradually thinning meaning until the signal becomes mostly noise.

[III] Corporate responses to these findings have often avoided the central issue. An OpenAI spokesperson highlighted the platform’s reach and said the company is committed to improving in-line citation accuracy, but that promise feels weak when set against the failures documented in the BBC’s test. [IV] Regulation remains limited, with few strong mechanisms requiring clear disclosure about model limits or how much outputs rely on pattern-based guessing. At root, these systems are designed to produce plausible text, not guaranteed truth, which makes them risky for tasks where exact factual accuracy matters.

[Adapted from https://www.theregister.com/]

Question 31: Where in the passage does the following sentence best fit?

The impact of this unreliability goes beyond isolated cases of misinformation.

A. [II]        B. [IV]        C. [I]        D. [III]

Question 32: The word "them" in paragraph 1 refers to __________.

A. 100 news prompts                 B. leading AI assistants         

C. AI summarisation services         D. factual errors in reports

Question 33: The phrase "thinning meaning" is closest in meaning to __________.

A. enhancing the clarity of information         B. reducing the depth of communication

C. expanding the reach of digital news         D. altering the speed of data delivery

Question 34: According to paragraph 1, which of the following is NOT mentioned as a specific type of error made by AI during the BBC's 100-prompt test?

A. Attributing fabricated statements to the original news source.

B. Failing to provide a correct interpretation of the source text.

C. Incorporating inaccurate information that was not in the original.

D. Violating copyright laws by using protected content without citation.

Question 35: Which of the following best summarises the content of paragraph 2?

A. The emergence of AI feedback loops is the primary cause for the public's loss of interest in the healthcare advice provided by the NHS.

B. The confident yet inaccurate nature of AI summaries poses a severe threat to the integrity of information and the credibility of journalism.

C. Scientific initiatives like "swap to stop" are frequently misrepresented by AI assistants because human journalists fail to use enough caution.

D. The BBC has successfully developed a broader system effect to ensure that information processed by AI maintains its original meaning and signal.

Question 36: The word “authoritative” in paragraph 2 is OPPOSITE in meaning to __________.

A. formal        B. official        C. confident        D. uncertain

Question 37: Based on the passage, what is TRUE about the current state of AI regulation and corporate accountability?

A. Companies have implemented strong mechanisms to disclose the extent of pattern-based guessing in their generative models.

B. OpenAI has successfully resolved the citation failures documented in the BBC investigation to ensure guaranteed truth.

C. Current legal frameworks lack the necessary strength to compel companies to provide transparent information about model limitations.

D. Regulation focuses primarily on ensuring that AI platforms maintain their market reach rather than improving factual accuracy.

Question 38: Which of the following best paraphrases the underlined sentence in paragraph 3: "At root, these systems are designed to produce plausible text, not guaranteed truth, which makes them risky for tasks where exact factual accuracy matters."?

A. Unless these systems are designed to produce plausible text, they will remain a risky tool for any task that involves the use of digital data.

B. The inherent design of AI to prioritise factual truth over plausible text is the main reason why they are considered safe for news reporting.

C. Since these platforms are engineered to generate believable content rather than verify facts, they are unsuitable for roles requiring high precision.

D. High factual accuracy is only achievable if the text produced by AI is designed to be risky for tasks where truth is not guaranteed.

Question 39: Which of the following can most likely be inferred from the passage?

A. Apple Intelligence is considered the most reliable AI tool because its errors regarding murder suspects are less frequent than others.

B. The deceptive authority of AI-generated content can make it more difficult for the public to identify false information in their daily life.

C. The BBC investigation suggests that AI assistants will eventually replace human journalists once the feedback loop of information is closed.

D. Most AI companies avoid addressing factual errors because they believe that citation accuracy is more important than the truth itself.

Question 40: Which of the following best summarises the passage?

A. Artificial intelligence has revolutionized the news industry by providing convenient and factually certain summaries that enhance the efficiency of news consumption for citizens.

B. Recent research by the BBC indicates that while AI tools like ChatGPT and Gemini have minor factual slips, they are generally committed to protecting the integrity of information.

C. The rise of AI news summarisation presents a significant danger to public trust due to the systemic tendency of these models to generate confident but inaccurate information.

D. The collaboration between the BBC and major tech corporations is essential to regulate the feedback loop of misinformation and to ensure that AI assistants always provide cited data.
