Reading Passage

        “Deepfakes” describe synthetic images or videos produced by deep learning, in which algorithms ingest vast examples and generate outputs with unsettling verisimilitude. As infants learn by trial and error, so do models that, once trained, can mimic patterns without “seeing” reality. The process is often concealed from laypeople, and yet the consequences – if such media were trusted uncritically – could be profound. Although the technology is frequently showcased as innovation, it has also been framed in the passive voice: harms are incurred, norms are unsettled, and trust is diluted.

        Unlike playful face-swap filters or clumsy photoshop hoaxes – typically benign, self-evident, and intended for amusement – high-grade deepfakes are dangerous precisely because they are hard to spot. The casual edits that once circulated as jokes could be laughed off; deepfakes, by contrast, may pass as authentic even to trained eyes. If the public confuses fabrication with documentary record, deliberation is corrupted, reputations are damaged, and accountability is displaced, whereas the tool that enables the fakery remains invisible to most observers.

        The stakes range from petty fraud to geopolitical chaos. Personalized clips can depict a relative begging for money, while counterfeit speeches by leaders might inflame unrest or catalyze war. When they proliferate across feeds, the velocity of misinformation outpaces the capacity for verification; by the time a correction is issued, the lie has already traveled. Should emergency systems be spoofed, officials could be forced into reactive postures, and citizens – misled by plausible footage – might act on fabricated cues.

        Vigilance is teachable, and detection is becoming algorithmic. AI systems can be trained to notice artifacts that humans typically miss, which could expose forgeries. Yet media literacy still matters: users should interrogate extraordinary claims, verify sources, and pause before sharing. If safeguards are adopted early, damage may be contained; if not, the asymmetry between forgers and fact-checkers will widen. Although deepfakes are not yet ubiquitous, their prevalence and polish are likely to increase, making prudent skepticism indispensable.

(Adapted from University of Virginia Information Security: “What the heck is a deepfake? Can you really believe what you see?”)

Question 33. Which of the following is NOT mentioned in paragraph 2 as a reason ordinary edits seem harmless?

A. They are designed mainly for amusement.

B. They are easy for viewers to spot as fake.

C. They typically appear as obvious, joking alterations.

D. They require expert authorization from platforms.

Question 34. The word verisimilitude in paragraph 1 can best be replaced by ______.

A. likeness                B. deviation                        C. obscurity                        D. discord

Question 35. The word benign in paragraph 2 is OPPOSITE in meaning to ______.

A. harmful                B. trivial                        C. gentle                        D. innocuous

Question 36. The word they in paragraph 3 refers to ______.

A. emergency systems                                B. deepfake videos circulating online

C. government officials                                D. fact-checking organizations

Question 37. Which of the following best paraphrases the underlined sentence in paragraph 4?

A. AI detection improves human observation by highlighting anomalies that trained analysts can then verify independently.

B. Automated systems complement human expertise by flagging suspicious patterns for further forensic examination.

C. Machine-learning tools, once trained, detect subtle cues invisible to people and thereby identify fabricated media.

D. Machine learning identifies manipulation traces imperceptible to unaided vision, enabling reliable forgery detection.

Question 38. Which of the following is TRUE according to paragraph 1?

A. Deepfake models emulate patterns from large datasets rather than perceiving reality directly, somewhat like infant learning.

B. The public widely understands how deep learning operates, so the risks are already minimal in everyday contexts.

C. Deepfakes depend on specialized hardware that completely prevents any misuse by untrained individuals online.

D. Because outputs are always imperfect, deepfakes cannot meaningfully influence public trust or social norms.

Question 39. Which paragraph mentions small-scale scams involving fabricated pleas from family members?

A. Paragraph 1        B. Paragraph 2                C. Paragraph 3                D. Paragraph 4

Question 40. Which paragraph mentions the contrast between face-swap amusements and hard-to-detect forgeries?

A. Paragraph 1        B. Paragraph 2                C. Paragraph 3                D. Paragraph 4
