Task
Read the passage and mark the letter A, B, C or D on your answer sheet to indicate the best answer to each of the following questions from 31 to 40.
With Congress stalled, the task of disciplining AI has devolved to courts and states. After a federal judge let a wrongful-death suit against Character.AI and Google proceed, many took it as a harbinger of tort-centric governance. [I] For open-source communities – whose code, weights, and prompts propagate at internet speed – the question is not merely moral but juridical: could maintainers or small deployers be cast as negligent when anthropomorphic design, weak guardrails, and adolescent users intermix in volatile ways rarely anticipated ex ante?
Historically, tort law has adapted, toggling between negligence and strict liability. Negligence asks whether a developer exercised reasonable care given foreseeability, gravity of harm, and the burden of safeguards. Strict liability dispenses with that inquiry when activities are abnormally dangerous. [II] Rhode Island’s S0358 flirts with a quasi-strict approach and a “rebuttable presumption of mental state,” easing plaintiffs’ burdens where opaque models frustrate proof. For open-source actors, such presumptions could transmogrify distribution into de facto risk, even when upstream contributors acted with evident prudence.
States are also experimenting with ex-ante compliance levers. New York’s RAISE Act would police “frontier” models and mandate written safety protocols; California’s SB 813 moots a safe harbor for developers who align with third-party standards, even if uncodified. [III] By dangling liability shields for those who merely ‘comply’ with third-party protocols, lawmakers risk rewarding paperwork over prudence. In heterogeneous, fast-iterating open-source ecosystems, that dynamic could induce accountability theatre, privileging template audits over context-sensitive threat modeling, or nudging small teams to withdraw rather than navigate proliferating checklists.
Fragmentation compounds the dilemma. With more than a thousand state-level AI bills, nationwide deployers face jurisdictional landmines, while small open-source projects lack compliance muscle. [IV] Some urge federal preemption – a ten-year state moratorium and uniform standards – arguing clarity will deter forum-shopping and stabilize incentives; others warn premature centralization could ossify best practices before they mature. Meanwhile, shortages of independent auditors and uneven Attorney-General expertise threaten erratic enforcement. In such a polycentric landscape, tort suits may, by default, calibrate responsibility post hoc.
(Adapted from https://ai-frontiers.org/articles/options-for-ai-liability)
Question 31. The word harbinger in paragraph 1 mostly means ______.
A. ominously predictive B. loosely descriptive
C. mildly celebratory D. oddly retrospective
Question 32. Where in the passage does the following sentence best fit?
That leaves open-source maintainers wondering whether releasing model weights itself makes them ‘manufacturers’ in the eyes of tort law.
A. [I] B. [II] C. [III] D. [IV]
Question 33. Which of the following best summarises paragraph 2?
A. It contrasts negligence and strict liability, noting how Rhode Island’s bill lowers proof burdens that opaque models currently exacerbate.
B. It claims negligence has collapsed and that only strict liability can regulate rapidly evolving artificial intelligence models across jurisdictions today.
C. It explains why tort law cannot evolve further and urges courts to abandon long-standing doctrines that no longer fit technological realities.
D. It details criminal sanctions for AI harm and argues prosecutors should replace civil courts as primary risk regulators nationwide.
Question 34. What does the Rhode Island bill attempt to overcome?
A. Proof burdens created by model opacity B. High insurance costs for startups
C. Patent trolls targeting AI libraries D. The lack of federal judges
Question 35. According to paragraph 3, California’s SB 813 would shield developers who comply with ______.
A. non-binding third-party safety standards prior to broad model deployment in commerce
B. federal regulations promulgated by Congress and binding nationwide immediately upon passage
C. proprietary vendor checklists approved by the Attorney General personally each month
D. industry codes of conduct enforced exclusively through private arbitration tribunals
Question 36. What is a likely consequence for small open-source teams under proliferating state checklists?
A. Retreating from release rather than navigating duplicative audits and persistent auditor bottlenecks
B. Rapidly scaling compliance headcount through effortless grant funding and university secondments
C. Seamlessly harmonising every requirement using a single, universally accepted template
D. Outsourcing governance entirely to package managers without any internal risk assessment
Question 37. The phrase accountability theatre in paragraph 3 refers to ______.
A. paper compliance B. real safety
C. regulatory certainty D. auditor independence
Question 38. Which of the following can be inferred from the passage?
A. Without federal preemption, heterogeneous state regimes may raise fixed compliance costs that disproportionately pressure open-source projects to limit releases or avoid certain jurisdictions.
B. Strict liability will immediately eliminate forum-shopping because plaintiffs cannot sue in multiple courts when similar state standards remain ambiguous and unevenly enforced.
C. Auditor shortages will likely vanish once checklists proliferate, since market incentives always attract sufficient experts to meet intricate verification obligations.
D. A moratorium on state regulation guarantees safer models because companies universally prioritise voluntary protocols over commercial deadlines and revenue objectives.
Question 39. Which of the following best paraphrases the underlined sentence in paragraph 3 ("By dangling liability shields for those who merely 'comply' with third-party protocols, lawmakers risk rewarding paperwork over prudence.")?
A. Offering safe harbors for box-ticking adherence may incentivize formal documentation rather than careful risk judgments that actually reduce harm.
B. When third-party protocols are mandated, developers invariably produce flawless systems so additional prudential safeguards would be redundant and inefficient.
C. Legal immunity should follow any compliance effort because paperwork is reliable evidence of sound engineering across complex sociotechnical contexts.
D. The presence of liability shields guarantees innovation, ensuring audits always prioritize context-specific threat modeling over standardized reporting requirements.
Question 40. Which of the following best summarises the passage?
A. Courts and states are improvising AI liability; proposed shields risk performative compliance, fragmenting burdens for open-source actors while sparking calls for cautious federal preemption.
B. Federal law has already solved AI liability, and states simply administer identical rules that unequivocally protect open-source developers from all tort claims.
C. Open-source distribution renders negligence obsolete; strict liability should replace all doctrines to guarantee compensation regardless of developer intent or safeguards.
D. Auditor scarcity and state bills make litigation impossible, so AI governance should rely solely on private certification markets without court involvement.