Read the passage and mark the letter A, B, C or D on your answer sheet to indicate the best answer to each of the following questions from 31 to 40.
As AI seeps into governance and markets, jurists debate whether a limited legal personality should attach to highly autonomous systems. Rather than a binary yes/no, many propose a spectrum calibrating specific rights and duties to functional capacities, drawing a guarded analogy to corporate personhood. On this cautious middle path, the law might allow narrow capacities – such as being sued or having assets held in trust – without imputing moral agency or dignity. [I] The ambition is managerial, not metaphysical: to allocate responsibility where it can be practically enforced.
Serious obstacles remain. AI is a protean family of tools with divergent architectures, risk profiles, and human entanglements; a one-size-fits-all regime would be blunt and unjust. Limited-liability logics also falter where systems operate with real autonomy, obscuring fault and making accountability diffuse. [II] Moral personhood diverges from legal personhood, and anthropomorphic labels – “smart”, “self-directed” – can seduce lawmakers into over-attributing agency. The better question is not whether AI “deserves” rights but how law should supervise artifacts that sometimes act without immediate human supervision.
The present consensus is deliberately modest: treat AI systems as products, keep humans answerable, and adapt remedies for novel harms. Granting independent legal personhood to AI would be premature so long as accountability still traces back to human designers and operators. Precedents exist: corporate law can lift the veil for fraud or misfeasance, suggesting targeted revocation or shutdown powers for AIs that cause harm. [III] Meanwhile, EU debates on AI liability remain wary of anything approaching full personhood, favoring incremental procedural adjustments.
Looking ahead, two currents tug in opposite directions. Brain-machine interfaces may entwine computation with cognition, and sustained social participation could bestow de facto legitimacy on useful systems. Even so, most analysts expect no full personhood within the next two decades; prudence counsels constraint while capabilities race forward. [IV] The likely path is iterative: sharpen liability, refine evidentiary rules, and reserve any expansion of status for moments when control, accountability, and public reason can be credibly guaranteed.
(Adapted from CEULI – “Legal Personhood for AI: Challenges and Future Possibilities”)
Question 31. The word protean in paragraph 2 mostly means ______.
A. highly variable B. narrowly fixed
C. painfully repetitive D. mildly predictable
Question 32. Where in the passage does the following sentence best fit?
Yet analogies to corporations become strained when the ‘agent’ is a stack of models that updates itself.
A. [I] B. [II] C. [III] D. [IV]
Question 33. Which of the following best summarises paragraph 1?
A. It rejects corporate analogies and urges immediate abolition of all AI rights, citing moral risks that outweigh administrative gains.
B. It predicts swift recognition of AI dignity, arguing that social utility inevitably converts into enforceable rights for sophisticated systems.
C. It claims AI already enjoys personhood by custom, and the law merely needs to formalise the widely accepted social consensus.
D. It proposes a calibrated spectrum of narrow legal capacities for some AIs, cautiously borrowing from corporate personhood without conferring moral status.
Question 34. What is the text’s prevailing policy stance now?
A. Accelerate full personhood B. Human accountability over AI personhood
C. Build liability first, status later D. Criminalise autonomy itself
Question 35. According to paragraph 3, in cases of fraud or misfeasance, limited liability ______.
A. remains intact, proving corporate shields are absolute even for autonomous systems
B. collapses entirely, forcing criminal prosecution of every engineer involved
C. can be lifted, hinting at analogous, targeted remedies for harmful AI systems
D. transfers automatically to insurers, eliminating the need for procedural reform
Question 36. What would limited legal status primarily aim to achieve?
A. Enable targeted capacities without implying dignity or broad moral agency
B. Guarantee property rights and political liberties for advanced learning systems
C. Replace human accountability with autonomous, machine-centric responsibility
D. Abolish product-liability doctrines that currently govern AI-caused harms
Question 37. The phrase this cautious middle path in paragraph 1 refers to ______.
A. full personhood B. product treatment
C. anthropomorphism D. limited personhood
Question 38. Which of the following can be inferred from the passage?
A. The EU will inevitably grant AI full personhood once brain-machine interfaces reach commercial maturity and public acceptance across sectors.
B. Legal systems will likely adopt incremental, hybrid remedies that preserve human responsibility while addressing AI-specific harms through tailored procedural tools.
C. Courts will soon presume AI moral agency because anthropomorphic labels already dominate public discourse and legislative drafting worldwide.
D. Corporate personhood offers a perfect template for AI, eliminating the need for any bespoke liability or evidentiary innovations in the near term.
Question 39. Which option best paraphrases the underlined sentence in paragraph 3?
Granting independent legal personhood to AI would be premature so long as accountability still traces back to human designers and operators.
A. Since developers are sometimes liable, AIs should nonetheless obtain rights equivalent to corporations to ensure predictability in transnational commercial contexts.
B. Because humans participate in design, AI systems must be categorically excluded from any legal standing to prevent confusion over responsibilities.
C. Until humans cease being the locus of control and blame, awarding AIs independent personhood would be untimely and conceptually unjustified.
D. Once operators sign indemnities, independent personhood becomes harmless because liability can always be contractually reassigned to human counterparties.
Question 40. Which of the following best summarises the passage?
A. It weighs analogies to corporations, outlines hurdles, endorses product-based accountability, and foresees incremental reforms while postponing any broad grant of AI personhood.
B. It demonstrates that AI already qualifies as a citizen-like agent deserving rights equal to humans, subject only to modest procedural safeguards.
C. It urges the EU to pioneer immediate full personhood so other jurisdictions can harmonise transnational trade and liability regimes accordingly.
D. It predicts rapid social legitimation will force legislators to constitutionalise AI rights within the next two decades, despite unresolved accountability problems.
