
Passage

        Frontier AI – highly capable, general-purpose systems – has catalysed calls for “compute caps,” tiered thresholds that condition or limit access to training resources. The impetus is prudential: if claims about development and use cannot be verified, rivalry may spiral into an arms-race dynamic. Here, verification means attesting to training properties, safety testing, and deployment footprints without divulging proprietary artefacts. [I] If routine, trusted attestation emerged, it could temper escalation, enable fairer diffusion of benefits, and normalise accountable practices across jurisdictions.

        Proponents argue that compute is a tractable intervention point. It is necessary (frontier training is compute-hungry), detectable (resource-intensive clusters), excludable (physical and licensable), quantifiable (operations, memory, interconnects), and concentrated (few firms control cutting-edge chips and hyperscale facilities). This concentration reduces the number of gatekeepers a verification regime must enlist. [II] On this view, compute caps operationalise verification: calibrated thresholds trigger disclosures, licences, or denials, aligning incentives for compliance while retaining room for research and small-scale experimentation.

        A complementary design is compute-based reporting: model developers pre-notify a public authority before large training runs; compute providers verify notification before provisioning; and cryptographic or hardware attestations log usage. If verification remains patchy and parochial, mutual suspicion will metastasize and erode restraint. [III] Hardware-enabled mechanisms (tamper-evident power/bandwidth monitors, enclave-based attestations) could verify properties of training or deployment without exposing model internals, creating auditable footprints that make caps enforceable and proportionate.

        Challenges persist. The transparency-security trade-off is acute: revealing locations or capacities may leak sensitive signals. Mitigations include confidential computing, multilateral audits to rule out backdoors, and neutral data centres jointly secured by rival parties. Retrofitted mechanisms can help in the near term; next-generation chips might embed verifiability by design. [IV] Meanwhile, adversaries could route around controls via alternative jurisdictions, so any cap-and-verify architecture must prioritise interoperability, supply-chain integrity, and credible, cross-border enforcement.

(Adapted from United Nations Secretary-General’s Scientific Advisory Board, “Verification of Frontier AI,” June 2025)

Question 1. According to paragraph 2, compute is deemed “excludable” because ______.

A. licensing frameworks are universally harmonised across all major geopolitical blocs today

B. software algorithms remain inherently opaque and thus cannot be independently audited

C. its physical nature allows access to be restricted through hardware control and policy

D. small developer collectives can always circumvent controls by pooling dispersed resources

Question 2. The word excludable in paragraph 2 mostly means ______.

A. tightly controllable                B. broadly permissive

C. loosely supervised                  D. minimally regulated

Question 3. Which of the following best summarises paragraph 1?

A. Compute caps eliminate all competitive pressures by restricting data access globally.

B. Verification replaces intellectual-property law by mandating disclosure of model code.

C. Frontier AI expansion mainly depends on public trust rather than technical governance.

D. Verification that protects confidentiality can justify compute caps to reduce escalation.

Question 4. What does compute-based reporting require from developers and providers?

A. Providers must publish proprietary model weights once training throughput exceeds thresholds.

B. Developers pre-notify authorities; providers verify notification before provisioning large compute.

C. Developers disclose training datasets in full; providers audit algorithmic source code line-by-line.

D. Providers auction access; developers submit bids that determine caps for each training epoch.

Question 5. What are hardware-enabled mechanisms intended to verify?

A. Open-source licences only                 B. Personnel background checks

C. Selected training properties securely     D. Consumer end-use declarations

Question 6. The phrase This concentration in paragraph 2 refers to ______.

A. compute oligopoly                   B. safety protocols

C. model weights                       D. export controls

Question 7. Which of the following best paraphrases the underlined sentence in paragraph 3?

If verification remains patchy and parochial, mutual suspicion will metastasize and erode restraint.

A. Should attestation mechanisms exhibit fragmentation and narrow scope, confidence deficits compound systemically, progressively undermining voluntary compliance norms among rivals.

B. If verification achieves international coordination, competitive pressures intensify paradoxically as transparency exposes asymmetries, prompting states to abandon collaborative restraint.

C. When verification mandates exhaustive disclosure, adversaries demonstrate greater compliance because comprehensive transparency reduces ambiguity and enables precise trust-building.

D. Provided verification remains fragmented nationally, inter-state tensions naturally attenuate because opacity enables face-saving ambiguity without triggering accountability pressures.

Question 8. Which of the following can be inferred from the passage?

A. Compute caps are unnecessary once interpretability tools fully expose internal model mechanisms everywhere.

B. Concentrated chip manufacturing and hyperscale provision make cooperation from fewer actors potentially sufficient.

C. Confidential computing eliminates all risks associated with the transparency-security trade-off in perpetuity.

D. Neutral data centres cannot contribute because they invariably compromise national security prerogatives.

Question 9. Where in the passage does the following sentence best fit?

These five properties make compute a natural chokepoint for tiered caps and enforceable disclosures.

A. [I]                B. [II]                C. [III]                D. [IV]

Question 10. Which of the following best summarises the passage?

A. Interpretability research alone can stabilise geopolitics; compute governance adds little practical value.

B. Data provenance controls are superior to compute caps for every frontier-AI risk scenario.

C. Compute-anchored verification – via reporting and hardware attestations – can operationalise calibrated caps despite real trade-offs.

D. Export controls guarantee equitable access, making multilateral verification frameworks redundant worldwide.
