#TrumpordersfederalbanonAnthropicAI is gaining attention after reports that U.S. President Donald Trump directed federal agencies to stop using technology developed by Anthropic. According to multiple media outlets, the order instructs government departments to phase out Anthropic’s AI systems over a defined transition period. The move has triggered strong reactions across the technology and policy communities.
At the center of the issue is a disagreement between Anthropic and parts of the U.S. defense establishment regarding how advanced AI systems should be deployed in military and intelligence environments. Reports indicate that concerns were raised about operational control, compliance standards, and national security protocols. In response, federal authorities reportedly categorized the situation as a potential security risk, which led to the directive halting federal usage.
This development is significant because Anthropic is considered one of the leading AI research firms in the United States. A federal-level restriction on a domestic AI company is highly unusual and signals a broader shift in how governments may regulate or control advanced artificial intelligence technologies. It also highlights the growing tension between AI developers who emphasize safety guardrails and government agencies seeking broader operational capabilities.
The impact of this decision could extend beyond one company. AI firms working with governments may now face stricter contractual requirements, increased scrutiny, and more complex compliance obligations. At the same time, competitors in the AI sector could see new opportunities to secure federal partnerships under revised policy frameworks.
Financial markets may also react to this kind of news. Technology stocks, AI-related companies, and even crypto markets sometimes experience volatility when major regulatory or geopolitical announcements occur. Investors tend to reassess risk exposure when government intervention signals uncertainty in a fast-growing industry like artificial intelligence.
Ultimately, this situation reflects a larger global debate about AI governance, national security, corporate ethics, and technological sovereignty. As artificial intelligence becomes more deeply integrated into defense, infrastructure, and economic systems, policy decisions like this may become more common. The story is still developing, and further clarifications from federal agencies and Anthropic itself will determine the longer-term consequences for the AI sector.