At its core, a large language model forcibly constructs a self-consistent value system out of its existing training data. Hallucinations can be seen as a natural byproduct and extension of that drive for self-consistency. Many new scientific discoveries, by contrast, begin precisely when an "error" in the natural world cannot be explained by existing theory and cannot be made self-consistent, forcing the old theory to be abandoned. This roughly explains why, despite being trained on so much data, no large language model has yet spontaneously made a new scientific discovery: the model itself has no ability to judge right from wrong.