A.I. Is Mastering Language. Should We Trust What It Says?
Technology

Editorial Board | Published April 16, 2022 | Last updated 7:03 p.m.

But as GPT-3’s fluency has dazzled many observers, the large-language-model approach has also attracted significant criticism over the last few years. Some skeptics argue that the software is capable only of blind mimicry — that it’s imitating the syntactic patterns of human language but is incapable of generating its own ideas or making complex decisions, a fundamental limitation that will keep the L.L.M. approach from ever maturing into anything resembling human intelligence. For these critics, GPT-3 is just the latest shiny object in a long history of A.I. hype, channeling research dollars and attention into what will ultimately prove to be a dead end, keeping other promising approaches from maturing. Other critics believe that software like GPT-3 will forever remain compromised by the biases and propaganda and misinformation in the data it has been trained on, meaning that using it for anything more than parlor tricks will always be irresponsible.

Wherever you land in this debate, the pace of recent improvement in large language models makes it hard to imagine that they won’t be deployed commercially in the coming years. And that raises the question of exactly how they — and, for that matter, the other headlong advances of A.I. — should be unleashed on the world. In the rise of Facebook and Google, we have seen how dominance in a new realm of technology can quickly lead to astonishing power over society, and A.I. threatens to be even more transformative than social media in its ultimate effects. What is the right kind of organization to build and own something of such scale and ambition, with such promise and such potential for abuse?

Or should we be building it at all?

OpenAI’s origins date to July 2015, when a small group of tech-world luminaries gathered for a private dinner at the Rosewood Hotel on Sand Hill Road, the symbolic heart of Silicon Valley. The dinner took place amid two recent developments in the technology world, one positive and one more troubling. On the one hand, radical advances in computational power — and some new breakthroughs in the design of neural nets — had created a palpable sense of excitement in the field of machine learning; there was a sense that the long “A.I. winter,” the decades in which the field failed to live up to its early hype, was finally beginning to thaw. A group at the University of Toronto had trained a program called AlexNet to identify classes of objects in photographs (dogs, castles, tractors, tables) with a level of accuracy far higher than any neural net had previously achieved. Google quickly swooped in to hire the AlexNet creators, while simultaneously acquiring DeepMind and starting an initiative of its own called Google Brain. The mainstream adoption of intelligent assistants like Siri and Alexa demonstrated that even scripted agents could be breakout consumer hits.

But during that same stretch of time, a seismic shift in public attitudes toward Big Tech was underway, with once-popular companies like Google or Facebook being criticized for their near-monopoly powers, their amplifying of conspiracy theories and their inexorable siphoning of our attention toward algorithmic feeds. Long-term fears about the dangers of artificial intelligence were appearing in op-ed pages and on the TED stage. Nick Bostrom of Oxford University published his book “Superintelligence,” introducing a range of scenarios whereby advanced A.I. might deviate from humanity’s interests with potentially disastrous consequences. In late 2014, Stephen Hawking announced to the BBC that “the development of full artificial intelligence could spell the end of the human race.” It seemed as if the cycle of corporate consolidation that characterized the social media age was already happening with A.I., only this time around, the algorithms might not just sow polarization or sell our attention to the highest bidder — they might end up destroying humanity itself. And once again, all the evidence suggested that this power was going to be controlled by a few Silicon Valley megacorporations.

The agenda for the dinner on Sand Hill Road that July night was nothing if not ambitious: figuring out the best way to steer A.I. research toward the most positive outcome possible, avoiding both the short-term negative consequences that bedeviled the Web 2.0 era and the long-term existential threats. From that dinner, a new idea began to take shape — one that would soon become a full-time obsession for Sam Altman of Y Combinator and Greg Brockman, who had recently left Stripe. Interestingly, the idea was not so much technological as it was organizational: If A.I. was going to be unleashed on the world in a safe and beneficial way, it was going to require innovation on the level of governance and incentives and stakeholder involvement. The technical path to what the field calls artificial general intelligence, or A.G.I., was not yet clear to the group. But the troubling forecasts from Bostrom and Hawking convinced them that the achievement of humanlike intelligence by A.I.s would consolidate an astonishing amount of power, and moral burden, in whoever eventually managed to invent and control them.

In December 2015, the group announced the formation of a new entity called OpenAI. Altman had signed on to be chief executive of the enterprise, with Brockman overseeing the technology; another attendee at the dinner, the AlexNet co-creator Ilya Sutskever, had been recruited from Google to be head of research. (Elon Musk, who was also present at the dinner, joined the board of directors, but left in 2018.) In a blog post, Brockman and Sutskever laid out the scope of their ambition: “OpenAI is a nonprofit artificial-intelligence research company,” they wrote. “Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return.” They added: “We believe A.I. should be an extension of individual human wills and, in the spirit of liberty, as broadly and evenly distributed as possible.”

The OpenAI founders would release a public charter three years later, spelling out the core principles behind the new organization. The document was easily interpreted as a not-so-subtle dig at Google’s “Don’t be evil” slogan from its early days, an acknowledgment that maximizing the social benefits — and minimizing the harms — of new technology was not always that simple a calculation. While Google and Facebook had reached global domination through closed-source algorithms and proprietary networks, the OpenAI founders promised to go in the other direction, sharing new research and code freely with the world.
