ChatGPT bot 'for professional use' on the way
By Glenn CHAPMAN
San Francisco (AFP) Jan 11, 2023

Hot startup OpenAI on Wednesday opened a waitlist for a paid, professional version of its ChatGPT software, which has sparked debate about artificial intelligence and the future of work.

OpenAI co-founder Greg Brockman teased an upcoming version of ChatGPT "geared for professional use" as media reports swirled that Microsoft plans to invest $10 billion in the startup.

Microsoft, which makes its own Cortana digital assistant, declined to comment.

"Working on a professional version of ChatGPT; will offer higher limits and faster performance," Brockman said in a tweet.

OpenAI late last year released the free version of ChatGPT, a chatbot capable of answering questions so well that it reopened the debate on risks linked to artificial intelligence (AI) technologies.

ChatGPT has also prompted concerns that it could be used by students to complete homework assignments or to replace authors and others with writing jobs.

The professional model will carry a fee and be faster than the free version, which will remain available, the company said.

A waitlist page asked people what prices they would consider too high, too low, and just right for ChatGPT, and how upset they would be if they could no longer use the chatbot.

"If you are selected, we'll reach out to you to set up a payment process and a pilot," the page explained. "Please keep in mind that this is an early experimental program that is subject to change."

Claude de Loupy, head of Syllabs, a French company specializing in automatic text generation, said "ChatGPT's response can be off the mark," but that its overall performance remains "really impressive."

Conversations with the chatbot, posted online by fascinated users, show a kind of omniscient machine capable of explaining scientific concepts, writing scenes for a play, or even producing lines of computer code.

Its level of sophistication both fascinates and worries some observers, who voice concern that these technologies could be misused to trick people by spreading false information or creating increasingly credible scams.

Asked about these dangers, ChatGPT responded that human-like chatbots could be hazardous if misused.

"There are potential dangers in building highly sophisticated chatbots, particularly if they are designed to be indistinguishable from humans in their language and behavior," the chatbot told AFP.

On its welcome page, OpenAI lays out disclaimers, saying the chatbot "may occasionally generate incorrect information" or "produce harmful instructions or biased content."

OpenAI, co-founded in San Francisco in 2015 by billionaire tech mogul Elon Musk, who left the business in 2018, received $1 billion from Microsoft in 2019.
Unpacking the "black box" to build better AI models Boston MA (SPX) Jan 09, 2023 When deep learning models are deployed in the real world, perhaps to detect financial fraud from credit card activity or identify cancer in medical images, they are often able to outperform humans. But what exactly are these deep learning models learning? Does a model trained to spot skin cancer in clinical images, for example, actually learn the colors and textures of cancerous tissue, or is it flagging some other features or patterns? These powerful machine-learning models are typically ba ... read more