Study claims ChatGPT is losing capability, but some experts aren’t convinced | Ars Technica

On Tuesday, researchers from Stanford University and University of California, Berkeley released a research paper that purports to show changes in GPT-4's outputs over time. The paper fuels a common-but-unproven belief that the AI language model has grown worse at coding and compositional tasks over the past few months. Some experts aren't convinced by the results, but they say that the lack of certainty points to a larger problem with how OpenAI handles its model releases.

I don’t get the hype over AI 🤖

AI – artificial intelligence, not artificial insemination – is the latest in a long line of quickly forgotten technology hype, following machine learning, blockchain, the Metaverse, 3D goggles, and so many other long-forgotten fads.

When most people talk about AI, they’re talking about Large Language Models. These models use machine learning to make computer programs produce sentences that sound like they were written by a person. They take the words and phrases people type or say and generate a response. The results have impressed some people, but they are often rough and incorrect.

Wikipedia defines Large Language Models this way:

A large language model (LLM) is a computerized language model, embodied by an artificial neural network using an enormous amount of “parameters” (“neurons” in its layers with up to tens of millions to billions “weights” between them), that are (pre-)trained on many GPUs in relatively short time due to massive parallel processing of vast amounts of unlabeled texts containing up to trillions of tokens (parts of words) provided by corpora such as Wikipedia Corpus and Common Crawl, using self-supervised learning or semi-supervised learning,[1] resulting in a tokenized vocabulary with a probability distribution. LLMs can be upgraded by using additional GPUs to (pre-)train the model with even more parameters on even vaster amounts of unlabeled texts.[2]

The invention of the transformer algorithm, either unidirectional (such as used by GPT models) or bidirectional (such as used by BERT model), allows for such massively parallel processing.[3] Due to all above, most of the older (specialized) supervised models for specific tasks became outdated.[4]

In an implicit way, LLMs have acquired an embodied knowledge about syntax, semantics and “ontology” inherent in human language corpora, but also inaccuracies and biases present in the corpora.[4]

https://en.wikipedia.org/wiki/Large_language_model
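Stripped of the jargon, that definition boils down to something simple: an LLM is a (very large) learned table of next-token probabilities, and generating text is just repeatedly sampling from that table. Here is a toy sketch in Python of that sampling loop, with a tiny hand-written probability table standing in for the billions of learned weights – every token and probability below is made up purely for illustration:

```python
import random

# Toy next-token probability table (made-up values, for illustration only).
# A real LLM learns distributions like this over a vocabulary of tens of
# thousands of tokens, from trillions of tokens of training text.
next_token_probs = {
    "the": {"cat": 0.5, "dog": 0.3, "<end>": 0.2},
    "cat": {"sat": 0.6, "ran": 0.3, "<end>": 0.1},
    "dog": {"ran": 0.7, "sat": 0.2, "<end>": 0.1},
    "sat": {"<end>": 1.0},
    "ran": {"<end>": 1.0},
}

def generate(start: str, max_tokens: int = 10) -> list[str]:
    """Repeatedly sample a next token until '<end>' is drawn."""
    tokens = [start]
    for _ in range(max_tokens):
        probs = next_token_probs.get(tokens[-1])
        if probs is None:
            break  # token has no continuations in our toy table
        choices, weights = zip(*probs.items())
        nxt = random.choices(choices, weights=weights)[0]
        if nxt == "<end>":
            break
        tokens.append(nxt)
    return tokens

print(" ".join(generate("the")))
```

Real models condition on the whole preceding context rather than just the last word, and use a transformer network instead of a lookup table, but the generate-by-sampling loop is essentially this.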

At one level, that sounds quite impressive, but in reality the output of programs like ChatGPT is, at best, mediocre. In my experience, ChatGPT has very limited knowledge of many subjects. Its grasp of contemporary or specialized topics falls short of what you would find in a conventional encyclopedia. The answers it generates are bland, lacking depth and refinement, and they frequently contain mistakes and inaccuracies. The fear of saying something insensitive or illegal constrains ChatGPT, leading it to refuse to discuss or provide information on certain topics. As a result, the range of conversations and knowledge it can actually engage with is significantly limited.

But what I object to most in the hype around Artificial Intelligence is how the market leaders want to cement their position in law by tightly controlling the technology, aiming to restrict competition through government regulation and extensive lobbying. The leading makers of AI systems actively seek a ban on open-source models, hindering individuals and institutions from embracing this technology for their own advantage, and they cite various conjectured threats and fears about the potential misuse of AI to justify it. The irony is that however successful these AI lobbyists may be, those operating beyond the jurisdiction of United States law will keep freely developing AI, and it will ultimately permeate back into the United States.

Useful software, particularly open source software, moves around the internet like water. You can only resist its penetration temporarily. Governments may attempt to build barriers to suppress it, but the source code effortlessly permeates the internet, evading the clutches of regulatory bodies. It brings to mind the enduring controversy surrounding the DeCSS library, which enabled Linux users and others to freely view and duplicate encrypted DVD movies, expanding the domain of online sharing. In a futile endeavor, numerous regulators and movie companies tried to eradicate DeCSS from the web, yet it persistently eluded their grasp. In fact, DVD-decryption code has since become a commonplace component of Linux distributions, further illustrating the ineffectiveness of their efforts. The AI advocates may strive to proscribe the unrestricted use and dissemination of Large Language Models, but their attempts will prove fruitless, as these models will continue to be employed and exchanged beyond the government’s control. Even if a brief period of suppression were to occur, it would merely hinder progress in the United States and select countries, rather than stifle it completely.

Large language models are likely to become an everyday part of computing, especially as open source makes them widely available. There will be a lot of fear and loathing over AI, but I doubt these models will change life nearly as much as their promoters and the Luddites make them out to be. Large language models will be an evolution, making computing better behind the scenes, but they won’t change much in everyday life.

Those of a certain age can remember the illegal prime number, rendered as an image, that for a number of years was considered a crime to distribute, at least in the United States (that image’s colors plus C0 form the code to decrypt most DVDs). That worked out so well for the government, which eventually decided the whole affair was stupid and forgot about it by the mid-2000s.

More Thoughts on Television

Being somebody who doesn’t own a television, 📺 I find watching any television to be incredibly disturbing. The bright colors, the sex, and the violence I find nauseating. 🤢 The endless chants and championing of big government make me want to punch the screen. 👊

The truth is, I don’t know how anybody could stand to watch television for more than five minutes without their brains going to mush. 💭

Along The Potomac River