Technology
Artificial intelligence and licensing 🤔
I am very concerned about proposals to require licensing and regulation of artificial intelligence and everything that might fall under that umbrella – machine learning, natural language models and the like. Look at who is putting forward proposals to regulate artificial intelligence – it’s the big incumbent players like OpenAI (the maker of ChatGPT) and Facebook.
Maybe commercial products for sale should be regulated, but free, open source projects should not be. Frameworks should be widely available to the public for any purpose, good or bad. Let the people play and innovate. If harm occurs, go after the harmful commercial users, not the everyday people experimenting with the technology for nonprofit purposes to see what they can innovate.
Stopping bad actors seems like a good idea, but you can’t stop a technology from moving forward on a global internet. If the US bans innovation, another less regulated country is likely to move it forward – China, Switzerland or some other place. I’m okay with regulating Meta and OpenAI, but not what goes on inside people’s basements.
Artificial intelligence, machine learning and natural language processing
There is a lot of confusion and hype on this topic, so I thought it best to clarify a few terms. You probably use artificial intelligence and these related subsets every time you use a computer, and it’s not as scary as you might think. Indeed, Artificial Intelligence (AI), Machine Learning (ML), and Large Language Models (LLMs) are related concepts in the field of technology, but they have distinct differences:
- Artificial Intelligence (AI):
- Definition: AI refers to the broader field of creating machines or systems that can perform tasks that typically require human intelligence. It aims to replicate human-like thinking, reasoning, problem-solving, and decision-making.
- Scope: AI encompasses a wide range of techniques and applications, including natural language processing, computer vision, robotics, and more.
- Examples: Virtual assistants like Siri and Alexa, autonomous cars, and AI-powered recommendation systems.
- Machine Learning (ML):
- Definition: ML is a subset of AI that focuses on developing algorithms and models that allow machines to learn from data and make predictions or decisions without explicit programming.
- Approach: ML algorithms use patterns and statistical analysis to improve their performance over time as they are exposed to more data.
- Examples: Spam email filters, image recognition software, and predictive text suggestions on smartphones.
- Large Language Models (LLMs):
- Definition: LLMs are a specific type of ML model designed for natural language understanding and generation tasks. They are massive neural networks trained on vast amounts of text data.
- Functionality: LLMs excel at tasks like text generation, translation, summarization, and question-answering. They can understand and generate human-like text.
- Examples: GPT-3, GPT-4, and BERT are examples of LLMs that have gained prominence for their text-based capabilities.
In summary, AI is the overarching concept that aims to create intelligent machines, while ML is a subset of AI that focuses on learning from data. LLMs, on the other hand, are specific ML models designed for natural language processing tasks. They are powerful tools within the field of AI and ML, capable of understanding and generating human language at scale.
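To make the “learning from data without explicit programming” point concrete, here is a minimal sketch in Python of the idea behind a spam filter like the one mentioned above: a toy naive-Bayes-style classifier that learns word statistics from a handful of labeled examples instead of following hand-written rules. The example messages and labels are invented for illustration.

```python
from collections import Counter

# Toy training data: (message, label) pairs. A real filter learns from
# millions of messages; these four are invented for illustration.
examples = [
    ("win a free prize now", "spam"),
    ("free money claim your prize", "spam"),
    ("meeting notes attached", "ham"),
    ("lunch tomorrow at noon", "ham"),
]

# "Learning": count word frequencies per label -- no hand-written rules.
counts = {"spam": Counter(), "ham": Counter()}
totals = {"spam": 0, "ham": 0}
for text, label in examples:
    for word in text.split():
        counts[label][word] += 1
        totals[label] += 1

vocab_size = len(set(counts["spam"]) | set(counts["ham"]))

def score(text, label):
    """Crude likelihood of the text under one label (add-one smoothing)."""
    p = 1.0
    for word in text.split():
        p *= (counts[label][word] + 1) / (totals[label] + vocab_size)
    return p

def classify(text):
    """Pick whichever label makes the text more likely."""
    return max(("spam", "ham"), key=lambda label: score(text, label))

print(classify("claim your free prize"))   # -> spam
print(classify("notes from the meeting"))  # -> ham
```

Notice the program was never told that “prize” means spam; it inferred that from the examples, which is the essential difference between machine learning and conventional programming.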
Area Codes In NY State
This interactive map shows area codes in New York State. In most of the state you now have to dial the full ten-digit number, because multiple area codes overlay the same region.
Data Sources: USGS Area Code Base Map (updated with new area code overlays) and Wikipedia: List of Area Codes.
Study claims ChatGPT is losing capability, but some experts aren’t convinced | Ars Technica
On Tuesday, researchers from Stanford University and University of California, Berkeley released a research paper that purports to show changes in GPT-4's outputs over time. The paper fuels a common-but-unproven belief that the AI language model has grown worse at coding and compositional tasks over the past few months. Some experts aren't convinced by the results, but they say that the lack of certainty points to a larger problem with how OpenAI handles its model releases.
I don’t get the hype over AI 🤔
AI – artificial intelligence, not artificial insemination – is the latest in a line of quickly forgotten technology hype, following machine learning, blockchain, the Metaverse, 3D goggles and so many other long-forgotten fads.
When most people talk about AI, they’re talking about Large Language Models. These models use machine learning to make computer programs produce sentences that sound like they were written by a person. They take the words and phrases people type or say and generate a response. This has impressed some people, but the results are often rough and incorrect.
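To illustrate what “generate a response” means mechanically, here is a toy sketch in Python. A real LLM uses a huge neural network over subword tokens; this stand-in uses simple word-pair (bigram) counts from an invented corpus, but the generation loop is the same shape: look at the text so far, sample the next token from a probability distribution, append it, and repeat.

```python
import random
from collections import defaultdict, Counter

# A tiny invented corpus; a real model trains on trillions of tokens.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count which word follows which -- a bigram table standing in for the
# billions of weights inside a real neural network.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def generate(start, length=8):
    """Produce text one token at a time by sampling the next word
    from the learned probability distribution."""
    words = [start]
    for _ in range(length):
        options = following[words[-1]]
        if not options:
            break
        choices, weights = zip(*options.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))  # e.g. "the cat sat on the rug . the dog"
```

Sounding fluent while understanding nothing is baked into the method, which is why the output reads plausibly yet still gets things wrong.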
Wikipedia defines Large Language Models this way:
A large language model (LLM) is a computerized language model, embodied by an artificial neural network using an enormous amount of “parameters” (“neurons” in its layers with up to tens of millions to billions “weights” between them), that are (pre-)trained on many GPUs in relatively short time due to massive parallel processing of vast amounts of unlabeled texts containing up to trillions of tokens (parts of words) provided by corpora such as Wikipedia Corpus and Common Crawl, using self-supervised learning or semi-supervised learning,[1] resulting in a tokenized vocabulary with a probability distribution. LLMs can be upgraded by using additional GPUs to (pre-)train the model with even more parameters on even vaster amounts of unlabeled texts.[2]
The invention of the transformer algorithm, either unidirectional (such as used by GPT models) or bidirectional (such as used by BERT model), allows for such massively parallel processing.[3] Due to all above, most of the older (specialized) supervised models for specific tasks became outdated.[4]
In an implicit way, LLMs have acquired an embodied knowledge about syntax, semantics and “ontology” inherent in human language corpora, but also inaccuracies and biases present in the corpora.[4]
https://en.wikipedia.org/wiki/Large_language_model
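The “tokens (parts of words)” idea in that definition is easy to demonstrate. Real tokenizers learn tens of thousands of subword pieces from their training corpus (via byte-pair encoding or similar); this sketch uses a tiny hand-picked vocabulary and a greedy longest-match rule, purely for illustration.

```python
def tokenize(word, vocab):
    """Greedy longest-match split of a word into subword tokens,
    falling back to single characters when no vocabulary piece fits."""
    tokens = []
    i = 0
    while i < len(word):
        for j in range(len(word), i, -1):      # try the longest piece first
            piece = word[i:j]
            if piece in vocab or j == i + 1:   # single chars always succeed
                tokens.append(piece)
                i = j
                break
    return tokens

# A hand-picked toy vocabulary; real models learn theirs from data.
vocab = {"un", "believ", "able", "token", "iz", "ation"}
print(tokenize("unbelievable", vocab))  # -> ['un', 'believ', 'able']
print(tokenize("tokenization", vocab))  # -> ['token', 'iz', 'ation']
```

Each token then maps to a number, and the “probability distribution” the definition mentions is the model’s guess about which token number comes next.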
At one level, the Wikipedia definition sounds quite impressive, but in reality the output of programs like ChatGPT is, at best, mediocre. In my experience, ChatGPT has very limited knowledge of many subject areas. Its understanding of contemporary or specialized topics is lacking and falls short compared to a conventional encyclopedia. The answers it generates are bland, lacking depth and refinement, and they frequently contain mistakes and inaccuracies. The fear of saying something insensitive or illegal constrains ChatGPT, leading it to avoid discussing or providing information on certain topics. As a result, the range of conversations and knowledge it can engage with is significantly limited.
But what I object to most in the hype over Artificial Intelligence is how the market leaders want to cement their position in law, aiming to restrict competition through government regulation and extensive lobbying. The leading makers of AI systems actively seek a ban on open-source models, hindering individuals and institutions from embracing this technology for their own advantage, and they cite various conjectured threats and fears about the potential misuse of AI. The irony is that no matter how successful these AI lobbyists are, those operating beyond the jurisdiction of United States law will likely keep developing AI freely, and it will ultimately permeate back into the United States.
Useful software, particularly open source software, moves around the internet like water. You can only resist its spread temporarily. Governments may attempt to build barriers to suppress it, but source code effortlessly permeates the vast expanses of the internet, evading the clutches of regulatory bodies. It brings to mind the enduring controversy surrounding DeCSS, the library that enabled Linux users and others to freely view and duplicate encrypted DVD movies, expanding the domain of online sharing. Numerous regulators and movie companies tried in vain to eradicate DeCSS from the web, yet it persistently eluded their grasp; in fact, its successor libdvdcss is now a commonplace component of Linux software repositories, further illustrating the ineffectiveness of their efforts. While the AI lobbyists may strive to proscribe the unrestricted use and dissemination of Large Language Models, their attempts will prove fruitless, as these models will continue to be used and exchanged outside the government’s control. Even if a brief period of suppression were to occur, it would merely hinder progress in the United States and select countries, rather than stifle it completely.
Large language models are likely to become an everyday part of computing, especially as open source makes them widely available. There will be a lot of fear and loathing over AI, but I doubt these models will change life nearly as much as their promoters and the Luddites make them out to be. Large language models will be an evolution, making computing better behind the scenes, but they won’t change much in everyday life.
Those of a certain age can remember the illegal prime number, distributed as an image, that for a number of years was considered a crime to share, at least in the United States (the image’s colors, plus C0, encode the key used to decrypt HD DVDs). That worked out so well for the government, which eventually decided this was stupid and forgot about it within a few years.
More Thoughts on Television
Being somebody who doesn’t own a television, 📺 I find watching any television to be incredibly disturbing. The bright colors and the sex and violence I find so nauseating. 🤢 The endless chanting and championing of big government makes me want to punch the screen. 👊
The truth is, I don’t know how anybody could stand to watch television for more than five minutes without their brains turning to mush. 😖