Tracing the History of Polymeric Materials, Part 25: Silicones | Plastics Technology
Technology
It’s a Computer – The New Stack
Next-gen content farms are using AI-generated text to spin up junk websites | MIT Technology Review
Over 140 major brands are paying for ads that end up on unreliable AI-written sites, likely without their knowledge. Ninety percent of the ads from major brands found on these AI-generated news sites were served by Google, though the company’s own policies prohibit sites from placing Google-served ads on pages that include “spammy automatically generated content.” The practice threatens to hasten the arrival of a glitchy, spammy internet overrun by AI-generated content, and it wastes massive amounts of ad money.
Most companies that advertise online automatically bid on spots to run those ads through a practice called “programmatic advertising.” Algorithms place ads on various websites according to complex calculations that optimize the number of eyeballs an ad might attract from the company’s target audience. As a result, big brands end up paying for ad placements on websites that they may have never heard of before, with little to no human oversight.
To take advantage, content farms have sprung up where low-paid humans churn out low-quality content to attract ad revenue. These types of websites already have a name: “made for advertising” sites. They use tactics such as clickbait, autoplay videos, and pop-up ads to squeeze as much money as possible out of advertisers. In a recent survey, the Association of National Advertisers found that 21% of ad impressions in their sample went to made-for-advertising sites. The group estimated that around $13 billion is wasted globally on these sites each year.
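The “programmatic advertising” mechanics described above can be sketched in a few lines. A common simplification of how exchanges price an impression is a second-price auction: the highest bidder wins but pays the runner-up’s bid. Everything here is illustrative — the bidder names and numbers are made up, not any real exchange’s API:

```python
# Rough sketch of one programmatic ad auction for a single impression.
# Illustrative only: real exchanges weigh many more signals, and these
# advertiser names and bid values are invented for the example.

def run_auction(bids):
    """bids: {advertiser: bid in dollars per 1000 impressions}.
    Second-price rule: highest bidder wins, pays the second-highest bid."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price_paid = ranked[1][1] if len(ranked) > 1 else ranked[0][1]
    return winner, price_paid

# The brand's bidding algorithm acts without a human ever seeing the site,
# which is how ads land on "made for advertising" pages unnoticed.
bids = {"brand_a": 4.50, "brand_b": 3.75, "brand_c": 2.10}
winner, price = run_auction(bids)
print(winner, price)  # brand_a 3.75
```

The point of the sketch is the absence of oversight: nothing in the auction looks at the page content, only at the bids.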
NPR
New physics-based self-learning machines could replace current artificial neural networks and save energy
Artificial intelligence and licensing
I am very concerned about proposals to require licensing and regulation of artificial intelligence and everything that might fall under that umbrella: machine learning, large language models, and so on. Look at who is putting forward proposals to regulate artificial intelligence — the big incumbent players like OpenAI (maker of ChatGPT) and Facebook.
Maybe commercial products for sale should be regulated, but free, open-source projects should not be. Frameworks should be widely available to the public for any purpose, good or bad. Let people play and innovate. If harm occurs, go after the harmful commercial users, not the everyday people experimenting with the technology for noncommercial purposes to see what they can build.
Stopping bad actors seems like a good idea, but you can’t stop a technology from moving forward on a global internet. If the US bans innovation, another less regulated country is likely to move it forward — China, Switzerland, or some other place. I’m okay with regulating Meta and OpenAI, but not what goes on inside people’s basements.
Artificial intelligence, machine learning and natural language processing
There is a lot of confusion and hype on this topic, so I thought it best to clarify this point. You probably use artificial intelligence and these related subsets every time you use a computer, and it’s not as scary as you might think. Indeed, Artificial Intelligence (AI), Machine Learning (ML), and Large Language Models (LLMs) are related concepts in the field of technology, but they have distinct differences:
- Artificial Intelligence (AI):
  - Definition: AI refers to the broader field of creating machines or systems that can perform tasks that typically require human intelligence. It aims to replicate human-like thinking, reasoning, problem-solving, and decision-making.
  - Scope: AI encompasses a wide range of techniques and applications, including natural language processing, computer vision, robotics, and more.
  - Examples: Virtual assistants like Siri and Alexa, autonomous cars, and AI-powered recommendation systems.
- Machine Learning (ML):
  - Definition: ML is a subset of AI that focuses on developing algorithms and models that allow machines to learn from data and make predictions or decisions without explicit programming.
  - Approach: ML algorithms use patterns and statistical analysis to improve their performance over time as they are exposed to more data.
  - Examples: Spam email filters, image recognition software, and predictive text suggestions on smartphones.
- Large Language Models (LLMs):
  - Definition: LLMs are a specific type of ML model designed for natural language understanding and generation tasks. They are massive neural networks trained on vast amounts of text data.
  - Functionality: LLMs excel at tasks like text generation, translation, summarization, and question-answering. They can understand and generate human-like text.
  - Examples: GPT-3, GPT-4, and BERT are examples of LLMs that have gained prominence for their text-based capabilities.
In summary, AI is the overarching concept that aims to create intelligent machines, while ML is a subset of AI that focuses on learning from data. LLMs, on the other hand, are specific ML models designed for natural language processing tasks. They are powerful tools within the field of AI and ML, capable of understanding and generating human language at scale.
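To make the ML idea above concrete — a rule learned from labeled examples rather than hand-written — here is a toy spam filter built as a perceptron over a bag-of-words. Everything in it is a made-up miniature (the vocabulary, the training messages); real spam filters use far larger models, but the learning loop is the same in spirit:

```python
# Toy spam filter: the classification rule is LEARNED from labeled
# examples, never explicitly programmed. All data below is invented.

def featurize(text, vocab):
    # Bag-of-words: 1.0 if the vocabulary word appears in the message.
    words = set(text.lower().split())
    return [1.0 if w in words else 0.0 for w in vocab]

def train_perceptron(samples, vocab, epochs=20, lr=0.1):
    # Learn one weight per word plus a bias from (text, is_spam) pairs.
    weights = [0.0] * len(vocab)
    bias = 0.0
    for _ in range(epochs):
        for text, is_spam in samples:
            x = featurize(text, vocab)
            score = sum(w * xi for w, xi in zip(weights, x)) + bias
            predicted = 1 if score > 0 else 0
            error = is_spam - predicted  # -1, 0, or +1
            if error:  # nudge weights toward the correct answer
                weights = [w + lr * error * xi for w, xi in zip(weights, x)]
                bias += lr * error
    return weights, bias

def predict(text, vocab, weights, bias):
    x = featurize(text, vocab)
    return sum(w * xi for w, xi in zip(weights, x)) + bias > 0

vocab = ["free", "winner", "prize", "meeting", "report", "lunch"]
training = [
    ("free prize winner", 1),
    ("claim your free prize now", 1),
    ("winner winner free money", 1),
    ("team meeting at noon", 0),
    ("quarterly report attached", 0),
    ("lunch tomorrow?", 0),
]
weights, bias = train_perceptron(training, vocab)
print(predict("free prize inside", vocab, weights, bias))         # True
print(predict("meeting about the report", vocab, weights, bias))  # False
```

Note that nowhere does the code say “the word *free* means spam” — the positive weight on that word emerges from the data, which is exactly the distinction between ML and conventional programming drawn above.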