Artificial Intelligence
See wp:Artificial intelligence on Wikipedia
OpenAI's ChatGPT, Anthropic's Claude, Meta's LLaMA, China's DeepSeek is all the rage[1]; stock market valuations of major companies like Alphabet (Google) or Nvidia fluctuate by billions of dollars overnight due to perceived strength or weakness in the new technological arms race. Companies like Mistral AI, founded only in 2023, are worth billions of dollars.
This page will capture some of the interesting points about AI and its use or relevance in Knowledge Management, MediaWiki, and probably some other tangents like deep fakes or politics.
One interesting essay I read was "Creative Commons and the Face Recognition Problem" by Adam Harvey. He describes how 100 million images from Flickr were used to train facial recognition systems using people's wedding and vacation photos.
Understanding AI
An excellent introduction to Artificial Intelligence and Large Language Models (LLMs) is the article Large language models, explained with a minimum of math and jargon by Timothy Lee and Sean Trott (July 27, 2023):
Tim Lee is a journalist with a master’s degree in computer science. The article is the result of two months of in-depth research. Co-author Sean Trott is a cognitive scientist at the University of California, San Diego.

GPUs - specialized electronic circuits initially developed for computer graphics - are an important part of AI (and of other computing workloads such as neural networks and cryptocurrency mining). llama.cpp is a library written in C++ that performs inference on various LLMs (including Llama) by implementing the necessary tensor algebra, so systems without GPUs can do "AI".
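As a rough, hedged sketch of what CPU-only inference with llama.cpp can look like in practice, the example below uses the llama-cpp-python bindings; the model path, prompt, and parameter values are placeholders, and a GGUF model file has to be downloaded separately.

# Minimal sketch of CPU-only LLM inference via llama.cpp's Python bindings.
# Assumes `pip install llama-cpp-python` and a GGUF model file on disk;
# the model path below is a placeholder, not a file shipped with this page.
from llama_cpp import Llama

llm = Llama(
    model_path="models/example-model.Q4_K_M.gguf",  # placeholder GGUF file
    n_ctx=2048,    # context window size (tokens)
    n_threads=8,   # CPU threads; no GPU required
)

output = llm("Explain what a large language model is in one sentence.", max_tokens=64)
print(output["choices"][0]["text"])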
Vectors are not just for graphics
SVG is cool for graphics. But vectors aren't just for graphics: large language models represent words and tokens as long lists of numbers (embedding vectors), and how close two vectors are is a rough proxy for how related their meanings are.
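A toy sketch of that idea: the three-dimensional vectors below are invented for illustration (real embeddings have hundreds or thousands of dimensions), but the cosine-similarity arithmetic is the same.

# Toy sketch: words as vectors, compared by cosine similarity.
# The vectors are made up for illustration; real LLM embeddings are
# learned from data and have hundreds or thousands of dimensions.
import numpy as np

embeddings = {
    "king":  np.array([0.9, 0.1, 0.1]),
    "queen": np.array([0.8, 0.2, 0.1]),
    "car":   np.array([0.1, 0.9, 0.2]),
}

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine_similarity(embeddings["king"], embeddings["queen"]))  # close in "meaning"
print(cosine_similarity(embeddings["king"], embeddings["car"]))    # much less similar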
Biases in the Hive Mind
In Semantics derived automatically from language corpora contain human-like biases the authors show that AI "learns" the same biases we live with; the biases are embedded in our language. TL;DR: if, across a million documents, the word 'nurse' mostly appears in connection with female people and the word 'doctor' mostly with male figures, then the AI learns that a nurse is female and presumes a doctor to be male.
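As a rough sketch of how such an association can be probed, the example below uses pretrained GloVe word vectors via the gensim library; the exact similarity values depend on the model downloaded, and this only illustrates the idea, not the paper's full statistical test.

# Rough sketch: probing gender associations in pretrained word vectors.
# Assumes `pip install gensim`; api.load() downloads a small GloVe model.
# Exact numbers depend on the model; this only illustrates the kind of
# measurement behind the paper cited above.
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-50")  # pretrained word vectors

for word in ("nurse", "doctor"):
    to_she = vectors.similarity(word, "she")
    to_he = vectors.similarity(word, "he")
    print(f"{word}: similarity to 'she' = {to_she:.3f}, to 'he' = {to_he:.3f}")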
Google AI Studio
Google AI Studio is a tool that lets you use Google's Gemini API 'easily'. In other words, if you want to develop something using AI, this is a starting point, and you don't even need to know how to program. But the current site is laughably ridiculous - offering sample prompts like:
- "Test if AI knows which number is bigger." (something a 5 year old knows)
- "Get recipe ideas based on an image of ingredients." (like, Don't know what to cook for dinner? Take a picture of your pantry! If you don't know how to do meal planning, I don't think artificial intelligence will help much.)
So, if you truly want to build something with AI, you might need a programmer ;-)
Google has two families of LLMs: Gemini (cookbook) and Gemma
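As a hedged sketch of what building on the Gemini API can look like from Python (the model name is an assumption that may be outdated, and an API key from Google AI Studio is required):

# Minimal sketch of calling the Gemini API with the google-generativeai package.
# Assumes `pip install google-generativeai` and an API key created in
# Google AI Studio, exposed here via the GEMINI_API_KEY environment variable.
import os
import google.generativeai as genai

genai.configure(api_key=os.environ["GEMINI_API_KEY"])

# The model name is an assumption; check the current list in AI Studio.
model = genai.GenerativeModel("gemini-1.5-flash")
response = model.generate_content("Suggest a dinner recipe using rice, eggs, and spinach.")
print(response.text)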
Dive Deeper
- Hugging Face
- Kaggle - competitions, open datasets, models and notebooks like day-1-prompting
- Temporal - reliable, scalable AI orchestrator
- ↑ ed. note: I crossed out the prior models to illustrate the litany of new models that are hyped in rapid succession. Of course ChatGPT and OpenAI are still relevant - and release updates to their prior technology in the never-ending 'arms race' of AI.