AGI - Artificial General Intelligence 🚀

A curated list of tools and resources to help you accelerate your journey toward building AGI 🤖

Generative artificial intelligence is a branch of AI focused on using machine learning algorithms to generate new content such as images, video, text, and audio. The goal is for AI systems to create unique, original artefacts that seem as if they were created by humans.

Some examples of generative AI include:

  • Generative Adversarial Networks (GANs) (GAN-Zoo): GANs pit two neural networks, a generator and a discriminator, against each other to create new data samples (see the training-loop sketch after this list). GANs have been used to generate photorealistic images, human faces, works of art, and more.

  • Variational Autoencoders (VAEs): VAEs are neural networks used for unsupervised learning of compressed latent representations of data. They have been used to generate new data samples such as images, handwriting, and speech.

  • Text Generation: Models like GPT-3 can generate paragraphs of coherent text, poetry, articles, and scripts after being trained on massive datasets.

  • Deepfakes (Link): Deep learning techniques can manipulate or generate visual and audio content with a high potential for misuse. Models can generate fake images, videos, and speech that seem authentic.

  • Reinforcement Learning from Human Feedback (RLHF) (Link): RLHF fine-tunes a model using a reward signal learned from human preference rankings of its outputs. It is the alignment technique behind assistants such as InstructGPT and ChatGPT, steering generated text toward responses humans judge helpful and harmless.

  • Neural Style Transfer (Link): This technique uses neural networks to transfer the style of one image onto the content of another image. It has been used by apps like Prisma, Dreamscope, and Style2Paints to generate unique works of art.

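To make the adversarial setup in the GANs bullet concrete, here is a minimal training-loop sketch. It assumes PyTorch and uses a toy 2-D Gaussian as the "real" data distribution; the network sizes, learning rates, and step count are illustrative choices, not taken from any particular paper.

```python
import torch
import torch.nn as nn

# Toy "real" data: samples from a 2-D Gaussian the generator should learn to imitate.
def real_batch(n=128):
    return torch.randn(n, 2) * 0.5 + torch.tensor([2.0, 2.0])

latent_dim = 8
generator = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, 2))
discriminator = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))

opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    # Discriminator update: push real samples toward label 1, generated samples toward 0.
    real = real_batch()
    fake = generator(torch.randn(real.size(0), latent_dim)).detach()
    d_loss = bce(discriminator(real), torch.ones(real.size(0), 1)) + \
             bce(discriminator(fake), torch.zeros(fake.size(0), 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator update: try to make the discriminator label generated samples as real (1).
    fake = generator(torch.randn(real.size(0), latent_dim))
    g_loss = bce(discriminator(fake), torch.ones(fake.size(0), 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```

The key point is the alternation: the discriminator is trained to tell real from generated samples, while the generator is trained to fool it, and the two improve together.
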
The goal of generative AI is to give machines a degree of creative ability and push the boundaries of what AI can accomplish. But it also introduces risks around the spread of plausible misinformation that must be taken seriously. Overall, generative AI is an exciting and fast-moving field of research.

LLMs - Large Language Models 💻

LLMs, or Large Language Models, are neural networks trained on massive amounts of text data to recognize patterns and generate natural language. They are self-supervised models: they learn by making predictions over vast amounts of unlabeled data, such as guessing the next word in a sentence.

Some key characteristics of LLMs include:

  • They are trained on huge datasets that can contain billions of words. The larger the dataset, the more knowledge and capabilities the LLM can acquire.

  • They employ the transformer architecture, whose attention mechanism relates each position in a sentence to every other position to capture context and the relationships between words. This allows the model to handle long-range dependencies in language (a NumPy sketch of the attention computation follows this list).

  • They are usually pre-trained on a general language modelling task and then fine-tuned for more specific downstream tasks like question answering, text summarization, and sentiment analysis.

  • They generate text by predicting the next most likely word in a sequence given the context. The output can seem very human and coherent.

  • Examples of prominent LLMs include OpenAI's GPT-3, Google's BERT, and Microsoft's Turing-NLG.

  • They have achieved state-of-the-art results on many NLP tasks, but they also have weaknesses, such as gaps in factual world knowledge and susceptibility to biases in the training data.

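The attention mechanism mentioned in the transformer bullet above comes down to a few lines of linear algebra. Below is a minimal NumPy sketch of scaled dot-product self-attention with a causal mask, so each position attends only to itself and earlier positions; the shapes and the toy input are illustrative.

```python
import numpy as np

def causal_attention(Q, K, V):
    """Scaled dot-product attention with a causal (autoregressive) mask.

    Q, K, V have shape (seq_len, d_k). Each output row is a weighted average
    of the value rows at positions up to and including that row.
    """
    seq_len, d_k = Q.shape
    scores = Q @ K.T / np.sqrt(d_k)                       # pairwise similarities
    mask = np.triu(np.ones((seq_len, seq_len)), k=1).astype(bool)
    scores[mask] = -np.inf                                # block attention to future positions
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)        # softmax over allowed positions
    return weights @ V

# Toy usage: 5 tokens with 16-dimensional representations; Q, K, V all come from x.
rng = np.random.default_rng(0)
x = rng.normal(size=(5, 16))
out = causal_attention(x, x, x)
print(out.shape)  # (5, 16)
```

A full transformer stacks many such attention layers, with learned projections producing Q, K, and V and feed-forward layers in between, but the weighting-and-averaging step shown here is the core idea behind handling long-range context.
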
Some key applications and future directions of LLMs include:

  • Natural language generation for dialogue, storytelling, and creative writing.

  • Robust question answering and open-domain chatbots (a short pipeline example follows this list).

  • Multimodal learning by incorporating images, speech, and other data types.

  • Achieving human-level language understanding, which remains an open challenge.

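As a small taste of the question-answering application mentioned in the list above, the sketch below uses the Hugging Face transformers pipeline API (pip install transformers). The default model the pipeline downloads and the example inputs are illustrative; a real project would pin a specific model checkpoint.

```python
from transformers import pipeline

# Extractive QA: the model picks an answer span out of the supplied context.
qa = pipeline("question-answering")  # downloads a default QA model on first use

result = qa(
    question="What architecture do large language models use?",
    context=(
        "Large language models employ the transformer architecture, "
        "which uses an attention mechanism to model context."
    ),
)
print(result["answer"], result["score"])
```
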
Free Courses Available [40+] 🧠

LLMs Educational Resources 📚

  • START HERE: "Transformers from Scratch", Brandon Rohrer, [Website]

  • Stanford Transformers Class: "CS25: Transformers United", Stanford, 2022, [Website]

  • Andrej Karpathy GPT Tutorial: "Let's build GPT: from scratch, in code, spelled out.", Andrej Karpathy, 2023, [YouTube Video]

Robotics Educational Resources

  • AI-Enabled Robotics Class: "CS199: Stanford Robotics Independent Study", Stanford, 2023, [Website]

LLMs + Robotics Educational Resources

  • Google's 2022 Research: "Google Research, 2022 & beyond: Robotics", Google, 2023, [Website]

  • Controlling Robots Via Large Language Models: "Controlling Robots Via Large Language Models", Sanjiban Choudhury, CS 4756/5756, Cornell, 2023 [Slides]

Reasoning

  • AutoTAMP: "AutoTAMP: Autoregressive Task and Motion Planning with LLMs as Translators and Checkers", arXiv, June 2023. [Paper]

  • LLM Designs Robots: "Can Large Language Models Design a Robot?", arXiv, Mar 2023. [Paper]

  • PaLM-E: "PaLM-E: An Embodied Multimodal Language Model", arXiv, Mar 2023. [Paper] [Website] [Demo]

  • RT-1: "RT-1: Robotics Transformer for Real-World Control at Scale", arXiv, Dec 2022. [Paper] [Code] [Website]

  • ProgPrompt: "Generating Situated Robot Task Plans using Large Language Models", arXiv, Sept 2022. [Paper] [Code (not actually available)] [Website]

  • Code-As-Policies: "Code as Policies: Language Model Programs for Embodied Control", arXiv, Sept 2022. [Paper] [Code] [Website]

  • Say-Can: "Do As I Can, Not As I Say: Grounding Language in Robotic Affordances", arXiv, Apr 2022. [Paper] [Code] [Website]

  • Socratic: "Socratic Models: Composing Zero-Shot Multimodal Reasoning with Language", arXiv, Apr 2022. [Paper] [Code] [Website]

  • PIGLeT: "PIGLeT: Language Grounding Through Neuro-Symbolic Interaction in a 3D World", ACL, Jun 2021. [Paper] [Code] [Website]

AI Related Visualization 👀

The Best Article 📝

Curated By - Vansh Gehlot [LinkedIn] [Twitter] [Website]
