VibeBuilders.ai

RLHF

Explore resources related to RLHF to help implement AI solutions for your business.

[D] What is your honest experience with reinforcement learning?
reddit
LLM Vibe Score: 0
Human Vibe Score: 1
Starks-Technology · This week

[D] What is your honest experience with reinforcement learning?

In my personal experience, SOTA RL algorithms simply don't work. I've tried working with reinforcement learning for over 5 years. I remember when AlphaGo defeated the world-famous Go player Lee Sedol, and everybody thought RL would take the ML community by storm. Yet, outside of toy problems, I've personally never found a practical use-case for RL. What is your experience with it? Aside from ad recommendation systems and RLHF, are there legitimate use-cases of RL? Or was it all hype?

Edit: I know a lot about AI. I built NexusTrade, an AI-powered automated investing tool that lets non-technical users create, update, and deploy their trading strategies. I'm not an idiot or a noob; RL is just ridiculously hard.

Edit 2: Since my comments are being downvoted, here is a link to my article that better describes my position. It's not that I don't understand RL; I released my open-source code and wrote a paper on it. It's the fact that it's EXTREMELY difficult to get working. Other deep learning algorithms like CNNs (including ResNets), RNNs (including GRUs and LSTMs), Transformers, and GANs are not hard to understand. These algorithms work and have practical use-cases outside the lab. Traditional SOTA RL algorithms like PPO, DDPG, and TD3 are just very hard. You need to do a bunch of research even to implement a toy problem. In contrast, the Decision Transformer is something anybody can implement, and it seems to match or surpass the SOTA. You don't need two networks battling each other. You don't have to go through hell to debug your network. It just naturally learns the best set of actions in an autoregressive manner. I also didn't mean to come off as arrogant or to imply that RL is not worth learning. I just haven't seen any real-world, practical use-cases for it. I simply wanted to start a discussion, not claim that I know everything.

Edit 3: There's a shocking number of people calling me an idiot for not fully understanding RL. You guys are wayyy too comfortable calling people you disagree with names. News flash: not everybody has a PhD in ML. My undergraduate degree is in biology. I taught myself the high-level maths needed to understand ML. I'm very passionate about the field; I've just had VERY disappointing experiences with RL. Funny enough, very few people are refuting my actual points. To summarize:

- Lack of real-world applications
- Extremely complex and inaccessible to 99% of the population
- Much harder than traditional DL algorithms like CNNs, RNNs, and GANs
- Sample inefficiency and instability
- Difficult to debug
- Better alternatives, such as the Decision Transformer

Are these not legitimate criticisms? Is the purpose of this sub not to have discussions related to machine learning? To the few commenters who aren't calling me an idiot: thank you! Remember, it costs you nothing to be nice!

Edit 4: Lots of people seem to agree that RL is over-hyped; unfortunately, those comments are downvoted. To clear up some things: we've invested HEAVILY in reinforcement learning, and all we got from that investment is a robot that can be super-human at (some) video games. AlphaFold did not use any reinforcement learning. SpaceX doesn't either. I concede that it can be useful for robotics, but I still argue that its use-cases outside the lab are extremely limited. If you're stumbling on this thread and are curious about an RL alternative, check out the Decision Transformer. It can be used in any situation where a traditional RL algorithm can be used.

Final Edit: To those who contributed more recently, thank you for the thoughtful discussion! From what I learned, model-based methods like Dreamer and IRIS MIGHT have a future. But everybody who has actually used model-free methods like DDPG unanimously agrees that they suck and don't work.
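
For readers landing here from search: the Decision Transformer recommended throughout the post (Chen et al., 2021) recasts RL as supervised sequence modeling. Trajectories are relabeled with returns-to-go, and a causal transformer is trained to predict actions conditioned on the return you want to achieve. A minimal sketch of the relabeling step (illustrative only; the function name is invented for this example, and this is not the poster's code):

```python
import numpy as np

def returns_to_go(rewards, gamma=1.0):
    """Return still to be collected from each timestep onward; the Decision
    Transformer conditions each action prediction on this quantity."""
    rtg = np.zeros(len(rewards))
    running = 0.0
    for t in reversed(range(len(rewards))):
        running = rewards[t] + gamma * running
        rtg[t] = running
    return rtg

# A trajectory is then re-packed as (return-to-go, state, action) triples and
# fed to a causal transformer that predicts the action token autoregressively.
print(returns_to_go([0.0, 0.0, 1.0]))  # [1. 1. 1.]
```
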

ARENA_2.0
github
LLM Vibe Score: 0.544
Human Vibe Score: 0.085
callummcdougall · Mar 28, 2025

ARENA_2.0

This GitHub repo hosts the exercises and Streamlit pages for the ARENA 2.0 program. You can find a summary of each of the chapters below. For more detailed information (including the different ways you can access the exercises), click on the links in the chapter headings. Additionally, see this Notion page for a guide to the virtual study materials available.

Chapter 0: Fundamentals

The material in this chapter covers the first five days of the curriculum. It can be seen as a grounding in all the fundamentals necessary to complete the more advanced sections of this course (such as RL, transformers, mechanistic interpretability, and generative models). Some highlights from this chapter include:

- Building your own 1D and 2D convolution functions
- Building and loading weights into a residual neural network, and finetuning it on a classification task
- Working with Weights & Biases to optimise hyperparameters
- Implementing your own backpropagation mechanism

Chapter 1: Transformers & Mech Interp

The material in this chapter covers the next 8 days of the curriculum. It covers transformers (what they are, how they are trained, and how they are used to generate output) as well as mechanistic interpretability (what it is, what some of the most important results in the field so far are, and why it might be important for alignment). Some highlights from this chapter include:

- Building your own transformer from scratch, and using it to sample autoregressive output
- Using the TransformerLens library developed by Neel Nanda to locate induction heads in a 2-layer model
- Finding a circuit for indirect object identification in GPT-2 small
- Interpreting models trained on toy tasks, e.g. classification of bracket strings, or modular arithmetic
- Replicating Anthropic's results on superposition

Unlike the first chapter (where all the material was compulsory), this chapter has 4 days of compulsory content and 4 days of bonus content. During the compulsory days you will build and train transformers, and get a basic understanding of mechanistic interpretability of transformer models, which includes induction heads and the use of TransformerLens. For the remaining 4 days, you have the option to continue with whatever material interests you from the remaining sets of exercises. There is also bonus material if you want to leave the beaten track of exercises altogether!

Chapter 2: Reinforcement Learning

Reinforcement learning is an important field of machine learning. It works by teaching agents to take actions in an environment to maximise their accumulated reward. In this chapter, you will learn about some of the fundamentals of RL, and work with OpenAI's Gym environment to run your own experiments. Some highlights from this chapter include:

- Building your own agent to play the multi-armed bandit problem, implementing methods from Sutton & Barto (a minimal version is sketched below)
- Implementing a Deep Q-Network (DQN) and Proximal Policy Optimization (PPO) to play the CartPole game
- Applying RLHF to autoregressive transformers like the ones you built in the previous chapter
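
To give a flavor of the bandit exercise, here is a minimal sample-average epsilon-greedy agent in the spirit of Sutton & Barto, chapter 2. This is an illustrative sketch, not the ARENA exercise code; the function name and interface are invented for this example.

```python
import numpy as np

def eps_greedy_bandit(true_means, steps=1000, eps=0.1, seed=0):
    """Sample-average epsilon-greedy agent for a k-armed Gaussian bandit."""
    rng = np.random.default_rng(seed)
    k = len(true_means)
    q = np.zeros(k)  # running value estimates per arm
    n = np.zeros(k)  # pull counts per arm
    for _ in range(steps):
        # Explore with probability eps, otherwise exploit the current best arm.
        a = rng.integers(k) if rng.random() < eps else int(np.argmax(q))
        r = rng.normal(true_means[a], 1.0)  # noisy reward
        n[a] += 1
        q[a] += (r - q[a]) / n[a]  # incremental mean update
    return q

print(eps_greedy_bandit([0.1, 0.5, 0.9]))  # estimates should approach the true means
```
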
Chapter 3: Training at Scale

With the advent of large language models, training at scale has become a necessity for creating highly competent models. In this chapter we go through the basics of GPUs and distributed training, along with introductions to libraries that make training at scale easier. Some highlights from this chapter include:

- Quantizing your model to INT8 for blazing fast inference (a minimal example is sketched below)
- Implementing distributed training loops using torch.distributed
- Getting hands-on with Hugging Face Accelerate and Microsoft DeepSpeed
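
As a taste of the quantization material, PyTorch ships post-training dynamic quantization out of the box. The snippet below is a generic illustration (not ARENA course code); the toy model is invented for the example.

```python
import torch
import torch.nn as nn
from torch.ao.quantization import quantize_dynamic  # torch.quantization on older versions

model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))
# Post-training dynamic quantization: Linear weights are stored in INT8 and
# activations are quantized on the fly, shrinking the model and speeding up
# CPU inference without retraining.
qmodel = quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)
print(qmodel(torch.randn(1, 128)).shape)  # torch.Size([1, 10])
```
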

h2o-llmstudio
github
LLM Vibe Score: 0.499
Human Vibe Score: 0.048
h2oai · Mar 28, 2025

h2o-llmstudio

Welcome to H2O LLM Studio, a framework and no-code GUI designed for fine-tuning state-of-the-art large language models (LLMs).

With H2O LLM Studio, you can:

- easily and effectively fine-tune LLMs without the need for any coding experience.
- use a graphical user interface (GUI) specially designed for large language models.
- finetune any LLM using a large variety of hyperparameters.
- use recent finetuning techniques such as Low-Rank Adaptation (LoRA) and 8-bit model training with a low memory footprint.
- use Reinforcement Learning (RL) to finetune your model (experimental).
- use advanced evaluation metrics to judge answers generated by the model.
- track and compare your model performance visually; in addition, Neptune and W&B integration can be used.
- chat with your model and get instant feedback on your model performance.
- easily export your model to the Hugging Face Hub and share it with the community.

Quickstart

For questions, discussion, or just hanging out, come and join our Discord! Use a cloud-based runpod.io instance to run the H2O LLM Studio GUI, or use the CLI for fine-tuning LLMs.

What's New

- PR 788: New problem type for Causal Regression Modeling allows training single-target regression data using LLMs.
- PR 747: Fully removed RLHF in favor of DPO/IPO/KTO optimization.
- PR 741: Removed separate max length settings for prompt and answer in favor of a single max_length setting, better resembling the chat_template functionality from transformers.
- PR 592: Added KTOPairLoss for DPO modeling, allowing models to be trained with simple preference data. Data currently needs to be manually prepared by randomly matching positive and negative examples as pairs.
- PR 592: Started deprecating RLHF in favor of DPO/IPO optimization. Training is disabled, but old experiments are still viewable. RLHF will be fully removed in a future release.
- PR 530: Introduced a new problem type for DPO/IPO optimization. This optimization technique can be used as an alternative to RLHF (the objective is sketched below).
- PR 288: Introduced DeepSpeed for sharded training, allowing larger models to be trained on machines with multiple GPUs. Requires NVLink. This feature replaces FSDP and offers more flexibility. DeepSpeed requires a system installation of the CUDA toolkit, and we recommend using version 12.1; see Recommended Install.
- PR 449: New problem type for Causal Classification Modeling allows training binary and multiclass models using LLMs.
- PR 364: User secrets are now handled more securely and flexibly. Support for handling secrets using the 'keyring' library was added. User settings are migrated automatically where possible.

Please note that due to current rapid development we cannot guarantee full backwards compatibility of new functionality. We thus recommend pinning the version of the framework to the one you used for your experiments. For resetting, please delete/backup your data and output folders.
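
Since the project moved from RLHF to DPO, it may help to see the objective DPO optimizes. The sketch below implements the loss from Rafailov et al. (2023), "Direct Preference Optimization"; it is a conceptual illustration, not H2O LLM Studio's actual implementation, and the function name and arguments are invented for this example.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    """DPO: push the policy to prefer the chosen completion over the rejected
    one, relative to a frozen reference model (beta controls the strength)."""
    chosen_ratio = policy_chosen_logps - ref_chosen_logps
    rejected_ratio = policy_rejected_logps - ref_rejected_logps
    return -F.logsigmoid(beta * (chosen_ratio - rejected_ratio)).mean()

# Toy usage with summed log-probs of two completions under both models:
print(dpo_loss(torch.tensor([-12.0]), torch.tensor([-15.0]),
               torch.tensor([-13.0]), torch.tensor([-14.0])))
```
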
Setup

H2O LLM Studio requires a machine with Ubuntu 16.04+ and at least one recent Nvidia GPU with Nvidia driver version >= 470.57.02. For larger models, we recommend at least 24GB of GPU memory. For more information about installation prerequisites, see the Set up H2O LLM Studio guide in the documentation. For a performance comparison of different GPUs, see the H2O LLM Studio performance guide in the documentation.

Recommended Install

The recommended way to install H2O LLM Studio is using pipenv with Python 3.10.

Installing NVIDIA drivers (if required): if deploying on a 'bare metal' machine running Ubuntu, one may need to install the required Nvidia drivers and CUDA. The repository shows how to retrieve the latest drivers for a machine running Ubuntu 20.04 as an example; one can adapt the steps to their OS. Alternatively, one can install the CUDA toolkit in a conda environment.

Virtual environments

We offer various ways of setting up the necessary Python environment.

Pipenv virtual environment: pipenv will create a virtual environment and install the dependencies. If you are having trouble installing the flash_attn package, consider installing the dependencies without it; note that this will disable the use of Flash Attention 2, and model training will be slower and consume more memory.

Nightly conda virtual environment: you can also set up a conda virtual environment that deviates from the recommended setup. The repository contains a command that installs a fresh conda environment with CUDA 12.4 and the current nightly PyTorch.

Using requirements.txt: if you wish to use another virtual environment, you can also install the dependencies using the requirements.txt file.

Run H2O LLM Studio GUI

Starting H2O LLM Studio launches the H2O Wave server and app. Navigate to the printed address (we recommend using Chrome) to access H2O LLM Studio and start fine-tuning your models! If you are running H2O LLM Studio with a custom environment other than pipenv, you need to start the app accordingly; the nightly conda environment has its own run command.

Run H2O LLM Studio GUI using Docker

Install Docker first by following the instructions from NVIDIA Containers, and make sure to have nvidia-container-toolkit installed on your machine as outlined there. H2O LLM Studio images are stored in the h2oai Docker Hub container repository. Once the container is running, navigate to the printed address (we recommend using Chrome) to access H2O LLM Studio and start fine-tuning your models! (Other helpful Docker commands are docker ps and docker kill.) If you prefer to build your own Docker image from source, follow the instructions in the repository.

Run H2O LLM Studio with command line interface (CLI)

You can also use H2O LLM Studio through the command line interface (CLI), specifying a configuration .yaml file that contains all the experiment parameters. To finetune with the CLI, activate the pipenv environment by running make shell and then launch the training command. To run on multiple GPUs in DDP mode, use the corresponding DDP command. By default, the framework will run on the first k GPUs; if you want to run on specific GPUs, set the CUDA_VISIBLE_DEVICES environment variable before the command (illustrated below).
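
CUDA_VISIBLE_DEVICES is standard CUDA behavior rather than anything H2O-specific; a small illustration (the GPU indices are arbitrary choices for this example):

```python
import os

# Must be set before CUDA is initialized (i.e., before the first torch.cuda
# call); in practice you would set it in the shell when launching training.
os.environ["CUDA_VISIBLE_DEVICES"] = "0,1"  # expose only the first two GPUs

import torch
print(torch.cuda.device_count())  # reports 2 on a machine with >= 2 GPUs
```
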
To start an interactive chat with your trained model, use the chat entry point of the CLI, where experiment_name is the output folder of the experiment you want to chat with (see configuration). The interactive chat will also work with models that were finetuned using the UI.

To publish the model to Hugging Face, use the publish entry point of the CLI with the following parameters:

- path_to_experiment: the output folder of the experiment.
- device: the target device for running the model, either 'cpu' or 'cuda:0'. Default is 'cuda:0'.
- api_key: the Hugging Face API key. If the user is logged in, it can be omitted.
- user_id: the Hugging Face user ID. If the user is logged in, it can be omitted.
- model_name: the name of the model to be published on Hugging Face. It can be omitted.
- safe_serialization: a flag indicating whether safe serialization should be used. Default is True.

Troubleshooting

If running on cloud-based machines such as runpod, you may need to set an environment variable to allow the H2O Wave server to accept connections from the proxy. If you are experiencing timeouts when running the H2O Wave server remotely, you can increase the timeouts by setting the corresponding environment variables; all default to 5 seconds. Increase them if you are experiencing timeouts, or use -1 to disable the timeout.

Data format and example data

For details on the data format required when importing your data, and for example data you can use to try out H2O LLM Studio, see Data format in the H2O LLM Studio documentation.

Training your model

With H2O LLM Studio, training your large language model is easy and intuitive. First, upload your dataset, then start training your model: start by creating an experiment, after which you can monitor and manage it, compare experiments, or push the model to Hugging Face to share it with the community.

Example: Run on OASST data via CLI

As an example, you can run an experiment on the OASST data via the CLI. For instructions, see the Run an experiment on the OASST data guide in the H2O LLM Studio documentation.

Model checkpoints

All open-source datasets and models are posted on H2O.ai's Hugging Face page and our H2OGPT repository.

Documentation

Detailed documentation and frequently asked questions (FAQs) for H2O LLM Studio can be found in the project documentation. If you wish to contribute to the docs, navigate to the /documentation folder of this repo and refer to the README.md for more information.

Contributing

We are happy to accept contributions to the H2O LLM Studio project. Please refer to the CONTRIBUTING.md file for more information.

License

H2O LLM Studio is licensed under the Apache 2.0 license. Please see the LICENSE file for more information.

AI-PhD-S24
github
LLM Vibe Score: 0.472
Human Vibe Score: 0.092
rphilipzhang · Mar 25, 2025

AI-PhD-S24

Artificial Intelligence for Business Research (Spring 2024)

Scribed Lecture Notes
Class Recordings (you need to apply for access)

Teaching Team

Instructor: Renyu (Philip) Zhang, Associate Professor, Department of Decisions, Operations and Technology, CUHK Business School, philipzhang@cuhk.edu.hk, @911 Cheng Yu Tung Building.
Teaching Assistant: Leo Cao, full-time TA, Department of Decisions, Operations and Technology, CUHK Business School, yinglyucao@cuhk.edu.hk. Please note that Leo will help with any issues related to the logistics, but not the content, of this course.
Tutorial Instructor: Qiansiqi Hu, MSBA student, Department of Decisions, Operations and Technology, CUHK Business School, 1155208353@link.cuhk.edu.hk. BS in ECE, Shanghai Jiaotong University Michigan Institute.

Basic Information

Website: https://github.com/rphilipzhang/AI-PhD-S24
Time: Tuesday, 12:30pm-3:15pm, from Jan 9, 2024 to Apr 16, 2024, except for Feb 13 (Chinese New Year) and Mar 5 (Final Project Discussion)
Location: Cheng Yu Tung Building (CYT) LT5

About

Welcome to the mono-repo of the PhD course AI for Business Research (DSME 6635) at CUHK Business School in Spring 2024. You may download the Syllabus of this course first. The purpose of this course is to learn the following:

- Have a basic understanding of the fundamental concepts/methods in machine learning (ML) and artificial intelligence (AI) that are used (or potentially useful) in business research.
- Understand how business researchers have utilized ML/AI and what managerial questions have been addressed by ML/AI in the recent decade.
- Nurture a taste of what the state-of-the-art AI/ML technologies can do in the ML/AI community and, potentially, in your own research field.

We will meet each Tuesday at 12:30pm in Cheng Yu Tung Building (CYT) LT5 (please pay attention to this room change). Please ask for my approval if you need to join us via the following Zoom links: Zoom link, Meeting ID 996 4239 3764, Passcode 386119.

Most of the code in this course will be distributed through the Google CoLab cloud computing environment to avoid the incompatibility and version control issues on your local individual computer. On the other hand, you can always download the Jupyter Notebook from CoLab and run it on your own computer. The CoLab files of this course can be found in this folder. The Google Sheet to sign up for groups and group tasks can be found here. The Overleaf template for scribing the lecture notes of this course can be found here.

If you have any feedback on this course, please directly contact Philip at philipzhang@cuhk.edu.hk and we will try our best to address it.

Brief Schedule

Subject to modifications. All classes start at 12:30pm and end at 3:15pm.
|Session|Date|Topic|Key Words|
|:-:|:-:|:--|:--|
|1|1.09|AI/ML in a Nutshell|Course Intro, ML Models, Model Evaluations|
|2|1.16|Intro to DL|DL Intro, Neural Nets, Computational Issues in DL|
|3|1.23|Prediction and Traditional NLP|Prediction in Biz Research, Pre-processing|
|4|1.30|NLP (II): Traditional NLP|$N$-gram, NLP Performance Evaluations, Naïve Bayes|
|5|2.06|NLP (III): Word2Vec|CBOW, Skip Gram|
|6|2.20|NLP (IV): RNN|GloVe, Language Model Evaluation, RNN|
|7|2.27|NLP (V): Seq2Seq|LSTM, Seq2Seq, Attention Mechanism|
|7.5|3.05|NLP (V.V): Transformer|The Bitter Lesson, Attention is All You Need|
|8|3.12|NLP (VI): Pre-training|Computational Tricks in DL, BERT, GPT|
|9|3.19|NLP (VII): LLM|Emergent Abilities, Chain-of-Thought, In-context Learning, GenAI in Business Research|
|10|3.26|CV (I): Image Classification|CNN, AlexNet, ResNet, ViT|
|11|4.02|CV (II): Image Segmentation and Video Analysis|R-CNN, YOLO, 3D-CNN|
|12|4.09|Unsupervised Learning (I): Clustering & Topic Modeling|GMM, EM Algorithm, LDA|
|13|4.16|Unsupervised Learning (II): Diffusion Models|VAE, DDPM, LDM, DiT|

Important Dates

All problem sets are due at 12:30pm right before class.

|Date|Time|Event|Note|
|:-:|:-:|:--|:--|
|1.10|11:59pm|Group Sign-Ups|Each group has at most two students.|
|1.12|7:00pm-9:00pm|Python Tutorial|Given by Qiansiqi Hu, Python Tutorial CoLab|
|1.19|7:00pm-9:00pm|PyTorch Tutorial|Given by Qiansiqi Hu, PyTorch Tutorial CoLab|
|3.05|9:00am-6:00pm|Final Project Discussion|Please schedule a meeting with Philip.|
|3.12|12:30pm|Final Project Proposal|1-page maximum|
|4.30|11:59pm|Scribed Lecture Notes|Overleaf link|
|5.12|11:59pm|Project Paper, Slides, and Code|Paper page limit: 10|

Useful Resources

Find more on the Syllabus.

Books: ESL, Deep Learning, Dive into Deep Learning, ML Fairness, Applied Causal Inference Powered by ML and AI
Courses: ML Intro by Andrew Ng, DL Intro by Andrew Ng, NLP (CS224N) by Chris Manning, CV (CS231N) by Fei-Fei Li, Deep Unsupervised Learning by Pieter Abbeel, DRL by Sergey Levine, DL Theory by Matus Telgarsky, LLM by Danqi Chen, Generative AI by Andrew Ng, Machine Learning and Big Data by Melissa Dell and Matthew Harding, Digital Economics and the Economics of AI by Martin Beraja, Chiara Farronato, Avi Goldfarb, and Catherine Tucker

Detailed Schedule

The following schedule is tentative and subject to changes.

Session 1. Artificial Intelligence and Machine Learning in a Nutshell (Jan/09/2024)

Keywords: Course Introduction, Machine Learning Basics, Bias-Variance Trade-off, Cross Validation, $k$-Nearest Neighbors, Decision Tree, Ensemble Methods
Slides: Course Introduction, Machine Learning Basics
CoLab Notebook Demos: k-Nearest Neighbors, Decision Tree
Homework: Problem Set 1: Bias-Variance Trade-Off
Online Python Tutorial: Python Tutorial CoLab, 7:00pm-9:00pm, Jan/12/2024 (Friday), given by Qiansiqi Hu, 1155208353@link.cuhk.edu.hk. Zoom Link, Meeting ID: 923 4642 4433, Passcode: 178146
References: The Elements of Statistical Learning (2nd Edition), 2009, by Trevor Hastie, Robert Tibshirani, Jerome Friedman, https://hastie.su.domains/ElemStatLearn/. Probabilistic Machine Learning: An Introduction, 2022, by Kevin Murphy, https://probml.github.io/pml-book/book1.html. Mullainathan, Sendhil, and Jann Spiess. 2017. Machine learning: an applied econometric approach. Journal of Economic Perspectives 31(2): 87-106. Athey, Susan, and Guido W. Imbens. 2019. Machine learning methods that economists should know about. Annual Review of Economics 11: 685-725.
Hofman, Jake M., et al. 2021. Integrating explanation and prediction in computational social science. Nature 595.7866: 181-188. Bastani, Hamsa, Dennis Zhang, and Heng Zhang. 2022. Applied machine learning in operations management. Innovative Technology at the Interface of Finance and Operations. Springer: 189-222. Kelly, Brian, and Dacheng Xiu. 2023. Financial machine learning. SSRN, https://ssrn.com/abstract=4501707. The Bitter Lesson, by Rich Sutton, which develops what is so far the most critical insight of AI: "The biggest lesson that can be read from 70 years of AI research is that general methods that leverage computation are ultimately the most effective, and by a large margin."

Session 2. Introduction to Deep Learning (Jan/16/2024)

Keywords: Random Forests, eXtreme Gradient Boosting Trees, Deep Learning Basics, Neural Nets Models, Computational Issues of Deep Learning
Slides: Machine Learning Basics, Deep Learning Basics
CoLab Notebook Demos: Random Forest, Extreme Gradient Boosting Tree, Gradient Descent, Chain Rule (a minimal autograd version of the last two is sketched after this session)
Presentation: By Xinyu Li and Qingyu Xu. Gu, Shihao, Brian Kelly, and Dacheng Xiu. 2020. Empirical asset pricing via machine learning. Review of Financial Studies 33: 2223-2273. Link to the paper.
Homework: Problem Set 2: Implementing Neural Nets
Online PyTorch Tutorial: PyTorch Tutorial CoLab, 7:00pm-9:00pm, Jan/19/2024 (Friday), given by Qiansiqi Hu, 1155208353@link.cuhk.edu.hk. Zoom Link, Meeting ID: 923 4642 4433, Passcode: 178146
References: Deep Learning, 2016, by Ian Goodfellow, Yoshua Bengio and Aaron Courville, https://www.deeplearningbook.org/. Dive into Deep Learning (2nd Edition), 2023, by Aston Zhang, Zack Lipton, Mu Li, and Alex J. Smola, https://d2l.ai/. Probabilistic Machine Learning: Advanced Topics, 2023, by Kevin Murphy, https://probml.github.io/pml-book/book2.html. Deep Learning with PyTorch, 2020, by Eli Stevens, Luca Antiga, and Thomas Viehmann. Gu, Shihao, Brian Kelly, and Dacheng Xiu. 2020. Empirical asset pricing with machine learning. Review of Financial Studies 33: 2223-2273.
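
The "Gradient Descent" and "Chain Rule" demos from this session come down to a few lines of PyTorch autograd. A generic illustration (not the course notebook itself; the toy loss is invented for the example):

```python
import torch

# One step of gradient descent via autograd: backward() applies the chain
# rule to compute dloss/dw, and we update the parameter against the gradient.
w = torch.tensor(1.0, requires_grad=True)
loss = (3.0 * w - 6.0) ** 2   # toy quadratic loss, minimized at w = 2
loss.backward()                # w.grad is now -18.0
with torch.no_grad():
    w -= 0.1 * w.grad          # gradient step with learning rate 0.1
print(w.item())                # 2.8, moving toward the optimum
```
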
Session 3. DL Basics, Predictions in Business Research, and Traditional NLP (Jan/23/2024)

Keywords: Optimization and Computational Issues of Deep Learning, Prediction Problems in Business Research, Pre-processing and Word Representations in Traditional Natural Language Processing
Slides: Deep Learning Basics, Prediction Problems in Business Research, NLP (I): Pre-processing and Word Representations
CoLab Notebook Demos: He Initialization, Dropout, Micrograd, NLP Pre-processing
Presentation: By Letian Kong and Liheng Tan. Mullainathan, Sendhil, and Jann Spiess. 2017. Machine learning: an applied econometric approach. Journal of Economic Perspectives 31(2): 87-106. Link to the paper.
Homework: Problem Set 2: Implementing Neural Nets, due at 12:30pm, Jan/30/2024 (Tuesday).
References: Kleinberg, Jon, Jens Ludwig, Sendhil Mullainathan, and Ziad Obermeyer. 2015. Prediction policy problems. American Economic Review 105(5): 491-495. Mullainathan, Sendhil, and Jann Spiess. 2017. Machine learning: an applied econometric approach. Journal of Economic Perspectives 31(2): 87-106. Kleinberg, Jon, Himabindu Lakkaraju, Jure Leskovec, Jens Ludwig, and Sendhil Mullainathan. 2018. Human decisions and machine predictions. Quarterly Journal of Economics 133(1): 237-293. Bajari, Patrick, Denis Nekipelov, Stephen P. Ryan, and Miaoyu Yang. 2015. Machine learning methods for demand estimation. American Economic Review, 105(5): 481-485. Farias, Vivek F., and Andrew A. Li. 2019. Learning preferences with side information. Management Science 65(7): 3131-3149. Cui, Ruomeng, Santiago Gallino, Antonio Moreno, and Dennis J. Zhang. 2018. The operational value of social media information. Production and Operations Management, 27(10): 1749-1769. Gentzkow, Matthew, Bryan Kelly, and Matt Taddy. 2019. Text as data. Journal of Economic Literature, 57(3): 535-574. Chapter 2, Introduction to Information Retrieval, 2008, Cambridge University Press, by Christopher D. Manning, Prabhakar Raghavan and Hinrich Schutze, https://nlp.stanford.edu/IR-book/information-retrieval-book.html. Chapter 2, Speech and Language Processing (3rd ed. draft), 2023, by Dan Jurafsky and James H. Martin, https://web.stanford.edu/~jurafsky/slp3/. Parameter Initialization and Batch Normalization (in Chinese). GPU Comparisons (NVIDIA H100 PCIe vs. NVIDIA RTX 6000 Ada). GitHub Repo for Micrograd, by Andrej Karpathy. Hand Written Notes

Session 4. Traditional NLP (Jan/30/2024)

Keywords: Pre-processing and Word Representations in NLP, N-Gram, Naïve Bayes, Language Model Evaluation, Traditional NLP Applied to Business/Econ Research
Slides: NLP (I): Pre-processing and Word Representations, NLP (II): N-Gram, Naïve Bayes, and Language Model Evaluation
CoLab Notebook Demos: NLP Pre-processing, N-Gram, Naïve Bayes
Presentation: By Zhi Li and Boya Peng. Hansen, Stephen, Michael McMahon, and Andrea Prat. 2018. Transparency and deliberation within the FOMC: A computational linguistics approach. Quarterly Journal of Economics, 133(2): 801-870. Link to the paper.
Homework: Problem Set 3: Implementing Traditional NLP Techniques, due at 12:30pm, Feb/6/2024 (Tuesday).
References: Gentzkow, Matthew, Bryan Kelly, and Matt Taddy. 2019. Text as data. Journal of Economic Literature, 57(3): 535-574. Hansen, Stephen, Michael McMahon, and Andrea Prat. 2018. Transparency and deliberation within the FOMC: A computational linguistics approach. Quarterly Journal of Economics, 133(2): 801-870. Chapters 2, 12, & 13, Introduction to Information Retrieval, 2008, Cambridge University Press, by Christopher D. Manning, Prabhakar Raghavan and Hinrich Schutze, https://nlp.stanford.edu/IR-book/information-retrieval-book.html. Chapters 2, 3 & 4, Speech and Language Processing (3rd ed. draft), 2023, by Dan Jurafsky and James H. Martin, https://web.stanford.edu/~jurafsky/slp3/. Natural Language Tool Kit (NLTK) Documentation. Hand Written Notes

Session 5. Deep-Learning-Based NLP: Word2Vec (Feb/06/2024)

Keywords: Traditional NLP Applied to Business/Econ Research, Word2Vec: Continuous Bag of Words and Skip-Gram
Slides: NLP (II): N-Gram, Naïve Bayes, and Language Model Evaluation, NLP (III): Word2Vec
CoLab Notebook Demos: Word2Vec: CBOW, Word2Vec: Skip-Gram
Presentation: By Xinyu Xu and Shu Zhang. Timoshenko, Artem, and John R. Hauser. 2019. Identifying customer needs from user-generated content. Marketing Science, 38(1): 1-20. Link to the paper.
Homework: No homework this week. You should probably think about your final project while enjoying your Lunar New Year holiday.
References: Gentzkow, Matthew, Bryan Kelly, and Matt Taddy. 2019. Text as data. Journal of Economic Literature, 57(3): 535-574. Tetlock, Paul. 2007. Giving content to investor sentiment: The role of media in the stock market. Journal of Finance, 62(3): 1139-1168. Baker, Scott, Nicholas Bloom, and Steven Davis, 2016. Measuring economic policy uncertainty. Quarterly Journal of Economics, 131(4): 1593-1636.
Gentzkow, Matthew, and Jesse Shapiro. 2010. What drives media slant? Evidence from US daily newspapers. Econometrica, 78(1): 35-71. Timoshenko, Artem, and John R. Hauser. 2019. Identifying customer needs from user-generated content. Marketing Science, 38(1): 1-20. Mikolov, Tomas, Kai Chen, Greg Corrado, and Jeff Dean. 2013. Efficient estimation of word representations in vector space. ArXiv preprint, arXiv:1301.3781. Mikolov, Tomas, Ilya Sutskever, Kai Chen, Greg Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. Advances in Neural Information Processing Systems (NeurIPS) 26. Parts I-II, Lecture Notes and Slides for CS224n: Natural Language Processing with Deep Learning, by Christopher D. Manning, Diyi Yang, and Tatsunori Hashimoto, https://web.stanford.edu/class/cs224n/. Word Embeddings Trained on Google News Corpus. Hand Written Notes

Session 6. Deep-Learning-Based NLP: RNN and Seq2Seq (Feb/20/2024)

Keywords: Word2Vec: GloVe, Word Embedding and Language Model Evaluations, Word2Vec and RNN Applied to Business/Econ Research, RNN (a single RNN step is sketched after this session's references)
Slides: Guest Lecture Announcement, NLP (III): Word2Vec, NLP (IV): RNN & Seq2Seq
CoLab Notebook Demos: Word2Vec: CBOW, Word2Vec: Skip-Gram
Presentation: By Qiyu Dai and Yifan Ren. Huang, Allen H., Hui Wang, and Yi Yang. 2023. FinBERT: A large language model for extracting information from financial text. Contemporary Accounting Research, 40(2): 806-841. Link to the paper. Link to GitHub Repo.
Homework: Problem Set 4 - Word2Vec & LSTM for Sentiment Analysis
References: Ash, Elliot, and Stephen Hansen. 2023. Text algorithms in economics. Annual Review of Economics, 15: 659-688. Associated GitHub with Code Demonstrations. Li, Kai, Feng Mai, Rui Shen, and Xinyan Yan. 2021. Measuring corporate culture using machine learning. Review of Financial Studies, 34(7): 3265-3315. Chen, Fanglin, Xiao Liu, Davide Proserpio, and Isamar Troncoso. 2022. Product2Vec: Leveraging representation learning to model consumer product choice in large assortments. Available at SSRN 3519358. Pennington, Jeffrey, Richard Socher, and Christopher Manning. 2014. GloVe: Global vectors for word representation. Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP) (pp. 1532-1543). Parts 2 and 5, Lecture Notes and Slides for CS224n: Natural Language Processing with Deep Learning, by Christopher D. Manning, Diyi Yang, and Tatsunori Hashimoto, https://web.stanford.edu/class/cs224n/. Chapters 9 and 10, Dive into Deep Learning (2nd Edition), 2023, by Aston Zhang, Zack Lipton, Mu Li, and Alex J. Smola, https://d2l.ai/. RNN and LSTM Visualizations. Hand Written Notes
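
The RNN recurrence covered in this session is compact enough to write out directly. A sketch of a single Elman RNN step (illustrative; the helper name and shapes are invented for this example):

```python
import torch

def rnn_step(x_t, h_prev, W_xh, W_hh, b_h):
    # Elman RNN recurrence: h_t = tanh(x_t W_xh + h_{t-1} W_hh + b_h).
    return torch.tanh(x_t @ W_xh + h_prev @ W_hh + b_h)

d_in, d_h = 8, 16
W_xh, W_hh, b_h = torch.randn(d_in, d_h), torch.randn(d_h, d_h), torch.zeros(d_h)
h = torch.zeros(d_h)
for x_t in torch.randn(10, d_in):   # unroll over a length-10 sequence
    h = rnn_step(x_t, h, W_xh, W_hh, b_h)
print(h.shape)  # torch.Size([16]); the final hidden state summarizes the sequence
```
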
Session 7. Deep-Learning-Based NLP: Attention and Transformer (Feb/27/2024)

Keywords: RNN and its Applications to Business/Econ Research, LSTM, Seq2Seq, Attention Mechanism
Slides: Final Project, NLP (IV): RNN & Seq2Seq, NLP (V): Attention & Transformer
CoLab Notebook Demos: RNN & LSTM, Attention Mechanism
Presentation: By Qinghe Gui and Chaoyuan Jiang. Zhang, Mengxia and Lan Luo. 2023. Can consumer-posted photos serve as a leading indicator of restaurant survival? Evidence from Yelp. Management Science 69(1): 25-50. Link to the paper.
Homework: Problem Set 4 - Word2Vec & LSTM for Sentiment Analysis
References: Qi, Meng, Yuanyuan Shi, Yongzhi Qi, Chenxin Ma, Rong Yuan, Di Wu, Zuo-Jun (Max) Shen. 2023. A practical end-to-end inventory management model with deep learning. Management Science, 69(2): 759-773. Sarzynska-Wawer, Justyna, Aleksander Wawer, Aleksandra Pawlak, Julia Szymanowska, Izabela Stefaniak, Michal Jarkiewicz, and Lukasz Okruszek. 2021. Detecting formal thought disorder by deep contextualized word representations. Psychiatry Research, 304, 114135. Hansen, Stephen, Peter J. Lambert, Nicholas Bloom, Steven J. Davis, Raffaella Sadun, and Bledi Taska. 2023. Remote work across jobs, companies, and space (No. w31007). National Bureau of Economic Research. Sutskever, Ilya, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. Advances in Neural Information Processing Systems, 27. Bahdanau, Dzmitry, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. ICLR. Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., ... and Polosukhin, I. 2017. Attention is all you need. Advances in Neural Information Processing Systems, 30. Parts 5, 6, and 8, Lecture Notes and Slides for CS224n: Natural Language Processing with Deep Learning, by Christopher D. Manning, Diyi Yang, and Tatsunori Hashimoto, https://web.stanford.edu/class/cs224n/. Chapters 9, 10, and 11, Dive into Deep Learning (2nd Edition), 2023, by Aston Zhang, Zack Lipton, Mu Li, and Alex J. Smola, https://d2l.ai/. RNN and LSTM Visualizations. PyTorch's Tutorial of Seq2Seq for Machine Translation. Illustrated Transformer. Transformer from Scratch, with the Code on GitHub. Hand Written Notes

Session 7.5. Deep-Learning-Based NLP: Attention is All You Need (Mar/05/2024)

Keywords: Bitter Lesson: Power of Computation in AI, Attention Mechanism, Transformer
Slides: The Bitter Lesson, NLP (V): Attention & Transformer
CoLab Notebook Demos: Attention Mechanism, Transformer (a few-line version of scaled dot-product attention is sketched after this session)
Homework: One-page Proposal for Your Final Project
References: The Bitter Lesson, by Rich Sutton. Bahdanau, Dzmitry, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. ICLR. Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., ... and Polosukhin, I. 2017. Attention is all you need. Advances in Neural Information Processing Systems, 30. Part 8, Lecture Notes and Slides for CS224n: Natural Language Processing with Deep Learning, by Christopher D. Manning, Diyi Yang, and Tatsunori Hashimoto, https://web.stanford.edu/class/cs224n/. Chapter 11, Dive into Deep Learning (2nd Edition), 2023, by Aston Zhang, Zack Lipton, Mu Li, and Alex J. Smola, https://d2l.ai/. Illustrated Transformer. Transformer from Scratch, with the Code on GitHub. Andrej Karpathy's Lecture to Build Transformers. Hand Written Notes
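
The attention formula at the heart of Sessions 7 and 7.5 fits in a few lines. A minimal sketch of scaled dot-product attention from Vaswani et al. (2017), written here for illustration rather than taken from the course demos:

```python
import torch
import torch.nn.functional as F

def scaled_dot_product_attention(q, k, v):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V  (Vaswani et al., 2017)."""
    d_k = q.size(-1)
    scores = q @ k.transpose(-2, -1) / d_k ** 0.5  # pairwise query-key similarity
    return F.softmax(scores, dim=-1) @ v           # weighted average of values

q = k = v = torch.randn(2, 5, 16)  # (batch, seq_len, d_k); self-attention setup
print(scaled_dot_product_attention(q, k, v).shape)  # torch.Size([2, 5, 16])
```
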
Session 8. Deep-Learning-Based NLP: Pretraining (Mar/12/2024)

Keywords: Computations in AI, BERT (Bidirectional Encoder Representations from Transformers), GPT (Generative Pretrained Transformers)
Slides: Guest Lecture by Dr. Liubo Li on Deep Learning Computation, Pretraining
CoLab Notebook Demos: Crafting Intelligence: The Art of Deep Learning Modeling, BERT API @ Hugging Face
Presentation: By Zhankun Chen and Yiyi Zhao. Noy, Shakked and Whitney Zhang. 2023. Experimental evidence on the productivity effects of generative artificial intelligence. Science, 381: 187-192. Link to the paper.
Homework: Problem Set 5 - Sentiment Analysis with Hugging Face, due at 12:30pm, March 26, Tuesday.
References: Devlin, Jacob, Ming-Wei Chang, Kenton Lee, Kristina Toutanova. 2018. BERT: Pre-training of deep bidirectional transformers for language understanding. ArXiv preprint arXiv:1810.04805. GitHub Repo. Radford, Alec, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding by generative pre-training. (GPT-1) PDF link, GitHub Repo. Radford, Alec, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever. 2019. Language models are unsupervised multitask learners. OpenAI blog, 1(8), 9. (GPT-2) PDF Link, GitHub Repo. Brown, Tom, et al. 2020. Language models are few-shot learners. Advances in Neural Information Processing Systems, 33, 1877-1901. (GPT-3) GitHub Repo. Huang, Allen H., Hui Wang, and Yi Yang. 2023. FinBERT: A large language model for extracting information from financial text. Contemporary Accounting Research, 40(2): 806-841. GitHub Repo. Part 9, Lecture Notes and Slides for CS224N: Natural Language Processing with Deep Learning, by Christopher D. Manning, Diyi Yang, and Tatsunori Hashimoto. Link to CS224N. Parts 2 & 4, Slides for COS 597G: Understanding Large Language Models, by Danqi Chen. Link to COS 597G. A Visual Guide to BERT. How GPT-3 Works. Andrej Karpathy's Lecture to Build GPT-2 (124M) from Scratch. Hand Written Notes

Session 9. Deep-Learning-Based NLP: Large Language Models (Mar/19/2024)

Keywords: Large Language Models, Generative AI, Emergent Abilities, Instruction Fine-Tuning (IFT), Reinforcement Learning with Human Feedback (RLHF), In-Context Learning, Chain-of-Thought (CoT)
Slides: What's Next, Pretraining, Large Language Models
CoLab Notebook Demos: BERT API @ Hugging Face
Presentation: By Jia Liu. Liu, Liu, Dzyabura, Daria, Mizik, Natalie. 2020. Visual listening in: Extracting brand image portrayed on social media. Marketing Science, 39(4): 669-686. Link to the paper.
Homework: Problem Set 5 - Sentiment Analysis with Hugging Face, due at 12:30pm, March 26, Tuesday (soft deadline; a pipeline starter sketch follows after this session).
References: Wei, Jason, et al. 2021. Finetuned language models are zero-shot learners. ArXiv preprint arXiv:2109.01652, link to the paper. Wei, Jason, et al. 2022. Emergent abilities of large language models. ArXiv preprint arXiv:2206.07682, link to the paper. Ouyang, Long, et al. 2022. Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems, 35, 27730-27744. Wei, Jason, et al. 2022. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems, 35, 24824-24837. Kaplan, Jared. 2020. Scaling laws for neural language models. ArXiv preprint arXiv:2001.08361, link to the paper. Hoffmann, Jordan, et al. 2022. Training compute-optimal large language models. ArXiv preprint arXiv:2203.15556, link to the paper. Shinn, Noah, et al. 2023. Reflexion: Language agents with verbal reinforcement learning. ArXiv preprint arXiv:2303.11366, link to the paper. Reisenbichler, Martin, Thomas Reutterer, David A. Schweidel, and Daniel Dan. 2022. Frontiers: Supporting content marketing with natural language generation. Marketing Science, 41(3): 441-452. Romera-Paredes, B., Barekatain, M., Novikov, A. et al. 2023. Mathematical discoveries from program search with large language models. Nature, link to the paper. Part 10, Lecture Notes and Slides for CS224N: Natural Language Processing with Deep Learning, by Christopher D. Manning, Diyi Yang, and Tatsunori Hashimoto. Link to CS224N. COS 597G: Understanding Large Language Models, by Danqi Chen. Link to COS 597G. Andrej Karpathy's 1-hour Talk on LLM. CS224n Hugging Face Tutorial.
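
Problem Set 5's sentiment-analysis task can be started with a few lines using the Hugging Face pipeline API (a generic illustration, not the actual problem-set starter code; the first call downloads a default fine-tuned model):

```python
from transformers import pipeline

# The "sentiment-analysis" pipeline wraps tokenization, a fine-tuned
# BERT-family classifier, and post-processing in one call.
clf = pipeline("sentiment-analysis")
print(clf("The product arrived on time and works great."))
# e.g. [{'label': 'POSITIVE', 'score': 0.99...}]
```
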
Session 10. Deep-Learning-Based CV (I): Image Classification (Mar/26/2024)

Keywords: Large Language Models Applications, Convolutional Neural Nets (CNN), LeNet, AlexNet, VGG, ResNet, ViT
Slides: What's Next, Large Language Models, Image Classification
CoLab Notebook Demos: CNN, LeNet, & AlexNet; VGG; ResNet; ViT
Presentation: By Yingxin Lin and Zeshen Ye. Netzer, Oded, Alain Lemaire, and Michal Herzenstein. 2019. When words sweat: Identifying signals for loan default in the text of loan applications. Journal of Marketing Research, 56(6): 960-980. Link to the paper.
Homework: Problem Set 6 - AlexNet and ResNet, due at 12:30pm, April 9, Tuesday.
References: Krizhevsky, Alex, Ilya Sutskever, and Geoffrey E. Hinton. 2012. ImageNet classification with deep convolutional neural networks. Advances in Neural Information Processing Systems, 25. He, Kaiming, Xiangyu Zhang, Shaoqing Ren and Jian Sun. 2016. Deep residual learning for image recognition. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 770-778. Dosovitskiy, Alexey, et al. 2020. An image is worth 16x16 words: Transformers for image recognition at scale. ArXiv preprint, arXiv:2010.11929, link to the paper, link to the GitHub repo. Jean, Neal, Marshall Burke, Michael Xie, Matthew W. Davis, David B. Lobell, and Stefano Ermon. 2016. Combining satellite imagery and machine learning to predict poverty. Science, 353(6301), 790-794. Zhang, Mengxia and Lan Luo. 2023. Can consumer-posted photos serve as a leading indicator of restaurant survival? Evidence from Yelp. Management Science 69(1): 25-50. Course Notes (Lectures 5 & 6) for CS231n: Deep Learning for Computer Vision, by Fei-Fei Li, Ruohan Gao, & Yunzhu Li. Link to CS231n. Chapters 7 and 8, Dive into Deep Learning (2nd Edition), 2023, by Aston Zhang, Zack Lipton, Mu Li, and Alex J. Smola. Link to the book. Fine-Tune ViT for Image Classification with Hugging Face 🤗 Transformers. Hugging Face 🤗 ViT CoLab Tutorial.

Session 11. Deep-Learning-Based CV (II): Object Detection & Video Analysis (Apr/2/2024)

Keywords: Image Processing Applications, Localization, R-CNNs, YOLOs, Semantic Segmentation, 3D CNN, Video Analysis Applications
Slides: What's Next, Image Classification, Object Detection and Video Analysis
CoLab Notebook Demos: Data Augmentation, Faster R-CNN & YOLO v5
Presentation: By Qinlu Hu and Yilin Shi. Yang, Jeremy, Juanjuan Zhang, and Yuhan Zhang. 2023. Engagement that sells: Influencer video advertising on TikTok. Available at SSRN. Link to the paper.
Homework: Problem Set 6 - AlexNet and ResNet, due at 12:30pm, April 9, Tuesday.
References: Girshick, R., Donahue, J., Darrell, T. and Malik, J., 2014. Rich feature hierarchies for accurate object detection and semantic segmentation. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 580-587). Redmon, Joseph, Santosh Divvala, Ross Girshick, and Ali Farhadi. 2016. You only look once: Unified, real-time object detection. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 779-788). Karpathy, A., Toderici, G., Shetty, S., Leung, T., Sukthankar, R. and Fei-Fei, L., 2014. Large-scale video classification with convolutional neural networks. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 1725-1732). Glaeser, Edward L., Scott D. Kominers, Michael Luca, and Nikhil Naik. 2018. Big data and big cities: The promises and limitations of improved measures of urban life. Economic Inquiry, 56(1): 114-137.
Zhang, S., Xu, K. and Srinivasan, K., 2023. Frontiers: Unmasking social compliance behavior during the pandemic. Marketing Science, 42(3), pp. 440-450. Course Notes (Lectures 10 & 11) for CS231n: Deep Learning for Computer Vision, by Fei-Fei Li, Ruohan Gao, & Yunzhu Li. Link to CS231n. Chapter 14, Dive into Deep Learning (2nd Edition), 2023, by Aston Zhang, Zack Lipton, Mu Li, and Alex J. Smola. Link to the book. Hand Written Notes

Session 12. Unsupervised Learning: Clustering, Topic Modeling & VAE (Apr/9/2024)

Keywords: K-Means, Gaussian Mixture Models, EM-Algorithm, Latent Dirichlet Allocation, Variational Auto-Encoder
Slides: What's Next, Clustering, Topic Modeling & VAE
CoLab Notebook Demos: K-Means, LDA, VAE (a plain-NumPy K-Means is sketched after this session)
Homework: Problem Set 7 - Unsupervised Learning (EM & LDA), due at 12:30pm, April 23, Tuesday.
References: Blei, David M., Ng, Andrew Y., and Jordan, Michael I. 2003. Latent Dirichlet allocation. Journal of Machine Learning Research, 3(Jan): 993-1022. Kingma, D.P. and Welling, M., 2013. Auto-encoding variational Bayes. arXiv preprint arXiv:1312.6114. Kingma, D.P. and Welling, M., 2019. An introduction to variational autoencoders. Foundations and Trends® in Machine Learning, 12(4), pp. 307-392. Bandiera, O., Prat, A., Hansen, S., & Sadun, R. 2020. CEO behavior and firm performance. Journal of Political Economy, 128(4), 1325-1369. Liu, Jia and Olivier Toubia. 2018. A semantic approach for estimating consumer content preferences from online search queries. Marketing Science, 37(6): 930-952. Mueller, Hannes, and Christopher Rauh. 2018. Reading between the lines: Prediction of political violence using newspaper text. American Political Science Review, 112(2): 358-375. Tian, Z., Dew, R. and Iyengar, R., 2023. Mega or micro? Influencer selection using follower elasticity. Journal of Marketing Research. Chapters 8.5 and 14, The Elements of Statistical Learning (2nd Edition), 2009, by Trevor Hastie, Robert Tibshirani, Jerome Friedman. Link to Book. Course Notes (Lectures 1 & 4) for CS294-158-SP24: Deep Unsupervised Learning, taught by Pieter Abbeel, Wilson Yan, Kevin Frans, Philipp Wu. Link to CS294-158-SP24. Hand Written Notes
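
The K-Means demo from this session reduces to alternating assignment and update steps (the same E/M structure generalizes to GMMs). A plain-NumPy sketch of Lloyd's algorithm, written for illustration and ignoring edge cases such as empty clusters:

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain Lloyd's algorithm: assign points to nearest centers, recompute centers."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        labels = ((X[:, None, :] - centers) ** 2).sum(-1).argmin(1)      # assignment step
        centers = np.stack([X[labels == j].mean(0) for j in range(k)])   # update step
    return labels, centers

X = np.concatenate([np.random.randn(50, 2), np.random.randn(50, 2) + 5])
labels, centers = kmeans(X, k=2)
print(centers)  # roughly [0, 0] and [5, 5]
```
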
Session 13. Unsupervised Learning: Diffusion Models (Apr/16/2024)

Keywords: VAE, Denoising Diffusion Probabilistic Models, Latent Diffusion Models, CLIP, Imagen, Diffusion Transformers
Slides: Clustering, Topic Modeling & VAE; Diffusion Models; Course Summary
CoLab Notebook Demos: VAE, DDPM, DiT (the DDPM forward process is sketched after this session)
Homework: Problem Set 7 - Unsupervised Learning (EM & LDA), due at 12:30pm, April 23, Tuesday.
References: Kingma, D.P. and Welling, M., 2013. Auto-encoding variational Bayes. arXiv preprint arXiv:1312.6114. Kingma, D.P. and Welling, M., 2019. An introduction to variational autoencoders. Foundations and Trends® in Machine Learning, 12(4), pp. 307-392. Ho, J., Jain, A. and Abbeel, P., 2020. Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems, 33, 6840-6851. Chan, S.H., 2024. Tutorial on diffusion models for imaging and vision. arXiv preprint arXiv:2403.18103. Peebles, W. and Xie, S., 2023. Scalable diffusion models with transformers. Proceedings of the IEEE/CVF International Conference on Computer Vision, 4195-4205. Link to GitHub Repo. Tian, Z., Dew, R. and Iyengar, R., 2023. Mega or micro? Influencer selection using follower elasticity. Journal of Marketing Research. Ludwig, J. and Mullainathan, S., 2024. Machine learning as a tool for hypothesis generation. Quarterly Journal of Economics, 139(2), 751-827. Burnap, A., Hauser, J.R. and Timoshenko, A., 2023. Product aesthetic design: A machine learning augmentation. Marketing Science, 42(6), 1029-1056. Course Notes (Lecture 6) for CS294-158-SP24: Deep Unsupervised Learning, taught by Pieter Abbeel, Wilson Yan, Kevin Frans, Philipp Wu. Link to CS294-158-SP24. CVPR 2022 Tutorial: Denoising Diffusion-based Generative Modeling: Foundations and Applications, by Karsten Kreis, Ruiqi Gao, and Arash Vahdat. Link to the Tutorial. Lilian Weng (OpenAI)'s Blog on Diffusion Models. Lilian Weng (OpenAI)'s Blog on Diffusion Models for Video Generation. Hugging Face Diffusers 🤗 Library. Hand Written Notes
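
To make the DDPM keyword concrete: the forward (noising) process has a closed form, which is the first equation one implements when replicating Ho et al. (2020). A sketch (illustrative; the schedule and tensor shapes are arbitrary choices for this example):

```python
import torch

def q_sample(x0, t, alphas_cumprod):
    """DDPM forward process: x_t = sqrt(abar_t) * x0 + sqrt(1 - abar_t) * eps."""
    eps = torch.randn_like(x0)
    abar = alphas_cumprod[t].view(-1, *([1] * (x0.dim() - 1)))  # broadcast over batch
    return abar.sqrt() * x0 + (1 - abar).sqrt() * eps

T = 1000
betas = torch.linspace(1e-4, 0.02, T)             # the standard linear schedule
alphas_cumprod = torch.cumprod(1 - betas, dim=0)
x0 = torch.randn(4, 3, 32, 32)                    # a batch of toy "images"
xt = q_sample(x0, torch.tensor([999, 10, 500, 0]), alphas_cumprod)
print(xt.shape)  # torch.Size([4, 3, 32, 32]); larger t means noisier samples
```
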

airoboros
github
LLM Vibe Score: 0.506
Human Vibe Score: 0.020
jondurbin · Mar 19, 2025

airoboros

airoboros: using large language models to fine-tune large language models

This is my take on implementing the Self-Instruct paper. The approach is heavily modified and does not use any human-generated seeds. This updated implementation supports either the /v1/completions or the /v1/chat/completions endpoint, which is particularly useful in that it supports gpt-4 and gpt-3.5-turbo (the latter roughly 1/10 the cost of text-davinci-003).

Huge thank you to the folks over at a16z for sponsoring the costs associated with building models and associated tools!

Install via pip (pip install airoboros), or from source if you want to keep the source checkout.

Key differences from self-instruct/alpaca:
- support for either the /v1/completions or /v1/chat/completions API (which allows gpt-3.5-turbo instead of text-davinci-003, as well as gpt-4 if you have access)
- support for a custom topics list, a custom topic-generation prompt, or completely random topics
- in-memory vector db (Chroma) for similarity comparison, which is much faster than calculating a ROUGE score for each generated instruction (a minimal sketch follows the Progress list below)
- (seemingly) better prompts, which include injection of random topics to relate the instructions to, creating much more diverse synthetic instructions
- asyncio producers with configurable batch size
- several "instructors", each targeting specific use-cases such as Orca-style reasoning/math, role playing, etc.
- tries to ensure the context, if provided, is relevant to the topic and contains all the information needed to respond to the instruction, not just a link to an article, etc.
- generally speaking, this implementation tries to reduce some of the noise

Goal of this project

Problem and proposed solution:
- Models can only ever be as good as the data they are trained on.
- High-quality data is difficult to curate manually, so ideally the process can be automated by AI/LLMs.
- Large models (gpt-4, etc.) are pricey to build/run, out of reach for individuals and small/medium businesses, and subject to RLHF bias, censorship, and changes without notice.
- Smaller models (llama-2-70b, etc.) can reach somewhat comparable performance on specific tasks to much larger models when trained on high-quality data.
- The airoboros tool allows building datasets that are focused on specific tasks, which can then be used to build a plethora of individual expert models. This means we can crowdsource building experts.
- Using either a classifier model, or simply vector embeddings for each item in the dataset with a faiss index/cosine-similarity search, incoming requests can be routed to a particular expert (e.g. by dynamically loading LoRAs) to get extremely high-quality responses.

Progress:
✅ PoC that training via self-instruction, i.e. datasets generated from language models, works reasonably well.
✅ Iterate on the PoC to use higher-quality prompts, more variety of instructions, etc.
✅ Split the code into separate "instructors" for specializing in any particular task (creative writing, songs, roleplay, coding, execution planning, function calling, etc.)
[in progress]: PoC that an ensemble of LoRAs split by category (i.e., the instructor used in airoboros) has better performance than the same param-count model tuned on all data
[in progress]: Remove the dependency on OpenAI/gpt-4 to generate the training data so all datasets can be completely free and open source.
[future]: Automatic splitting of experts at some threshold, e.g. "coding" is split into python, js, golang, etc.
[future]: Hosted service/site to build and/or extend datasets or models using airoboros.
[future]: Depending on the success of all of the above, potentially a hosted inference option with an exchange for private/paid LoRAs.
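As referenced in the "Key differences" list above, here is a minimal sketch of how an in-memory Chroma collection could serve as the similarity filter for newly generated instructions. This is an illustration under assumptions, not the project's actual code: the collection name, the distance threshold, and reliance on Chroma's default embedding function are all placeholders.

```python
import chromadb

# In-memory Chroma client; uses the library's default embedding function.
client = chromadb.Client()
collection = client.create_collection(name="instructions")  # hypothetical name

SIMILARITY_THRESHOLD = 0.35  # hypothetical distance cutoff; smaller = more similar


def is_duplicate(instruction: str) -> bool:
    """Return True if a semantically similar instruction is already stored."""
    if collection.count() == 0:
        return False
    result = collection.query(query_texts=[instruction], n_results=1)
    return result["distances"][0][0] < SIMILARITY_THRESHOLD


def maybe_add(instruction: str) -> bool:
    """Store the instruction unless it's too close to an existing one."""
    if is_duplicate(instruction):
        return False
    collection.add(documents=[instruction], ids=[str(collection.count())])
    return True
```

The appeal over pairwise ROUGE, as the list above notes, is that each new candidate costs one indexed vector query rather than a comparison against every previously accepted instruction.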
LMoE

LMoE is the simplest architecture I can think of for a mixture of experts. It doesn't use a switch transformer and doesn't require slicing and merging layers with additional fine-tuning; it just dynamically loads the best PEFT/LoRA adapter model based on the incoming request. By using this method, we can theoretically crowdsource generation of dozens (or hundreds/thousands?) of very task-specific adapters and have an extremely powerful ensemble of models with very limited resources on top of a single base model (llama-2 7b/13b/70b).

Tuning the experts

The self-instruct code contained within this project uses many different "instructors" to generate training data for specific tasks. The output includes the instructor/category that generated the data, which we can use to automatically segment the training data and fine-tune specific "experts".

See scripts/segment_experts.py for an example of how the training data can be segmented, with a sampling of each other expert in the event of misrouting, and see scripts/tune_expert.py for an example of creating the adapter models (with positional args for expert name, model size, etc.). A hedged sketch of the segmentation idea appears at the end of this README, before "Support the work".

NOTE: this assumes use of my fork of qlora: https://github.com/jondurbin/qlora

Routing requests to the expert

The "best" routing mechanism would probably be to train a classifier based on the instructions for each category, with the category/expert being the label, but that prohibits dynamic loading of new experts. Instead, this supports three options:
- faiss index similarity search using the training data for each expert (default; a routing sketch appears at the end of this README)
- agent-based router using the "function" expert (query the LLM with a list of available experts and their descriptions, and ask which would be best based on the user's input)
- specify the agent in the JSON request

Running the API server

First, download the base llama-2 model for whichever model size you want, e.g. llama-2-7b-hf. Next, download the LMoE package that corresponds to that base model, e.g. airoboros-lmoe-7b-2.1.

NOTE: 13b is also available; 70b is in progress.

An example command to start the server is included in the repo; to use the agent-based router, add --agent-router to the arguments. The server uses flash attention via BetterTransformer (in optimum), so you may need to install a torch nightly build if you see an error like 'no kernel available'.

Once started, you can infer using the same API scheme you'd use to query the OpenAI API (a hedged example request appears at the end of this README).

I've also added a vllm-based server, but the results aren't quite as good (not sure why yet). To use it, make sure you install vllm and fschat, or pip install airoboros[vllm].

Generating instructions

NEW - 2023-07-18: To better accommodate the plethora of options, the configuration has been moved to a YAML config file. Please create a copy of example-config.yaml and configure as desired, then run the instruction generator.

Generating topics

NEW - 2023-07-18: Again, this is now all YAML-configuration based! Please create a customized version of the YAML config file, then run the topic generator. You can override the topic_prompt string in the configuration to use a different topic-generation prompt.
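As promised in "Tuning the experts" above, here is a minimal sketch of per-category segmentation. It is a hedged illustration, not scripts/segment_experts.py itself: the "category" field follows the README's description of the output, while the file layout and the 5% misrouting sample rate are assumptions.

```python
import json
import random
from collections import defaultdict

# Sketch of segmenting self-instruct output by the instructor/category that
# generated each record (assumes a JSONL file with a "category" field per line).
def segment(path: str, other_sample_rate: float = 0.05) -> dict:
    by_category = defaultdict(list)
    with open(path) as f:
        for line in f:
            item = json.loads(line)
            by_category[item["category"]].append(item)

    experts = {}
    for category, items in by_category.items():
        # Mix in a small sample from other categories so each expert can
        # still respond sensibly if the router misroutes a request.
        others = [x for c, rows in by_category.items() if c != category for x in rows]
        k = min(len(others), int(len(items) * other_sample_rate))
        experts[category] = items + random.sample(others, k)
    return experts
```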
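And here is a minimal sketch of the default faiss option from "Routing requests to the expert": embed each expert's training data (reduced here to one centroid per expert), index the centroids, and pick the nearest expert for each incoming request. The embedding function below is a random-vector placeholder, and the peft call mentioned in the closing comment is one plausible way to load the selected adapter, not necessarily what airoboros does.

```python
import faiss
import numpy as np

# Placeholder encoder: swap in a real sentence-embedding model. Vectors are
# unit-normalized so faiss inner product equals cosine similarity.
def embed(texts: list[str]) -> np.ndarray:
    rng = np.random.default_rng(0)
    vecs = rng.standard_normal((len(texts), 384)).astype("float32")
    return vecs / np.linalg.norm(vecs, axis=1, keepdims=True)

experts = ["coding", "roleplay", "reasoning"]  # hypothetical expert names
# One centroid per expert, e.g. the mean embedding of its training instructions.
centroids = embed([f"representative {e} instructions" for e in experts])

index = faiss.IndexFlatIP(centroids.shape[1])  # inner-product (cosine) index
index.add(centroids)

def route(instruction: str) -> str:
    """Return the name of the most similar expert for this request."""
    _scores, ids = index.search(embed([instruction]), 1)
    return experts[ids[0][0]]

# The chosen expert's LoRA adapter could then be loaded dynamically, e.g. with
# peft's PeftModel.from_pretrained(base_model, adapter_path).
print(route("Write a python function to reverse a string."))
```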
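Finally, since the server speaks the same API scheme as OpenAI, a request might look like the following sketch; the host, port, and model name here are assumptions.

```python
import requests

# Hypothetical local endpoint; the server mimics the OpenAI chat-completions API.
resp = requests.post(
    "http://127.0.0.1:8000/v1/chat/completions",
    json={
        "model": "airoboros-lmoe-7b-2.1",
        "messages": [{"role": "user", "content": "Explain LoRA in one sentence."}],
        "temperature": 0.7,
    },
)
print(resp.json()["choices"][0]["message"]["content"])
```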
Support the work

https://bmc.link/jondurbin
ETH 0xce914eAFC2fe52FdceE59565Dd92c06f776fcb11
BTC bc1qdwuth4vlg8x37ggntlxu5cjfwgmdy5zaa7pswf

Models (research use only):

gpt-4 versions

llama-2 base model
- 2.1 dataset: airoboros-l2-7b-2.1, airoboros-l2-13b-2.1, airoboros-l2-70b-2.1, airoboros-c34b-2.1
- 2.0/m2.0 datasets: airoboros-l2-7b-gpt4-2.0, airoboros-l2-7b-gpt4-m2.0, airoboros-l2-13b-gpt4-2.0, airoboros-l2-13b-gpt4-m2.0
- Previous generation (1.4.1 dataset): airoboros-l2-70b-gpt4-1.4.1, airoboros-l2-13b-gpt4-1.4.1, airoboros-l2-7b-gpt4-1.4.1

original llama base model
- Latest version (2.0 / m2.0 datasets): airoboros-33b-gpt4-2.0, airoboros-33b-gpt4-m2.0
- Previous generation (1.4.1 dataset): airoboros-65b-gpt4-1.4, airoboros-33b-gpt4-1.4, airoboros-13b-gpt4-1.4, airoboros-7b-gpt4-1.4
- older versions on HF as well

mpt-30b base model
- airoboros-mpt-30b-gpt4-1.4

gpt-3.5-turbo versions
- airoboros-gpt-3.5-turbo-100k-7b
- airoboros-13b
- airoboros-7b

Datasets:
- airoboros-gpt-3.5-turbo
- airoboros-gpt4
- airoboros-gpt4-1.1
- airoboros-gpt4-1.2
- airoboros-gpt4-1.3
- airoboros-gpt4-1.4
- airoboros-gpt4-2.0 (June only GPT4)
- airoboros-gpt4-m2.0
- airoboros-2.1 (recommended)