VibeBuilders.ai

Avoiding

Explore resources related to "avoiding" to help implement AI solutions for your business.

[R] Forget the Data and Fine-tuning! Just Fold the Network to Compress [Feb, 2025]
reddit
LLM Vibe Score: 0
Human Vibe Score: 1
Megneous · This week

[R] Forget the Data and Fine-tuning! Just Fold the Network to Compress [Feb, 2025]

Abstract: We introduce model folding, a novel data-free model compression technique that merges structurally similar neurons across layers, significantly reducing the model size without the need for fine-tuning or access to training data. Unlike existing methods, model folding preserves data statistics during compression by leveraging k-means clustering, and using novel data-free techniques to prevent variance collapse or explosion. Our theoretical framework and experiments across standard benchmarks, including ResNet18 and LLaMA-7B, demonstrate that model folding achieves comparable performance to data-driven compression techniques and outperforms recently proposed data-free methods, especially at high sparsity levels. This approach is particularly effective for compressing large-scale models, making it suitable for deployment in resource-constrained environments. Our code is online.

PDF: https://arxiv.org/pdf/2502.10216

Summary of Novel Contributions in "Just Fold the Network to Compress" (AI-generated summary)

Introduction
- Problem addressed: Traditional model compression techniques (e.g., pruning, quantization) require fine-tuning or access to training data to maintain performance, limiting their use in data-constrained scenarios.
- Novelty: Data-free compression: introduces model folding, a method that compresses models without fine-tuning or training data by merging structurally similar neurons. Variance preservation: addresses variance collapse (reduced activation variance degrading performance) and variance overshooting (excessive variance) through novel data-free techniques.

Preliminaries
- Background: Prior work in neuron alignment (e.g., weight matching) and data-driven variance repair (e.g., REPAIR) relies on data or fine-tuning.
- Novelty: Data-free neuron alignment: extends weight matching to intra-model neuron clustering via k-means, avoiding dependency on input data. Theoretical connection: frames model folding as a k-means optimization problem, proving it minimizes the Frobenius-norm approximation error during compression.

Model Folding
- Core innovations: Layer-wise clustering: merges neurons by applying k-means to weight matrices across consecutive layers, reducing redundancy while preserving inter-layer dependencies. Fold-AR (Approximate REPAIR): estimates intra-cluster correlations to rescale activations, preventing variance collapse without data. Fold-DIR (Deep Inversion REPAIR): uses synthetic data generated via Deep Inversion (optimizing noise to match BatchNorm statistics) to recalibrate activation variances.
- Handling complex architectures: extends folding to residual connections and BatchNorm layers by clustering combined weight-normalization matrices.

Experiments
- High-sparsity performance: outperforms data-free methods (e.g., IFM, INN) by 10–15% accuracy at 70% sparsity on ResNet18/CIFAR10.
- LLM compression: achieves perplexity comparable to data-driven methods on LLaMA-7B without fine-tuning or data.
- Variance alignment: Fold-AR and Fold-DIR maintain variance ratios close to 1, avoiding collapse/overshooting (Fig. 4).

Limitations and Future Work
- Effectiveness depends on model redundancy (less effective for compact models).
- Uniform sparsity per layer (future work may optimize layer-wise sparsity).

Potential Benefits for SOTA Models
- Edge deployment: enables compression of large models (e.g., LLMs) for smartphones/IoT devices without data access or retraining.
- Privacy-sensitive domains: critical for healthcare/finance where data cannot be used for calibration.
- Efficiency at scale: reduces LLM size by 20–50% with minimal performance loss, lowering inference costs.
- Robustness to OOD data: Fold-AR/Fold-DIR mitigate performance drops caused by out-of-distribution calibration data in data-driven methods.
- Example impact: a folded LLM could run on edge devices like NVIDIA Jetson Nano with ~50% fewer parameters, maintaining usability for tasks like text generation while reducing memory and energy consumption.
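For intuition about the core layer-wise clustering step, here is a minimal, hypothetical sketch for a pair of fully connected layers: cluster the rows of the first weight matrix with k-means, replace each cluster by its centroid, and sum the corresponding columns of the next layer so the composition is roughly preserved. This is not the authors' implementation; it omits the Fold-AR/Fold-DIR variance repair and the handling of BatchNorm and residual connections, and all names are illustrative.

    import torch
    from sklearn.cluster import KMeans

    def fold_linear_pair(W1, b1, W2, n_clusters):
        """Fold layer 1 (W1: [h, d_in], b1: [h]) and adapt layer 2 (W2: [d_out, h])."""
        # Cluster neurons of layer 1 by their incoming weights and bias.
        feats = torch.cat([W1, b1.unsqueeze(1)], dim=1).cpu().numpy()
        labels = torch.as_tensor(KMeans(n_clusters=n_clusters, n_init=10).fit_predict(feats))
        W1_new = torch.zeros(n_clusters, W1.shape[1])
        b1_new = torch.zeros(n_clusters)
        W2_new = torch.zeros(W2.shape[0], n_clusters)
        for c in range(n_clusters):
            idx = (labels == c).nonzero(as_tuple=True)[0]
            W1_new[c] = W1[idx].mean(dim=0)       # merged neuron = cluster centroid
            b1_new[c] = b1[idx].mean()
            W2_new[:, c] = W2[:, idx].sum(dim=1)  # next layer sums the merged activations
        return W1_new, b1_new, W2_new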

[D] Here are 17 ways of making PyTorch training faster – what did I miss?
reddit
LLM Vibe Score: 0
Human Vibe Score: 1
lorenzkuhn · This week

[D] Here are 17 ways of making PyTorch training faster – what did I miss?

I've been collecting methods to accelerate training in PyTorch – here's what I've found so far. What did I miss? What did I get wrong?

The methods, roughly sorted from largest to smallest expected speed-up, are:

1. Consider using a different learning rate schedule.
2. Use multiple workers and pinned memory in DataLoader.
3. Max out the batch size.
4. Use Automatic Mixed Precision (AMP).
5. Consider using a different optimizer.
6. Turn on cuDNN benchmarking.
7. Beware of frequently transferring data between CPUs and GPUs.
8. Use gradient/activation checkpointing.
9. Use gradient accumulation.
10. Use DistributedDataParallel for multi-GPU training.
11. Set gradients to None rather than 0.
12. Use .as_tensor() rather than .tensor().
13. Turn off debugging APIs if not needed.
14. Use gradient clipping.
15. Turn off bias before BatchNorm.
16. Turn off gradient computation during validation.
17. Use input and batch normalization.

Consider using another learning rate schedule

The learning rate (schedule) you choose has a large impact on the speed of convergence as well as the generalization performance of your model. Cyclical Learning Rates and the 1Cycle learning rate schedule are both methods introduced by Leslie N. Smith (here and here), and then popularised by fast.ai's Jeremy Howard and Sylvain Gugger (here and here). Essentially, the 1Cycle learning rate schedule looks something like this:

[Image: sketch of the 1Cycle learning rate schedule]

Sylvain writes: [1cycle consists of] two steps of equal lengths, one going from a lower learning rate to a higher one, then back to the minimum. The maximum should be the value picked with the Learning Rate Finder, and the lower one can be ten times lower. Then, the length of this cycle should be slightly less than the total number of epochs, and, in the last part of training, we should allow the learning rate to decrease more than the minimum, by several orders of magnitude.

In the best case this schedule achieves a massive speed-up – what Smith calls Superconvergence – as compared to conventional learning rate schedules. Using the 1Cycle policy, for instance, he needs ~10x fewer training iterations of a ResNet-56 on ImageNet to match the performance of the original paper. The schedule seems to perform robustly well across common architectures and optimizers. PyTorch implements both of these methods as torch.optim.lr_scheduler.CyclicLR and torch.optim.lr_scheduler.OneCycleLR; see the documentation. One drawback of these schedulers is that they introduce a number of additional hyperparameters. This post and this repo offer a nice overview and implementation of how good hyperparameters can be found, including the Learning Rate Finder mentioned above. Why does this work? It doesn't seem entirely clear, but one possible explanation is that regularly increasing the learning rate helps to traverse saddle points in the loss landscape more quickly.

Use multiple workers and pinned memory in DataLoader

When using torch.utils.data.DataLoader, set num_workers > 0, rather than the default value of 0, and pin_memory=True, rather than the default value of False. Details of this are explained here. Szymon Migacz achieves a 2x speed-up for a single training epoch by using four workers and pinned memory. A rule of thumb for choosing the number of workers is to set it to four times the number of available GPUs, with both a larger and a smaller number of workers leading to a slowdown.
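For illustration, a typical configuration following this advice might look like the sketch below; the dataset, batch size and worker count are placeholders to adapt to your setup.

    import torch
    from torch.utils.data import DataLoader, TensorDataset

    # Dummy dataset standing in for your real one.
    dataset = TensorDataset(torch.randn(1_000, 3, 32, 32), torch.randint(0, 10, (1_000,)))

    loader = DataLoader(
        dataset,
        batch_size=256,
        shuffle=True,
        num_workers=4,           # > 0 so batches are prepared in background worker processes
        pin_memory=True,         # page-locked host memory speeds up host-to-GPU copies
        persistent_workers=True, # keep workers alive between epochs (PyTorch >= 1.7)
    )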
Note that increasing num_workers will increase your CPU memory consumption.

Max out the batch size

This is a somewhat contentious point. Generally, however, it seems like using the largest batch size your GPU memory permits will accelerate your training (see NVIDIA's Szymon Migacz, for instance). Note that you will also have to adjust other hyperparameters, such as the learning rate, if you modify the batch size. A rule of thumb here is to double the learning rate as you double the batch size. OpenAI has a nice empirical paper on the number of convergence steps needed for different batch sizes. Daniel Huynh runs some experiments with different batch sizes (also using the 1Cycle policy discussed above) where he achieves a 4x speed-up by going from batch size 64 to 512. One of the downsides of using large batch sizes, however, is that they might lead to solutions that generalize worse than those trained with smaller batches.

Use Automatic Mixed Precision (AMP)

The release of PyTorch 1.6 included a native implementation of Automatic Mixed Precision training. The main idea here is that certain operations can be run faster and without a loss of accuracy in half precision (FP16) rather than in the single precision (FP32) used elsewhere. AMP then automatically decides which operation should be executed in which format. This allows both for faster training and a smaller memory footprint. In the best case, the usage of AMP looks something like this:

    import torch

    # Creates the scaler once at the beginning of training
    scaler = torch.cuda.amp.GradScaler()

    for data, label in data_iter:
        optimizer.zero_grad()
        # Casts operations to mixed precision
        with torch.cuda.amp.autocast():
            loss = model(data)
        # Scales the loss, and calls backward() to create scaled gradients
        scaler.scale(loss).backward()
        # Unscales gradients and calls or skips optimizer.step()
        scaler.step(optimizer)
        # Updates the scale for the next iteration
        scaler.update()

Benchmarking a number of common language and vision models on NVIDIA V100 GPUs, Huang and colleagues find that using AMP over regular FP32 training yields roughly 2x – but up to 5.5x – training speed-ups. Currently, only CUDA ops can be autocast in this way. See the documentation here for more details on this and other limitations. u/SVPERBlA points out that you can squeeze out some additional performance (~20%) from AMP on NVIDIA Tensor Core GPUs if you convert your tensors to the Channels Last memory format. Refer to this section in the NVIDIA docs for an explanation of the speedup and more about NCHW versus NHWC tensor formats.

Consider using another optimizer

AdamW is Adam with weight decay (rather than L2 regularization), which was popularized by fast.ai and is now available natively in PyTorch as torch.optim.AdamW. AdamW seems to consistently outperform Adam in terms of both the error achieved and the training time. See this excellent blog post on why using weight decay instead of L2 regularization makes a difference for Adam. Both Adam and AdamW work well with the 1Cycle policy described above. There are also a few not-yet-native optimizers that have received a lot of attention recently, most notably LARS (pip-installable implementation) and LAMB. NVIDIA's APEX implements fused versions of a number of common optimizers such as Adam. This implementation avoids a number of passes to and from GPU memory as compared to the PyTorch implementation of Adam, yielding speed-ups in the range of 5%.
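To make the combination of these two tips concrete, here is a minimal, hypothetical sketch pairing AdamW with the OneCycleLR schedule from the earlier section; the model, data, learning rates and step counts are placeholders, not tuned recommendations.

    import torch
    from torch import nn

    model = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 10))
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=1e-2)

    steps_per_epoch, epochs = 100, 3
    scheduler = torch.optim.lr_scheduler.OneCycleLR(
        optimizer, max_lr=1e-3, steps_per_epoch=steps_per_epoch, epochs=epochs
    )

    x = torch.randn(64, 128)
    y = torch.randint(0, 10, (64,))
    loss_fn = nn.CrossEntropyLoss()

    for step in range(steps_per_epoch * epochs):
        optimizer.zero_grad(set_to_none=True)
        loss = loss_fn(model(x), y)
        loss.backward()
        optimizer.step()
        scheduler.step()  # OneCycleLR is stepped after every batch, not every epoch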
Turn on cuDNN benchmarking

If your model architecture remains fixed and your input size stays constant, setting torch.backends.cudnn.benchmark = True might be beneficial (docs). This enables the cuDNN autotuner, which will benchmark a number of different ways of computing convolutions in cuDNN and then use the fastest method from then on. For a rough reference on the type of speed-up you can expect from this, Szymon Migacz achieves a speed-up of 70% on a forward pass for a convolution and a 27% speed-up for a forward + backward pass of the same convolution. One caveat here is that this autotuning might become very slow if you max out the batch size as mentioned above.

Beware of frequently transferring data between CPUs and GPUs

Beware of frequently transferring tensors from a GPU to a CPU using tensor.cpu() and vice versa using tensor.cuda(), as these are relatively expensive. The same applies for .item() and .numpy() – use .detach() instead. If you are creating a new tensor, you can directly assign it to your GPU using the keyword argument device=torch.device('cuda:0'). If you do need to transfer data, using .to(non_blocking=True) might be useful as long as you don't have any synchronization points after the transfer. If you really have to, you might want to give Santosh Gupta's SpeedTorch a try, although it doesn't seem entirely clear when this actually does/doesn't provide speed-ups.

Use gradient/activation checkpointing

Quoting directly from the documentation: Checkpointing works by trading compute for memory. Rather than storing all intermediate activations of the entire computation graph for computing backward, the checkpointed part does not save intermediate activations, and instead recomputes them in the backward pass. It can be applied to any part of a model. Specifically, in the forward pass, the function will run in torch.no_grad() manner, i.e., not storing the intermediate activations. Instead, the forward pass saves the inputs tuple and the function parameter. In the backward pass, the saved inputs and function are retrieved, and the forward pass is computed on the function again, now tracking the intermediate activations, and then the gradients are calculated using these activation values. So while this might slightly increase your run time for a given batch size, you'll significantly reduce your memory footprint. This in turn will allow you to further increase the batch size you're using, allowing for better GPU utilization. While checkpointing is implemented natively as torch.utils.checkpoint (docs), it does seem to take some thought and effort to implement properly. Priya Goyal has a good tutorial demonstrating some of the key aspects of checkpointing.
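A minimal sketch of what checkpointing can look like for a plain feed-forward model is below; the block names and sizes are placeholders, not Priya Goyal's tutorial code, and use_reentrant=False assumes a reasonably recent PyTorch version.

    import torch
    from torch import nn
    from torch.utils.checkpoint import checkpoint

    block1 = nn.Sequential(nn.Linear(512, 512), nn.ReLU())
    block2 = nn.Sequential(nn.Linear(512, 512), nn.ReLU())
    head = nn.Linear(512, 10)

    x = torch.randn(32, 512, requires_grad=True)

    h = block1(x)
    # block2's intermediate activations are not stored; they are recomputed
    # during the backward pass in exchange for an extra forward computation.
    h = checkpoint(block2, h, use_reentrant=False)
    loss = head(h).sum()
    loss.backward()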
Use gradient accumulation

Another approach to increasing the batch size is to accumulate gradients across multiple .backward() passes before calling optimizer.step(). Following a post by Hugging Face's Thomas Wolf, gradient accumulation can be implemented as follows:

    model.zero_grad()                                # Reset gradients tensors
    for i, (inputs, labels) in enumerate(training_set):
        predictions = model(inputs)                  # Forward pass
        loss = loss_function(predictions, labels)    # Compute loss function
        loss = loss / accumulation_steps             # Normalize our loss (if averaged)
        loss.backward()                              # Backward pass
        if (i + 1) % accumulation_steps == 0:        # Wait for several backward steps
            optimizer.step()                         # Now we can do an optimizer step
            model.zero_grad()                        # Reset gradients tensors
            if (i + 1) % evaluation_steps == 0:      # Evaluate the model when we...
                evaluate_model()                     # ...have no gradients accumulated

This method was developed mainly to circumvent GPU memory limitations, and I'm not entirely clear on the trade-offs of the additional .backward() passes. This discussion on the fastai forum seems to suggest that it can in fact accelerate training, so it's probably worth a try.

Use DistributedDataParallel for multi-GPU training

Methods to accelerate distributed training probably warrant their own post, but one simple one is to use torch.nn.DistributedDataParallel rather than torch.nn.DataParallel. By doing so, each GPU will be driven by a dedicated CPU core, avoiding the GIL issues of DataParallel. In general, I can strongly recommend reading the documentation on distributed training.

Set gradients to None rather than 0

Use .zero_grad(set_to_none=True) rather than .zero_grad(). Doing so will let the memory allocator handle the gradients rather than actively setting them to 0. This will yield a modest speed-up, as they say in the documentation, so don't expect any miracles. Watch out, doing this is not side-effect free! Check the docs for the details.

Use .as_tensor() rather than .tensor()

torch.tensor() always copies data. If you have a numpy array that you want to convert, use torch.as_tensor() or torch.from_numpy() to avoid copying the data.

Turn on debugging tools only when actually needed

PyTorch offers a number of useful debugging tools like the autograd.profiler, autograd.gradcheck, and autograd anomaly detection. Make sure to use them to better understand your code when needed, but also turn them off when you don't need them, as they will slow down your training.

Use gradient clipping

Originally used to avoid exploding gradients in RNNs, there is both some empirical evidence as well as some theoretical support that clipping gradients (roughly speaking: gradient = min(gradient, threshold)) accelerates convergence. Hugging Face's Transformers implementation is a really clean example of how to use gradient clipping as well as some of the other methods such as AMP mentioned in this post. In PyTorch this can be done using torch.nn.utils.clip_grad_norm_ (documentation). It's not entirely clear to me which models benefit how much from gradient clipping, but it seems to be robustly useful for RNNs, Transformer-based architectures, and ResNets, across a range of different optimizers.

Turn off bias before BatchNorm

This is a very simple one: turn off the bias of layers that come directly before BatchNormalization layers. For a 2-D convolutional layer, this can be done by setting the bias keyword to False: torch.nn.Conv2d(..., bias=False, ...). (Here's a reminder of why this makes sense.) You will save some parameters; I would, however, expect the speed-up from this to be relatively small compared to some of the other methods mentioned here.

Turn off gradient computation during validation

This one is straightforward: use the torch.no_grad() context manager during validation.

Use input and batch normalization

You're probably already doing this but you might want to double-check: Are you normalizing your input? Are you using batch normalization? And here's a reminder of why you probably should.
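To tie a few of the smaller tips together, here is a toy training and validation step that uses set_to_none, as_tensor, gradient clipping, a bias-free convolution before BatchNorm, and no_grad during validation; the model and data are placeholders rather than a recommendation.

    import numpy as np
    import torch
    from torch import nn

    model = nn.Sequential(
        nn.Conv2d(3, 16, 3, padding=1, bias=False),  # bias is redundant before BatchNorm
        nn.BatchNorm2d(16),
        nn.ReLU(),
        nn.Flatten(),
        nn.Linear(16 * 32 * 32, 10),
    )
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    np_images = np.random.rand(64, 3, 32, 32).astype("float32")  # pretend these came from disk
    images = torch.as_tensor(np_images)                          # shares memory, unlike torch.tensor()
    labels = torch.randint(0, 10, (64,))

    # Training step
    model.train()
    optimizer.zero_grad(set_to_none=True)            # let the allocator handle the gradients
    loss = loss_fn(model(images), labels)
    loss.backward()
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
    optimizer.step()

    # Validation step
    model.eval()
    with torch.no_grad():                            # no gradient bookkeeping during validation
        val_loss = loss_fn(model(images), labels)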
Bonus tip from the comments: Use JIT to fuse point-wise operations

If you have adjacent point-wise operations you can use PyTorch JIT to combine them into one FusionGroup, which can then be launched on a single kernel rather than the multiple kernels that would be used by default. You'll also save some memory reads and writes. Szymon Migacz shows how you can use the @torch.jit.script decorator to fuse the operations in a GELU, for instance:

    @torch.jit.script
    def fused_gelu(x):
        return x * 0.5 * (1.0 + torch.erf(x / 1.41421))

In this case, fusing the operations leads to a 5x speed-up for the execution of fused_gelu as compared to the unfused version. See also this post for an example of how Torchscript can be used to accelerate an RNN. Hat tip to u/Patient_Atmosphere45 for the suggestion.

Sources and additional resources

Many of the tips listed above come from Szymon Migacz's talk and post in the PyTorch docs. PyTorch Lightning's William Falcon has two interesting posts with tips to speed up training. PyTorch Lightning already takes care of some of the points above by default. Thomas Wolf at Hugging Face has a number of interesting articles on accelerating deep learning – with a particular focus on language models. The same goes for Sylvain Gugger and Jeremy Howard: they have many interesting posts, in particular on learning rates and AdamW. Thanks to Ben Hahn, Kevin Klein and Robin Vaaler for their feedback on a draft of this post! I've also put all of the above into this blog post.

Only 2 months of cash in the Bank for my business but was able to save it with the help of AI.
reddit
LLM Vibe Score: 0
Human Vibe Score: 1
CALLIRDAN90 · This week

Only 2 months of cash in the Bank for my business but was able to save it with the help of AI.

Hi there! I’m excited to share something very personal with you. We needed to book at least 2 appointments per day in the next 60 days, or my business would fail. We were already trying two acquisition channels, LinkedIn and email. The problem with these channels was that the positive response rate was very low in both. So I decided to focus on LinkedIn and get the attention of the lead by sending videos directly to them via LinkedIn messages. (You can send videos to your connections on LinkedIn if you use your cell phone.) This wasn’t new, but I added a small twist to get the lead’s attention. All the covers of the videos had a picture of me holding a sign with the person’s name and an interesting phrase. This showed some okay results, but the rest of the video was not personalized. Only the picture on the cover was. I even developed a Chrome extension for this because I thought this would be the answer and that I would book tons of appointments.  But after more trial and outreach, my leads responded, telling me that because the video itself wasn’t personalized for them, they felt like I didn’t put enough effort in, so they would not book a call with me. So after investing time and effort into my “new bright idea” and getting developers to make the Chrome extension, I was back to square one with no results. A few weeks went by, and after researching online, I found an online course from a guy who promised to teach me how to book 30+ appointments per month, guaranteed (at the time, I was making 2 or 3 appointments per week, maximum). He promised that I would only pay if he actually booked appointments for me and even offered to give me money if his course didn’t work for me. I never paid attention to internet gurus, but the offer was actually not bad, so I looked into this guy’s website. I found out he had hundreds of reviews from people who had taken his course and were talking amazing things about it. The more I read, the more excited I got. I booked a call that day and talked to a salesperson. The call was very short, and he promised I would get at least 2 appointments per day, easily. He seemed a bit cocky and told me that I just needed to trust him and the 100+ reviews from people who had taken the course. He didn’t share details, a proposal, or anything. I asked the price, and he told me it was close to $10k. (Not kidding, this was the price.) Then he told me that I would make the money back in no time with the clients I would get following his course, and that if it didn’t work, he would give me the money back. But I needed to follow everything the course said for at least 6 months. I had never paid $10k for anything in my life; it was extremely expensive for me. Also, my salary from my business was not in dollars but in a currency that was worth much less than the dollar. I continued to research more and more, but no other course was close to the number of reviews and promises that this guy had. I got desperate and told myself that I would bet everything on this course. If it worked for so many others, surely it would work for me. I got a loan from the bank and paid for the course. You might read this and think it was the most stupid thing ever, but the reality is that after 2 months in the course (I did the course as fast as I could), I learned a lot. The course was not bad; it was very extensive—probably more than 200 hours or so—and they taught a lot of things. I don’t think it was worth $10k for me, but I can see how for other people it might be worth that. 
Now, to the question you’re all thinking: did it get me the 2 appointments I needed per day? The answer is no. Here’s the thing: most of the techniques they taught were innovative and disruptive, but the focus was always on personalization, and they didn’t teach any way to automate the personalization. (I think, at the time they made the course, the tools didn’t exist yet.) So they taught how to do everything manually, and it took a lot—a lot of time and effort. And most annoyingly: an incredible amount of time doing operational things. I did get 2 appointments on some days, but it wasn’t consistent, and I didn’t have the time to spend 14 hours a day doing everything manually or the money to hire someone to do this for me. (I needed to also spend time delivering our service to our current clients; otherwise, they would leave.) I told them this, and they were very reasonable. After some negotiation, they gave me part of the money back. (To be fair, there was a lot of value in the course, so asking for the full $10k back would have been excessive because, in the end, it really taught me a lot of things I didn’t know.) So in the end, I spent $10k and 200+ hours on an online course, spent time and effort developing a Chrome extension, and was still not able to hit the meetings I needed. Money in the business was running out, and I needed to do something fast, or I was doomed. After investing time and effort in tools, research, and spending $10k and over 200 hours on a course that didn’t deliver the consistent results I needed, I was at a crossroads. My businesses were running out of money, and I knew I needed to find a solution quickly, or everything I had worked for would collapse. It was during this time of desperation that I started exploring other options. One night, while scrolling through the internet, I stumbled upon a 2024 article about how AI was being used to revolutionize various industries. It wasn’t directly related to appointment booking, but it sparked an idea in my mind. What if I could use AI to automate the personalization process that I had learned in the course? It seemed like a long shot, but I had nothing to lose. I started researching AI tools and technologies—YouTube videos, podcasts, pretty much everything related to AI—desperate to find something that could help me scale my outreach without investing too much time, while still maintaining the personalization that was so important. After a lot of trial and error, I found a few tools that showed promise. All of these tools were extremely new. Some of them had just launched the versions I needed just weeks ago. I can say I researched and tested more than 50 AI startups, experimenting with them, testing different approaches, checking prices (the problem was that most of them were cheap but became very expensive when applying the volume I needed to get results), and gradually refining my process. It wasn’t an overnight success, but for the first time, I felt like I was onto something that could truly work. The idea of combining AI personalization with volume was something new, and it gave me hope that I could finally book the meetings I needed without burning out. One day, I sent a video of myself talking—completely AI-generated—to my family chat group and waited for their response. None of them noticed it wasn’t actually me. At that moment, I said to myself: “Okay, I am ready to test this in the real world and see if it works.” Like everything in life, focus is key. 
As I mentioned earlier, we were already trying outbound strategies on LinkedIn and email, but I decided to narrow my focus to LinkedIn and specifically to video outreach. My goal was to stand out from the crowd, where most people were using text or sending generic videos. I knew that if my videos were 100% personalized, it would make a strong impression on my leads. I focused on two key metrics during my tests: Time spent on manual personalized outreach vs. AI-generated personalized outreach. Positive reply rate for non-personalized manual outreach vs. AI-generated personalized outreach. I ran a test using a sample of 50 one-minute videos sent to 50 leads, and here are the results:

Time spent to make the videos:
- Manual process: It took me up to 10 hours to create and send 50 personalized videos. This included looking good on camera, brushing my hair, choosing appropriate clothing, ensuring proper lighting, not messing up the script, using a camera holder, recharging the phone, pausing to drink water, avoiding external sounds, being in an appropriate room, downloading the videos, deleting the videos that were not good, and sending the final ones. On average, it took me at least 12.5 minutes per one-minute video.
- AI process: With AI, it took me just 32 seconds to create the exact same one-minute personalized video, without saying a word or recording a second of footage. In total, I could make and send the same 50 personalized videos in just 27 minutes.
- Result: The AI process was 24 times faster. Completely crazy!

Positive reply rate:
- Non-personalized script (manual): Using a good script without personalization (no name, job title, city, company, etc.) resulted in a positive reply rate of 4-6% on LinkedIn, including follow-ups.
- Personalized script (AI): Using the same script but adding personalized details like the lead's name, company, city, and job title resulted in a positive reply rate of 15-20%, including follow-ups.
- Result: AI personalization led to 3x (three times) more replies.

The best part was the responses. Almost everyone who replied thanked me for taking the time to research them, congratulated me on my speech, and appreciated the personalization and eloquence of my message. These metrics were a complete breakthrough for me. I researched online to see if anyone else had done something similar, but I couldn't find anything close. After achieving these metrics, booking the two appointments I desperately needed became easy. In fact, in the last 10 weeks, I've been able to consistently book 3-4 appointments per day. This success allowed me to train someone in my company to handle the process, freeing me up to focus on other aspects of the business and ultimately saving it. With the AI appointment machine we built, I even have free time now, time that I've been using to develop a methodology and tech tools that I now teach to others. I named the methodology Clip2Lead as a reference to the first Chrome extension I developed that didn't work but ended up being the first step toward everything that followed. I've condensed everything I learned throughout my experiences into a simple and short FREE training where I cover the entire AI appointment booking process. This includes how to find leads, create scripts, set up follow-up sequences, generate AI videos, clone your voice, compare non-AI metrics with AI metrics, and even navigate AI safety controls.
I also offer Chrome extensions that helped me automate the process even further, so you can spend your time closing deals or focusing on other acquisition channels, while your AI machine for booking appointments runs with minimal effort from you. If you're interested, please get in touch with me. Thank you for taking the time to read my personal story.

Prompt_Engineering
github
LLM Vibe Score: 0.611
Human Vibe Score: 0.9298414218113789
NirDiamant · Mar 28, 2025

Prompt_Engineering

🌟 Support This Project: Your sponsorship fuels innovation in prompt engineering development. Become a sponsor to help maintain and expand this valuable resource!

Prompt Engineering Techniques: Comprehensive Repository for Development and Implementation 🖋️

Welcome to one of the most extensive and dynamic collections of Prompt Engineering tutorials and implementations available today. This repository serves as a comprehensive resource for learning, building, and sharing prompt engineering techniques, ranging from basic concepts to advanced strategies for leveraging large language models.

📫 Stay Updated!

🚀 Cutting-edge Updates · 💡 Expert Insights · 🎯 Top 0.1% Content

Join over 15,000 AI enthusiasts getting unique, cutting-edge insights and free tutorials! Plus, subscribers get exclusive early access and special discounts to our upcoming RAG Techniques course!

Introduction

Prompt engineering is at the forefront of artificial intelligence, revolutionizing the way we interact with and leverage AI technologies. This repository is designed to guide you through the development journey, from basic prompt structures to advanced, cutting-edge techniques. Our goal is to provide a valuable resource for everyone - from beginners taking their first steps in AI to seasoned practitioners pushing the boundaries of what's possible. By offering a range of examples from foundational to complex, we aim to facilitate learning, experimentation, and innovation in the rapidly evolving field of prompt engineering. Furthermore, this repository serves as a platform for showcasing innovative prompt engineering techniques. Whether you've developed a novel approach or found an innovative application for existing techniques, we encourage you to share your work with the community.

📖 Get the Fully Explained Version of This Repo

This repository contains 22 hands-on Jupyter Notebook tutorials covering key prompt engineering techniques. If you want to go deeper with full explanations, intuitive insights, and structured exercises, check out the expanded version in book format: 📚 Prompt Engineering from Zero to Hero

- 📖 All 22 techniques from this repo, fully explained in depth
- 🧠 Step-by-step breakdowns of key concepts & best practices
- 🏋️ Hands-on exercises to sharpen your skills
- 🎯 Designed for learners who want a structured, guided approach
- 📄 Instant access to the PDF upon purchase
- 📱 Readable on any device – computer, tablet, or phone

💡 Subscribers to the DiamantAI newsletter receive an exclusive 33% (!) discount on the book. 👉 Get the full explained version here

Related Projects

📚 Explore my comprehensive guide on RAG techniques to learn how to enhance AI systems with external knowledge retrieval, complementing language model capabilities with rich, up-to-date information.

🤖 Dive into my GenAI Agents Repository for a wide range of AI agent implementations and tutorials, from simple conversational bots to complex, multi-agent systems for various applications.

A Community-Driven Knowledge Hub

This repository grows stronger with your contributions! Join our vibrant Discord community, the central hub for shaping and advancing this project together 🤝 DiamantAI Discord Community. Whether you're a novice eager to learn or an expert ready to share your knowledge, your insights can shape the future of prompt engineering. Join us to propose ideas, get feedback, and collaborate on innovative implementations. For contribution guidelines, please refer to our CONTRIBUTING.md file. Let's advance prompt engineering technology together!
🔗 For discussions on GenAI, or to explore knowledge-sharing opportunities, feel free to connect on LinkedIn.

Key Features

- 🎓 Learn prompt engineering techniques from beginner to advanced levels
- 🧠 Explore a wide range of prompt structures and applications
- 📚 Step-by-step tutorials and comprehensive documentation
- 🛠️ Practical, ready-to-use prompt implementations
- 🌟 Regular updates with the latest advancements in prompt engineering
- 🤝 Share your own prompt engineering creations with the community

Prompt Engineering Techniques

Explore our extensive list of prompt engineering techniques, ranging from basic to advanced:

🌱 Fundamental Concepts

- Introduction to Prompt Engineering. Overview 🔎: A comprehensive introduction to the fundamental concepts of prompt engineering in the context of AI and language models. Implementation 🛠️: Combines theoretical explanations with practical demonstrations, covering basic concepts, structured prompts, comparative analysis, and problem-solving applications.
- Basic Prompt Structures. Overview 🔎: Explores two fundamental types of prompt structures: single-turn prompts and multi-turn prompts (conversations). Implementation 🛠️: Uses OpenAI's GPT model and LangChain to demonstrate single-turn and multi-turn prompts, prompt templates, and conversation chains.
- Prompt Templates and Variables. Overview 🔎: Introduces creating and using prompt templates with variables, focusing on Python and the Jinja2 templating engine. Implementation 🛠️: Covers template creation, variable insertion, conditional content, list processing, and integration with the OpenAI API.

🔧 Core Techniques

- Zero-Shot Prompting. Overview 🔎: Explores zero-shot prompting, allowing language models to perform tasks without specific examples or prior training. Implementation 🛠️: Demonstrates direct task specification, role-based prompting, format specification, and multi-step reasoning using OpenAI and LangChain.
- Few-Shot Learning and In-Context Learning. Overview 🔎: Covers Few-Shot Learning and In-Context Learning techniques using OpenAI's GPT models and the LangChain library. Implementation 🛠️: Implements basic and advanced few-shot learning, in-context learning, and best practices for example selection and evaluation.
- Chain of Thought (CoT) Prompting. Overview 🔎: Introduces Chain of Thought (CoT) prompting, encouraging AI models to break down complex problems into step-by-step reasoning processes. Implementation 🛠️: Covers basic and advanced CoT techniques, applying them to various problem-solving scenarios and comparing results with standard prompts.

🔍 Advanced Strategies

- Self-Consistency and Multiple Paths of Reasoning. Overview 🔎: Explores techniques for generating diverse reasoning paths and aggregating results to improve AI-generated answers. Implementation 🛠️: Demonstrates designing diverse reasoning prompts, generating multiple responses, implementing aggregation methods, and applying self-consistency checks.
- Constrained and Guided Generation. Overview 🔎: Focuses on techniques to set up constraints for model outputs and implement rule-based generation. Implementation 🛠️: Uses LangChain's PromptTemplate for structured prompts, implements constraints, and explores rule-based generation techniques.
- Role Prompting. Overview 🔎: Explores assigning specific roles to AI models and crafting effective role descriptions. Implementation 🛠️: Demonstrates creating role-based prompts, assigning roles to AI models, and refining role descriptions for various scenarios.

🚀 Advanced Implementations

- Task Decomposition in Prompts. Overview 🔎: Explores techniques for breaking down complex tasks and chaining subtasks in prompts. Implementation 🛠️: Covers problem analysis, subtask definition, targeted prompt engineering, sequential execution, and result synthesis.
- Prompt Chaining and Sequencing. Overview 🔎: Demonstrates how to connect multiple prompts and build logical flows for complex AI-driven tasks. Implementation 🛠️: Explores basic prompt chaining, sequential prompting, dynamic prompt generation, and error handling within prompt chains.
- Instruction Engineering. Overview 🔎: Focuses on crafting clear and effective instructions for language models, balancing specificity and generality. Implementation 🛠️: Covers creating and refining instructions, experimenting with different structures, and implementing iterative improvement based on model responses.

🎨 Optimization and Refinement

- Prompt Optimization Techniques. Overview 🔎: Explores advanced techniques for optimizing prompts, focusing on A/B testing and iterative refinement. Implementation 🛠️: Demonstrates A/B testing of prompts, iterative refinement processes, and performance evaluation using relevant metrics.
- Handling Ambiguity and Improving Clarity. Overview 🔎: Focuses on identifying and resolving ambiguous prompts and techniques for writing clearer prompts. Implementation 🛠️: Covers analyzing ambiguous prompts, implementing strategies to resolve ambiguity, and exploring techniques for writing clearer prompts.
- Prompt Length and Complexity Management. Overview 🔎: Explores techniques for managing prompt length and complexity when working with large language models. Implementation 🛠️: Demonstrates techniques for balancing detail and conciseness, and strategies for handling long contexts including chunking, summarization, and iterative processing.

🛠️ Specialized Applications

- Negative Prompting and Avoiding Undesired Outputs. Overview 🔎: Explores negative prompting and techniques for avoiding undesired outputs from large language models. Implementation 🛠️: Covers basic negative examples, explicit exclusions, constraint implementation using LangChain, and methods for evaluating and refining negative prompts.
- Prompt Formatting and Structure. Overview 🔎: Explores various prompt formats and structural elements, demonstrating their impact on AI model responses. Implementation 🛠️: Demonstrates creating various prompt formats, incorporating structural elements, and comparing responses from different prompt structures.
- Prompts for Specific Tasks. Overview 🔎: Explores the creation and use of prompts for specific tasks: text summarization, question-answering, code generation, and creative writing. Implementation 🛠️: Covers designing task-specific prompt templates, implementing them using LangChain, executing with sample inputs, and analyzing outputs for each task type.

🌍 Advanced Applications

- Multilingual and Cross-lingual Prompting. Overview 🔎: Explores techniques for designing prompts that work effectively across multiple languages and for language translation tasks. Implementation 🛠️: Covers creating multilingual prompts, implementing language detection and adaptation, designing cross-lingual translation prompts, and handling various writing systems and scripts.
- Ethical Considerations in Prompt Engineering. Overview 🔎: Explores the ethical dimensions of prompt engineering, focusing on avoiding biases and creating inclusive and fair prompts. Implementation 🛠️: Covers identifying biases in prompts, implementing strategies to create inclusive prompts, and methods to evaluate and improve the ethical quality of AI outputs.
- Prompt Security and Safety. Overview 🔎: Focuses on preventing prompt injections and implementing content filters in prompts for safe and secure AI applications. Implementation 🛠️: Covers techniques for prompt injection prevention, content filtering implementation, and testing the effectiveness of security and safety measures.
- Evaluating Prompt Effectiveness. Overview 🔎: Explores methods and techniques for evaluating the effectiveness of prompts in AI language models. Implementation 🛠️: Covers setting up evaluation metrics, implementing manual and automated evaluation techniques, and providing practical examples using OpenAI and LangChain.

Getting Started

To begin exploring and implementing prompt engineering techniques:

1. Clone this repository.
2. Navigate to the technique you're interested in.
3. Follow the detailed implementation guide in each technique's notebook.

Contributing

We welcome contributions from the community! If you have a new technique or improvement to suggest:

- Fork the repository
- Create your feature branch: git checkout -b feature/AmazingFeature
- Commit your changes: git commit -m 'Add some AmazingFeature'
- Push to the branch: git push origin feature/AmazingFeature
- Open a pull request

License

This project is licensed under a custom non-commercial license - see the LICENSE file for details.

⭐️ If you find this repository helpful, please consider giving it a star!

Keywords: Prompt Engineering, AI, Machine Learning, Natural Language Processing, LLM, Language Models, NLP, Conversational AI, Zero-Shot Learning, Few-Shot Learning, Chain of Thought
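To give a flavor of the kind of code the notebooks build on, here is a minimal, hypothetical single-turn prompt-template example in the spirit of the "Basic Prompt Structures" and "Prompt Templates and Variables" tutorials. It is an illustration, not code from the repository, and assumes the langchain-core and langchain-openai packages plus an OPENAI_API_KEY in the environment; the model name is a placeholder.

    from langchain_core.prompts import ChatPromptTemplate
    from langchain_openai import ChatOpenAI

    # A single-turn prompt template with one variable.
    prompt = ChatPromptTemplate.from_template(
        "Explain {topic} in two sentences, aimed at a complete beginner."
    )
    llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)  # placeholder model name
    chain = prompt | llm

    result = chain.invoke({"topic": "zero-shot prompting"})
    print(result.content)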

aioquic
github
LLM Vibe Score: 0.518
Human Vibe Score: 0.04117299426077279
aiortc · Mar 27, 2025

aioquic

aioquic
=======

.. image:: https://img.shields.io/pypi/l/aioquic.svg
   :target: https://pypi.python.org/pypi/aioquic
   :alt: License

.. image:: https://img.shields.io/pypi/v/aioquic.svg
   :target: https://pypi.python.org/pypi/aioquic
   :alt: Version

.. image:: https://img.shields.io/pypi/pyversions/aioquic.svg
   :target: https://pypi.python.org/pypi/aioquic
   :alt: Python versions

.. image:: https://github.com/aiortc/aioquic/workflows/tests/badge.svg
   :target: https://github.com/aiortc/aioquic/actions
   :alt: Tests

.. image:: https://img.shields.io/codecov/c/github/aiortc/aioquic.svg
   :target: https://codecov.io/gh/aiortc/aioquic
   :alt: Coverage

.. image:: https://readthedocs.org/projects/aioquic/badge/?version=latest
   :target: https://aioquic.readthedocs.io/
   :alt: Documentation

What is aioquic?
----------------

aioquic is a library for the QUIC network protocol in Python. It features a minimal TLS 1.3 implementation, a QUIC stack and an HTTP/3 stack.

aioquic is used by Python open-source projects such as dnspython_, hypercorn_, mitmproxy_ and the `Web Platform Tests`_ cross-browser test suite. It has also been used extensively in research papers about QUIC.

To learn more about aioquic please `read the documentation`_.

Why should I use aioquic?
-------------------------

aioquic has been designed to be embedded into Python client and server libraries wishing to support QUIC and / or HTTP/3. The goal is to provide a common codebase for Python libraries in the hope of avoiding duplicated effort.

Both the QUIC and the HTTP/3 APIs follow the "bring your own I/O" pattern, leaving actual I/O operations to the API user. This approach has a number of advantages, including making the code testable and allowing integration with different concurrency models.

A lot of effort has gone into writing an extensive test suite for the aioquic code to ensure best-in-class code quality, and it is regularly `tested for interoperability`_ against other `QUIC implementations`_.

Features
--------

- minimal TLS 1.3 implementation conforming with `RFC 8446`_
- QUIC stack conforming with `RFC 9000`_ (QUIC v1) and `RFC 9369`_ (QUIC v2)
- IPv4 and IPv6 support
- connection migration and NAT rebinding
- logging TLS traffic secrets
- logging QUIC events in QLOG format
- version negotiation conforming with `RFC 9368`_
- HTTP/3 stack conforming with `RFC 9114`_
- server push support
- WebSocket bootstrapping conforming with `RFC 9220`_
- datagram support conforming with `RFC 9297`_

Installing
----------

The easiest way to install aioquic is to run:

.. code:: bash

    pip install aioquic

Building from source
--------------------

If there are no wheels for your system or if you wish to build aioquic from source you will need the OpenSSL development headers.

Linux
.....

On Debian/Ubuntu run:

.. code-block:: console

   sudo apt install libssl-dev python3-dev

On Alpine Linux run:

.. code-block:: console

   sudo apk add openssl-dev python3-dev bsd-compat-headers libffi-dev

OS X
....

On OS X run:

.. code-block:: console

   brew install openssl

You will need to set some environment variables to link against OpenSSL:

.. code-block:: console

   export CFLAGS=-I$(brew --prefix openssl)/include
   export LDFLAGS=-L$(brew --prefix openssl)/lib

Windows
.......

On Windows the easiest way to install OpenSSL is to use Chocolatey_.

.. code-block:: console

   choco install openssl

You will need to set some environment variables to link against OpenSSL:

.. code-block:: console

   $Env:INCLUDE = "C:\Progra~1\OpenSSL\include"
   $Env:LIB = "C:\Progra~1\OpenSSL\lib"

Running the examples
--------------------

aioquic comes with a number of examples illustrating various QUIC use cases.
You can browse these examples here: https://github.com/aiortc/aioquic/tree/main/examples

License
-------

aioquic is released under the `BSD license`_.

.. _read the documentation: https://aioquic.readthedocs.io/en/latest/
.. _dnspython: https://github.com/rthalley/dnspython
.. _hypercorn: https://github.com/pgjones/hypercorn
.. _mitmproxy: https://github.com/mitmproxy/mitmproxy
.. _Web Platform Tests: https://github.com/web-platform-tests/wpt
.. _tested for interoperability: https://interop.seemann.io/
.. _QUIC implementations: https://github.com/quicwg/base-drafts/wiki/Implementations
.. _cryptography: https://cryptography.io/
.. _Chocolatey: https://chocolatey.org/
.. _BSD license: https://aioquic.readthedocs.io/en/latest/license.html
.. _RFC 8446: https://datatracker.ietf.org/doc/html/rfc8446
.. _RFC 9000: https://datatracker.ietf.org/doc/html/rfc9000
.. _RFC 9114: https://datatracker.ietf.org/doc/html/rfc9114
.. _RFC 9220: https://datatracker.ietf.org/doc/html/rfc9220
.. _RFC 9297: https://datatracker.ietf.org/doc/html/rfc9297
.. _RFC 9368: https://datatracker.ietf.org/doc/html/rfc9368
.. _RFC 9369: https://datatracker.ietf.org/doc/html/rfc9369
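As a rough, hypothetical sketch (not part of the README) of how the library can be driven through its asyncio convenience layer: the host, port and ALPN value below are placeholders, and a reachable QUIC server is assumed.

.. code-block:: python

    import asyncio

    from aioquic.asyncio import connect
    from aioquic.quic.configuration import QuicConfiguration

    async def main() -> None:
        # Client-side QUIC configuration; certificate verification is left at its default.
        configuration = QuicConfiguration(is_client=True, alpn_protocols=["h3"])
        # connect() manages the UDP socket and the QUIC handshake for us.
        async with connect("quic.example.com", 4433, configuration=configuration) as protocol:
            await protocol.ping()  # send a PING frame and wait for the acknowledgement

    asyncio.run(main())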