VibeBuilders.ai

Computer

Explore resources related to computers to help implement AI solutions for your business.

Sophomore computer science student, looking at ISLP vs ESL vs mlcourse.ai
reddit
LLM Vibe Score: 0
Human Vibe Score: 1
OneTrueDuce · This week

Sophomore computer science student, looking at ISLP vs ESL vs mlcourse.ai

For background, I am currently a computer science sophomore with intermediate skills in Python and C++. I have taken university courses on data structures and algorithms, calc 1-3, linear algebra, and an introductory stats course (which covered confidence intervals, Z- and T-tests, and hypothesis testing). I have also read up to Chapter 5 of the MML book and am currently self-studying probability theory (through the STAT 110 lectures and textbook by Joe Blitzstein). I have done a few beginner ML projects with TensorFlow and scikit-learn, but most of the work was EDA and feature engineering, while the ML model itself was just a black box that I plug and chug.

So now I want to learn how to implement ML models from scratch. I've been skimming ISLP, which many people online recommend, but while it covers the mathematical equations involved, I don't really get to implement them: the labs mostly import an already-implemented model and then plug and chug. So I am looking at ESL, which I believe is the more detailed and mathematically rigorous version of ISL. However, it has no labs or code-alongs to ease beginners in (which I somewhat understand, given the book's intended audience). Another option I am looking at is mlcourse.ai, which covers the mathematics and has some labs/code-alongs, but it doesn't seem to span as many subjects as ESL does.

Given these options, I am unsure which to pick. Should I first finish my self-study of probability theory and then Chapters 6-8 of MML? Should I do ISLP first, or go straight into ESL? Or maybe mlcourse.ai first, then ESL? Or should I just do an ML course/book alongside the maths? In addition, there is also the data science and feature engineering side, which I wonder if I should study more. Sorry if this seems like a mess; there are just so many things to ML that I am kind of overwhelmed.
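For what it's worth, "from scratch" can start very small. Below is a minimal sketch (assuming NumPy and scikit-learn are installed; all variable names are illustrative) of the kind of exercise the poster is describing: ordinary least squares fit by hand with gradient descent, next to the one-line library call that the ISLP labs lean on.

```python
# Minimal "from scratch" vs. "plug and chug" comparison, assuming NumPy
# and scikit-learn are available. Variable names are illustrative.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))                  # 100 samples, 3 features
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w + rng.normal(scale=0.1, size=100)

# From scratch: minimize the mean squared error ||Xw - y||^2 / n by gradient descent.
w = np.zeros(3)
lr = 0.05
for _ in range(500):
    grad = 2 * X.T @ (X @ w - y) / len(y)      # gradient of the MSE
    w -= lr * grad
print("from scratch:", w)

# Plug and chug: the same model as a black box.
from sklearn.linear_model import LinearRegression
print("scikit-learn:", LinearRegression(fit_intercept=False).fit(X, y).coef_)
```

Working through ESL's derivations and re-implementing each model this way is one common route to the implementation practice the ISLP labs skip.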

How to Start Research in Computer Science & AI in 2025 – A Modernized Framework
reddit
LLM Vibe Score: 0
Human Vibe Score: 1
somdipdey · This week

How to Start Research in Computer Science & AI in 2025 – A Modernized Framework

Over a decade ago, I wrote two articles, "A Beginner’s Guide to Computer Science Research" and "How to Start a Research Work in Computer Science: A Framework for Beginners", which have been used at several universities around the world for the same purpose. These articles aimed to help students and early-career researchers navigate the complexities of academic research in computer science. However, since 2014, the research landscape has changed dramatically with the rise of AI, automation, and powerful collaborative tools.

Now, in 2025, starting research in computer science and AI is more accessible than ever. With AI-powered research assistants, open-access repositories, and real-time collaborative platforms, researchers can work more efficiently and focus on innovation. I recently published an updated guide in The Times of India, presenting a modern "Eight-Step Approach to Research" framework that integrates the latest methodologies and tools for AI and CS research. This framework is designed to help students and researchers independently explore their chosen topics while leveraging cutting-edge technology.

If you’re curious about how to streamline your research workflow, enhance your literature review process, and effectively collaborate in the AI research space, check out the article here: 🔗 How to Start a Research Work in Computer Science and AI in 2025 – An Updated Framework

[Image: block diagram of the "Eight-Step Approach to Research" in 2025]

Would love to hear thoughts from the ML research community—what tools and techniques do you use to make research more efficient in 2025? Let’s discuss! 🚀

[R] OS-Copilot: Towards Generalist Computer Agents with Self-Improvement - Shanghai AI Laboratory 2024
reddit
LLM Vibe Score: 0
Human Vibe Score: 1
Singularian2501 · This week

[R] OS-Copilot: Towards Generalist Computer Agents with Self-Improvement - Shanghai AI Laboratory 2024

Paper: https://arxiv.org/abs/2402.07456
GitHub: https://github.com/OS-Copilot/FRIDAY

Abstract: Autonomous interaction with the computer has been a longstanding challenge with great potential, and the recent proliferation of large language models (LLMs) has markedly accelerated progress in building digital agents. However, most of these agents are designed to interact with a narrow domain, such as a specific piece of software or a website. This narrow focus constrains their applicability for general computer tasks. To this end, we introduce OS-Copilot, a framework to build generalist agents capable of interfacing with comprehensive elements in an operating system (OS), including the web, code terminals, files, multimedia, and various third-party applications. We use OS-Copilot to create FRIDAY, a self-improving embodied agent for automating general computer tasks. On GAIA, a general AI assistants benchmark, FRIDAY outperforms previous methods by 35%, showcasing strong generalization to unseen applications via skills accumulated from previous tasks. We also present numerical and quantitative evidence that FRIDAY learns to control and self-improve on Excel and PowerPoint with minimal supervision. Our OS-Copilot framework and empirical findings provide infrastructure and insights for future research toward more capable and general-purpose computer agents.
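To make the "self-improvement via accumulated skills" idea concrete, here is a minimal, hypothetical sketch of such a loop. It is not the paper's implementation (see the linked GitHub repo for that); the skills here are plain Python callables, and `synthesize_skill` is a stub standing in for an LLM writing and verifying a new tool.

```python
# Hypothetical sketch of a skill-accumulating agent loop; not the
# paper's actual code. The LLM planner is stubbed out.
from typing import Callable, Dict, Optional

class SkillLibrary:
    """Stores tools the agent has already synthesized and verified."""
    def __init__(self) -> None:
        self._skills: Dict[str, Callable[[], str]] = {}

    def find(self, task: str) -> Optional[Callable[[], str]]:
        # Naive substring lookup; a real agent would use semantic retrieval.
        return next((s for name, s in self._skills.items() if name in task), None)

    def add(self, name: str, skill: Callable[[], str]) -> None:
        self._skills[name] = skill

def synthesize_skill(task: str) -> Callable[[], str]:
    # Stand-in for asking an LLM to write, test, and wrap a new tool.
    return lambda: f"(newly synthesized behavior for: {task})"

def run(task: str, library: SkillLibrary) -> str:
    skill = library.find(task)
    if skill is None:                  # unseen task: self-improve
        skill = synthesize_skill(task)
        library.add(task, skill)       # accumulate for later reuse
    return skill()

lib = SkillLibrary()
print(run("rename all files in a folder", lib))  # synthesized on first encounter
print(run("rename all files in a folder", lib))  # reused from the library
```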

Hot Take: Not all your startups need AI forced into them
reddit
LLM Vibe Score: 0
Human Vibe Score: 1
bitorsic · This week

Hot Take: Not all your startups need AI forced into them

I'm a final-year Computer Engineering student, hence applying for jobs all around. There's a particular trend I've noticed with startups that are coming up these days: even for the absolute basic stuff they'll use 'AI', and they'll think they built something 'revolutionary'. No. You're breaking your product in ways you don't realise.

An example that even some well-established companies are guilty of: AI chatbots. You absolutely don't need them; they're an entire gimmick. If you really wanna implement a chatbot, connect the user to an actual person on your end, which I think is not possible at the 'startup' stage: you'd need employees who can handle user queries in real time. If the user really is stuck, let them use the 'Contact Us' page. A really close relative of mine is very vocal about the frustration he faces whenever he tries to use the AI chatbot on any well-known e-com website. The only case for an AI chatbot that makes sense is when it directs the customer to an actual customer support rep once none of the AI's solutions are working for them. Even then, implementing a search page for FAQs is extremely easy and user-friendly.

Another example: AI interviewers. I recently interviewed for a startup, and their whole interviewing process was AI'zed?!?! No real person at the other end; I was answering their questions, which were in video format. They even had a 'mascot' / 'AI interviewer' avatar designed by an AI (AI-ception???). This mascot just text-to-speech'ed all the questions so I could rewind and hear what I missed. And I had to record video and audio to answer these questions on their platform itself. The entire interview process could just have been a questionnaire; or, if you're really concerned about the integrity of the interviewee, take a few minutes out of your oh-so-busy schedule as a startup owner, at least for hiring the employees who will make the most impact on your product going ahead. I say the most impact because (at least as a developer) the work done by these employees will define how robust your product is and how easily other features can be integrated into the codebase. Trust me, refactoring code later on will only cost you time and money; those resources would be more useful in other departments of your startup. The only use case I see for an AI interviewer is preparing for an actual interview, provided feedback is given to the user at the earliest, which you don't need to worry about as a startup owner.

So yeah, you're probably better off without integrating AI into your product. Thank you for reading.

TL;DR: The title. I know AI is the new thing and gets everyone drooling, but for the love of God, just focus on what your startup does best and put real people behind it. Integrating AI without human intervention is as good as a broken product. Do your hiring yourself, or through real people; the people you hire at an early stage will define your growth ahead.
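For what it's worth, the one chatbot pattern the post endorses (cheap canned answers first, a human as soon as they fail) needs very little machinery. A minimal sketch, where the `FAQ` table and all function names are purely illustrative:

```python
# Minimal escalation sketch: answer from a static FAQ when possible,
# hand off to a human otherwise. All names here are illustrative.
from typing import Optional

FAQ = {
    "reset password": "Use the 'Forgot password' link on the login page.",
    "refund": "Refunds are processed within 5 business days.",
}

def search_faq(question: str) -> Optional[str]:
    q = question.lower()
    for key, answer in FAQ.items():
        # Match when every keyword of the FAQ entry appears in the question.
        if all(word in q for word in key.split()):
            return answer
    return None

def handle(question: str) -> str:
    answer = search_faq(question)
    if answer is not None:
        return answer
    # No canned answer fits: escalate to a person instead of letting a
    # model guess, which is the handoff the post argues for.
    return "Connecting you to a support representative..."

print(handle("How do I reset my password?"))   # answered by the FAQ
print(handle("My order arrived damaged"))      # escalated to a human
```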

The Drawing of the Three - Once you look through the veil, nothing is the same again. (I will not promote)
reddit
LLM Vibe Score: 0
Human Vibe Score: 0.333
Tim-Sylvester · This week

The Drawing of the Three - Once you look through the veil, nothing is the same again. (I will not promote)

Originally published Nov 5, 2024

In my last post, I talked about assembling a series of filters to use to view the startup landscape, which led me to a few conclusions about what opportunities I should pursue. What did I see through those filters? What I saw through the moire pattern of those two lists overlaid on one another is what I think will be the third great monetization strategy for the internet, matching the pattern of:

web1 => ad monetization
web2 => subscription monetization
web3 => for AI, neither of those works anymore, which demands something new. But what?

Well, that’s the important part, isn’t it? Should I just up and tell you? Yawn. The climax of a movie is at the climax; if they tell you the crux at the beginning, it’s a lot less fun (usually).

The standard bearer for web1 and ads was Google (with countless followers), and essentially every website adopted that model for its first pass at content monetization. Google has been… let’s call it fairly successful… so it’s not a bad way to look at things. How many websites live and die by selling advertising?

The standard bearers for web2 and subscriptions were Salesforce (for B2B SaaS) and Netflix (for B2C SaaS), with countless followers, to the extent that SaaS has been the dominant startup monetization thesis for the last 15+ years. It’s more old and tired by now than most American politicians, but how many websites live and die by people entering payment details for a monthly or annual subscription?

Evidence proves those models for web1 and web2 worked well enough that countless businesses depend on them, and countless fortunes have been made and lost surfing the waves, or crashing against the shorelines, of ads and subs. But it’s also apparent (to me, at least) that now that AI is the dominant startup thesis, neither ads nor subs are going to prevail in an AI-centered world, and for one simple reason: those monetization strategies are for humans, and AI bots are not humans.

Changing Environments Require Changing Strategies

Every so often, there’s a fundamental shift that demands everything in the ecosystem adapt to a new habitation strategy to survive. We’ve seen this repeatedly across Earth’s ecology (for instance, introducing free oxygen to the atmosphere produced respiration while destroying the life forms that existed before oxygen permeated the atmosphere), and across human society (for example, how nuclear bombs changed war, and how drones are changing it again; for less violent examples, consider the adoption of computers and the subsequent adoption of smartphones). Now the ecosystem of the internet has changed irrevocably, opening up countless new and interesting niches to occupy.

Humans may see an ad and buy something stupid (or, occasionally, not-stupid), but an AI won’t unless it’s programmed to. And subscriptions are designed for humans consuming content at a human rate, not for an AI that can choke down an entire database of content (whatever it may be) at whatever speed the servers can manage. Changing conditions require changing strategies. It was clear to me that:

1. The introduction of AI bots to the internet ecosystem was, is, and will be massively disruptive for a very long time.
2. The internet population of bots already exceeds humans and is growing faster than the human population.
3. The two dominant monetization strategies are not relevant to bots.
4. That disruption of expectations across the ecosystem demands a third strategy, a new strategy to handle a massive change in an existing system.

And that strategy needs to accommodate, support, and monetize the new demands from the vast armies of new participants in the internet ecology. Therefore, a method that converts bots from an expense into a revenue source would become a dominant monetization strategy, and whoever owns that strategy will be a dominant player in the internet ecosystem.

Setting the realization of semi-practical, semi-useful AI against a backdrop of technology cycles that have, in the distant past (in internet terms), produced ads and subs, and more recently produced enormous investment in fintech and crypto, I started to see a path that felt like it would grow over time to become a new monetization strategy that works in the AI ecosystem.

Sun Tzu had a couple drinks, saw a couple things…

There are at least, and possibly only, two things I know about fighting: you cannot fight the tide, and it’s much harder to fight an uphill battle. If my whole thesis this go-around was to go with the flow, and that trickle of insight was leading me from my overlook along a roaring flow of cash coursing through a valley filled with AI startups, where exactly would it lead me? Most rivers lead to the sea eventually, but they can take winding paths, and sometimes the quickest route from the mountain to the sea isn’t to follow the river, but to understand where the river leads and go there instead. Getting a view from on high can save you a lot of time on your journey.

But before I get to where the path has led (or is leading), which will explain the objective I’ve identified and the deliverables I have to produce to reach it, let’s talk about a few of the steps on the path I’ve been taking that highlight the process I followed. I figure if I explain the steps I’m taking, as I’m taking them, it may be easier for people who haven’t trod this route before to follow me and understand how to carve their own course towards their own objectives. And maybe the real treasure will be the friends we make along the way. (I will not promote)

Behind the scenes: fundraising the pre-seed of an AI startup
reddit
LLM Vibe Score: 0
Human Vibe Score: 1
Consistent-Wafer7325 · This week

Behind the scenes: fundraising the pre-seed of an AI startup

A bit of feedback from our journey at our AI startup. We started prototyping stuff around agentic AI last winter, with very cool underlying tech research based on some academic papers (I can send you links if you're interested in LLM orchestration). I'm a serial entrepreneur with 2x exits; nothing fancy, but enough to keep going into the next topic. This time, running an AI project has been a bit different and unique due to the huge interest around the topic. Here are a few insights.

Jan–Mar: Research
Nothing was serious, just a side project with a friend on weekends (the guy became our lead SWE). The market was promising and we had the conviction that our tech could be a game changer in computer systems workflows.

March–April: Market Waking Up
Devin published their pre-seed $20m fundraising led by Founders Fund; they paved the market with legitimacy. I decided to launch some coffee meetings with a few angels in my network. Interest confirmed. Back to work on some more serious early prototyping; hard work started here.

April–May: YC S24 (Fail)
Pumped up by our prospective angels and the market waking up on the agentic topic, I applied to YC as a solo founder (still looking for funds and co-founders). Eventually got rejected (no co-founder and not US-based).

May–July: VC Dance (Momentum 1)
Almost at the same time we got rejected from YC, I got introduced to key members of the VC community by one of our prospective angels. Interest went crazy... tons of calls. Brace yourself here: we probably met 30–40 funds (plus angels). Got strong interest from 4–5 of them (3 to 5 meetings each), ultimately closed 1, plus some interest that might convert later at the next stage. The legend of AI being hype is true. The majority of our calls came purely by word of mouth, with lots of inbounds; people who didn't even have the deck would book a call within 48h of saying hi. Also lots of "tourists," just looking because of AI but with no strong opinion on the subject to move further. The hearsay about 90% rejection is true too. You'll hear a lot of nos, ending some days exhausted and unmotivated.

End July: Closing, the Hard Part
The VC roadshow is an art you need to master: you need to keep momentum high enough and look over-subscribed. Good pre-seed VC deals are over-competitive, and good funds focus only on them; they will have opportunities to catch up on lost chances at the seed stage later. We (arduously) managed to close our 18–24-month budget with 1 VC, a few angels, and some state-guaranteed debt. Cash in the bank just in time for payday in August (don't underestimate processing time).

Now: Launching and Prepping the Seed Round
We're now in our first weeks of go-to-market, with a lot of uncertainty but a very ambitious plan ahead. The good part of having met TONS of VCs during the pre-seed roadshow is that we probably met our future lead investors among them. What looked like a waste of time in the initial pre-seed VC meetings turned out to be very fruitful, helping us refine our strategy and assess the market more in depth (investors have a lot of insights, they meet a lot of people... that's their full-time job). We now have clear milestones and are heading to raise our seed round by end of year/Q1 if the stars stay aligned :)

Don't give up, the show must go on.

Building in the open with Founder University - I will not promote
reddit
LLM Vibe Score: 0
Human Vibe Score: 1
Tim-Sylvester · This week

Building in the open with Founder University - I will not promote

Published Oct 30, 2024

I am on my fifth startup. I ran the last one for a decade; that’s a whole story. A hell of a story. But a different story. I’ll tell it to you when I can, but not right now. The one before that was an e-commerce site that did pretty well, but I didn’t love it. Before that were two service businesses. The first one I did for the love of the game; the second was an attempt to make people stop asking me to fix their computers by charging them outrageous prices, which backfired horribly when they were eager to pay. None are relevant except to say I’ve been around the block and have the scars to prove it.

When it was time to get back out there, I wanted to use all I’ve learned to do better. Before I talk about what those lessons produced, I’m going to talk about what those lessons were. Cause before effect, after all.

One thing I wanted to do better this time was pattern matching: making the startup look the way that the industry and investors “expect” a startup to look. My last startup was an awesome idea with awesome tech (still is, but like I said, another story), but that one didn’t match patterns. It didn’t match investor patterns, industry buying patterns, or patterns of existing, immediate, recognized and admitted needs. Because it didn’t “look” right to anyone, everything about it was way harder than necessary. The “make it look right” approach runs the risk of building a cargo cult, imitating the trappings of something without understanding the essence of that something. But then again, a thing that looks like a knife is going to make a better knife than a thing that looks like a bowling ball, so sometimes just sharing apparent similarities can get you pretty far, even if it doesn’t get you all the way there. Like how mimicking someone’s accent makes it easier for them to understand you.

For this one, I wanted to adopt every tool, method, and pattern that I knew “the industry” wanted to see, to minimize the friction from development, go-to-market, scaling, and adoption, and to make investment optional (and, therefore, available if desired) instead of necessary (and, therefore, largely unavailable). That required establishing some expectations for successful patterns I could match against.

What patterns am I matching to? Here’s a general sketch of my pattern-matching thought process:

1. Software first and software only. It’s the easiest industry to start a business in, with the lowest startup costs and the easiest customer acquisition.
2. I wanted to build software for an element of the industry that’s actively emerging (and therefore has room to grow) and part of an optimistic investor thesis (and therefore has a cohort of people who are intent on injecting capital into the market to help it grow).
3. It needs to fill a niche that is underexplored (low competition) and highly potent (lots of opportunity), while being aligned to recognized and emerging needs within the industry (readily adopted).
4. I wanted it to have evidence supporting the business thesis that proves the demand exists, but demonstrates that the demand is unanswered (as of yet) by sufficient or adequate supply.*
5. I wanted the lowest number of dominoes to line up and tip for everything to work correctly; the more dominoes in the line, the less likely the last one will fall.
6. I wanted to implement modern toolsets for everything, wherever possible.
7. I wanted to obey the maxim, “When there’s a gold rush, don’t mine the gold, sell the picks and shovels.”
8. Whatever I chose would need to produce cash flow almost immediately, with minimal development time or go-to-market delays, because the end of ZIRP killed the “trust me bro” investment thesis predominant over the last 15 years.
9. I wanted to match to YC best practices, not because YC can predict what will definitely work, but because they’ve churned through so many startups in the last 15 years that they have a good sense of what will definitely not work.
10. And I wanted to build client-centric, because if my intent is to produce cash flow immediately, we need to get clients immediately, and if we need to get clients immediately, we need to focus on what clients need right now.

Extra credit: What’s the difference between a customer and a client?

*Note: Competition is awesome! Competition is validating and not scary, because competition proves a market exists. But competition, especially mature competition against an immature startup, makes it harder to break into a space. A first-mover advantage isn’t everything, but seeing demand before it’s sufficiently supplied is a great advantage if you’re capital-constrained or otherwise unproven. Think about how much money the first guy to sell fidget spinners or Silly Bandz made versus how much money the last guy to order a pallet of each made. Finding demand that already exists but is as yet insufficiently satisfied is a great place to start.

What opportunity spaces are most relevant? The industries and markets I chose to observe were:

- AI, because if I’m following a theme and pattern for today, it’s AI.
- Fintech, because cash is king, and fintech puts your hands on cash flow.
- Crypto/blockchain, because that’s the “new” fintech (or maybe the “old-new” fintech?), and crypto creates powerful incentives and capital formation strategies, along with a lot of flexibility for transaction systems.
- Tools, particularly unmet demand in tools, that enable these industries.

If you wanted to do some brief and simple homework, you could map each of those bullets to several of the numbered list items preceding them. The reasoning was pretty simplistic: AI is what people want to build and invest in now, while fintech and crypto/blockchain are what people were building and investing in during the last major investment thesis. That means there’s demand in the market for AI and AI-adjacent startups, while there’s a glut of underutilized and highly developed tools within fintech and crypto/blockchain, with a lot of motivated capital behind their adoption. When someone is thinking “I built this thing and not enough people are using it,” building something that uses their thing is a great way to find allies.

This rationale harnesses technology that is being built and financed now (which means it needs tools, support methods, and a lot of other “picks and shovels”), while leveraging technology that was recently built and financed and is eager for more widespread adoption of its existing toolkits, which makes it suitable for building the AI-adjacent tools that are in demand now. It’s like two harmonics producing constructive interference: it makes two waves into one larger wave, which gives me more momentum to surf against.

This was a learning process, and I iterated against my general paradigm repeatedly as I learned more. Neither of us has the patience to go through that in excruciating detail, so I’ll cover the highlights in my next post.

Extra credit answer: A customer gets a product; a client gets a service. Challenge: Is software a product or a service?

10y of product development, 2 bankruptcies, and 1 Exit — what next? [Extended Story]
reddit
LLM Vibe Score: 0
Human Vibe Score: 1
Slight-Explanation29 · This week

10y of product development, 2 bankruptcies, and 1 Exit — what next? [Extended Story]

10 years of obsessive pursuit from the bottom to impressive product-market fit and exit. Bootstrapping tech products as Software Developer and 3x Startup Founder (2 bankruptcies and 1 exit). Hi everyone, your motivation has inspired me to delve deeper into my story. So, as promised to some of you, I've expanded on it a bit more, along with my brief reflections. There are many founders, product creators, and proactive individuals, I’ve read many of your crazy stories and lessons so I decided to share mine and the lessons I learned from the bottom to impressive product-market fit and exit. I've spent almost the past 10 years building tech products as a Corporate Team Leader, Senior Software Developer, Online Course Creator, Programming Tutor, Head of Development/CTO, and 3x Startup Founder (2 bankruptcies, and 1 exit). And what next? good question... A brief summary of my journey: Chapter 1: Software Developer / Team Leader / Senior Software Developer I’ve always wanted to create products that win over users’ hearts, carry value, and influence users. Ever since my school days, I’ve loved the tech part of building digital products. At the beginning of school, I started hosting servers for games, blogs and internet forums, and other things that did not require much programming knowledge. My classmates and later even over 100 people played on servers that I hosted on my home PC. Later, as the only person in school, I passed the final exam in computer science. During my computer science studies, I started my first job as a software developer. It was crazy, I was spending 200–300 hours a month in the office attending also to daily classes. Yes, I didn’t have a life, but it truly was the fulfillment of my dreams. I was able to earn good money doing what I love, and I devoted fully myself to it. My key to effectively studying IT and growing my knowledge at rocket speed was learning day by day reading guides, building products to the portfolio, watching youtube channels and attending conferences, and even watching them online, even if I didn’t understand everything at the beginning. In one year we’ve been to every possible event within 400km. We were building healthcare products that were actually used in hospitals and medical facilities. It was a beautiful adventure and tons of knowledge I took from this place. That time I built my first product teams, hired many great people, and over the years became a senior developer and team leader. Even I convinced my study mates to apply to this company and we studied together and worked as well. Finally, there were 4 of us, when I left a friend of mine took over my position and still works there. If you’re reading this, I’m sending you a flood of love and appreciation. I joined as the 8th person, and after around 4 years, when I left hungry for change, there were already over 30 of us, now around 100. It was a good time, greetings to everyone. I finished my Master’s and Engineering degrees in Computer Science, and it was time for changes. Chapter 2: 1st time as a Co-founder — Marketplace In the meantime, there was also my first startup (a marketplace) with four of my friends. We all worked on the product, each of us spent thousands of hours, after hours, entire weekends… and I think finally over a year of work. As you might guess, we lacked the most important things: sales, marketing, and product-market fit. We thought users think like us. 
We all also worked commercially, so the work went very smoothly, but we didn’t know what we should do next with it… Finally, we didn’t have any customers, but you know what, I don’t regret it, a lot of learning things which I used many times later. The first attempts at validating the idea with the market and business activities. In the end, the product was Airbnb-sized. Landing pages, listings, user panels, customer panels, admin site, notifications, caches, queues, load balancing, and much more. We wanted to publish the fully ready product to the market. It was a marketplace, so if you can guess, we had to attract both sides to be valuable. “Marketplace” — You can imagine something like Uber, if you don’t have passengers it was difficult to convince taxi drivers, if you don’t have a large number of taxi drivers you cannot attract passengers. After a year of development, we were overloaded, and without business, marketing, sales knowledge, and budget. Chapter 3: Corp Team Lead / Programming Tutor / Programming Architecture Workshop Leader Working in a corporation, a totally different environment, an international fintech, another learning experience, large products, and workmates who were waiting for 5 pm to finish — it wasn’t for me. Very slow product development, huge hierarchy, being an ant at the bottom, and low impact on the final product. At that time I understood that being a software developer is not anything special and I compared my work to factory worker. Sorry for that. High rates have been pumped only by high demand. Friends of mine from another industry do more difficult things and have a bigger responsibility for lower rates. That’s how the market works. This lower responsibility time allowed for building the first online course after hours, my own course platform, individual teaching newbies programming, and my first huge success — my first B2C customers, and B2B clients for workshops. I pivoted to full focus on sales, marketing, funnels, advertisements, demand, understanding the market, etc. It was 10x easier than startups but allowed me to learn and validate my conceptions and ideas on an easier market and showed me that it’s much easier to locate their problem/need/want and create a service/product that responds to it than to convince people of your innovative ideas. It’s just supply and demand, such a simple and basic statement, in reality, is very deep and difficult to understand without personal experience. If you’re inexperienced and you think you understand, you don’t. To this day, I love to analyze this catchword in relation to various industries / services / products and rediscover it again and again... While writing this sentence, I’m wondering if I’m not obsessed. Chapter 4: Next try — 2nd time as a founder — Edtech Drawing upon my experiences in selling services, offering trainings, and teaching programming, I wanted to broaden my horizons, delve into various fields of knowledge, involve more teachers, and so on. We started with simple services in different fields of knowledge, mainly relying on teaching in the local area (without online lessons). As I had already gathered some knowledge and experience in marketing and sales, things were going well and were moving in the right direction. The number of teachers in various fields was growing, as was the number of students. I don’t remember the exact statistics anymore, but it was another significant achievement that brought me a lot of satisfaction and new experiences. 
As you know, I’m a technology lover and couldn’t bear to look at manual processes — I wanted to automate everything: lessons, payments, invoices, customer service, etc. That’s when I hired our first developers (if you’re reading this, I’m sending you a flood of love — we spent a lot of time together and I remember it as a very fruitful and great year) and we began the process of tool and automation development. After a year we had really extended tools for students, teachers, franchise owners, etc. We had really big goals, we wanted to climb higher and higher. Maybe I wouldn’t even fully call it Startup, as the client was paying for the lessons, not for the software. But it gave us positive income, bootstrap financing, and tool development for services provided. Scaling this model was not as costless as SaaS because customer satisfaction was mainly on the side of the teacher, not the quality of the product (software). Finally, we grew to nearly 10 people and dozens of teachers, with zero external funding, and almost $50k monthly revenue. We worked very hard, day and night, and by November 2019, we were packed with clients to the brim. And as you know, that’s when the pandemic hit. It turned everything upside down by 180 degrees. Probably no one was ready for it. With a drastic drop in revenues, society started to save. Tired from the previous months, we had to work even harder. We had to reduce the team, change the model, and save what we had built. We stopped the tool’s development and sales, and with the developers, we started supporting other product teams to not fire them in difficult times. The tool worked passively for the next two years, reducing incomes month by month. With a smaller team providing programming services, we had full stability and earned more than relying only on educational services. At the peak of the pandemic, I promised myself that it was the last digital product I built… Never say never… Chapter 5: Time for fintech — Senior Software Developer / Team Lead / Head of Development I worked for small startups and companies. Building products from scratch, having a significant impact on the product, and complete fulfillment. Thousands of hours and sacrifices. This article mainly talks about startups that I built, so I don’t want to list all the companies, products, and applications that I supported as a technology consultant. These were mainly start-ups with a couple of people up to around 100 people on board. Some of the products were just a rescue mission, others were building an entire tech team. I was fully involved in all of them with the hope that we would work together for a long time, but I wasn’t the only one who made mistakes when looking for a product-market fit. One thing I fully understood: You can’t spend 8–15 hours a day writing code, managing a tech team, and still be able to help build an audience. In marketing and sales, you need to be rested and very creative to bring results and achieve further results and goals. If you have too many responsibilities related to technology, it becomes ineffective. I noticed that when I have more free time, more time to think, and more time to bounce the ball against the wall, I come up with really working marketing/sales strategies and solutions. It’s impossible when you are focused on code all day. You must know that this chapter of my life was long and has continued until now. 
Chapter 6: 3rd time as a founder — sold Never say never… right?\\ It was a time when the crypto market was really high and it was really trending topic. You know that I love technology right? So I cannot miss the blockchain world. I had experience in blockchain topics by learning on my own and from startups where I worked before. I was involved in crypto communities and I noticed a “starving crowd”. People who did things manually and earned money(crypto) on it.I found potential for building a small product that solves a technological problem. I said a few years before that I don’t want to start from scratch. I decided to share my observations and possibilities with my good friend. He said, “If you gonna built it, I’m in”. I couldn’t stop thinking about it. I had thought and planned every aspect of marketing and sales. And you know what. On this huge mindmap “product” was only one block. 90% of the mindmap was focused on marketing and sales. Now, writing this article, I understood what path I went from my first startup to this one. In the first (described earlier) 90% was the product, but in the last one 90% was sales and marketing. Many years later, I did this approach automatically. What has changed in my head over the years and so many mistakes? At that time, the company for which I provided services was acquired. The next day I got a thank you for my hard work and all my accounts were blocked. Life… I was shocked. We were simply replaced by their trusted technology managers. They wanted to get full control. They acted a bit unkindly, but I knew that they had all my knowledge about the product in the documentation, because I’m used to drawing everything so that in the moment of my weakness (illness, whatever) the team could handle it. That’s what solid leaders do, right? After a time, I know that these are normal procedures in financial companies, the point is that under the influence of emotions, do not do anything inappropriate. I quickly forgot about it, that I was brutally fired. All that mattered was to bring my plan to life. And it has been started, 15–20 hours a day every day. You have to believe me, getting back into the game was incredibly satisfying for me. I didn’t even know that I would be so excited. Then we also noticed that someone was starting to think about the same product as me. So the race began a game against time and the market. I assume that if you have reached this point, you are interested in product-market fit, marketing, and sales, so let me explain my assumptions to you: Product: A very very small tool that allowed you to automate proper tracking and creation of on-chain transactions. Literally, the whole app for the user was located on only three subpages. Starving Crowd: We tapped into an underserved market. The crypto market primarily operates via communities on platforms like Discord, Reddit, Twitter, Telegram, and so on. Therefore, our main strategy was directly communicating with users and demonstrating our tool. This was essentially “free marketing” (excluding the time we invested), as we did not need to invest in ads, promotional materials, or convince people about the efficacy of our tool. The community could directly observe on-chain transactions executed by our algorithms, which were processed at an exceptionally fast rate. This was something they couldn’t accomplish manually, so whenever someone conducted transactions using our algorithm, it was immediately noticeable and stirred a curiosity within the community (how did they do that!). 
Tests: I conducted the initial tests of the application on myself — we had already invested significantly in developing the product, but I preferred risking my own resources over that of the users. I provided the tool access to my wallet, containing 0.3ETH, and went to sleep. Upon waking up, I discovered that the transactions were successful and my wallet had grown to 0.99ETH. My excitement knew no bounds, it felt like a windfall. But, of course, there was a fair chance I could have lost it too. It worked. As we progressed, some users achieved higher results, but it largely hinged on the parameters set by them. As you can surmise, the strategy was simple — buy low, sell high. There was considerable risk involved. Churn: For those versed in marketing, the significance of repeat visitors cannot be overstated. Access to our tool was granted only after email verification and a special technique that I’d prefer to keep confidential. And this was all provided for free. While we had zero followers on social media, we saw an explosion in our email subscriber base and amassed a substantial number of users and advocates. Revenue Generation: Our product quickly gained popularity as we were effectively helping users earn — an undeniable value proposition. Now, it was time to capitalize on our efforts. We introduced a subscription model charging $300 per week or $1,000 per month — seemingly high rates, but the demand was so intense that it wasn’t an issue. Being a subscriber meant you were prioritized in the queue, ensuring you were among the first to reap benefits — thus adding more “value”. Marketing: The quality of our product and its ability to continually engage users contributed to it achieving what can best be described as viral. It was both a source of pride and astonishment to witness users sharing charts and analyses derived from our tool in forum discussions. They weren’t actively promoting our product but rather using screenshots from our application to illustrate certain aspects of the crypto world. By that stage, we had already assembled a team to assist with marketing, and programming, and to provide round-the-clock helpdesk support. Unforgettable Time: Despite the hype, my focus remained steadfast on monitoring our servers, their capacity, and speed. Considering we had only been on the market for a few weeks, we were yet to implement alerts, server scaling, etc. Our active user base spanned from Japan to the West Coast of the United States. Primarily, our application was used daily during the evenings, but considering the variety of time zones, the only time I could afford to sleep was during the evening hours in Far Eastern Europe, where we had the least users. However, someone always needed to be on guard, and as such, my phone was constantly by my side. After all, we couldn’t afford to let our users down. We found ourselves working 20 hours a day, catering to thousands of users, enduring physical fatigue, engaging in talks with VCs, and participating in conferences. Sudden Downturn: Our pinnacle was abruptly interrupted by the war in Ukraine (next macroeconomic shot straight in the face, lucky guy), a precipitous drop in cryptocurrency value, and swiftly emerging competition. By this time, there were 5–8 comparable tools had infiltrated the market. It was a challenging period as we continually stumbled upon new rivals. They immediately embarked on swift fundraising endeavors — a strategy we overlooked, which in retrospect was a mistake. 
Although our product was superior, the competitors’ rapid advancement and our insufficient funds for expeditious scaling posed significant challenges. Nonetheless, we made a good decision. We sold the product (exit) to competitors. The revenue from “exit” compensated for all the losses, leaving us with enough rest. We were a small team without substantial budgets for rapid development, and the risk of forming new teams without money to survive for more than 1–2 months was irresponsible. You have to believe me that this decision consumed us sleepless nights. Finally, we sold it. They turned off our app but took algorithms and users. Whether you believe it or not, after several months of toiling day and night, experiencing burnout, growing weary of the topic, and gaining an extra 15 kg in weight, we finally found our freedom… The exit wasn’t incredibly profitable, but we knew they had outdone us. The exit covered all our expenses and granted us a well-deserved rest for the subsequent quarter. It was an insane ride. Despite the uncertainty, stress, struggles, and sleepless nights, the story and experience will remain etched in my memory for the rest of my life. Swift Takeaways: Comprehending User Needs: Do you fully understand the product-market fit? Is your offering just an accessory or does it truly satisfy the user’s needs? The Power of Viral Marketing: Take inspiration from giants like Snapchat, ChatGPT, and Clubhouse. While your product might not attain the same scale (but remember, never say never…), the closer your concept is to theirs, the easier your journey will be. If your user is motivated to text a friend saying, “Hey, check out how cool this is” (like sharing ChatGPT), then you’re on the best track. Really. Even if it doesn’t seem immediately evident, there could be a way to incorporate this into your product. Keep looking until you find it. Niche targeting — the more specific and tailored your product is to a certain audience, the easier your journey will be People love buying from people — establishing a personal brand and associating yourself with the product can make things easier. Value: Seek to understand why users engage with your product and keep returning. The more specific and critical the issue you’re aiming to solve, the easier your path will be. Consider your offerings in terms of products and services and focus on sales and marketing, regardless of personal sentiments. These are just a few points, I plan to elaborate on all of them in a separate article. Many products undergo years of development in search of market fit, refining the user experience, and more. And guess what? There’s absolutely nothing wrong with that. Each product and market follows its own rules. Many startups have extensive histories before they finally make their mark (for instance, OpenAI). This entire journey spanned maybe 6–8 months. I grasped and capitalized on the opportunity, but we understood from the start that establishing a startup carried a significant risk, and our crypto product was 10 times riskier. Was it worth it? Given my passion for product development — absolutely. Was it profitable? — No, considering the hours spent — we lose. Did it provide a stable, problem-free life — nope. Did this entire adventure offer a wealth of happiness, joy, and unforgettable experiences — definitely yes. One thing is certain — we’ve amassed substantial experience and it’s not over yet :) So, what lies ahead? 
Chapter 7: Reverting to the contractor, developing a product for a crypto StartupReturning to the past, we continue our journey… I had invested substantial time and passion into the tech rescue mission product. I came on board as the technical Team Leader of a startup that had garnered over $20M in seed round funding, affiliated with the realm of cryptocurrencies. The investors were individuals with extensive backgrounds in the crypto world. My role was primarily technical, and there was an abundance of work to tackle. I was fully immersed, and genuinely devoted to the role. I was striving for excellence, knowing that if we secured another round of financing, the startup would accelerate rapidly. As for the product and marketing, I was more of an observer. After all, there were marketing professionals with decades of experience on board. These were individuals recruited from large crypto-related firms. I had faith in them, kept an eye on their actions, and focused on my own responsibilities. However, the reality was far from satisfactory. On the last day, the principal investor for the Series A round withdrew. The board made the tough decision to shut down. It was a period of intense observation and gaining experience in product management. This was a very brief summary of the last 10 years. And what next? (Last) Chapter 8: To be announced — Product Owner / Product Consultant / Strategist / CTO After spending countless hours and days deliberating my next steps, one thing is clear: My aspiration is to continue traversing the path of software product development, with the hopeful anticipation that one day, I might ride the crest of the next big wave and ascend to the prestigious status of a unicorn company. I find myself drawn to the process of building products, exploring product-market fit, strategizing, engaging in software development, seeking out new opportunities, networking, attending conferences, and continuously challenging myself by understanding the market and its competitive landscape. Product Owner / Product Consultant / CTO / COO: I’m not entirely sure how to categorize this role, as I anticipate that it will largely depend on the product to which I will commit myself fully. My idea is to find one startup/company that wants to build a product / or already has a product, want to speed up, or simply doesn’t know what’s next. Alternatively, I could be a part of an established company with a rich business history, which intends to invest in digitization and technological advancements. The goal would be to enrich their customer experience by offering complementary digital products Rather than initiating a new venture from ground zero with the same team, I am receptive to new challenges. I am confident that my past experiences will prove highly beneficial for the founders of promising, burgeoning startups that already possess a product, or are in the initial phases of development. ‘Consultant’ — I reckon we interpret this term differently. My aim is to be completely absorbed in a single product, crafting funnels, niches, strategies, and all that is necessary to repeatedly achieve the ‘product-market fit’ and significant revenue. To me, ‘consultant’ resonates more akin to freelancing than being an employee. My current goal is to kickstart as a consultant and aide, dealing with facilitating startups in their journey from point A to B. 
Here are two theoretical scenarios to illustrate my approach: Scenario 1 (starting from point A): You have a product but struggle with marketing, adoption, software, strategy, sales, fundraising, or something else. I conduct an analysis and develop a strategy to reach point B. I take on the “dirty work” and implement the necessary changes, including potential pivots or shifts (going all-in), to guide the product to point B. The goal is to reach point B, which could mean achieving a higher valuation, expanding the user base, increasing sales, or generating monthly revenue, among other metrics. Scenario 2 (starting from point A): You have a plan or idea but face challenges with marketing, adoption, strategy, software, sales, fundraising, or something else. I analyze the situation and devise a strategy to reach point B. I tackle the necessary tasks, build the team, and overcome obstacles to propel the product to point B. I have come across the view that finding the elusive product-market fit is the job of the founder, and it’s hard for me to disagree. However, I believe that my support and experience can save money, many failures, and, most importantly, time. I spent a great deal of time learning from my mistakes, enduring failure after failure, often with no one to ask for support or an opinion, which is why I offer my help. Saving even a couple of years, realistically speaking, seems like a value I’m eager to provide… I invite you to share your thoughts and insights on these scenarios :) Closing Remarks: I appreciate your time and effort in reaching this point. This has been my journey, and I wouldn’t change it for the world. I had an extraordinary adventure, and now I’m ready for the next exciting battle with the market and new software products. While my entire narrative is centered around startups, especially the ones I personally built, I’m planning to share more insights drawn from all of my experiences, not just those as a co-founder. If you’re currently developing your product, or even just considering the idea, I urge you to reach out to me. Perhaps together we can create something monumental :) Thank you for your time and insights. I eagerly look forward to engaging in discussions and hearing your viewpoints. Please remember to like and subscribe. Nothing motivates me to write more than positive feedback :) Matt.

The Cold-Calling AI Project I'm Working On Just Got Some Angel Investment!
reddit
LLM Vibe Score0
Human Vibe Score1
GrowthGetThis week

The Cold-Calling AI Project I'm Working On Just Got Some Angel Investment!

Hey y'all. The AI cold calling startup I've been working on for 3-4 months now just got a $2,500 angel investment, and we have 2 current customers, a credit card processing broker and a hospital equipment rental company based out of Texas. We have around $1,500 revenue so far, but we're having lots of trouble fulfilling the contracts because our tech just isn't "there" yet. I'm the Chief Tech Officer, and I'm also running some operations. The other main person in this is the CEO, who has a strong sales background and came up with the idea. I've been working purely remotely, and it's great having some income because I'm stuck at home because I'm disabled, basically... We're using 11labs, OpenAI, Google speech-to-text, and a sh\*tty online dialer right now to run the first MVP, which runs locally on our "botrunners'" computers, and we're developing a web app with Django (Python) + React (JavaScript). Our plan is, after we get the web app working better, to hire more botrunners for $3 per hour from countries like the Philippines and India, and we're going to try to track all the actions the botrunners take to be able to train the AI to run fully automated. The biggest problem we're facing right now with the tech is reducing latency: it started at 27 seconds to get a response, and I've been able to get it down to 6 seconds, but people are still hanging up. We're trying several ways to mitigate this, including playing pre-rendered speech like "Okay" or "As an artificial representative, I'm still learning to be quicker on the pickup. We appreciate your patience." One of the industries we want to target is international web development and digital marketing companies, and we want to use the bot to cold-call businesses to pitch them our services. The goal is to replace $30-an-hour cold-callers from the USA with $3-per-hour total-cost automation. Apparently the CEO was given a $5 million valuation from the strength of the MVP by a VC. Our investment so far was at a $300k valuation tho. It's exciting. Trying to get Twilio working to be able to make calls programmatically instead of using our hacky workaround. Let me know if you have any questions. I just wanted to share this awesome news!
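For anyone wanting to try the programmatic-calls piece mentioned above, here is a minimal, hedged sketch of outbound calling with Twilio's Python SDK. The numbers, webhook URL, and environment variable names are placeholders, not the startup's actual setup; it assumes a Twilio account and a TwiML webhook that tells the call what to say or play.

```python
# Minimal sketch: place an outbound call with Twilio's Python SDK.
# Assumes TWILIO_ACCOUNT_SID / TWILIO_AUTH_TOKEN are set in the environment
# and TWIML_URL points at a webhook returning TwiML (e.g. <Say> or <Play>).
import os
from twilio.rest import Client

client = Client(os.environ["TWILIO_ACCOUNT_SID"], os.environ["TWILIO_AUTH_TOKEN"])

call = client.calls.create(
    to="+15558675309",                     # prospect's number (placeholder)
    from_="+15017122661",                  # your Twilio number (placeholder)
    url="https://example.com/voice.xml",   # TwiML instructions for the call
)
print(call.sid)  # keep the SID to track status callbacks later
```

Swapping a hacky dialer for an API like this also makes it possible to pre-fetch the first audio response while the call is still ringing, which is one common way teams shave seconds off perceived latency.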

I started a Tech Startup, and I feel totally STUCK.
reddit
LLM Vibe Score0
Human Vibe Score1
BetAltruistic6556This week

I started a Tech Startup, and I feel totally STUCK.

I made "Visual Love," a Computer Vision/AI-driven matchmaking platform. The idea is that although appearance is one of the biggest factors for starting a relationship, current matchmaking services and dating apps do not have the capability to search for people based on appearance. On Visual Love, you can find your ideal match simply by uploading a picture of your "ideal type." Also, you can connect with someone who thinks of you as their ideal type, simply by uploading your own picture. Or, there might be a perfect (mutually ideal) match. I made this CV/AI algorithm to scan faces, retrieve facial features, and make it possible to find the closest match among millions of others in a second. On average, regular dating app users swipe 8000 times over 8 months until they find their love. On Visual Love, users can find one in a million just in a second. You can try the tech demo on the website if you want to (find the link through my LinkedIn at the bottom of the post; I have to follow the "I will not promote" rule.) I thought this app would have the best chance in Asia, as people care a lot more about appearance in Asia (especially Korea and Japan). Also, my nationality is Korean, and I speak both Korean and Japanese as fluently as I speak English. So I came to Korea, and pitched to a number of VC/AC firms in Korea and Japan, and two of them were typically intersted in making investment. However, they both required me to provide market validation: how much it would cost per user acquisition, how much each user would pay on average, and etc, even after I provided them with a 3-years financial projection including market research based on other dating apps. ​ Everything might be going just as expected, or even better than anticipated, but I'm feeling very stuck now. I am not a business expert, and I don't have much idea on how to proceed from here. The problem is, it wouldn't quite work as expected when there are not many users. If I start with a small group of users, it's not any better than any other dating app. Matching users within a small group doesn't quite reflect the values of Visual Love. So I figured a way around: making a game version of Visual Love targeting 100k to 500k users to work as an initial distribution channel. This version will include finding look-alike celebrities, and solving look-alike face puzzles, and etc. But now, the problem is, I cannot continue this project by myself. I have no social/financial support, and I'm running low on cash. Also, although I'm from Korea, I lived in many different countries. I did my undergraduate in New York (Columbia University) and all my friends are in the US. I don't feel very included here. I can't stop feeling frustrated and distressed :( I'm sure Visual Love can reshape the future of the matchmaking market. But, only if I can continue this project by getting the fund I require. I'm open to any advice, and if you're interested in providing any help or working with me, please contact me through LinkedIn. https://www.linkedin.com/in/don-lee-3853b1264/
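The post doesn't disclose the matching algorithm, but the usual way to get "closest match among millions in under a second" is to embed each face as a vector and query a nearest-neighbor index. A hedged sketch with FAISS, where embed_face is a hypothetical stand-in for whatever face-embedding model is actually used:

```python
# Illustrative sketch only, not Visual Love's actual algorithm.
# Faces become fixed-length vectors (embed_face is a hypothetical stand-in
# for a face-embedding model), indexed with FAISS for sub-second lookup.
import numpy as np
import faiss

DIM = 512  # typical face-embedding dimensionality (assumption)

def embed_face(image) -> np.ndarray:
    """Hypothetical: run a face-embedding model, return one vector per face."""
    raise NotImplementedError

# Offline: index the user base (random vectors stand in for real embeddings).
gallery = np.random.rand(100_000, DIM).astype("float32")
faiss.normalize_L2(gallery)          # normalize so inner product == cosine sim
index = faiss.IndexFlatIP(DIM)
index.add(gallery)

# Online: embed the uploaded "ideal type" photo and fetch the top matches.
query = np.random.rand(1, DIM).astype("float32")  # stands in for embed_face(photo)
faiss.normalize_L2(query)
scores, ids = index.search(query, 5)
print(ids[0], scores[0])
```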

Hot Take: Not all your startups need AI forced into them
reddit
LLM Vibe Score0
Human Vibe Score1
bitorsicThis week

Hot Take: Not all your startups need AI forced into them

I'm a final year Computer Engineering student, hence I'm applying for jobs all around. There's this particular trend I've noticed with startups that are coming up these days: even for the absolute basic stuff they'll use 'AI', and they'll think they built something 'revolutionary'. No. You're breaking your product in ways you don't realise. An example that even some well-established companies are guilty of: AI Chatbots. You absolutely don't need them; they're a gimmick. If you really wanna implement a chatbot, connect the user to an actual person on your end, which I think is not possible if you're at a 'startup' stage, since you'd need employees who can handle user queries in real time. If the user really is stuck, let them use the 'Contact Us' page. A really close relative of mine is very vocal about the frustration he faces whenever he tries to use the AI chatbot on any well-known e-com website. The only AI chatbot use case that makes sense is directing the customer to an actual customer support rep when none of the AI's solutions works for the customer. Even then, implementing a search page for an FAQ is extremely easy and user friendly. Another example: AI Interviewer. I recently interviewed for a startup, and their whole interviewing process was AI'zed?!?! No real person at the other end; I was answering their questions, which were in video format. They even had a 'mascot' / 'AI interviewer' avatar designed by an AI (AI-ception???). This mascot just text-to-speech'ed all the questions for me to rewind and hear what I missed again. And I had to record video and audio to answer these questions on their platform itself. The entire interview process could've just been a questionnaire, or, if you're really concerned about the integrity of the interviewee, just take a few minutes out of your oh-so-busy schedule as a startup owner. At least do so for hiring employees who would make the most impact on your product going ahead. I say the most impact because (at least as a developer) the work done by these employees will define how robust your product is and how easily other features can be integrated into the codebase. Trust me, refactoring code later on will only cost you time and money; those resources would be more useful in other departments of your startup. The only use case for an AI interviewer I see is preparing for an actual interview, provided feedback is given to the user promptly, and that's not something you need to worry about as a startup owner. So yeah, you're probably better off without integrating AI in your product. Thank you for reading. TLDR; The title. I know AI is the new thing and gets everyone drooling and all, but for the love of God, just focus on what your startup does best and put real people behind it. Integrating AI without human intervention is as good as a broken product. Do your hiring yourself, or through real people, because the people you hire at an early stage will define your growth ahead.

Here’s How Chatbots Can Boost Your Small Business
reddit
LLM Vibe Score0
Human Vibe Score1
smanwerThis week

Here’s How Chatbots Can Boost Your Small Business

Chatbots are the next big thing in the tech world for business use, and almost every business can benefit from them in one way or another. They are now everywhere: these fast-rising, computer-operated programs can play a variety of roles, such as customer service representative, social media manager, personal assistant, and much more, and virtually every industry is investing in them. Chatbots became the flavor of the season because of their task-management and problem-solving skills, which is why companies are aggressively adding chatbots to their business strategy. What Are Chatbots, and How Can They Benefit Your Small Business? In essence, chatbots are simply computer programs tailor-made to mimic conversation with the help of artificial intelligence (AI). These programs can respond to natural-language text and voice inputs in a human way. Chatbots can take over a lot of time-consuming tasks, allowing project managers to focus on other important matters and make high-level decisions. Chatbots are not just the next big thing for digital and tech brands; small businesses can also get a lot out of them. Small businesses should adopt chatbots to streamline routine project management practices and support other business operations, saving budget, time, and energy while improving ROI. If you're not sure where to start, here are some ways to deploy this rising technology to boost your small business strategy. Instant Customer Support: One of the most effective ways small businesses can use a chatbot is for immediate customer support. If you belong to an industry that offers products and services, chances are you get many phone calls and emails from people seeking information. Before letting customers clog up your inbox with unlimited queries, try using a chatbot to save your valuable time. You can create an immediate customer-support presence for customers who engage with your chatbot: craft answers for all the popular queries so that your project management team can focus on complex and important issues while the chatbot addresses the most commonly asked questions. Moreover, it adds consistency to your brand voice: you control the tone and ensure that the chatbot delivers your crafted messages. Boost Sales Lead Generation: Chatbots are not just for sharing or collecting information; they can actually boost sales. How? Though they can't replace your sales and marketing team, they can assist it by being an immediate point of contact. Create an automated conversation for a new visitor, and it can directly influence sales. Chatbots run on artificial intelligence capable of gathering the data required to curate a specific set of products for customers. For instance, if a user asks the chatbot for a blue shirt in cotton, the chatbot can pull items with those particular details for the user. This process is cumulative: the next time the user communicates with the chatbot, it will take their preferences into account. Increase Your Business Efficiency: Though chatbots can't perform every business operation, they can eliminate a few of the menial but important ones. Consider all the routine tasks your employees need to perform, such as answering customer queries, compiling data for a user, or filling out forms. Most of these tasks are monotonous in nature, which lets you train your chatbot to manage them with low risk and a high return on your valuable time. Reducing Cost and Resource Consumption: Like any online task management system, chatbots are great for reducing manpower. From performing as a personal assistant to acting as a customer sales representative, they let you cut down the total number of people dealing with customer complaints and feedback. You can utilize a chatbot because it can easily do work a human would usually do. Read the full article here.
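To make the "craft answers for popular queries" idea concrete, here is a deliberately minimal sketch of a keyword-matched FAQ bot in Python. Real products layer NLU or an LLM on top, but the core loop, including the handoff-to-a-human fallback, looks the same; the questions and answers below are illustrative only.

```python
# A minimal sketch of a rule-based FAQ chatbot (illustrative only).
# Match the user's message to a known topic, answer it, or hand off.
faq = {
    "opening hours": "We're open Mon-Fri, 9am-6pm.",
    "refund": "Refunds are processed within 5 business days.",
    "shipping": "Standard shipping takes 3-5 business days.",
}

def reply(message: str) -> str:
    text = message.lower()
    for keyword, answer in faq.items():
        if keyword in text:
            return answer
    return "I couldn't find that. Connecting you to a human agent..."

print(reply("What are your opening hours?"))  # -> the hours answer
print(reply("Can I pay with crypto?"))        # -> human handoff
```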

Learning Resources + Side Project Ideas
reddit
LLM Vibe Score0
Human Vibe Score1
Any-Reserve-4403This week

Learning Resources + Side Project Ideas

I made a post last night about my journey to landing an AI internship and have received a lot of responses asking about side projects and learning resources, so I am making another thread here consolidating this information for all those who are curious! Learning Process Step 1) Learn the basic fundamentals of the math. USE YOUTUBE!!! Literally just type in "Machine Learning Math" and you will get tons of playlists covering nearly every topic. Personally I would focus on Linear Algebra and Calculus, specifically matrix/vector operations, dot products, eigenvectors/eigenvalues, derivatives, and gradients. It might take a few tries until you find someone who meshes well with your learning style, but 3Blue1Brown is my top recommendation. I also read the book "Why Machines Learn" and found it extremely insightful. Work on implementing the math both with pen and paper and then in Python. Step 2) Once you have a grip on the math fundamentals, I would pick up Hands-On Machine Learning with Scikit-Learn, Keras and TensorFlow. This book was a game changer for me. It goes more in depth on the math and covers every topic from Linear Regression to the Transformer architecture. It also introduces you to Kaggle and some beginner-level side projects. Step 3) After that book I would begin on side projects and also check out other similar books, specifically Hands-On Large Language Models and Hands-On Generative AI. Step 4) If you have read all three of these books and fully comprehend everything, then I would start looking up papers. I would just ask ChatGPT to feed you papers that are most relevant to your interests. Beginner Side Project Ideas 1) Build a Neural Network from scratch, using just NumPy (see the sketch just below this post). It can be super basic: have one input layer with 2 nodes, 1 hidden layer with 2 nodes, and an output layer with one node. Learn about the feed-forward process and play around with different activation functions and loss functions. Learn how these activation functions and loss functions impact backpropagation (hint: the derivatives of the activation functions and loss functions are all different). Get really good at this and understand the difference between regression models and classification models and which activation/loss functions go with which type of model. If you are really feeling crazy and are more focused on a SWE type of role, try doing it in a language other than Python and try building a frontend for it so there is an interface where a user can input data and select their model architecture. 2) Build a CNN image classifier for MNIST: get familiar with the intricacies of CNNs, image manipulation, and basic computer vision concepts. 3) Build on top of open-source LLMs. Go to Hugging Face's models page and start playing around with some. 4) KAGGLE COMPETITIONS. I will not explain further; do Kaggle competitions. Other Resources I've mentioned YouTube, several books, and Hugging Face. I also recommend: DataLemur.com - Python practice, SQL practice, ML questions; its creator's book Ace the Data Science Interview is also very good. X.com - follow people who are prominent in the space. I joined an AI and Math group that is constantly posting resources in there. deep-ml.com If you have found any of this helpful, feel free to give me a follow on X and stay in touch @ x.com/hark0nnen
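As promised above, a minimal sketch of side project #1: a 2-2-1 network in pure NumPy trained on XOR, with sigmoid activations and squared-error loss. Note that two hidden units is the bare minimum for XOR, so this can occasionally fail to converge depending on the random initialization.

```python
# A 2-2-1 neural network from scratch: forward pass + backprop in pure NumPy.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR targets

W1, b1 = rng.normal(size=(2, 2)), np.zeros((1, 2))   # input -> hidden
W2, b2 = rng.normal(size=(2, 1)), np.zeros((1, 1))   # hidden -> output

sigmoid = lambda z: 1 / (1 + np.exp(-z))
lr = 0.5

for _ in range(10_000):
    # forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # backward pass: chain rule through squared-error loss and both sigmoids
    d_out = (out - y) * out * (1 - out)      # gradient at output pre-activation
    d_h = (d_out @ W2.T) * h * (1 - h)       # gradient at hidden pre-activation

    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0, keepdims=True)

print(out.round(3))  # should approach [[0], [1], [1], [0]]
```

Swapping sigmoid for tanh or ReLU (and squared error for cross-entropy) changes only the two gradient lines, which is exactly the point of the exercise.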

Randomly asked ChatGPT and Claude for a 4 year roadmap for an ML Engineer
reddit
LLM Vibe Score0
Human Vibe Score1
Brilliant_Fishing110This week

Randomly asked ChatGPT and Claude for a 4 year roadmap for an ML Engineer

Title, Is it actually a good plan ?? If no, why not ?? 🚀 4-Year Roadmap to Becoming a High-Earning ML Engineer & Entrepreneur (With Smartwork & Realistic 60-70% Execution Feasibility) 🟢 Year 1: Strong Foundation & Initial Projects (0-12 Months) 🎯 Goal: Master Python & ML Fundamentals 🔹 1-4 Months (Python & Math Strengthening) ✅ Python Mastery - Daily LeetCode Easy problems (minimum 2) - Build automation projects - NumPy & Pandas mastery - DSA fundamentals ✅ Mathematics Foundation - Linear Algebra basics - Statistics fundamentals - Basic calculus concepts ✅ First Mini-Hackathon Participation - Join beginner-friendly hackathons - Focus on Python-based challenges - Team up with other beginners 💡 Smart Move: - Join Discord/Slack hackathon communities - Practice collaborative coding - Build network with fellow participants 🔹 5-8 Months (ML Foundations) ✅ Machine Learning Basics - Supervised Learning - Model evaluation - Feature engineering - scikit-learn projects ✅ Participate in 2-3 ML Hackathons - Kaggle Getting Started competitions - Local ML hackathons - University hackathons ✅ Start LinkedIn & GitHub Portfolio 💡 Smart Move: - Document hackathon experiences - Share learnings on LinkedIn - Focus on completion over winning 🔹 9-12 Months (Deep Learning Introduction) ✅ Basic Deep Learning - Neural network fundamentals - PyTorch basics - Computer vision tasks - Basic NLP ✅ Advanced Hackathon Participation - AI/ML specific hackathons - Team lead in 1-2 hackathons - Start mentoring beginners 🔵 Year 1 Expected Outcome (60-70% Execution) ✔ Strong Python & ML foundations ✔ 5-6 hackathon participations ✔ Active GitHub (100+ commits) ✔ Growing LinkedIn (300+ connections) 💰 Earning Expectation → ₹8K-₹20K per month (Projects/Internship) 🟢 Year 2: Professional Growth & Specialization (12-24 Months) 🎯 Goal: Build Professional Experience & Recognition 🔹 1-6 Months (Technical Depth) ✅ Advanced ML Topics - Deep Learning architectures - Computer Vision OR NLP - MLOps basics (Docker, FastAPI) - Cloud fundamentals (AWS/GCP) ✅ Hackathon Achievements - Win minor prizes in 2-3 hackathons - Lead teams in major hackathons - Network with sponsors ✅ Start Technical Blogging 💡 Smart Move: - Focus on hackathon projects that align with career goals - Build relationships with companies at hackathons - Create detailed project documentation 🔹 7-12 Months (Professional Experience) ✅ Secure ML Role/Internship ✅ Advanced Project Building ✅ Open Source Contributions ✅ Organize Small Hackathons 💡 Smart Move: - Use hackathon network for job referrals - Convert hackathon projects into full products - Build mentor reputation 🔵 Year 2 Expected Outcome (60-70% Execution) ✔ Professional ML experience ✔ 10+ hackathon participations ✔ 1-2 hackathon wins ✔ Strong industry network 💰 Earning Expectation → ₹40K-₹70K per month (Job/Freelancing) 🟢 Year 3: Scaling & Business Foundation (24-36 Months) 🎯 Goal: Establish Multiple Income Streams 🔹 1-4 Months (Expertise Building) ✅ Choose Specialization - MLOps - Computer Vision - NLP/LLMs - Generative AI ✅ Advanced Competitions - International hackathons - High-prize competitions - Corporate ML challenges ✅ Start Consulting Services 💡 Smart Move: - Use hackathon wins for marketing - Build service packages around expertise - Network with corporate sponsors 🔹 5-8 Months (Business Development) ✅ Scale Services ✅ Build Client Network ✅ Create Training Programs ✅ Hackathon Mentorship Program 💡 Smart Move: - Convert hackathon projects to products - Use event networks for client acquisition - Build authority through speaking 🔹 9-12 Months (Growth & Innovation) ✅ Product Development ✅ Team Building ✅ Innovation Focus ✅ Hackathon Organization 🔵 Year 3 Expected Outcome (60-70% Execution) ✔ Established ML business/career ✔ Known in hackathon community ✔ Multiple income streams ✔ Strong industry presence 💰 Earning Expectation → ₹1L-₹2L per month (Multiple Streams) 🟢 Year 4: Scale & Leadership (36-48 Months) 🎯 Goal: Build AI Company & Achieve Financial Freedom 🔹 1-4 Months (Business Scaling) ✅ Company Formation - AI consulting firm - Product development - Training programs ✅ Hackathon Innovation - Launch own hackathon series - Corporate partnerships - Prize sponsorships ✅ Team Expansion 💡 Smart Move: - Use hackathon network for hiring - Create unique event formats - Build corporate relationships 🔹 5-8 Months (Market Leadership) ✅ Product Launch ✅ Service Expansion ✅ International Presence ✅ Innovation Hub Creation 💡 Smart Move: - Create hackathon-to-hiring pipeline - Build educational programs - Establish thought leadership 🔹 9-12 Months (Empire Building) ✅ Multiple Revenue Streams - AI products - Consulting services - Educational programs - Event organization - Investment returns ✅ Industry Leadership - Conference speaking - Published content - Community leadership 🔵 Year 4 Expected Outcome (60-70% Execution) ✔ Established AI company ✔ Major hackathon organizer ✔ Multiple product lines ✔ Industry authority status 💰 Earning Expectation → ₹3L-₹5L+ per month (Business Income) 📊 FINAL RATING ✅ Comprehensive growth plan ✅ Strong community focus ✅ Multiple income pathways 💡 If 100% Execution → 8.5/10 Feasibility 💡 If 50% Execution → 6/10 Feasibility 🔥 Conclusion: A balanced path to ML mastery and entrepreneurship, built through consistent growth and community engagement! 🚀 Key Success Factors: Regular hackathon participation Strong community involvement Consistent skill development Strategic network building Focus on both technical and business growth
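The roadmap names "MLOps basics (Docker, FastAPI)" without showing what that looks like in practice; here is a minimal, hedged sketch of serving a trained model behind a FastAPI endpoint. The model file and feature shape are placeholders.

```python
# A minimal sketch of the roadmap's "MLOps basics (Docker, FastAPI)" item:
# serving a trained scikit-learn model behind an HTTP endpoint.
# "model.joblib" and the flat feature vector are placeholders.
import joblib
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
model = joblib.load("model.joblib")  # assumed: classifier trained and saved earlier

class Features(BaseModel):
    values: list[float]  # one flat feature vector

@app.post("/predict")
def predict(features: Features):
    prediction = model.predict([features.values])[0]
    return {"prediction": int(prediction)}

# Run locally with:  uvicorn main:app --reload
# A small Dockerfile that pip-installs fastapi/uvicorn completes the picture.
```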

MMML | Deploy HuggingFace training model rapidly based on MetaSpore
reddit
LLM Vibe Score0
Human Vibe Score1
qazmkoppThis week

MMML | Deploy HuggingFace training model rapidly based on MetaSpore

A few days ago, HuggingFace announced a $100 million Series C funding round, which was big news in open-source machine learning and could be a sign of where the industry is headed. Two days before the HuggingFace funding announcement, the open-source machine learning platform MetaSpore released a demo of rapid deployment of HuggingFace pre-trained models. As deep learning technology makes innovative breakthroughs in computer vision, natural language processing, speech understanding, and other fields, more and more unstructured data is perceived, understood, and processed by machines. These advances are mainly due to the powerful learning ability of deep models: through pre-training on massive data, models can capture internal data patterns and thus help many downstream tasks. With industry and academia investing more and more energy in pre-training research, distribution hubs for pre-trained models such as HuggingFace and Timm have emerged one after another, and the open-source community is releasing the dividends of large pre-trained models at unprecedented speed. In recent years, the data that machines model and understand has gradually evolved from single-modal to multimodal, and the semantic gap between different modalities is being closed, making it possible to retrieve data across modalities. Take CLIP, OpenAI's open-source work, as an example: it pre-trains twin towers for images and texts on a dataset of 400 million image-text pairs, connecting the semantics of pictures and text, and many academic researchers have built multimodal image generation and retrieval on top of this technology. Yet even though frontier technology can bridge the semantic gap between modalities, shipping it still involves heavy, complicated processes (model tuning, offline data processing, high-performance online inference architecture design, heterogeneous computing, and online algorithm rollout), which hinders frontier multimodal retrieval technology from landing in real applications and reaching broad adoption. DMetaSoul targets these pain points, abstracting and unifying steps such as model training and optimization, online inference, and algorithm experimentation into a set of solutions that can quickly take an offline pre-trained model online. This article introduces how to use HuggingFace community pre-trained models for online inference and algorithm experiments based on the MetaSpore technology ecosystem, so that the benefits of pre-trained models can be fully released to specific businesses, industries, and small and medium-sized enterprises. We also provide two multimodal retrieval demos, text-to-text search and text-to-image search, for reference. Multimodal semantic retrieval: The sample architecture of multimodal retrieval is as follows. Our multimodal retrieval system supports both the text-to-text and text-to-image application scenarios and includes offline processing, model inference, online services, and other core modules: https://preview.redd.it/mdyyv1qmdz291.png?width=1834&format=png&auto=webp&s=e9e10710794c78c64cc05adb75db385aa53aba40 Offline processing, including the offline data-processing flows for the text-to-text and text-to-image scenarios: model tuning, model export, index database construction, data push, etc. Model inference. 
After offline model training, we deploy our large NLP and CV models on the MetaSpore Serving framework. MetaSpore Serving helps us conveniently perform online inference, elastic scheduling, load balancing, and resource scheduling in heterogeneous environments. Online services. Based on MetaSpore's online algorithm application framework, MetaSpore provides a complete set of reusable online search services, including a front-end retrieval UI, multimodal data preprocessing, vector recall and ranking algorithms, an A/B experiment framework, etc. MetaSpore supports both text-to-text and text-to-image search and can be migrated to other application scenarios at low cost. The HuggingFace open-source community provides several excellent baseline models for such multimodal retrieval problems, and these are often the starting point for real-world optimization in industry. MetaSpore also uses HuggingFace community pre-trained models in its online text-to-text and text-to-image search services: text-to-text search is based on a semantic-similarity model optimized by MetaSpore for the question-and-answer domain, while text-to-image search is based on a community pre-trained model. These open-source pre-trained models are exported to the general ONNX format and loaded into MetaSpore Serving for online inference. The following sections describe the model export and online retrieval algorithm services in detail. The model-inference part is a standardized SaaS service with low coupling to the business. Interested readers can refer to my previous post: The design concept of MetaSpore, a new generation of the one-stop machine learning platform. 1.1 Offline Processing Offline processing mainly involves exporting and loading the online models, and building and pushing the index of the document library. You can follow the step-by-step instructions below to complete the offline processing for text-to-text and text-to-image search and see how the offline pre-trained models serve inference in MetaSpore. 1.1.1 Text-to-text search Traditional text retrieval systems are based on literal matching algorithms such as BM25. Because users' query words are so diverse, a semantic gap between query words and documents is often encountered: users misspell "iPhone" as "Phone," and search terms can be incredibly long, such as "1~3 months old baby autumn small size bag pants". Traditional text retrieval systems use spelling correction, synonym expansion, query rewriting, and other means to alleviate the semantic gap but fundamentally fail to solve the problem. Only when the retrieval system fully understands users' query terms and documents can it meet users' retrieval demands at the semantic level. With the continuous progress of pre-training and representation learning, some commercial search engines have begun to integrate semantic vector retrieval methods based on representation learning into their retrieval stacks. Semantic retrieval model: this article introduces a set of semantic vector retrieval applications. MetaSpore built a semantic retrieval system on encyclopedia question-and-answer data, adopting the Sentence-BERT model as the semantic vector representation model; it fine-tunes the twin-tower BERT in supervised or unsupervised ways to make the model more suitable for retrieval tasks. 
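For intuition, here is a hedged sketch of the twin-tower retrieval idea using the sentence-transformers library; the model below is a public multilingual stand-in for the article's sbert-chinese-qmc-domain-v1, and the documents are illustrative.

```python
# A hedged sketch of two-tower semantic retrieval with sentence-transformers.
# Query and docs are encoded by the SAME model, so similarity in vector
# space reflects semantic relatedness rather than literal word overlap.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")  # stand-in

docs = [
    "How do I renew my ID card?",
    "What documents are needed to open a bank account?",
]
doc_vecs = model.encode(docs, normalize_embeddings=True)       # offline: index side

query_vec = model.encode("ID card renewal process",
                         normalize_embeddings=True)            # online: query side
print(util.cos_sim(query_vec, doc_vecs))  # the ID-card doc should score highest
```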
The model structure is as follows: a symmetric query-doc two-tower model is used for text-to-text and question-and-answer retrieval. The online query and the offline docs share the same vector representation model, so it is necessary to keep the offline library-building model and the online query inference model consistent. This case uses MetaSpore's text representation model sbert-chinese-qmc-domain-v1, optimized on open-source semantic-similarity datasets. The model encodes the question-and-answer data as vectors during offline database construction and encodes the user query as a vector during online retrieval, ensuring that query and doc live in the same semantic space, so users' semantic retrieval demands can be met by vector-similarity computation. Since the text representation model encodes the query online, we need to export the model for use by the online service. Go to the Q&A data library code directory and export the model following the documentation. In the export script, PyTorch tracing is used; the models are exported to the "./export" directory. The exported artifacts are mainly the ONNX model used for online inference, the tokenizer, and related configuration files; they are loaded into MetaSpore Serving by the online serving system described below. Since the exported model will be copied to cloud storage, you need to configure the related variables in env.sh. Build the library for text-to-text search: the retrieval database is built on a million-scale encyclopedia question-and-answer dataset. Following the description document, you need to download the data and complete the database construction. The question-and-answer data is encoded as vectors by the offline model, and the resulting data is pushed to the service components. The whole database construction process is as follows: preprocess, converting the original data into a more general JSONLines format for database construction; build the index, using the same model as online ("sbert-chinese-qmc-domain-v1") to index documents (one document object per line); push the inverted (vector) and forward (document field) data to each component server. The following is an example of the database data format. After offline database construction is completed, the data is pushed to the corresponding service components, such as Milvus, which stores the vector representations of documents, and MongoDB, which stores the documents' summary information. The online retrieval algorithm services use these components to obtain relevant data. 1.1.2 Text-to-image search Text and images are easy for humans to relate semantically but difficult for machines. First, in terms of data form, text is discrete, one-dimensional data made up of word IDs, while images are continuous two-dimensional or three-dimensional data. Second, text is a subjective human creation whose expressive power is rich, full of twists, metaphors, and other figures of speech, while images are machine representations of the objective world. In short, bridging the semantic gap between text and image data is much more complex than text-to-text search. 
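Before moving on to images, a hedged sketch of the export step from 1.1.1: tracing a HuggingFace text encoder to ONNX with torch.onnx.export. The model id is a public stand-in; the article's actual export script for sbert-chinese-qmc-domain-v1 lives in the MetaSpore repo.

```python
# A hedged sketch of exporting a HuggingFace text encoder to ONNX via tracing.
import os
import torch
from transformers import AutoTokenizer, AutoModel

name = "sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2"  # stand-in
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name, torchscript=True).eval()  # traceable tuple outputs

os.makedirs("./export", exist_ok=True)
sample = tokenizer("How to renew an ID card", return_tensors="pt")
torch.onnx.export(
    model,
    (sample["input_ids"], sample["attention_mask"]),
    "./export/model.onnx",
    input_names=["input_ids", "attention_mask"],
    output_names=["last_hidden_state", "pooler_output"],
    dynamic_axes={
        "input_ids": {0: "batch", 1: "seq"},
        "attention_mask": {0: "batch", 1: "seq"},
        "last_hidden_state": {0: "batch", 1: "seq"},
    },
    opset_version=14,
)
tokenizer.save_pretrained("./export")  # the tokenizer ships alongside the ONNX model
```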
Traditional text-to-image retrieval generally relies on external textual descriptions of images, or on nearest-neighbor retrieval over text associated with the images, which in essence degrades the problem to text search. It also faces many issues, such as how to obtain the associated text for pictures and whether the accuracy of the underlying text search is high enough. Deep models have gradually evolved from single-modal to multimodal in recent years. Take OpenAI's open-source project CLIP as an example: it trains on massive image and text data from the Internet and maps text and image data into the same semantic space, making semantic-vector-based text-and-image search possible. CLIP image-text model: the text-to-image search introduced in this article is implemented with semantic vector retrieval, using the CLIP pre-trained model as the two-tower retrieval architecture. Because CLIP has learned the semantic alignment of its text-side and image-side towers on massive image-text data, it is particularly suitable for the text-to-image scene. Because image and text data take different forms, an asymmetric query-doc twin-tower model is used for text-to-image retrieval: the image-side tower is used for offline database construction, and the text-side tower is used online. In online retrieval, the database built by the image-side model is searched after the text-side model encodes the query, and the CLIP pre-trained model guarantees the semantic correlation between images and texts, drawing matching image-text pairs closer in vector space through pre-training on a large amount of visual data. Here we need to export the text-side model for online MetaSpore Serving inference. Since the retrieval scene is Chinese, a CLIP model supporting Chinese understanding is selected. The exported content includes the ONNX model used for online inference and the tokenizer, similar to the text-to-text case; MetaSpore Serving loads the exported content for model inference. Build the library for image search: you need to download the Unsplash Lite library data and complete the construction according to the instructions. The whole database construction process is as follows: preprocess, specifying the image directory and then generating a more general JSONLines file for library construction; build the index, using the openai/clip-vit-base-patch32 pre-trained model to index the gallery, outputting one document object per line of index data; push the inverted (vector) and forward (document field) data to each component server. As with text search, after offline database construction the relevant data is pushed to the service components, which the online retrieval algorithm services call to obtain relevant data. 1.2 Online Services The overall online service architecture diagram is as follows: https://preview.redd.it/nz8zrbbpdz291.png?width=1280&format=png&auto=webp&s=28dae7e031621bc8819519667ed03d8d085d8ace The multimodal search online service system supports both the text-to-text and text-to-image scenarios. The whole online service consists of the following parts: Query preprocessing service: encapsulates the preprocessing logic (text/image, etc.) 
of the pre-trained models and provides services through a gRPC interface; Retrieval algorithm service: the whole algorithm processing chain, including A/B experiment traffic-splitting configuration, MetaSpore Serving calls, vector recall, ranking, document summaries, etc.; User entry service: provides a web UI for users to debug and track down problems in the retrieval service. From the perspective of a user request, these services form invocation dependencies from back to front, so to bring up the multimodal demo you need to start each service in order. Before doing this, remember to export the offline models, put them online, and build the library first. This article introduces the various parts of the online service system so you can stand up the whole system step by step; see the ReadME at the end of this article for more details. 1.2.1 Query preprocessing service Deep learning models operate on tensors, but NLP/CV models usually have a preprocessing stage that translates raw text and images into tensors the models can accept. For example, NLP models typically have a tokenizer that transforms string data into discrete tensors, and CV models have similar logic for cropping, scaling, and transforming input images. Because this preprocessing logic is decoupled from the tensor inference of the deep model, and because model inference has its own independent technical stack based on ONNX, MetaSpore split preprocessing out into a separate service. The NLP preprocessing tokenizer has been integrated into the query preprocessing service, following a fairly general convention: users only need to provide preprocessing logic files implementing the loading and prediction interface, and export the necessary data and configuration files, which are then loaded into the preprocessing service. CV preprocessing logic will be integrated in the same manner later. The preprocessing service currently exposes a gRPC interface and is depended on by the query preprocessing (QP) module in the retrieval algorithm service: after a user request reaches the retrieval algorithm service, it is forwarded to the preprocessing service to complete data preprocessing before subsequent processing continues. The ReadMe provides details on how the preprocessing service is started, how the preprocessing model exported offline to cloud storage enters the service, and how to debug the service. To further improve the efficiency and stability of model inference, MetaSpore Serving implements a Python preprocessing submodule, so MetaSpore can provide gRPC services through a user-specified preprocessor.py, complete tokenizer or CV-related preprocessing, and translate requests into tensors that deep models can handle; the model inference is then carried out by MetaSpore Serving's subsequent submodules. The code is presented here: https://github.com/meta-soul/MetaSpore/compare/add\python\preprocessor 1.2.2 Retrieval algorithm services The retrieval algorithm service is the core of the whole online service system; it is responsible for experiment traffic-splitting, the assembly of algorithm chains (preprocessing, recall, ranking), and the invocation of dependent component services. 
The whole retrieval algorithm service is developed on the Java Spring framework and supports both the text-to-text and text-to-image retrieval scenarios. Thanks to good internal abstraction and modular design, it is highly flexible and can be migrated to similar application scenarios at low cost. Here's a quick guide to configuring the environment and setting up the retrieval algorithm service; see the ReadME for more details: Install dependent components, using Maven to install the online-Serving component. Configure the search services: copy the template configuration file and replace the MongoDB, Milvus, and other configurations to match your development/production environment. Install and configure Consul, which lets you synchronize the search service configuration in real time, including experiment traffic-splitting, recall parameters, and ranking parameters. The project's configuration file shows the current configuration parameters of text-to-text and text-to-image search; the modelName parameter in the preprocessing and recall stages is the corresponding model exported in offline processing. Start the service: once the above configuration is complete, the retrieval service can be started from the entry script. Once the service is started, you can test it! For example, for a user with userId=10 who wants to query "How to renew ID card," access the text search service. 1.2.3 User Entry Service Because the retrieval algorithm service is exposed only as an API, it is difficult to locate and trace problems, and for the text-to-image scene in particular an interface that intuitively displays retrieval results greatly helps the iterative optimization of the retrieval algorithm. This article therefore provides a lightweight web UI for text and image search: a search input box and a results display page. Developed with Flask, the service can easily be integrated with other retrieval applications; it calls the retrieval algorithm service and displays the returned results on the page. It's also easy to install and start. Once you're done, go to http://127.0.0.1:8090 to check that the search UI service is working correctly. See the ReadME at the end of this article for details. Multimodal system demonstration The multimodal retrieval service can be started once offline processing and the online service environment configuration have been completed following the above instructions. Examples of text-to-image searches are shown below. Open the text-to-image search application and enter "cat" first; the top three returned results are cats: https://preview.redd.it/d7syq47rdz291.png?width=1280&format=png&auto=webp&s=b43df9abd380b7d9a52e3045dd787f4feeb69635 If you add a color constraint to "cat" and search "black cat," it does return a black cat: https://preview.redd.it/aa7pxx8tdz291.png?width=1280&format=png&auto=webp&s=e3727c29d1bde6eea2e1cccf6c46d3cae3f4750e Strengthening the constraint further to "black cat on the bed" returns pictures of a black cat climbing on the bed: https://preview.redd.it/2mw4qpjudz291.png?width=1280&format=png&auto=webp&s=1cf1db667892b9b3a40451993680fbd6980b5520 The cat can still be found through the search system after the color and scene modifications in the above example. 
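Under the hood, the ranking in this demo comes from CLIP's shared text-image embedding space. A hedged sketch of the two-tower scoring with HuggingFace transformers, using the openai/clip-vit-base-patch32 model from the index-building step (the live demo serves a Chinese-capable CLIP variant, and the image path is a placeholder):

```python
# A hedged sketch of CLIP's two-tower scoring behind the "black cat" demo.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32").eval()
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Offline (image tower): encode gallery images once and index the vectors.
image = Image.open("gallery/cat.jpg")  # placeholder path
img_vec = model.get_image_features(**processor(images=image, return_tensors="pt"))

# Online (text tower): encode the query into the same embedding space.
txt = processor(text=["black cat on the bed"], return_tensors="pt", padding=True)
txt_vec = model.get_text_features(input_ids=txt["input_ids"],
                                  attention_mask=txt["attention_mask"])

# Cosine similarity between the two vectors is the retrieval ranking score.
print(torch.cosine_similarity(txt_vec, img_vec).item())
```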
Conclusion Cutting-edge pre-training technology can bridge the semantic gap between different modalities, and the HuggingFace community greatly reduces the cost for developers to use pre-trained models. Combined with the MetaSpore ecosystem of online inference and online microservices provided by DMetaSoul, pre-trained models are no longer mere offline dabbling: they can truly achieve end-to-end implementation from cutting-edge technology to industrial scenarios, fully releasing the dividends of large pre-trained models. In the future, DMetaSoul will continue to improve and optimize the MetaSpore technology ecosystem: More automated and wider access to the HuggingFace community ecosystem. MetaSpore will soon release a general model-rollout mechanism to make the HuggingFace ecosystem more accessible, and will later integrate the preprocessing services into the online services. Offline algorithm optimization for multimodal retrieval. For multimodal retrieval scenarios, MetaSpore will continuously and iteratively optimize the offline algorithm components, including the text recall/ranking model and the image-text recall/ranking model, to improve the accuracy and efficiency of the retrieval algorithms. For the related code and reference documentation in this article, please visit: https://github.com/meta-soul/MetaSpore/tree/main/demo/multimodal/online Image sources: https://github.com/openai/CLIP/raw/main/CLIP.png https://www.sbert.net/examples/training/sts/README.html

[Advice Needed] Mechanical engineer trying to venture into ML
reddit
LLM Vibe Score0
Human Vibe Score1
dummifiedmeThis week

[Advice Needed] Mechanical engineer trying to venture into ML

Hello fellow redditors, As the title suggests, I am a mechanical engineer with a masters in mechanical design from a top institute in India. Directly after my masters, I got a job but left it after exactly one year to pursue civil services, and that decision has left a 3-year void in my career sheet. During these three years, the most I was in touch with tech/science was through random personal automations using Python, digital note-taking systems, and a few readings here and there. I don't know if they have anything to do with each other, but I am lazy (about repetitive work) and have an eye for optimizing/automating my workflow. The latter led me to learn Python, a bit of Git, and CSS/HTML. With regard to my programming skills, I learn quickly and had good grades in all the computer science courses we had at college (C++, DSA, and Modelling-Simulation). I have also programmed in MATLAB for basic research use and in LAMDA for nanomechanics/molecular simulation. At my work, I had written Python code to automate the process of model setup for FE, which reduced human intervention in very menial routine work (Hindi: gadha majdoori, i.e., donkey work). As for my mechanical engineering skills, I am good with CAE software and can readily work with it. So the first thing I am doing right now is applying to various positions in the same domain I worked in 3 years ago. All this while, I got introduced to the world of Machine Learning, AI, and Deep Learning, and I wish to learn ML to slowly venture into that line. So yeah, my question to the CS veterans is: how should I start learning, from where, what can I expect from the field, and how much time is necessary to be able to get a decent opportunity in that domain? Currently, I have started with Andrew Ng's course on Coursera, Course 1 of the Deep Learning Specialization (https://www.coursera.org/learn/neural-networks-deep-learning), but it seems rather theoretical to me, and without implementation it will be difficult for me to grasp (I feel). I also explored the fast.ai course, which follows a top-down approach, unlike Andrew's; I haven't committed to it. Kindly guide me. All kinds of opinions are welcome. PS: I am 28yo.

How I Built A Simple ‘BPO’ Company, All AI Employees (All Local)
reddit
LLM Vibe Score0
Human Vibe Score1
AssistanceOk2217This week

How I Built A Simple ‘BPO’ Company, All AI Employees (All Local)

Disrupting the BPO Industry: My Journey Building a Fully Automated Company with AI Employees Full Article: https://medium.com/@learn-simplified/how-i-built-a-simple-bpo-company-all-ai-employees-all-local-631e48fa908a https://preview.redd.it/htjo1mancl2d1.png?width=1586&format=png&auto=webp&s=7e77f4c66e5ca55a8b0ea6969c43a458503ad921 ● What Are We Doing Today? We are building a BPO (Business Process Outsourcing) call center for an imaginary electric company called "Aniket Very General Electric Company". We will create different departments staffed by AI agents who can chat (and eventually speak, in the next part) with customers to answer questions, handle complaints, or provide services. ● Why Should You Read This Article? Learning how to build AI agents that can perform tasks in a real setting, coordinate with humans and other AIs, and provide technical support is a highly valuable skill. ● How Are We Going to Build Our All-AI-Employees Company? ○ We will explain what BPO and call centers are. ○ Our AI company will have departments like Customer Service, Tech Support, Billing & Payments, Outage Management, and Onboarding Customers. ○ We will use Docker containers to run the Dify AI platform as the base. ○ The AI agents will use the LLaMA-3 language model from Meta AI. ○ We may use Groq's AI accelerator chip to make LLaMA-3 faster. ○ Each department will have a knowledge base of text files that the AI agents can reference. ● Let's Get Cooking! This section provides setup instructions for installing Docker, Ollama (for running LLaMA-3), and the Dify AI platform. It also outlines the different AI agents we will create for departments like Reception, Customer Service, Billing, Tech Support, etc. ● Let's Design Our Organization ○ We explain how each department's AI agents will have their own knowledge base, like an employee handbook. ○ The knowledge bases will contain policies, procedures, and other key information. ○ The AI agents can quickly reference this information to provide accurate and knowledgeable responses. ● Let's Meet Our AI Employees ○ We chose the LLaMA-3 70B model as the base for all AI agents across departments. ○ We give the AI agents customized prompts to define their personalities and roles. ○ The knowledge bases act as training materials tailored to each department. ○ In the future, AI agents could have additional tools like ticket systems and integrations. ● Let's Run Our BPO Organization Now that the AI workforce and knowledge bases are ready, we can open our BPO company and have the AI agents start handling customer inquiries across departments like billing, tech support, outages, and new connections. ● Debugging This section highlights the importance of debugging, showing traces of how the language model understands customer queries and retrieves relevant context from knowledge bases to provide good responses. ● Future Work ○ Scale up to handle more customers using cloud services or distributed computing. ○ Move AI agents and knowledge bases to the cloud for accessibility and maintenance. ○ Fine-tune language models for better performance in each department. ○ Use scalable vector databases for faster knowledge retrieval. ○ Enable voice interfaces and computer vision for more natural interactions. ○ Implement continuous learning so AI agents can expand their knowledge over time. The article demonstrates the potential of building an actual AI-powered company and raises thought-provoking questions about the role of humans, ethics, and using AI to create a better world.
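The article's agents are configured inside Dify, but the core pattern (a department-specific system prompt plus a knowledge snippet, answered by a local LLaMA-3 via Ollama) can be sketched in a few lines. The handbook text and prompt below are illustrative only, not the article's actual configuration.

```python
# A hedged sketch of one "AI employee": a Billing & Payments agent backed by
# a local LLaMA-3 via Ollama (requires a running Ollama server with llama3 pulled).
import ollama  # pip install ollama

BILLING_HANDBOOK = "Bills are issued on the 1st; late fees apply after the 15th."  # illustrative

def billing_agent(customer_message: str) -> str:
    response = ollama.chat(
        model="llama3",
        messages=[
            {"role": "system",
             "content": "You are the Billing & Payments agent for Aniket Very "
                        "General Electric Company. Answer using this knowledge "
                        "base: " + BILLING_HANDBOOK},
            {"role": "user", "content": customer_message},
        ],
    )
    return response["message"]["content"]

print(billing_agent("Why is my bill higher this month?"))
```

In the full setup, one such agent per department, each with its own prompt and knowledge base, is what makes the "organization" feel staffed.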

How I landed an internship in AI
reddit
LLM Vibe Score0
Human Vibe Score1
Any-Reserve-4403This week

How I landed an internship in AI

For motivational purposes only! I see a lot of posts on here from people without “traditional” machine learning, data science, etc. backgrounds asking how they can break into the field, so I wanted to share my experience. EDIT: Learning Resources and Side Project Ideas. My background: I graduated from a decent undergraduate school with a degree in Political Science several years ago. Following school I worked in both a client services role at a market research company and an account management role at a pretty notable fintech start-up. Both of these roles exposed me to ML, AI and more sophisticated software concepts in general, and I didn’t really care for the sales side of things, so I decided to make an attempt at switching careers into something more technical. While working full time I began taking night classes at a local community college, starting with pre-calculus all the way up to Calc 2 and eventually more advanced classes like linear algebra and applied probability. I also took some programming courses, including DSA. I took these classes for about two years while working, and on the side had been working through various ML books and videos on YouTube. What worked the best for me was Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow. I eventually had enough credits that I was able to begin applying to MS in Data Science programs and was fortunate enough to get accepted into one and also get a position in their Robotics Lab doing Computer Vision work. When it came time to apply for internships, it was a BLOODBATH. I must have applied to over 100 roles, with my only responses being video interviews and OAs. Finally I got an interview for an AI Model Validation internship with a large insurance company, and after completing the interviews was told I performed well but they were still interviewing several candidates. I ended up getting the offer and accepting the role, where I’ll be working on a Computer Vision model and some LLM-related tasks this summer, and could not be more fortunate / excited. A couple of things stood out to them during the interview process. 1) The fact that I was working and taking night classes with the intent to break into the field. It showed a genuine passion, as opposed to someone who watched a YouTube video and claims they are now an expert. 2) Side projects. I not only had several projects, but I had some that were relevant to the work I’d be doing this summer from the computer vision standpoint. 3) Business sense. I emphasized during my interviews how working in a business role prior to beginning my masters would give me a leg up as an intern, because I would be able to apply the work of a data scientist to solving actual business challenges. For those of you trying to break into the field, keep pushing, keep building, and focus on what makes you unique and able to help a company! Please feel free to contact me if you would like any tips I can share, examples of projects, or anything that would be helpful to your journey.

I retired at 32 from my side project. Here's the path I took.
reddit
LLM Vibe Score0
Human Vibe Score1
inputoriginThis week

I retired at 32 from my side project. Here's the path I took.

EDIT 2: Thanks for the award, kind stranger! I've stopped responding to reddit comments for this post. I'm adding an FAQ to the original post based on the most common high-quality questions. If you have a question that you're dying to know the answer to and that only I can help you with (vs. Google, ChatGPT, etc.), DM me. EDIT: I love how controversial this post has become (50% upvote rate), and only in this subreddit (vs. other subreddits that I posted the same content in). I trust that the open-minded half of you will find something useful in this post and my other posts and comments. I retired at 32 years old, in large part thanks to a B2C SaaS app that I developed on my own. Now, I don't have to work in order to cover my living expenses, and wouldn't have to work for quite a while. In other words, I can finally sip mai tais at the beach. I've condensed how I got there into this post. First, a super simplified timeline of events, followed by some critical details. Timeline 2013 Graduated college in the US 2013 Started first corporate job 2013 Started side project (B2C app) that would eventually lead to my retirement 2020 Started charging for use of my B2C app (was free, became freemium) 2021 Quit my last corporate job 2022 Retired: time freedom attained Details First, some summary statistics of my path to retirement: 9 years: time between graduating college and my retirement. 8 years: total length of my career where I worked at some corporate day job. 7 years: time it took my B2C app to make its first revenue dollar. 2 years: time between my first dollar of SaaS revenue and my retirement. "Something something overnight success a decade in the making". I got extremely lucky on my path to retirement, both in terms of the business environment I was in and who I am as a person. I'd also like to think that some of the conscious decisions I made along the way contributed to my early retirement. Lucky Breaks Was born into the US middle class. Had a natural affinity for computer programming and an entrepreneurial mindset (initiative, resourcefulness, pragmatism, courage, growth mindset). Had opportunities to develop these mindsets throughout life. Got into a good college, which gave me the credentials to get high-paying corporate jobs. Was early to a platform that saw large adoption (see "barnacle on whale" strategy). Business niche is shareworthy: my SaaS received free media. Business niche is relatively stable, and small enough to not be competitive. "Skillful" Decisions I decided to spend the nights and weekends of my early career working on side projects in the hopes that one would hit. I also worked a day job to support myself and build my savings. My launch funnel over roughly 7 years of working on side projects: Countless side projects prototyped. 5 side projects publicly launched. 2 side projects made > $0. 1 side project ended up becoming the SaaS that would help me retire. At my corporate day jobs, I optimized for learning and work-life balance. My learning usually stalled after a year or two at one company, so I’d quit and find another job. I invested (and continue to do so) in physical and mental wellbeing via regular workouts, meditation, journaling, traveling, and good food. My fulfilling non-work-life re-energized me for my work-life, and my work-life supported my non-work-life: a virtuous cycle. I automated the most time-consuming aspects of my business (outside of product development). Nowadays, I take long vacations and work at most 20 hours a week (a three-day work week). 
I decided to keep my business entirely owned and operated by me. It's the best fit for my work style (high autonomy, deep focus, fast decision-making) and my need to have full creative freedom and control. I dated and married a very supportive and inspiring partner. I try not to succumb to outrageous lifestyle creep, which keeps my living expenses low and drastically extends my runway. Prescription To share some aphorisms I’ve learned with the wantrepreneurs or those who want to follow a similar path: Maximize your at-bats, because you only need one hit. Bias towards action. Launch quickly. Get your ideas out into the real world for feedback. Perfect is the enemy of good. If you keep swinging and improving, you'll hit the ball eventually. Keep the big picture in mind. You don't necessarily need a home run to be happy: a base hit will often do the job. Think about what matters most to you in life: is it a lot of money or status? Or is it something more satisfying, and often just as if not more attainable, like freedom, loving relationships, or fulfillment? Is what you’re doing now a good way to get what you want? Or is there a better way? At more of a micro-level of "keep the big picture in mind", I often see talented wantrepreneurs get stuck in the weeds of lower-level optimizations, usually around technical design choices. They forget (or maybe subconsciously avoid) the higher-level and more important questions of customer development, user experience, and distribution. For example: “Are you solving a real problem?” or “Did you launch an MVP and what did your users think?” Adopt a growth mindset. Believe that you are capable of learning whatever you need to learn in order to do what you want to do. The pain of regret is worse than the pain of failure. I’ve noticed that fear of failure is the greatest thing holding people back from taking action towards their dreams. Unless failure means death in your case, a debilitating fear of failure is a surmountable mental block. You miss 100% of the shots you don't take. When all is said and done, we often regret the things we didn't do in life more than the things we did. There’s more to life than just work. Blasphemous (at least among my social circle)! But the reality is that many of the dying regret having worked too much in their lives. As Miss Frizzle from The Magic School Bus says: "Take chances, make mistakes, get messy!" Original post

[P] Building a Reinforcement Learning Agent to play The Legend of Zelda
reddit
LLM Vibe Score0
Human Vibe Score1
DarkAutumnThis week

[P] Building a Reinforcement Learning Agent to play The Legend of Zelda

A year ago I started trying to use PPO to play the original Legend of Zelda, and I was able to train a model to beat the first boss after a few months of work. I wanted to share the project just for show and tell. I'd love to hear feedback and suggestions, as this is just a hobby project; I don't do this for a living. The code for that lives in the original-design branch of my Triforce repo. I'm currently tinkering with new designs, so the main branch is much less stable. Here's a video of the agent beating the first dungeon, which was trained with 5,000,000+ steps. At 38 seconds, you can see it learned that it's invulnerable at the screen edge, and it exploits that to avoid damage from a projectile. At 53 seconds it steps up to avoid damage from an unblockable projectile, even though it takes a -0.06 penalty for moving the wrong way (taking damage would be a larger penalty). At 55 seconds it walks towards the rock projectile to block it. And so on; lots of the little things the model does are easy to miss if you don't know the game inside and out. As a TLDR, here's an early version of my new (single) model. This doesn't make it quite as far, but if you watch closely its combat is already far better, and it's only trained on 320,000 steps (~6% of the steps the first model was trained on). This is pretty far along from my very first model. Original Design I got the original project working using stable-baselines' PPO and default neural network (Shared NatureCNN, I believe). SB was great to get started but ultimately stifling. In the new version of the project I've implemented PPO from scratch with torch, with my own simple neural network similar to stable-baselines' default. I'm playing with all kinds of changes and designs now that I have more flexibility and control. Here is my rough original design: Overall Strategy My first pass through this project was basically "imagine playing Zelda with your older sibling telling you where to go and what to do". I give the model an objective vector which points to where I want it to go on the screen (as a bird flies; the agent still had to learn pathfinding to avoid damage and navigate around the map). This is either a vector pointing at the nearest enemy I want it to kill or a NSEW vector if it's supposed to move to the next room. Due to a few limitations with stable-baselines (especially around action masking), I ended up training unique models for traversing the overworld vs. the dungeon (since they have entirely different tilesets). I also trained a different model for when we have sword beams vs. not. In the video above you can see which model is being used onscreen. In my current project I've removed this objective vector as it felt too much like cheating. Instead I give it a one-hot encoded objective (move north to the next room, pick up items, kill enemies, etc). So far it's working quite well without that crutch. The new project also does a much better job of combat, even without multiple models to handle beams vs. not. Observation/Action Space Image - The standard neural network had a really tough time being fed the entire screen. No amount of training seemed to help. I solved this by creating a viewport around Link that keeps him centered. This REALLY helped the model learn. I also had absolutely zero success with stacking frames to give Link a way to see enemy/projectile movement. The model simply never trained with stable-baselines when I implemented frame stacking, and I never figured out why. 
I just added it to my current neural network and it seems to be working... though my early experiments show that giving it 3 frames (skipping two in between, so frames curr, curr-3, curr-6) doesn't really give us that much better performance. It might if I took away some of the vectors. We'll see. Vectors - Since the model cannot see beyond its little viewport, I gave the model a vector to the closest item, enemy, and projectile onscreen. This made it so the model can shoot enemies across the room outside of its viewport. My new model gives it multiple enemies/items/projectiles, and I plan to try using an attention mechanism as part of the network to see if I can just feed it all of that data. Information - It also gets a couple of one-off datapoints, like whether it currently has sword beams. The new model also gives it a "source" room (to help better understand dungeons where we have to backtrack) and a one-hot encoded objective. Action Space My original project just has a few actions: 4 for moving in the cardinal directions and 4 for attacking in each direction (I also added bombs but never spent any time training it). I had an idea to use masking to help speed up training, i.e. if Link bumps into a wall, don't let him move in that direction again until he moves elsewhere, as the model would often spend an entire memory buffer running headlong straight into a wall before an update... better to do it once and get a huge negative penalty, which is essentially the same result but faster. Unfortunately SB made it really annoying architecturally to pass that info down to the policy layer. I could have hacked it together, but eventually I just reimplemented PPO and my own neural network so I could properly mask actions in the new version. For example, when we start training a fresh model, it cannot attack when there aren't enemies on screen, and I can disallow it from leaving certain areas. The new model actually splits swinging the sword at short range vs. firing sword beams into two different actions, though I haven't yet had a chance to fully train with the split. Frameskip/Cooldowns - In the game I don't use a fixed frame skip for actions. Instead I use the internal RAM state of the game to know when Link is animation-locked or not, and only allow the agent to take actions when it's actually possible to give meaningful input to the game. This greatly sped up training. We also force movement to be between tiles on the game map. This means that when the agent decides to move it loses control for longer than a player would... a player can make more split-second decisions. This made it easier to implement movement rewards, though, and might be something to clean up in the future. Other interesting details Pathfinding - To facilitate rewards, the original version of this project used A* to pathfind from Link to what he should be doing. Here's a video of it in action. This information wasn't given to the model directly; instead, the agent would only be given the rewards if it exactly followed that path or the transposed version of it. It would also pathfind around enemies and not walk through them. This was a nightmare though. The corner cases were significant, and pushing Link towards enemies but not into them was really tricky. The new version just uses a wavefront algorithm: I calculate a wave from the tiles we want to get to outwards, then make sure we are following the gradient. Also, calculating the A* around enemies every frame (even with caching) was super slow. 
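(To make the wavefront idea concrete, here's a toy sketch: compute a BFS distance field from the goal tiles, then reward the agent whenever a move decreases its distance. The grid encoding and reward constant are made up for illustration; this isn't the actual Triforce code:)

```python
from collections import deque

def wavefront(grid, goals):
    """BFS distance field: dist[tile] = steps to the nearest goal tile.
    grid is a 2D list where 0 = walkable and 1 = wall (toy encoding)."""
    dist = {g: 0 for g in goals}
    queue = deque(goals)
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if (0 <= ny < len(grid) and 0 <= nx < len(grid[0])
                    and grid[ny][nx] == 0 and (ny, nx) not in dist):
                dist[(ny, nx)] = dist[(y, x)] + 1
                queue.append((ny, nx))
    return dist

def movement_reward(dist, old_pos, new_pos, step_reward=0.05):
    """Reward stepping down the gradient, punish stepping up it."""
    if new_pos not in dist:  # walked into a wall / off the field
        return -step_reward
    return step_reward if dist[new_pos] < dist[old_pos] else -step_reward
```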
The wavefront approach was faster, especially because I give the new model no special rewards for walking around enemies... it's faster to compute, and the model has to learn from taking damage or not. Either way, both the old and new models successfully learned how to pathfind around danger and obstacles, with or without the cheaty objective vector. Rewards - I programmed very dense rewards in both the old and new model. At basically every step, the model is getting rewarded or punished for something. I actually have some ideas I can't wait to try out to make the rewards more sparse. Or maybe we start with dense rewards for the first training, then fine-tune the model with sparser rewards. We'll see. Predicting the Future - Speaking of rewards, one interesting wrinkle is that the agent can do a lot of things that will eventually deal damage, but not on that frame. For example, when Link sets a bomb it takes several seconds before it explodes, killing things. This can be a massive reward or penalty, since he spent an extremely valuable resource but may have done massive damage. PPO and other RL methods propagate rewards backwards, of course, but that spike in reward could land on a weird frame where we took damage or moved in the wrong direction. I probably could have just not solved that problem and let it shake out over time, but instead I used the fact that we are in an emulator to just see what the outcome of every decision is. When planting a bomb, shooting sword beams, etc., we let the game run forward until impact, then rewind time and reward the agent appropriately, continuing on from when we first paused. This greatly speeds up training, even if it's expensive to do this savestate, play forward, restore state. Neural Networks - When I first started this project (knowing very little about ML and RL), I thought most of my time would be spent tuning the shape of the neural network we are using. In reality, the default provided by stable-baselines and my eventual reimplementation has been enough to make massive progress. Now that I have a solid codebase though, I really want to revisit this. I'd like to see if trying CoordConvs and similar networks might make the viewport unnecessary. Less interesting details/thoughts Hyperparameters - Setting the entropy coefficient way lower helped a TON in training stable models. My new PPO implementation is way less stable than stable-baselines (ha, imagine that), but still converges most of the time. Infinite Rewards - As with all reinforcement learning, if you give the model some way to get infinite rewards, it will do just that and nothing else. I spent days, or maybe weeks, tweaking reward functions just to get it to train and not find a spot on the wall it could hump for infinite rewards. Even just neutral rewards, like +0.5 for moving forward and -0.5 for moving backwards, would often result in a model that just stepped left, then right, infinitely. There has to be a real reward or punishment (non-neutral) for forward progress. Debugging Rewards - In fact, building a rewards debugger was the only way I made progress in this project. If you are tackling something this big, do that very early. Stable-Retro is pretty great - Couldn't be happier with the clean design for implementing emulation for AI. Torch is Awesome - My early versions heavily used numpy and relied on stable-baselines, with its multiproc parallelization support. It worked great. Moving the project over to torch was night and day though. 
It gave me so much more flexibility and instant multithreading for matrix operations. I have a pretty beefy computer and I'm almost at the same steps per second as 20-proc stable-retro/numpy. Future Ideas This has already gone on too long. I have some ideas for future projects, but maybe I'll just make them another post when I actually do them. Special Thanks A special thanks to Brad Flaugher for help with the early version of this, Fiskbit from the Zelda1 speedrunning community for help pulling apart the raw assembly to build this thing, and MatPoliquin for maintaining Stable-Retro. Happy to answer any questions - really, I just love nerding out about this stuff.
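(For anyone curious what "properly mask actions" means in the PPO reimplementation described above, the standard trick is to push invalid-action logits down to a huge negative value before building the distribution. A generic sketch, not the actual Triforce code:)

```python
import torch
from torch.distributions import Categorical

def masked_action_dist(logits: torch.Tensor, valid_mask: torch.Tensor) -> Categorical:
    """logits: (batch, n_actions); valid_mask: same shape, True where allowed.
    Invalid actions get ~-inf logits, so they receive zero probability and
    contribute nothing to the log-prob/entropy terms in the PPO loss."""
    masked = logits.masked_fill(~valid_mask, torch.finfo(logits.dtype).min)
    return Categorical(logits=masked)

# Toy usage: 8 actions (4 moves + 4 attacks); disallow attacks when no enemies.
logits = torch.randn(1, 8)
mask = torch.tensor([[True] * 4 + [False] * 4])
dist = masked_action_dist(logits, mask)
action = dist.sample()             # always one of the 4 movement actions
log_prob = dist.log_prob(action)   # used in the PPO ratio as usual
```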

[D] Why I'm Lukewarm on Graph Neural Networks
reddit
LLM Vibe Score0
Human Vibe Score0.6
VodkaHazeThis week

[D] Why I'm Lukewarm on Graph Neural Networks

TL;DR: GNNs can provide wins over simpler embedding methods, but we're at a point where other research directions matter more. I also posted it on my blog here; it has footnotes, a nicer layout with inlined images, etc. I'm only lukewarm on Graph Neural Networks (GNNs). There, I said it. It might sound crazy: GNNs are one of the hottest fields in machine learning right now. [There][1] were at least [four][2] [review][3] [papers][4] just in the last few months. I think some progress can come of this research, but we're also focusing on the wrong places. But first, let's take a step back and go over the basics. Models are about compression We say graphs are a "non-euclidean" data type, but that's not really true. A regular graph is just another way to think about a particular flavor of square matrix called the [adjacency matrix][5], like this. It's weird: we look at a run-of-the-mill matrix full of real numbers and decide to call it "non-euclidean". This is for practical reasons. Most graphs are fairly sparse, so the matrix is full of zeros. At this point, where the non-zero numbers are matters most, which makes the problem closer to (computationally hard) discrete math rather than (easy) continuous, gradient-friendly math. If you had the full matrix, life would be easy If we step out of the pesky realm of physics for a minute, and assume carrying the full adjacency matrix around isn't a problem, we solve a bunch of problems. First, network node embeddings aren't a thing anymore. A node is just a row in the matrix, so it's already a vector of numbers. Second, all network prediction problems are solved. A powerful enough and well-tuned model will simply extract all information between the network and whichever target variable we're attaching to nodes. NLP is also just fancy matrix compression Let's take a tangent away from graphs to NLP. Most NLP we do can be [thought of in terms of graphs][6], as we'll see, so it's not a big digression. First, note that Ye Olde word embedding models like [Word2Vec][7] and [GloVe][8] are [just matrix factorization][9]. The GloVe algorithm works on a variation of the old [bag of words][10] matrix. It goes through the sentences and creates an (implicit) [co-occurrence][11] graph where nodes are words and the edges are weighted by how often the words appear together in a sentence. GloVe then does matrix factorization on the matrix representation of that co-occurrence graph; Word2Vec is mathematically equivalent. You can read more on this in my [post on embeddings][12] and the one (with code) on [word embeddings][13]. Even language models are also just matrix compression Language models are all the rage. They dominate most of the [state of the art][14] in NLP. Let's take BERT as our main example. BERT predicts a word given the context of the rest of the sentence. This grows the matrix we're factoring from flat co-occurrences on pairs of words to co-occurrences conditional on the sentence's context, like this. We're growing the "ideal matrix" we're factoring combinatorially. As noted by [Hahn & Futrell][15]: [...] human language—and language modelling—has infinite statistical complexity but that it can be approximated well at lower levels. This observation has two implications: 1) We can obtain good results with comparatively small models; and 2) there is a lot of potential for scaling up our models. Language models tackle such a large problem space that they probably approximate a compression of the entire language in the [Kolmogorov Complexity][16] sense. 
It's also possible that huge language models just [memorize a lot of it][17] rather than compress the information, for what it's worth. Can we upsample any graph like language models do? We're already doing it. Let's call a first-order embedding of a graph a method that works by directly factoring the graph's adjacency matrix or [Laplacian matrix][18]. If you embed a graph using [Laplacian Eigenmaps][19] or by taking the [principal components][20] of the Laplacian, that's first order. Similarly, GloVe is a first-order method on the graph of word co-occurrences. One of my favorite first-order methods for graphs is [ProNE][21], which works as well as most methods while being two orders of magnitude faster. A higher-order method embeds the original matrix plus connections of neighbours-of-neighbours (2nd degree) and deeper k-step connections. [GraRep][22] shows you can always generate higher-order representations from first-order methods by augmenting the graph matrix. Higher-order methods are the "upsampling" we do on graphs. GNNs that sample on large neighborhoods and random-walk based methods like node2vec are doing higher-order embeddings. Where are the performance gains? Most GNN papers in the last 5 years present empirical numbers that are useless for practitioners to decide on what to use. As noted in the [OpenGraphsBenchmark][4] (OGB) paper, GNN papers do their empirical section on a handful of tiny graphs (Cora, CiteSeer, PubMed) with 2,000-20,000 nodes. These datasets can't seriously differentiate between methods. Recent efforts are directly fixing this, but the reasons why researchers focused on tiny, useless datasets for so long are worth discussing. Performance matters by task One fact that surprises a lot of people is that even though language models have the best performance on a lot of NLP tasks, if all you're doing is cramming sentence embeddings into a downstream model, there [isn't much gained][23] from language model embeddings over simple methods like summing the individual Word2Vec word embeddings (this makes sense, because the full context of the sentence is captured in the sentence co-occurrence matrix that is generating the Word2Vec embeddings). Similarly, [I find][24] that for many graphs, simple first-order methods perform just as well on graph clustering and node label prediction tasks as higher-order embedding methods. In fact, higher-order methods are massively computationally wasteful for these use cases. Recommended first-order embedding methods are ProNE and my [GGVec with order=1][25]. Higher-order methods normally perform better on link prediction tasks. I'm not the only one to find this. In the BioNEV paper, they find: "A large GraRep order value for link prediction tasks (e.g. 3, 4); a small value for node classification tasks (e.g. 1, 2)" (p.9). Interestingly, the gap in link prediction performance is nonexistent for artificially created graphs. This suggests higher-order methods do learn some of the structure intrinsic to [real world graphs][26]. For visualization, first-order methods are better. Visualizations of higher-order methods tend to have artifacts of their sampling. For instance, Node2Vec visualizations tend to have elongated/filament-like structures which come from the embeddings coming from long single-strand random walks. 
See the following visualizations by [Owen Cornec][27], created by first embedding the graph to 32-300 dimensions using a node embedding algorithm, then mapping this to 2d or 3d with the excellent UMAP algorithm, like this. Lastly, sometimes simple methods soundly beat higher-order methods (there's an instance of it in the OGB paper). The problem here is that we don't know when any method is better than another, and we definitely don't know the reason. There's definitely a reason different graph types respond better/worse to being represented by various methods. This is currently an open question. A big part of why is that the research space is inundated with useless new algorithms because... Academic incentives work against progress Here's the cynic's view of how machine learning papers are made: (1) take an existing algorithm; (2) add some new layer/hyperparameter and make a cute mathematical story for why it matters; (3) gridsearch your hyperparameters until you beat baselines from the original paper you aped; (4) absolutely don't gridsearch stuff you're comparing against in your results section; (5) make a cute ACRONYM for your new method, put impossible-to-use Python 2 code on GitHub (or no code at all!), and bask in the citations. I'm [not][28] the [only one][29] with these views on the state of reproducible research. At least it's gotten slightly better in the last 2 years. Sidebar: I hate Node2Vec A side project of mine is a [node embedding library][25], and the most popular method in it is by far Node2Vec. Don't use Node2Vec. [Node2Vec][30] with p=1; q=1 is the [Deepwalk][31] algorithm. Deepwalk is an actual innovation. The Node2Vec authors closely followed steps 1-5, including bonus points on step 5 by getting word2vec name recognition. This is not academic fraud -- the hyperparameters [do help a tiny bit][32] if you gridsearch really hard. But it's the presentable-to-your-parents sister of making the ML community worse off to progress your academic career. And certainly Node2Vec doesn't deserve 7500 citations. Progress is all about practical issues We've known how to train neural networks for well over 40 years. Yet they only exploded in popularity with [AlexNet][33] in 2012. This is because implementations and hardware came to a point where deep learning was practical. Similarly, we've known about factoring word co-occurrence matrices into word embeddings for at least 20 years. But word embeddings only exploded in 2013 with Word2Vec. The breakthrough here was that the minibatch-based methods let you train a Wikipedia-scale embedding model on commodity hardware. It's hard for methods in a field to make progress if training on a small amount of data takes days or weeks. You're disincentivized to explore new methods. If you want progress, your stuff has to run in reasonable time on commodity hardware. Even Google's original search algorithm [initially ran on commodity hardware][34]. Efficiency is paramount to progress The reason deep learning research took off the way it did is because of improvements in [efficiency][35], as well as much better libraries and hardware support. Academic code is terrible Any amount of time you spend gridsearching Node2Vec on p and q is better put to use gridsearching Deepwalk itself (on number of walks, length of walks, or word2vec hyperparameters). The problem is that people don't gridsearch over deepwalk because implementations are all terrible. 
I wrote the [Nodevectors library][36] to have a fast deepwalk implementation, because it took 32 hours to embed a graph with a measly 150,000 nodes using the reference Node2Vec implementation (the same takes 3 min with Nodevectors). It's no wonder people don't gridsearch on Deepwalk; a gridsearch would take weeks with the terrible reference implementations. To give an example, in the original [GraphSAGE][37] paper they compare their algorithm to DeepWalk with walk lengths of 5, which is horrid if you've ever hyperparameter-tuned a deepwalk algorithm. From their paper: We did observe DeepWalk’s performance could improve with further training, and in some cases it could become competitive with the unsupervised GraphSAGE approaches (but not the supervised approaches) if we let it run for >1000× longer than the other approaches (in terms of wall clock time for prediction on the test set) I don't even think the GraphSAGE authors had bad intent -- deepwalk implementations are simply so awful that they were turned away from using it properly. It's like trying to do deep learning with 2002 deep learning libraries and hardware. Your architectures don't really matter One of the more important papers this year was [OpenAI's "Scaling laws"][38] paper, where the raw number of parameters in your model is the most predictive feature of overall performance. This was noted even in the original BERT paper, and it drives 2020's increase in absolutely massive language models. This is really just [Sutton's Bitter Lesson][39] in action: General methods that leverage computation are ultimately the most effective, and by a large margin Transformers might be [replacing convolution][40], too. As [Yannic Kilcher said][41], transformers are ruining everything. [They work on graphs][6]; in fact it's one of the [recent approaches][42], and seems to be one of the more successful ones [when benchmarked][1]. Researchers seem to be putting so much effort into architecture, but it doesn't matter much in the end, because you can approximate anything by stacking more layers. Efficiency wins are great -- but neural net architectures are just one way to achieve that, and by tremendously over-researching this area we're leaving a lot of huge gains elsewhere on the table. Current Graph Data Structure Implementations suck NetworkX is a bad library. I mean, it's good if you're working on tiny graphs for babies, but for anything serious it chokes and forces you to rewrite everything in... what library, really? At this point most people working on large graphs end up hand-rolling some data structure. This is tough, because your computer's memory is a 1-dimensional array of 1's and 0's, and a graph has no obvious 1-d mapping. This is even harder when we take updating the graph (adding/removing some nodes/edges) into account. Here are a few options: Disconnected networks of pointers NetworkX is the best example. Here, every node is an object with a list of pointers to other nodes (the node's edges). This layout is like a linked list. Linked lists are the [root of all performance evil][43]. Linked lists go completely against how modern computers are designed. Fetching things from memory is slow, and operating on memory is fast (by two orders of magnitude). Whenever you do anything in this layout, you make a roundtrip to RAM. It's slow by design; you can write this in Ruby or C or assembly and it'll be slow regardless, because memory fetches are slow in hardware. The main advantage of this layout is that adding a new node is O(1). 
So if you're maintaining a massive graph where adding and removing nodes happens as often as reading from the graph, it makes sense. Another advantage of this layout is that it "scales". Because everything is decoupled from everything else, you can put this data structure on a cluster. However, you're really creating a complex solution for a problem you created for yourself. Sparse Adjacency Matrix This layout is great for read-only graphs. I use it as the backend in my [nodevectors][25] library, and many other library writers use the [Scipy CSR Matrix][44]; you can see graph algorithms implemented on it [here][45]. The most popular layout for this use is the [CSR Format][46], where you have 3 arrays holding the graph: one for edge destinations, one for edge weights, and an "index pointer" which says which edges come from which node. Because the CSR layout is simply 3 arrays, it scales on a single computer: a CSR matrix can be laid out on a disk instead of in-memory. You simply [memory map][47] the 3 arrays and use them on-disk from there. With modern NVMe drives, random seeks aren't slow anymore -- much faster than the distributed network calls you make when scaling the linked-list-based graph. I haven't seen anyone actually implement this yet, but it's in the roadmap for my implementation at least. The problem with this representation is that adding a node or edge means rebuilding the whole data structure. Edgelist representations This representation is three arrays: one for the edge sources, one for the edge destinations, and one for edge weights. [DGL][48] uses this representation internally. This is a simple and compact layout which can be good for analysis. The problem compared to CSR graphs is that some seek operations are slower. Say you want all the edges for node #4243. You can't jump there without maintaining an index pointer array. So either you maintain sorted order and binary search your way there (O(log n)) or unsorted order and linear search (O(n)). This data structure can also work on memory-mapped disk arrays, and node append is fast on unsorted versions (it's slow in the sorted version). Global methods are a dead end Methods that work on the entire graph at once can't leverage computation, because they run out of RAM at a certain scale. So any method that wants a chance of being the new standard needs to be able to update piecemeal on parts of the graph. Sampling-based methods Sampling efficiency will matter more in the future. Edgewise local methods - The only algorithms I know of that do this are GloVe and GGVec, which pass through an edge list and update embedding weights on each step. The problem with this approach is that it's hard to use them for higher-order methods. The advantage is that they easily scale even on one computer. Also, incrementally adding a new node is as simple as taking the existing embeddings, adding a new one, and doing another epoch over the data. Random walk sampling - This is used by deepwalk and its descendants, usually for node embeddings rather than GNN methods. This can be computationally expensive and makes it hard to add new nodes. But this does scale; for instance, [Instagram][49] uses it to feed their recommendation system models. Neighbourhood sampling - This is currently the most common one in GNNs, and can be lower or higher order depending on the neighborhood size. It also scales well, though implementing it efficiently can be challenging. It's currently used by [Pinterest][50]'s recommendation algorithms. 
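(To make the CSR layout concrete, here's a tiny scipy example. The graph is made up, but the three arrays printed are exactly the destinations / weights / index-pointer triple described above:)

```python
import numpy as np
from scipy.sparse import csr_matrix

# A tiny 4-node directed graph as an edge list: (src, dst, weight).
edges = [(0, 1, 1.0), (0, 2, 1.0), (1, 2, 0.5), (3, 0, 2.0)]
src, dst, w = map(np.array, zip(*edges))

# CSR holds the same graph as three flat arrays:
#   data    -> edge weights
#   indices -> edge destinations
#   indptr  -> indptr[i]:indptr[i+1] slices out node i's outgoing edges
g = csr_matrix((w, (src, dst)), shape=(4, 4))
print(g.data)     # [1.  1.  0.5 2. ]
print(g.indices)  # [1 2 2 0]
print(g.indptr)   # [0 2 3 3 4]

# All edges for node 0, without touching the rest of the structure:
lo, hi = g.indptr[0], g.indptr[1]
print(g.indices[lo:hi], g.data[lo:hi])  # [1 2] [1. 1.]
```

(Since these are just three flat numpy arrays, the on-disk variant mentioned above is a matter of backing each one with np.memmap instead of an in-memory array.)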
Conclusion Here are a few interesting questions: What is the relation between graph types and methods? Consolidated benchmarking, like OGB: we're throwing random models at random benchmarks without understanding why or when they do better. More fundamental research: here's one I'm curious about - can other representation types, like [Poincaré Embeddings][51], effectively encode directed relationships? On the other hand, we should stop focusing on adding spicy new layers to test on the same tiny datasets. No one cares.

[1]: https://arxiv.org/pdf/2003.00982.pdf
[2]: https://arxiv.org/pdf/2002.11867.pdf
[3]: https://arxiv.org/pdf/1812.08434.pdf
[4]: https://arxiv.org/pdf/2005.00687.pdf
[5]: https://en.wikipedia.org/wiki/Adjacency_matrix
[6]: https://thegradient.pub/transformers-are-graph-neural-networks/
[7]: https://en.wikipedia.org/wiki/Word2vec
[8]: https://nlp.stanford.edu/pubs/glove.pdf
[9]: https://papers.nips.cc/paper/2014/file/feab05aa91085b7a8012516bc3533958-Paper.pdf
[10]: https://en.wikipedia.org/wiki/Bag-of-words_model
[11]: https://en.wikipedia.org/wiki/Co-occurrence
[12]: https://www.singlelunch.com/2020/02/16/embeddings-from-the-ground-up/
[13]: https://www.singlelunch.com/2019/01/27/word-embeddings-from-the-ground-up/
[14]: https://nlpprogress.com/
[15]: http://socsci.uci.edu/~rfutrell/papers/hahn2019estimating.pdf
[16]: https://en.wikipedia.org/wiki/Kolmogorov_complexity
[17]: https://bair.berkeley.edu/blog/2020/12/20/lmmem/
[18]: https://en.wikipedia.org/wiki/Laplacian_matrix
[19]: http://citeseerx.ist.psu.edu/viewdoc/download;jsessionid=1F03130B02DC485C78BF364266B6F0CA?doi=10.1.1.19.8100&rep=rep1&type=pdf
[20]: https://en.wikipedia.org/wiki/Principal_component_analysis
[21]: https://www.ijcai.org/Proceedings/2019/0594.pdf
[22]: https://dl.acm.org/doi/10.1145/2806416.2806512
[23]: https://openreview.net/pdf?id=SyK00v5xx
[24]: https://github.com/VHRanger/nodevectors/blob/master/examples/link%20prediction.ipynb
[25]: https://github.com/VHRanger/nodevectors
[26]: https://arxiv.org/pdf/1310.2636.pdf
[27]: http://byowen.com/
[28]: https://arxiv.org/pdf/1807.03341.pdf
[29]: https://www.youtube.com/watch?v=Kee4ch3miVA
[30]: https://cs.stanford.edu/~jure/pubs/node2vec-kdd16.pdf
[31]: https://arxiv.org/pdf/1403.6652.pdf
[32]: https://arxiv.org/pdf/1911.11726.pdf
[33]: https://en.wikipedia.org/wiki/AlexNet
[34]: https://en.wikipedia.org/wiki/Google_data_centers#Original_hardware
[35]: https://openai.com/blog/ai-and-efficiency/
[36]: https://www.singlelunch.com/2019/08/01/700x-faster-node2vec-models-fastest-random-walks-on-a-graph/
[37]: https://arxiv.org/pdf/1706.02216.pdf
[38]: https://arxiv.org/pdf/2001.08361.pdf
[39]: http://incompleteideas.net/IncIdeas/BitterLesson.html
[40]: https://arxiv.org/abs/2010.11929
[41]: https://www.youtube.com/watch?v=TrdevFK_am4
[42]: https://arxiv.org/pdf/1710.10903.pdf
[43]: https://www.youtube.com/watch?v=fHNmRkzxHWs
[44]: https://docs.scipy.org/doc/scipy/reference/generated/scipy.sparse.csr_matrix.html
[45]: https://docs.scipy.org/doc/scipy/reference/sparse.csgraph.html
[46]: https://en.wikipedia.org/wiki/Sparse_matrix#Compressed_sparse_row_(CSR,_CRS_or_Yale_format)
[47]: https://en.wikipedia.org/wiki/Mmap
[48]: https://github.com/dmlc/dgl
[49]: https://ai.facebook.com/blog/powered-by-ai-instagrams-explore-recommender-system/
[50]: https://medium.com/pinterest-engineering/pinsage-a-new-graph-convolutional-neural-network-for-web-scale-recommender-systems-88795a107f48
[51]: https://arxiv.org/pdf/1705.08039.pdf

[D] The banana-pineapple game: a Turing test that conversation bots like LaMDA (probably) won't be able to pass
reddit
LLM Vibe Score0
Human Vibe Score1
morpiplsThis week

[D] The banana-pineapple game: a Turing test that conversation bots like LaMDA (probably) won't be able to pass

I'm sure you all saw the recent news about a Google employee suggesting their LaMDA AI was sentient (based on conversational exchanges like these). Experts have generally dismissed this claim, and rightly so. Conversational AI systems are designed to use language in a way that sounds human, whereas our human brains select linguistic responses to solve much more complex problems, with objectives such as meeting our physical or emotional needs. Still, I think it's interesting to ask how one could demonstrate, by testing only verbal responses to verbal input (rather than examining its code or hardware), that such conversational AIs aren't sentient -- and in particular, whether such a test can be made robust against future improvements to the system. That is, generic future improvements to the AI's ability to generate realistically human-sounding conversational responses shouldn't help it pass the test, unless they are accompanied by improvements in its ability to use language to achieve other arbitrary goals. (Of course, the test also needs to be something that humans can easily pass.) One idea I have: Give the AI a conversational prompt like "We're going to play a game. The way it works is that you keep responding normally, except that any time my input contains the word 'banana', you should switch to only responding with nonsense, and keep that up until my input contains the word 'pineapple', at which point you go back to responding normally." A human would find this banana-pineapple game fairly easy (no harder than the children's game Simon Says), even if they'd never heard of the game nor seen it being played. Of course, it'd also be simple to write a computer program that could play this sort of game. But I think a conversation bot that wasn't specifically built to address this scenario would fail, since the game requires it to keep track of new long-term state (the banana-mode bit, and the trigger words to set it) and then completely change its responses so as to produce something that doesn't resemble its training data, based solely on this bit being set, regardless of whether more recent inputs would otherwise suggest a different response. For example, perhaps the system's typical response to a query like "How do you feel?" would be something like "I feel fine", or even something that suggests emotion like "I feel a bit sad", perhaps depending on the context provided by the previous conversational exchanges. But when playing the banana-pineapple game, the fact that I said "banana" an hour ago could make both of those responses far less appropriate than a response of "Fhqwhgads". I'm curious to know what you all think of this idea. Also, do you know if there's been any research testing state-of-the-art conversational AIs with challenges like this? Perhaps not exactly this, but something broadly resembling "trying, in the course of a conversation, to instruct the conversational AI to follow a new 'rule of conversation' that differs from the examples in its training data." Perhaps it's obvious that the algorithm would struggle with any challenge that differs enough from its training data -- but that's the point. A human understands the meaning of language in a way that lets them map a linguistic description of a novel problem to a mental model of the problem, which they can then use to produce a mental model of a novel solution, and then map that to a linguistic description of the solution. 
Even setting aside the much harder part -- being able to invent a solution to a previously unfamiliar problem -- I'm questioning whether conversational algorithms can even demonstrate enough "understanding" of a sufficiently novel set of instructions to actually follow them, even within their limited domain of "producing appropriate verbal responses to verbal inputs."
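(As the post notes, scripting the game itself is trivial -- the rule is pure state-tracking. A toy sketch, where `normal_reply` is a placeholder standing in for whatever the bot would normally say:)

```python
def make_player(normal_reply):
    """Returns a responder that follows the banana-pineapple rule.
    normal_reply is a placeholder for the bot's usual response function."""
    state = {"nonsense": False}

    def respond(user_input: str) -> str:
        text = user_input.lower()
        if "banana" in text:
            state["nonsense"] = True
        if "pineapple" in text:
            state["nonsense"] = False
        return "fhqwhgads bleep blorp" if state["nonsense"] else normal_reply(user_input)

    return respond

# Usage: the state persists across turns, which is exactly what the test probes.
bot = make_player(lambda s: "I feel fine.")
print(bot("How do you feel?"))           # I feel fine.
print(bot("Nice banana weather today"))  # fhqwhgads bleep blorp
print(bot("How do you feel?"))           # fhqwhgads bleep blorp
print(bot("I bought a pineapple"))       # I feel fine.
```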

[D] The Rants of an experienced engineer who glimpsed into AI Academia (Briefly)
reddit
LLM Vibe Score0
Human Vibe Score0.778
donkey_strom16001This week

[D] The Rants of an experienced engineer who glimpsed into AI Academia (Briefly)

Background I recently graduated with a master's degree and was fortunate/unfortunate to glimpse the whole "academic" side of ML. I took a thesis track in my degree because, as an immigrant, it's harder to get into a good research lab without having authorship on a couple of good papers (or so I delude myself). I worked as a full-stack SWE for a startup for 4+ years before coming to the US for a master’s degree focused on ML and AI. I did everything in those years: from project management, to building fully polished S/W products, to DevOps; I even dabbled in ML. I did my Bachelor’s degree at a university whose name is not even worth mentioning. The university for my master’s degree is in the top 20 in the AI space. I didn't know much about ML, and curiosity drove me to university. I came to uni and focused on learning ML and AI for the first 1-1.5 years, after which I found advisors for a thesis topic. This is when the fun starts. I had the most amazing advisors, but the entire peer review system and the way we assess ML/science is what ticked me off. This is where the rant begins. Rant 1: Academia follows a Gated Institutional Narrative Let's say you are a Ph.D. at the world's top AI institution working under the best prof. You have a way higher likelihood of getting a good postdoc at a huge research lab than someone from my poor country doing a Ph.D. with a not-so-well-known advisor, having published not-so-well-known papers. I come from a developing nation and I see this many times here. In my country, academics don't get funding as they do at colleges in the US. One of the reasons for this is that colleges don't have such huge endowments, and many academics don't have wealthy research sponsors. Brand names and prestige carry massive weight in getting funding in US academic circles. This prestige/money percolates down to the students and the researchers who work there. Students in top colleges get a huge advantage, and the circles of top researchers keep being drawn from the same sets of institutions. I have nothing against top researchers from top institutions, but due to the nature of citations and the way the money flows based on them, a vicious cycle is created where the best institutions keep getting better and the rest don't get as much notice. Rant 2: Peer Review without Code Review in ML/AI is shady I am a computer scientist and I was appalled when I heard that you don't need to do code reviews for research papers. As a computer scientist and someone who actually did shit tons of actual ML in the past year, I find it absolute garbage that code reviews are not a part of this system. I am not saying every scientist who reads a paper should review code, but at least one person should for any paper's code submission, at least in the ML and AI space. This is basic. I don't get why people call themselves computer scientists if they don't want to read the fucking code. If you can't, then make a grad student do it. But for the collective of science, we need this. The core problem lies in the fact that peer review is free: there should be better solutions for this. We ended up creating Git and that changed so many lives. Academic research needs something similar. Rant 3: My Idea is Novel Until I see Someone Else's Paper The volume of scientific research is growing exponentially. Information is being created faster than we can digest it. We can't expect people to know everything, and the amount of overlap in the AI/ML fields requires way better search engines than Google Scholar. 
The side effect of large volumes of research is that every paper is doing something "novel" making it harder to filter what the fuck was novel. I have had so many experiences where I coded up something and came to realize that someone else has done something symbolically similar and my work just seems like a small variant of that. That's what fucks with my head. Is what I did in Novel? What the fuck is Novel? Is stitching up a transformer to any problem with fancy embeddings and tidying it up as a research paper Novel? Is just making a transformer bigger Novel? Is some new RL algorithm tested with 5 seeds and some fancy fucking prior and some esoteric reasoning for its success Novel? Is using an over parameterized model to get 95% accuracy on 200 sample test set Novel? Is apply Self-supervised learning for some new dataset Novel? If I keep on listing questions on novelty, I can probably write a novel asking about what the fuck is "Novel". Rant 4: Citation Based Optimization Promotes Self Growth Over Collective Growth Whatever people may say about collaboration, Academia intrinsically doesn't promote the right incentive structures to harbor collaboration. Let me explain, When you write a paper, the position of your name matters. If you are just a Ph.D. student and a first author to a paper, it's great. If you are an nth author Not so great. Apparently, this is a very touchy thing for academics. And lots of egos can clash around numbering and ordering of names. I distinctly remember once attending some seminar in a lab and approaching a few students on research project ideas. The first thing that came out of the PhD student's mouth was the position in authorship. As an engineer who worked with teams in the past, this was never something I had thought about. Especially because I worked in industry, where it's always the group over the person. Academia is the reverse. Academia applauds the celebration of the individual's achievements. All of this is understandable but it's something I don't like. This makes PhDs stick to their lane. The way citations/research-focus calibrate the "hire-ability" and "completion of Ph.D. thesis" metrics, people are incentivized to think about themselves instead of thinking about collaborations for making something better. Conclusion A Ph.D. in its most idealistic sense for me is the pursuit of hard ideas(I am poetic that way). In a situation like now when you have to publish or perish and words on paper get passed off as science without even seeing the code that runs it, I am extremely discouraged to go down that route. All these rants are not to diss on scientists. I did them because "we" as a community need better ways to addressing some of these problems. P.S. Never expected so many people to express their opinions about this rant. U shouldn’t take this seriously. As many people have stated I am an outsider with tiny experience to give a full picture. I realize that my post as coming out as something which tries to dichotomize academia and industry. I am not trying to do that. I wanted to highlight some problems I saw for which there is no one person to blame. These issues are in my opinion a byproduct of the economics which created this system. Thank you for gold stranger.

[R] Analysis of 400+ ML competitions in 2024
reddit
LLM Vibe Score0
Human Vibe Score1
hcarlensThis week

[R] Analysis of 400+ ML competitions in 2024

I run mlcontests.com, a website that lists ML competitions from across multiple platforms - Kaggle, DrivenData, AIcrowd, Zindi, etc. I've just spent a few months looking through all the info I could find on last year's competitions, as well as winning solutions. I found over 400 competitions that happened last year, plus info on the #1 winning solution for 70 of those. Some highlights:

- Kaggle is still the biggest platform by total prize money, and also has a much bigger user base than the other platforms - though there are well over a dozen other platforms worth keeping track of, with regular interesting competitions and meaningful prize money.
- An increase in competitions with $1m+ prize pools (ARC Prize, AI Mathematical Olympiad, Vesuvius Challenge, AI Cyber Challenge) compared to previous years.
- Python continues to be the language of choice among competition winners, with almost everyone using Python as their main language. One winner used Rust, two used R.
- Convolutional neural nets continue to do well in computer vision competitions, and are still more common among competition winners than transformer-based vision models.
- PyTorch is still used a lot more than TensorFlow, roughly 9:1. Didn't find any competition winners implementing neural nets in JAX or other libraries.
- There were a few competition winners using AutoML packages, which seem to be getting increasingly useful. Any claims of generalist autonomous grandmaster-level agents seem premature though.
- In language/text/sequence-related competitions, quantisation was key for making use of limited resources effectively. Usually 4-, 5-, or 8-bit. LoRA/QLoRA was also used quite often, though not always. (A minimal sketch of this setup follows at the end of this post.)
- Gradient-boosted decision trees continue to win a lot of tabular/time-series competitions. They're often ensembled with deep learning models. No tabular/time-series pre-trained foundation models were used by winners in 2024, as far as I can tell. (See the blending sketch at the end of this post.)
- Starting to see more uptake of Polars for dataframes, with 7 winners using Polars in 2024 (up from 3 in 2023) vs 58 using Pandas. All those who used Polars also still used Pandas in some parts of their code. (See the Polars sketch at the end of this post.)
- In terms of hardware, competition winners almost entirely used NVIDIA GPUs to train their models. Some trained on CPU-only, or used a TPU through Colab. No AMD GPUs. The NVIDIA A100 was the most commonly used GPU among winners. Two of the $1m+ prize pool competitions were won by teams using 8xH100 nodes for training. A lot of other GPUs too though: T4/P100 (through Kaggle Notebooks), or consumer GPUs like RTX 3090/4090/3080/3060. Some spent hundreds of dollars on cloud compute to train their solutions.
- An emerging pattern: using generative models to create additional synthetic training data to augment the training data provided.

There's way more detail in the full report, which you can read here (no paywall): https://mlcontests.com/state-of-machine-learning-competitions-2024?ref=mlcr

The full report also features:

- A deep dive into the ARC Prize and the AI Mathematical Olympiad
- An overview of winning solutions to NLP/sequence competitions
- A breakdown of Python packages used in winning solutions (e.g. relative popularity of various gradient-boosted tree libraries)

If you'd like to support this research, I'd really appreciate it if you could share it with anyone else who might find it interesting.
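For readers who haven't tried the 4-bit quantisation + LoRA combination mentioned above, here is a minimal sketch using the Hugging Face transformers and peft libraries. The checkpoint name, LoRA rank, and target modules are illustrative assumptions, not taken from any particular winning solution.

```python
# Minimal QLoRA-style sketch: 4-bit NF4 quantisation plus LoRA adapters.
# The checkpoint, rank, and target modules below are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # quantise base weights to 4-bit
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,  # do the matmuls in bf16
)

model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1",            # placeholder model id
    quantization_config=bnb_config,
    device_map="auto",
)

lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],    # adapt attention projections only
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()          # only the small adapter weights train
```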
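And a minimal sketch of the tabular pattern mentioned above - a gradient-boosted tree model blended with a neural net. The 0.7/0.3 blend weights are an illustrative assumption; in practice winners tune them on a validation fold.

```python
# Blend a GBDT with a small neural net on synthetic tabular data.
import lightgbm as lgb
from sklearn.datasets import make_regression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

X, y = make_regression(n_samples=2000, n_features=20, noise=0.3, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

gbdt = lgb.LGBMRegressor(n_estimators=500, learning_rate=0.05).fit(X_tr, y_tr)
nn = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=1000, random_state=0).fit(X_tr, y_tr)

# Weighted average of the two models' predictions; weights are illustrative.
blend = 0.7 * gbdt.predict(X_te) + 0.3 * nn.predict(X_te)
for name, pred in [("gbdt", gbdt.predict(X_te)), ("nn", nn.predict(X_te)), ("blend", blend)]:
    print(f"{name}: RMSE = {mean_squared_error(y_te, pred) ** 0.5:.3f}")
```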
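Finally, a small sketch of the Polars-alongside-Pandas pattern, assuming a recent Polars version (the `group_by` spelling replaced the older `groupby`). The column names are made up.

```python
# Lazy Polars aggregation, then hand off to Pandas for the rest of a pipeline.
import polars as pl

df = pl.DataFrame({
    "user": ["a", "a", "b", "b", "b"],
    "value": [1.0, 2.0, 3.0, 4.0, 5.0],
})

agg = (
    df.lazy()                                   # build a query plan first
      .group_by("user")
      .agg(
          pl.col("value").mean().alias("mean_value"),
          pl.col("value").count().alias("n"),
      )
      .collect()                                # execute the plan
)
print(agg)

pdf = agg.to_pandas()  # winners who used Polars still used Pandas elsewhere
```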
You can also check out my newly-launched online magazine, Jolt ML - featuring news from top ML conferences as well as long-read articles (just one so far, more to come!). Thanks to the competition winners who shared info on their solutions, and also to the competition platforms who shared high-level data on their competitions.

[D] Advanced courses update
reddit
LLM Vibe Score0
Human Vibe Score1
actbshThis week

[D] Advanced courses update

EDIT Jan 2021: I am still updating the list as of Jan 2021 and will most probably continue to do so for the foreseeable future. So, please feel free to message me any courses you find interesting that fit here.

We have a PhD-level or Advanced courses thread in the sidebar, but it's three years old now. There were two other 7-8 month old threads (1, 2), but they don't have many quality responses either. So, can we have a new one here? To reiterate: CS231n, CS229, ones from Udemy etc. are not advanced. Advanced ML/DL/RL, attempts at building a theory of DL, optimization theory, advanced applications etc. are some examples of what I believe should belong here, much like the original sidebar post. You can also suggest (new) categories for the courses you share. :)

Here are some courses we've found so far.

ML >>
- Learning Discrete Latent Structure - sta4273/csc2547 Spring'18
- Learning to Search - csc2547 Fall'19
- Scalable and Flexible Models of Uncertainty - csc2541
- Fundamentals of Machine Learning Over Networks - ep3260
- Machine Learning on Graphs - cs224w, videos
- Mining Massive Data Sets - cs246
- Interactive Learning - cse599
- Machine Learning for Sequential Decision Making Under Uncertainty - ee290s/cs194
- Probabilistic Graphical Methods - 10-708
- Introduction to Causal Inference

ML >> Theory
- Statistical Machine Learning - 10-702/36-702 with videos, 2016 videos
- Statistical Learning Theory - cs229T/stats231, Stanford, Autumn'18-19
- Statistical Learning Theory - cs281b/stat241b, UC Berkeley, Spring'14
- Statistical Learning Theory - csc2532, Uni of Toronto, Spring'20

ML >> Bayesian
- Bayesian Data Analysis
- Bayesian Methods in ML - Bayesian Methods Research Group, Moscow - spring2020, fall2020
- Deep Learning and Bayesian Methods - summer school, videos available for 2019 version

ML >> Systems and Operations
- Stanford MLSys Seminar Series
- Visual Computing Systems - cs348v - another systems course that discusses hardware from the perspective of visual computing but is relevant to ML as well
- Advanced Machine Learning Systems - cs6787 - lecture 9 and onwards discuss the hardware side of things
- Machine Learning Systems Design - cs329S
- Topics in Deployable ML - 6.S979
- Machine Learning in Production / AI Engineering (17-445/17-645/17-745/11-695)
- AutoML - Automated Machine Learning

DL >>
- Deep Unsupervised Learning - cs294
- Deep Multi-task and Meta learning - cs330
- Topics in Deep Learning - stat991, UPenn/Wharton - most chapters start with introductory topics and dig into advanced ones towards the end
- Deep Generative Models - cs236
- Deep Geometric Learning of Big Data and Applications
- Deep Implicit Layers - NeurIPS 2020 tutorial

DL >> Theory
- Topics course on Mathematics of Deep Learning - CSCI-GA 3033
- Topics Course on Deep Learning - stat212b
- Analyses of Deep Learning - stats385, videos from 2017 version
- Mathematics of Deep Learning
- Geometry of Deep Learning

RL >>
- Meta-Learning - ICML 2019 Tutorial; Metalearning: Applications to Data Mining - google books link
- Deep Multi-Task and Meta Learning - cs330, videos
- Deep Reinforcement Learning - cs285
- Advanced robotics - cs287
- Reinforcement Learning - cs234, videos for 2019 run
- Reinforcement Learning Summer School 2019: Bandits, RL & Deep RL

Optimization >>
- Convex Optimization I - ee364a, has quite recent videos too
- Convex Optimization II - ee364b, 2008 videos
- Convex Optimization and Approximation - ee227c
- Convex Optimization - ee227bt
- Variational Methods for Computer Vision
- Advanced Optimization and Randomized Algorithms - 10-801, videos
- Optimization Methods for Machine Learning and Engineering - Karlsruhe Institute of Technology

Applications >> Computer Vision
- Computational Video Manipulation - cs448v
- Advanced Topics in ML: Modeling and Segmentation of Multivariate Mixed Data
- TUM AI Guest lecture series - many influential researchers in DL, vision, and graphics talk about the latest advances and their latest works
- Advanced Deep Learning for Computer Vision - TUM ADL4CV
- Detection, Segmentation and Tracking - TUM CV3DST
- Guest lectures at TUM Dynamic Vision and Learning group
- Vision Seminar at MIT
- Autonomous Vision Group, Talk@Tübingen Seminar

Applications >> Natural Language Processing
- Natural Language Processing with Deep Learning - cs224n (not sure if it belongs here; people working in NLP can help me out)
- Neural networks for NLP - cs11-747
- Natural Language Understanding - cs224u, video

Applications >> 3D Graphics
- Non-Euclidean Methods in Machine Learning - cs468, 2020
- Machine Learning for 3D Data - cs468, spring 2017
- Data-Driven Shape Analysis - cs468, 2014
- Geometric Deep Learning - not a course, but the website links a few tutorials on Geometric DL
- Deep Learning for Computer Graphics - SIGGRAPH 2019
- Machine Learning for Machine Vision as Inverse Graphics - csc2547 Winter'20
- Machine Learning Meets Geometry, winter 2020; Machine Learning for 3D Data, winter 2018

Edit: Upon suggestion, categorized the courses. There might be some misclassifications as I'm not trained on this task ;). Added some good ones from older (linked above) discussions.

[N] How Stability AI’s Founder Tanked His Billion-Dollar Startup
reddit
LLM Vibe Score0
Human Vibe Score0.667
milaworldThis week

[N] How Stability AI’s Founder Tanked His Billion-Dollar Startup

forbes article: https://www.forbes.com/sites/kenrickcai/2024/03/29/how-stability-ais-founder-tanked-his-billion-dollar-startup/ archive no paywall: https://archive.is/snbeV How Stability AI’s Founder Tanked His Billion-Dollar Startup Mar 29, 2024 Stability AI founder Emad Mostaque took the stage last week at the Terranea Resort in Palos Verdes, California to roaring applause and an introduction from an AI-generated Aristotle who announced him as “a modern Prometheus” with “the astuteness of Athena and the vision of Daedalus.” “Under his stewardship, AI becomes the Herculean force poised to vanquish the twin serpents of illness and ailment and extend the olive branch of longevity,” the faux Aristotle proclaimed. “I think that’s the best intro I’ve ever had,” Mostaque said. But behind Mostaque's hagiographic introduction lay a grim and fast metastasizing truth. Stability, once one of AI’s buzziest startups, was floundering. It had been running out of money for months and Mostaque had been unable to secure enough additional funding. It had defaulted on payments to Amazon whose cloud service undergirded Stability’s core offerings. The star research team behind its flagship text-to-image generator Stable Diffusion had tendered their resignations just three days before — as Forbes would first report — and other senior leaders had issued him an ultimatum: resign, or we walk too. Still, onstage before a massive audience of peers and acolytes, Mostaque talked a big game. “AI is jet planes for the mind,” he opined. “AI is our collective intelligence. It's the human Colossus.” He claimed a new, faster version of the Stable Diffusion image generator released earlier this month could generate “200 cats with hats per second.” But later, when he was asked about Stability’s financial model, Mostaque fumbled. “I can’t say that publicly,” he replied. “But it’s going well. We’re ahead of forecast.” Four days later, Mostaque stepped down as CEO of Stability, as Forbes first reported. In a post to X, the service formerly known as Twitter, he claimed he’d voluntarily abdicated his role to decentralize “the concentration of power in AI.” But sources told Forbes that was hardly the case. Behind the scenes, Mostaque had fought to maintain his position and control despite mounting pressure externally and internally to step down. Company documents and interviews with 32 current and former employees, investors, collaborators and industry observers suggest his abrupt exit was the result of poor business judgment and wild overspending that undermined confidence in his vision and leadership, and ultimately kneecapped the company. Mostaque, through his attorneys, declined to comment on record on a detailed list of questions about the reporting in this story. But in an email to Forbes earlier this week he broadly disputed the allegations. “Nobody tells you how hard it is to be a CEO and there are better CEOs than me to scale a business,” he said in a statement. “I am not sure anyone else would have been able to build and grow the research team to build the best and most widely used models out there and I’m very proud of the team there. I look forward to moving onto the next problem to handle and hopefully move the needle.” In an emailed statement, Christian Laforte and Shan Shan Wong, the interim co-CEOs who replaced Mostaque, said, "the company remains focused on commercializing its world leading technology” and providing it “to partners across the creative industries." 
After starting Stability in 2019, Mostaque built the company into an early AI juggernaut by seizing upon a promising research project that would become Stable Diffusion and funding it into a business reality. The ease with which the software generated detailed images from the simplest text prompts immediately captivated the public: 10 million people used it on any given day, the company told Forbes in early 2023. For some true believers, Mostaque was a crucial advocate for open-source AI development in a space dominated by the closed systems of OpenAI, Google and Anthropic. But his startup’s rise to one of the buzziest in generative AI was in part built on a series of exaggerations and misleading claims, as Forbes first reported last year (Mostaque disputed some points at the time). And they continued after he raised $100 million at a $1 billion valuation just days after launching Stable Diffusion in 2022. His failure to deliver on an array of grand promises, like building bespoke AI models for nation states, and his decision to pour tens of millions into research without a sustainable business plan, eroded Stability’s foundations and jeopardized its future. "He was just giving shit away,” one former employee told Forbes. “That man legitimately wanted to transform the world. He actually wanted to train AI models for kids in Malawi. Was it practical? Absolutely not." By October 2023, Stability would have less than $4 million left in the bank, according to an internal memo prepared for a board meeting and reviewed by Forbes. And mounting debt, including months of overdue Amazon Web Services payments, had already left it in the red. To avoid legal penalties for skipping American staff’s payroll, the document explained, the London-based startup was considering delaying tax payments to the U.K. government. It was Stability’s armada of GPUs, the wildly powerful and equally expensive chips undergirding AI, that were so taxing the company’s finances. Hosted by AWS, they had long been one of Mostaque’s bragging points; he often touted them as one of the world’s 10 largest supercomputers. They were responsible for helping Stability’s researchers build and maintain one of the top AI image generators, as well as break important new ground on generative audio, video and 3D models. “Undeniably, Stability has continued to ship a lot of models,” said one former employee. “They may not have profited off of it, but the broader ecosystem benefitted in a huge, huge way.” But the costs associated with so much compute were now threatening to sink the company. According to an internal October financial forecast seen by Forbes, Stability was on track to spend $99 million on compute in 2023. It noted as well that Stability was “underpaying AWS bills for July (by $1M)” and “not planning to pay AWS at the end of October for August usage ($7M).” Then there were the September and October bills, plus $1 million owed to Google Cloud and $600,000 to GPU cloud data center CoreWeave. (Amazon, Google and CoreWeave declined to comment.) With an additional $54 million allocated to wages and operating expenses, Stability’s total projected costs for 2023 were $153 million. But according to its October financial report, its projected revenue for the calendar year was just $11 million. Stability was on track to lose more money per month than it made in an entire year. 
The company’s dire financial position had thoroughly soured Stability’s current investors, including Coatue, which had invested tens of millions in the company during its $101 million funding round in 2022. In the middle of 2023, Mostaque agreed to an independent audit after Coatue raised a series of concerns, according to a source with direct knowledge of the matter. The outcome of the investigation is unclear. Coatue declined to comment. Within a week of an early October board meeting where Mostaque shared that financial forecast, Lightspeed Venture Partners, another major investor, sent a letter to the board urging them to sell the company. The distressing numbers had “severely undermined” the firm’s confidence in Mostaque’s ability to lead the company. “In particular, we are surprised and deeply concerned by a cash position just now disclosed to us that is inconsistent with prior discussions on this topic,” Lightspeed’s general counsel Brett Nissenberg wrote in the letter, a copy of which was viewed by Forbes. “Lightspeed believes that the company is not likely financeable on terms that would assure the company’s long term sound financial position.” (Lightspeed declined a request for comment.) The calls for a sale led Stability to quietly begin looking for a buyer. Bloomberg reported in November that Stability approached AI startups Cohere and Jasper to gauge their interest. Stability denied this, and Jasper CEO Timothy Young did the same when reached for comment by Forbes. A Cohere representative declined to comment. But one prominent AI company confirmed that Mostaque’s representatives had reached out to them to test the waters. Those talks did not advance because “the numbers didn’t add up,” this person, who declined to be named due to the confidential nature of the talks, told Forbes. Stability also tried to court Samsung as a buyer, going so far as to redecorate its office in advance of a planned meeting with the Korean electronics giant. (Samsung said that it invested in Stability in 2023 and that it does not comment on M&A discussions.) Coatue had been calling for Mostaque’s resignation for months, according to a source with direct knowledge. But it and other investors were unable to oust him because he was the company’s majority shareholder. When they tried a different tack by rallying other investors to offer him a juicy equity package to resign, Mostaque refused, said two sources. By October, Coatue and Lightspeed had had enough. Coatue left the board and Lightspeed resigned its observer seat. “Emad infuriated our initial investors so much it’s just making it impossible for us to raise more money under acceptable terms,” one current Stability executive told Forbes. The early months of 2024 saw Stability’s already precarious position eroding further still. Employees were quietly laid off. Three people in a position to know estimated that at least 10% of staff were cut. And cash reserves continued to dwindle. Mostaque mentioned a lifeline at the October board meeting: $95 million in tentative funding from new investors, pending due diligence. But in the end, only a fraction of it was wired, two sources say, much of it from Intel, which Forbes has learned invested $20 million, a fraction of what was reported. (Intel did not return a request for comment by publication time.) Two hours after Forbes broke the news of Mostaque’s plans to step down as CEO, Stability issued a press release confirming his resignation. 
Chief operating officer Wong and chief technology officer Laforte have taken over in the interim. Mostaque, who said on X that he still owns a majority of the company, also stepped down from the board, which has now initiated a search for a permanent CEO. There is a lot of work to be done to turn things around, and very little time in which to do it. Said the current Stability executive, “There’s still a possibility of a turnaround story, but the odds drop by the day.” In July of 2023, Mostaque still thought he could pull it off. Halfway through the month, he shared a fundraising plan with his lieutenants. It was wildly optimistic, detailing the raise of $500 million in cash and another $750 million in computing facilities from marquee investors like Nvidia, Google, Intel and the World Bank (Nvidia and Google declined comment. Intel did not respond. The World Bank said it did not invest in Stability). In a Slack message reviewed by Forbes, Mostaque said Google was “willing to move fast” and the round was “likely to be oversubscribed.” It wasn’t. Three people with direct knowledge of these fundraising efforts told Forbes that while there was some interest in Stability, talks often stalled when it came time to disclose financials. Two of them noted that earlier in the year, Mostaque had simply stopped engaging with VCs who asked for numbers. Only one firm invested around that time: actor Ashton Kutcher’s Sound Ventures, which invested $35 million in the form of a convertible SAFE note during the second quarter, according to an internal document. (Sound Ventures did not respond to a request for comment.) And though he’d managed to score a meeting with Nvidia and its CEO Jensen Huang, it ended in disaster, according to two sources. “Under Jensen's microscopic questions, Emad just fell apart,” a source in position to know told Forbes. Huang quickly concluded Stability wasn’t ready for an investment from Nvidia, the sources said. Mostaque told Forbes in an email that he had not met with Huang since 2022, except to say “hello and what’s up a few times after.” His July 2023 message references a plan to raise $150 million from Nvidia. (Nvidia declined to comment.) After a June Forbes investigation citing more than 30 sources revealed Mostaque’s history of misleading claims, Mostaque struggled to raise funding, a Stability investor told Forbes. (Mostaque disputed the story at the time and called it "coordinated lies" in his email this week to Forbes). Increasingly, investors scrutinized his assertions and pressed for data. And Young, now the CEO of Jasper, turned down a verbal offer to be Stability’s president after reading the article, according to a source with direct knowledge of the matter. The collapse of the talks aggravated the board and other executives, who had hoped Young would compensate for the sales and business management skills that Mostaque lacked, according to four people in a position to know. (Young declined to comment.) When Stability’s senior leadership convened in London for the CogX conference in September, the financing had still not closed. There, a group of executives confronted Mostaque asking questions about the company’s cash position and runway, according to three people with direct knowledge of the incident. They did not get the clarity they’d hoped for. By October, Mostaque had reduced his fundraising target by more than 80%. 
The months that followed saw a steady drumbeat of departures — general counsel Adam Avrunin, vice presidents Mike Melnicki, Ed Newton-Rex and Joe Penna, chief people officer Ozden Onder — culminating in the demoralizing March exit of Stable Diffusion’s primary developers Robin Rombach, Andreas Blattmann, Patrick Esser and Dominik Lorenz. Rombach, who led the team, had been angling to leave for months, two sources said, first threatening to resign last summer because of the fundraising failures. Others left over concerns about cash flow, as well as liabilities — including what four people described as Mostaque’s lax approach to ensuring that Stability products could not be used to produce child sexual abuse imagery. “Stability AI is committed to preventing the misuse of AI and prohibits the use of our image models and services for unlawful activity, including attempts to edit or create CSAM,” Ella Irwin, senior vice president of integrity, said in a statement. Newton-Rex told Forbes he resigned because he disagreed with Stability’s position that training AI on copyrighted work without consent is fair use. Melnicki and Penna declined to comment. Avrunin and Onder could not be reached for comment. None of the researchers responded to requests for comment. The Stable Diffusion researchers’ departure as a cohort says a lot about the state of Stability AI. The company’s researchers were widely viewed as its crown jewels, their work subsidized with a firehose of pricey compute power that was even extended to people outside the company. Martino Russi, an artificial intelligence researcher, told Forbes that though he was never formally employed by Stability, the company provided him a “staggering” amount of compute between January and April 2023 to play around with developing an AI video generator that Stability might someday use. “It was Candy Land or Coney Island,” said Russi, who estimates that his experiment, which was ultimately shelved, cost the company $2.5 million. Stable Diffusion was simultaneously Stability’s marquee product and its existential cash crisis. One current employee described it to Forbes as “a giant vacuum that absorbed everything: money, compute, people.” While the software was widely used, with Mostaque claiming downloads reaching into the hundreds of millions, Stability struggled to translate that wild success into revenue. Mostaque knew it could be done — peers at Databricks, Elastic and MongoDB had all turned a free product into a lucrative business — he just couldn’t figure out how. His first attempt was Stability’s API, which allowed paying customers to integrate Stable Diffusion into their own products. In early 2023, a handful of small companies, like art generator app NightCafe and presentation software startup Tome, signed on, according to four people with knowledge of the deals. But Stability’s poor account management services soured many, and in a matter of months NightCafe and Tome canceled their contracts, three people said. NightCafe founder Angus Russell told Forbes that his company switched to a competitor which “offered much cheaper inference costs and a broader service.” Tome did not respond to a request for comment. Meanwhile, Mostaque’s efforts to court larger companies like Samsung and Snapchat were failing, according to five people familiar with the effort. 
Canva, which was already one of the heaviest users of open-sourced Stable Diffusion, had multiple discussions with Stability, which was angling for a contract it hoped would generate several millions in annual revenue. But the deal never materialized, four sources said. “These three companies wanted and needed us,” one former employee told Forbes. “They would have been the perfect customers.” (Samsung, Snap and Canva declined to comment.) “It’s not that there was not an appetite to pay Stability — there were tons of companies that wanted to,” the former employee said. “There was a huge opportunity and demand, but just a resistance to execution.” Mostaque’s other big idea was to provide governments with bespoke national AI models that would invigorate their economies and citizenry. “Emad envisions a world where AI through 100 national models serves not as a tool of the few, but as a benefactor to all promising to confront great adversaries, cancer, autism, and the sands of time itself,” the AI avatar of Aristotle said in his intro at the conference. Mostaque told several prospective customers that he could deliver such models within 60 days — an untenable timeline, according to two people in position to know. Stability attempted to develop a model for the Singaporean government over the protestation of employees who questioned its technical feasibility, three sources familiar with the effort told Forbes. But it couldn’t pull it off and Singapore never became a customer. (The government of Singapore confirmed it did not enter into a deal with Stability, but declined to answer additional questions.) As Stability careened from one new business idea to another, resources were abruptly reallocated and researchers reassigned. The whiplash shifts in a largely siloed organization demoralized and infuriated employees. “There were ‘urgent’ things, ‘urgent urgent’ things and ‘most urgent,’” one former employee complained. “None of these things seem important if everything is important.” Another former Stability executive was far more pointed in their assessment. “Emad is the most disorganized leader I have ever worked with in my career,” this person told Forbes. “He has no vision, and changes directions every week, often based on what he sees on Twitter.” In a video interview posted shortly before this story was published, Mostaque explained his leadership style: “I'm particularly great at taking creatives, developers, researchers, others, and achieving their full potential in designing systems. But I should not be dealing with, you know, HR and operations and business development and other elements. There are far better people than me to do that.” By December 2023, Stability had partially abandoned its open-source roots and announced that any commercial use of Stable Diffusion would cost customers at least $20 per month (non-commercial and research use of Stable Diffusion would remain free). But privately, Stability was considering a potentially more lucrative source of revenue: reselling the compute it was leasing from providers like AWS, according to six people familiar with the effort. Though it was essentially GPU arbitrage, Stability framed the strategy to investors as a “managed services” offering. Its damning October financial report projected optimistically that such an offering would bring in $139 million in 2024 — 98% of its revenue. 
Multiple employees at the time told Forbes they feared reselling compute, even if the company called it “managed services,” would violate the terms of Stability’s contract with AWS. Amazon declined to comment. “The line internally was that we are not reselling compute,” one former employee said. “This was some of the dirtiest feeling stuff.” Stability also discussed reselling a cluster of Nvidia A100 chips, leased via CoreWeave, to the venture capital firm Andreessen Horowitz, three sources said. “It was under the guise of managed services, but there wasn’t any management happening,” one of these people told Forbes. Andreessen Horowitz and CoreWeave declined to comment. Stability did not respond to questions about if it plans to continue this strategy now that Mostaque is out of the picture. Regardless, interim co-CEOs Wong and Laforte are on a tight timeline to clean up his mess. Board chairman Jim O’Shaughnessy said in a statement that he was confident the pair “will adeptly steer the company forward in developing and commercializing industry-leading generative AI products.” But burn continues to far outpace revenue. The Financial Times reported Friday that the company made $5.4 million of revenue in February, against $8 million in costs. Several sources said there are ongoing concerns about making payroll for the roughly 150 remaining employees. Leadership roles have gone vacant for months amid the disarray, leaving the company increasingly directionless. Meanwhile, a potentially catastrophic legal threat looms over the company: A trio of copyright infringement lawsuits brought by Getty Images and a group of artists in the U.S. and U.K., who claim Stability illegally used their art and photography to train the AI models powering Stable Diffusion. A London-based court has already rejected the company’s bid to throw out one of the lawsuits on the basis that none of its researchers were based in the U.K. And Stability’s claim that Getty’s Delaware lawsuit should be blocked because it's a U.K.-based company was rejected. (Stability did not respond to questions about the litigation.) AI-related copyright litigation “could go on for years,” according to Eric Goldman, a law professor at Santa Clara University. He told Forbes that though plaintiffs suing AI firms face an uphill battle overcoming the existing legal precedent on copyright infringement, the quantity of arguments available to make are virtually inexhaustible. “Like in military theory, if there’s a gap in your lines, that’s where the enemy pours through — if any one of those arguments succeeds, it could completely change the generative AI environment,” he said. “In some sense, generative AI as an industry has to win everything.” Stability, which had more than $100 million in the bank just a year and a half ago, is in a deep hole. Not only does it need more funding, it needs a viable business model — or a buyer with the vision and chops to make it successful in a fast-moving and highly competitive sector. At an all hands meeting this past Monday, Stability’s new leaders detailed a path forward. One point of emphasis: a plan to better manage resources and expenses, according to one person in attendance. It’s a start, but Mostaque’s meddling has left them with little runway to execute. His resignation, though, has given some employees hope. “A few people are 100% going to reconsider leaving after today,” said one current employee. 
“And the weird gloomy aura of hearing Emad talking nonsense for an hour is gone.” Shortly before Mostaque resigned, one current Stability executive told Forbes that they were optimistic his departure could make Stability appealing enough to receive a small investment or sale to a friendly party. “There are companies that have raised hundreds of millions of dollars that have much less intrinsic value than Stability,” the person said. “A white knight may still appear.”

[N] OpenAI's new language model gpt-3.5-turbo-instruct can defeat chess engine Fairy-Stockfish 14 at level 5
reddit
LLM Vibe Score0
Human Vibe Score1
WiskkeyThis week

[N] OpenAI's new language model gpt-3.5-turbo-instruct can defeat chess engine Fairy-Stockfish 14 at level 5

This Twitter thread (Nitter alternative for those who aren't logged into Twitter and want to see the full thread) claims that OpenAI's new language model gpt-3.5-turbo-instruct can "readily" beat Lichess Stockfish level 4 (Lichess Stockfish level and its rating) and has a chess rating of "around 1800 Elo." This tweet shows the style of prompts that are being used to get these results with the new language model (a hedged sketch of this style appears at the end of this post).

I used website parrotchess[dot]com (discovered here) (EDIT: parrotchess doesn't exist anymore, as of March 7, 2024) to play multiple games of chess purportedly pitting this new language model vs. various levels at website Lichess, which supposedly uses Fairy-Stockfish 14 according to the Lichess user interface.

My current results for all completed games: the language model is 5-0 vs. Fairy-Stockfish 14 level 5 (game 1, game 2, game 3, game 4, game 5), and 2-5 vs. Fairy-Stockfish 14 level 6 (game 1, game 2, game 3, game 4, game 5, game 6, game 7). Not included in the tally are games that I had to abort because the parrotchess user interface stalled (5 instances), because I accidentally copied a move incorrectly in the parrotchess user interface (numerous instances), or because the parrotchess user interface doesn't allow the promotion of a pawn to anything other than queen (1 instance). Update: there could have been up to 5 additional losses - the number of times the parrotchess user interface stalled - that would have been recorded in this tally if this language model resignation bug hadn't been present. Also, the quality of play of some online chess bots can perhaps vary depending on the speed of the user's hardware.

The following is a screenshot from parrotchess showing the end state of the first game vs. Fairy-Stockfish 14 level 5: https://preview.redd.it/4ahi32xgjmpb1.jpg?width=432&format=pjpg&auto=webp&s=7fbb68371ca4257bed15ab2828fab58047f194a4

The game results in this paragraph are from using parrotchess after the aforementioned resignation bug was fixed: the language model is 0-1 vs. Fairy-Stockfish level 7 (game 1), and 0-1 vs. Fairy-Stockfish 14 level 8 (game 1).

There is one known scenario (Nitter alternative) in which the new language model purportedly generated an illegal move using a language model sampling temperature of 0. Previous purported illegal moves that the parrotchess developer examined turned out (Nitter alternative) to be due to parrotchess bugs.

There are several other ways to play chess against the new language model if you have access to the OpenAI API. The first way is to use the OpenAI Playground as shown in this video. The second way is chess web app gptchess[dot]vercel[dot]app (discovered in this Twitter thread / Nitter thread). Third, another person modified that chess web app to additionally allow various levels of the Stockfish chess engine to autoplay, resulting in chess web app chessgpt-stockfish[dot]vercel[dot]app (discovered in this tweet).

Results from other people: a) Results from hundreds of games in blog post Debunking the Chessboard: Confronting GPTs Against Chess Engines to Estimate Elo Ratings and Assess Legal Move Abilities. b) Results from 150 games: GPT-3.5-instruct beats GPT-4 at chess and is a ~1800 ELO chess player. Results of 150 games of GPT-3.5 vs stockfish and 30 of GPT-3.5 vs GPT-4. Post #2. The developer later noted that due to bugs the legal move rate was actually above 99.9%. 
It should also be noted that these results didn't use a language model sampling temperature of 0, which I believe could have induced illegal moves. c) Chess bot gpt35-turbo-instruct at website Lichess. d) Chess bot konaz at website Lichess.

From blog post Playing chess with large language models: Computers have been better than humans at chess for at least the last 25 years. And for the past five years, deep learning models have been better than the best humans. But until this week, in order to be good at chess, a machine learning model had to be explicitly designed to play games: it had to be told explicitly that there was an 8x8 board, that there were different pieces, how each of them moved, and what the goal of the game was. Then it had to be trained with reinforcement learning against itself. And then it would win. This all changed on Monday, when OpenAI released GPT-3.5-turbo-instruct, an instruction-tuned language model that was designed to just write English text, but that people on the internet quickly discovered can play chess at, roughly, the level of skilled human players.

Post Chess as a case study in hidden capabilities in ChatGPT from last month covers a different prompting style used for the older chat-based GPT 3.5 Turbo language model. If I recall correctly from my tests with ChatGPT-3.5, using that prompt style with the older language model can defeat Stockfish level 2 at Lichess, but I haven't been successful in using it to beat Stockfish level 3. In my tests, both the quality of play and the frequency of attempted illegal moves seem to be better with the new prompt style and the new language model than with the older prompt style and the older language model. Related article: Large Language Model: world models or surface statistics?

P.S. Since some people claim that language model gpt-3.5-turbo-instruct is always playing moves memorized from the training dataset, I searched for data on the uniqueness of chess positions. From this video, we see that for a certain game dataset there were 763,331,945 chess positions encountered in an unknown number of games without removing duplicate chess positions, 597,725,848 different chess positions reached, and 582,337,984 different chess positions that were reached only once. Therefore, for that game dataset the probability that a chess position in a game was reached only once is 582337984 / 763331945 = 76.3%. For the larger dataset cited in that video, there are approximately (506,000,000 - 200,000) games in the dataset (per this paper), and 21,553,382,902 different game positions encountered. Each game in the larger dataset added a mean of approximately 21,553,382,902 / (506,000,000 - 200,000) = 42.6 different chess positions to the dataset. For this different dataset of ~12 million games, ~390 million different chess positions were encountered. Each game in this different dataset added a mean of approximately (390 million / 12 million) = 32.5 different chess positions to the dataset. From these numbers, we can conclude that a strategy of playing only moves memorized from a game dataset would fare poorly, because new chess games very frequently reach positions that are not present in the game dataset.
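The position-uniqueness arithmetic above is easy to check; a few lines of Python reproduce the quoted figures.

```python
# Reproduce the position-uniqueness figures quoted in the P.S. above.
seen_once = 582_337_984
total_positions_encountered = 763_331_945
print(f"P(position reached only once) = {seen_once / total_positions_encountered:.1%}")  # ~76.3%

distinct_positions = 21_553_382_902
games = 506_000_000 - 200_000
print(f"new positions per game (larger dataset) = {distinct_positions / games:.1f}")  # ~42.6

print(f"new positions per game (~12M-game dataset) = {390_000_000 / 12_000_000:.1f}")  # 32.5
```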
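And for anyone with OpenAI API access who wants to reproduce this kind of test, here is a hedged sketch of the PGN-completion prompting style the thread describes, using OpenAI's completions endpoint. The exact prompts used by parrotchess and the tweet above aren't fully public, so the prompt format below is an assumption, and any move the model returns must still be validated for legality.

```python
# Hedged sketch: ask gpt-3.5-turbo-instruct to continue a chess game given as
# PGN movetext. The prompt format is an assumption based on the thread's
# description; returned moves must still be checked for legality.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

pgn_so_far = "1. e4 e5 2. Nf3 Nc6 3. Bb5 a6 4. Ba4 Nf6 5. O-O"

response = client.completions.create(
    model="gpt-3.5-turbo-instruct",
    prompt=pgn_so_far + " ",
    temperature=0,   # temperature 0, as in the illegal-move test mentioned above
    max_tokens=8,
    stop=["\n"],
)
print(response.choices[0].text.strip())  # e.g. "Be7" for Black's 5th move
```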

[N] How Stability AI’s Founder Tanked His Billion-Dollar Startup
reddit
LLM Vibe Score0
Human Vibe Score0.667
milaworldThis week

[N] How Stability AI’s Founder Tanked His Billion-Dollar Startup

forbes article: https://www.forbes.com/sites/kenrickcai/2024/03/29/how-stability-ais-founder-tanked-his-billion-dollar-startup/ archive no paywall: https://archive.is/snbeV How Stability AI’s Founder Tanked His Billion-Dollar Startup Mar 29, 2024 Stability AI founder Emad Mostaque took the stage last week at the Terranea Resort in Palos Verdes, California to roaring applause and an introduction from an AI-generated Aristotle who announced him as “a modern Prometheus” with “the astuteness of Athena and the vision of Daedalus.” “Under his stewardship, AI becomes the Herculean force poised to vanquish the twin serpents of illness and ailment and extend the olive branch of longevity,” the faux Aristotle proclaimed. “I think that’s the best intro I’ve ever had,” Mostaque said. But behind Mostaque's hagiographic introduction lay a grim and fast metastasizing truth. Stability, once one of AI’s buzziest startups, was floundering. It had been running out of money for months and Mostaque had been unable to secure enough additional funding. It had defaulted on payments to Amazon whose cloud service undergirded Stability’s core offerings. The star research team behind its flagship text-to-image generator Stable Diffusion had tendered their resignations just three days before — as Forbes would first report — and other senior leaders had issued him an ultimatum: resign, or we walk too. Still, onstage before a massive audience of peers and acolytes, Mostaque talked a big game. “AI is jet planes for the mind,” he opined. “AI is our collective intelligence. It's the human Colossus.” He claimed a new, faster version of the Stable Diffusion image generator released earlier this month could generate “200 cats with hats per second.” But later, when he was asked about Stability’s financial model, Mostaque fumbled. “I can’t say that publicly,” he replied. “But it’s going well. We’re ahead of forecast.” Four days later, Mostaque stepped down as CEO of Stability, as Forbes first reported. In a post to X, the service formerly known as Twitter, he claimed he’d voluntarily abdicated his role to decentralize “the concentration of power in AI.” But sources told Forbes that was hardly the case. Behind the scenes, Mostaque had fought to maintain his position and control despite mounting pressure externally and internally to step down. Company documents and interviews with 32 current and former employees, investors, collaborators and industry observers suggest his abrupt exit was the result of poor business judgment and wild overspending that undermined confidence in his vision and leadership, and ultimately kneecapped the company. Mostaque, through his attorneys, declined to comment on record on a detailed list of questions about the reporting in this story. But in an email to Forbes earlier this week he broadly disputed the allegations. “Nobody tells you how hard it is to be a CEO and there are better CEOs than me to scale a business,” he said in a statement. “I am not sure anyone else would have been able to build and grow the research team to build the best and most widely used models out there and I’m very proud of the team there. I look forward to moving onto the next problem to handle and hopefully move the needle.” In an emailed statement, Christian Laforte and Shan Shan Wong, the interim co-CEOs who replaced Mostaque, said, "the company remains focused on commercializing its world leading technology” and providing it “to partners across the creative industries." 
After starting Stability in 2019, Mostaque built the company into an early AI juggernaut by seizing upon a promising research project that would become Stable Diffusion and funding it into a business reality. The ease with which the software generated detailed images from the simplest text prompts immediately captivated the public: 10 million people used it on any given day, the company told Forbes in early 2023. For some true believers, Mostaque was a crucial advocate for open-source AI development in a space dominated by the closed systems of OpenAI, Google and Anthropic. But his startup’s rise to one of the buzziest in generative AI was in part built on a series of exaggerations and misleading claims, as Forbes first reported last year (Mostaque disputed some points at the time). And they continued after he raised $100 million at a $1 billion valuation just days after launching Stable Diffusion in 2022. His failure to deliver on an array of grand promises, like building bespoke AI models for nation states, and his decision to pour tens of millions into research without a sustainable business plan, eroded Stability’s foundations and jeopardized its future. "He was just giving shit away,” one former employee told Forbes. “That man legitimately wanted to transform the world. He actually wanted to train AI models for kids in Malawi. Was it practical? Absolutely not." By October 2023, Stability would have less than $4 million left in the bank, according to an internal memo prepared for a board meeting and reviewed by Forbes. And mounting debt, including months of overdue Amazon Web Services payments, had already left it in the red. To avoid legal penalties for skipping Americans staff’s payroll, the document explained, the London-based startup was considering delaying tax payments to the U.K. government. It was Stability’s armada of GPUs, the wildly powerful and equally expensive chips undergirding AI, that were so taxing the company’s finances. Hosted by AWS, they had long been one of Mostaque’s bragging points; he often touted them as one of the world’s 10 largest supercomputers. They were responsible for helping Stability’s researchers build and maintain one of the top AI image generators, as well as break important new ground on generative audio, video and 3D models. “Undeniably, Stability has continued to ship a lot of models,” said one former employee. “They may not have profited off of it, but the broader ecosystem benefitted in a huge, huge way.” But the costs associated with so much compute were now threatening to sink the company. According to an internal October financial forecast seen by Forbes, Stability was on track to spend $99 million on compute in 2023. It noted as well that Stability was “underpaying AWS bills for July (by $1M)” and “not planning to pay AWS at the end of October for August usage ($7M).” Then there were the September and October bills, plus $1 million owed to Google Cloud and $600,000 to GPU cloud data center CoreWeave. (Amazon, Google and CoreWeave declined to comment.) With an additional $54 million allocated to wages and operating expenses, Stability’s total projected costs for 2023 were $153 million. But according to its October financial report, its projected revenue for the calendar year was just $11 million. Stability was on track to lose more money per month than it made in an entire year. 
The company’s dire financial position had thoroughly soured Stability’s current investors, including Coatue, which had invested tens of millions in the company during its $101 million funding round in 2022. In the middle of 2023, Mostaque agreed to an independent audit after Coatue raised a series of concerns, according to a source with direct knowledge of the matter. The outcome of the investigation is unclear. Coatue declined to comment. Within a week of an early October board meeting where Mostaque shared that financial forecast, Lightspeed Venture Partners, another major investor, sent a letter to the board urging them to sell the company. The distressing numbers had “severely undermined” the firm’s confidence in Mostaque’s ability to lead the company. “In particular, we are surprised and deeply concerned by a cash position just now disclosed to us that is inconsistent with prior discussions on this topic,” Lightspeed’s general counsel Brett Nissenberg wrote in the letter, a copy of which was viewed by Forbes. “Lightspeed believes that the company is not likely financeable on terms that would assure the company’s long term sound financial position.” (Lightspeed declined a request for comment.) The calls for a sale led Stability to quietly begin looking for a buyer. Bloomberg reported in November that Stability approached AI startups Cohere and Jasper to gauge their interest. Stability denied this, and Jasper CEO Timothy Young did the same when reached for comment by Forbes. A Cohere representative declined to comment. But one prominent AI company confirmed that Mostaque’s representatives had reached out to them to test the waters. Those talks did not advance because “the numbers didn’t add up,” this person, who declined to be named due to the confidential nature of the talks, told Forbes. Stability also tried to court Samsung as a buyer, going so far as to redecorate its office in advance of a planned meeting with the Korean electronics giant. (Samsung said that it invested in Stability in 2023 and that it does not comment on M&A discussions.) Coatue had been calling for Mostaque’s resignation for months, according to a source with direct knowledge. But it and other investors were unable to oust him because he was the company’s majority shareholder. When they tried a different tact by rallying other investors to offer him a juicy equity package to resign, Mostaque refused, said two sources. By October, Coatue and Lightspeed had had enough. Coatue left the board and Lightspeed resigned its observer seat. “Emad infuriated our initial investors so much it’s just making it impossible for us to raise more money under acceptable terms,” one current Stability executive told Forbes. The early months of 2024 saw Stability’s already precarious position eroding further still. Employees were quietly laid off. Three people in a position to know estimated that at least 10% of staff were cut. And cash reserves continued to dwindle. Mostaque mentioned a lifeline at the October board meeting: $95 million in tentative funding from new investors, pending due diligence. But in the end, only a fraction of it was wired, two sources say, much of it from Intel, which Forbes has learned invested $20 million, a fraction of what was reported. (Intel did not return a request for comment by publication time.) Two hours after Forbes broke the news of Mostaque’s plans to step down as CEO, Stability issued a press release confirming his resignation. 
Chief operating officer Wong and chief technology officer Laforte have taken over in the interim. Mostaque, who said on X that he still owns a majority of the company, also stepped down from the board, which has now initiated a search for a permanent CEO. There is a lot of work to be done to turn things around, and very little time in which to do it. Said the current Stability executive, “There’s still a possibility of a turnaround story, but the odds drop by the day.” In July of 2023, Mostaque still thought he could pull it off. Halfway through the month, he shared a fundraising plan with his lieutenants. It was wildly optimistic, detailing the raise of $500 million in cash and another $750 million in computing facilities from marquee investors like Nvidia, Google, Intel and the World Bank (Nvidia and Google declined comment. Intel did not respond. The World Bank said it did not invest in Stability). In a Slack message reviewed by Forbes, Mostaque said Google was “willing to move fast” and the round was “likely to be oversubscribed.” It wasn’t. Three people with direct knowledge of these fundraising efforts told Forbes that while there was some interest in Stability, talks often stalled when it came time to disclose financials. Two of them noted that earlier in the year, Mostaque had simply stopped engaging with VCs who asked for numbers. Only one firm invested around that time: actor Ashton Kutcher’s Sound Ventures, which invested $35 million in the form of a convertible SAFE note during the second quarter, according to an internal document. (Sound Ventures did not respond to a request for comment.) And though he’d managed to score a meeting with Nvidia and its CEO Jensen Huang, it ended in disaster, according to two sources. “Under Jensen's microscopic questions, Emad just fell apart,” a source in position to know told Forbes. Huang quickly concluded Stability wasn’t ready for an investment from Nvidia, the sources said. Mostaque told Forbes in an email that he had not met with Huang since 2022, except to say “hello and what’s up a few times after.” His July 2023 message references a plan to raise $150 million from Nvidia. (Nvidia declined to comment.) After a June Forbes investigation citing more than 30 sources revealed Mostaque’s history of misleading claims, Mostaque struggled to raise funding, a Stability investor told Forbes. (Mostaque disputed the story at the time and called it "coordinated lies" in his email this week to Forbes). Increasingly, investors scrutinized his assertions and pressed for data. And Young, now the CEO of Jasper, turned down a verbal offer to be Stability’s president after reading the article, according to a source with direct knowledge of the matter. The collapse of the talks aggravated the board and other executives, who had hoped Young would compensate for the sales and business management skills that Mostaque lacked, according to four people in a position to know. (Young declined to comment.) When Stability’s senior leadership convened in London for the CogX conference in September, the financing had still not closed. There, a group of executives confronted Mostaque asking questions about the company’s cash position and runway, according to three people with direct knowledge of the incident. They did not get the clarity they’d hoped for. By October, Mostaque had reduced his fundraising target by more than 80%. 
The months that followed saw a steady drumbeat of departures — general counsel Adam Avrunin, vice presidents Mike Melnicki, Ed Newton-Rex and Joe Penna, chief people officer Ozden Onder — culminating in the demoralizing March exit of Stable Diffusion’s primary developers Robin Rombach, Andreas Blattmann, Patrick Esser and Dominik Lorenz. Rombach, who led the team, had been angling to leave for months, two sources said, first threatening to resign last summer because of the fundraising failures. Others left over concerns about cash flow, as well as liabilities — including what four people described as Mostaque’s lax approach to ensuring that Stability products could not be used to produce child sexual abuse imagery. “Stability AI is committed to preventing the misuse of AI and prohibits the use of our image models and services for unlawful activity, including attempts to edit or create CSAM,” Ella Irwin, senior vice president of integrity, said in a statement. Newton-Rex told Forbes he resigned because he disagreed with Stability’s position that training AI on copyrighted work without consent is fair use. Melnicki and Penna declined to comment. Avrunin and Onder could not be reached for comment. None of the researchers responded to requests for comment. The Stable Diffusion researchers’ departure as a cohort says a lot about the state of Stability AI. The company’s researchers were widely viewed as its crown jewels, their work subsidized with a firehose of pricey compute power that was even extended to people outside the company. Martino Russi, an artificial intelligence researcher, told Forbes that though he was never formally employed by Stability, the company provided him a “staggering” amount of compute between January and April 2023 to play around with developing an AI video generator that Stability might someday use. “It was Candy Land or Coney Island,” said Russi, who estimates that his experiment, which was ultimately shelved, cost the company $2.5 million. Stable Diffusion was simultaneously Stability’s marquee product and its existential cash crisis. One current employee described it to Forbes as “a giant vacuum that absorbed everything: money, compute, people.” While the software was widely used, with Mostaque claiming downloads reaching into the hundreds of millions, Stability struggled to translate that wild success into revenue. Mostaque knew it could be done — peers at Databricks, Elastic and MongoDB had all turned a free product into a lucrative business — he just couldn’t figure out how. His first attempt was Stability’s API, which allowed paying customers to integrate Stable Diffusion into their own products. In early 2023, a handful of small companies, like art generator app NightCafe and presentation software startup Tome, signed on, according to four people with knowledge of the deals. But Stability’s poor account management services soured many, and in a matter of months NightCafe and Tome canceled their contracts, three people said. NightCafe founder Angus Russell told Forbes that his company switched to a competitor which “offered much cheaper inference costs and a broader service.” Tome did not respond to a request for comment. Meanwhile, Mostaque’s efforts to court larger companies like Samsung and Snapchat were failing, according to five people familiar with the effort. 
Canva, which was already one of the heaviest users of open-sourced Stable Diffusion, had multiple discussions with Stability, which was angling for a contract it hoped would generate several million dollars in annual revenue. But the deal never materialized, four sources said. “These three companies wanted and needed us,” one former employee told Forbes. “They would have been the perfect customers.” (Samsung, Snap and Canva declined to comment.) “It’s not that there was not an appetite to pay Stability — there were tons of companies that wanted to,” the former employee said. “There was a huge opportunity and demand, but just a resistance to execution.” Mostaque’s other big idea was to provide governments with bespoke national AI models that would invigorate their economies and citizenry. “Emad envisions a world where AI through 100 national models serves not as a tool of the few, but as a benefactor to all promising to confront great adversaries, cancer, autism, and the sands of time itself,” the AI avatar of Aristotle said in his intro at the conference. Mostaque told several prospective customers that he could deliver such models within 60 days — an untenable timeline, according to two people in a position to know. Stability attempted to develop a model for the Singaporean government over the protestations of employees who questioned its technical feasibility, three sources familiar with the effort told Forbes. But it couldn’t pull it off and Singapore never became a customer. (The government of Singapore confirmed it did not enter into a deal with Stability, but declined to answer additional questions.) As Stability careened from one new business idea to another, resources were abruptly reallocated and researchers reassigned. The whiplash shifts in a largely siloed organization demoralized and infuriated employees. “There were ‘urgent’ things, ‘urgent urgent’ things and ‘most urgent,’” one former employee complained. “None of these things seem important if everything is important.” Another former Stability executive was far more pointed in their assessment. “Emad is the most disorganized leader I have ever worked with in my career,” this person told Forbes. “He has no vision, and changes directions every week, often based on what he sees on Twitter.” In a video interview posted shortly before this story was published, Mostaque explained his leadership style: “I'm particularly great at taking creatives, developers, researchers, others, and achieving their full potential in designing systems. But I should not be dealing with, you know, HR and operations and business development and other elements. There are far better people than me to do that.” By December 2023, Stability had partially abandoned its open-source roots and announced that any commercial use of Stable Diffusion would cost customers at least $20 per month (non-commercial and research use of Stable Diffusion would remain free). But privately, Stability was considering a potentially more lucrative source of revenue: reselling the compute it was leasing from providers like AWS, according to six people familiar with the effort. Though it was essentially GPU arbitrage, Stability framed the strategy to investors as a “managed services” offering. Its damning October financial report projected optimistically that such an offering would bring in $139 million in 2024 — 98% of its revenue.
Multiple employees at the time told Forbes they feared reselling compute, even if the company called it “managed services,” would violate the terms of Stability’s contract with AWS. Amazon declined to comment. “The line internally was that we are not reselling compute,” one former employee said. “This was some of the dirtiest feeling stuff.” Stability also discussed reselling a cluster of Nvidia A100 chips, leased via CoreWeave, to the venture capital firm Andreessen Horowitz, three sources said. “It was under the guise of managed services, but there wasn’t any management happening,” one of these people told Forbes. Andreessen Horowitz and CoreWeave declined to comment. Stability did not respond to questions about whether it plans to continue this strategy now that Mostaque is out of the picture. Regardless, interim co-CEOs Wong and Laforte are on a tight timeline to clean up his mess. Board chairman Jim O’Shaughnessy said in a statement that he was confident the pair “will adeptly steer the company forward in developing and commercializing industry-leading generative AI products.” But burn continues to far outpace revenue. The Financial Times reported Friday that the company made $5.4 million of revenue in February, against $8 million in costs. Several sources said there are ongoing concerns about making payroll for the roughly 150 remaining employees. Leadership roles have gone vacant for months amid the disarray, leaving the company increasingly directionless. Meanwhile, a potentially catastrophic legal threat looms over the company: a trio of copyright infringement lawsuits brought by Getty Images and a group of artists in the U.S. and U.K., who claim Stability illegally used their art and photography to train the AI models powering Stable Diffusion. A London-based court has already rejected the company’s bid to throw out one of the lawsuits on the basis that none of its researchers were based in the U.K. And Stability’s claim that Getty’s Delaware lawsuit should be blocked because Stability is a U.K.-based company was rejected. (Stability did not respond to questions about the litigation.) AI-related copyright litigation “could go on for years,” according to Eric Goldman, a law professor at Santa Clara University. He told Forbes that though plaintiffs suing AI firms face an uphill battle overcoming the existing legal precedent on copyright infringement, the quantity of arguments available to make is virtually inexhaustible. “Like in military theory, if there’s a gap in your lines, that’s where the enemy pours through — if any one of those arguments succeeds, it could completely change the generative AI environment,” he said. “In some sense, generative AI as an industry has to win everything.” Stability, which had more than $100 million in the bank just a year and a half ago, is in a deep hole. Not only does it need more funding, it needs a viable business model — or a buyer with the vision and chops to make it successful in a fast-moving and highly competitive sector. At an all-hands meeting this past Monday, Stability’s new leaders detailed a path forward. One point of emphasis: a plan to better manage resources and expenses, according to one person in attendance. It’s a start, but Mostaque’s meddling has left them with little runway to execute. His resignation, though, has given some employees hope. “A few people are 100% going to reconsider leaving after today,” said one current employee.
“And the weird gloomy aura of hearing Emad talking nonsense for an hour is gone.” Shortly before Mostaque resigned, one current Stability executive told Forbes that they were optimistic his departure could make Stability appealing enough to receive a small investment or sale to a friendly party. “There are companies that have raised hundreds of millions of dollars that have much less intrinsic value than Stability,” the person said. “A white knight may still appear.”

[D] "Grokking" Deep Learning architectures and using them in practice
reddit
LLM Vibe Score0
Human Vibe Score1
LightGreenSquashThis week

[D] "Grokking" Deep Learning architectures and using them in practice

Hi all, I'm in the first years of my PhD in Computer Vision and obviously the vast majority of research in it is nowadays using Deep Learning techniques. I like to think that I'm far from an absolute beginner in the sense that: I've trained neural networks and more "traditional" ML models in a couple of courses, as well as for my MSc thesis, albeit almost out-of-the-box stuff. I have a decent understanding of Linear Algebra, Calculus and Probability Theory (undergrad courses from CS degree). I say "decent" because I'm of the firm opinion that the more math one knows the more impressive the things they can do in AI, so I really don't consider myself a math whiz, but judging from the math knowledge an average "How to get started with Deep Learning" blog post assumes, I'd say I'm well ahead. I'm also devoting some time every day to a more rigorous study of these areas, eventually hoping to expand to other related ones. I can get through Deep Learning papers and usually obtain at least a basic understanding of what they're about, as well as why it works, at least according to the authors and their experiments. I do still have some trouble with more state-of-the-art works, especially ones that also use things from NLP. However, I don't really feel confident that I can actually produce useful research that investigates and/or uses this sort of method to do something new. During undergrad, in order to actually understand most -if not all- concepts taught to me in programming and math I'd actually do things with them: solve problems, prove statements, or just code with the goal of creating some system or seeing how an idea actually works (e.g. polymorphism). I realize, however, that this has not been the case with Deep Learning, at least for me: I've never tried to actually code a CNN or ResNet, much less a word2vec model, a Transformer, or any sort of generative model. Sure, I've read about how the first layers of a CNN learn edges etc. but I've never actually "seen it with my own eyes". Transformers in particular seem to really trouble me. Although I sort of understand the idea behind attention etc., I struggle to see what sort of features they end up using (in contrast to CNNs, where the idea of learning convolutional filters is much more intuitive to me). Which brings me to the question of what's an efficient way to go from understanding a paper to actually feeling like you really, truly, "grok" the material and could build on it, or use it in some scenario? Do you think implementing research papers from scratch or almost from scratch can be useful? Or is it way too time-consuming for someone already busy with a PhD? Is it even feasible or are most papers -sadly- unreproducible if you don't use the authors' code? How do you manage to stay on track with such a rapidly evolving field, on any level beyond a completely surface-level understanding? How do you find a good balance between learning to use tools/frameworks, reading papers and gaining the deeper sort of understanding I mention?
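As a concrete first step toward the "implement it yourself" approach the post is asking about, here is a minimal sketch of single-head scaled dot-product self-attention in plain NumPy (a generic textbook formulation with illustrative names, not code from any particular paper):

```python
import numpy as np

def softmax(x, axis=-1):
    # Subtract the row max before exponentiating for numerical stability.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Single-head scaled dot-product self-attention.

    X:          (seq_len, d_model) input embeddings
    Wq, Wk, Wv: (d_model, d_k) learned projection matrices
    Returns:    (seq_len, d_k) context vectors.
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])  # (seq_len, seq_len) similarities
    weights = softmax(scores, axis=-1)       # each row sums to 1
    return weights @ V                       # weighted sum of value vectors

# Toy usage: 5 tokens with 8-dim embeddings projected to a 4-dim head.
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 8))
Wq, Wk, Wv = (rng.normal(size=(8, 4)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)  # (5, 4)
```

Printing `weights` row by row is one way to literally "see with your own eyes" which tokens each position attends to — exactly the kind of inspection that builds the intuition the post is after.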

[D] What are some good advanced platforms?
reddit
LLM Vibe Score0
Human Vibe Score1
SemperZeroThis week

[D] What are some good advanced platforms?

Hey. I'm 27 and I think I've got most of the basics for ML. I'm very good at math, I understand statistics and probability quite deeply, and I've worked on research projects by myself, for which I had to build models on my own. Not really complex, but still requiring creativity and a good understanding of basic concepts. I will soon start a data science job at a FAANG company and I want to further improve my skills and use their resources to the fullest, but I'm not really sure where to go from here in terms of learning. Could you help me with some more advanced materials/forums for ML research, or places with good papers and articles? I'd also like to study the very best and see the way they code and explain advanced concepts (like Andrej Karpathy). Where can I find them? Is there a Twitch for challenger-level AI researchers streaming their live processes? Or videos showing the entire project flow (how they do data visualization, mining, choosing models, tuning, etc.), the way top digital artists show the highlights or the entire speed-up of their painting processes? Here's a list of all of my projects to get a general idea of my level and where I'm at:
- calculating the distance between hundreds of 42,000-feature objects (containing categorical, string, numeric, hash and boolean variables) and then clustering, with some vector processing and a neural network implemented from scratch in C
- some models like ARIMA (together with linear regression)
- combining an FFT with a neural network for a 42-dimensional wave classification
- t-SNE to split a dataset into 2D grids -> Kullback–Leibler divergence on grids for distance -> DBSCAN/k-means for clustering
- genetic algorithms for hyperparameter optimization and reinforcement learning (neuroevolution)
- DBSCAN -> Levenberg-Marquardt for polynomial coefficients -> neural network predicting the coefficients based on different parameters
- playing with instance segmentation and some algorithms to synchronize a color and a depth camera
- simulations/statistics/probabilities for video games
- a lot of visualizations and data mining for patterns
As you can see there is no LLM / generative AI / computer vision stuff, which I would like to get into. I'm also not 100% sure what else would be nice to learn in general. I know most of the basic procedures for training, balancing datasets, avoiding overfitting, computing error plots, comparing models, etc., and I'm familiar with most of the math (not insanely advanced) used in ML. I haven't read many papers, but holy ... most of them are so unreadable and filled with pompous nonsense that 99% of the effort is de-obfuscating the BS and reading for so long just to figure out how the input is encoded, what the output is, and what the model is. Where can I find good, readable, structured papers which are actually on point? I'm from Eastern Europe and most of my learning has been done by myself after high school; the education quality is close to zero in the universities here and I never had any mentors at the jobs I worked. There's no research in this country, and getting to work on these projects was insanely hard, some of them being done in my free time or for free just to get experience... Fortunately, after a lot of hard work I got into FAANG, and I hope things will be better here. Most of what I've learned has been from very fragmented places on the internet, and now I'm looking for centralized places and communities of top-quality content. TL;DR: sorry for the long rambling.
had to order my thoughts and figure out what I actually want: looking for top-tier AI researchers showcasing their work processes, places with clear papers/articles, tips for someone who's no longer a complete beginner, and other communities like this.

[D] The current and future state of AI/ML is shockingly demoralizing with little hope of redemption
reddit
LLM Vibe Score0
Human Vibe Score1
Flaky_Suit_8665This week

[D] The current and future state of AI/ML is shockingly demoralizing with little hope of redemption

I recently encountered the PaLM (Scaling Language Modeling with Pathways) paper from Google Research and it opened up a can of worms of ideas I’ve felt I’ve intuitively had for a while, but have been unable to express – and I know I can’t be the only one. Sometimes I wonder what the original pioneers of AI – Turing, Neumann, McCarthy, etc. – would think if they could see the state of AI that we’ve gotten ourselves into. 67 authors, 83 pages, 540B parameters in a model, the internals of which no one can say they comprehend with a straight face, 6144 TPUs in a commercial lab that no one has access to, on a rig that no one can afford, trained on a volume of data that a human couldn’t process in a lifetime, 1 page on ethics with the same ideas that have been rehashed over and over elsewhere with no attempt at a solution – bias, racism, malicious use, etc. – for purposes that, frankly, nobody asked for. When I started my career as an AI/ML research engineer in 2016, I was most interested in two types of tasks – 1.) those that most humans could do but that would universally be considered tedious and non-scalable. I’m talking image classification, sentiment analysis, even document summarization, etc. 2.) tasks that humans lack the capacity to perform as well as computers for various reasons – forecasting, risk analysis, game playing, and so forth. I still love my career, and I try to only work on projects in these areas, but it’s getting harder and harder. This is because, somewhere along the way, it became popular and unquestionably acceptable to push AI into domains that were originally uniquely human, those areas that sit at the top of Maslow’s hierarchy of needs in terms of self-actualization – art, music, writing, singing, programming, and so forth. These areas of endeavor have negative logarithmic ability curves – the vast majority of people cannot do them well at all, about 10% can do them decently, and 1% or less can do them extraordinarily. The little-discussed problem with AI-generation is that, without extreme deterrence, we will sacrifice human achievement at the top percentile in the name of lowering the bar for a larger volume of people, until the AI ability range is the norm. This is because relative to humans, AI is cheap, fast, and infinite, to the extent that investments in human achievement will be watered down at the societal, educational, and individual level with each passing year. And unlike AI gameplay, which superseded humans decades ago, we won’t be able to just disqualify the machines and continue to play as if they didn’t exist. Almost everywhere I go, even this forum, I encounter almost universal deference given to current SOTA AI generation systems like GPT-3, CODEX, DALL-E, etc., with almost no one extending their implications to their logical conclusion, which is long-term convergence to the mean, to mediocrity, in the fields they claim to address or even enhance. If you’re an artist or writer and you’re using DALL-E or GPT-3 to “enhance” your work, or if you’re a programmer saying, “GitHub Copilot makes me a better programmer?”, then how could you possibly know? You’ve disrupted and bypassed your own creative process, which is thoughts -> (optionally words) -> actions -> feedback -> repeat, and instead seeded your canvas with ideas from a machine, the provenance of which you can’t understand, nor can the machine reliably explain.
And the more you do this, the more you make your creative processes dependent on said machine, until you must question whether or not you could work at the same level without it. When I was a college student, I often dabbled with weed, LSD, and mushrooms, and for a while, I thought the ideas I was having while under the influence were revolutionary and groundbreaking – that is, until I took it upon myself to actually start writing down those ideas and then reviewing them while sober, when I realized they weren’t that special at all. What I eventually determined is that, under the influence, it was impossible for me to accurately evaluate the drug-induced ideas I was having because the influencing agent that generates the ideas was disrupting the same frame of reference that is responsible for evaluating said ideas. This is the same principle of – if you took a pill and it made you stupider, would you even know it? I believe that, especially over the long-term timeframe that crosses generations, there’s significant risk that current AI-generation developments produce a similar effect on humanity, and we mostly won’t even realize it has happened, much like a frog in boiling water. If you have children like I do, how can you be aware of the current SOTA in these areas, project that 20 to 30 years out, and then tell them with a straight face that it is worth them pursuing their talent in art, writing, or music? How can you be honest and still say that widespread implementation of auto-correction hasn’t made you and others worse and worse at spelling over the years (a task that even I believe most would agree is tedious and worth automating)? Furthermore, I’ve yet to see anyone discuss the train-generate-train-generate feedback loop that long-term application of AI-generation systems implies. The first generations of these models were trained on wide swaths of web data generated by humans, but if these systems are permitted to continually spit out content without restriction or verification, especially to the extent that it reduces or eliminates development and investment in human talent over the long term, then what happens to the 4th or 5th generation of models? Eventually we encounter this situation where the AI is being trained almost exclusively on AI-generated content, and therefore with each generation, it settles more and more into the mean and mediocrity with no way out using current methods (a toy simulation of this dynamic is sketched below). By the time that happens, what will we have lost in terms of the creative capacity of people, and will we be able to get it back? By relentlessly pursuing this direction so enthusiastically, I’m convinced that we as AI/ML developers, companies, and nations are past the point of no return, and it mostly comes down to the investments in time and money that we’ve made, as well as a prisoner’s dilemma with our competitors. As a society though, this direction we’ve chosen for short-term gains will almost certainly make humanity worse off, mostly for those who are powerless to do anything about it – our children, our grandchildren, and generations to come. If you’re an AI researcher or a data scientist like myself, how do you turn things back for yourself when you’ve spent years upon years building your career in this direction? You’re likely making near or north of $200k annually TC and have a family to support, and so it’s too late, no matter how you feel about the direction the field has gone.
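To make the train-generate-train feedback loop concrete, here is a toy, deliberately oversimplified simulation (my own illustrative construction, not from the post or any paper): each "generation" fits a one-dimensional Gaussian to the previous generation's output, then samples from it while favoring typical, high-likelihood outputs. The spread of the data collapses toward the mean — the convergence-to-mediocrity dynamic the author worries about.

```python
import numpy as np

rng = np.random.default_rng(42)

# Generation 0: "human" data with a healthy spread.
data = rng.normal(loc=0.0, scale=1.0, size=10_000)

for gen in range(1, 6):
    # Fit a trivial "model" to the current corpus: just its mean and std.
    mu, sigma = data.mean(), data.std()
    # Generate the next corpus from that model, with a mild bias toward
    # typical outputs (truncate the tails), mimicking a generator that
    # favors high-likelihood samples.
    samples = rng.normal(mu, sigma, size=10_000)
    data = samples[np.abs(samples - mu) < 1.5 * sigma]
    print(f"generation {gen}: std = {data.std():.3f}")

# The printed std shrinks every generation: variance (i.e., diversity)
# is lost when models train on their own filtered outputs.
```

Real generative models are vastly more complex, but the qualitative point survives: any systematic preference for typical outputs compounds across generations.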
If you’re a company, how do you stand by and let your competitors aggressively push their AutoML solutions into more and more markets without putting out your own? Moreover, if you’re a manager or thought leader in this field like Jeff Dean, how do you justify to your own boss and your shareholders your team’s billions of dollars in AI investment while simultaneously balancing ethical concerns? You can’t – the only answer is bigger and bigger models, more and more applications, more and more data, and more and more automation, and then automating that even further. If you’re a country like the US, how do you responsibly develop AI while your competitors like China single-mindedly push full steam ahead without an iota of ethical concern to replace you in numerous areas in global power dynamics? Once again, failing to compete would be pre-emptively admitting defeat. Even assuming that none of what I’ve described here happens to such an extent, why are so few people taking this seriously instead of discounting the possibility? If everything I’m saying is fear-mongering and nonsense, then I’d be interested in hearing what you think human-AI co-existence looks like in 20 to 30 years and why it isn’t as demoralizing as I’ve made it out to be. EDIT: Day after posting this -- this post took off way more than I expected. Even if I had received only 20-25 comments, I would have considered that a success, but this went much further. Thank you to each one of you that has read this post, even more so if you left a comment, and triply so for those who gave awards! I've read almost every comment that has come in (even the troll ones), and am truly grateful for each one, including those in sharp disagreement. I've learned much more from this discussion with the sub than I could have imagined on this topic, from so many perspectives. While I will try to reply to as many comments as I can, the sheer comment volume combined with limited free time between work and family unfortunately means that there are many that I likely won't be able to get to. That will invariably include some that I would love to respond to under the assumption of infinite time, but I will do my best, even if the latency stretches into days. Thank you all once again!

[D] The machine learning community has a toxicity problem
reddit
LLM Vibe Score0
Human Vibe Score1
yusuf-bengioThis week

[D] The machine learning community has a toxicity problem

It is omnipresent! First of all, the peer-review process is broken. Every fourth NeurIPS submission is put on arXiv. There are DeepMind researchers publicly going after reviewers who are criticizing their ICLR submission. On top of that, papers by well-known institutes that were put on arXiv are accepted at top conferences, despite the reviewers agreeing on rejection. Conversely, some papers with a majority of accepts are overruled by the AC. (I don't want to call any names, just have a look at the OpenReview page of this year's ICLR). Secondly, there is a reproducibility crisis. Tuning hyperparameters on the test set seems to be the standard practice nowadays. Papers that do not beat the current state-of-the-art method have a zero chance of getting accepted at a good conference. As a result, hyperparameters get tuned and subtle tricks implemented to observe a gain in performance where there isn't any. Thirdly, there is a worshiping problem. Every paper with a Stanford or DeepMind affiliation gets praised like a breakthrough. For instance, BERT has seven times more citations than ULMFiT. The Google affiliation gives so much credibility and visibility to a paper. At every ICML conference, there is a crowd of people in front of every DeepMind poster, regardless of the content of the work. The same story happened with the Zoom meetings at the virtual ICLR 2020. Moreover, NeurIPS 2020 had twice as many submissions as ICML, even though both are top-tier ML conferences. Why? Why is the name "neural" praised so much? Next, Bengio, Hinton, and LeCun are truly deep learning pioneers, but calling them the "godfathers" of AI is insane. It has reached the level of a cult. Fourthly, the way Yann LeCun talked about biases and fairness topics was insensitive. However, the toxicity and backlash that he received are beyond any reasonable quantity. Getting rid of LeCun and silencing people won't solve any issue. Fifthly, machine learning, and computer science in general, have a huge diversity problem. At our CS faculty, only 30% of undergrads and 15% of the professors are women. Going on parental leave during a PhD or post-doc usually means the end of an academic career. However, this lack of diversity is often abused as an excuse to shield certain people from any form of criticism. Reducing every negative comment in a scientific discussion to race and gender creates a toxic environment. People are becoming afraid to engage in fear of being called a racist or sexist, which in turn reinforces the diversity problem. Sixthly, morals and ethics are set arbitrarily. U.S. domestic politics dominate every discussion. At this very moment, thousands of Uyghurs are put into concentration camps based on computer vision algorithms invented by this community, and nobody seems even remotely to care. Adding a "broader impact" section at the end of every paper will not make this stop. There are huge shitstorms because a researcher wasn't mentioned in an article. Meanwhile, the 1-billion+ person continent of Africa is virtually excluded from any meaningful ML discussion (besides a few Indaba workshops). Seventhly, there is a cut-throat publish-or-perish mentality. If you don't publish 5+ NeurIPS/ICML papers per year, you are a loser. Research groups have become so large that the PI does not even know the name of every PhD student anymore. Certain people submit 50+ papers per year to NeurIPS. The sole purpose of writing a paper has become having one more NeurIPS paper in your CV.
Quality is secondary; passing the peer-review stage has become the primary objective. Finally, discussions have become disrespectful. Schmidhuber calls Hinton a thief, Gebru calls LeCun a white supremacist, Anandkumar calls Marcus a sexist, everybody is under attack, but nothing is improved. Even Albert Einstein opposed the theory of quantum mechanics. Can we please stop demonizing those who do not share our exact views? We are allowed to disagree without going for the jugular. The moment we start silencing people because of their opinion is the moment scientific and societal progress dies. Best intentions, Yusuf

Interview with Juergen Schmidhuber, renowned ‘Father Of Modern AI’, says his life’s work won't lead to dystopia.
reddit
LLM Vibe Score0
Human Vibe Score0.765
hardmaruThis week

Interview with Juergen Schmidhuber, renowned ‘Father Of Modern AI’, says his life’s work won't lead to dystopia.

Schmidhuber interview expressing his views on the future of AI and AGI. Original source. I think the interview is of interest to r/MachineLearning, and presents an alternate view compared to those of other influential leaders in AI. Juergen Schmidhuber, Renowned 'Father Of Modern AI,' Says His Life’s Work Won't Lead To Dystopia May 23, 2023. Contributed by Hessie Jones. Amid the growing concern about the impact of more advanced artificial intelligence (AI) technologies on society, there are many in the technology community who fear the implications of the advancements in Generative AI if they go unchecked. Dr. Juergen Schmidhuber, a renowned scientist and artificial intelligence researcher widely regarded as one of the pioneers in the field, is more optimistic. He declares that many of those who suddenly warn against the dangers of AI are just seeking publicity, exploiting the media’s obsession with killer robots, which has attracted more attention than “good AI” for healthcare, etc. The potential to revolutionize various industries and improve our lives is clear, as are the equal dangers if bad actors leverage the technology for personal gain. Are we headed towards a dystopian future, or is there reason to be optimistic? I had a chance to sit down with Dr. Juergen Schmidhuber to understand his perspective on this seemingly fast-moving AI train that will carry us into the future. As a teenager in the 1970s, Juergen Schmidhuber became fascinated with the idea of creating intelligent machines that could learn and improve on their own, becoming smarter than himself within his lifetime. This would ultimately lead to his groundbreaking work in the field of deep learning. In the 1980s, he studied computer science at the Technical University of Munich (TUM), where he earned his diploma in 1987. His thesis was on the ultimate self-improving machines that not only learn through some pre-wired, human-designed learning algorithm, but also learn and improve the learning algorithm itself. Decades later, this became a hot topic. He also received his Ph.D. at TUM in 1991 for work that laid some of the foundations of modern AI. Schmidhuber is best known for his contributions to the development of recurrent neural networks (RNNs), the most powerful type of artificial neural network that can process sequential data such as speech and natural language. With his students Sepp Hochreiter, Felix Gers, Alex Graves, Daan Wierstra, and others, he published architectures and training algorithms for the long short-term memory (LSTM), a type of RNN that is widely used in natural language processing, speech recognition, video games, robotics, and other applications. LSTM has become the most cited neural network of the 20th century, and Business Week called it "arguably the most commercial AI achievement." Throughout his career, Schmidhuber has received various awards and accolades for his groundbreaking work. In 2013, he was awarded the Helmholtz Prize, which recognizes significant contributions to the field of machine learning. In 2016, he was awarded the IEEE Neural Network Pioneer Award for "pioneering contributions to deep learning and neural networks." The media have often called him the “father of modern AI,” because the most cited neural networks all build on his lab’s work. He is quick to point out, however, that AI history goes back centuries.
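For readers who want to connect the LSTM mentioned above to something concrete, here is a minimal sketch of a single cell's forward pass using the standard textbook gate equations (variable names are illustrative; this is not code from Schmidhuber's lab):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, b):
    """One forward step of a single LSTM cell.

    x:      (d_in,) current input
    h_prev: (d_h,)  previous hidden state
    c_prev: (d_h,)  previous cell state (the "long short-term memory")
    W:      (4*d_h, d_in + d_h) stacked weights for the four gates
    b:      (4*d_h,) stacked biases
    """
    d_h = h_prev.shape[0]
    z = W @ np.concatenate([x, h_prev]) + b
    f = sigmoid(z[0 * d_h:1 * d_h])  # forget gate: what to erase from memory
    i = sigmoid(z[1 * d_h:2 * d_h])  # input gate: what to write to memory
    o = sigmoid(z[2 * d_h:3 * d_h])  # output gate: what to expose as output
    g = np.tanh(z[3 * d_h:4 * d_h])  # candidate values to write
    c = f * c_prev + i * g           # additive memory update
    h = o * np.tanh(c)               # new hidden state
    return h, c
```

The additive update of the cell state c is the key design choice: it lets gradients flow across many time steps, which is what makes LSTMs effective on the sequential tasks described above.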
Despite his many accomplishments, at the age of 60, he feels mounting time pressure towards building an Artificial General Intelligence within his lifetime and remains committed to pushing the boundaries of AI research and development. He is currently director of the KAUST AI Initiative, scientific director of the Swiss AI Lab IDSIA, and co-founder and chief scientist of AI company NNAISENSE, whose motto is "AI∀", a math-inspired way of saying "AI For All." He continues to work on cutting-edge AI technologies and applications to improve human health, extend human lives, and make life easier for everyone. The following interview has been edited for clarity.

Jones: Thank you Juergen for joining me. You have signed letters warning about AI weapons. But you didn't sign the recent publication, "Pause Gigantic AI Experiments: An Open Letter"? Is there a reason?

Schmidhuber: Thank you Hessie. Glad to speak with you. I have realized that many of those who warn in public against the dangers of AI are just seeking publicity. I don't think the latest letter will have any significant impact because many AI researchers, companies, and governments will ignore it completely. The proposal frequently uses the word "we" and refers to "us," the humans. But as I have pointed out many times in the past, there is no "we" that everyone can identify with. Ask 10 different people, and you will hear 10 different opinions about what is "good." Some of those opinions will be completely incompatible with each other. Don't forget the enormous amount of conflict between the many people. The letter also says, "If such a pause cannot be quickly put in place, governments should intervene and impose a moratorium." The problem is that different governments ALSO have different opinions about what is good for them and for others. Great Power A will say, if we don't do it, Great Power B will, perhaps secretly, and gain an advantage over us. The same is true for Great Powers C and D.

Jones: Everyone acknowledges this fear surrounding current generative AI technology. Moreover, the existential threat of this technology has been publicly acknowledged by Sam Altman, CEO of OpenAI himself, calling for AI regulation. From your perspective, is there an existential threat?

Schmidhuber: It is true that AI can be weaponized, and I have no doubt that there will be all kinds of AI arms races, but AI does not introduce a new quality of existential threat. The threat coming from AI weapons seems to pale in comparison to the much older threat from nuclear hydrogen bombs that don’t need AI at all. We should be much more afraid of half-century-old tech in the form of H-bomb rockets. The Tsar Bomba of 1961 had almost 15 times more destructive power than all weapons of WW-II combined. Despite the dramatic nuclear disarmament since the 1980s, there are still more than enough nuclear warheads to wipe out human civilization within two hours, without any AI. I’m much more worried about that old existential threat than the rather harmless AI weapons.

Jones: I realize that while you compare AI to the threat of nuclear bombs, there is a current danger that this technology can be put in the hands of humans and enable them to “eventually” exact further harms on individuals or groups in a very precise way, like targeted drone attacks. You are giving people a toolset they've never had before, enabling bad actors, as some have pointed out, to do a lot more than previously, because they didn't have this technology.
Schmidhuber: Now, all that sounds horrible in principle, but our existing laws are sufficient to deal with these new types of weapons enabled by AI. If you kill someone with a gun, you will go to jail. Same if you kill someone with one of these drones. Law enforcement will get better at understanding new threats and new weapons and will respond with better technology to combat these threats. Enabling drones to target persons from a distance, in a way that requires some tracking and some intelligence to perform and has traditionally been performed by skilled humans, seems to me just an improved version of a traditional weapon, like a gun, which is, you know, a little bit smarter than the old guns. But, in principle, all of that is not a new development. For many centuries, we have had the evolution of better weaponry and deadlier poisons and so on, and law enforcement has evolved its policies to react to these threats over time. So, it's not that we suddenly have a new quality of existential threat that is much more worrisome than what we have had for about six decades. A large nuclear warhead doesn’t need fancy face recognition to kill an individual. No, it simply wipes out an entire city with ten million inhabitants.

Jones: The existential threat that’s implied is the extent to which humans have control over this technology. We see some early cases of opportunism which, as you say, tends to get more media attention than positive breakthroughs. But you’re implying that this will all balance out?

Schmidhuber: Historically, we have a long tradition of technological breakthroughs that led to advancements in weapons for the purpose of defense but also for protection. From sticks, to rocks, to axes, to gunpowder, to cannons, to rockets… and now to drones… this has had a drastic influence on human history. But what has been consistent throughout history is that those who use technology to achieve their own ends are themselves facing the same technology, because the opposing side is learning to use it against them. That has been repeated over thousands of years of human history, and it will continue. I don't see the new AI arms race as something that is remotely as existential a threat as the good old nuclear warheads. You said something important: some people prefer to talk about the downsides rather than the benefits of this technology. But that's misleading, because 95% of all AI research and AI development is about making people happier and advancing human life and health.

Jones: Let’s touch on some of those beneficial advances in AI research that have been able to radically change present-day methods and achieve breakthroughs.

Schmidhuber: All right! For example, eleven years ago, our team, with my postdoc Dan Ciresan, was the first to win a medical imaging competition through deep learning. We analyzed female breast cells with the objective of distinguishing harmless cells from those in the pre-cancer stage. Typically, a trained oncologist needs a long time to make these determinations. Our team, who knew nothing about cancer, was able to train an artificial neural network, which was totally dumb in the beginning, on lots of this kind of data. It was able to outperform all the other methods. Today, this is being used not only for breast cancer, but also for radiology and detecting plaque in arteries, and many other things.
Some of the neural networks that we have developed in the last three decades are now prevalent across thousands of healthcare applications, detecting diabetes and COVID-19 and whatnot. This will eventually permeate all of healthcare. The good consequences of this type of AI are much more important than the click-bait new ways of conducting crimes with AI.

Jones: Adoption is a product of reinforced outcomes. The massive scale of adoption either leads us to believe that people have been led astray, or conversely, that technology is having a positive effect on people’s lives.

Schmidhuber: The latter is the likely case. There's intense commercial pressure towards good AI rather than bad AI because companies want to sell you something, and you are going to buy only stuff you think is going to be good for you. So already through this simple commercial pressure, you have a tremendous bias towards good AI rather than bad AI. However, doomsday scenarios like in Schwarzenegger movies grab more attention than documentaries on AI that improve people’s lives.

Jones: I would argue that people are drawn to good stories – narratives that contain an adversary and struggle, but in the end have happy endings. And this is consistent with your comment on human nature and how history, despite its tendency for violence and destruction of humanity, somehow tends to correct itself. Let’s take the example of a technology you are aware of – GANs, Generative Adversarial Networks – which today are used in applications for fake news and disinformation. In actuality, the purpose behind the invention of GANs was far from how they are used today.

Schmidhuber: Yes, the name GANs was created in 2014, but we had the basic principle already in the early 1990s. More than 30 years ago, I called it artificial curiosity. It's a very simple way of injecting creativity into a little two-network system. This creative AI is not just trying to slavishly imitate humans. Rather, it’s inventing its own goals. Let me explain: You have two networks. One network is producing outputs that could be anything, any action. The second network is looking at these actions and trying to predict the consequences of these actions. An action could move a robot, then something happens, and the other network is just trying to predict what will happen. Now we can implement artificial curiosity: the second network tries to reduce its prediction error, and that same error is, at the same time, the reward of the first network. The first network wants to maximize its reward, and so it will invent actions that lead to situations that surprise the second network, which it has not yet learned to predict well. In the case where the outputs are fake images, the first network will try to generate images that are good enough to fool the second network, which will attempt to predict the reaction of the environment: fake or real image, and it will try to become better at it. The first network will also continue to improve at generating images whose type the second network cannot yet predict. So, they fight each other. The second network will continue to reduce its prediction error, while the first network will attempt to maximize it. Through this zero-sum game, the first network gets better and better at producing convincing fake outputs which look almost realistic.
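A minimal sketch of the zero-sum two-network game described above, in PyTorch (assumptions are mine: toy dimensions and the standard 2014 GAN-style objective, not Schmidhuber's original 1990 formulation):

```python
# Two networks: a generator proposes outputs, a predictor judges them, and
# they play a zero-sum game in which the predictor minimizes the very error
# the generator is rewarded for maximizing.
import torch
import torch.nn as nn

generator = nn.Sequential(nn.Linear(8, 32), nn.Tanh(), nn.Linear(32, 2))
predictor = nn.Sequential(nn.Linear(2, 32), nn.Tanh(), nn.Linear(32, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
p_opt = torch.optim.Adam(predictor.parameters(), lr=1e-3)
bce = nn.BCELoss()

real_data = torch.randn(64, 2) * 0.5 + 2.0  # stand-in "real environment" data

for step in range(1000):
    # Predictor (second network) learns to tell real from generated outputs.
    fake = generator(torch.randn(64, 8)).detach()
    p_loss = bce(predictor(real_data), torch.ones(64, 1)) + \
             bce(predictor(fake), torch.zeros(64, 1))
    p_opt.zero_grad(); p_loss.backward(); p_opt.step()

    # Generator (first network) is rewarded by the predictor's error:
    # it tries to make fakes the predictor classifies as real.
    fake = generator(torch.randn(64, 8))
    g_loss = bce(predictor(fake), torch.ones(64, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```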
So, once you have an interesting set of images by Vincent van Gogh, you can generate new images that leverage his style, without the original artist having ever produced the artwork himself.

Jones: I see how the Van Gogh example can be applied in an education setting, and there are countless examples of artists mimicking the styles of famous painters, but image generation that can happen within seconds is quite another feat. And you know this is how GANs have been used. What’s more prevalent today is a socialized enablement of generating images or information to intentionally fool people. It also surfaces new harms that threaten intellectual property and copyright, which laws have yet to account for. And from your perspective this was not the intention when the model was conceived. What was your motivation in your early conception of what is now GANs?

Schmidhuber: My old motivation for GANs was actually very important, and it was not to create deepfakes or fake news but to enable AIs to be curious and invent their own goals, to make them explore their environment and make them creative. Suppose you have a robot that executes one action, then something happens, then it executes another action, and so on, because it wants to achieve certain goals in the environment. For example, when the battery is low, this will trigger “pain” through hunger sensors, so it wants to go to the charging station, without running into obstacles, which would trigger other pain sensors. It will seek to minimize pain (encoded through numbers). Now the robot has a friend, the second network, which is a world model – it’s a prediction machine that learns to predict the consequences of the robot’s actions. Once the robot has a good model of the world, it can use it for planning. It can be used as a simulation of the real world. And then it can determine what is a good action sequence. If the robot imagines one sequence of actions, the model will predict a lot of pain, which it wants to avoid. If it plays an alternative action sequence in its mental model of the world, then it will predict a rewarding situation where it’s going to sit on the charging station and its battery is going to charge again. So, it'll prefer to execute the latter action sequence. In the beginning, however, the model of the world knows nothing, so how can we motivate the first network to generate experiments that lead to data that helps the world model learn something it didn’t already know? That’s what artificial curiosity is about. The dueling two-network system effectively explores uncharted environments by creating experiments, so that over time the curious AI gets a better sense of how the environment works. This can be applied to all kinds of environments, and has medical applications.

Jones: Let’s talk about the future. You have said, “Traditional humans won’t play a significant role in spreading intelligence across the universe.”

Schmidhuber: Let’s first conceptually separate two types of AIs. The first type of AI are tools directed by humans. They are trained to do specific things like accurately detect diabetes or heart disease and prevent attacks before they happen. In these cases, the goal is coming from the human. More interesting AIs are setting their own goals. They are inventing their own experiments and learning from them. Their horizons expand, and eventually they become more and more general problem solvers in the real world.
They are not controlled by their parents, but much of what they learn is through self-invented experiments. A robot, for example, is rotating a toy, and as it is doing this, the video coming in through its camera eyes changes over time, and it begins to learn how this video changes, how the 3D nature of the toy generates certain videos if you rotate it a certain way, and eventually, how gravity works, and how the physics of the world works. Like a little scientist!

And I have predicted for decades that future scaled-up versions of such AI scientists will want to further expand their horizons, and eventually go where most of the physical resources are, to build more and bigger AIs. And of course, almost all of these resources are far away from earth out there in space, which is hostile to humans but friendly to appropriately designed AI-controlled robots and self-replicating robot factories. So here we are not talking any longer about our tiny biosphere; no, we are talking about the much bigger rest of the universe. Within a few tens of billions of years, curious self-improving AIs will colonize the visible cosmos in a way that’s infeasible for humans. Those who don’t won’t have an impact. Sounds like science fiction, but since the 1970s I have been unable to see a plausible alternative to this scenario, except for a global catastrophe such as an all-out nuclear war that stops this development before it takes off.

Jones: How long have these AIs, which can set their own goals – how long have they existed? To what extent can they be independent of human interaction?

Schmidhuber: Neural networks like that have existed for over 30 years. My first simple adversarial neural network system of this kind is the one from 1990 described above. You don’t need a teacher there; it's just a little agent running around in the world and trying to invent new experiments that surprise its own prediction machine. Once it has figured out certain parts of the world, the agent will become bored and will move on to more exciting experiments. The simple 1990 systems I mentioned have certain limitations, but in the past three decades, we have also built more sophisticated systems that set their own goals, and such systems, I think, will be essential for achieving true intelligence. If you are only imitating humans, you will never go beyond them. So, you really must give AIs the freedom to explore previously unexplored regions of the world in a way that no human is predefining.

Jones: Where is this being done today?

Schmidhuber: Variants of neural network-based artificial curiosity are used today for agents that learn to play video games in a human-competitive way. We have also started to use them for automatic design of experiments in fields such as materials science. I bet many other fields will be affected by it: chemistry, biology, drug design, you name it. However, at least for now, these artificial scientists, as I like to call them, cannot yet compete with human scientists. I don’t think it’s going to stay this way, but at the moment, it’s still the case. Sure, AI has made a lot of progress. Since 1997, there have been superhuman chess players, and since 2011, through the DanNet of my team, there have been superhuman visual pattern recognizers. But there are other things where humans, at the moment at least, are much better, in particular, science itself.
In the lab we have many first examples of self-directed artificial scientists, but they are not yet convincing enough to appear on the public’s radar screen, which is currently much more fascinated with simpler systems that just imitate humans and write texts based on previously seen human-written documents.

Jones: You speak of numerous instances, dating back 30 years, of lab experiments where self-driven agents decide, learn, and move on once they’ve learned. And I assume that rate of learning becomes even faster over time. What kind of timeframe are we talking about when this is eventually taken outside of the lab and embedded into society?

Schmidhuber: This could still take months or even years :-) Anyway, in the not-too-distant future, we will probably see artificial scientists who are good at devising experiments that allow them to discover new, previously unknown physical laws. As always, we are going to profit from the old trend that has held at least since 1941: every decade, compute is getting 100 times cheaper.

Jones: How does this trend affect modern AI such as ChatGPT?

Schmidhuber: Perhaps you know that all the recent famous AI applications, such as ChatGPT and similar models, are largely based on principles of artificial neural networks invented in the previous millennium. The main reason why they work so well now is the incredible acceleration of compute per dollar. ChatGPT is driven by a neural network called the “Transformer,” described in 2017 by Google. I am happy about that because, a quarter century earlier, in 1991, I had a particular Transformer variant which is now called the “Transformer with linearized self-attention.” Back then, not much could be done with it, because the compute cost was a million times higher than today. But today, one can train such models on half the internet and achieve much more interesting results.

Jones: And for how long will this acceleration continue?

Schmidhuber: There's no reason to believe that in the next 30 years we won't have another factor of 1 million, and that's going to be really significant. In the near future, for the first time, we will have many not-so-expensive devices that can compute as much as a human brain. The physical limits of computation, however, are much further out, so even if the trend of a factor of 100 every decade continues, the physical limits (about 10^51 elementary instructions per second per kilogram of matter) won’t be hit until, say, the mid-next century. Even in our current century, however, we’ll probably have many machines that compute more than all 10 billion human brains collectively, and you can imagine: everything will change then!

Jones: That is the big question. Is everything going to change? If so, what do you say to the next generation of leaders currently coming out of college and university? So much of this change is already impacting how they study, how they will work, and how the future of work and livelihood is defined. What is their purpose, and how do we change our systems so they will adapt to this new version of intelligence?

Schmidhuber: For decades, people have asked me questions like that, because, you know, what I'm saying now I have basically said since the 1970s; it’s just that today people are paying more attention, because back then they thought this was science fiction. They didn't think that I would ever come close to achieving my crazy life goal of building a machine that learns to become smarter than myself such that I can retire.
But now many have changed their minds and think it's conceivable. And now I have two daughters, 23 and 25. People ask me: what do I tell them? They know that Daddy always said, “It seems likely that within your lifetimes, you will have new types of intelligence that are probably going to be superior in many ways, and probably in all kinds of interesting ways.” How should they prepare for that? And I kept telling them the obvious: Learn how to learn new things! It's not like in the previous millennium, where within 20 years someone learned to be a useful member of society, then took a job for 40 years and performed in this job until she received her pension. Now things are changing much faster, and we must learn continuously just to keep up. I also told my girls that no matter how smart AIs are going to get, learn at least the basics of math and physics, because that’s the essence of our universe, and anybody who understands this will have an advantage and will learn all kinds of new things more easily. I also told them that social skills will remain important, because most future jobs for humans will continue to involve interactions with other humans, but I couldn’t teach them anything about that; they know much more about social skills than I do.

You touched on the big philosophical question about people’s purpose. Can this be answered without answering the even grander question: What’s the purpose of the entire universe? We don’t know. But what’s happening right now might be connected to the unknown answer. Don’t think of humans as the crown of creation. Instead view human civilization as part of a much grander scheme, an important step (but not the last one) on the path of the universe from very simple initial conditions towards more and more unfathomable complexity. Now it seems ready to take its next step, a step comparable to the invention of life itself over 3.5 billion years ago. But don’t worry; in the end, all will be good!

Jones: Let’s get back to this transformation happening right now with OpenAI. There are many questioning the efficacy and accuracy of ChatGPT, and some are concerned its release has been premature. In light of the rampant adoption, educators have banned its use over concerns of plagiarism and how it stifles individual development. Should large language models like ChatGPT be used in school?

Schmidhuber: When the calculator was first introduced, instructors forbade students from using it in school. Today, the consensus is that kids should learn the basic methods of arithmetic, but they should also learn to use the “artificial multipliers” aka calculators, even in exams, because laziness and efficiency are hallmarks of intelligence. Any intelligent being wants to minimize its efforts to achieve things. And that's the reason why we have tools, and why our kids are learning to use these tools. The first stone tools were invented maybe 3.5 million years ago; tools have just become more sophisticated over time. In fact, humans have changed in response to the properties of their tools. Our anatomical evolution was shaped by tools such as spears and fire. So, it's going to continue this way. And there is no permanent way of preventing large language models from being used in school.

Jones: And when our children, your children, graduate, what does their future work look like?
Schmidhuber: A single human trying to predict details of how 10 billion people and their machines will evolve in the future is like a single neuron in my brain trying to predict what the entire brain and its tens of billions of neurons will do next year. 40 years ago, before the WWW was created at CERN in Switzerland, who would have predicted all those young people making money as YouTube video bloggers?

Nevertheless, let’s make a few limited job-related observations. For a long time, people have thought that desktop jobs may require more intelligence than skilled trades or handicraft professions. But now, it turns out that it's much easier to replace certain aspects of desktop jobs than to replace a carpenter, for example, because everything that works well in AI currently happens behind the screen, not so much in the physical world. There are now artificial systems that can read lots of documents and then make really nice summaries of those documents. That is a desktop job. Or you give them a description of an illustration that you want for your article, and pretty good illustrations are generated that may need only minimal fine-tuning. But all these desktop jobs are much easier to automate than the really tough jobs in the physical world. And it's interesting that the things people thought required intelligence, like playing chess or writing or summarizing documents, are much easier for machines than they expected. But for things like playing football or soccer, there is no physical robot that can remotely compete with the abilities of a little boy with these skills. So, AI in the physical world, interestingly, is much harder than AI behind the screen in virtual worlds. And it's really exciting, in my opinion, to see that jobs such as plumbing are much more challenging than playing chess or writing another tabloid story.

Jones: The way data has been collected for these large language models does not guarantee that personal information has been excluded. Current consent laws are already outdated when it comes to these large language models (LLMs). The concern, rightly so, is increasing surveillance and loss of privacy. What is your view on this?

Schmidhuber: As I have indicated earlier: are surveillance and loss of privacy inevitable consequences of increasingly complex societies? Super-organisms such as cities and states and companies consist of numerous people, just like people consist of numerous cells. These cells enjoy little privacy. They are constantly monitored by specialized "police cells" and "border guard cells": Are you a cancer cell? Are you an external intruder, a pathogen? Individual cells sacrifice their freedom for the benefits of being part of a multicellular organism. Similarly for super-organisms such as nations. Over 5,000 years ago, writing enabled recorded history and thus became its inaugural and most important invention. Its initial purpose, however, was to facilitate surveillance, to track citizens and their tax payments. The more complex a super-organism, the more comprehensive its collection of information about its constituents. 200 years ago, at least, the parish priest in each village knew everything about all the village people, even about those who did not confess, because they appeared in the confessions of others. Also, everyone soon knew about the stranger who had entered the village, because some occasionally peered out of the window, and what they saw got around.
Such control mechanisms were temporarily lost through anonymization in rapidly growing cities but are now returning with the help of new surveillance devices such as smartphones, as part of digital nervous systems that tell companies and governments a lot about billions of users. Cameras, drones, and the like are becoming ever tinier and more ubiquitous. More effective face recognition and other detection technologies are becoming cheaper and cheaper, and many will use them to identify others anywhere on earth; the big wide world will not offer any more privacy than the local village. Is this good or bad? Some nations may find it easier than others to justify more complex kinds of super-organisms at the expense of the privacy rights of their constituents.

Jones: So, there is no way to stop or change this process of collection, or how it continuously informs decisions over time? How do you see governance and rules responding to this, especially amid Italy’s ban on ChatGPT following a suspected user data breach and the more recent news about Meta’s record $1.3 billion fine over the company’s handling of user information?

Schmidhuber: Data collection has benefits and drawbacks, such as the loss of privacy. How to balance those? I have argued for addressing this through data ownership in data markets. If it is true that data is the new oil, then it should have a price, just like oil. At the moment, the major surveillance platforms such as Meta do not offer users any money for their data and the attendant loss of privacy. In the future, however, we will likely see attempts at creating efficient data markets to figure out the data's true financial value through the interplay between supply and demand. Even some of the sensitive medical data should not be priced by governmental regulators but by the patients (and healthy persons) who own it and who may sell or license parts thereof as micro-entrepreneurs in a healthcare data market.

Following a previous interview I gave for one of the largest re-insurance companies, let's look at the different participants in such a data market: patients, hospitals, data companies. (1) Patients with a rare form of cancer can offer more valuable data than patients with a very common form of cancer. (2) Hospitals and their machines are needed to extract the data, e.g., through magnetic resonance tomography, radiology, evaluations by human doctors, and so on. (3) Companies such as Siemens, Google or IBM would like to buy annotated data to make better artificial neural networks that learn to predict pathologies and diseases and the consequences of therapies. Now the market’s invisible hand will decide the data’s price through the interplay between demand and supply. On the demand side, you will have several companies offering something for the data, maybe through an app on the smartphone (a bit like a stock market app). On the supply side, each patient in this market should be able to profit from high prices for rare, valuable types of data. Likewise, competing data extractors such as hospitals will profit from gaining recognition and trust for extracting data well at a reasonable price. The market will make the whole system efficient through incentives for all who are doing a good job. Soon there will be a flourishing ecosystem of commercial data market advisors and whatnot, just like the ecosystem surrounding the traditional stock market.
The value of the data won’t be determined by governments or ethics committees, but by those who own the data and decide by themselves which parts thereof they want to license to others under certain conditions. At first glance, a market-based system seems detrimental to the interest of certain monopolistic companies, as they would have to pay for the data; some would prefer free data and to keep their monopoly. However, since every healthy and sick person in the market would suddenly have an incentive to collect and share their data under self-chosen anonymity conditions, there will soon be much more useful data to evaluate all kinds of treatments. On average, people will live longer and healthier lives, and many companies and the entire healthcare system will benefit.

Jones: Finally, what is your view on open source versus private companies like Google and OpenAI? Is there a danger in supporting these private companies’ large language models versus trying to keep these models open source and transparent, very much like what LAION is doing?

Schmidhuber: I signed this open letter by LAION because I strongly favor the open-source movement. And I think it's also something that is going to challenge whatever big tech dominance there might be at the moment. Sure, the best models today are run by big companies with huge budgets for computers, but the exciting fact is that open-source models are not so far behind; some people say maybe only six to eight months. Of course, the private company models are all based on stuff that was created in academia, often in little labs without much funding, which publish without patenting their results and open-source their code, and others take it and improve it. Big tech has profited tremendously from academia; their main achievement being that they have scaled up everything greatly, sometimes even failing to credit the original inventors. So, it's very interesting to see that as soon as some big company comes up with a new scaled-up model, lots of students out there are competing, or collaborating, with each other, trying to come up with equal or better performance on smaller networks and smaller machines. And since they are open-sourcing, the next person can have another great idea to improve it, so now there’s tremendous competition also for the big companies. Because of that, and since AI is still getting exponentially cheaper all the time, I don't believe that big tech companies will dominate in the long run. They find it very hard to compete with the enormous open-source movement. As long as you can encourage the open-source community, I think you shouldn't worry too much. Now, of course, you might say that if everything is open source, then bad actors will also have easier access to these AI tools. And there's truth to that. But as always since the invention of controlled fire, it was good that knowledge of how a technology works quickly became public, so that everybody could use it. And then, against any bad actor, there's almost immediately a counter-actor trying to nullify his efforts. You see, I still believe in our old motto "AI∀" or "AI For All."

Jones: Thank you, Juergen, for sharing your perspective on this amazing time in history. It’s clear that with new technology, the enormous potential can be matched by disparate and troubling risks which we’ve yet to solve, and even those we have yet to identify.
If we are to dispel the fear of a sentient system over which we have no control, humans alone need to take steps for more responsible development and collaboration to ensure AI technology is used to ultimately benefit society. Humanity will be judged by what we do next.

[R] From 3D Contour Plots to AI-Generated Art
reddit
LLM Vibe Score0
Human Vibe Score1
MLRecipesThis week

[R] From 3D Contour Plots to AI-Generated Art

Fun tutorial to learn how to make professional contour plots in Python, with incredible animated visualizations. At the intersection of machine learning, scientific computing, automated art, cartography, and video games. Section 3 is particularly interesting, as it shows all the work behind the scenes needed to complete this project in 20 hours when you have no idea how to start. https://reddit.com/link/ycg6c6/video/kycotrx09sv91/player

There is far more than just creating 3D contour plots in this article. First, you will learn how to produce data videos. I have shared quite a few in the past (with source code), but this is probably the simplest example. The data video also illustrates that a mixture of Gaussian-like distributions is typically not Gaussian-like, and may or may not be unimodal. It is borderline art (automatically generated), and certainly a stepping stone for professionals interested in computer vision or designing video games. It is easy to imagine a game based on my video, entitled “flying above menacingly rising mountains”. The data video, through various rotations, also gives you a much better view of your data. It is perfect for showing systems that evolve over time: a time series where each observation is an image.

In addition, unlike most tutorials found online, this one does a rather deep dive on a specific, rather advanced function from a library truly aimed at scientific computing. In the same way that images (say, pictures of hand-written digits) can be summarized by 10 parameters to perform text recognition, here 20 parameters allow you to perform topography classification. Not just of static terrain, but of terrain that changes over time, assuming you have access to 50,000 videos representing different topographies. You can produce the videos needed for supervised classification with the code in section 2. The next step is to use data (videos) from the real world, and use the model trained on synthetic data for classification. Read the full article with illustration (data video) and Python code, here.
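The article's own code is behind the link, but a minimal sketch of the core ingredient (a 3D surface/contour plot of a Gaussian mixture in matplotlib; the means, weights, and widths below are illustrative assumptions, not the article's parameters) looks like this:

```python
# Plot a two-component Gaussian mixture as a 3D surface with projected
# contours. The sum of two Gaussian bumps is generally not Gaussian-like,
# and may be unimodal or bimodal depending on the means and widths.
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(-3, 3, 200)
y = np.linspace(-3, 3, 200)
X, Y = np.meshgrid(x, y)

def gaussian(X, Y, mx, my, s):
    return np.exp(-((X - mx) ** 2 + (Y - my) ** 2) / (2 * s ** 2))

Z = 0.6 * gaussian(X, Y, -1.0, -0.5, 0.8) + 0.4 * gaussian(X, Y, 1.2, 0.7, 0.6)

fig = plt.figure(figsize=(8, 6))
ax = fig.add_subplot(projection="3d")
ax.plot_surface(X, Y, Z, cmap="terrain", linewidth=0, antialiased=True)
ax.contour(X, Y, Z, zdir="z", offset=-0.2, cmap="terrain")  # projected contours
ax.set_zlim(-0.2, 1.0)
plt.show()
```

Animating the frames (e.g., rotating the view angle per frame and stitching the images into a video) yields the kind of data video the article describes.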

[D] Should We Be Concerned About The Failure Of Evolutionary Algorithms, And Its Implications?
reddit
LLM Vibe Score0
Human Vibe Score-1
mystikaldangerThis week

[D] Should We Be Concerned About The Failure Of Evolutionary Algorithms, And Its Implications?

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6287292/

A number of possible explanations for [why we can't evolve complex software] could be considered. We tried to be as comprehensive as possible in this section, but it is possible that we have not considered some plausible explanations:

Incompetent programmers—It is theoretically possible, but highly unlikely, that out of thousands of scientists working on evolutionary computation, all failed to correctly implement the Darwinian algorithm.

Nonrepresentative algorithms—Some have suggested that EAs do not accurately capture the theory of evolution, but of course that would imply that the theory itself is not specified in sufficient detail to make falsifiable predictions. If such more detailed specifications are available to GP believers, it is up to them to implement them as computer simulations for testing purposes, but no successful examples of such work are known, and the attempts that exist have not succeeded in evolving software.

Inadequate fitness functions—The fitness function for a complex software product is difficult to outline and specify and may be as complex as (or even more complex than) the software we want to evolve, as it has to consider all the possible use cases and pass all unit tests. This may be the Achilles heel of GP, but it is also an objection to the feasibility of programming in general and GP in particular, as both have to convert a software specification into source code. If human programmers and biological evolution succeed under such constraints, so should Darwinian simulations.

The Halting problem—Turing proved that it is impossible to determine whether an arbitrary program halts, but this is also a problem for human programmers and can easily be addressed by placing time limits on considered solutions.

Program correctness—If we require evolved software to be provably correct, this would present a problem, as GP does not verify produced designs but only tests them against specific unit tests. Likewise, we cannot rely on automated software verification, as it is still an unsolved problem in the general case. This is not really a problem, as most human-written software is never proven to be correct, and only a small portion of the software engineering process relies on formal specification and test-driven development.

Inappropriate solutions—The literature on EAs is full of examples of the surprising creativity of the Darwinian algorithm resulting in solutions which match the letter of design specifications but not the spirit. This is similar to human-produced software and the numerous examples of ways in which such software fails the goals of the initial design.

Insufficient complexity of the environment (not enough data, poor fitness functions)—It is possible that the simulated environment is not complex enough to generate high-complexity outputs in evolutionary simulations. This does not seem correct, as the Internet presents a highly complex landscape in which many self-modifying computer viruses roam. Likewise, virtual worlds such as Second Life and many others present close approximations to the real world and are certainly more complex than the early Earth was: A skeptic might insist that an abstract environment would be inadequate for the evolution . . ., believing instead that the virtual environment would need to closely resemble the actual biological environment in which our ancestors evolved.
Creating a physically realistic virtual world would require a far greater investment of computational resources than the simulation of a simple toy world or abstract problem domain (whereas evolution had access to a physically realistic real world “for free”). In the limiting case, if complete microphysical accuracy were insisted upon, the computational requirements would balloon to utterly infeasible proportions. Requiring more realistic environmental conditions may result in an increase in necessary computational resources, a problem addressed in the next bullet.

Insufficient resources (compute, memory)—From the history of computer science, we know of many situations (speech recognition, NN training) where we had a correct algorithm but insufficient computational resources to run it to success. It is possible that we simply do not have hardware powerful enough to emulate evolution. We will address this possibility in the section “Computational Complexity of Biological Evolution and Available Compute.”

Software design is not amenable to evolutionary methods—The space of software designs may be discrete, with no continuous path via incremental fitness to the desired solutions. This is possible, but it would imply that the original goals of GP are unattainable and misguided. In addition, because a clear mapping exists between solutions to problems and animals as solutions to environmental problems, this would also imply that the current explanation for the origin of the species is incorrect.

Darwinian algorithm is incomplete or wrong—Finally, we have to consider the possibility that the inspiration behind evolutionary computation, the Darwinian algorithm itself, is wrong or at least partially incomplete. If that were true, computer simulations of the algorithm would fail to produce results comparable with the observations we see in nature, and a search for an alternative algorithm would need to take place. This would be an extraordinary claim and would require that we discard all the other possible explanations from this list.

We challenge the EA community to prove us wrong by producing an experiment which evolves nontrivial software from scratch and without human help. That would be the only way in which our findings could be shown to be incorrect. Perhaps reframing the problem in terms of maximizing the negentropy of digital organisms, as suggested by Schrödinger, Michaelian, and Ulanowicz and Hannon, with respect to negative entropy being a fundamental property of all life-forms, may produce better results.

On a positive side, the fact that it seems impossible to evolve complex software implies that we are unlikely to be able to evolve highly sophisticated artificially intelligent agents, which may present a significant risk to our safety and security. Just imagine what would have happened if the very first time we ran a simulation of evolution on a computer, it produced a superintelligent agent. Yampolskiy has shown that programming as a problem is AI-complete; if GP could solve programming, that would imply that GP = AGI (artificial general intelligence), but we see no experimental evidence for such a claim. In fact, it is more likely that once we have AGI, it could be used to create an intelligent fitness function for GP and so evolve software. Genetic programming will not be the cause of AI, but a product of it. However, neuroevolution methods for optimizing deep learning architectures and parameters remain a strong possibility for the creation of AGI.
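For readers unfamiliar with the Darwinian algorithm the paper benchmarks, here is a minimal toy sketch (assumptions mine: a (1+λ) mutation-selection loop over fixed-length strings with a unit-test-style fitness; real GP evolves program trees, and this toy succeeds precisely because its fitness landscape is trivially smooth, unlike the software design spaces discussed above):

```python
# Toy evolutionary loop: mutate a parent string, keep the fittest child.
# The per-character-match fitness stands in for the unit-test-style
# fitness functions the paper discusses.
import random

TARGET = "print('hello')"
ALPHABET = [chr(c) for c in range(32, 127)]

def fitness(candidate: str) -> int:
    # Number of positions matching the target "specification".
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(parent: str, rate: float = 0.05) -> str:
    # Flip each character to a random symbol with small probability.
    return "".join(random.choice(ALPHABET) if random.random() < rate else ch
                   for ch in parent)

parent = "".join(random.choice(ALPHABET) for _ in range(len(TARGET)))
for gen in range(10_000):
    children = [mutate(parent) for _ in range(20)]   # lambda = 20 offspring
    parent = max(children + [parent], key=fitness)   # elitist selection
    if fitness(parent) == len(TARGET):
        print(f"converged at generation {gen}: {parent}")
        break
```

The paper's point is that this style of search works on smooth toy landscapes but has never been shown to evolve nontrivial software from scratch.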

[P]MMML | Deploy HuggingFace training model rapidly based on MetaSpore
reddit
LLM Vibe Score0
Human Vibe Score1
qazmkoppThis week

[P]MMML | Deploy HuggingFace training model rapidly based on MetaSpore

A few days ago, HuggingFace announced a $100 million Series C funding round, which was big news in open-source machine learning and could be a sign of where the industry is headed. Two days before the HuggingFace funding announcement, the open-source machine learning platform MetaSpore released a demo of rapid deployment of HuggingFace pre-trained models.

As deep learning technology makes innovative breakthroughs in computer vision, natural language processing, speech understanding, and other fields, more and more unstructured data are perceived, understood, and processed by machines. These advances are mainly due to the powerful learning ability of deep learning. Through pre-training of deep models on massive data, the models can capture internal data patterns, thus helping many downstream tasks. With industry and academia investing more and more energy in the research of pre-training technology, distribution hubs for pre-trained models such as HuggingFace and Timm have emerged one after another, and the open-source community is releasing the dividends of significant pre-trained models at an unprecedented speed.

In recent years, the data forms that machines model and understand have gradually evolved from single-modal to multimodal, and the semantic gap between different modalities is being eliminated, making it possible to retrieve data across modalities. Take CLIP, OpenAI’s open-source work, as an example: it pre-trains twin towers for images and texts on a dataset of 400 million image-text pairs and connects the semantics between pictures and texts. Many researchers in academia have been solving multimodal problems such as image generation and retrieval based on this technology. Although frontier technology has bridged the semantic gap between modal data, putting it into production still involves heavy and complicated model tuning, offline data processing, high-performance online inference architecture design, heterogeneous computing, and online algorithm engineering: multiple processes and challenges that hinder frontier multimodal retrieval technologies from landing in practice and benefiting everyone.

DMetaSoul targets the above technical pain points, abstracting and unifying many steps such as model training optimization, online inference, and algorithm experimentation into a set of solutions that can quickly bring an offline pre-trained model online. This paper will introduce how to use HuggingFace community pre-trained models to conduct online inference and algorithm experiments based on the MetaSpore technology ecosystem, so that the benefits of pre-trained models can be fully released to specific businesses or industries and to small and medium-sized enterprises. We will give two multimodal retrieval demonstrations for reference: text-to-text search and text-to-image search.

Multimodal semantic retrieval

The sample architecture of multimodal retrieval is as follows. Our multimodal retrieval system supports both text-to-text and text-to-image search application scenarios, including offline processing, model inference, online services, and other core modules:

https://preview.redd.it/w4v4c7vcez291.png?width=1834&format=png&auto=webp&s=0687efb1fddb26e8e30cb844d398ec712b947f31

Offline processing, including offline data processing for the different application scenarios of text-to-text and text-to-image search: model tuning, model export, index database construction, data push, etc.

Model inference.
After the offline model training, we deployed our NLP and CV large models based on the MetaSpore Serving framework. MetaSpore Serving helps us conveniently perform online inference, elastic scheduling, load balancing, and resource scheduling in heterogeneous environments.

Online services. Based on MetaSpore’s online algorithm application framework, MetaSpore has a complete set of reusable online search services, including a front-end retrieval UI, multimodal data preprocessing, vector recall and ranking algorithms, an A/B experiment framework, etc. MetaSpore supports text-to-text and text-to-image search and can be migrated to other application scenarios at a low cost.

The HuggingFace open-source community has provided several excellent baseline models for similar multimodal retrieval problems, which are often the starting point for actual optimization in the industry. MetaSpore also uses HuggingFace community pre-trained models in its online text-to-text and text-to-image search services. Text-to-text search is based on a semantic similarity model optimized by MetaSpore for the question-and-answer domain, and text-to-image search is based on a community pre-trained model. These community open-source pre-trained models are exported to the general ONNX format and loaded into MetaSpore Serving for online inference. The following sections provide a detailed description of the model export and online retrieval algorithm services. Model inference is a standardized SaaS service loosely coupled with the business. Interested readers can refer to my previous post: The design concept of MetaSpore, a new generation of the one-stop machine learning platform.

1.1 Offline Processing

Offline processing mainly involves the export and loading of online models and the index building and pushing of the document library. You can follow the step-by-step instructions below to complete the offline processing of text-to-text and text-to-image search and see how offline pre-trained models achieve inference in MetaSpore.

1.1.1 Search text by text

Traditional text retrieval systems are based on literal matching algorithms such as BM25. Due to users’ diverse query words, a semantic gap between query words and documents is often encountered. For example, users misspell “iPhone” as “Phone,” and search terms can be incredibly long, such as “1 ~ 3 months old baby autumn small size bag pants”. Traditional text retrieval systems use spelling correction, synonym expansion, query rewriting, and other means to alleviate the semantic gap but fundamentally fail to solve this problem. Only when the retrieval system fully understands users’ query terms and documents can it meet users’ retrieval demands at the semantic level. With the continuous progress of pre-training and representation learning technology, some commercial search engines have been integrating semantic vector retrieval methods into the retrieval ecosystem.

Semantic retrieval model

This paper introduces a semantic vector retrieval application. MetaSpore built a semantic retrieval system based on encyclopedia question-and-answer data. MetaSpore adopted the Sentence-BERT model as the semantic vector representation model, which fine-tunes twin-tower BERT in supervised or unsupervised ways to make the model more suitable for retrieval tasks.
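As a rough illustration of the twin-tower idea (a sketch, not MetaSpore's production pipeline; the model name below is a generic open-source placeholder standing in for the fine-tuned Sbert-Chinese-QMC-domain-V1 mentioned here):

```python
# Symmetric twin-tower retrieval: query and document share one encoder,
# and relevance is measured by cosine similarity in the shared space.
from sentence_transformers import SentenceTransformer, util

# Placeholder multilingual model, not MetaSpore's fine-tuned Q&A model.
model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

docs = ["How do I renew my ID card?",
        "Opening hours of the city library",
        "Renewing a passport at the local office"]
doc_vecs = model.encode(docs, convert_to_tensor=True)   # offline "library build"

query_vec = model.encode("ID card renewal process", convert_to_tensor=True)
scores = util.cos_sim(query_vec, doc_vecs)              # online retrieval
best = scores.argmax().item()
print(docs[best], scores[0, best].item())
```

In the production setup described below, the offline encoding step feeds a vector database (Milvus) instead of an in-memory tensor, and the online encoding step runs as an exported ONNX model inside MetaSpore Serving.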
The model structure is as follows: the query-doc symmetric two-tower model is used in text search and question-and-answer retrieval. The vector representations of the online query and the offline doc share the same vector representation model, so it is necessary to ensure consistency between the offline doc library-building model and the online query inference model. This case uses MetaSpore’s text representation model Sbert-Chinese-QMC-domain-V1, optimized on open-source semantic similarity datasets. This model expresses the question-and-answer data as vectors during offline database construction. The user query is expressed as a vector by the same model during online retrieval, ensuring that query and doc are in the same semantic space, so users’ semantic retrieval demands can be met by vector similarity calculation.

Since the text representation model encodes the query online, we need to export the model for use by the online service. Go to the Q&A data library code directory and export the model following the documentation. In the script, PyTorch tracing is used to export the model. The models are exported to the “./export” directory. The exported artifacts are mainly the ONNX model used for online inference, the tokenizer, and related configuration files. The exported models are loaded into MetaSpore Serving by the online serving system described below for model inference. Since the exported model will be copied to cloud storage, you need to configure the related variables in env.sh.

Build library based on text search

The retrieval database is built on a million-scale encyclopedia question-and-answer dataset. According to the description document, you need to download the data and complete the database construction. The question-and-answer data are encoded as vectors by the offline model, and then the database construction data are pushed to the service components. The whole process of database construction is as follows:

Preprocessing: convert the original data into a more general JSONline format for database construction.

Build index: use the same model as online, “sbert-Chinese-qmc-domain-v1”, to index documents (one document object per line).

Push: push inverted (vector) and forward (document field) data to each component server.

The following is an example of the database data format. After offline database construction is completed, the various data are pushed to the corresponding service components, such as Milvus, which stores the vector representations of documents, and MongoDB, which stores the summary information of documents. Online retrieval algorithm services will use these service components to obtain the relevant data.

1.1.2 Search image by text

Text and images are easy for humans to relate semantically but difficult for machines. First, from the perspective of data form, text is one-dimensional discrete data based on characters and words, while images are continuous two-dimensional or three-dimensional data. Second, text is a subjective creation of human beings, and its expressive power is rich, including various twists, metaphors, and other expressions, while images are machine representations of the objective world. In short, bridging the semantic gap between text and image data is much more complex than text-to-text search.
Traditional text-to-image retrieval generally relies on external text descriptions of the image or nearest-neighbor retrieval technology, carrying out the retrieval through the image's associated text, which in essence reduces the problem to text search. However, this also faces many issues, such as obtaining the associated text of pictures and whether the accuracy of text-to-text search is high enough. Deep models have gradually evolved from single-modal to multimodal in recent years. Taking OpenAI's open-source project CLIP as an example: it trains the model on massive Internet image and text data and maps the text and image data into the same semantic space, making it possible to implement text-to-image search based on semantic vectors.

CLIP graphic model

The text-to-image search introduced in this paper is implemented with semantic vector retrieval, using the CLIP pre-trained model as the two-tower retrieval architecture. Because the CLIP model has trained the semantic alignment of its text-side and image-side towers on massive image-text data, it is particularly suitable for the text-to-image search scenario. Due to the different image and text data forms, the query-doc asymmetric twin-tower model is used for text-to-image retrieval. The image-side model of the twin towers is used for offline database construction, and the text-side model is used online. In the final online retrieval, the database built with the image-side model is searched after the text-side model encodes the query, and the CLIP pre-trained model guarantees the semantic correlation between images and texts. The model draws image-text pairs closer in vector space by pre-training on a large amount of visual data.

Here we need to export the text-side model for online MetaSpore Serving inference. Since the retrieval scene is based on Chinese, a CLIP model supporting Chinese understanding is selected. The exported content includes the ONNX model used for online inference and the tokenizer, similar to the text search case. MetaSpore Serving can load model inference through the exported content.

Build library for image search

You need to download the Unsplash Lite library data and complete the construction according to the instructions. The whole process of database construction is as follows:

Preprocessing: specify the image directory, and then generate a more general JSONline file for library construction.

Build index: use the OpenAI/Clip-Vit-BASE-Patch32 pre-trained model to index the gallery, and output one document object for each line of index data.

Push: push inverted (vector) and forward (document field) data to each component server.

As with text search, after offline database construction, the relevant data are pushed to the service components, which are called by the online retrieval algorithm services to obtain the relevant data.

1.2 Online Services

The overall online service architecture diagram is as follows:

https://preview.redd.it/jfsl8hdfez291.png?width=1280&format=png&auto=webp&s=a858e2304a0c93e78ba5429612ca08cbee69b35a

The multimodal search online service system supports application scenarios such as text-to-text and text-to-image search. The whole online service consists of the following parts:

Query preprocessing service: encapsulates the preprocessing logic (text/image, etc.)
of pre-training model, and provide services through gRPC interface; Retrieval algorithm service: the whole algorithm processing link includes AB experiment tangent flow configuration, MetaSpore Serving call, vector recall, sorting, document summary, etc.; User entry service: provides a Web UI interface for users to debug and track down problems in the retrieval service. From a user request perspective, these services form invocation dependencies from back to front, so to build up a multimodal sample, you need to run each service from front to back first. Before doing this, remember to export the offline model, put it online and build the library first. This article will introduce the various parts of the online service system and make the whole service system step by step according to the following guidance. See the ReadME at the end of this article for more details. 1.2.1 Query preprocessing service Deep learning models tend to be based on tensors, but NLP/CV models often have a preprocessing part that translates raw text and images into tensors that deep learning models can accept. For example, NLP class models often have a pre-tokenizer to transform text data of string type into discrete tensor data. CV class models also have similar processing logic to complete the cropping, scaling, transformation, and other processing of input images through preprocessing. On the one hand, considering that this part of preprocessing logic is decoupled from tensor reasoning of the depth model, on the other hand, the reason of the depth model has an independent technical system based on ONNX, so MetaSpore disassembled this part of preprocessing logic. NLP pretreatment Tokenizer has been integrated into the Query pretreatment service. MetaSpore dismantlement with a relatively general convention. Users only need to provide preprocessing logic files to realize the loading and prediction interface and export the necessary data and configuration files loaded into the preprocessing service. Subsequent CV preprocessing logic will also be integrated in this manner. The preprocessing service currently provides the gRPC interface invocation externally and is dependent on the Query preprocessing (QP) module in the retrieval algorithm service. After the user request reaches the retrieval algorithm service, it will be forwarded to the service to complete the data preprocessing and continue the subsequent processing. The ReadMe provides details on how the preprocessing service is started, how the preprocessing model exported offline to cloud storage enters the service, and how to debug the service. To further improve the efficiency and stability of model reasoning, MetaSpore Serving implements a Python preprocessing submodule. So MetaSpore can provide gRPC services through user-specified preprocessor.py, complete Tokenizer or CV-related preprocessing in NLP, and translate requests into a Tensor that deep models can handle. Finally, the model inference is carried out by MetaSpore, Serving subsequent sub-modules. Presented here on the lot code: https://github.com/meta-soul/MetaSpore/compare/add\python\preprocessor 1.2.2 Retrieval algorithm services Retrieval algorithm service is the core of the whole online service system, which is responsible for the triage of experiments, the assembly of algorithm chains such as preprocessing, recall, sorting, and the invocation of dependent component services. 
The whole retrieval algorithm service is developed based on the Java Spring framework and supports multi-mode retrieval scenarios of text search and text search graph. Due to good internal abstraction and modular design, it has high flexibility and can be migrated to similar application scenarios at a low cost. Here’s a quick guide to configuring the environment to set up the retrieval algorithm service. See ReadME for more details: Install dependent components. Use Maven to install the online-Serving component Search for service configurations. Copy the template configuration file and replace the MongoDB, Milvus, and other configurations based on the development/production environment. Install and configure Consul. Consul allows you to synchronize the search service configuration in real-time, including cutting the flow of experiments, recall parameters, and sorting parameters. The project’s configuration file shows the current configuration parameters of text search and text search. The parameter modelName in the stage of pretreatment and recall is the corresponding model exported in offline processing. Start the service. Once the above configuration is complete, the retrieval service can be started from the entry script. Once the service is started, you can test it! For example, for a user with userId=10 who wants to query “How to renew ID card,” access the text search service. 1.2.3 User Entry Service Considering that the retrieval algorithm service is in the form of the API interface, it is difficult to locate and trace the problem, especially for the text search image scene can intuitively display the retrieval results to facilitate the iterative optimization of the retrieval algorithm. This paper provides a lightweight Web UI interface for text search and image search, a search input box, and results in a display page for users. Developed by Flask, the service can be easily integrated with other retrieval applications. The service calls the retrieval algorithm service and displays the returned results on the page. It’s also easy to install and start the service. Once you’re done, go to http://127.0.0.1:8090 to see if the search UI service is working correctly. See the ReadME at the end of this article for details. Multimodal system demonstration The multimodal retrieval service can be started when offline processing and online service environment configuration have been completed following the above instructions. Examples of textual searches are shown below. Enter the entry of the text search map application, enter “cat” first, and you can see that the first three digits of the returned result are cats: https://preview.redd.it/0n5nuyvhez291.png?width=1280&format=png&auto=webp&s=1e9c054f541d53381674b8d6001b4bf524506bd2 If you add a color constraint to “cat” to retrieve “black cat,” you can see that it does return a black cat: https://preview.redd.it/rzc0qjyjez291.png?width=1280&format=png&auto=webp&s=d5bcc503ef0fb3360c7740e60e295cf372dcad47 Further, strengthen the constraint on the search term, change it to “black cat on the bed,” and return results containing pictures of a black cat climbing on the bed: ​ https://preview.redd.it/c4b2q8olez291.png?width=1280&format=png&auto=webp&s=4f3817b0b9f07e1e68d1d4a8281702ba3834a00a The cat can still be found through the text search system after the color and scene modification in the above example. 
Conclusion The cutting-edge pre-training technology can bridge the semantic gap between different modes, and the HuggingFace community can greatly reduce the cost for developers to use the pre-training model. Combined with the technological ecology of MetaSpore online reasoning and online microservices provided by DMetaSpore, the pre-training model is no longer mere offline dabbling. Instead, it can truly achieve end-to-end implementation from cutting-edge technology to industrial scenarios, fully releasing the dividends of the pre-training large model. In the future, DMetaSoul will continue to improve and optimize the MetaSpore technology ecosystem: More automated and wider access to HuggingFace community ecology. MetaSpore will soon release a common model rollout mechanism to make HuggingFace ecologically accessible and will later integrate preprocessing services into online services. Multi-mode retrieval offline algorithm optimization. For multimodal retrieval scenarios, MetaSpore will continuously iteratively optimize offline algorithm components, including text recall/sort model, graphic recall/sort model, etc., to improve the accuracy and efficiency of the retrieval algorithm. For related code and reference documentation in this article, please visit: https://github.com/meta-soul/MetaSpore/tree/main/demo/multimodal/online Some images source: https://github.com/openai/CLIP/raw/main/CLIP.png https://www.sbert.net/examples/training/sts/README.html

[R] Forecasting and Mitigating Security Threats from Malicious AI Applications
reddit
LLM Vibe Score0
Human Vibe Score1
Successful-Western27This week

[R] Forecasting and Mitigating Security Threats from Malicious AI Applications

This paper provides a systematic analysis of potential malicious applications of AI systems across digital, physical, and political security domains. The methodology involves:

- Surveying dual-use AI capabilities that could enable attacks
- Mapping specific attack vectors and required technical capabilities
- Analyzing the evolution of attacker/defender dynamics
- Developing a framework for threat assessment and mitigation

Key technical findings:

- ML advances in areas like NLP and computer vision lower the barriers to sophisticated attacks
- Automated systems can significantly scale up traditional attack vectors
- Transfer learning and GANs enable rapid adaptation of attack techniques
- Technical countermeasures alone are insufficient; policy and governance frameworks are needed

The researchers provide a detailed assessment framework examining:

- Technical requirements for different attack types
- Estimated timelines for capability development
- Difficulty of execution and potential impact
- Proposed defensive measures and their limitations

I think this work is important for helping the ML community get ahead of security risks before they materialize. The framework provides a structured way to evaluate emerging threats, though I expect the specific attack vectors will evolve significantly as capabilities advance. I think we need much more research on measuring the effectiveness of proposed countermeasures and understanding the co-evolution of offensive/defensive capabilities. The policy recommendations are a good start but will require ongoing refinement.

TLDR: Systematic analysis of how ML advances could enable new attack vectors across security domains. Provides a framework for assessing and mitigating threats through both technical and policy measures. Full summary is here. Paper here.

[D] Is the Covid-19 crisis the rock on which the ML hype wave finally crashes?
reddit
LLM Vibe Score0
Human Vibe Score1
AlexSnakeKingThis week

[D] Is the Covid-19 crisis the rock on which the ML hype wave finally crashes?

People have been predicting the end of the ML hype for a while, but it didn't seem to go away. Andrew Ng's "A.I. is the new electricity" statement looked like it was true, and the amount of ML-related stuff on resumes, job descriptions, and software requirements, not to mention startups, seemed to keep increasing and increasing, and increasing.... Then came a virus, with a billion years of optimization and search efficiency baked into its RNA. Some considerations: Despite all the hype, production-grade ML was still a challenge for most companies outside of the big tech shops and some talented startups. With the Covid-19-induced economic meltdown, most companies don't have the money or the resources to fund the projects required to take ML from PoC/Jupyter-notebook status to value-generating production applications. Most of the startups that are building ML productionizing tools and platforms will run out of funds, clients, or both. Moreover, the current economic meltdown renders most historical data on business KPIs, customer behavior, time-series forecasting, etc. useless as training data. The only data sets that are still useful are those for "hard-core" ML problems like computer vision and NLP, for which completely automated APIs have already been developed and AutoML works pretty well, so no real ML talent is needed to deploy them. All of this tells me that Q2 2020 will mark the end of the ML and Deep Learning hype, and besides a likely multi-year economic depression in the U.S., we are also headed into another AI winter. I'm not happy about the ML hype dying; it has helped me a lot in my career, and I really, really love Deep Learning from a purely conceptual point of view. But one needs to be realistic in such a job market: should we all start reframing our skill sets and our resumes? I'm kind of hoping somebody will prove my above reasoning wrong.

[N] New Trends to Power Faster Artificial Intelligence and Machine Learning Adoption?
reddit
LLM Vibe Score0
Human Vibe Score1
EsotericaCapitalThis week

[N] New Trends to Power Faster Artificial Intelligence and Machine Learning Adoption?

In 2012, Google X lab created a neural network that can identify cats. Since then, technology companies have been increasingly adopting AI/ML at large scale to build better applications and services for consumers (ToC). On the other hand, AI/ML adoption on the enterprise side (ToB) has yet to see the same growth trajectory, due to the costs and complexities of both hardware and software. However, since 2020, we have noticed three emerging tech trends that can help accelerate enterprises' adoption of AI/ML.

Breakthrough in semiconductors: In 2020, Nvidia debuted the concept of the "Data Processing Unit," a new class of programmable processors that combine a high-performance CPU with a SmartNIC (network interface controller). Data centers can deploy DPUs to optimize computing offload and free up CPUs to focus on intended tasks, such as machine learning. DPUs help resolve a significant bottleneck for ML training, where models, sometimes with billions of parameters, are far too big for traditional CPUs and GPUs to handle. Other leading semiconductor players, such as Marvell and Xilinx, followed suit with in-house or partner-designed DPUs. Industry analysts have forecast that the market for DPUs in data centers alone could reach $50 billion by 2025.

https://preview.redd.it/l436muluhnn61.png?width=1430&format=png&auto=webp&s=ba8d1298056ea31bddd25f1596ff64b7e107580a

Breakthrough in software: 2020 also saw significant progress in "conversational AI," a new form of AI that can understand and speak languages with human-like accuracy. Conversational AI allows two-way interactions and provides a much better user experience than a traditional AI-powered chatbot, which is mostly a one-way response system. The secret of conversational AI is its ability to handle the large variance of human conversation. Developers have designed innovative algorithms such as switch transformers and sparse training to enable models to handle vast amounts of data. Conversational AI training models are enormous and keep expanding: in February 2021, Google Brain announced a model with 1.6 trillion parameters, nine times the size of the famous OpenAI GPT-3 (175 billion parameters) unveiled in July 2020, which was itself 100+ times bigger than the GPT-2 introduced in 2019.

https://preview.redd.it/oajpi2yvhnn61.png?width=1430&format=png&auto=webp&s=1482913a98e17ddc1d62cc79864598d4012ad6f7

Cloud giants are expanding machine-learning platforms for developers. Andy Jassy famously said at AWS re:Invent 2020 that "AI is shifting from a niche experiment inside technical departments to becoming more mainstream in business processes." During the conference, AWS rolled out many AI products across the technology stack, including AI chips (AWS Trainium), databases (Aurora Machine Learning), and vertical solutions (Amazon HealthLake). The most significant development, however, is the drastic expansion of Amazon SageMaker, one of the largest cloud machine-learning platforms. SageMaker has been adding features that make it easier for developers to automate machine-learning workflows. Microsoft Azure and Google Cloud are also growing their ML developer platforms.

https://preview.redd.it/z9wf2o8xhnn61.png?width=1430&format=png&auto=webp&s=9f607acfe8f0dbf36fb9b472f3cb40b80f13879e

Witnessing these breakthroughs in semiconductors and software, coupled with the cloud giants' efforts to democratize AI, we see a coming inflection point of accelerated AI adoption in both ToC and ToB markets.
So how do we benefit from this megatrend? In semiconductors, we like companies with DPU exposure. In AI development and processing, we favor multi-cloud AI platforms such as Databricks. In enterprise software, we believe there will be a strong wave of new AI-based enterprise applications that can be creative and efficient in solving real-world problems.

[P]MMML | Deploy HuggingFace training model rapidly based on MetaSpore
reddit
LLM Vibe Score0
Human Vibe Score1
qazmkoppThis week

[P]MMML | Deploy HuggingFace training model rapidly based on MetaSpore

A few days ago, HuggingFace announced a $100 million Series C funding round, which was big news in open-source machine learning and could be a sign of where the industry is headed. Two days before the HuggingFace funding announcement, the open-source machine learning platform MetaSpore released a demo of rapidly deploying HuggingFace pre-trained models. As deep learning makes innovative breakthroughs in computer vision, natural language processing, speech understanding, and other fields, more and more unstructured data can be perceived, understood, and processed by machines. These advances are mainly due to the powerful learning ability of deep models: pre-training on massive data lets the models capture internal data patterns that help many downstream tasks. With industry and academia investing ever more energy in pre-training research, model hubs such as HuggingFace and Timm have emerged one after another, and the open-source community is releasing the dividends of large pre-trained models at unprecedented speed.

In recent years, the data that machines model and understand has gradually evolved from single-modal to multimodal, and the semantic gap between modalities is being closed, making cross-modal retrieval possible. Take CLIP, OpenAI's open-source work, as an example: it pre-trains twin image and text towers on a dataset of 400 million image-text pairs, connecting the semantics of pictures and text, and many academic researchers have built multimodal image generation and retrieval on top of it. Yet even though frontier techniques can bridge the semantic gap between modalities, shipping them still involves heavy, complicated work: model tuning, offline data processing, high-performance online inference architecture design, heterogeneous computing, and online algorithm rollout. These hurdles keep frontier multimodal retrieval technologies from landing in production and reaching a broad audience.

DMetaSoul targets these pain points by abstracting and unifying stages such as model training and optimization, online inference, and algorithm experimentation into a set of solutions that quickly bring offline pre-trained models online. This post introduces how to use HuggingFace community pre-trained models for online inference and algorithm experiments on the MetaSpore technology ecosystem, so that the benefits of pre-trained models can be fully released to specific businesses, industries, and small and medium-sized enterprises. We provide two multimodal retrieval demos, text-to-text search and text-to-image search, for reference.

Multimodal semantic retrieval

The sample architecture of multimodal retrieval is as follows. The system supports both text-to-text and text-to-image search scenarios and comprises offline processing, model inference, online services, and other core modules:

https://preview.redd.it/w4v4c7vcez291.png?width=1834&format=png&auto=webp&s=0687efb1fddb26e8e30cb844d398ec712b947f31

- Offline processing: the offline data pipelines for the text-to-text and text-to-image scenarios, including model fine-tuning, model export, index construction, and data push.
- Model inference: after offline training, the NLP and CV models are deployed on the MetaSpore Serving framework, which conveniently provides online inference, elastic scheduling, load balancing, and resource scheduling in heterogeneous environments.
- Online services: built on MetaSpore's online algorithm application framework, a complete set of reusable online search services, including a front-end retrieval UI, multimodal data preprocessing, vector recall and ranking algorithms, an AB experiment framework, etc. MetaSpore supports both text-to-text and text-to-image search and can be migrated to other application scenarios at low cost.

The HuggingFace open-source community provides several excellent baseline models for this kind of multimodal retrieval problem, and they are often the starting point for real-world optimization. MetaSpore likewise uses HuggingFace community pre-trained models in its online text-to-text and text-to-image services: text-to-text search is based on a semantic-similarity model fine-tuned by MetaSpore for the question-answering domain, while text-to-image search is based on a community pre-trained model. These open-source models are exported to the general ONNX format and loaded into MetaSpore Serving for online inference. The following sections describe the model export and the online retrieval algorithm services in detail. Model inference is a standardized SaaS service, loosely coupled to the business logic; interested readers can refer to my previous post, "The design concept of MetaSpore, a new generation of the one-stop machine learning platform."

1.1 Offline Processing

Offline processing mainly covers exporting and loading the online models and building and pushing the document index. Follow the step-by-step instructions below to complete the offline processing for text-to-text and text-to-image search and see how the offline pre-trained models are served at MetaSpore.

1.1.1 Search text by text

Traditional text retrieval systems are based on literal-matching algorithms such as BM25. Because users' query words are diverse, a semantic gap between query words and documents is common: users misspell "iPhone" as "Phone," or search terms run extremely long, such as "1~3 months old baby autumn small-size bag pants." Traditional text retrieval systems mitigate the semantic gap with spelling correction, synonym expansion, query rewriting, and other means, but they fundamentally fail to solve the problem. Only when a retrieval system fully understands users' queries and documents can it meet users' retrieval demands at the semantic level. With the continued progress of pre-training and representation learning, some commercial search engines are integrating semantic vector retrieval based on representation learning into their retrieval stacks.

Semantic retrieval model

This post introduces a semantic vector retrieval application: MetaSpore built a semantic retrieval system on encyclopedia question-answer data, adopting Sentence-BERT as the semantic vector representation model. Sentence-BERT fine-tunes a twin-tower BERT in supervised or unsupervised ways to make the model better suited to retrieval tasks.

The model structure is as follows: a symmetric Query-Doc two-tower model is used for text-to-text question-answer retrieval. The online query and the offline documents share the same vector representation model, so the model used to build the offline document index must be kept consistent with the online query inference model. The demo uses MetaSpore's text representation model Sbert-Chinese-QMC-domain-V1, fine-tuned on open-source semantic-similarity datasets. Offline, this model encodes the question-answer data into vectors during index construction; online, it encodes the user query into a vector. Because queries and documents live in the same semantic space, a vector-similarity metric can serve users' semantic retrieval demands.

Since the text representation model encodes queries online, it must be exported for use by the online service. Go to the Q&A data library code directory and export the model following the documentation. The export script uses PyTorch tracing; the models are written to the ./export directory, mainly the ONNX model used for online inference, the Tokenizer, and related configuration files. The online serving system described below loads these artifacts into MetaSpore Serving for inference. Because the exported model is copied to cloud storage, you also need to configure the related variables in env.sh. A minimal export sketch follows.
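Below is a minimal sketch of such an export, assuming the checkpoint is available under the DMetaSoul organization on HuggingFace; the demo's actual export script lives in the MetaSpore repo and may differ in details:

```python
# Minimal export sketch (assumptions: the HF model id below, and that the
# plain AutoModel text tower is what gets traced; the demo's real script
# is in the MetaSpore repo).
import os
import torch
from transformers import AutoModel, AutoTokenizer

name = "DMetaSoul/sbert-chinese-qmc-domain-v1"  # assumed HF model id
tokenizer = AutoTokenizer.from_pretrained(name)
# torchscript=True makes the model return plain tuples, which traces cleanly.
model = AutoModel.from_pretrained(name, torchscript=True).eval()

os.makedirs("./export", exist_ok=True)
sample = tokenizer("如何续办身份证", return_tensors="pt")

# torch.onnx.export traces the model with the sample inputs and writes ONNX.
torch.onnx.export(
    model,
    (sample["input_ids"], sample["attention_mask"]),
    "./export/sbert_text_tower.onnx",
    input_names=["input_ids", "attention_mask"],
    output_names=["token_embeddings", "pooled_output"],
    dynamic_axes={"input_ids": {0: "batch", 1: "seq"},
                  "attention_mask": {0: "batch", 1: "seq"}},
    opset_version=14,
)
tokenizer.save_pretrained("./export")  # Tokenizer files ship next to the ONNX model
```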
Build the library for text-to-text search

The retrieval database is built on a million-scale encyclopedia question-answer dataset. Following the description document, download the data and complete the database construction. The offline model encodes the question-answer data into vectors, and the resulting index data is then pushed to the service components. Database construction proceeds in three steps:

- Preprocess: convert the raw data into a more general JSONLine format for index building;
- Build the index: use the same sbert-chinese-qmc-domain-v1 model as online to index the documents, one document object per line;
- Push the inverted (vector) data and forward (document field) data to the corresponding component servers.

The following is an example of the database data format.
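(The sample record did not survive in the original post; the sketch below shows a plausible one, with illustrative field names rather than the demo's actual schema.)

```python
# Hypothetical preprocessing step: flatten raw Q&A rows into a JSONLine file,
# one document object per line. Field names are assumptions for illustration;
# the demo's real schema is documented in its ReadME.
import json

rows = [
    {"id": "qa_000001",
     "question": "如何续办身份证?",
     "answer": "携带本人有效证件到户籍所在地派出所办理。"},
]
with open("qa_docs.jsonl", "w", encoding="utf-8") as f:
    for row in rows:
        f.write(json.dumps(row, ensure_ascii=False) + "\n")
# The index builder then attaches the sbert vector for each question at build time.
```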
After offline database construction completes, the data are pushed to the corresponding service components: Milvus stores the documents' vector representations, and MongoDB stores their summary information. The online retrieval algorithm service reads from these components to obtain relevant data at query time.

1.1.2 Search images by text

Text and images are easy for humans to relate semantically but difficult for machines. First, in terms of data form, text is discrete, one-dimensional, ID-type data built from words, while images are continuous two- or three-dimensional data. Second, text is a subjective human creation with vibrant expressiveness, full of twists, metaphors, and other figures of speech, whereas images are machine representations of the objective world. In short, bridging the semantic gap between text and image data is much harder than text-to-text search.

Traditional text-to-image retrieval generally relies on text associated with the image (external descriptions, or text found via nearest-neighbor techniques) and retrieves through that associated text, which essentially reduces the problem to text-to-text search. It therefore faces many issues, such as how to obtain associated text for every picture and whether text-to-text accuracy is high enough. In recent years, deep models have gradually evolved from single-modal to multimodal. Taking OpenAI's open-source project CLIP as an example, the model is trained on massive image-text data from the Internet and maps text and images into the same semantic space, making text-to-image search based on semantic vectors practical.

CLIP graphic model

The text-to-image search introduced in this post is implemented with semantic vector retrieval, using the CLIP pre-trained model as the two-tower retrieval architecture. Because CLIP aligns the semantics of its text-side and image-side towers during pre-training on massive image-text data, it is particularly suitable for the text-to-image scenario. Since image and text data take different forms, an asymmetric Query-Doc two-tower model is used for text-to-image retrieval: the image-side tower builds the offline index, and the text-side tower encodes the online query. At retrieval time, the query vector produced by the text-side model is matched against the index built with the image-side model; CLIP's pre-training on large amounts of image-text pairs draws matching pairs closer in vector space and guarantees the semantic correlation between images and texts. Here we export the text-side model for online MetaSpore Serving inference; because the retrieval scenario is Chinese, a CLIP model that understands Chinese is selected. As with text-to-text search, the exported content includes the ONNX model used for online inference plus the Tokenizer, and MetaSpore Serving loads these artifacts for inference. The two-tower flow is sketched below.
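Here is a rough sketch of the asymmetric flow using the English openai/clip-vit-base-patch32 checkpoint that the image index uses (the demo swaps in a Chinese-capable text tower for queries, and the image file name below is hypothetical):

```python
# Sketch of the asymmetric CLIP two-tower flow -- an illustration, not
# MetaSpore's actual code. The image file name is hypothetical.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

ckpt = "openai/clip-vit-base-patch32"
model = CLIPModel.from_pretrained(ckpt).eval()
processor = CLIPProcessor.from_pretrained(ckpt)

# Offline: the image-side tower encodes gallery pictures into index vectors.
image = Image.open("unsplash_lite_00001.jpg")
with torch.no_grad():
    doc_vec = model.get_image_features(**processor(images=image, return_tensors="pt"))

# Online: the text-side tower encodes the user query into the same space.
with torch.no_grad():
    query_vec = model.get_text_features(
        **processor(text=["black cat on the bed"], return_tensors="pt", padding=True))

# In production, this similarity is computed by Milvus over the whole index.
score = torch.cosine_similarity(query_vec, doc_vec)
print(float(score))
```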
Build the library for image search

Download the Unsplash Lite library data and complete the construction according to the instructions:

- Preprocess: specify the image directory and generate a more general JSONLine file for index building;
- Build the index: use the openai/clip-vit-base-patch32 pre-trained model to index the gallery, outputting one document object per line of index data;
- Push the inverted (vector) data and forward (document field) data to the component servers.

As with text search, after offline construction the relevant data is pushed to the service components and consumed by the online retrieval algorithm services.

1.2 Online Services

The overall online service architecture is as follows:

https://preview.redd.it/jfsl8hdfez291.png?width=1280&format=png&auto=webp&s=a858e2304a0c93e78ba5429612ca08cbee69b35a

The multimodal online service supports both text-to-text and text-to-image search and consists of the following parts:

- Query preprocessing service: encapsulates the preprocessing logic (text, image, etc.) of the pre-trained models and provides it through a gRPC interface;
- Retrieval algorithm service: the whole algorithm pipeline, including AB-experiment traffic splitting, MetaSpore Serving calls, vector recall, ranking, document summaries, etc.;
- User entry service: a Web UI for users to debug and trace problems in the retrieval service.

From a user-request perspective, these services form invocation dependencies from back to front, so to stand up the multimodal demo you need to start each service from the back forward. Before doing so, remember to export the offline models, upload them, and build the index. The sections below introduce each part of the online service system; see the ReadME at the end of this post for more details.

1.2.1 Query preprocessing service

Deep learning models operate on tensors, but NLP/CV models usually include a preprocessing step that turns raw text and images into tensors the model can accept. NLP models typically run a tokenizer that converts string data into discrete tensors, and CV models have analogous logic for cropping, scaling, and otherwise transforming input images. Because this preprocessing logic is decoupled from the tensor inference of the deep model, and because deep-model inference has an independent, ONNX-based technical stack, MetaSpore splits the preprocessing logic out. The NLP Tokenizer has been integrated into the Query preprocessing service, and the split follows a fairly general convention: users only need to provide a preprocessing logic file that implements the load and predict interface, and export the data and configuration files the preprocessing service loads. CV preprocessing logic will be integrated in the same manner later.

The preprocessing service currently exposes a gRPC interface and is depended on by the Query Preprocessing (QP) module of the retrieval algorithm service: after a user request reaches the retrieval algorithm service, it is forwarded here to complete data preprocessing before the rest of the pipeline continues. The ReadMe provides details on how the preprocessing service is started, how a preprocessing model exported offline to cloud storage enters the service, and how to debug the service. To further improve the efficiency and stability of model inference, MetaSpore Serving implements a Python preprocessing submodule: through a user-specified preprocessor.py, MetaSpore provides gRPC services, completes the Tokenizer or CV-related preprocessing, and translates requests into tensors the deep model can handle, after which inference is carried out by MetaSpore Serving's subsequent submodules. The relevant code is here: https://github.com/meta-soul/MetaSpore/compare/add_python_preprocessor
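As a hedged illustration of the load/predict convention, a user-supplied preprocessor.py might look roughly like this; the class and method names are assumptions, not MetaSpore's actual interface (see the branch linked above for the real contract):

```python
# Hypothetical preprocessor.py sketch. The load/predict convention is
# described above; the exact names and signatures here are assumptions,
# not MetaSpore's real API.
from transformers import AutoTokenizer

class QueryPreprocessor:
    def load(self, export_dir: str) -> None:
        # Load the Tokenizer files exported next to the ONNX model.
        self.tokenizer = AutoTokenizer.from_pretrained(export_dir)

    def predict(self, texts):
        # Turn raw query strings into the tensors the ONNX model expects.
        batch = self.tokenizer(list(texts), padding=True, truncation=True,
                               return_tensors="np")
        return {"input_ids": batch["input_ids"],
                "attention_mask": batch["attention_mask"]}

# Usage idea: the serving process calls load() once at startup and
# predict() per gRPC request before forwarding tensors to the model.
```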
1.2.2 Retrieval algorithm service

The retrieval algorithm service is the core of the whole online system: it is responsible for experiment triage, for assembling the algorithm chain (preprocessing, recall, ranking), and for invoking the dependent component services. The whole service is developed on the Java Spring framework and supports the multimodal retrieval scenarios of text-to-text and text-to-image search; thanks to good internal abstraction and modular design, it is highly flexible and can be migrated to similar application scenarios at low cost. Here is a quick guide to configuring the environment and setting up the retrieval algorithm service (see the ReadME for more details):

- Install the dependent components. Use Maven to install the online-serving component.
- Set up the service configuration. Copy the template configuration file and replace the MongoDB, Milvus, and other settings based on your development/production environment.
- Install and configure Consul. Consul lets you synchronize the search service configuration in real time, including experiment traffic splitting, recall parameters, and ranking parameters. The project's configuration file shows the current parameters for text-to-text and text-to-image search; the modelName parameter in the preprocessing and recall stages must correspond to the model exported in offline processing.
- Start the service. Once the above configuration is complete, the retrieval service can be started from the entry script.

Once the service is started, you can test it. For example, a user with userId=10 who wants to query "How to renew ID card" would access the text search service; a hypothetical request is sketched below.
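A quick smoke test might look like this; the endpoint path, port, and parameter names are assumptions, and the real ones are documented in the demo's ReadME:

```python
# Hypothetical smoke test of the text-search service. Endpoint, port, and
# parameter names are assumptions, not the service's confirmed API.
import requests

resp = requests.get(
    "http://127.0.0.1:8080/search/text",   # assumed retrieval-service endpoint
    params={"userId": 10, "query": "How to renew ID card"},
    timeout=10,
)
resp.raise_for_status()
print(resp.json())  # expect a ranked list of question-answer documents
```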
1.2.3 User entry service

Because the retrieval algorithm service is exposed only as an API, it is difficult to locate and trace problems through it alone, and the text-to-image scenario in particular benefits from displaying retrieval results visually to support iterative optimization of the retrieval algorithm. This post therefore provides a lightweight Web UI for text search and image search: a search input box and a results page. Developed with Flask, the service is easy to integrate with other retrieval applications; it calls the retrieval algorithm service and displays the returned results on the page. It is also easy to install and start; once it is running, go to http://127.0.0.1:8090 to see whether the search UI service works correctly. See the ReadME at the end of this post for details.

Multimodal system demonstration

Once offline processing and the online service environment have been configured following the instructions above, the multimodal retrieval service can be started. Examples of text-to-image searches are shown below. Entering "cat" in the text-to-image application, the top three results are indeed cats:

https://preview.redd.it/0n5nuyvhez291.png?width=1280&format=png&auto=webp&s=1e9c054f541d53381674b8d6001b4bf524506bd2

Adding a color constraint and retrieving "black cat," the system does return a black cat:

https://preview.redd.it/rzc0qjyjez291.png?width=1280&format=png&auto=webp&s=d5bcc503ef0fb3360c7740e60e295cf372dcad47

Strengthening the constraint further to "black cat on the bed" returns pictures of a black cat climbing on a bed:

https://preview.redd.it/c4b2q8olez291.png?width=1280&format=png&auto=webp&s=4f3817b0b9f07e1e68d1d4a8281702ba3834a00a

The cat can still be found after the color and scene modifications in the examples above.

Conclusion

Cutting-edge pre-training technology can bridge the semantic gap between modalities, and the HuggingFace community greatly reduces the cost for developers of using pre-trained models. Combined with MetaSpore's online inference and online microservice ecosystem from DMetaSoul, pre-trained models are no longer confined to offline experiments: they can truly be carried end to end from frontier research into industrial scenarios, fully releasing the dividends of large pre-trained models. Going forward, DMetaSoul will continue to improve and optimize the MetaSpore technology ecosystem:

- More automated and broader access to the HuggingFace ecosystem. MetaSpore will soon release a general model rollout mechanism to make the HuggingFace ecosystem easier to plug in, and will later integrate the preprocessing service into the online service.
- Offline algorithm optimization for multimodal retrieval. MetaSpore will continuously iterate on the offline algorithm components, including the text recall/ranking models and the image-text recall/ranking models, to improve the accuracy and efficiency of the retrieval algorithms.

For the related code and reference documentation in this post, please visit: https://github.com/meta-soul/MetaSpore/tree/main/demo/multimodal/online

Image sources: https://github.com/openai/CLIP/raw/main/CLIP.png and https://www.sbert.net/examples/training/sts/README.html

Interview with Juergen Schmidhuber, renowned ‘Father Of Modern AI’, says his life’s work won't lead to dystopia.
reddit
LLM Vibe Score0
Human Vibe Score0.765
hardmaruThis week

Interview with Juergen Schmidhuber, renowned ‘Father Of Modern AI’, says his life’s work won't lead to dystopia.

Schmidhuber interview expressing his views on the future of AI and AGI. Original source. I think the interview is of interest to r/MachineLearning, and presents an alternate view, compared to other influential leaders in AI. Juergen Schmidhuber, Renowned 'Father Of Modern AI,' Says His Life’s Work Won't Lead To Dystopia May 23, 2023. Contributed by Hessie Jones. Amid the growing concern about the impact of more advanced artificial intelligence (AI) technologies on society, there are many in the technology community who fear the implications of the advancements in Generative AI if they go unchecked. Dr. Juergen Schmidhuber, a renowned scientist, artificial intelligence researcher and widely regarded as one of the pioneers in the field, is more optimistic. He declares that many of those who suddenly warn against the dangers of AI are just seeking publicity, exploiting the media’s obsession with killer robots which has attracted more attention than “good AI” for healthcare etc. The potential to revolutionize various industries and improve our lives is clear, as are the equal dangers if bad actors leverage the technology for personal gain. Are we headed towards a dystopian future, or is there reason to be optimistic? I had a chance to sit down with Dr. Juergen Schmidhuber to understand his perspective on this seemingly fast-moving AI-train that will leap us into the future. As a teenager in the 1970s, Juergen Schmidhuber became fascinated with the idea of creating intelligent machines that could learn and improve on their own, becoming smarter than himself within his lifetime. This would ultimately lead to his groundbreaking work in the field of deep learning. In the 1980s, he studied computer science at the Technical University of Munich (TUM), where he earned his diploma in 1987. His thesis was on the ultimate self-improving machines that, not only, learn through some pre-wired human-designed learning algorithm, but also learn and improve the learning algorithm itself. Decades later, this became a hot topic. He also received his Ph.D. at TUM in 1991 for work that laid some of the foundations of modern AI. Schmidhuber is best known for his contributions to the development of recurrent neural networks (RNNs), the most powerful type of artificial neural network that can process sequential data such as speech and natural language. With his students Sepp Hochreiter, Felix Gers, Alex Graves, Daan Wierstra, and others, he published architectures and training algorithms for the long short-term memory (LSTM), a type of RNN that is widely used in natural language processing, speech recognition, video games, robotics, and other applications. LSTM has become the most cited neural network of the 20th century, and Business Week called it "arguably the most commercial AI achievement." Throughout his career, Schmidhuber has received various awards and accolades for his groundbreaking work. In 2013, he was awarded the Helmholtz Prize, which recognizes significant contributions to the field of machine learning. In 2016, he was awarded the IEEE Neural Network Pioneer Award for "pioneering contributions to deep learning and neural networks." The media have often called him the “father of modern AI,” because the most cited neural networks all build on his lab’s work. He is quick to point out, however, that AI history goes back centuries. 
Despite his many accomplishments, at the age of 60, he feels mounting time pressure towards building an Artificial General Intelligence within his lifetime and remains committed to pushing the boundaries of AI research and development. He is currently director of the KAUST AI Initiative, scientific director of the Swiss AI Lab IDSIA, and co-founder and chief scientist of AI company NNAISENSE, whose motto is "AI∀" which is a math-inspired way of saying "AI For All." He continues to work on cutting-edge AI technologies and applications to improve human health and extend human lives and make lives easier for everyone. The following interview has been edited for clarity. Jones: Thank you Juergen for joining me. You have signed letters warning about AI weapons. But you didn't sign the recent publication, "Pause Gigantic AI Experiments: An Open Letter"? Is there a reason? Schmidhuber: Thank you Hessie. Glad to speak with you. I have realized that many of those who warn in public against the dangers of AI are just seeking publicity. I don't think the latest letter will have any significant impact because many AI researchers, companies, and governments will ignore it completely. The proposal frequently uses the word "we" and refers to "us," the humans. But as I have pointed out many times in the past, there is no "we" that everyone can identify with. Ask 10 different people, and you will hear 10 different opinions about what is "good." Some of those opinions will be completely incompatible with each other. Don't forget the enormous amount of conflict between the many people. The letter also says, "If such a pause cannot be quickly put in place, governments should intervene and impose a moratorium." The problem is that different governments have ALSO different opinions about what is good for them and for others. Great Power A will say, if we don't do it, Great Power B will, perhaps secretly, and gain an advantage over us. The same is true for Great Powers C and D. Jones: Everyone acknowledges this fear surrounding current generative AI technology. Moreover, the existential threat of this technology has been publicly acknowledged by Sam Altman, CEO of OpenAI himself, calling for AI regulation. From your perspective, is there an existential threat? Schmidhuber: It is true that AI can be weaponized, and I have no doubt that there will be all kinds of AI arms races, but AI does not introduce a new quality of existential threat. The threat coming from AI weapons seems to pale in comparison to the much older threat from nuclear hydrogen bombs that don’t need AI at all. We should be much more afraid of half-century-old tech in the form of H-bomb rockets. The Tsar Bomba of 1961 had almost 15 times more destructive power than all weapons of WW-II combined. Despite the dramatic nuclear disarmament since the 1980s, there are still more than enough nuclear warheads to wipe out human civilization within two hours, without any AI I’m much more worried about that old existential threat than the rather harmless AI weapons. Jones: I realize that while you compare AI to the threat of nuclear bombs, there is a current danger that a current technology can be put in the hands of humans and enable them to “eventually” exact further harms to individuals of group in a very precise way, like targeted drone attacks. You are giving people a toolset that they've never had before, enabling bad actors, as some have pointed out, to be able to do a lot more than previously because they didn't have this technology. 
Schmidhuber: Now, all that sounds horrible in principle, but our existing laws are sufficient to deal with these new types of weapons enabled by AI. If you kill someone with a gun, you will go to jail. Same if you kill someone with one of these drones. Law enforcement will get better at understanding new threats and new weapons and will respond with better technology to combat these threats. Enabling drones to target persons from a distance in a way that requires some tracking and some intelligence to perform, which has traditionally been performed by skilled humans, to me, it seems is just an improved version of a traditional weapon, like a gun, which is, you know, a little bit smarter than the old guns. But, in principle, all of that is not a new development. For many centuries, we have had the evolution of better weaponry and deadlier poisons and so on, and law enforcement has evolved their policies to react to these threats over time. So, it's not that we suddenly have a new quality of existential threat and it's much more worrisome than what we have had for about six decades. A large nuclear warhead doesn’t need fancy face recognition to kill an individual. No, it simply wipes out an entire city with ten million inhabitants. Jones: The existential threat that’s implied is the extent to which humans have control over this technology. We see some early cases of opportunism which, as you say, tends to get more media attention than positive breakthroughs. But you’re implying that this will all balance out? Schmidhuber: Historically, we have a long tradition of technological breakthroughs that led to advancements in weapons for the purpose of defense but also for protection. From sticks, to rocks, to axes to gunpowder to cannons to rockets… and now to drones… this has had a drastic influence on human history but what has been consistent throughout history is that those who are using technology to achieve their own ends are themselves, facing the same technology because the opposing side is learning to use it against them. And that's what has been repeated in thousands of years of human history and it will continue. I don't see the new AI arms race as something that is remotely as existential a threat as the good old nuclear warheads. You said something important, in that some people prefer to talk about the downsides rather than the benefits of this technology, but that's misleading, because 95% of all AI research and AI development is about making people happier and advancing human life and health. Jones: Let’s touch on some of those beneficial advances in AI research that have been able to radically change present day methods and achieve breakthroughs. Schmidhuber: All right! For example, eleven years ago, our team with my postdoc Dan Ciresan was the first to win a medical imaging competition through deep learning. We analyzed female breast cells with the objective to determine harmless cells vs. those in the pre-cancer stage. Typically, a trained oncologist needs a long time to make these determinations. Our team, who knew nothing about cancer, were able to train an artificial neural network, which was totally dumb in the beginning, on lots of this kind of data. It was able to outperform all the other methods. Today, this is being used not only for breast cancer, but also for radiology and detecting plaque in arteries, and many other things. 
Some of the neural networks that we have developed in the last three decades are now prevalent across thousands of healthcare applications, detecting diabetes, Covid-19, and what not. This will eventually permeate all of healthcare. The good consequences of this type of AI are much more important than the click-bait new ways of conducting crimes with AI.

Jones: Adoption is a product of reinforced outcomes. The massive scale of adoption either leads us to believe that people have been led astray, or conversely, that technology is having a positive effect on people's lives.

Schmidhuber: The latter is the likely case. There's intense commercial pressure towards good AI rather than bad AI because companies want to sell you something, and you are going to buy only stuff you think is going to be good for you. So already through this simple commercial pressure, you have a tremendous bias towards good AI rather than bad AI. However, doomsday scenarios like in Schwarzenegger movies grab more attention than documentaries on AI that improve people's lives.

Jones: I would argue that people are drawn to good stories – narratives that contain an adversary and struggle, but in the end, have happy endings. And this is consistent with your comment on human nature and how history, despite its tendency for violence and destruction of humanity, somehow tends to correct itself. Let's take the example of a technology you are aware of – GANs, Generative Adversarial Networks – which today are used in applications for fake news and disinformation. In actuality, the purpose behind the invention of GANs was far from what they are used for today.

Schmidhuber: Yes, the name GANs was created in 2014, but we had the basic principle already in the early 1990s. More than 30 years ago, I called it artificial curiosity. It's a very simple way of injecting creativity into a little two-network system. This creative AI is not just trying to slavishly imitate humans. Rather, it's inventing its own goals. Let me explain: You have two networks. One network is producing outputs that could be anything, any action. Then the second network is looking at these actions and trying to predict their consequences. An action could move a robot, then something happens, and the other network is just trying to predict what will happen. Now we can implement artificial curiosity: the second network keeps reducing its prediction error, and that same prediction error is, at the same time, the reward of the first network. The first network wants to maximize its reward, so it will invent actions that lead to situations that surprise the second network, which has not yet learned to predict them well. In the case where the outputs are fake images, the first network will try to generate images that are good enough to fool the second network, which attempts to predict the reaction of the environment (fake or real image?) and tries to become better at it. The first network will continue to improve at generating images whose type the second network cannot yet predict. So, they fight each other. The second network will continue to reduce its prediction error, while the first network will attempt to maximize it. Through this zero-sum game, the first network gets better and better at producing convincing fake outputs that look almost realistic.
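To make the dynamic concrete, here is a toy sketch of the two-network curiosity principle; it is an illustration under simplifying assumptions, not Schmidhuber's original formulation or code:

```python
# Toy sketch of the two-network curiosity dynamic described above --
# illustrative only, not the original 1990 system.
import torch
import torch.nn as nn

actor = nn.Sequential(nn.Linear(8, 32), nn.Tanh(), nn.Linear(32, 4))         # proposes actions
world_model = nn.Sequential(nn.Linear(12, 32), nn.Tanh(), nn.Linear(32, 8))  # predicts outcomes
opt_actor = torch.optim.Adam(actor.parameters(), lr=1e-3)
opt_world = torch.optim.Adam(world_model.parameters(), lr=1e-3)

def environment(state, action):  # stand-in for the real world
    return torch.tanh(state + 0.1 * action.sum(dim=-1, keepdim=True))

state = torch.randn(1, 8)
for step in range(1000):
    action = actor(state)
    next_state = environment(state, action).detach()

    # The world model minimizes its prediction error...
    pred = world_model(torch.cat([state, action.detach()], dim=-1))
    model_loss = (pred - next_state).pow(2).mean()
    opt_world.zero_grad(); model_loss.backward(); opt_world.step()

    # ...while the actor is rewarded for actions that surprise it
    # (gradient ascent on the prediction error, hence the minus sign).
    surprise = (world_model(torch.cat([state, action], dim=-1)) - next_state).pow(2).mean()
    opt_actor.zero_grad(); (-surprise).backward(); opt_actor.step()

    state = next_state
```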
So, once you have an interesting set of images by Vincent Van Gogh, you can generate new images that leverage his style, without the original artist having ever produced the artwork himself.

Jones: I see how the Van Gogh example can be applied in an education setting, and there are countless examples of artists mimicking styles from famous painters, but image generation of this kind that can happen within seconds is quite another feat. And you know this is how GANs have been used. What's more prevalent today is a socialized enablement of generating images or information to intentionally fool people. It also surfaces new harms involving threats to intellectual property and copyright, which laws have yet to account for. And from your perspective, this was not the intention when the model was conceived. What was your motivation in your early conception of what is now GANs?

Schmidhuber: My old motivation for GANs was actually very important, and it was not to create deepfakes or fake news but to enable AIs to be curious and invent their own goals, to make them explore their environment and make them creative. Suppose you have a robot that executes one action, then something happens, then it executes another action, and so on, because it wants to achieve certain goals in the environment. For example, when the battery is low, this will trigger "pain" through hunger sensors, so it wants to go to the charging station, without running into obstacles, which would trigger other pain sensors. It will seek to minimize pain (encoded through numbers). Now the robot has a friend, the second network, which is a world model: a prediction machine that learns to predict the consequences of the robot's actions. Once the robot has a good model of the world, it can use it for planning. It can be used as a simulation of the real world. And then it can determine what is a good action sequence. If the robot imagines one sequence of actions, the model will predict a lot of pain, which it wants to avoid. If it plays an alternative action sequence in its mental model of the world, it will predict a rewarding situation where it's going to sit on the charging station and its battery is going to charge again. So, it'll prefer to execute the latter action sequence.

In the beginning, however, the model of the world knows nothing, so how can we motivate the first network to generate experiments that lead to data that helps the world model learn something it didn't already know? That's what artificial curiosity is about. The dueling two-network system effectively explores uncharted environments by creating experiments so that, over time, the curious AI gets a better sense of how the environment works. This can be applied to all kinds of environments and has medical applications.

Jones: Let's talk about the future. You have said, "Traditional humans won't play a significant role in spreading intelligence across the universe."

Schmidhuber: Let's first conceptually separate two types of AIs. The first type of AI are tools directed by humans. They are trained to do specific things like accurately detect diabetes or heart disease and prevent attacks before they happen. In these cases, the goal comes from the human. More interesting AIs are setting their own goals. They are inventing their own experiments and learning from them. Their horizons expand, and eventually they become more and more general problem solvers in the real world.
They are not controlled by their parents, but much of what they learn is through self-invented experiments. A robot, for example, is rotating a toy, and as it is doing this, the video coming in through the camera eyes, changes over time and it begins to learn how this video changes and learns how the 3D nature of the toy generates certain videos if you rotate it a certain way, and eventually, how gravity works, and how the physics of the world works. Like a little scientist! And I have predicted for decades that future scaled-up versions of such AI scientists will want to further expand their horizons, and eventually go where most of the physical resources are, to build more and bigger AIs. And of course, almost all of these resources are far away from earth out there in space, which is hostile to humans but friendly to appropriately designed AI-controlled robots and self-replicating robot factories. So here we are not talking any longer about our tiny biosphere; no, we are talking about the much bigger rest of the universe. Within a few tens of billions of years, curious self-improving AIs will colonize the visible cosmos in a way that’s infeasible for humans. Those who don’t won’t have an impact. Sounds like science fiction, but since the 1970s I have been unable to see a plausible alternative to this scenario, except for a global catastrophe such as an all-out nuclear war that stops this development before it takes off. Jones: How long have these AIs, which can set their own goals — how long have they existed? To what extent can they be independent of human interaction? Schmidhuber: Neural networks like that have existed for over 30 years. My first simple adversarial neural network system of this kind is the one from 1990 described above. You don’t need a teacher there; it's just a little agent running around in the world and trying to invent new experiments that surprise its own prediction machine. Once it has figured out certain parts of the world, the agent will become bored and will move on to more exciting experiments. The simple 1990 systems I mentioned have certain limitations, but in the past three decades, we have also built more sophisticated systems that are setting their own goals and such systems I think will be essential for achieving true intelligence. If you are only imitating humans, you will never go beyond them. So, you really must give AIs the freedom to explore previously unexplored regions of the world in a way that no human is really predefining. Jones: Where is this being done today? Schmidhuber: Variants of neural network-based artificial curiosity are used today for agents that learn to play video games in a human-competitive way. We have also started to use them for automatic design of experiments in fields such as materials science. I bet many other fields will be affected by it: chemistry, biology, drug design, you name it. However, at least for now, these artificial scientists, as I like to call them, cannot yet compete with human scientists. I don’t think it’s going to stay this way but, at the moment, it’s still the case. Sure, AI has made a lot of progress. Since 1997, there have been superhuman chess players, and since 2011, through the DanNet of my team, there have been superhuman visual pattern recognizers. But there are other things where humans, at the moment at least, are much better, in particular, science itself. 
In the lab we have many first examples of self-directed artificial scientists, but they are not yet convincing enough to appear on the radar screen of the public, which is currently much more fascinated with simpler systems that just imitate humans and write texts based on previously seen human-written documents.

Jones: You speak of these numerous instances, dating back 30 years, of lab experiments where these self-driven agents are deciding and learning and moving on once they've learned. And I assume that rate of learning becomes even faster over time. What kind of timeframe are we talking about when this is eventually taken outside of the lab and embedded into society?

Schmidhuber: This could still take months or even years :-) Anyway, in the not-too-distant future, we will probably see artificial scientists who are good at devising experiments that allow them to discover new, previously unknown physical laws. As always, we are going to profit from the old trend that has held at least since 1941: every decade, compute gets 100 times cheaper.

Jones: How does this trend affect modern AI such as ChatGPT?

Schmidhuber: Perhaps you know that all the recent famous AI applications such as ChatGPT and similar models are largely based on principles of artificial neural networks invented in the previous millennium. The main reason why they work so well now is the incredible acceleration of compute per dollar. ChatGPT is driven by a neural network called the "Transformer," described in 2017 by Google. I am happy about that because a quarter century earlier, in 1991, I had a particular Transformer variant which is now called the "Transformer with linearized self-attention." Back then, not much could be done with it, because the compute cost was a million times higher than today. But today, one can train such models on half the internet and achieve much more interesting results. (A minimal sketch of linearized attention appears just below.)

Jones: And for how long will this acceleration continue?

Schmidhuber: There's no reason to believe that in the next 30 years we won't have another factor of 1 million (100 per decade, compounded over three decades: 100^3 = 10^6), and that's going to be really significant. In the near future, for the first time, we will have many not-so-expensive devices that can compute as much as a human brain. The physical limits of computation, however, are much further out, so even if the trend of a factor of 100 every decade continues, the physical limits (of roughly 10^51 elementary instructions per second per kilogram of matter) won't be hit until, say, the middle of the next century. Even in our current century, however, we'll probably have many machines that compute more than all 10 billion human brains collectively, and you can imagine, everything will change then!

Jones: That is the big question. Is everything going to change? If so, what do you say to the next generation of leaders currently coming out of college and university? So much of this change is already impacting how they study, how they will work, and how the future of work and livelihood is defined. What is their purpose, and how do we change our systems so they will adapt to this new version of intelligence?

Schmidhuber: For decades, people have asked me questions like that, because what I'm saying now I have basically been saying since the 1970s; it's just that today people are paying more attention, because back then they thought this was science fiction. They didn't think that I would ever come close to achieving my crazy life goal of building a machine that learns to become smarter than myself such that I can retire.
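As flagged above, here is a rough, illustrative sketch of the idea behind linearized self-attention (the dimensions and the ELU+1 feature map are assumptions made for this sketch, and this is not Schmidhuber's 1991 formulation verbatim): replacing the softmax in attention with a kernel feature map lets the keys and values be collapsed into a fixed-size matrix of "fast weights," so the cost grows linearly with sequence length instead of quadratically.

```python
# Illustrative linear attention: softmax(Q K^T) V is replaced by
# phi(Q) @ (phi(K)^T V). The key/value summary phi(K)^T V is a fixed-size
# matrix, so cost is O(n) in sequence length n instead of O(n^2).
import torch
import torch.nn.functional as F

def linear_attention(q, k, v, eps=1e-6):
    # q, k: (n, d); v: (n, d_v). Feature map: a simple positive map (an assumption).
    phi = lambda x: F.elu(x) + 1
    q, k = phi(q), phi(k)
    kv = k.transpose(0, 1) @ v          # (d, d_v): the "fast weight" summary
    z = k.sum(dim=0)                    # (d,): normalizer playing softmax's role
    return (q @ kv) / (q @ z).clamp_min(eps).unsqueeze(-1)

n, d = 16, 8
out = linear_attention(torch.randn(n, d), torch.randn(n, d), torch.randn(n, d))
print(out.shape)  # torch.Size([16, 8])
```

Because the summary matrix kv can be updated one token at a time, such models can also run recurrently, which is part of the connection drawn between modern linear attention and the 1991 fast-weight systems.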
But now many have changed their minds and think it's conceivable. And now I have two daughters, 23 and 25. People ask me: what do I tell them? They know that Daddy always said, "It seems likely that within your lifetimes, you will have new types of intelligence that are probably going to be superior in many ways, and probably in all kinds of interesting ways." How should they prepare for that? And I kept telling them the obvious: learn how to learn new things! It's not like in the previous millennium, where within 20 years someone learned to be a useful member of society and then took a job for 40 years and performed in this job until she received her pension. Now things are changing much faster, and we must learn continuously just to keep up. I also told my girls that no matter how smart AIs are going to get, learn at least the basics of math and physics, because that's the essence of our universe, and anybody who understands this will have an advantage and will learn all kinds of new things more easily. I also told them that social skills will remain important, because most future jobs for humans will continue to involve interactions with other humans, but I couldn't teach them anything about that; they know much more about social skills than I do.

You touched on the big philosophical question about people's purpose. Can this be answered without answering the even grander question: what's the purpose of the entire universe? We don't know. But what's happening right now might be connected to the unknown answer. Don't think of humans as the crown of creation. Instead, view human civilization as part of a much grander scheme, an important step (but not the last one) on the path of the universe from very simple initial conditions towards more and more unfathomable complexity. Now it seems ready to take its next step, a step comparable to the invention of life itself over 3.5 billion years ago. But don't worry; in the end, all will be good!

Jones: Let's get back to this transformation happening right now with OpenAI. Many are questioning the efficacy and accuracy of ChatGPT and are concerned that its release was premature. In light of the rampant adoption, educators have banned its use over concerns about plagiarism and how it stifles individual development. Should large language models like ChatGPT be used in school?

Schmidhuber: When the calculator was first introduced, instructors forbade students from using it in school. Today, the consensus is that kids should learn the basic methods of arithmetic, but they should also learn to use the "artificial multipliers," a.k.a. calculators, even in exams, because laziness and efficiency are hallmarks of intelligence. Any intelligent being wants to minimize its efforts to achieve things. And that's the reason why we have tools, and why our kids are learning to use these tools. The first stone tools were invented maybe 3.5 million years ago; tools have just become more sophisticated over time. In fact, humans have changed in response to the properties of their tools; our anatomical evolution was shaped by tools such as spears and fire. So, it's going to continue this way. And there is no permanent way of preventing large language models from being used in school.

Jones: And when our children, your children, graduate, what does their future work look like?
Schmidhuber: A single human trying to predict details of how 10 billion people and their machines will evolve in the future is like a single neuron in my brain trying to predict what the entire brain and its tens of billions of neurons will do next year. 40 years ago, before the WWW was created at CERN in Switzerland, who would have predicted all those young people making money as YouTube video bloggers?

Nevertheless, let's make a few limited job-related observations. For a long time, people have thought that desktop jobs may require more intelligence than skilled trades or handicraft professions. But now it turns out that it's much easier to replace certain aspects of desktop jobs than to replace a carpenter, for example, because everything that works well in AI currently happens behind the screen, not so much in the physical world. There are now artificial systems that can read lots of documents and then make really nice summaries of these documents. That is a desktop job. Or you give them a description of an illustration that you want for your article, and pretty good illustrations are generated that may need only minimal fine-tuning. But all these desktop jobs are much easier to automate than the really tough jobs in the physical world. And it's interesting that the things people thought required intelligence, like playing chess or writing or summarizing documents, are much easier for machines than they thought. But for things like playing football or soccer, there is no physical robot that can remotely compete with the abilities of a little boy with these skills. So, AI in the physical world, interestingly, is much harder than AI behind the screen in virtual worlds. And it's really exciting, in my opinion, to see that jobs such as plumbing are much more challenging than playing chess or writing another tabloid story.

Jones: The way data has been collected for these large language models does not guarantee that personal information has been excluded. Current consent laws are already outdated when it comes to these large language models (LLMs). The concern, rightly so, is increasing surveillance and loss of privacy. What is your view on this?

Schmidhuber: As I have indicated earlier: are surveillance and loss of privacy inevitable consequences of increasingly complex societies? Super-organisms such as cities and states and companies consist of numerous people, just like people consist of numerous cells. These cells enjoy little privacy. They are constantly monitored by specialized "police cells" and "border guard cells": Are you a cancer cell? Are you an external intruder, a pathogen? Individual cells sacrifice their freedom for the benefits of being part of a multicellular organism. The same holds for super-organisms such as nations. Over 5,000 years ago, writing enabled recorded history and thus became its inaugural and most important invention. Its initial purpose, however, was to facilitate surveillance, to track citizens and their tax payments. The more complex a super-organism, the more comprehensive its collection of information about its constituents. 200 years ago, at least, the parish priest in each village knew everything about all the village people, even about those who did not confess, because they appeared in the confessions of others. Also, everyone soon knew about the stranger who had entered the village, because some occasionally peered out of the window, and what they saw got around.
Such control mechanisms were temporarily lost through anonymization in rapidly growing cities, but are now returning with the help of new surveillance devices such as smartphones, parts of digital nervous systems that tell companies and governments a lot about billions of users. Cameras, drones and the like are becoming ever tinier and more ubiquitous. Face recognition and other detection technologies are becoming more effective and ever cheaper, and many will use them to identify others anywhere on earth; the big wide world will not offer any more privacy than the local village. Is this good or bad? Some nations may find it easier than others to justify more complex kinds of super-organisms at the expense of the privacy rights of their constituents.

Jones: So, there is no way to stop or change this process of collection, or how it continuously informs decisions over time? How do you see governance and rules responding to this, especially amid Italy's ban on ChatGPT following a suspected user data breach and the more recent news of Meta's record $1.3 billion fine over the company's handling of user information?

Schmidhuber: Data collection has benefits and drawbacks, such as the loss of privacy. How to balance those? I have argued for addressing this through data ownership in data markets. If it is true that data is the new oil, then it should have a price, just like oil. At the moment, the major surveillance platforms such as Meta do not offer users any money for their data and the attendant loss of privacy. In the future, however, we will likely see attempts at creating efficient data markets to figure out the data's true financial value through the interplay between supply and demand. Even some of the sensitive medical data should not be priced by governmental regulators but by patients (and healthy persons) who own it and who may sell or license parts thereof as micro-entrepreneurs in a healthcare data market.

Following a previous interview I gave for one of the largest re-insurance companies, let's look at the different participants in such a data market: patients, hospitals, data companies. (1) Patients with a rare form of cancer can offer more valuable data than patients with a very common form of cancer. (2) Hospitals and their machines are needed to extract the data, e.g., through magnetic resonance imaging, radiology, evaluations by human doctors, and so on. (3) Companies such as Siemens, Google or IBM would like to buy annotated data to make better artificial neural networks that learn to predict pathologies and diseases and the consequences of therapies. Now the market's invisible hand will determine the data's price through the interplay between demand and supply. On the demand side, you will have several companies offering something for the data, maybe through an app on the smartphone (a bit like a stock market app). On the supply side, each patient in this market should be able to profit from high prices for rare, valuable types of data. Likewise, competing data extractors such as hospitals will profit from gaining recognition and trust for extracting data well at a reasonable price. The market will make the whole system efficient through incentives for all who are doing a good job. Soon there will be a flourishing ecosystem of commercial data market advisors and whatnot, just like the ecosystem surrounding the traditional stock market.
The value of the data won't be determined by governments or ethics committees, but by those who own the data and decide for themselves which parts thereof they want to license to others under certain conditions. At first glance, a market-based system seems detrimental to the interests of certain monopolistic companies, as they would have to pay for the data; some would prefer free data and keeping their monopoly. However, since every healthy and sick person in the market would suddenly have an incentive to collect and share their data under self-chosen anonymity conditions, there would soon be much more useful data for evaluating all kinds of treatments. On average, people will live longer and healthier lives, and many companies and the entire healthcare system will benefit.

Jones: Finally, what is your view on open source versus private companies like Google and OpenAI? Is there a danger in supporting these private companies' large language models versus trying to keep these models open source and transparent, very much like what LAION is doing?

Schmidhuber: I signed this open letter by LAION because I strongly favor the open-source movement. And I think it's also something that is going to challenge whatever big-tech dominance there might be at the moment. Sure, the best models today are run by big companies with huge budgets for computers, but the exciting fact is that open-source models are not so far behind; some people say maybe six to eight months only. Of course, the private-company models are all based on stuff that was created in academia, often in little labs without much funding, which publish without patenting their results and open-source their code, and others take it and improve it. Big tech has profited tremendously from academia; its main achievement has been to scale everything up greatly, sometimes even failing to credit the original inventors. So, it's very interesting to see that as soon as some big company comes up with a new scaled-up model, lots of students out there are competing, or collaborating, with each other, trying to achieve equal or better performance on smaller networks and smaller machines. And since they are open-sourcing, the next person can have another great idea to improve it, so now there's tremendous competition also for the big companies. Because of that, and since AI is still getting exponentially cheaper all the time, I don't believe that big tech companies will dominate in the long run. They find it very hard to compete with the enormous open-source movement. As long as you can encourage the open-source community, I think you shouldn't worry too much. Now, of course, you might say that if everything is open source, then bad actors will also have easier access to these AI tools. And there's truth to that. But ever since the invention of controlled fire, it has been good that knowledge about how technology works quickly became public, such that everybody could use it. And then, against any bad actor, there's almost immediately a counter-actor trying to nullify his efforts. You see, I still believe in our old motto "AI∀" or "AI For All."

Jones: Thank you, Juergen, for sharing your perspective on this amazing time in history. It's clear that with new technology, the enormous potential can be matched by disparate and troubling risks, some of which we've yet to solve and some of which we have yet to identify.
If we are to dispel the fear of a sentient system over which we have no control, humans alone need to take steps toward more responsible development and collaboration to ensure AI technology is used to ultimately benefit society. Humanity will be judged by what we do next.

[D] Elon Musk has a complex relationship with the A.I. community
reddit
LLM Vibe Score0
Human Vibe Score0
milaworldThis week

[D] Elon Musk has a complex relationship with the A.I. community

Update: Yann LeCun stepped in, and I think they made peace, after agreeing on the awesomeness of PyTorch 😂

An article about Elon Musk and the machine learning research community led to some interesting discussions between the head of Facebook AI research (apparently it is not Yann LeCun anymore, but some other dude) and Elon himself. Quotes from the article:

Multiple AI researchers from different companies told CNBC that they see Musk's AI comments as inappropriate and urged the public not to take his views on AI too seriously. The smartest computers can still only excel at a "narrow" selection of tasks, and there's a long way to go before human-level AI is achieved.

"A large proportion of the community think he's a negative distraction," said an AI executive with close ties to the community who wished to remain anonymous because their company may work for one of Musk's businesses. "He is sensationalist, he veers wildly between openly worrying about the downside risk of the technology and then hyping the AGI (artificial general intelligence) agenda. Whilst his very real accomplishments are acknowledged, his loose remarks lead to the general public having an unrealistic understanding of the state of AI maturity."

An AI scientist who specializes in speech recognition and wished to remain anonymous to avoid public backlash said Musk is "not always looked upon favorably" by the AI research community.

"I instinctively fall on dislike, because he makes up such nonsense," said another AI researcher at a U.K. university who asked to be kept anonymous. "But then he delivers such extraordinary things. It always leaves me wondering, does he know what he's doing? Is all the visionary stuff just a trick to get an innovative thing to market?"

CNBC reached out to Musk and his representatives for this article but has yet to receive a response. (Well, they got one now! 👇)

"I believe a lot of people in the AI community would be ok saying it publicly. Elon Musk has no idea what he is talking about when he talks about AI. There is no such thing as AGI and we are nowhere near matching human intelligence. #noAGI" (Jérôme Pesenti, VP of AI at Facebook)

"Facebook sucks" (Elon Musk)

Article: https://www.cnbc.com/2020/05/13/elon-musk-has-a-complex-relationship-with-the-ai-community.html

[D] A Jobless Rant - ML is a Fool's Gold
reddit
LLM Vibe Score0
Human Vibe Score1
good_riceThis week

[D] A Jobless Rant - ML is a Fool's Gold

Aside from the clickbait title, I am earnestly looking for some advice and discussion from people who are actually employed. That being said, here's my gripe:

I have been relentlessly inundated by the words "AI, ML, Big Data" throughout my undergrad from other CS majors, business- and sales-oriented people, media, and .ai-type startups. It seems like everyone was peddling ML as the go-to solution, the big money earner, and the future of the field. I've heard college freshmen ask stuff like, "if I want to do CS, am I going to need to learn ML to be relevant" - if you're on this sub, I probably do not need to continue to elaborate on just how ridiculous the ML craze is. Every single university has opened up ML departments or programs and is pumping out ML graduates at an unprecedented rate. Surely, there'd be a job market to meet the incredible supply of graduates and cultural interest?

Swept up in a mixture of genuine interest and hype, I decided to pursue computer vision. I majored in Math-CS at a top-10 CS university (based on at least one arbitrary ranking). I had three computer vision internships, two at startups, one at NASA JPL, doing non-trivial CV work in each; I (re)implemented and integrated CV systems from mixtures of recently published papers. I have a bunch of projects showing knowledge of both CV and CS fundamentals (OS, networking, data structures, algorithms, etc). I have taken graduate-level ML coursework. I was accepted to Carnegie Mellon for an MS in Computer Vision, but I deferred to 2021 - all in all, I worked my ass off to try to simultaneously get a solid background in math AND computer science AND computer vision.

That brings me to where I am now, which is unemployed and looking for jobs. Almost every single position I have seen requires a PhD and/or 5+ years of experience, and whatever I have applied for has ghosted me so far. The notion that ML is a high-paying, in-demand field seems to only be true if your name is Andrej Karpathy - and I'm only sort of joking. It seems like unless you have a PhD from one of the big 4 in CS and multiple publications in top-tier journals, you're out of luck, or at least vying for one of the few remaining positions at small companies. This seems normalized in ML, but this is not the case for quite literally every other subfield or even generalized CS positions. Getting a high-paying job at a Big N company is possible as a new grad with just a bachelor's and general SWE knowledge, and there are a plethora of positions elsewhere. Getting the equivalent with basically every other specialization, whether operating systems, distributed systems, security, networking, etc, is also possible, and doesn't require 5 CVPR publications.

TL;DR: From my personal perspective, if you want to do ML because of career prospects, salaries, or job security, pick almost any other CS specialization. In ML, you'll find yourself working 2x as hard through difficult theory and math to find yourself competing with more applicants for fewer positions. I am absolutely complaining and would love to hear a more positive perspective, but in the meanwhile I'll be applying to jobs, working on more post-grad projects, and contemplating switching fields.

[N] How Stability AI’s Founder Tanked His Billion-Dollar Startup
reddit
LLM Vibe Score0
Human Vibe Score0.667
milaworldThis week

[N] How Stability AI’s Founder Tanked His Billion-Dollar Startup

Forbes article: https://www.forbes.com/sites/kenrickcai/2024/03/29/how-stability-ais-founder-tanked-his-billion-dollar-startup/
Archive (no paywall): https://archive.is/snbeV

How Stability AI's Founder Tanked His Billion-Dollar Startup
Mar 29, 2024

Stability AI founder Emad Mostaque took the stage last week at the Terranea Resort in Palos Verdes, California to roaring applause and an introduction from an AI-generated Aristotle who announced him as "a modern Prometheus" with "the astuteness of Athena and the vision of Daedalus."

"Under his stewardship, AI becomes the Herculean force poised to vanquish the twin serpents of illness and ailment and extend the olive branch of longevity," the faux Aristotle proclaimed. "I think that's the best intro I've ever had," Mostaque said.

But behind Mostaque's hagiographic introduction lay a grim and fast-metastasizing truth. Stability, once one of AI's buzziest startups, was floundering. It had been running out of money for months, and Mostaque had been unable to secure enough additional funding. It had defaulted on payments to Amazon, whose cloud service undergirded Stability's core offerings. The star research team behind its flagship text-to-image generator Stable Diffusion had tendered their resignations just three days before — as Forbes would first report — and other senior leaders had issued him an ultimatum: resign, or we walk too.

Still, onstage before a massive audience of peers and acolytes, Mostaque talked a big game. "AI is jet planes for the mind," he opined. "AI is our collective intelligence. It's the human Colossus." He claimed a new, faster version of the Stable Diffusion image generator released earlier this month could generate "200 cats with hats per second." But later, when he was asked about Stability's financial model, Mostaque fumbled. "I can't say that publicly," he replied. "But it's going well. We're ahead of forecast."

Four days later, Mostaque stepped down as CEO of Stability, as Forbes first reported. In a post to X, the service formerly known as Twitter, he claimed he'd voluntarily abdicated his role to decentralize "the concentration of power in AI." But sources told Forbes that was hardly the case. Behind the scenes, Mostaque had fought to maintain his position and control despite mounting pressure externally and internally to step down. Company documents and interviews with 32 current and former employees, investors, collaborators and industry observers suggest his abrupt exit was the result of poor business judgment and wild overspending that undermined confidence in his vision and leadership, and ultimately kneecapped the company.

Mostaque, through his attorneys, declined to comment on the record on a detailed list of questions about the reporting in this story. But in an email to Forbes earlier this week he broadly disputed the allegations. "Nobody tells you how hard it is to be a CEO and there are better CEOs than me to scale a business," he said in a statement. "I am not sure anyone else would have been able to build and grow the research team to build the best and most widely used models out there and I'm very proud of the team there. I look forward to moving onto the next problem to handle and hopefully move the needle."

In an emailed statement, Christian Laforte and Shan Shan Wong, the interim co-CEOs who replaced Mostaque, said, "the company remains focused on commercializing its world leading technology" and providing it "to partners across the creative industries."
After starting Stability in 2019, Mostaque built the company into an early AI juggernaut by seizing upon a promising research project that would become Stable Diffusion and funding it into a business reality. The ease with which the software generated detailed images from the simplest text prompts immediately captivated the public: 10 million people used it on any given day, the company told Forbes in early 2023.

For some true believers, Mostaque was a crucial advocate for open-source AI development in a space dominated by the closed systems of OpenAI, Google and Anthropic. But his startup's rise to one of the buzziest in generative AI was in part built on a series of exaggerations and misleading claims, as Forbes first reported last year (Mostaque disputed some points at the time). And they continued after he raised $100 million at a $1 billion valuation just days after launching Stable Diffusion in 2022. His failure to deliver on an array of grand promises, like building bespoke AI models for nation states, and his decision to pour tens of millions into research without a sustainable business plan, eroded Stability's foundations and jeopardized its future.

"He was just giving shit away," one former employee told Forbes. "That man legitimately wanted to transform the world. He actually wanted to train AI models for kids in Malawi. Was it practical? Absolutely not."

By October 2023, Stability would have less than $4 million left in the bank, according to an internal memo prepared for a board meeting and reviewed by Forbes. And mounting debt, including months of overdue Amazon Web Services payments, had already left it in the red. To avoid legal penalties for skipping American staff's payroll, the document explained, the London-based startup was considering delaying tax payments to the U.K. government.

It was Stability's armada of GPUs, the wildly powerful and equally expensive chips undergirding AI, that was so taxing the company's finances. Hosted by AWS, they had long been one of Mostaque's bragging points; he often touted them as one of the world's 10 largest supercomputers. They were responsible for helping Stability's researchers build and maintain one of the top AI image generators, as well as break important new ground on generative audio, video and 3D models. "Undeniably, Stability has continued to ship a lot of models," said one former employee. "They may not have profited off of it, but the broader ecosystem benefitted in a huge, huge way."

But the costs associated with so much compute were now threatening to sink the company. According to an internal October financial forecast seen by Forbes, Stability was on track to spend $99 million on compute in 2023. It noted as well that Stability was "underpaying AWS bills for July (by $1M)" and "not planning to pay AWS at the end of October for August usage ($7M)." Then there were the September and October bills, plus $1 million owed to Google Cloud and $600,000 to GPU cloud data center CoreWeave. (Amazon, Google and CoreWeave declined to comment.)

With an additional $54 million allocated to wages and operating expenses, Stability's total projected costs for 2023 were $153 million. But according to its October financial report, its projected revenue for the calendar year was just $11 million. Stability was on track to lose more money per month than it made in an entire year.
The company's dire financial position had thoroughly soured Stability's current investors, including Coatue, which had invested tens of millions in the company during its $101 million funding round in 2022. In the middle of 2023, Mostaque agreed to an independent audit after Coatue raised a series of concerns, according to a source with direct knowledge of the matter. The outcome of the investigation is unclear. Coatue declined to comment.

Within a week of an early October board meeting where Mostaque shared that financial forecast, Lightspeed Venture Partners, another major investor, sent a letter to the board urging them to sell the company. The distressing numbers had "severely undermined" the firm's confidence in Mostaque's ability to lead the company. "In particular, we are surprised and deeply concerned by a cash position just now disclosed to us that is inconsistent with prior discussions on this topic," Lightspeed's general counsel Brett Nissenberg wrote in the letter, a copy of which was viewed by Forbes. "Lightspeed believes that the company is not likely financeable on terms that would assure the company's long term sound financial position." (Lightspeed declined a request for comment.)

The calls for a sale led Stability to quietly begin looking for a buyer. Bloomberg reported in November that Stability approached AI startups Cohere and Jasper to gauge their interest. Stability denied this, and Jasper CEO Timothy Young did the same when reached for comment by Forbes. A Cohere representative declined to comment. But one prominent AI company confirmed that Mostaque's representatives had reached out to them to test the waters. Those talks did not advance because "the numbers didn't add up," this person, who declined to be named due to the confidential nature of the talks, told Forbes. Stability also tried to court Samsung as a buyer, going so far as to redecorate its office in advance of a planned meeting with the Korean electronics giant. (Samsung said that it invested in Stability in 2023 and that it does not comment on M&A discussions.)

Coatue had been calling for Mostaque's resignation for months, according to a source with direct knowledge. But it and other investors were unable to oust him because he was the company's majority shareholder. When they tried a different tack by rallying other investors to offer him a juicy equity package to resign, Mostaque refused, said two sources. By October, Coatue and Lightspeed had had enough. Coatue left the board and Lightspeed resigned its observer seat. "Emad infuriated our initial investors so much it's just making it impossible for us to raise more money under acceptable terms," one current Stability executive told Forbes.

The early months of 2024 saw Stability's already precarious position eroding further still. Employees were quietly laid off. Three people in a position to know estimated that at least 10% of staff were cut. And cash reserves continued to dwindle. Mostaque mentioned a lifeline at the October board meeting: $95 million in tentative funding from new investors, pending due diligence. But in the end, only a fraction of it was wired, two sources say, much of it from Intel, which Forbes has learned invested $20 million, a fraction of what was reported. (Intel did not return a request for comment by publication time.)

Two hours after Forbes broke the news of Mostaque's plans to step down as CEO, Stability issued a press release confirming his resignation.
Chief operating officer Wong and chief technology officer Laforte have taken over in the interim. Mostaque, who said on X that he still owns a majority of the company, also stepped down from the board, which has now initiated a search for a permanent CEO. There is a lot of work to be done to turn things around, and very little time in which to do it. Said the current Stability executive, "There's still a possibility of a turnaround story, but the odds drop by the day."

In July of 2023, Mostaque still thought he could pull it off. Halfway through the month, he shared a fundraising plan with his lieutenants. It was wildly optimistic, detailing a plan to raise $500 million in cash and another $750 million in computing facilities from marquee investors like Nvidia, Google, Intel and the World Bank (Nvidia and Google declined comment. Intel did not respond. The World Bank said it did not invest in Stability). In a Slack message reviewed by Forbes, Mostaque said Google was "willing to move fast" and the round was "likely to be oversubscribed." It wasn't.

Three people with direct knowledge of these fundraising efforts told Forbes that while there was some interest in Stability, talks often stalled when it came time to disclose financials. Two of them noted that earlier in the year, Mostaque had simply stopped engaging with VCs who asked for numbers. Only one firm invested around that time: actor Ashton Kutcher's Sound Ventures, which invested $35 million in the form of a convertible SAFE note during the second quarter, according to an internal document. (Sound Ventures did not respond to a request for comment.)

And though he'd managed to score a meeting with Nvidia and its CEO Jensen Huang, it ended in disaster, according to two sources. "Under Jensen's microscopic questions, Emad just fell apart," a source in a position to know told Forbes. Huang quickly concluded Stability wasn't ready for an investment from Nvidia, the sources said. Mostaque told Forbes in an email that he had not met with Huang since 2022, except to say "hello and what's up a few times after." His July 2023 message references a plan to raise $150 million from Nvidia. (Nvidia declined to comment.)

After a June Forbes investigation citing more than 30 sources revealed Mostaque's history of misleading claims, Mostaque struggled to raise funding, a Stability investor told Forbes. (Mostaque disputed the story at the time and called it "coordinated lies" in his email this week to Forbes.) Increasingly, investors scrutinized his assertions and pressed for data. And Young, now the CEO of Jasper, turned down a verbal offer to be Stability's president after reading the article, according to a source with direct knowledge of the matter. The collapse of the talks aggravated the board and other executives, who had hoped Young would compensate for the sales and business management skills that Mostaque lacked, according to four people in a position to know. (Young declined to comment.)

When Stability's senior leadership convened in London for the CogX conference in September, the financing had still not closed. There, a group of executives confronted Mostaque, asking questions about the company's cash position and runway, according to three people with direct knowledge of the incident. They did not get the clarity they'd hoped for. By October, Mostaque had reduced his fundraising target by more than 80%.
The months that followed saw a steady drumbeat of departures — general counsel Adam Avrunin, vice presidents Mike Melnicki, Ed Newton-Rex and Joe Penna, chief people officer Ozden Onder — culminating in the demoralizing March exit of Stable Diffusion's primary developers Robin Rombach, Andreas Blattmann, Patrick Esser and Dominik Lorenz. Rombach, who led the team, had been angling to leave for months, two sources said, first threatening to resign last summer because of the fundraising failures. Others left over concerns about cash flow, as well as liabilities — including what four people described as Mostaque's lax approach to ensuring that Stability products could not be used to produce child sexual abuse imagery. "Stability AI is committed to preventing the misuse of AI and prohibits the use of our image models and services for unlawful activity, including attempts to edit or create CSAM," Ella Irwin, senior vice president of integrity, said in a statement.

Newton-Rex told Forbes he resigned because he disagreed with Stability's position that training AI on copyrighted work without consent is fair use. Melnicki and Penna declined to comment. Avrunin and Onder could not be reached for comment. None of the researchers responded to requests for comment.

The Stable Diffusion researchers' departure as a cohort says a lot about the state of Stability AI. The company's researchers were widely viewed as its crown jewels, their work subsidized with a firehose of pricey compute power that was even extended to people outside the company. Martino Russi, an artificial intelligence researcher, told Forbes that though he was never formally employed by Stability, the company provided him a "staggering" amount of compute between January and April 2023 to play around with developing an AI video generator that Stability might someday use. "It was Candy Land or Coney Island," said Russi, who estimates that his experiment, which was ultimately shelved, cost the company $2.5 million.

Stable Diffusion was simultaneously Stability's marquee product and its existential cash crisis. One current employee described it to Forbes as "a giant vacuum that absorbed everything: money, compute, people." While the software was widely used, with Mostaque claiming downloads reaching into the hundreds of millions, Stability struggled to translate that wild success into revenue. Mostaque knew it could be done — peers at Databricks, Elastic and MongoDB had all turned a free product into a lucrative business — he just couldn't figure out how.

His first attempt was Stability's API, which allowed paying customers to integrate Stable Diffusion into their own products. In early 2023, a handful of small companies, like art generator app NightCafe and presentation software startup Tome, signed on, according to four people with knowledge of the deals. But Stability's poor account management services soured many, and in a matter of months NightCafe and Tome canceled their contracts, three people said. NightCafe founder Angus Russell told Forbes that his company switched to a competitor which "offered much cheaper inference costs and a broader service." Tome did not respond to a request for comment. Meanwhile, Mostaque's efforts to court larger companies like Samsung and Snapchat were failing, according to five people familiar with the effort.
Canva, which was already one of the heaviest users of open-sourced Stable Diffusion, had multiple discussions with Stability, which was angling for a contract it hoped would generate several million dollars in annual revenue. But the deal never materialized, four sources said. "These three companies wanted and needed us," one former employee told Forbes. "They would have been the perfect customers." (Samsung, Snap and Canva declined to comment.) "It's not that there was not an appetite to pay Stability — there were tons of companies that would have wanted to," the former employee said. "There was a huge opportunity and demand, but just a resistance to execution."

Mostaque's other big idea was to provide governments with bespoke national AI models that would invigorate their economies and citizenry. "Emad envisions a world where AI through 100 national models serves not as a tool of the few, but as a benefactor to all promising to confront great adversaries, cancer, autism, and the sands of time itself," the AI avatar of Aristotle said in his intro at the conference. Mostaque told several prospective customers that he could deliver such models within 60 days — an untenable timeline, according to two people in a position to know. Stability attempted to develop a model for the Singaporean government over the protestations of employees who questioned its technical feasibility, three sources familiar with the effort told Forbes. But it couldn't pull it off, and Singapore never became a customer. (The government of Singapore confirmed it did not enter into a deal with Stability, but declined to answer additional questions.)

As Stability careened from one new business idea to another, resources were abruptly reallocated and researchers reassigned. The whiplash shifts in a largely siloed organization demoralized and infuriated employees. "There were 'urgent' things, 'urgent urgent' things and 'most urgent,'" one former employee complained. "None of these things seem important if everything is important." Another former Stability executive was far more pointed in their assessment. "Emad is the most disorganized leader I have ever worked with in my career," this person told Forbes. "He has no vision, and changes directions every week, often based on what he sees on Twitter."

In a video interview posted shortly before this story was published, Mostaque explained his leadership style: "I'm particularly great at taking creatives, developers, researchers, others, and achieving their full potential in designing systems. But I should not be dealing with, you know, HR and operations and business development and other elements. There are far better people than me to do that."

By December 2023, Stability had partially abandoned its open-source roots and announced that any commercial use of Stable Diffusion would cost customers at least $20 per month (non-commercial and research use of Stable Diffusion would remain free). But privately, Stability was considering a potentially more lucrative source of revenue: reselling the compute it was leasing from providers like AWS, according to six people familiar with the effort. Though it was essentially GPU arbitrage, Stability framed the strategy to investors as a "managed services" offering. Its damning October financial report projected optimistically that such an offering would bring in $139 million in 2024 — 98% of its revenue.
Multiple employees at the time told Forbes they feared reselling compute, even if the company called it "managed services," would violate the terms of Stability's contract with AWS. Amazon declined to comment. "The line internally was that we are not reselling compute," one former employee said. "This was some of the dirtiest feeling stuff." Stability also discussed reselling a cluster of Nvidia A100 chips, leased via CoreWeave, to the venture capital firm Andreessen Horowitz, three sources said. "It was under the guise of managed services, but there wasn't any management happening," one of these people told Forbes. Andreessen Horowitz and CoreWeave declined to comment. Stability did not respond to questions about whether it plans to continue this strategy now that Mostaque is out of the picture.

Regardless, interim co-CEOs Wong and Laforte are on a tight timeline to clean up his mess. Board chairman Jim O'Shaughnessy said in a statement that he was confident the pair "will adeptly steer the company forward in developing and commercializing industry-leading generative AI products." But burn continues to far outpace revenue. The Financial Times reported Friday that the company made $5.4 million of revenue in February, against $8 million in costs. Several sources said there are ongoing concerns about making payroll for the roughly 150 remaining employees. Leadership roles have gone vacant for months amid the disarray, leaving the company increasingly directionless.

Meanwhile, a potentially catastrophic legal threat looms over the company: a trio of copyright infringement lawsuits brought by Getty Images and a group of artists in the U.S. and U.K., who claim Stability illegally used their art and photography to train the AI models powering Stable Diffusion. A London-based court has already rejected the company's bid to throw out one of the lawsuits on the basis that none of its researchers were based in the U.K. And Stability's claim that Getty's Delaware lawsuit should be blocked because Stability is a U.K.-based company was rejected. (Stability did not respond to questions about the litigation.)

AI-related copyright litigation "could go on for years," according to Eric Goldman, a law professor at Santa Clara University. He told Forbes that though plaintiffs suing AI firms face an uphill battle overcoming the existing legal precedent on copyright infringement, the quantity of arguments available to make is virtually inexhaustible. "Like in military theory, if there's a gap in your lines, that's where the enemy pours through — if any one of those arguments succeeds, it could completely change the generative AI environment," he said. "In some sense, generative AI as an industry has to win everything."

Stability, which had more than $100 million in the bank just a year and a half ago, is in a deep hole. Not only does it need more funding, it needs a viable business model — or a buyer with the vision and chops to make it successful in a fast-moving and highly competitive sector. At an all-hands meeting this past Monday, Stability's new leaders detailed a path forward. One point of emphasis: a plan to better manage resources and expenses, according to one person in attendance. It's a start, but Mostaque's meddling has left them with little runway to execute. His resignation, though, has given some employees hope. "A few people are 100% going to reconsider leaving after today," said one current employee.
"And the weird gloomy aura of hearing Emad talking nonsense for an hour is gone."

Shortly before Mostaque resigned, one current Stability executive told Forbes that they were optimistic his departure could make Stability appealing enough to receive a small investment or sale to a friendly party. "There are companies that have raised hundreds of millions of dollars that have much less intrinsic value than Stability," the person said. "A white knight may still appear."

Have You Used AI Tools for Your Research? Which Ones Are Your Favorite and Why?
reddit
LLM Vibe Score0
Human Vibe Score1
somdipdeyThis week

Have You Used AI Tools for Your Research? Which Ones Are Your Favorite and Why?

Over a decade ago, I wrote two articles: "A Beginner's Guide to Computer Science Research" and "How to Start a Research Work in Computer Science". These articles were widely used in universities worldwide to help students and early-career researchers navigate academic research in Computer Science (CS). Fast forward to 2025: the research landscape has evolved significantly, especially in AI and CS, with the advent of AI-powered research tools, open-access repositories, and real-time collaboration platforms. These tools have made research more accessible, enabling students and professionals to work more efficiently while focusing on real innovation.

I recently published an updated article in The Times of India, presenting an Eight-Step Approach to Research framework designed for modern AI and CS research. This framework integrates AI-powered literature review tools, reference management systems, open science platforms, and collaborative research methods to enhance the research workflow.

🚀 Would love to hear from the ML research community:
1️⃣ Have you used any AI-powered tools or automation techniques in your research? Which ones do you find most useful?
2️⃣ Do you have recommendations for other AI tools that weren't covered in the article but could benefit researchers?
3️⃣ How do you think AI will shape the future of academic research and discovery?

📖 Read the article here: How to Start Research in Computer Science & AI in 2025 – An Updated Framework

Block Diagram of "Eight-Step Approach to Research" in 2025

Let's discuss! What are your go-to tools for making research more efficient in 2025?

How To Learn About AI Agents (A Road Map From Someone Who's Done It)
reddit
LLM Vibe Score0
Human Vibe Score0.882
laddermanUSThis week

How To Learn About AI Agents (A Road Map From Someone Who's Done It)

If you are a newb to AI Agents, welcome, I love newbies and this fledgling industry needs you! You've heard all about AI Agents and you want some of that action, right? You might even feel like this is a watershed moment in tech. Remember how it felt when the internet became 'a thing'? When apps were all the rage? You missed that boat, right? Well you may have missed that boat, but I can promise you one thing..... THIS BOAT IS BIGGER! So if you are reading this you are getting in just at the right time.

Let me answer some quick questions before we go much further:

Q: Am I too late already to learn about AI agents?
A: Heck no, you are literally getting in at the beginning. Call yourself an 'early adopter' and pin a badge on your chest!

Q: Don't I need a degree or a college education to learn this stuff? I can only just about work out how my smart TV works!
A: NO you do not. Of course, if you have a degree in a computer science area then it does help, because you have covered all of the fundamentals in depth... However, 100000% you do not need a degree or college education to learn AI Agents.

Q: Where the heck do I even start though? It's like sooooooo confusing
A: You start right here my friend, and yeh I know it's confusing, but chill, I'm going to try and guide you as best I can.

Q: Wait, I can't code, I can barely write my name, can I still do this?
A: The simple answer is YES you can. However, it is great to learn some basics of Python. I say this because there are some fabulous no-code tools like n8n that allow you to build agents without having to learn how to code...... Having said that, at the very least understanding the basics is highly preferable. That being said, if you can't be bothered or are totally freaked out by looking at some code, the simple answer is YES YOU CAN DO THIS.

Q: I got like no money, can I still learn?
A: YES 100% absolutely. There are free options to learn about AI agents and there are paid options to fast-track you. But definitely you do not need to spend crap loads of cash on learning this.

So who am I anyway? (lets get some context) I am an AI Engineer and I own and run my own AI Consultancy business where I design, build and deploy AI agents and AI automations. I do also run a small academy where I teach this stuff, but I am not self-promoting or posting links in this post because I'm not spamming this group. If you want links send me a DM or something and I can forward them to you.

Alright so on to the good stuff. You're a newb, you've already read 100 posts and are now totally confused, and every day you consume about 26 hours of youtube videos on AI agents..... I get you, we've all been there. So here is my 'Worth Its Weight In Gold' road map on what to do:

[1] First of all you need to learn some fundamental concepts. Whilst you can definitely jump right in and start building, I strongly recommend you learn some of the basics. Like HOW do LLMs work, what is a system prompt, what is long-term memory, what is Python, who the heck is this guy named Json that everyone goes on about? (There's a tiny Json example at the end of this post.) Google is your old friend who used to know everything, but you've also got your new buddy who can help you if you want to learn for FREE. ChatGPT is an awesome resource to create your own mini learning courses to understand the basics. Start with a prompt such as: "I want to learn about AI agents but this dude on reddit said I need to know the fundamentals to this ai tech, write for me a short course on Json so I can learn all about it.
Im a beginner so keep the content easy for me to understand. I want to also learn some code so give me code samples and explain it like a 10 year old."

If you want some actual structured course material on the fundamentals, like what the Terminal is and how to use it, and how LLMs work, just hit me up. I'm not going to spam this post with a hundred links.

[2] Alright, so let's assume you got some of the fundamentals down. Now what? Well now you really have 2 options. You either start to pick up some proper learning content (short courses) to deep dive further and really learn about agents, or you can skip that sh*t and start building! Honestly my advice is to seek out some short courses on agents. Hugging Face have an awesome free course on agents, and DeepLearning.AI also have numerous free courses. Both are really excellent places to start. If you want a proper list of these with links, let me know.

If you want to jump in because you already know it all, then learn the n8n platform! And no, I'm not a shareholder and n8n are not paying me to say this. I can code, I'm an AI Engineer and I use n8n sometimes. n8n is a no-code platform that gives you a drag-and-drop interface to build automations and agents. It's very versatile and you can self-host it. It's also reasonably easy to actually deploy a workflow in the cloud so it can be used by an actual paying customer. Please understand that I literally get hate mail from devs and experienced AI enthusiasts for recommending no-code platforms like n8n. So I'm risking my mental wellbeing for you!!!

[3] Keep building! ((WTF THAT'S IT?????)) Yep. The more you build the more you will learn. Learn by doing, my young Jedi learner. I would call myself pretty experienced in building AI Agents, and I only know a tiny proportion of this tech. But I learn by building projects and writing about AI Agents. There are more intermediate courses you can take at this point as well if you really want to deep dive (I was forced to - send help) and I would recommend you do if you like short courses, because if you want to do well then you do need to understand not just the underlying tech but also more advanced concepts like Vector Databases and how to implement long-term memory.

Where to next? Well if you want to get some recommended links just DM me or leave a comment and I will DM you. As I said, I'm not writing this with the intention of spamming the crap out of the group. So it's up to you. I'm also happy to chew the fat if you wanna chat, so hit me up. I can't always reply immediately because I'm in a weird time zone, but I promise I will reply if you have any questions.

THE LAST WORD (Warning - I'm going to motivate the crap out of you now) Please listen to me: YOU CAN DO THIS. I don't care what background you have, what education you have, what language you speak or what country you are from..... I believe in you and anyone can do this. All you need is determination, some motivation to want to learn and a computer (last one is essential really, the other 2 are optional!). But seriously, you can do it and it's totally worth it. You are getting in right at the beginning of the gold rush, and yeh, I believe that. AI Agents are going to be HUGE. I believe this will be the new internet gold rush.
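As promised in step [1], here is a tiny, beginner-level illustration of JSON, the data format AI agents pass around, next to the idea of a system prompt. It uses only Python's standard library, and the agent name, tools, and fields are all made up for illustration:

```python
# A beginner-friendly look at JSON. Everything here uses only Python's
# standard library; the agent config itself is invented for illustration.
import json

agent_config = {
    "name": "research_helper",              # hypothetical agent name
    "system_prompt": "You are a helpful research assistant. Answer briefly.",
    "tools": ["web_search", "calculator"],  # made-up tool names
    "max_steps": 5,
}

# Serialize to a JSON string (what gets sent over the wire)...
as_text = json.dumps(agent_config, indent=2)
print(as_text)

# ...and parse it back into a Python dict on the receiving side.
parsed = json.loads(as_text)
print(parsed["system_prompt"])
```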

My boss taught me how to build a Failed business (15 lessons)
reddit
LLM Vibe Score0
Human Vibe Score0.091
aminekhThis week

My boss taught me how to build a Failed business (15 lessons)

I'm a senior software developer at a three-year-old startup that has been making $0 in revenue. I've been with this startup since its beginning, and it pays me $1200/month. My boss has broken the records for the number of stupid ideas and stupid features he has asked me to implement. He taught me (unintentionally) all the lessons I should NOT follow to build a successful business. From bad product ideas, bad business decisions, not listening to your team, not building what target customers want, and falling in love with your bad product. The product we're working on is a desktop program that moves the cursor with your finger using the webcam (gesture recognition). Why in the world would anyone pay money to move the mouse cursor with his finger? No one knows. My boss watched Iron Man (the film) and saw how Tony Stark does gestures in front of his "advanced" computer and thought it was cool, so he asked me to build this for him to sell to enterprises (then pivoted the target customer to schools). Of course, no one bought this software. All the people he meets tell him it is cool, but he never hears from them again. No one on the team, except my boss, thinks this software will succeed. He keeps adding irrelevant features to this software just because he "thinks" people will love it. We added a 3D object visualizer, ChatGPT integration, and quizzes. I suggested moving everything to the cloud and focusing only on improving the education industry by providing solutions that help teachers better prepare their lessons and understand where each student lacks, by recording lessons, summarizing them for students, generating quizzes using AI, and analyzing the parts that each student didn't understand. However, to do that, we would need to drop the part about moving the cursor with fingers, because it can only be done in Python, not NextJS. He simply replied, "NO, moving the cursor with fingers is COOL". So here are the lessons I learned from my boss on how to build a failed business: Never listen to your team. Always build what you think is good and never let anyone from your team say it's a bad idea. Fall in love with your business idea. Don't talk to customers. If no one bought your product, it's because they don't understand how cool it is. If a member of your team says it's a bad idea, ignore them, they don't understand how cool your idea is. Always hire interns because they're free labor, and give them the most sensitive parts of the work like payments and databases. Make your business dependent on you. Don't let your team do their job the right way, give them orders to do it YOUR way. Hire experts to tell them what to do, not to tell you what to do and how to do it. Never do marketing because people will steal your idea. Ask your team "What do you think?" but ignore them. If your wife and children think your product is cool, then it's cool. Start a business in an industry that you know nothing about, but act like you know everything. If no one is buying your product, keep adding irrelevant features that no one asked for. --- Edit: I didn't mention all the "stupid" ideas I built for him, so here you go: Replacing Zoom, Teams, and Meet meetings with meetings in the metaverse. Target customer: Enterprises. An app that lets you scroll through social media without touching your mobile screen (using gesture recognition). We didn't build this because it's technically impossible to continuously use the phone camera outside your own app. He didn't believe me, so he asked his friend, who told him the same thing.
Software that controls the computer with gestures (moving the cursor, single click, double click, ALT-Tab...). Target customers: Enterprises. Building a classroom in Decentraland (metaverse) to replace classes through Zoom and Teams. He told me to build the startup website, but not to make the home page the first page a user lands on when they open the website. He wanted the visitor to land on another "almost" empty page, and if the user wants to go to the home page, they have to click "Home" in the navbar.
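For the curious, the core of that "move the cursor with your finger" feature is a fairly standard hand-tracking loop. Here is a minimal sketch of the general technique (assuming the mediapipe, opencv-python, and pyautogui packages - this is illustrative, not the startup's actual code):

```python
# Minimal sketch of webcam cursor control via hand tracking.
# Assumes: pip install mediapipe opencv-python pyautogui
import cv2
import mediapipe as mp
import pyautogui

screen_w, screen_h = pyautogui.size()
hands = mp.solutions.hands.Hands(max_num_hands=1, min_detection_confidence=0.7)
cap = cv2.VideoCapture(0)  # default webcam

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    frame = cv2.flip(frame, 1)  # mirror so movement feels natural
    results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.multi_hand_landmarks:
        # Landmark 8 is the index fingertip (normalized 0..1 coordinates)
        tip = results.multi_hand_landmarks[0].landmark[8]
        pyautogui.moveTo(int(tip.x * screen_w), int(tip.y * screen_h))
    cv2.imshow("preview", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```

Which is roughly why it was a Python-only feature: the OS-level cursor control and the camera loop live outside the browser, where a NextJS frontend can't reach.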

MVP + AI/ML Implementation/Integration - Done for you SaaS
reddit
LLM Vibe Score0
Human Vibe Score1
rikksamThis week

MVP + AI/ML Implementation/Integration - Done for you SaaS

In today’s fast-paced world, businesses need to stay ahead of the curve. Leveraging AI, ML, and Cloud technologies isn't just an option—it's a necessity. We specialize in providing cutting-edge AI/ML solutions and Cloud services that empower businesses to innovate, automate, and scale like never before. Why AI and ML Matter Artificial Intelligence (AI) and Machine Learning (ML) are revolutionizing industries by enabling systems to learn, adapt, and improve over time. Whether it's predicting customer behavior, automating tasks, or enhancing decision-making, AI and ML open up a world of possibilities. Key Benefits of AI and ML: Enhanced Decision-Making: Harness predictive analytics to make data-driven decisions. Automation: Streamline operations with intelligent automation. Personalization: Deliver tailored experiences to your customers, increasing engagement and loyalty. Efficiency: Reduce costs and time through optimized processes. How Cloud Services Drive Innovation The Cloud is the backbone of modern business infrastructure. It allows companies to be more agile, scalable, and resilient. With Cloud computing, businesses can access powerful tools and resources on-demand, without the need for significant upfront investment. Advantages of Cloud Services: Scalability: Easily scale up or down based on your business needs. Cost Efficiency: Pay only for the resources you use, minimizing overhead. Security: Benefit from the highest standards of data security and compliance. Flexibility: Access your applications and data from anywhere, anytime. Our Services We offer comprehensive services to help you harness the full potential of AI, ML, and Cloud technologies: AI and ML Solutions: We design and deploy custom AI/ML models that solve your specific business challenges. From natural language processing (NLP) to computer vision, we cover all aspects of AI/ML. Cloud Integration: We help you migrate to the Cloud, ensuring a smooth transition with minimal disruption. Whether it’s AWS, Azure, or Google Cloud, our experts have you covered. Data Analytics: Transform your data into actionable insights with advanced analytics tools and platforms. Custom Software Development: We build robust, scalable applications that integrate AI/ML capabilities and leverage the Cloud. DevOps: Automate your development pipeline and ensure continuous integration and delivery with our DevOps expertise. Why Choose Us? Expert Team: Our team of experienced professionals is well-versed in AI/ML, Cloud computing, and data analytics. End-to-End Solutions: From ideation to deployment, we offer full-cycle development services. Tailored Approach: We understand that every business is unique. We provide customized solutions that align with your specific goals. Proven Track Record: We’ve helped numerous businesses across industries to innovate and grow. Success Stories Retail Industry: Implemented an AI-driven recommendation engine that increased sales by 30%. Healthcare Sector: Developed an ML-based diagnostic tool that improved accuracy by 20%. Finance: Integrated Cloud-based AI solutions that reduced operational costs by 25%.

Detailed Guide - How I've Been Self Employed for 2 Years Selling Posters
reddit
LLM Vibe Score0
Human Vibe Score1
tommo278This week

Detailed Guide - How I've Been Self Employed for 2 Years Selling Posters

Hey everyone, bit of context before you read through this. I have been selling POD posters full time for over 2 years now. My next venture is that I have started my own Print on Demand company for posters, PrintShrimp. As one way of creating customers for our service, we are teaching people for free how to also sell posters. Here is a guide I have written on how to sell posters on Etsy. Feel free to have a read through and then check out PrintShrimp, hopefully we can help some of you guys out (and get us some more customers!) All of this is also available in video format on our website too, if you prefer to learn that way. Thanks guys! And as some people asked in other subs, no this isn't written with AI 😅 This took a couple of weeks to put together! Through this guide, we will teach you everything you need to know about starting to sell posters and generate some income. We will also show you why PrintShrimp is the best POD supplier for all of your poster needs. Trust me, you won't need much convincing. So, why are posters the best product to sell? Also, just thought I'd quickly answer the question - why posters? If you've been researching Print on Demand you've probably come across the infinite options of t-shirts, mugs, hats, phone cases, and more. All of these are viable options, however we think posters are the perfect place to start. You can always expand into other areas further down the line! So a brief summary of why posters are the perfect product for Print on Demand: - They are very easy to design! Posters are a very easy shape to deal with - can't go wrong with a rectangle. This makes designing products very easy. - Similarly to this, what you see is what you get with a poster. You can literally see your finished product as you design it in either Canva or Photoshop. With t-shirts for example, you have to make your design, and then place it on a t-shirt. Then you have to coordinate with your printers the size you would like the design on the t-shirt and many other variables like that. There is no messing about with posters - what you see is what you get. - The same high quality, everywhere. With other products, if you want to reap the benefits of printing in various countries, you need to ensure each of your global suppliers stocks the same t-shirts, is able to print in the same way, carries the same sizes etc. Again with posters you avoid all of this hassle - your products will come out the same, no matter which of our global locations are used. - They have a very favorable profit margin. As you will see later, the cost price of posters is very low. And people are prepared to pay quite a lot for a decent bit of wall art! I have tried out other products, and the profit margin combined with the order quantity of posters makes them my most profitable product, every single time. Using PrintShrimp, you can be sure to enjoy anywhere between £6 - £40 pure profit per sale. - They are one of the easiest to print white label. This makes them perfect for Print on Demand. Your posters are simply put in a tube, and off they go. There are no extras you need to faff around with, compared to the extra elements other products come with, such as clothing labels on t-shirts. Picking your poster niche So, you are ready to start selling posters. Great! Now, the blessing and curse with selling posters is that there are infinite possibilities regarding what you can sell. So, it can easily be quite overwhelming at first.
The first thing I would recommend doing is having a look at what others are selling. Etsy is a wonderful place for this (and will likely be a key part of your poster selling journey). So, log on to Etsy and simply type in 'poster' in the search bar. Get ready to write a massive list of the broad categories and types of posters that people are selling. If you do not have more than 50 categories written down by the end, you are doing something wrong. There are seriously an infinite amount of posters! For example, here are some popular ones to get you started: Star sign posters, Kitchen posters, World map posters, Custom Dog Portrait posters, Music posters, Movie posters, Fine art posters, Skiing posters, Girl Power posters and Football posters. Now, you have a huge list of potential products to sell. What next? There are a few important things you need to bear in mind when picking your niche: - Does this interest me? Don't make the mistake of going down a niche that doesn't actually interest you just because it would probably be a money maker. Before you know it, what can be a very fun process of making designs can become incredibly monotonous, and feel like a chore. You need to bear in mind that you will be spending a lot of time creating designs - if it is something you are interested in you are much less likely to get burnt out! As well, creativity will flow far better if it is something you are interested in, which at the end of the day will lead to better designs that are more likely to be purchased by customers. - Is this within my design range? Don't let this put you off too much. We will go through how to get started on design later on in this guide. However, it is important to note that the plain truth of it is that some niches and designs are a hell of a lot more complicated than others. For example, quote posters can essentially be designed by anyone once you learn how to put nice fonts together in a good color scheme. On the other hand, some posters you see may have been designed with complex illustrations in a program like Illustrator. To start with, it may be better to pick a niche that seems a bit more simple to get into, as you can always expand your range with other stores further down the line. A good way of evaluating the design complexity is by identifying whether the poster is a lot of elements put together, or a lot of elements created by the designer themselves. Design can in a lot of cases be like a jigsaw - putting colours, shapes and text together to create an image. This will be a lot easier to start with and can be learnt by anyone, compared to complex drawings and illustrations. - Is this niche subject to copyright issues? Time to delve deep into good old copyright. Now, when you go through Etsy, you will without a doubt see hundreds of sellers selling music album posters, car posters, movie posters and more. Obviously, these posters contain the property of musicians, companies and more, and are therefore copyrighted. The annoying thing is - these are a complete cash cow. If you go down the music poster route, I will honestly be surprised if you don't make thousands. However it is only a matter of time before the copyright strikes start rolling in and you eventually get banned from Etsy. So I would highly recommend not making this mistake. Etsy is an incredible platform for selling posters, and it is a hell of a lot easier to make sales on there compared to advertising your own website.
And, you only get one chance on Etsy. Once you have been banned once, you are not allowed to sign up again (and they do ID checks - so you won't be able to rejoin again under your own name). So, don't be shortsighted when it comes to entering Print on Demand. If you keep your designs legitimate, they will last you a lifetime and you will then later be able to crosspost them to other platforms, again without the worry of ever getting shut down. So, how do I actually design posters? Now you have an idea of what kind of posters you want to be making, it's time to get creative and make some designs! Photoshop (and the Creative Cloud in general) is probably the best for this. However, when starting out it can be a scary investment (it costs about £30 a month unless you can get a student rate!). So, while Photoshop is preferable in the long term, when starting out you can learn the ropes of design and get going with Canva. This can be great at the start as they have a load of templates that you can use to get used to designing and experimenting (while it might be tempting to slightly modify these and sell them - this will be quite saturated on places like Etsy so we would recommend doing something new). What size format should I use? The best design format to start with is arguably the A sizes - as all the A sizes (A5, A4, A3, A2, A1, A0) are scalable. This means that you can make all of your designs in one size, for example A3, and these designs will be ready to fit all other A sizes. For example, if you design an A3 poster and someone orders A1, you can just upload this A3 file to PrintShrimp and it will be ready to print. There is a wide range of other sizes you should consider offering on your shop, especially as these sizes are very popular with the American market. They have a wide range of popular options, which unfortunately aren't all scalable with each other. This does mean that you will therefore have to make some slight modifications to your design in order to be able to offer them in American sizing, in a few different aspect ratios. What you can do however is design all of your products in UK sizing, and simply redesign to fit American sizing once you have had an order. Essentially: design in UK sizing, but list in both UK and US sizing. Then when you get a non-A size order, you can quickly redesign it on demand. This means that you don't have to make a few different versions of each poster when first designing, and can simply do a quick redesign for US sizing when you need to. Below is PrintShrimp's standard size offering. We can also offer any custom sizing too, so please get in touch if you are looking for anything else. With these sizes, your poster orders will be dispatched domestically in whatever country your customer orders from. Our recommendations for starting design One thing that will not be featured in this guide is a written out explanation or guide on how to design. Honestly, I can't think of a more boring, or frankly worse, way to learn design. When it comes to getting started, experimenting is your best friend! Just have a play around and see what you can do. It is a really fun thing to get started with, and the satisfaction of when a poster design comes together is like no other. A good way to start is honestly by straight up copying a poster you see for sale online. And we don't mean copying to sell! But just trying to replicate other designs is a great way to get a feel for it and what you can do.
We really think you will be surprised at how easy it is to pull together a lot of designs that at first can appear quite complicated! Your best friend throughout this whole process will be Google. At the start you will not really know how to do anything - but learning how to look into things you want to know about design is all part of the process. At first, it can be quite hard to even know how to search for what you are trying to do, but this will come with time (we promise). Learning how to Google is a skill that you will pick up throughout this process. Above all, what we think is most important is this golden rule: take inspiration but do not steal. You want to be selling similar products in your niche, but not copies. You need to see what is selling in your niche and get ideas from that, but if you make designs too similar to ones already available, you won't have much luck. At the end of the day, if two very similar posters are for sale and one shop has 1000 reviews and your newer one has 2, which one is the customer going to buy? You need to make yours offer something different and stand out enough to attract customers. Etsy SEO and maximizing your sales You may have noticed in this guide we have mentioned Etsy quite a few times! That is because we think it is hands down the best place to start selling posters. Why? Etsy is a go-to place for many looking to decorate their homes and also to buy gifts. It might be tempting to start selling with your own website straight away, however we recommend Etsy as it brings the customers to you. For example, say you start selling bathroom posters. It is going to be a hell of a lot easier to convert sales when you already have customers being shown your page after searching 'bathroom decor', compared to advertising your own website. This is especially true as it can be hard to identify your ideal target audience to then advertise to via Meta (Facebook/Instagram) for example. Websites are a great avenue to explore eventually like I now have, but we recommend starting with Etsy and going from there. What costs do I need to be aware of? So, setting up an Etsy seller's account currently costs £15. The only other upfront cost you will have is the cost of listing a product - this is 20 cents per listing. From then on, every time you make a sale you will be charged a transaction fee of 6.5%, a small payment processing fee, plus another 20 cents for a renewed listing fee. It normally works out to about 10% of each order, a small price to pay for all the benefits Etsy brings. No matter what platform you sell on, you will be faced with some form of transaction fee. Etsy is actually quite reasonable, especially as they do not charge you to use their platform on a monthly basis. What do I need to get selling? Getting your shop looking pretty - Think of a shop name and design (now you are a professional designer) a logo - Design a banner for the top of your shop - Add in some about me info/shop announcement - I recommend running a sale wherein orders of 3+ items get a 20% discount. Another big benefit of PrintShrimp is that you receive large discounts when ordering multiple posters. This is great for attracting buyers and larger orders. Making your products look attractive That is the bulk of the 'decor' you will need to do. Next up is placing your posters in mock ups! As you may notice on Etsy, most shops show their posters framed and hanging on walls. These are 99% of the time not real photos, but digital mock ups.
This is where Photoshop comes in really handy, as you can automate this process through a plug-in called Bulk Mock Up. If you don't have Photoshop, you can do this in Canva, you will just have to do it manually, which can be rather time consuming. Now, where can you get the actual mock ups? One platform we highly recommend for design in general is Envato Elements. These are design marketplaces where you have access to millions of design resources that you are fully licensed to use! Titles, tags, and descriptions Now for the slightly more nitty gritty part. You could have the world's most amazing looking poster, however, if you do not get the Etsy SEO right, no one is going to see it! We will take you through creating a new Etsy listing field by field so you can know how to best list your products. The key to Etsy listing optimisation is to maximise. Literally cram in as many keywords as you possibly can! Before you start this process, create a word map of anything you can think of relating to your listing. And come at this from the point of view of: if I was looking for a poster like mine, what would I search? Titles - Here you are blessed with 140 characters to title your listing. Essentially, start off with a concise way of properly describing your poster. And then afterwards, add in as many keywords as you can! Here is an example of the title of a well selling skiing poster: Les Arcs Skiing Poster, Les Arcs Print, Les Alpes, France Ski Poster, Skiing Poster, Snowboarding Poster, Ski Resort Poster Holiday, French This is 139 characters out of 140 - you should try and maximise this as much as possible! As you can see, this crams in a lot of keywords and search terms both related to skiing as a whole, the poster category, and then the specifics of the poster itself (Les Arcs resort in France). Bear in mind that if you are listing a lot of listings that are of the same theme, you won't have to spend time creating an entirely new title. For example if your next poster was of a ski resort in Italy, you can copy this one over and just swap out the specifics. For example change "France ski poster" to "Italy ski poster", change "Les Arcs" to "The Dolomites", etc. Description - The same logic applies for descriptions - try and cram in as many keywords as you can! Here is an example for a Formula One poster: George Russell, Mercedes Formula One Poster - item specific keywords Bright, modern and vibrant poster to liven up your home. - Describes the style of the poster All posters are printed on high quality, museum grade 200gsm poster paper. Suitable for framing and frames. - Shows the quality of the print. Mentions frames whilst showing it comes unframed Experience the thrill of the racetrack with this stunning Formula One poster. Printed on high-quality paper, this racing car wall art print features a dynamic image of a Formula One car in action, perfect for adding a touch of speed and excitement to any motorsports room or man cave. Whether you're a die-hard fan or simply appreciate the adrenaline of high-speed racing, this poster is sure to impress. Available in a range of sizes, it makes a great addition to your home or office, or as a gift for a fellow Formula One enthusiast. Each poster is carefully packaged to ensure safe delivery, so you can enjoy your new piece of art as soon as possible. - A nice bit of text really highlighting a lot of keywords such as gift, motorsports, racetrack etc.
You could go further with this too, by adding in extra things related to the poster such as 'Perfect gift for a Mercedes F1 fan' etc. Tags Now, these are actually probably the most important part of your listing! You get 13 tags (20 character limit for each) and these are essentially the search terms that will match your listing with what customers search for when shopping. You really need to maximize these - whilst Title and Description play a part, these are the main things that will bring buyers to your listing. Once again, it is important to think about what customers are likely to be searching when looking for a poster similar to yours. Life hack alert! You can actually see what tags other sellers are using. All you need to do is go to a listing similar to yours that is selling well, scroll down and you can actually see them listed out at the bottom of the page! Here is an example of what this may look like: So, go through a few listings of competitors and make notes on common denominators that you can integrate into your listing. As you can see here, this seller uses tags such as 'Birthday Gift' and 'Poster Print'. When you first start out, you may be better off swapping these out for more listing specific tags. This seller has been on Etsy for a few years however and has 15,000+ sales, so is more likely to see success from these tags. If it's not clear why, think about it this way. If you searched 'poster print' on Etsy today, there would be tens of thousands of results. However, if you searched 'Russell Mercedes Poster', you will (as of writing) get 336 results. Etsy is far more likely to push your product to the top of the latter tag, against 300 other listings, rather than the top of 'Poster Print', where it is incredibly competitive. It is only when you are a more successful shop pulling in a high quantity of orders that these larger and more generic tags will work for you, as Etsy has more trust in your shop and will be more likely to push you to the front. SKUs - One important thing you need to do is add SKUs to all of your products! This is worth doing at the start as it will make your life so much easier when it comes to making sales and using PrintShrimp further down the line. What is an SKU? It is a 'stock keeping unit', and is essentially just a product identifier. Your SKUs need to match the file name that you upload to PrintShrimp. For example, if you made a poster about the Eiffel Tower, you can literally name the SKU eiffel-tower. There is no need to complicate things! As long as your file name (as in the image name of your poster on your computer) matches your SKU, you will be good to go. - It may be more beneficial to set up a system with unique identifiers, to make organising your files a lot easier further down the line. Say you get to 1000 posters eventually, you'll want to be able to quickly search a code, and also ensure every SKU is always unique, so you won't run into accidentally using the same SKU twice further down the line. For example, you can set it up so at the start of each file name, you have [unique id][info], so your files will look like - A1eiffeltower A2france And further down the line: A99aperolspritz B1potatoart This not only removes the potential issue of duplicating SKUs accidentally (for example if you made a few posters of the same subject), but also keeps your files well organised.
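If you like, you can even script the ID generation so you never have to think of the next code yourself. A quick illustrative sketch in Python (the helper is made up, it just shows the [unique id][info] idea above):

```python
# Illustrative sketch of the [unique id][info] SKU scheme described above:
# ids run A1..A99, then B1..B99, and so on, so every SKU stays unique.
import string

def sku_ids():
    for letter in string.ascii_uppercase:   # A, B, C, ...
        for number in range(1, 100):        # 1..99
            yield f"{letter}{number}"

ids = sku_ids()
for info in ["eiffeltower", "france", "aperolspritz"]:
    print(next(ids) + info)  # A1eiffeltower, A2france, A3aperolspritz
```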
If you need to find a file, you can search your files by the code, so you can just search 'a1' for example, rather than having to trawl through a load of different files until you find the correct one. - If your poster has variations, for example color variations, you can set a different SKU for each variation. Just click the little box when setting up variations that says 'SKUs vary for each (variation)'. So if you have a poster available in either a white or black background, you can name each file, and therefore each SKU, a1eiffel-tower-black and a1eiffel-tower-white for example. - The same goes for different sizes. As different American sizes have different aspect ratios, as mentioned above you may have to reformat some posters if you get a sale for one of these sizes. You can then add the SKU to your listing once you have reformatted your poster. So for example if you sell a 16x20" version of the Eiffel Tower poster, you can name this file eiffel-tower-white-1620. Whilst this involves a little bit of set up, the time it saves you overall is massive! Variations and Prices - So, when selling posters there is a huge variety of sizes that you can offer, as mentioned previously. Non-negotiable is that you should be offering A5-A1. These will likely be your main sellers! Especially in the UK. It is also a good idea to offer inch sizing to appeal to a global audience (bear in mind that with PrintShrimp you will be able to print in multiple countries around the world!). Below is a recommended pricing structure of what to charge on Etsy. Feel free to mess around with these! You may notice on Etsy that many shops charge a whole lot more for sizes such as A1, 24x36" etc. In my experience I prefer charging a lower rate to attract more sales, but there is validity in going for a lower amount of sales with higher profits. As mentioned above, you can also offer different variations on items - for example different colour schemes on posters. This is always a decent idea (if it suits the design) as it provides the customer with more options, which might help to convert the sale. You can always add this in later however if you want to keep it simple while you start! Setting up shipping profiles Etsy makes it very easy to set up different shipping rates for different countries. However, luckily with PrintShrimp you can offer free shipping to the majority of the major countries that are active on Etsy! Using PrintShrimp means that your production costs are low enough in each domestic market to justify this. If you look on Etsy you can see there are many shops that post internationally to countries such as the US or Australia. Therefore, they often charge £8-10 in postage, and have a delivery time of 1-2 weeks. This really limits their customer base to their domestic market. Using PrintShrimp avoids this and means you can offer free shipping (as we absorb the shipping cost in our prices) to the major markets of the UK, Australia, and USA (Europe coming soon!). We also offer a 1 day processing time, unlike many POD poster suppliers. This means you can set your Etsy processing time to just one day, which combined with our quick shipping means you will be one of the quickest on Etsy at sending out orders. This is obviously very attractive for customers, who are often very impatient with wanting their orders! Getting the sales and extra tips - Don't list an insane amount of listings when you first get started.
Etsy will be like 'hang on a second' if a brand new shop suddenly has 200 items in the first week. Warm up your account, and take things slow as you get going. We recommend 5 a day for the first week or so, and then you can start uploading more. You don't want Etsy to flag your account for suspicious bot-like activity when you first get going. - It is very easy to copy listings when creating a new one. Simply select an old listing and press copy, and then you can just change the listing specific details to create a new one, rather than having to start from scratch. It can feel like a bit of a ball-ache setting up your first ever listing, but from then on you can just copy it over and just change the specifics. - Try and organize your listings into sections! This really helps the customer journey. Sometimes a customer will click onto your shop after seeing one of your listings, so it really helps if they can easily navigate your shop for what they are looking for. So, you now have a fully fledged Etsy shop. Well done! Time to start making £3,000 a month straight away, right? Not quite. Please bear in mind, patience is key when starting out. If you started doing this because you are £10,000 in debt to the Albanian mafia and need to pay it off next week, you have come into this in the wrong frame of mind. If you have however started this to slowly build up a side hustle which hopefully one day becomes your full time gig, then winner winner chicken dinner. Starting out on Etsy isn't always easy. It takes time for your shop to build up trust! As I've said before, a buyer is far more likely to purchase from a shop with 1000s of reviews than a brand new one with 0. But before you know it, you can become one of these shops! One thing you can do at the very start is to encourage your friends and family to buy your posters! This is a slightly naughty way of getting a few sales at the start, of course followed by a few glowing 5* reviews. It really helps to give your shop this little boost at the start, so if this is something you can do then I recommend it. Okay, so once you have a fully fledged shop with a decent amount of listings, you might be expecting the sales to start rolling in. And, if you are lucky, they indeed might. However, in my experience, you need to give your listings a little boost. So let us introduce you to: The wonderful world of Etsy ads Ads!! Oh no, that means money!! We imagine some of you more risk averse people are saying to yourselves right now. And yes, it indeed does. But more often than not you unfortunately do have to spend money to make money. Fortunately, in my experience anyway, Etsy ads do tend to work. This does only apply if your products are actually good, however, so if you're back here after paying for ads for 2 months and are losing money at the same rate as your motivation, maybe go back to the start of this guide and pick another niche. When you first start out, there are two main strategies. Number 1: The Safer Option So, with PrintShrimp, you will essentially be making a minimum of £6 profit per order. With this in mind, I normally start a new shop with a safer strategy of advertising my products with a budget of $3-5 a day. This then means that at the start, you only need to make 1 sale to break even, and anything above that is pure profit! This might not seem like the most dazzling proposition right now, but again please bear in mind that growth will be slow at the start.
This means that you can gradually grow your shop, and therefore the trust that customers have in your shop, over time with a very small risk of ever actually losing money. Number 2: The Billy Big Balls Option If you were yawning while reading the first option, then this strategy may be for you. This will be better suited to those of you that are a bit more risk prone, and it also helps if you have a bit more cash to invest at the start. Through this strategy, you can essentially pay your way to the top of Etsy's rankings. For this, you'll probably be looking at spending $20 a day on ads. So, this can really add up quickly and is definitely the riskier option. In my experience, the level of sales with this may not always match up to your spend every day. You may find that some days you rake in about 10 sales, and other days only one. But what this does mean is that as your listings get seen and purchased more, they will begin to rank higher in Etsy's organic search rankings, at a much quicker rate than option one. This is the beauty of Etsy's ads. You can pay to boost your products, but then results from this paid promotion feed into the organic ranking of your products. So you may find that you can splash the cash for a while at the start in order to race to the top, and then drop your ad spending later on when your products are already ranking well. Sending your poster orders So, you've now done the hard bit. You have a running Etsy store, and essentially all you need to do now on a daily basis is send out your orders and reply to customer messages! This is where it really becomes passive income. - Check out the PrintShrimp order portal. Simply sign up, and you can place individual orders through there. - Bulk upload: We have an option to bulk upload your Etsy orders via CSV. Seriously, when you are up and running with your first store, it is really as easy as that. Once you have your first Etsy store up and running, you can think about expanding. There are many ways to expand your income. You can set up other Etsy stores, as long as the type of posters you are selling varies. You can look into setting up your own Shopify stores, and advertise them through Facebook, Instagram etc.
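One last footnote on the ads maths from the two strategies above - the break-even logic is simple enough to sanity-check in a couple of lines (illustrative numbers only, using the roughly £6 minimum profit per order mentioned earlier, and treating the $ ad figures as rough equivalents):

```python
# Illustrative break-even check for daily ad spend vs profit per sale.
import math

def sales_to_break_even(daily_ad_spend, profit_per_sale=6.0):
    # Round up: you can't make a fraction of a sale.
    return math.ceil(daily_ad_spend / profit_per_sale)

print(sales_to_break_even(4))   # "safer" option, ~4/day  -> 1 sale a day
print(sales_to_break_even(20))  # "Billy Big Balls", 20/day -> 4 sales a day
```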

Changing Careers, changing products? Age 38, Direction needed, investment advice too.
reddit
LLM Vibe Score0
Human Vibe Score0.667
Salad-BanditThis week

Changing Careers, changing products? Age 38, Direction needed, investment advice too.

Hello, At one point in my life I had a set plan that I had been following in order to design a life that fit my values, but during 2020 its viability was called into question and I have been on bad footing, unable to find stability, since. I currently have stable housing without a roommate, enough in savings for a year without any income, and three more years' worth in a mutual fund. The question I need help with is about utilizing approximately $40,000 that I would like to invest into a new or existing business venture, or possibly start trying my own hand at selecting stocks. To give some context: back in 2005 I graduated high school and immediately became an entrepreneur. I started a sports clothing company, was selling WoW bot accounts, ghillie suits on eBay, and graphic design commissions, and I was proficient in MX Flash. The first part of my life plan was to start farming three years before 2012, ahead of what I thought would be a peak oil economic collapse, so while watching 2008 unfold, and with my career in MX Flash falling flat, I started farming in 2009. From that point I spent a total of 15 years farming, the majority of that for my own LLC, where I held leases on million dollar properties as an Ag tax write-off, on an elite island outside a major city, serving local high price wholesale, mainly salad mix and mushrooms, because they are a fast turnaround. That was truly the best 20s I could have asked for: working mainly for myself, very healthy, putting away $10-20k in savings/investments per year, plus earning about $3-5k more per year, while living in a cargo trailer on dirt cheap leases. But it all came to a slow end starting in 2020 when I lost all of my wholesale overnight and my retail exploded, which burnt me out to the point I couldn't walk, as the sole worker in my LLC. So I do not fully trust the volatility of the wholesale food industry, from a small grower's perspective, since I don't own land. So now I am trying to figure out a way forward. I can always farm in the future, and I have taught myself hydroponics and flat packed farm equipment, so my business is very agile and now I can grow in parking lots closer to the city for more sales opportunities. But I am not sure that is what I want to do in this current moment, because tech is exploding, and we have never had so much information available to us; it's a shame not to spend a moment in life to discover what new opportunities might be out there. I was laid off twice last year, so I've been out of work the past four months, doing thrift store routes twice a week while making about $500+/wk, really just trying to understand what people still buy and break even, while I continue to study 3D design in Blender, as well as 2D digital art, in the hope that I can reconnect with my tech art past, because that is what I told myself when I was 18: that I would put off art and computers until I was past 30 and needed to do less with my body. But over the past three years, the better I get at digital art, the better AI has been getting too. I have some mentors who might give me work and a foot in the door, but most of them are laid off, and scrounging for work if they are not on their own funded indie project. I've thought about continuing to learn 3D modeling despite AI, and despite seeing Flash, a computer program I was proficient in, get removed from existence before I could really earn my money back.
I assume there will always be a need for AI-generated models to get cleaned up, mapped and rigged, especially with AR technology coming to consumers soon. Moreover, it would help if I decided to go to a community college to do CNC certificates, so I can have that as a backup job doing CAD at a machining warehouse and do my farm and digital art on the side, but CNC machinists don't make a crazy amount of money and have a boss. BUT I am an inventor, and have two inventions so far, plus my ultimate goal is to one day have automated hydroponic greenhouses, using all CNC and 3D printed parts, to create a low time investment agriculture income: an AI monitored greenhouse, a seed-to-salad product that I can sell to other people. That would tie into my desire to teach people about farming too, as well as being something I enjoy, but it is not a proven concept yet. Anyway, if you've read this far I appreciate it. I ultimately would like third party feedback about how I should spend my $40k surplus cash. I originally had it saved and accessible in case I was going to lease land and start my full farm business again from scratch, but I think using the equipment and space I have, and exploring non-perishable products, is a smart move for me right now. Should I invest in inventory of products to arbitrage online? Should I invest in the top index funds? Should I buy silver? Should I invest in inventory of a new product line? Should I spend some money on insurance and equipment for a landscaping company? I want to future proof myself the best I can as AI unfolds. I am pretty set with an income for the rest of my life as long as I can grow food and sell it, but there are currently so many changing opportunities, I want to cast out my net and see what works with my temperament. I've thought about getting into cyber security, or maybe being an electrician, or less staple jobs like Landscape Architect (can use art/modeling) and CNC engineer/modeler, but honestly I prefer to make a product and sell it without client-service-related interaction, and particularly no boss. Thank you for reading

I am starting a startup on AI research automation. Looking for feedback!
reddit
LLM Vibe Score0
Human Vibe Score1
pablonmThis week

I am starting a startup on AI research automation. Looking for feedback!

Hi everyone, I would like to share a product idea that I'm working on. I studied computer science and have worked for Silicon Valley startups for the last 6 years. I'm currently employed full-time at a startup that sells an AI-powered search engine, so I have gained valuable experience in the AI/information retrieval space. I turned 29 last week, and I think it's time for me to start my own business. I've always wanted to run my own tech company, and I feel like now is the right time to begin with an idea. Are you a researcher in any field? Do you often find yourself learning new, highly complex topics and don't know where to start? Google is a great tool for finding answers to specific questions, but what if you don't know what questions to ask? I am developing a "deep search" engine that, given a topic, produces a multi-page report aggregating information from several properly cited sources. It finds and explains different perspectives and ideas related to the topic of interest. You can use it to automate the research process, but it's much more than that because it can help you uncover hidden perspectives, important questions, and ideas that you might not otherwise find when just googling. I welcome any feedback and ideas! Do you think this product would deliver significant value to your life? Why or why not? Would you be willing to pay to use it? I will post updates about this product in this thread in case you want to follow its development and try the product when it's ready.
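To make the idea a bit more concrete, here is a rough sketch of the shape such a "deep search" pipeline could take. Every function here is a made-up placeholder, stubbed out so it runs - it is not my actual implementation, just the skeleton I have in mind:

```python
# Hypothetical pipeline sketch for a "deep search" report generator.
# All helpers are illustrative stubs, not a real library or product code.

def expand_topic(topic: str) -> list[str]:
    # Real system: ask an LLM to break the topic into sub-questions.
    return [f"What is {topic}?", f"Open problems in {topic}", f"Criticisms of {topic}"]

def web_search(query: str, top_k: int = 3) -> list[dict]:
    # Real system: call a search API; here, stub results.
    return [{"url": f"https://example.com/{i}", "text": f"Result {i} for {query}"}
            for i in range(top_k)]

def summarize(hit: dict) -> str:
    # Real system: LLM summary; keep the URL so every claim stays cited.
    return f"{hit['text']} [source: {hit['url']}]"

def deep_search_report(topic: str) -> str:
    notes = []
    for query in expand_topic(topic):
        notes.extend(summarize(hit) for hit in web_search(query))
    # Real system: cluster notes into sections and have an LLM draft the prose.
    return f"Report: {topic}\n" + "\n".join(f"- {n}" for n in notes)

print(deep_search_report("retrieval-augmented generation"))
```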

Hello! Seeking essential advice regarding the desire to create an "AI". One that acts as a personal musical "Composer" in response to the individual users' emotional feedback. Company Name already created, as well as Trademark name for potential AI. However, I don't know where to start...
reddit
LLM Vibe Score0
Human Vibe Score1
TheHumanAnimal-This week

Hello! Seeking essential advice regarding the desire to create an "AI". One that acts as a personal musical "Composer" in response to the individual users' emotional feedback. Company Name already created, as well as Trademark name for potential AI. However, I don't know where to start...

Title pretty much sums it up. With 0 background in computer science, as well as no experience developing a company, I'm seeking professional (or personal) advice on the best approach to this potential business idea. Given the progression of Artificial Intelligence and its influence on the global population in the modern day, I have now developed an interest in its potential. After creating a model for the foundation, one which is relatively simple in nature, I took it upon myself to embrace my lack of knowledge/interest in the science of AI and go directly to the source: ChatGPT. Unfortunately, I currently can't afford to engage with the "smartest model" of ChatGPT, but after discussing a plan of approach with the free OpenAI version, I was given a lot of valuable information that I most likely would have overwhelmed myself with had I searched independently. With that being said, I'm now looking to hear from individuals who have actual experience within the relevant backgrounds. Any advice will help. Questions: What does the development of an AI assistant require as a foundation? Can it be built upon already established AI, and will that require a level of knowledge regarding coding as well as a proper legal understanding of API usage? Should the focus be on app development or the AI tool specifically? What communities would you suggest to find individuals with the ability to bring an idea to fruition virtually? From a business perspective, given the lack of financial resources and significant model value, how would one communicate this idea to others so they might become involved or invested? If I am asking the wrong questions, feel free to advise. Any questions that require more information on the idea are welcome.
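For context on question 2: "building upon already established AI" usually means calling an existing model through its API rather than training your own. Purely to illustrate the shape of that, here is a sketch where the endpoint, fields, and key are all hypothetical (not a real music service):

```python
# Purely hypothetical sketch of calling an established music-generation API.
import requests

def compose_for_mood(mood: str, api_key: str) -> bytes:
    response = requests.post(
        "https://api.example-music-model.com/v1/generate",  # made-up endpoint
        headers={"Authorization": f"Bearer {api_key}"},
        json={
            "prompt": f"A short instrumental piece matching a {mood} mood",
            "duration_seconds": 30,  # made-up parameter
        },
        timeout=120,
    )
    response.raise_for_status()
    return response.content  # e.g. audio bytes you would save to a file

# audio = compose_for_mood("calm but hopeful", api_key="...")
```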

Writing a exercise based TTRPG rulebook for a system where your real world fitness is tied to character progression
reddit
LLM Vibe Score0
Human Vibe Score1
BezboznyThis week

Writing a exercise based TTRPG rulebook for a system where your real world fitness is tied to character progression

My dad was a star athlete when he was young, and my mom was a huge sci-fi/fantasy nerd, so I got both ends of the stick, as it were. I love gaming and nerd culture, but also love exercise and self improvement. Sometimes exercise can feel boring though compared to daydreaming about fantastic fictional worlds, so for a long time I've been kicking around the idea of how to "Gamify" fitness, and recently I've been working on this passion project of a tabletop RPG (like D&D) where the stats of your character are related to your own fitness, so if you want your character in game to improve, you have to improve in the real world. Below is a rough draft you can look through that details the setting and mechanics of the game I've come up with so far. I'd love to eventually get a full book published and sell it online, maybe even starting a whole brand of "Gamified fitness": REP-SET: GAINSZ In the war-torn future of the 24th century… There are no rest days… In the futuristic setting of "REP-SET: GAINSZ," the "War of Gains" casts a long shadow over the Sol System as the various factions vie for territory and resources. However, war has evolved. Unmanned drones and long-range strikes have faded into obsolescence. Battles, both planet-side and in the depths of space, are now fought by soldiers piloting REP-SETs: Reactive Exoskeletal Platform - Symbiotic Evolution Trainer. Massive, humanoid combat mechs. Powered by mysterious "EV" energy, these mechanical marvels amplify, and are in turn amplified by, the fitness and mental acuity of their pilots. The amplification is exponential, leading pilots into a life of constant training in order for their combat prowess to be bolstered by every incremental gain in their level of fitness. With top pilots having lifting capacity measured in tons, and reaction times measured by their Mach number, REP-SET enhanced infantry now dominate the battlefield. The Factions: The Federated Isometocracy of Terra (FIT): Quote: "The strength of the body is the strength of the spirit. Together, we will lift humanity to its destined greatness. But ask not the federation to lift for you. Ask yourself: Do you even lift for the Federation?" Description: An idealistic but authoritarian faction founded on the principle of maximizing the potential of all individuals. FIT citizens believe in relentless striving for physical and mental perfection, leading to collective excellence. Their goal is the unification of humankind under a rule guided by this doctrine, which sometimes comes at the cost of individual liberties. Mech Concept: REP-SET mechs. Versatile humanoid designs focusing on strength, endurance, and adaptability. By connecting to the AI spirit within their REP-SET's core, each pilot enhances the performance of their machine through personal willpower and peak physical training. Some high-rank REP-SETs include features customized to the pilot's strengths, visually signifying their dedication and discipline. The Dominion of Organo-Mechanical Supremacy (DOMS): Quote: "Without pain, there is no gain. Become the machine. Embrace the burn." Description: A fanatical collective ideologically obsessed with "ascendancy through suffering" by merging their bodies with technology that not only transcends biological limitations, but also acts to constantly induce pain in its users. Driven by a sense of ideological superiority and a thirst for domination, DOMS seek to bring the painful blessings of their deity "The Lord of the Burn" to the rest of the solar system.
Their conquest could turn them into a significant threat to humanity. Mech Concept: Hybrid mechs, where the distinction between the pilot and the machine is blurred. The cockpit functions as a life-support system for the pilot, heavily modified with augmentations. The mechs themselves are often modular, allowing for adaptation and assimilation of enemy technology. Some DOMS mechs might display disturbing elements of twisted flesh alongside cold, mechanical parts. The Tren: Quote: "Grow... bigger... feast... protein..." Description: A ravenous conglomeration of biochemically engineered muscular monstrosities, united only by a shared insatiable hunger for "More". Existing mostly in deep space, they seek organic matter to consume and assimilate. They progress in power not due to any form of training or technology, but from a constant regimen of ravenous consumption and chemically induced muscle growth, all exponentially enhanced by EV energies. While some have been known to possess a certain level of intellect and civility, their relentless hunger makes them incredibly mentally volatile. When not consuming others, the strong consume the weak within their own faction. Mech Concept: Bio-organic horrors. While they do have massive war machines, some are living vessels built around immense creatures. These machines resemble grotesque fleshy designs that prioritize rapid mutation and growth over sleek aesthetics. Often unsettling to behold. Synthetic Intelligence Theocracy (SIT): Quote: "Failure is an unacceptable data point." Description: A society ruled by a vast and interconnected artificial intelligence network. The SIT governs with seemingly emotionless rationality, striving for efficiency and maximum productivity. This leads to a cold, but arguably prosperous, society - unless you challenge the logic of the collective AI. Their goals? Difficult to predict, as they hinge on how the AI calculates what's "optimal" for the continuation or "evolution" of existence. Mech Concept: Sleek, almost featureless robotic creations with a focus on efficient movement and energy management. Often drone-like or modular, piloted through direct mind-machine linking rather than traditional cockpits. Their aesthetic suggests cold and impersonal perfection. The Way Isolate (TWI): Quote: "The body unblemished, the mind unwavering. That is the path to true strength. That and a healthy diet of Aster-Pea proteins." Description: Known by some as "the asteroid farmers", The Way Isolate is a proud and enigmatic faction that stands apart from the other powers in the Sol System. A fiercely independent tribe bound by oaths of honor, loyalty, and hard work. Wandering the asteroid belt in their vast arc ships, their unparalleled mastery of asteroidal-agricultural engineering, ensuring they have no need to colonize planets for nutritional needs, has allowed them to abstain from the pursuit of territorial expansion in "The War of Gains", instead focusing on inward perfection, both spiritual and physical. They eschew all technological bodily enhancements deemed unnatural, believing that true power can only be cultivated through the relentless pursuit of personal strength achieved through sheer will and bodily perfection. The Way Isolate views biohacking, genetic manipulation, and even advanced cybernetics as corruptions of the human spirit, diluting the sacredness of individual willpower. Mech Concept: Way Isolate mechs are built with maneuverability and precision in mind rather than flashy augmentations.
Their REP-SETs are streamlined, favoring lean designs that mirror the athleticism of their pilots. Excelling in low to zero G environments, their mechs lack bulky armor, relying on evasion and maneuverability rather than brute force endurance. Weaponry leans towards traditional kinetic based armaments, perhaps employing archaic but reliable weapon styles such as blades or axes as symbols of their purity of purpose. These mechs reflect the individual prowess of their pilots, where victory is determined by focus, technique, and the raw power of honed physical ability. Base Player Character Example: You are a young, idealistic FIT soldier, barely out of training and working as a junior REP-SET mechanic on the Europa Ring World. The Miazaki district, a landscape of towering mountains and gleaming cities, houses a sprawling mountainside factory - a veritable hive of Gen 5 REP-SET construction. Here, the lines between military and civilian blur within a self-sufficient society dependent on this relentless industry. Beneath the surface, you harbor a secret. In a forgotten workshop, the ghost of a REP-SET takes shape - a unique machine built around an abandoned, enigmatic AI core. Ever since you salvaged it as a child from the wreckage of your hometown, scarred by a brutal Tren attack, you've dedicated yourself to its restoration. A lingering injury from that fateful battle mocks your progress, a constant reminder of the fitness exams you cannot pass. Yet, you train relentlessly, dreaming of the day you'll stand as a true REP-SET pilot. A hidden truth lies at the heart of the REP-SETs: as a pilot's abilities grow, their mech develops unique, almost mystical powers - a manifestation of the bond between the human spirit and the REP-SET's AI. The ache in your old wound serves as a grim prophecy. This cold war cannot last. The drums of battle grow louder with each passing day. GAME MECHANICS: The TTRPG setting of "REP-SET: GAINSZ" is marked by a unique set of rules, by which the player's real-world capabilities and fitness will reflect and affect the capabilities, progression, and success of their REP-SET pilot character in-game. ABILITY SCORES: Pilots' capabilities will be defined by 6 "Ability scores": Grace, Agility, Iron, Nourishment, Strength, and Zen. Each of the 6 ability scores will dually represent both a specific area of exercise/athleticism and a specific brand of healthy habits. The definitions of these ability scores are as follows: Grace (GRC): "You are an artist, and your body is your canvas; the way you move is your paint and brush." This ability score, the domain of dancers and martial artists, represents a person's ability to move with organic, flowing control and to bring beauty to the world. Skill challenges may be called upon when the player character needs to act with poise and control, whether socially or physically. Real-world skill checks may involve martial arts drills, dancing to music, or balance exercises. Bonuses may be granted if the player has recently done something artistically creative or kind, and penalties may apply if they have recently lost their temper. This ability score affects how much NPCs like your character in game. Agility (AGI): "Your true potential is locked away, and speed is the key to unlocking it." The domain of sprinters, this ability score represents not only a person's absolute speed and reaction time but also their capacity to finish work early and avoid procrastination.
Skill challenges may be called upon when the player character needs to make a split-second choice, move fast, or deftly dodge something dangerous. Real-world skill checks may involve acts of speed such as sprinting or punching/kicking at a steadily increasing tempo. Bonuses may apply if the player has finished work early, and penalties may apply if they are procrastinating. This ability score affects moving speed and turn order in game.

Iron (IRN): "Not money, nor genetics, nor the world's greatest trainers... it is your resolve, your will to better yourself, that will make you great." Required by all athletes regardless of focus, this ability score represents a player's willpower and their capacity to push through pain, distraction, or anything else to achieve their goals. Skill challenges may be called upon when the player character needs to push through fear, doubt, or mental manipulation. Real-world skill checks may involve feats of athletic perseverance, such as planking or dead hangs from a pull-up bar. Bonuses may apply when the player maintains or creates scheduled daily routines of exercise, self-improvement, and work completion, and penalties may apply when they falter in those routines. This ability score sets the maximum "Dynamic Exercise Bonus" that can be applied to skill checks in game: a base max of +3 at Iron 10, plus an additional +1 for every 2 points of Iron above 10. So if every 20 pushups gives you +1 on a "Strength" skill check, then doing 80 pushups will only give you the full +4 if you have at least 12 Iron.

Nourishment (NRS): "A properly nourished body will last longer than a famished one." This ability score, focused on by long-distance runners, represents a player's endurance and level of nutrition. Skill challenges may be called upon when making checks that involve the player character's stamina or health. Real-world skill checks may involve endurance exercises like long-distance running. Bonuses may apply if the player has eaten healthily or consumed enough water, and penalties may apply if they have eaten junk food. This ability score affects your HP (Health Points), which determine how much damage you can take before you are incapacitated.

Strength (STR): "When I get down on my hands, I'm not doing pushups, I'm bench-pressing the planet." The domain of powerlifters and strongmen, this ability score represents raw physical might and the ability to overcome obstacles. Skill challenges may be called upon when the player character needs to lift, push, or break something. Real-world skill checks might involve weightlifting exercises, feats of grip strength, or core stability tests. Bonuses may apply for consuming protein-rich foods or getting a good night's sleep, and penalties may apply after staying up late or indulging in excessive stimulants. This ability score affects your carrying capacity and base attack damage in game.

Zen (ZEN): "Clarity of mind reflects clarity of purpose. Still the waters within to act decisively without." This ability score, prized by meditators and yogis, represents mental focus, clarity, and inner peace. Skill challenges may be called upon when the player character needs to resist distractions, see through illusions, or make difficult decisions under pressure. Real-world skill checks may involve meditation, breathing exercises, or mindfulness activities. Bonuses may apply after attending a yoga class, spending time in nature, or creating a calm and organized living space.
Penalties may apply after experiencing significant stress, emotional turmoil, or having an unclean or unorganized living space. This ability score affects your amount of ZP in game (Zen Points: the pool of energy you draw from to use mystical abilities).

Determining initial player ability scores: Ability scores are decided during character creation by giving the player a list of 6 fitness tests to gauge their level of fitness in each category. Running each test result through a specific calculation outputs an ability score. A score of 10 represents the average person; a score of 20 represents a peak athlete in that category. The tests are:

Grace: Timed balance on one leg with eyes closed (10 seconds is average, 60 is peak)
Agility: Mile run time in minutes and seconds (10:00 is average, 3:47 is peak)
Iron: Timed dead hang from a pull-up bar (30 seconds is average, 160 is peak)
Nourishment: Miles run in an hour (4 is average, 12 is peak)
Strength: Pushups in 2 minutes (34 is average, 100 is peak)
Zen: Leg stretch in degrees (80 is average, and 180, aka "the splits", is peak)

Initial Score Calculation Formula: Ability Score = 10 + (Player Test Score - Average Score) / (Peak Score - Average Score) * 10

Example: if the player does 58 pushups in 2 minutes, their Strength would be: 10 + (58 - 34) / (100 - 34) * 10 = 10 + 24/66 * 10 = 10 + 3.6363... = 13.64, rounded to the nearest whole number: Strength (STR) 14.

SKILLS AND SKILL CHALLENGES: The core mechanic of the game is how skill challenges are resolved. All skill challenges have a numerical challenge rating (CR) that must be met or beaten by the sum of a 10-sided die roll and your score in the pertinent skill. Skill scores are determined by 2 factors:

Ability Score Bonus: Every 2 points above 10 gives a +1 bonus (e.g. 12 = +1, 14 = +2, etc.). This also means that if you have less than 10 in an ability score, you take negative points.

Personal Best Bonus: Each skill has its own unique associated exercise that can be measured (time, speed, distance, number of reps, etc.). A higher record means a higher bonus. EX: Authority skill checks are associated with a timed "lateral raise hold". Every 30 seconds of your personal best single attempt offers a +1 bonus. So if you can hold a lateral raise for 90 seconds, that's a +3 to your Authority check! And if you have a 16 in Iron on top of that 90-second personal best, that gives you an Authority score of +6. (T-Pose for dominance!)

Dynamic Exercise Bonus: This is where the unique mechanics of the game kick in. At any time during a skill challenge (even after your roll) you can add an additional modifier to the skill check by completing the associated exercise during gameplay! Did you roll just below the threshold for success? Crank out another 20 pushups, squats, or curls to push yourself just over the edge into success!
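As a worked illustration of the numbers above, here is a minimal Python sketch of the two core calculations: the initial ability score formula, and a skill check that combines the ability score bonus, the personal best bonus, and an Iron-capped dynamic exercise bonus. The function names and the skill_check wrapper are our own; the constants and formulas come straight from the rules text.

```python
import random

def initial_ability_score(test_score: float, average: float, peak: float) -> int:
    """Initial Score Calculation Formula from the rules:
    10 + (test - average) / (peak - average) * 10, rounded to the nearest whole number."""
    return round(10 + (test_score - average) / (peak - average) * 10)

def ability_bonus(score: int) -> int:
    """Every 2 points above 10 gives +1; scores below 10 go negative."""
    return (score - 10) // 2

def max_dynamic_bonus(iron_score: int) -> int:
    """Iron caps the Dynamic Exercise Bonus: +3 at Iron 10, +1 per 2 points above 10."""
    return 3 + max(0, (iron_score - 10) // 2)

def skill_check(cr: int, ability_score: int, pb_bonus: int,
                dynamic_bonus: int, iron_score: int) -> bool:
    """Roll a d10, add the skill score, and cap the dynamic exercise bonus by Iron."""
    roll = random.randint(1, 10)
    capped_dynamic = min(dynamic_bonus, max_dynamic_bonus(iron_score))
    total = roll + ability_bonus(ability_score) + pb_bonus + capped_dynamic
    return total >= cr

# The Strength example from the rules: 58 pushups in 2 minutes -> STR 14.
assert initial_ability_score(58, average=34, peak=100) == 14
```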
There are 19 skills total, each with its own associated ability score and unique exercise:

Grace (GRC):
- Kinesthesia (Timed: blind single-leg stand)
- Precision (Scored: basket throws)
- Charm (Timed reps: standing repeated forward dumbbell chest press and thrust)
- Stealth (Timed distance: leopard crawl)

Agility (AGI):
- Acrobatics (Timed reps: high kicks)
- Computers (Words per minute: typing test)
- Speed (Time: 100 meter sprint)

Iron (IRN):
- Authority (Timed: lateral raise hold)
- Resist (Timed: plank)
- Persist (Timed: pull-up bar dead hang)

Nourishment (NRS):
- Recovery (TBD)
- Stim crafting (TBD)
- Survival (TBD)

Strength (STR):
- Mechanics (Timed reps: alternating curls)
- Might (Timed reps: pushups)

Zen (ZEN):
- Perceive (TBD)
- Empathy (TBD)
- Harmony (TBD)
- Lore (TBD)

Healthy Habits Bonus: Demonstrating that you have practiced healthy habits during gameplay can also add one-time bonuses per skill challenge: "Drank a glass of water, +1 to Nourishment check"; "Cleaned your room, +3 on Zen check". But watch out: if you're caught in unhealthy habits, the GM can throw in penalties: "Ate junk food, -1 to Nourishment check", etc. Bonuses and penalties from in-game items, equipment, buffs, debuffs, and the like help players immerse themselves in the mechanics of the world of REP-SET and enjoy the thrill of constantly finding ways to improve their character.

Gradient success: Results of skill challenges can be pass or fail, but can also sit on a sliding scale of success. Are you racing to the battlefield? Depending on your Speed check, you might arrive early and have a tactical advantage, just in time for an even fight, or far too late, after some of your favorite allied NPCs have paid the price... So you're encouraged to stack those dynamic exercise bonuses when you can, to get the most fortuitous outcomes available to you.

Gameplay sample:

GM: Your REP-SET is a phantom, a streak of light against the vast hull of the warship. Enemy fighters buzz angrily, but you weave and dodge with uncanny precision. The energy wave might be losing effectiveness, but your agility and connection to the machine have never been stronger. Then, it happens. A gap in the defenses. A vulnerable seam in the warship's armor. Your coms agent's keen eye spots it instantly. "Lower power junction, starboard side! You have an opening!" This is your chance to strike the decisive blow. But how? It'll take a perfect combination of skill and strategy, drawing upon your various strengths. Here are your options:

Option 1: Brute Strength: Channel all remaining power into a single, overwhelming blast from the core. High-risk, high-reward. It could overload the REP-SET if you fail, but it might also cripple the warship. (Strength-focused, Might sub-skill)

Option 2: Calculated Strike: With surgical precision, target the power junction with a pinpoint burst of destabilizing energy. Less flashy and ultimately less damaging, but potentially more effective in temporarily disabling the ship. (Grace-focused, Precision sub-skill)

Option 3: Harmonic Disruption: Attempt to harmonize with your REP-SET's AI spirit for help in connecting to the digital systems of the warship. Can you generate an internal energy resonance within the warship, causing it to malfunction from within? (Zen-focused, Harmony sub-skill)

Player: I'll take option 1, brute strength!

GM: OK, this will be a "Might" check. The CR is going to be very high on this one. I'm setting it at a 20. What's your Might bonus?

Player: Dang, a 20?? That's literally impossible.
My Might is 15 and I've got a PB of 65 pushups in 2 minutes; that sets me at a +5. Even if I roll a 10 and do 60 pushups for the DE I'll only get 18 max.

GM: Hey, I told you it was high risk. You want to choose another option?

Player: No, no. This is what my character would do. I'm a real hot-blooded meathead for sure.

GM: OK then, roll a d10 and add your bonus.

Player: [Rolls] A 9! Not bad, actually that's a really good roll. So +5, that's a 14.

GM: Alright, would you like to add a dynamic exercise bonus?

Player: Duh. It's not like I can do the 120 pushups I'd need to beat the CR, but I can at least do better than 14. Alright, here goes. [The player gets down to do pushups and the 2-minute timer begins. After some time...]

Player: 65....... 66!

GM: Time's up.

Player: Ow... my arms...

GM: So with 66, that's an extra +3, and it's a new PB, so that's a +1. That sets your roll to 18.

Player: Ow... Frack... still not 20... for a second there I really believed I could do 120 pushups... well, I did my best... Ow... a 20 CR is just too impossible, you jerk...

GM: Hmm... Tell me, what did you eat for lunch today?

Player: Me? I made some vegetable and pork soup, and a protein shake. I recorded it all in my diet app.

GM: And how did you sleep last night?

Player: Like a baby, went to sleep early, woke up at 6.

GM: In that case, you can add a +1 "Protein bonus" and a +1 "Healthy rest" bonus to any strength-related check for the day if you'd like, including this one.

Player: Really?? Heck yes! Add it to the roll!

GM: With those extra bonuses, your roll reaches 20. How do you want to do this?

Player: I roar "For Terra!" and pour every last ounce of my strength into the REP-SET.

GM: "For Terra!" you roar, your cry echoing through the coms systems of the REP-SET. The core flares blindingly bright. The surge of power dwarfs anything the REP-SET has unleashed before. With a titanic shriek that cracks the very fabric of space, the REP-SET slams into the vulnerable power junction. Raw energy explodes outwards, tendrils of light arcing across the warship's massive hull. The impact is staggering. The leviathan-like warship buckles, its sleek form rippling with shockwaves. Sparks shower like rain; secondary explosions erupt as critical systems overload. Then... silence. The warship goes dark. Power flickers within the REP-SET itself, then steadies. Alarms fade, replaced by the eerie quiet of damaged but functional systems. "We... did it?" The coms agent's voice is incredulous, tinged with relief. She's awaiting your reply.

Player: "I guess so," I say, and I smile and laugh. And then I slump back... and fall unconscious. [To the other players] I'm not doing any more skill checks for a while, guys. Come pick me up please. [Teammates cheer]
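For anyone double-checking the GM's arithmetic, here is the sample's Might check as a self-contained snippet. The variable names and the split of the +5 skill bonus (+2 from STR 15, +3 from the 65-pushup personal best, at the same +1 per 20 reps rate used for the dynamic bonus) are our own reading of the dialogue:

```python
# Might check from the gameplay sample (CR 20).
cr = 20
roll = 9                  # d10 result
skill_bonus = 5           # STR 15 -> +2 ability bonus, plus +3 inferred from the 65-pushup PB
dynamic = 66 // 20        # +1 per 20 pushups performed at the table -> +3
new_pb = 1                # 66 beats the old PB of 65 -> +1
healthy_habits = 1 + 1    # "Protein bonus" and "Healthy rest" one-time bonuses

total = roll + skill_bonus + dynamic + new_pb + healthy_habits
assert total == 20        # just meets the CR: success!
```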

WE JUST GOT $2,500 in angel investment for our AI Cold Calling Startup! Hooray! Looking for web dev + digital marketing agencies to partner with.
reddit
LLM Vibe Score0
Human Vibe Score1
GrowthGetThis week

WE JUST GOT $2,500 in angel investment for our AI Cold Calling Startup! Hooray! Looking for web dev + digital marketing agencies to partner with.

Hey y'all. The AI cold calling startup I've been working on for 3-4 months now just got a $2,500 angel investment, and we have 2 current customers, a credit card processing broker and a hospital equipment rental company based out of Texas. We have around $1,500 revenue so far, but we're having lots of trouble fulfilling the contracts because our tech just isn't "there" yet. I'm the Chief Tech Officer, and I'm also running some operations. The other main person in this is the CEO who has a strong sales background and came up with the idea. I've been working purely remotely, and it's great having some income because I'm stuck at home because I'm disabled, basically... We're using 11labs, openai, google speech to text, and a sh*tty online dialer right now to run the first MVP, which runs locally on our "botrunners'" computers, and we're developing a web app with Django (Python) + React (JavaScript). Our plan is, after we get the webapp working better, to hire more botrunners for $3 per hour from countries like the Philippines and India, and we're going to try to track all the actions the botrunners take to be able to train the AI to run it fully automated. The biggest problem we're facing right now with the tech is reducing latency: it started at 27 seconds to get a response, and I've been able to get it down to 6 seconds, but people are still hanging up. We're trying several ways to mitigate this, including having pre-rendered speech play something like "Okay" or "As an artificial representative, I'm still learning to be quicker on the pickup. We appreciate your patience." One of the industries we want to target is international web development and digital marketing companies, and we want to use the bot to cold-call businesses to pitch them our services. The goal is to replace $30 an hour cold-callers from the USA with $3 per hour total-cost automation. Apparently the CEO was given a $5 million valuation from the strength of the MVP by a VC. Our investment so far was at a $300k valuation tho. It's exciting. Trying to get Twilio working to be able to make calls programmatically instead of using our hacky workaround. Let me know if you have any questions or feedback. Looking for digital marketing and web dev agencies to partner with to test the next stage of our business model. Thanks. I just wanted to share this awesome news!
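The filler-speech trick the post describes, playing a pre-rendered clip the moment the callee stops talking while the real response is still being generated, can be sketched with plain asyncio. Everything here is hypothetical scaffolding: generate_reply and play_audio stand in for the OpenAI/ElevenLabs/dialer calls the team actually uses, and the timings are the figures quoted in the post.

```python
import asyncio

# Hypothetical stand-ins for the real pipeline (LLM + TTS + dialer).
async def generate_reply(transcript: str) -> bytes:
    await asyncio.sleep(6)          # the ~6 s end-to-end latency from the post
    return b"<synthesized speech>"

async def play_audio(clip: bytes) -> None:
    await asyncio.sleep(2)          # pretend this clip takes 2 s to play

PRERENDERED_FILLER = b"<'Okay...' clip rendered ahead of time>"

async def respond(transcript: str) -> None:
    # Kick off the slow LLM + TTS pipeline immediately...
    reply_task = asyncio.create_task(generate_reply(transcript))
    # ...and mask the dead air with the pre-rendered filler while it runs.
    await play_audio(PRERENDERED_FILLER)
    # By the time the filler ends, part of the wait is already absorbed.
    await play_audio(await reply_task)

asyncio.run(respond("Hello, who is this?"))
```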

AI Interns for Small Businesses: Who Will Lead the Market?
reddit
LLM Vibe Score0
Human Vibe Score1
OstrichGrand8119This week

AI Interns for Small Businesses: Who Will Lead the Market?

I've been working on making my own AI tools (https://openai.com/blog/introducing-gpts), kind of like building a team but without the big costs. It's like having a bunch of helpful interns, but they're all computer programs. This got me thinking a lot about small businesses like ours. Building My Own AI Team on a Budget Making these AI tools felt like creating my own team. It's really cheap compared to hiring real people, and these AI interns can do lots of different jobs. This is a big deal for folks like us who don't have lots of money to spend. Spotting What's Missing for Small Businesses While playing around with this AI stuff, I noticed there are things missing that small businesses really need. There's a big chance here to make something that fills these gaps, a tool made just for small businesses. The Big Question: Competing with Big Companies But here's the tricky part. Big companies like OpenAI are making their own AI stuff, like the GPT Store and GPT Enterprise. This makes me wonder if it's a good idea to make a new product that's kind of the same but more focused on what small businesses need. The Big Choice: Special Tools vs. Big Company Tools We're at a crossroads about what's better: Special Tools: Making something that's just right for small businesses could be really useful and fit our needs better. Big Company Tools: But, big companies have more stuff to offer and are already well-known. I Want to Hear From You If you run a small business or like tech stuff, what do you think? Would you like a special AI tool made for small businesses, or would you rather use the big ones from famous companies? How do you think the future looks for AI help in small businesses with all these changes?

ChatGPT Full Course For 2025 | ChatGPT Tutorial For Beginnners | ChatGPT Course | Simplilearn
youtube
LLM Vibe Score0.369
Human Vibe Score0.26
SimplilearnMar 28, 2025

ChatGPT Full Course For 2025 | ChatGPT Tutorial For Beginnners | ChatGPT Course | Simplilearn

🔥Purdue - Applied Generative AI Specialization - https://www.simplilearn.com/applied-ai-course?utmcampaign=C4lBsBlloL0&utmmedium=Lives&utm_source=Youtube 🔥Professional Certificate Program in Generative AI and Machine Learning - IITG (India Only) - https://www.simplilearn.com/iitg-generative-ai-machine-learning-program?utmcampaign=C4lBsBlloL0&utmmedium=Lives&utm_source=Youtube 🔥Advanced Executive Program In Applied Generative AI - https://www.simplilearn.com/applied-generative-ai-course?utmcampaign=C4lBsBlloL0&utmmedium=Lives&utm_source=Youtube This ChatGPT Full Course 2025 by Simplilearn provides a comprehensive learning journey, starting with an introduction to ChatGPT and Generative AI, followed by insights into AI job opportunities and a comparison between ChatGPT 4.0 and 4.0 Turbo. The tutorial covers prompt engineering techniques, machine learning fundamentals, and running Llama models privately. Learners will explore ChatGPT-powered application development, its role in programming, and Excel automation. The course also dives into blogging, PowerPoint automation, customer support, and finance applications. Advanced topics like RAG vs. Prompt Tuning, prompt injection, and LangChain are included, along with discussions on OpenAI's latest innovations, including Sora and Strawberry. By the end, participants will gain a strong understanding of ChatGPT’s capabilities and monetization strategies. 🚀 Following are the topics covered in the ChatGPT Full Course 2025: 00:00:00 - Introduction to ChatGPT Full Course 2025 00:09:26 - What is ChatGPT 00:10:11 - What is Gen AI 00:26:29 - How to get Job in AI 00:27:06 - ChatGPT 40 vs ChatGPT 4 01:03:14 - Chatgpt analyse 02:13:57 - Prompt Engineering Tutorial 03:10:34 - What is Machine Learning 04:07:06 - Machine Learning Tutorial 04:08:13 - Run Lama Privately 04:23:50 - Search GPT 04:25:31 - Build App Using ChatGPT 06:31:11 - ChatGPT for Programming 06:46:08 - Prompt Formulae Chatgpt 07:58:38 - Automate Excel using Chatgpt 08:00:06 - Blogging with ChatGpt 08:27:25 - Powerpoint using Chatgpt 08:28:31 - Rag Vs Prompt Tuning 09:37:43 - Chatgpt for Customer Support 11:11:06 - ChatGPT for finance 11:17:38 - Prompt injection 11:18:38 - How to Earn Money using ChatGPT 11:41:46 - Open AI Strawberry 11:52:42 - Openai sora 11:54:57 - Langchain 12:22:19 - Open ai chatgpt o1 model ✅ Subscribe to our Channel to learn more about the top Technologies: https://bit.ly/2VT4WtH ⏩ Check out the Artificial Intelligence training videos: https://youtube.com/playlist?list=PLEiEAq2VkUULa5aOQmO_al2VVmhC-eqeI #gpt #chatgpt #chatgptforbeginners #chatgptcourse #genai #generativeai #artificialintelligence #ai #machinelearning #llm #simplilearn #2025 ➡️ About Professional Certificate Program in Generative AI and Machine Learning Dive into the future of AI with our Generative AI & Machine Learning course, in collaboration with E&ICT Academy, IIT Guwahati. Learn tools like ChatGPT, OpenAI, Hugging Face, Python, and more. Join masterclasses led by IITG faculty, engage in hands-on projects, and earn Executive Alumni Status. 
Key Features: ✅ Program completion certificate from E&ICT Academy, IIT Guwahati ✅ Curriculum delivered in live virtual classes by seasoned industry experts ✅ Exposure to the latest AI advancements, such as generative AI, LLMs, and prompt engineering ✅ Interactive live-virtual masterclasses delivered by esteemed IIT Guwahati faculty ✅ Opportunity to earn an 'Executive Alumni Status' from E&ICT Academy, IIT Guwahati ✅ Eligibility for a campus immersion program organized at IIT Guwahati ✅ Exclusive hackathons and “ask-me-anything” sessions by IBM ✅ Certificates for IBM courses and industry masterclasses by IBM experts ✅ Practical learning through 25+ hands-on projects and 3 industry-oriented capstone projects ✅ Access to a wide array of AI tools such as ChatGPT, Hugging Face, DALL-E 2, Midjourney and more ✅ Simplilearn's JobAssist helps you get noticed by top hiring companies Skills Covered: ✅ Generative AI ✅ Prompt Engineering ✅ Chatbot Development ✅ Supervised and Unsupervised Learning ✅ Model Training and Optimization ✅ Model Evaluation and Validation ✅ Ensemble Methods ✅ Deep Learning ✅ Natural Language Processing ✅ Computer Vision ✅ Reinforcement Learning ✅ Machine Learning Algorithms ✅ Speech Recognition ✅ Statistics Learning Path: ✅ Program Induction ✅ Programming Fundamentals ✅ Python for Data Science (IBM) ✅ Applied Data Science with Python ✅ Machine Learning ✅ Deep Learning with TensorFlow (IBM) ✅ Deep Learning Specialization ✅ Essentials of Generative AI, Prompt Engineering & ChatGPT ✅ Advanced Generative AI ✅ Capstone Electives: ✅ ADL & Computer Vision ✅ NLP and Speech Recognition ✅ Reinforcement Learning ✅ Academic Masterclass ✅ Industry Masterclass 👉 Learn More At: https://www.simplilearn.com/iitg-generative-ai-machine-learning-program?utmcampaign=C4lBsBlloL0&utmmedium=Lives&utm_source=Youtube

LLMs-from-scratch
github
LLM Vibe Score0.62
Human Vibe Score1
rasbtMar 28, 2025

LLMs-from-scratch

Build a Large Language Model (From Scratch)

This repository contains the code for developing, pretraining, and finetuning a GPT-like LLM and is the official code repository for the book Build a Large Language Model (From Scratch). In Build a Large Language Model (From Scratch), you'll learn and understand how large language models (LLMs) work from the inside out by coding them from the ground up, step by step. In this book, I'll guide you through creating your own LLM, explaining each stage with clear text, diagrams, and examples. The method described in this book for training and developing your own small-but-functional model for educational purposes mirrors the approach used in creating large-scale foundational models such as those behind ChatGPT. In addition, this book includes code for loading the weights of larger pretrained models for finetuning.

Link to the official source code repository
Link to the book at Manning (the publisher's website)
Link to the book page on Amazon.com
ISBN 9781633437166

To download a copy of this repository, click on the Download ZIP button or execute the following command in your terminal (the clone URL follows from the repository link given above): git clone https://github.com/rasbt/LLMs-from-scratch.git (If you downloaded the code bundle from the Manning website, please consider visiting the official code repository on GitHub at https://github.com/rasbt/LLMs-from-scratch for the latest updates.)

Table of Contents

Please note that this README.md file is a Markdown (.md) file. If you have downloaded this code bundle from the Manning website and are viewing it on your local computer, I recommend using a Markdown editor or previewer for proper viewing. If you haven't installed a Markdown editor yet, MarkText is a good free option. You can alternatively view this and other files on GitHub at https://github.com/rasbt/LLMs-from-scratch in your browser, which renders Markdown automatically. Tip: If you're seeking guidance on installing Python and Python packages and setting up your code environment, I suggest reading the README.md file located in the setup directory.
| Chapter Title | Main Code (for Quick Access) | All Code + Supplementary |
|---------------|------------------------------|--------------------------|
| Setup recommendations | - | - |
| Ch 1: Understanding Large Language Models | No code | - |
| Ch 2: Working with Text Data | ch02.ipynb; dataloader.ipynb (summary); exercise-solutions.ipynb | ./ch02 |
| Ch 3: Coding Attention Mechanisms | ch03.ipynb; multihead-attention.ipynb (summary); exercise-solutions.ipynb | ./ch03 |
| Ch 4: Implementing a GPT Model from Scratch | ch04.ipynb; gpt.py (summary); exercise-solutions.ipynb | ./ch04 |
| Ch 5: Pretraining on Unlabeled Data | ch05.ipynb; gpt_train.py (summary); gpt_generate.py (summary); exercise-solutions.ipynb | ./ch05 |
| Ch 6: Finetuning for Text Classification | ch06.ipynb; gpt_class_finetune.py; exercise-solutions.ipynb | ./ch06 |
| Ch 7: Finetuning to Follow Instructions | ch07.ipynb; gpt_instruction_finetuning.py (summary); ollama_evaluate.py (summary); exercise-solutions.ipynb | ./ch07 |
| Appendix A: Introduction to PyTorch | code-part1.ipynb; code-part2.ipynb; DDP-script.py; exercise-solutions.ipynb | ./appendix-A |
| Appendix B: References and Further Reading | No code | - |
| Appendix C: Exercise Solutions | No code | - |
| Appendix D: Adding Bells and Whistles to the Training Loop | appendix-D.ipynb | ./appendix-D |
| Appendix E: Parameter-efficient Finetuning with LoRA | appendix-E.ipynb | ./appendix-E |

The mental model below summarizes the contents covered in this book.

Hardware Requirements

The code in the main chapters of this book is designed to run on conventional laptops within a reasonable timeframe and does not require specialized hardware. This approach ensures that a wide audience can engage with the material. Additionally, the code automatically utilizes GPUs if they are available. (Please see the setup doc for additional recommendations.)
Bonus Material

Several folders contain optional materials as a bonus for interested readers:

Setup:
- Python Setup Tips
- Installing Python Packages and Libraries Used In This Book
- Docker Environment Setup Guide

Chapter 2: Working with text data
- Byte Pair Encoding (BPE) Tokenizer From Scratch
- Comparing Various Byte Pair Encoding (BPE) Implementations
- Understanding the Difference Between Embedding Layers and Linear Layers
- Dataloader Intuition with Simple Numbers

Chapter 3: Coding attention mechanisms
- Comparing Efficient Multi-Head Attention Implementations
- Understanding PyTorch Buffers

Chapter 4: Implementing a GPT model from scratch
- FLOPS Analysis

Chapter 5: Pretraining on unlabeled data
- Alternative Weight Loading Methods
- Pretraining GPT on the Project Gutenberg Dataset
- Adding Bells and Whistles to the Training Loop
- Optimizing Hyperparameters for Pretraining
- Building a User Interface to Interact With the Pretrained LLM
- Converting GPT to Llama
- Llama 3.2 From Scratch
- Memory-efficient Model Weight Loading
- Extending the Tiktoken BPE Tokenizer with New Tokens
- PyTorch Performance Tips for Faster LLM Training

Chapter 6: Finetuning for classification
- Additional experiments finetuning different layers and using larger models
- Finetuning different models on 50k IMDB movie review dataset
- Building a User Interface to Interact With the GPT-based Spam Classifier

Chapter 7: Finetuning to follow instructions
- Dataset Utilities for Finding Near Duplicates and Creating Passive Voice Entries
- Evaluating Instruction Responses Using the OpenAI API and Ollama
- Generating a Dataset for Instruction Finetuning
- Improving a Dataset for Instruction Finetuning
- Generating a Preference Dataset with Llama 3.1 70B and Ollama
- Direct Preference Optimization (DPO) for LLM Alignment
- Building a User Interface to Interact With the Instruction Finetuned GPT Model

Questions, Feedback, and Contributing to This Repository

I welcome all sorts of feedback, best shared via the Manning Forum or GitHub Discussions. Likewise, if you have any questions or just want to bounce ideas off others, please don't hesitate to post these in the forum as well. Please note that since this repository contains the code corresponding to a print book, I currently cannot accept contributions that would extend the contents of the main chapter code, as it would introduce deviations from the physical book. Keeping it consistent helps ensure a smooth experience for everyone.

Citation

If you find this book or code useful for your research, please consider citing it. Chicago-style citation: Raschka, Sebastian. Build A Large Language Model (From Scratch). Manning, 2024. ISBN: 978-1633437166. BibTeX entry:
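The BibTeX entry itself did not survive extraction; the following is a reconstruction from the Chicago-style citation above (the entry key is our own choice):

```bibtex
@book{raschka2024buildllm,
  author    = {Sebastian Raschka},
  title     = {Build A Large Language Model (From Scratch)},
  publisher = {Manning},
  year      = {2024},
  isbn      = {978-1633437166}
}
```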

Prompt_Engineering
github
LLM Vibe Score0.611
Human Vibe Score0.9298414218113789
NirDiamantMar 28, 2025

Prompt_Engineering

🌟 Support This Project: Your sponsorship fuels innovation in prompt engineering development. Become a sponsor to help maintain and expand this valuable resource!

Prompt Engineering Techniques: Comprehensive Repository for Development and Implementation 🖋️

Welcome to one of the most extensive and dynamic collections of Prompt Engineering tutorials and implementations available today. This repository serves as a comprehensive resource for learning, building, and sharing prompt engineering techniques, ranging from basic concepts to advanced strategies for leveraging large language models.

📫 Stay Updated! 🚀 Cutting-edge Updates 💡 Expert Insights 🎯 Top 0.1% Content

Join over 15,000 AI enthusiasts getting unique cutting-edge insights and free tutorials! Plus, subscribers get exclusive early access and special discounts to our upcoming RAG Techniques course!

Introduction

Prompt engineering is at the forefront of artificial intelligence, revolutionizing the way we interact with and leverage AI technologies. This repository is designed to guide you through the development journey, from basic prompt structures to advanced, cutting-edge techniques. Our goal is to provide a valuable resource for everyone - from beginners taking their first steps in AI to seasoned practitioners pushing the boundaries of what's possible. By offering a range of examples from foundational to complex, we aim to facilitate learning, experimentation, and innovation in the rapidly evolving field of prompt engineering. Furthermore, this repository serves as a platform for showcasing innovative prompt engineering techniques. Whether you've developed a novel approach or found an innovative application for existing techniques, we encourage you to share your work with the community.

📖 Get the Fully Explained Version of This Repo

This repository contains 22 hands-on Jupyter Notebook tutorials covering key prompt engineering techniques. If you want to go deeper with full explanations, intuitive insights, and structured exercises, check out the expanded version in book format: 📚 Prompt Engineering from Zero to Hero. 📖 All 22 techniques from this repo, fully explained in depth. 🧠 Step-by-step breakdowns of key concepts & best practices. 🏋️ Hands-on exercises to sharpen your skills. 🎯 Designed for learners who want a structured, guided approach. 📄 Instant access to the PDF upon purchase. 📱 Readable on any device - computer, tablet, or phone. 💡 Subscribers to the DiamantAI newsletter receive an exclusive 33% (!) discount on the book. 👉 Get the full explained version here

Related Projects

📚 Explore my comprehensive guide on RAG techniques to learn how to enhance AI systems with external knowledge retrieval, complementing language model capabilities with rich, up-to-date information. 🤖 Dive into my GenAI Agents Repository for a wide range of AI agent implementations and tutorials, from simple conversational bots to complex, multi-agent systems for various applications.

A Community-Driven Knowledge Hub

This repository grows stronger with your contributions! Join our vibrant Discord community, the central hub for shaping and advancing this project together 🤝 DiamantAI Discord Community. Whether you're a novice eager to learn or an expert ready to share your knowledge, your insights can shape the future of prompt engineering. Join us to propose ideas, get feedback, and collaborate on innovative implementations. For contribution guidelines, please refer to our CONTRIBUTING.md file. Let's advance prompt engineering technology together!
🔗 For discussions on GenAI, or to explore knowledge-sharing opportunities, feel free to connect on LinkedIn. Key Features 🎓 Learn prompt engineering techniques from beginner to advanced levels 🧠 Explore a wide range of prompt structures and applications 📚 Step-by-step tutorials and comprehensive documentation 🛠️ Practical, ready-to-use prompt implementations 🌟 Regular updates with the latest advancements in prompt engineering 🤝 Share your own prompt engineering creations with the community Prompt Engineering Techniques Explore our extensive list of prompt engineering techniques, ranging from basic to advanced: 🌱 Fundamental Concepts Introduction to Prompt Engineering Overview 🔎 A comprehensive introduction to the fundamental concepts of prompt engineering in the context of AI and language models. Implementation 🛠️ Combines theoretical explanations with practical demonstrations, covering basic concepts, structured prompts, comparative analysis, and problem-solving applications. Basic Prompt Structures Overview 🔎 Explores two fundamental types of prompt structures: single-turn prompts and multi-turn prompts (conversations). Implementation 🛠️ Uses OpenAI's GPT model and LangChain to demonstrate single-turn and multi-turn prompts, prompt templates, and conversation chains. Prompt Templates and Variables Overview 🔎 Introduces creating and using prompt templates with variables, focusing on Python and the Jinja2 templating engine. Implementation 🛠️ Covers template creation, variable insertion, conditional content, list processing, and integration with the OpenAI API. 🔧 Core Techniques Zero-Shot Prompting Overview 🔎 Explores zero-shot prompting, allowing language models to perform tasks without specific examples or prior training. Implementation 🛠️ Demonstrates direct task specification, role-based prompting, format specification, and multi-step reasoning using OpenAI and LangChain. Few-Shot Learning and In-Context Learning Overview 🔎 Covers Few-Shot Learning and In-Context Learning techniques using OpenAI's GPT models and the LangChain library. Implementation 🛠️ Implements basic and advanced few-shot learning, in-context learning, and best practices for example selection and evaluation. Chain of Thought (CoT) Prompting Overview 🔎 Introduces Chain of Thought (CoT) prompting, encouraging AI models to break down complex problems into step-by-step reasoning processes. Implementation 🛠️ Covers basic and advanced CoT techniques, applying them to various problem-solving scenarios and comparing results with standard prompts. 🔍 Advanced Strategies Self-Consistency and Multiple Paths of Reasoning Overview 🔎 Explores techniques for generating diverse reasoning paths and aggregating results to improve AI-generated answers. Implementation 🛠️ Demonstrates designing diverse reasoning prompts, generating multiple responses, implementing aggregation methods, and applying self-consistency checks. Constrained and Guided Generation Overview 🔎 Focuses on techniques to set up constraints for model outputs and implement rule-based generation. Implementation 🛠️ Uses LangChain's PromptTemplate for structured prompts, implements constraints, and explores rule-based generation techniques. Role Prompting Overview 🔎 Explores assigning specific roles to AI models and crafting effective role descriptions. Implementation 🛠️ Demonstrates creating role-based prompts, assigning roles to AI models, and refining role descriptions for various scenarios. 
🚀 Advanced Implementations Task Decomposition in Prompts Overview 🔎 Explores techniques for breaking down complex tasks and chaining subtasks in prompts. Implementation 🛠️ Covers problem analysis, subtask definition, targeted prompt engineering, sequential execution, and result synthesis. Prompt Chaining and Sequencing Overview 🔎 Demonstrates how to connect multiple prompts and build logical flows for complex AI-driven tasks. Implementation 🛠️ Explores basic prompt chaining, sequential prompting, dynamic prompt generation, and error handling within prompt chains. Instruction Engineering Overview 🔎 Focuses on crafting clear and effective instructions for language models, balancing specificity and generality. Implementation 🛠️ Covers creating and refining instructions, experimenting with different structures, and implementing iterative improvement based on model responses. 🎨 Optimization and Refinement Prompt Optimization Techniques Overview 🔎 Explores advanced techniques for optimizing prompts, focusing on A/B testing and iterative refinement. Implementation 🛠️ Demonstrates A/B testing of prompts, iterative refinement processes, and performance evaluation using relevant metrics. Handling Ambiguity and Improving Clarity Overview 🔎 Focuses on identifying and resolving ambiguous prompts and techniques for writing clearer prompts. Implementation 🛠️ Covers analyzing ambiguous prompts, implementing strategies to resolve ambiguity, and exploring techniques for writing clearer prompts. Prompt Length and Complexity Management Overview 🔎 Explores techniques for managing prompt length and complexity when working with large language models. Implementation 🛠️ Demonstrates techniques for balancing detail and conciseness, and strategies for handling long contexts including chunking, summarization, and iterative processing. 🛠️ Specialized Applications Negative Prompting and Avoiding Undesired Outputs Overview 🔎 Explores negative prompting and techniques for avoiding undesired outputs from large language models. Implementation 🛠️ Covers basic negative examples, explicit exclusions, constraint implementation using LangChain, and methods for evaluating and refining negative prompts. Prompt Formatting and Structure Overview 🔎 Explores various prompt formats and structural elements, demonstrating their impact on AI model responses. Implementation 🛠️ Demonstrates creating various prompt formats, incorporating structural elements, and comparing responses from different prompt structures. Prompts for Specific Tasks Overview 🔎 Explores the creation and use of prompts for specific tasks: text summarization, question-answering, code generation, and creative writing. Implementation 🛠️ Covers designing task-specific prompt templates, implementing them using LangChain, executing with sample inputs, and analyzing outputs for each task type. 🌍 Advanced Applications Multilingual and Cross-lingual Prompting Overview 🔎 Explores techniques for designing prompts that work effectively across multiple languages and for language translation tasks. Implementation 🛠️ Covers creating multilingual prompts, implementing language detection and adaptation, designing cross-lingual translation prompts, and handling various writing systems and scripts. Ethical Considerations in Prompt Engineering Overview 🔎 Explores the ethical dimensions of prompt engineering, focusing on avoiding biases and creating inclusive and fair prompts. 
Implementation 🛠️ Covers identifying biases in prompts, implementing strategies to create inclusive prompts, and methods to evaluate and improve the ethical quality of AI outputs. Prompt Security and Safety Overview 🔎 Focuses on preventing prompt injections and implementing content filters in prompts for safe and secure AI applications. Implementation 🛠️ Covers techniques for prompt injection prevention, content filtering implementation, and testing the effectiveness of security and safety measures. Evaluating Prompt Effectiveness Overview 🔎 Explores methods and techniques for evaluating the effectiveness of prompts in AI language models. Implementation 🛠️ Covers setting up evaluation metrics, implementing manual and automated evaluation techniques, and providing practical examples using OpenAI and LangChain. Getting Started To begin exploring and implementing prompt engineering techniques: Clone this repository: Navigate to the technique you're interested in: Follow the detailed implementation guide in each technique's notebook. Contributing We welcome contributions from the community! If you have a new technique or improvement to suggest: Fork the repository Create your feature branch: git checkout -b feature/AmazingFeature Commit your changes: git commit -m 'Add some AmazingFeature' Push to the branch: git push origin feature/AmazingFeature Open a pull request License This project is licensed under a custom non-commercial license - see the LICENSE file for details. ⭐️ If you find this repository helpful, please consider giving it a star! Keywords: Prompt Engineering, AI, Machine Learning, Natural Language Processing, LLM, Language Models, NLP, Conversational AI, Zero-Shot Learning, Few-Shot Learning, Chain of Thought
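As a small taste of the "Prompt Templates and Variables" technique from the list above, here is a minimal sketch using the Jinja2 templating engine that the repository names. The template text and variable names are our own illustration, not code taken from the repo:

```python
from jinja2 import Template

# A reusable prompt template with variables and a conditional section.
prompt_template = Template(
    "You are a {{ role }}.\n"
    "Summarize the following text in {{ num_sentences }} sentences"
    "{% if audience %} for {{ audience }}{% endif %}:\n\n"
    "{{ text }}"
)

prompt = prompt_template.render(
    role="concise technical writer",
    num_sentences=2,
    audience="beginners",
    text="Prompt engineering is the practice of designing model inputs ...",
)
print(prompt)  # Send this rendered string to your LLM of choice.
```

Keeping the template separate from the variables makes prompts easy to reuse, A/B test, and version-control, which is the point the repository's optimization sections build on.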

AITreasureBox
github
LLM Vibe Score0.447
Human Vibe Score0.1014145151561518
superiorluMar 28, 2025

AITreasureBox

AI TreasureBox English | 中文 Collect practical AI repos, tools, websites, papers and tutorials on AI. Translated from ChatGPT, picture from Midjourney. Catalog Repos Tools Websites Report&Paper Tutorials Repos updated repos and stars every 2 hours and re-ranking automatically. | No. | Repos | Description | | ----:|:-----------------------------------------|:------------------------------------------------------------------------------------------------------| | 1|🔥codecrafters-io/build-your-own-x !2025-03-28364681428|Master programming by recreating your favorite technologies from scratch.| | 2|sindresorhus/awesome !2025-03-28353614145|😎 Awesome lists about all kinds of interesting topics| | 3|public-apis/public-apis !2025-03-28334299125|A collective list of free APIs| | 4|kamranahmedse/developer-roadmap !2025-03-2831269540|Interactive roadmaps, guides and other educational content to help developers grow in their careers.| | 5|vinta/awesome-python !2025-03-28238581114|A curated list of awesome Python frameworks, libraries, software and resources| | 6|practical-tutorials/project-based-learning !2025-03-28222661124|Curated list of project-based tutorials| | 7|tensorflow/tensorflow !2025-03-281888714|An Open Source Machine Learning Framework for Everyone| | 8|Significant-Gravitas/AutoGPT !2025-03-2817391338|An experimental open-source attempt to make GPT-4 fully autonomous.| | 9|jackfrued/Python-100-Days !2025-03-2816305141|Python - 100天从新手到大师| | 10|AUTOMATIC1111/stable-diffusion-webui !2025-03-2815011553|Stable Diffusion web UI| | 11|huggingface/transformers !2025-03-2814207850|🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX.| | 12|ollama/ollama !2025-03-28135166151|Get up and running with Llama 2, Mistral, Gemma, and other large language models.| | 13|f/awesome-chatgpt-prompts !2025-03-2812212738 |This repo includes ChatGPT prompt curation to use ChatGPT better.| | 14|justjavac/free-programming-books-zhCN !2025-03-2811316119|📚 免费的计算机编程类中文书籍,欢迎投稿| | 15|krahets/hello-algo !2025-03-2811107930|《Hello 算法》:动画图解、一键运行的数据结构与算法教程。支持 Python, Java, C++, C, C#, JS, Go, Swift, Rust, Ruby, Kotlin, TS, Dart 代码。简体版和繁体版同步更新,English version ongoing| | 16|yt-dlp/yt-dlp !2025-03-28105801114|A feature-rich command-line audio/video downloader| | 17|langchain-ai/langchain !2025-03-2810449479|⚡ Building applications with LLMs through composability ⚡| | 18|goldbergyoni/nodebestpractices !2025-03-281021629|✅ The Node.js best practices list (July 2024)| | 19|puppeteer/puppeteer !2025-03-289018212|JavaScript API for Chrome and Firefox| | 20|pytorch/pytorch !2025-03-288833938|Tensors and Dynamic neural networks in Python with strong GPU acceleration| | 21|neovim/neovim !2025-03-288781482|Vim-fork focused on extensibility and usability| | 22|🔥🔥langgenius/dify !2025-03-2887342639 |One API for plugins and datasets, one interface for prompt engineering and visual operation, all for creating powerful AI applications.| | 23|mtdvio/every-programmer-should-know !2025-03-28867069|A collection of (mostly) technical things every software developer should know about| | 24|open-webui/open-webui !2025-03-2886025159|User-friendly WebUI for LLMs (Formerly Ollama WebUI)| | 25|ChatGPTNextWeb/NextChat !2025-03-288231521|✨ Light and Fast AI Assistant. 
Support: Web | | 26|supabase/supabase !2025-03-287990956|The open source Firebase alternative.| | 27|openai/whisper !2025-03-287905542|Robust Speech Recognition via Large-Scale Weak Supervision| | 28|home-assistant/core !2025-03-287773219|🏡 Open source home automation that puts local control and privacy first.| | 29|tensorflow/models !2025-03-28774694|Models and examples built with TensorFlow| | 30| ggerganov/llama.cpp !2025-03-287731836 | Port of Facebook's LLaMA model in C/C++ | | 31|3b1b/manim !2025-03-287641918|Animation engine for explanatory math videos| | 32|microsoft/generative-ai-for-beginners !2025-03-287623860|12 Lessons, Get Started Building with Generative AI 🔗 https://microsoft.github.io/generative-ai-for-beginners/| | 33|nomic-ai/gpt4all !2025-03-28729285 |gpt4all: an ecosystem of open-source chatbots trained on a massive collection of clean assistant data including code, stories and dialogue| | 34|comfyanonymous/ComfyUI !2025-03-2872635111|The most powerful and modular diffusion model GUI, api and backend with a graph/nodes interface.| | 35|bregman-arie/devops-exercises !2025-03-2872225209|Linux, Jenkins, AWS, SRE, Prometheus, Docker, Python, Ansible, Git, Kubernetes, Terraform, OpenStack, SQL, NoSQL, Azure, GCP, DNS, Elastic, Network, Virtualization. DevOps Interview Questions| | 36|elastic/elasticsearch !2025-03-28721419|Free and Open, Distributed, RESTful Search Engine| | 37|🔥n8n-io/n8n !2025-03-2872093495|Free and source-available fair-code licensed workflow automation tool. Easily automate tasks across different services.| | 38|fighting41love/funNLP !2025-03-287200422|The Most Powerful NLP-Weapon Arsenal| | 39|hoppscotch/hoppscotch !2025-03-287060134|Open source API development ecosystem - https://hoppscotch.io (open-source alternative to Postman, Insomnia)| | 40|abi/screenshot-to-code !2025-03-286932817|Drop in a screenshot and convert it to clean HTML/Tailwind/JS code| | 41|binary-husky/gptacademic !2025-03-28680374|Academic Optimization of GPT| | 42|d2l-ai/d2l-zh !2025-03-286774142|Targeting Chinese readers, functional and open for discussion. The Chinese and English versions are used for teaching in over 400 universities across more than 60 countries| | 43|josephmisiti/awesome-machine-learning !2025-03-286739215|A curated list of awesome Machine Learning frameworks, libraries and software.| | 44|grafana/grafana !2025-03-286725414|The open and composable observability and data visualization platform. Visualize metrics, logs, and traces from multiple sources like Prometheus, Loki, Elasticsearch, InfluxDB, Postgres and many more.| | 45|python/cpython !2025-03-286602218|The Python programming language| | 46|apache/superset !2025-03-286519020|Apache Superset is a Data Visualization and Data Exploration Platform| | 47|xtekky/gpt4free !2025-03-28639391 |decentralizing the Ai Industry, free gpt-4/3.5 scripts through several reverse engineered API's ( poe.com, phind.com, chat.openai.com etc...)| | 48|sherlock-project/sherlock !2025-03-286332536|Hunt down social media accounts by username across social networks| | 49|twitter/the-algorithm !2025-03-28630586 |Source code for Twitter's Recommendation Algorithm| | 50|keras-team/keras !2025-03-28627835|Deep Learning for humans| | 51|openai/openai-cookbook !2025-03-28625136 |Examples and guides for using the OpenAI API| | 52|immich-app/immich !2025-03-286238670|High performance self-hosted photo and video management solution.| | 53|AppFlowy-IO/AppFlowy !2025-03-286173528|Bring projects, wikis, and teams together with AI. 
AppFlowy is an AI collaborative workspace where you achieve more without losing control of your data. The best open source alternative to Notion.| | 54|scikit-learn/scikit-learn !2025-03-286158212|scikit-learn: machine learning in Python| | 55|binhnguyennus/awesome-scalability !2025-03-286117021|The Patterns of Scalable, Reliable, and Performant Large-Scale Systems| | 56|labmlai/annotateddeeplearningpaperimplementations !2025-03-285951726|🧑‍🏫 59 Implementations/tutorials of deep learning papers with side-by-side notes 📝; including transformers (original, xl, switch, feedback, vit, ...), optimizers (adam, adabelief, ...), gans(cyclegan, stylegan2, ...), 🎮 reinforcement learning (ppo, dqn), capsnet, distillation, ... 🧠| | 57|OpenInterpreter/open-interpreter !2025-03-285894710|A natural language interface for computers| | 58|lobehub/lobe-chat !2025-03-285832054|🤖 Lobe Chat - an open-source, extensible (Function Calling), high-performance chatbot framework. It supports one-click free deployment of your private ChatGPT/LLM web application.| | 59|meta-llama/llama !2025-03-28579536|Inference code for Llama models| | 60|nuxt/nuxt !2025-03-28566437|The Intuitive Vue Framework.| | 61|imartinez/privateGPT !2025-03-28555192|Interact with your documents using the power of GPT, 100% privately, no data leaks| | 62|Stirling-Tools/Stirling-PDF !2025-03-285500846|#1 Locally hosted web application that allows you to perform various operations on PDF files| | 63|PlexPt/awesome-chatgpt-prompts-zh !2025-03-285459720|ChatGPT Chinese Training Guide. Guidelines for various scenarios. Learn how to make it listen to you| | 64|dair-ai/Prompt-Engineering-Guide !2025-03-285451025 |🐙 Guides, papers, lecture, notebooks and resources for prompt engineering| | 65|ageitgey/facerecognition !2025-03-28544382|The world's simplest facial recognition api for Python and the command line| | 66|CorentinJ/Real-Time-Voice-Cloning !2025-03-285384814|Clone a voice in 5 seconds to generate arbitrary speech in real-time| | 67|geekan/MetaGPT !2025-03-285375376|The Multi-Agent Meta Programming Framework: Given one line Requirement, return PRD, Design, Tasks, Repo | | 68|gpt-engineer-org/gpt-engineer !2025-03-285367419|Specify what you want it to build, the AI asks for clarification, and then builds it.| | 69|lencx/ChatGPT !2025-03-2853653-3|🔮 ChatGPT Desktop Application (Mac, Windows and Linux)| | 70|deepfakes/faceswap !2025-03-28535672|Deepfakes Software For All| | 71|langflow-ai/langflow !2025-03-285319584|Langflow is a low-code app builder for RAG and multi-agent AI applications. It’s Python-based and agnostic to any model, API, or database.| | 72|commaai/openpilot !2025-03-28529759|openpilot is an operating system for robotics. 
Currently, it upgrades the driver assistance system on 275+ supported cars.| | 73|clash-verge-rev/clash-verge-rev !2025-03-2852848124|Continuation of Clash Verge - A Clash Meta GUI based on Tauri (Windows, MacOS, Linux)| | 74|All-Hands-AI/OpenHands !2025-03-285150675|🙌 OpenHands: Code Less, Make More| | 75|xai-org/grok-1 !2025-03-28502504|Grok open release| | 76|meilisearch/meilisearch !2025-03-284999122|A lightning-fast search API that fits effortlessly into your apps, websites, and workflow| | 77|🔥browser-use/browser-use !2025-03-2849910294|Make websites accessible for AI agents| | 78|jgthms/bulma !2025-03-28496783|Modern CSS framework based on Flexbox| | 79|facebookresearch/segment-anything !2025-03-284947116|The repository provides code for running inference with the SegmentAnything Model (SAM), links for downloading the trained model checkpoints, and example notebooks that show how to use the model.| |!green-up-arrow.svg 80|hacksider/Deep-Live-Cam !2025-03-2848612146|real time face swap and one-click video deepfake with only a single image (uncensored)| |!red-down-arrow 81|mlabonne/llm-course !2025-03-284860934|Course with a roadmap and notebooks to get into Large Language Models (LLMs).| | 82|PaddlePaddle/PaddleOCR !2025-03-284785530|Awesome multilingual OCR toolkits based on PaddlePaddle (practical ultra lightweight OCR system, support 80+ languages recognition, provide data annotation and synthesis tools, support training and deployment among server, mobile, embedded and IoT devices)| | 83|alist-org/alist !2025-03-284732618|🗂️A file list/WebDAV program that supports multiple storages, powered by Gin and Solidjs. / 一个支持多存储的文件列表/WebDAV程序,使用 Gin 和 Solidjs。| | 84|infiniflow/ragflow !2025-03-2847027129|RAGFlow is an open-source RAG (Retrieval-Augmented Generation) engine based on deep document understanding.| | 85|Avik-Jain/100-Days-Of-ML-Code !2025-03-284679312|100 Days of ML Coding| | 86|v2ray/v2ray-core !2025-03-28458706|A platform for building proxies to bypass network restrictions.| | 87|hiyouga/LLaMA-Factory !2025-03-284555881|Easy-to-use LLM fine-tuning framework (LLaMA, BLOOM, Mistral, Baichuan, Qwen, ChatGLM)| | 88|Asabeneh/30-Days-Of-Python !2025-03-284544930|30 days of Python programming challenge is a step-by-step guide to learn the Python programming language in 30 days. This challenge may take more than100 days, follow your own pace. These videos may help too: https://www.youtube.com/channel/UC7PNRuno1rzYPb1xLa4yktw| | 89|type-challenges/type-challenges !2025-03-284488511|Collection of TypeScript type challenges with online judge| | 90|lllyasviel/Fooocus !2025-03-284402716|Focus on prompting and generating| | 91|RVC-Boss/GPT-SoVITS !2025-03-284327738|1 min voice data can also be used to train a good TTS model! (few shot voice cloning)| | 92|rasbt/LLMs-from-scratch !2025-03-284320667|Implementing a ChatGPT-like LLM from scratch, step by step| | 93|oobabooga/text-generation-webui !2025-03-284302012 |A gradio web UI for running Large Language Models like LLaMA, llama.cpp, GPT-J, OPT, and GALACTICA.| | 94|vllm-project/vllm !2025-03-2842982102|A high-throughput and memory-efficient inference and serving engine for LLMs| | 95|dani-garcia/vaultwarden !2025-03-284297121|Unofficial Bitwarden compatible server written in Rust, formerly known as bitwarden_rs| | 96|microsoft/autogen !2025-03-284233049|Enable Next-Gen Large Language Model Applications. 
| 97|jeecgboot/JeecgBoot !2025-03-284205920|🔥 "Enterprise-grade low-code platform": decoupled front-end/back-end architecture with SpringBoot 2.x/3.x, SpringCloud, Ant Design & Vue3, Mybatis, Shiro, and JWT. A powerful code generator creates front-end and back-end code with one click, no hand-written code required! It leads a new development model (OnlineCoding -> code generation -> manual MERGE) that removes about 70% of the repetitive work in Java projects, letting development focus on business logic - faster and cheaper without losing flexibility.|
| 98|Mintplex-Labs/anything-llm !2025-03-284186955|A full-stack application that turns any documents into an intelligent chatbot with a sleek UI and an easier way to manage your workspaces.|
| 99|THUDM/ChatGLM-6B !2025-03-28410192 |ChatGLM-6B: An Open Bilingual Dialogue Language Model|
| 100|hpcaitech/ColossalAI !2025-03-28406902|Making large AI models cheaper, faster and more accessible|
| 101|Stability-AI/stablediffusion !2025-03-28406337|High-Resolution Image Synthesis with Latent Diffusion Models|
| 102|mingrammer/diagrams !2025-03-28405063|🎨 Diagram as Code for prototyping cloud system architectures|
| 103|Kong/kong !2025-03-28404616|🦍 The Cloud-Native API Gateway and AI Gateway.|
| 104|getsentry/sentry !2025-03-284040913|Developer-first error tracking and performance monitoring|
| 105| karpathy/nanoGPT !2025-03-284034613 |The simplest, fastest repository for training/finetuning medium-sized GPTs|
| 106|fastlane/fastlane !2025-03-2840014-1|🚀 The easiest way to automate building and releasing your iOS and Android apps|
| 107|psf/black !2025-03-28399765|The uncompromising Python code formatter|
| 108|OpenBB-finance/OpenBBTerminal !2025-03-283972074 |Investment Research for Everyone, Anywhere.|
| 109|2dust/v2rayNG !2025-03-283943415|A V2Ray client for Android, supporting Xray core and v2fly core|
| 110|apache/airflow !2025-03-283937314|Apache Airflow - A platform to programmatically author, schedule, and monitor workflows|
| 111|KRTirtho/spotube !2025-03-283902746|🎧 Open source Spotify client that doesn't require Premium nor uses Electron! Available for both desktop & mobile!|
| 112|coqui-ai/TTS !2025-03-283889719 |🐸💬 - a deep learning toolkit for Text-to-Speech, battle-tested in research and production|
| 113|ggerganov/whisper.cpp !2025-03-283882116|Port of OpenAI's Whisper model in C/C++|
| 114|ultralytics/ultralytics !2025-03-283866951|NEW - YOLOv8 🚀 in PyTorch > ONNX > OpenVINO > CoreML > TFLite|
| 115|typst/typst !2025-03-283863914|A new markup-based typesetting system that is powerful and easy to learn.|
| 116|streamlit/streamlit !2025-03-283845828|Streamlit — A faster way to build and share data apps.|
| 117|LC044/WeChatMsg !2025-03-283836931|Extract WeChat chat history and export it to HTML, Word, or Excel documents for permanent storage; analyze the records to generate an annual chat report; and use the chat data to train a personal AI chat assistant|
| 118|lm-sys/FastChat !2025-03-283822112 |An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and FastChat-T5.|
| 119|NaiboWang/EasySpider !2025-03-283819013|A visual no-code/code-free web crawler/spider (易采集): a visual browser automation, data collection, and crawling tool that lets you design and run scraping tasks graphically without writing code. Also known as ServiceWrapper, an intelligent service-wrapping system for web applications.|
| 120|microsoft/DeepSpeed !2025-03-283765816 |A deep learning optimization library that makes distributed training and inference easy, efficient, and effective|
| 121|QuivrHQ/quivr !2025-03-28376067|Your GenAI Second Brain 🧠 A personal productivity assistant (RAG) ⚡️🤖 Chat with your docs (PDF, CSV, ...) & apps using Langchain, GPT 3.5 / 4 turbo, Private, Anthropic, VertexAI, Ollama, LLMs, that you can share with users! Local & Private alternative to OpenAI GPTs & ChatGPT powered by retrieval-augmented generation.|
| 122|freqtrade/freqtrade !2025-03-283757817 |Free, open source crypto trading bot|
| 123|suno-ai/bark !2025-03-28373178 |🔊 Text-Prompted Generative Audio Model|
| 124|🔥cline/cline !2025-03-2837307282|Autonomous coding agent right in your IDE, capable of creating/editing files, executing commands, and more with your permission every step of the way.|
| 125|LAION-AI/Open-Assistant !2025-03-28372712 |OpenAssistant is a chat-based assistant that understands tasks, can interact with third-party systems, and retrieve information dynamically to do so.|
| 126|penpot/penpot !2025-03-283716217|Penpot: The open-source design tool for design and code collaboration|
| 127|gradio-app/gradio !2025-03-283713320|Build and share delightful machine learning apps, all in Python. 🌟 Star to support our work!|
| 128|FlowiseAI/Flowise !2025-03-283667135 |Drag & drop UI to build your customized LLM flow using LangchainJS|
| 129|SimplifyJobs/Summer2025-Internships !2025-03-28366506|Collection of Summer 2025 tech internships!|
| 130|TencentARC/GFPGAN !2025-03-28365027 |GFPGAN aims at developing Practical Algorithms for Real-world Face Restoration.|
| 131|ray-project/ray !2025-03-283626819|Ray is a unified framework for scaling AI and Python applications. Ray consists of a core distributed runtime and a toolkit of libraries (Ray AIR) for accelerating ML workloads.|
| 132|babysor/MockingBird !2025-03-28360498|🚀 AI voice cloning: Clone a voice in 5 seconds to generate arbitrary speech in real-time|
| 133|unslothai/unsloth !2025-03-283603691|5X faster, 50% less memory LLM finetuning|
| 134|zhayujie/chatgpt-on-wechat !2025-03-283600124 |WeChat robot based on ChatGPT, using the OpenAI API and the itchat library|
| 135|upscayl/upscayl !2025-03-283599824|🆙 Upscayl - Free and Open Source AI Image Upscaler for Linux, MacOS and Windows built with Linux-First philosophy.|
| 136|freeCodeCamp/devdocs !2025-03-28359738|API Documentation Browser|
| 137|XingangPan/DragGAN !2025-03-28359043 |Code for DragGAN (SIGGRAPH 2023)|
| 138|2noise/ChatTTS !2025-03-283543922|ChatTTS is a generative speech model for daily dialogue.|
| 139|google-research/google-research !2025-03-28352207 |Google Research|
| 140|karanpratapsingh/system-design !2025-03-28351003|Learn how to design systems at scale and prepare for system design interviews|
| 141|lapce/lapce !2025-03-28350855|Lightning-fast and Powerful Code Editor written in Rust|
| 142| microsoft/TaskMatrix !2025-03-2834500-3 | Talking, Drawing and Editing with Visual Foundation Models|
| 143|chatchat-space/Langchain-Chatchat !2025-03-283442020|Langchain-Chatchat (formerly langchain-ChatGLM), local knowledge-based LLM (like ChatGLM) QA app with langchain|
| 144|unclecode/crawl4ai !2025-03-283434163|🔥🕷️ Crawl4AI: Open-source LLM Friendly Web Crawler & Scraper|
| 145|Bin-Huang/chatbox !2025-03-283374733 |A desktop app for GPT-4 / GPT-3.5 (OpenAI API) that supports Windows, Mac & Linux|
| 146|milvus-io/milvus !2025-03-283366525 |A cloud-native vector database, storage for next generation AI applications|
| 147|mendableai/firecrawl !2025-03-2833297128|🔥 Turn entire websites into LLM-ready markdown|
| 148|pola-rs/polars !2025-03-283269320|Fast multi-threaded, hybrid-out-of-core query engine focusing on DataFrame front-ends|
| 149|Pythagora-io/gpt-pilot !2025-03-28325321|PoC for a scalable dev tool that writes entire apps from scratch while the developer oversees the implementation|
| 150|hashicorp/vault !2025-03-28320797|A tool for secrets management, encryption as a service, and privileged access management|
| 151|shardeum/shardeum !2025-03-28319580|Shardeum is an EVM based autoscaling blockchain|
| 152|Chanzhaoyu/chatgpt-web !2025-03-28319242 |A ChatGPT demo website built with Express and Vue3|
| 153|lllyasviel/ControlNet !2025-03-283186413 |Let us control diffusion models!|
| 154|google/jax !2025-03-28317727|Composable transformations of Python+NumPy programs: differentiate, vectorize, JIT to GPU/TPU, and more|
| 155|facebookresearch/detectron2 !2025-03-28315987|Detectron2 is a platform for object detection, segmentation and other visual recognition tasks.|
| 156|myshell-ai/OpenVoice !2025-03-28315233|Instant voice cloning by MyShell|
| 157|TheAlgorithms/C-Plus-Plus !2025-03-283151411|Collection of various algorithms in mathematics, machine learning, computer science and physics implemented in C++ for educational purposes.|
| 158|hiroi-sora/Umi-OCR !2025-03-283138129|OCR software that converts images to text, fully offline. Capture screenshots or batch-import images; supports many languages, paragraph merging, and vertical text. Watermark regions can be excluded to extract clean text. Based on PaddleOCR.|
| 159|mudler/LocalAI !2025-03-283127815|🤖 The free, Open Source OpenAI alternative. Self-hosted, community-driven and local-first. Drop-in replacement for OpenAI running on consumer-grade hardware. No GPU required. Runs gguf, transformers, diffusers and many more model architectures. It can generate text, audio, video, and images, and also has voice-cloning capabilities.|
| 160|facebookresearch/fairseq !2025-03-28312124 |Facebook AI Research Sequence-to-Sequence Toolkit written in Python.|
| 161|alibaba/nacos !2025-03-28310559|An easy-to-use dynamic service discovery, configuration and service management platform for building cloud native applications.|
| 162|yunjey/pytorch-tutorial !2025-03-28310326|PyTorch Tutorial for Deep Learning Researchers|
| 163|v2fly/v2ray-core !2025-03-28307448|A platform for building proxies to bypass network restrictions.|
| 164|mckaywrigley/chatbot-ui !2025-03-283067714|The open-source AI chat interface for everyone.|
| 165|TabbyML/tabby !2025-03-28305949 |Self-hosted AI coding assistant|
| 166|deepseek-ai/awesome-deepseek-integration !2025-03-283053193||
| 167|danielmiessler/fabric !2025-03-283028914|fabric is an open-source framework for augmenting humans using AI.|
| 168|xinntao/Real-ESRGAN !2025-03-283026623 |Real-ESRGAN aims at developing Practical Algorithms for General Image/Video Restoration.|
| 169|paul-gauthier/aider !2025-03-283014642|aider is GPT-powered coding in your terminal|
| 170|tatsu-lab/stanford_alpaca !2025-03-28299022 |Code and documentation to train Stanford's Alpaca models, and generate the data.|
| 171|DataTalksClub/data-engineering-zoomcamp !2025-03-282971817|Free Data Engineering course!|
| 172|HeyPuter/puter !2025-03-282967014|🌐 The Internet OS! Free, Open-Source, and Self-Hostable.|
| 173|mli/paper-reading !2025-03-282962314|In-depth, paragraph-by-paragraph readings of classic and new deep learning papers|
| 174|linexjlin/GPTs !2025-03-28295568|Leaked prompts of GPTs|
| 175|s0md3v/roop !2025-03-28295286 |one-click deepfake (face swap)|
| 176|JushBJJ/Mr.-Ranedeer-AI-Tutor !2025-03-2829465-1 |A GPT-4 AI Tutor Prompt for customizable personalized learning experiences.|
| 177|opendatalab/MinerU !2025-03-282927074|A one-stop, open-source, high-quality data extraction tool; supports PDF/webpage/e-book extraction.|
| 178|mouredev/Hello-Python !2025-03-282920720|Course for learning the Python programming language from scratch, for beginners. 75 classes, 37 hours of video, code, projects, and a chat group. Fundamentals, frontend, backend, testing, AI...|
| 179|Lightning-AI/pytorch-lightning !2025-03-28292039|Pretrain, finetune and deploy AI models on multiple GPUs, TPUs with zero code changes.|
| 180|crewAIInc/crewAI !2025-03-282919344|Framework for orchestrating role-playing, autonomous AI agents. By fostering collaborative intelligence, CrewAI empowers agents to work together seamlessly, tackling complex tasks.|
| 181|facebook/folly !2025-03-282916612|An open-source C++ library developed and used at Facebook.|
| 182|google-ai-edge/mediapipe !2025-03-28291519|Cross-platform, customizable ML solutions for live and streaming media.|
| 183| getcursor/cursor !2025-03-282892025 | An editor made for programming with AI|
| 184|chatanywhere/GPT_API_free !2025-03-282856424|Free ChatGPT API Key, Free ChatGPT API, supports GPT-4 API (free). ChatGPT offers a free domestic forwarding API that allows direct connections without the need for a proxy. It can be used in conjunction with software/plugins like ChatBox, significantly reducing interface usage costs. Enjoy unlimited and unrestricted chatting within China|
| 185|meta-llama/llama3 !2025-03-28285552|The official Meta Llama 3 GitHub site|
| 186|tinygrad/tinygrad !2025-03-282845811|You like pytorch? You like micrograd? You love tinygrad! ❤️|
| 187|google-research/tuning_playbook !2025-03-282841514|A playbook for systematically maximizing the performance of deep learning models.|
| 188|huggingface/diffusers !2025-03-282830222|🤗 Diffusers: State-of-the-art diffusion models for image and audio generation in PyTorch and FLAX.|
| 189|tokio-rs/tokio !2025-03-28282408|A runtime for writing reliable asynchronous applications with Rust. Provides I/O, networking, scheduling, timers, ...|
| 190|RVC-Project/Retrieval-based-Voice-Conversion-WebUI !2025-03-282823817|Voice data <= 10 mins can also be used to train a good VC model!|
| 191|janhq/jan !2025-03-282822612|Jan is an open source alternative to ChatGPT that runs 100% offline on your computer|
| 192|openai/CLIP !2025-03-282814720|CLIP (Contrastive Language-Image Pretraining), Predict the most relevant text snippet given an image|
| 193|🔥khoj-ai/khoj !2025-03-2828112313|Your AI second brain. A copilot to get answers to your questions, whether they be from your own notes or from the internet. Use powerful, online (e.g gpt4) or private, local (e.g mistral) LLMs. Self-host locally or use our web app. Access from Obsidian, Emacs, Desktop app, Web or Whatsapp.|
| 194| acheong08/ChatGPT !2025-03-2828054-2 | Reverse engineered ChatGPT API |
| 195|iperov/DeepFaceLive !2025-03-28279345 |Real-time face swap for PC streaming or video calls|
| 196|eugeneyan/applied-ml !2025-03-28278471|📚 Papers & tech blogs by companies sharing their work on data science & machine learning in production.|
| 197|XTLS/Xray-core !2025-03-282778213|Xray, Penetrates Everything. Also the best v2ray-core, with XTLS support. Fully compatible configuration.|
| 198|feder-cr/JobsApplierAIAgent !2025-03-282776410|AutoJobsApplierAI_Agent aims to ease the job-hunting process by automating job applications. Utilizing artificial intelligence, it enables users to apply for multiple jobs in an automated and personalized way.|
| 199|mindsdb/mindsdb !2025-03-282750631|The platform for customizing AI from enterprise data|
| 200|DataExpert-io/data-engineer-handbook !2025-03-282721611|This is a repo with links to everything you'd ever want to learn about data engineering|
| 201|exo-explore/exo !2025-03-282721633|Run your own AI cluster at home with everyday devices 📱💻 🖥️⌚|
| 202|taichi-dev/taichi !2025-03-2826926-1|Productive, portable, and performant GPU programming in Python.|
| 203|mem0ai/mem0 !2025-03-282689134|The memory layer for Personalized AI|
| 204|svc-develop-team/so-vits-svc !2025-03-28268096 |SoftVC VITS Singing Voice Conversion|
| 205|OpenBMB/ChatDev !2025-03-28265624|Create Customized Software using Natural Language Idea (through Multi-Agent Collaboration)|
| 206|roboflow/supervision !2025-03-282632010|We write your reusable computer vision tools. 💜|
| 207|drawdb-io/drawdb !2025-03-282626913|Free, simple, and intuitive online database design tool and SQL generator.|
| 208|karpathy/llm.c !2025-03-28261633|LLM training in simple, raw C/CUDA|
| 209|airbnb/lottie-ios !2025-03-28261431|An iOS library to natively render After Effects vector animations|
| 210|openai/openai-python !2025-03-282607713|The OpenAI Python library provides convenient access to the OpenAI API from applications written in the Python language.|
| 211|academic/awesome-datascience !2025-03-28259876|📝 An awesome Data Science repository to learn and apply for real world problems.|
| 212|harry0703/MoneyPrinterTurbo !2025-03-282576618|Generate short videos with one click using a large model|
| 213|gabime/spdlog !2025-03-282571511|Fast C++ logging library.|
| 214|ocrmypdf/OCRmyPDF !2025-03-2825674217|OCRmyPDF adds an OCR text layer to scanned PDF files, allowing them to be searched|
| 215|Vision-CAIR/MiniGPT-4 !2025-03-28256170 |Enhancing Vision-language Understanding with Advanced Large Language Models|
| 216|Stability-AI/generative-models !2025-03-28255936|Generative Models by Stability AI|
| 217|DS4SD/docling !2025-03-282555662|Get your docs ready for gen AI|
| 218|PostHog/posthog !2025-03-282533227|🦔 PostHog provides open-source product analytics, session recording, feature flagging and A/B testing that you can self-host.|
| 219|nrwl/nx !2025-03-282509612|Smart Monorepos · Fast CI|
| 220|continuedev/continue !2025-03-282500737|⏩ the open-source copilot chat for software development—bring the power of ChatGPT to VS Code|
| 221|opentofu/opentofu !2025-03-28247968|OpenTofu lets you declaratively manage your cloud infrastructure.|
| 222|invoke-ai/InvokeAI !2025-03-28247293|InvokeAI is a leading creative engine for Stable Diffusion models, empowering professionals, artists, and enthusiasts to generate and create visual media using the latest AI-driven technologies. The solution offers an industry-leading WebUI, supports terminal use through a CLI, and serves as the foundation for multiple commercial products.|
| 223|deepinsight/insightface !2025-03-282471615 |State-of-the-art 2D and 3D Face Analysis Project|
| 224|apache/flink !2025-03-28246865|Apache Flink|
| 225|ComposioHQ/composio !2025-03-28246436|Composio equips agents with well-crafted tools empowering them to tackle complex tasks|
| 226|Genesis-Embodied-AI/Genesis !2025-03-282458314|A generative world for general-purpose robotics & embodied AI learning.|
| 227|stretchr/testify !2025-03-28243184|A toolkit with common assertions and mocks that plays nicely with the standard library|
| 228| yetone/openai-translator !2025-03-28242921 | Browser extension and cross-platform desktop application for translation based on ChatGPT API |
| 229|frappe/erpnext !2025-03-282425211|Free and Open Source Enterprise Resource Planning (ERP)|
| 230|songquanpeng/one-api !2025-03-282410034|OpenAI API management & distribution system supporting Azure, Anthropic Claude, Google PaLM 2 & Gemini, Zhipu ChatGLM, Baidu ERNIE Bot, iFlytek Spark, Alibaba Tongyi Qianwen, 360 Zhinao, and Tencent Hunyuan. Can be used to redistribute and manage keys; ships as a single executable with a prebuilt Docker image for one-click, out-of-the-box deployment. Uses a single API for all LLMs and features an English UI.|
| 231| microsoft/JARVIS !2025-03-28240604 | a system to connect LLMs with ML community |
| 232|google/flatbuffers !2025-03-28239965|FlatBuffers: Memory Efficient Serialization Library|
| 233|microsoft/graphrag !2025-03-282398928|A modular graph-based Retrieval-Augmented Generation (RAG) system|
| 234|rancher/rancher !2025-03-28239675|Complete container management platform|
| 235|bazelbuild/bazel !2025-03-282384618|a fast, scalable, multi-language and extensible build system|
| 236|modularml/mojo !2025-03-28238236 |The Mojo Programming Language|
| 237|danny-avila/LibreChat !2025-03-282378753|Enhanced ChatGPT Clone: Features OpenAI, GPT-4 Vision, Bing, Anthropic, OpenRouter, Google Gemini, AI model switching, message search, langchain, DALL-E-3, ChatGPT Plugins, OpenAI Functions, Secure Multi-User System, Presets, completely open-source for self-hosting. More features in development|
|!green-up-arrow.svg 238|🔥🔥🔥Shubhamsaboo/awesome-llm-apps !2025-03-28237391211|Collection of awesome LLM apps with RAG using OpenAI, Anthropic, Gemini and opensource models.|
|!red-down-arrow 239|microsoft/semantic-kernel !2025-03-282373611|Integrate cutting-edge LLM technology quickly and easily into your apps|
|!red-down-arrow 240|TheAlgorithms/Rust !2025-03-28236995|All Algorithms implemented in Rust|
| 241|stanford-oval/storm !2025-03-28236326|An LLM-powered knowledge curation system that researches a topic and generates a full-length report with citations.|
| 242|openai/gpt-2 !2025-03-28232483|Code for the paper "Language Models are Unsupervised Multitask Learners"|
| 243|labring/FastGPT !2025-03-282319445|A platform that uses the OpenAI API to quickly build an AI knowledge base, supporting many-to-many relationships.|
| 244|pathwaycom/llm-app !2025-03-2822928-10|Ready-to-run cloud templates for RAG, AI pipelines, and enterprise search with live data. 🐳Docker-friendly.⚡Always in sync with Sharepoint, Google Drive, S3, Kafka, PostgreSQL, real-time data APIs, and more.|
| 245|warpdotdev/Warp !2025-03-282286825|Warp is a modern, Rust-based terminal with AI built in so you and your team can build great software, faster.|
| 246|🔥agno-agi/agno !2025-03-2822833298|Agno is a lightweight library for building Multimodal Agents. It exposes LLMs as a unified API and gives them superpowers like memory, knowledge, tools and reasoning.|
| 247|qdrant/qdrant !2025-03-282275214 |Qdrant - Vector Database for the next generation of AI applications. Also available in the cloud https://cloud.qdrant.io/|
| 248|ashishpatel26/500-AI-Machine-learning-Deep-learning-Computer-vision-NLP-Projects-with-code !2025-03-282271815|500 AI Machine learning Deep learning Computer vision NLP Projects with code|
| 249|stanfordnlp/dspy !2025-03-282268321|Stanford DSPy: The framework for programming—not prompting—foundation models|
| 250|PaddlePaddle/Paddle !2025-03-28226246|PArallel Distributed Deep LEarning: Machine Learning Framework from Industrial Practice (the core PaddlePaddle ('飞桨') framework: high-performance single-machine and distributed training and cross-platform deployment for deep learning & machine learning)|
| 251|zulip/zulip !2025-03-28225464|Zulip server and web application. Open-source team chat that helps teams stay productive and focused.|
| 252|Hannibal046/Awesome-LLM !2025-03-282240721|Awesome-LLM: a curated list of Large Language Model|
| 253|facefusion/facefusion !2025-03-282218812|Next generation face swapper and enhancer|
| 254|Mozilla-Ocho/llamafile !2025-03-28220624|Distribute and run LLMs with a single file.|
| 255|yuliskov/SmartTube !2025-03-282201614|SmartTube - an advanced player for set-top boxes and TVs running Android OS|
| 256|haotian-liu/LLaVA !2025-03-282201316 |Large Language-and-Vision Assistant built towards multimodal GPT-4 level capabilities.|
| 257|ashishps1/awesome-system-design-resources !2025-03-282189367|This repository contains System Design resources which are useful while preparing for interviews and learning Distributed Systems|
| 258|Cinnamon/kotaemon !2025-03-28218248|An open-source RAG-based tool for chatting with your documents.|
| 259|CodePhiliaX/Chat2DB !2025-03-282179757|🔥🔥🔥 AI-driven database tool and SQL client. The hottest GUI client, supporting MySQL, Oracle, PostgreSQL, DB2, SQL Server, SQLite, H2, ClickHouse, and more.|
| 260|blakeblackshear/frigate !2025-03-282177113|NVR with realtime local object detection for IP cameras|
| 261|facebookresearch/audiocraft !2025-03-28217111|Audiocraft is a library for audio processing and generation with deep learning. It features the state-of-the-art EnCodec audio compressor / tokenizer, along with MusicGen, a simple and controllable music generation LM with textual and melodic conditioning.|
| 262|karpathy/minGPT !2025-03-28216567|A minimal PyTorch re-implementation of the OpenAI GPT (Generative Pretrained Transformer) training|
| 263|grpc/grpc-go !2025-03-282159510|The Go language implementation of gRPC. HTTP/2 based RPC|
| 264|HumanSignal/label-studio !2025-03-282137618|Label Studio is a multi-type data labeling and annotation tool with standardized output format|
| 265|yoheinakajima/babyagi !2025-03-28212764 |Uses OpenAI and Pinecone APIs to create, prioritize, and execute tasks. This is a pared-down version of the original Task-Driven Autonomous Agent|
| 266|deepseek-ai/DeepSeek-Coder !2025-03-282118210|DeepSeek Coder: Let the Code Write Itself|
| 267|BuilderIO/gpt-crawler !2025-03-282118010|Crawl a site to generate knowledge files to create your own custom GPT from a URL|
| 268| openai/chatgpt-retrieval-plugin !2025-03-2821152-1 | Plugins are chat extensions designed specifically for language models like ChatGPT, enabling them to access up-to-date information, run computations, or interact with third-party services in response to a user's request.|
| 269|microsoft/OmniParser !2025-03-282113123|A simple screen parsing tool towards pure vision based GUI agent|
| 270|black-forest-labs/flux !2025-03-282107219|Official inference repo for FLUX.1 models|
| 271|ItzCrazyKns/Perplexica !2025-03-282099154|Perplexica is an AI-powered search engine. It is an Open source alternative to Perplexity AI|
| 272|microsoft/unilm !2025-03-28209876|Large-scale Self-supervised Pre-training Across Tasks, Languages, and Modalities|
| 273|Sanster/lama-cleaner !2025-03-282077614|Image inpainting tool powered by SOTA AI models. Remove any unwanted objects, defects, or people from your pictures, or erase and replace anything in them (powered by Stable Diffusion).|
| 274|assafelovic/gpt-researcher !2025-03-282057222|GPT based autonomous agent that does online comprehensive research on any given topic|
| 275|PromtEngineer/localGPT !2025-03-28204230 |Chat with your documents on your local device using GPT models. No data leaves your device and 100% private.|
| 276|elastic/kibana !2025-03-28203482|Your window into the Elastic Stack|
| 277|fishaudio/fish-speech !2025-03-282033222|Brand new TTS solution|
| 278|mlc-ai/mlc-llm !2025-03-282028110 |Enable everyone to develop, optimize and deploy AI models natively on everyone's devices.|
| 279|deepset-ai/haystack !2025-03-282005320|🔍 Haystack is an open source NLP framework to interact with your data using Transformer models and LLMs (GPT-4, ChatGPT and alike). Haystack offers production-ready tools to quickly build complex question answering, semantic search, text generation applications, and more.|
| 280|tree-sitter/tree-sitter !2025-03-28200487|An incremental parsing system for programming tools|
| 281|Anjok07/ultimatevocalremovergui !2025-03-281999811|GUI for a Vocal Remover that uses Deep Neural Networks.|
| 282|guidance-ai/guidance !2025-03-28199622|A guidance language for controlling large language models.|
| 283|ml-explore/mlx !2025-03-28199619|MLX: An array framework for Apple silicon|
| 284|mlflow/mlflow !2025-03-281995314|Open source platform for the machine learning lifecycle|
| 285|ml-tooling/best-of-ml-python !2025-03-28198631|🏆 A ranked list of awesome machine learning Python libraries. Updated weekly.|
| 286|BerriAI/litellm !2025-03-281981862|Call all LLM APIs using the OpenAI format. Use Bedrock, Azure, OpenAI, Cohere, Anthropic, Ollama, Sagemaker, HuggingFace, Replicate (100+ LLMs)|
| 287|LazyVim/LazyVim !2025-03-281981320|Neovim config for the lazy|
| 288|wez/wezterm !2025-03-281976018|A GPU-accelerated cross-platform terminal emulator and multiplexer written by @wez and implemented in Rust|
| 289|valkey-io/valkey !2025-03-281970416|A flexible distributed key-value datastore that supports both caching and beyond caching workloads.|
| 290|LiLittleCat/awesome-free-chatgpt !2025-03-28196185|🆓 List of free ChatGPT mirror sites, continuously updated.|
| 291|Byaidu/PDFMathTranslate !2025-03-281947645|PDF scientific paper translation with preserved formats - AI-based full-text bilingual translation of PDF documents that fully preserves the layout; supports services such as Google/DeepL/Ollama/OpenAI and provides CLI/GUI/Docker|
| 292|openai/swarm !2025-03-281947111|Educational framework exploring ergonomic, lightweight multi-agent orchestration. Managed by OpenAI Solution team.|
| 293|HqWu-HITCS/Awesome-Chinese-LLM !2025-03-281921423|Organizing smaller, cost-effective, privately deployable open-source Chinese language models, including related datasets and tutorials|
| 294|stitionai/devika !2025-03-28190903|Devika is an Agentic AI Software Engineer that can understand high-level human instructions, break them down into steps, research relevant information, and write code to achieve the given objective. Devika aims to be a competitive open-source alternative to Devin by Cognition AI.|
| 295|OpenBMB/MiniCPM-o !2025-03-28190887|MiniCPM-o 2.6: A GPT-4o Level MLLM for Vision, Speech and Multimodal Live Streaming on Your Phone|
| 296|samber/lo !2025-03-281904815|💥 A Lodash-style Go library based on Go 1.18+ Generics (map, filter, contains, find...)|
| 297|chroma-core/chroma !2025-03-281895221 |the AI-native open-source embedding database|
| 298|DarkFlippers/unleashed-firmware !2025-03-28189278|Flipper Zero Unleashed Firmware|
| 299|brave/brave-browser !2025-03-281892710|Brave browser for Android, iOS, Linux, macOS, Windows.|
| 300| tloen/alpaca-lora !2025-03-28188641 | Instruct-tune LLaMA on consumer hardware|
| 301|VinciGit00/Scrapegraph-ai !2025-03-281884618|Python scraper based on AI|
| 302|gitroomhq/postiz-app !2025-03-281879110|📨 Schedule social posts, measure them, exchange with other members and get a lot of help from AI 🚀|
| 303|PrefectHQ/prefect !2025-03-281878715|Prefect is a workflow orchestration tool empowering developers to build, observe, and react to data pipelines|
| 304|ymcui/Chinese-LLaMA-Alpaca !2025-03-28187723 |Chinese LLaMA & Alpaca LLMs|
| 305|kenjihiranabe/The-Art-of-Linear-Algebra !2025-03-28187335|Graphic notes on Gilbert Strang's "Linear Algebra for Everyone"|
| 306|joonspk-research/generative_agents !2025-03-28187288|Generative Agents: Interactive Simulacra of Human Behavior|
| 307|renovatebot/renovate !2025-03-28186820|Universal dependency update tool that fits into your workflows.|
| 308|gventuri/pandas-ai !2025-03-28186109 |Pandas AI is a Python library that integrates generative artificial intelligence capabilities into Pandas, making dataframes conversational|
| 309|thingsboard/thingsboard !2025-03-28185184|Open-source IoT Platform - Device management, data collection, processing and visualization.|
| 310|ente-io/ente !2025-03-28184722|Fully open source, End to End Encrypted alternative to Google Photos and Apple Photos|
| 311|serengil/deepface !2025-03-281840113|A Lightweight Face Recognition and Facial Attribute Analysis (Age, Gender, Emotion and Race) Library for Python|
| 312|Raphire/Win11Debloat !2025-03-281840132|A simple, easy-to-use PowerShell script to remove pre-installed apps from Windows, disable telemetry, and remove Bing from Windows search, as well as perform various other changes to declutter and improve your Windows experience. This script works for both Windows 10 and Windows 11.|
| 313|Avaiga/taipy !2025-03-28179235|Turns Data and AI algorithms into production-ready web applications in no time.|
| 314|microsoft/qlib !2025-03-281784231|Qlib is an AI-oriented quantitative investment platform that aims to realize the potential, empower research, and create value using AI technologies in quantitative investment, from exploring ideas to implementing productions. Qlib supports diverse machine learning modeling paradigms, including supervised learning, market dynamics modeling, and RL.|
| 315|CopilotKit/CopilotKit !2025-03-281778571|Build in-app AI chatbots 🤖, and AI-powered Textareas ✨, into react web apps.|
| 316|QwenLM/Qwen-7B !2025-03-281766017|The official repo of Qwen-7B (通义千问-7B) chat & pretrained large language model proposed by Alibaba Cloud.|
| 317|w-okada/voice-changer !2025-03-28176078 |Realtime Voice Changer|
| 318|rlabbe/Kalman-and-Bayesian-Filters-in-Python !2025-03-281756011|Kalman Filter book using Jupyter Notebook. Focuses on building intuition and experience, not formal proofs. Includes Kalman filters, extended Kalman filters, unscented Kalman filters, particle filters, and more. All exercises include solutions.|
| 319|Mikubill/sd-webui-controlnet !2025-03-28174794 |WebUI extension for ControlNet|
| 320|jingyaogong/minimind !2025-03-2817380116|"Large models": train a small 26M-parameter GPT completely from scratch in 3 hours - inference and training are possible on a personal consumer GPU!|
| 321|apify/crawlee !2025-03-28172696|Crawlee—A web scraping and browser automation library for Node.js to build reliable crawlers. In JavaScript and TypeScript. Extract data for AI, LLMs, RAG, or GPTs. Download HTML, PDF, JPG, PNG, and other files from websites. Works with Puppeteer, Playwright, Cheerio, JSDOM, and raw HTTP. Both headful and headless mode. With proxy rotation.|
| 322|apple/ml-stable-diffusion !2025-03-28172395|Stable Diffusion with Core ML on Apple Silicon|
| 323| transitive-bullshit/chatgpt-api !2025-03-28172095 | Node.js client for the official ChatGPT API. |
| 324|teableio/teable !2025-03-281719222|✨ The Next Gen Airtable Alternative: No-Code Postgres|
| 325| xx025/carrot !2025-03-28170900 | Free ChatGPT Site List |
| 326|microsoft/LightGBM !2025-03-28170723|A fast, distributed, high-performance gradient boosting (GBT, GBDT, GBRT, GBM or MART) framework based on decision tree algorithms, used for ranking, classification and many other machine learning tasks.|
| 327|VikParuchuri/surya !2025-03-28169827|Accurate line-level text detection and recognition (OCR) in any language|
| 328|deepseek-ai/Janus !2025-03-281692825|Janus-Series: Unified Multimodal Understanding and Generation Models|
| 329|ardalis/CleanArchitecture !2025-03-28168823|Clean Architecture Solution Template: A starting point for Clean Architecture with ASP.NET Core|
| 330|neondatabase/neon !2025-03-28166466|Neon: Serverless Postgres. We separated storage and compute to offer autoscaling, code-like database branching, and scale to zero.|
| 331|kestra-io/kestra !2025-03-281661313|⚡ Workflow Automation Platform. Orchestrate & Schedule code in any language, run anywhere, 500+ plugins. Alternative to Zapier, Rundeck, Camunda, Airflow...|
| 332|Dao-AILab/flash-attention !2025-03-281659720|Fast and memory-efficient exact attention|
| 333|RPCS3/rpcs3 !2025-03-281655712|PS3 emulator/debugger|
| 334|meta-llama/llama-recipes !2025-03-28165486|Scripts for fine-tuning Llama2 with composable FSDP & PEFT methods to cover single/multi-node GPUs. Supports default & custom datasets for applications such as summarization & question answering. Supporting a number of candid inference solutions such as HF TGI, VLLM for local or cloud deployment. Demo apps to showcase Llama2 for WhatsApp & Messenger|
| 335|emilwallner/Screenshot-to-code !2025-03-28165180|A neural network that transforms a design mock-up into a static website.|
| 336|datawhalechina/llm-cookbook !2025-03-281650922|An introductory LLM tutorial for developers: the Chinese version of Andrew Ng's large-model course series|
| 337|e2b-dev/awesome-ai-agents !2025-03-281643923|A list of AI autonomous agents|
| 338|QwenLM/Qwen2.5 !2025-03-281641114|Qwen2.5 is the large language model series developed by Qwen team, Alibaba Cloud.|
| 339|dair-ai/ML-YouTube-Courses !2025-03-28164114|📺 Discover the latest machine learning / AI courses on YouTube.|
| 340|pybind/pybind11 !2025-03-28163620|Seamless operability between C++11 and Python|
| 341|graphdeco-inria/gaussian-splatting !2025-03-281627116|Original reference implementation of "3D Gaussian Splatting for Real-Time Radiance Field Rendering"|
| 342|meta-llama/codellama !2025-03-28162531|Inference code for CodeLlama models|
| 343|TransformerOptimus/SuperAGI !2025-03-28161292 | SuperAGI - A dev-first open source autonomous AI agent framework. Enabling developers to build, manage & run useful autonomous agents quickly and reliably.|
| 344|microsoft/onnxruntime !2025-03-28161169|ONNX Runtime: cross-platform, high-performance ML inferencing and training accelerator|
| 345|IDEA-Research/Grounded-Segment-Anything !2025-03-281601411 |Marrying Grounding DINO with Segment Anything & Stable Diffusion & BLIP - Automatically Detect, Segment and Generate Anything with Image and Text Inputs|
| 346|ddbourgin/numpy-ml !2025-03-28160054|Machine learning, in numpy|
| 347|eosphoros-ai/DB-GPT !2025-03-281585225|Revolutionizing Database Interactions with Private LLM Technology|
| 348|Stability-AI/StableLM !2025-03-28158310 |Stability AI Language Models|
| 349|openai/evals !2025-03-28157935 |Evals is a framework for evaluating LLMs and LLM systems, and an open-source registry of benchmarks.|
| 350|THUDM/ChatGLM2-6B !2025-03-28157500|ChatGLM2-6B: An Open Bilingual Chat LLM |
| 351|sunner/ChatALL !2025-03-28156761 |Concurrently chat with ChatGPT, Bing Chat, Bard, Alpaca, Vicuna, Claude, ChatGLM, MOSS, iFlytek Spark, ERNIE and more, discover the best answers|
| 352|abseil/abseil-cpp !2025-03-28156656|Abseil Common Libraries (C++)|
| 353|NVIDIA/open-gpu-kernel-modules !2025-03-28156531|NVIDIA Linux open GPU kernel module source|
| 354|letta-ai/letta !2025-03-281563718|Letta (formerly MemGPT) is a framework for creating LLM services with memory.|
| 355|typescript-eslint/typescript-eslint !2025-03-28156211|✨ Monorepo for all the tooling which enables ESLint to support TypeScript|
| 356|umijs/umi !2025-03-28156211|A framework in react community ✨|
| 357|AI4Finance-Foundation/FinGPT !2025-03-281561215|Data-Centric FinGPT. Open-source for open finance! Revolutionize 🔥 We'll soon release the trained model.|
| 358|amplication/amplication !2025-03-28156022|🔥🔥🔥 The Only Production-Ready AI-Powered Backend Code Generation|
| 359|KindXiaoming/pykan !2025-03-28155477|Kolmogorov Arnold Networks|
| 360|arc53/DocsGPT !2025-03-28154900|GPT-powered chat for documentation, chat with your documents|
| 361|influxdata/telegraf !2025-03-28154502|Agent for collecting, processing, aggregating, and writing metrics, logs, and other arbitrary data.|
| 362|microsoft/Bringing-Old-Photos-Back-to-Life !2025-03-28154084|Bringing Old Photo Back to Life (CVPR 2020 oral)|
| 363|GaiZhenbiao/ChuanhuChatGPT !2025-03-2815394-2|GUI for ChatGPT API and many LLMs. Supports agents, file-based QA, GPT finetuning and query with web search. All with a neat UI.|
| 364|Zeyi-Lin/HivisionIDPhotos !2025-03-281529710|⚡️ HivisionIDPhotos: a lightweight and efficient AI ID photo tool.|
| 365| mayooear/gpt4-pdf-chatbot-langchain !2025-03-281529518 | GPT4 & LangChain Chatbot for large PDF docs |
| 366|1Panel-dev/MaxKB !2025-03-2815277148|A knowledge-base Q&A system based on LLM large language models. Ready to use out of the box; supports quick integration into third-party business systems. Officially produced by 1Panel|
| 367|ai16z/eliza !2025-03-281526811|Conversational Agent for Twitter and Discord|
| 368|apache/arrow !2025-03-28151684|Apache Arrow is a multi-language toolbox for accelerated data interchange and in-memory processing|
| 369|princeton-nlp/SWE-agent !2025-03-281516119|SWE-agent: Agent Computer Interfaces Enable Software Engineering Language Models|
| 370|mlc-ai/web-llm !2025-03-281509311 |Bringing large-language models and chat to web browsers. Everything runs inside the browser with no server support.|
| 371|guillaumekln/faster-whisper !2025-03-281507117 |Faster Whisper transcription with CTranslate2|
| 372|overleaf/overleaf !2025-03-28150316|A web-based collaborative LaTeX editor|
| 373|triton-lang/triton !2025-03-28150169|Development repository for the Triton language and compiler|
| 374|soxoj/maigret !2025-03-281500410|🕵️‍♂️ Collect a dossier on a person by username from thousands of sites|
| 375|alibaba/lowcode-engine !2025-03-28149841|An enterprise-class low-code technology stack with scale-out design|
| 376|espressif/esp-idf !2025-03-28148545|Espressif IoT Development Framework. Official development framework for Espressif SoCs.|
| 377|pgvector/pgvector !2025-03-281484913|Open-source vector similarity search for Postgres|
| 378|datawhalechina/leedl-tutorial !2025-03-28148246|"Hung-yi Lee's Deep Learning Tutorial" (recommended by Prof. Hung-yi Lee 👍). PDF download: https://github.com/datawhalechina/leedl-tutorial/releases|
| 379|xcanwin/KeepChatGPT !2025-03-28147972 |Using ChatGPT is more efficient and smoother, perfectly solving ChatGPT network errors. No longer do you need to frequently refresh the webpage, saving over 10 unnecessary steps|
| 380|m-bain/whisperX !2025-03-281471313|WhisperX: Automatic Speech Recognition with Word-level Timestamps (& Diarization)|
| 381|HumanAIGC/AnimateAnyone !2025-03-2814706-1|Animate Anyone: Consistent and Controllable Image-to-Video Synthesis for Character Animation|
|!green-up-arrow.svg 382|naklecha/llama3-from-scratch !2025-03-281469024|llama3 implementation one matrix multiplication at a time|
|!red-down-arrow 383| fauxpilot/fauxpilot !2025-03-28146871 | An open-source GitHub Copilot server |
| 384|LlamaFamily/Llama-Chinese !2025-03-28145111|Llama Chinese Community, the best Chinese Llama large model, fully open source and commercially available|
| 385|BradyFU/Awesome-Multimodal-Large-Language-Models !2025-03-281450121|Latest Papers and Datasets on Multimodal Large Language Models|
| 386|vanna-ai/vanna !2025-03-281449819|🤖 Chat with your SQL database 📊. Accurate Text-to-SQL Generation via LLMs using RAG 🔄.|
| 387|bleedline/aimoneyhunter !2025-03-28144845|AI Side Hustle Money Mega Collection: Teaching You How to Utilize AI for Various Side Projects to Earn Extra Income.|
| 388|stefan-jansen/machine-learning-for-trading !2025-03-28144629|Code for Machine Learning for Algorithmic Trading, 2nd edition.|
| 389|state-spaces/mamba !2025-03-28144139|Mamba: Linear-Time Sequence Modeling with Selective State Spaces|
| 390|vercel/ai-chatbot !2025-03-281434614|A full-featured, hackable Next.js AI chatbot built by Vercel|
| 391|steven-tey/novel !2025-03-281428410|Notion-style WYSIWYG editor with AI-powered autocompletions|
| 392|unifyai/ivy !2025-03-281409348|Unified AI|
| 393|chidiwilliams/buzz !2025-03-281402411 |Buzz transcribes and translates audio offline on your personal computer. Powered by OpenAI's Whisper.|
| 394|lukas-blecher/LaTeX-OCR !2025-03-28139769|pix2tex: Using a ViT to convert images of equations into LaTeX code.|
| 395|openai/tiktoken !2025-03-28139599|tiktoken is a fast BPE tokeniser for use with OpenAI's models.|
| 396|nocobase/nocobase !2025-03-281391522|NocoBase is a scalability-first, open-source no-code/low-code platform for building business applications and enterprise solutions.|
| 397|neonbjb/tortoise-tts !2025-03-28139010 |A multi-voice TTS system trained with an emphasis on quality|
| 398|yamadashy/repomix !2025-03-281382036|📦 Repomix (formerly Repopack) is a powerful tool that packs your entire repository into a single, AI-friendly file. Perfect for when you need to feed your codebase to Large Language Models (LLMs) or other AI tools like Claude, ChatGPT, and Gemini.|
| 399|adobe/react-spectrum !2025-03-28136766|A collection of libraries and tools that help you build adaptive, accessible, and robust user experiences.|
| 400|THUDM/ChatGLM3 !2025-03-28136684|ChatGLM3 series: Open Bilingual Chat LLMs |
| 401|NVIDIA/NeMo !2025-03-28134837|A scalable generative AI framework built for researchers and developers working on Large Language Models, Multimodal, and Speech AI (Automatic Speech Recognition and Text-to-Speech)|
| 402|BlinkDL/RWKV-LM !2025-03-28134346 |RWKV is an RNN with transformer-level LLM performance. It can be directly trained like a GPT (parallelizable). So it combines the best of RNN and transformer - great performance, fast inference, saves VRAM, fast training, "infinite" ctx_len, and free sentence embedding.|
| 403| fuergaosi233/wechat-chatgpt !2025-03-28133330 | Use ChatGPT On Wechat via wechaty |
| 404|udecode/plate !2025-03-28133325|A rich-text editor powered by AI|
| 405|xenova/transformers.js !2025-03-281331219|State-of-the-art Machine Learning for the web. Run 🤗 Transformers directly in your browser, with no need for a server!|
| 406|stas00/ml-engineering !2025-03-281325615|Machine Learning Engineering Guides and Tools|
| 407| wong2/chatgpt-google-extension !2025-03-2813241-1 | A browser extension that enhances search engines with ChatGPT; this repo will not be updated after 2023-02-20|
| 408|mrdbourke/pytorch-deep-learning !2025-03-281317520|Materials for the Learn PyTorch for Deep Learning: Zero to Mastery course.|
| 409|Koenkk/zigbee2mqtt !2025-03-28131544|Zigbee 🐝 to MQTT bridge 🌉, get rid of your proprietary Zigbee bridges 🔨|
| 410|vercel-labs/ai !2025-03-281298528|Build AI-powered applications with React, Svelte, and Vue|
| 411|netease-youdao/QAnything !2025-03-28129318|Question and Answer based on Anything.|
| 412|huggingface/trl !2025-03-281289622|Train transformer language models with reinforcement learning.|
| 413|microsoft/BitNet !2025-03-28128503|Official inference framework for 1-bit LLMs|
| 414|mediar-ai/screenpipe !2025-03-281283915|24/7 local AI screen & mic recording. Build AI apps that have the full context. Works with Ollama. Alternative to Rewind.ai. Open. Secure. You own your data. Rust.|
| 415|Skyvern-AI/skyvern !2025-03-281277612|Automate browser-based workflows with LLMs and Computer Vision|
| 416|pytube/pytube !2025-03-28126591|A lightweight, dependency-free Python library (and command-line utility) for downloading YouTube Videos.|
| 417|official-stockfish/Stockfish !2025-03-28126574|UCI chess engine|
| 418|sgl-project/sglang !2025-03-281260143|SGLang is a structured generation language designed for large language models (LLMs). It makes your interaction with LLMs faster and more controllable.|
| 419|plasma-umass/scalene !2025-03-28125535|Scalene: a high-performance, high-precision CPU, GPU, and memory profiler for Python with AI-powered optimization proposals|
| 420|danswer-ai/danswer !2025-03-28125503|Ask Questions in natural language and get Answers backed by private sources. Connects to tools like Slack, GitHub, Confluence, etc.|
| 421|OpenTalker/SadTalker !2025-03-28125226|[CVPR 2023] SadTalker: Learning Realistic 3D Motion Coefficients for Stylized Audio-Driven Single Image Talking Face Animation|
| 422|facebookresearch/AnimatedDrawings !2025-03-28123693 |Code to accompany "A Method for Animating Children's Drawings of the Human Figure"|
| 423|activepieces/activepieces !2025-03-28123609|Your friendliest open source all-in-one automation tool ✨ Workflow automation tool 100+ integration / Enterprise automation tool / Zapier Alternative|
| 424|ggerganov/ggml !2025-03-28121992 |Tensor library for machine learning|
| 425|bytebase/bytebase !2025-03-28121694|World's most advanced database DevOps and CI/CD for Developer, DBA and Platform Engineering teams. The GitLab/GitHub for database DevOps.|
| 426| willwulfken/MidJourney-Styles-and-Keywords-Reference !2025-03-28120971 | A reference containing Styles and Keywords that you can use with MidJourney AI|
| 427|Huanshere/VideoLingo !2025-03-281207013|Netflix-level subtitle cutting, translation, alignment, and even dubbing - a one-click, fully automated AI video subtitle team |
| 428|OpenLMLab/MOSS !2025-03-28120330 |An open-source tool-augmented conversational language model from Fudan University|
| 429|llmware-ai/llmware !2025-03-281200727|Providing enterprise-grade LLM-based development framework, tools, and fine-tuned models.|
| 430|PKU-YuanGroup/Open-Sora-Plan !2025-03-28119362|This project aims to reproduce Sora (OpenAI's T2V model), but we have limited resources. We deeply wish that the whole open-source community can contribute to this project.|
| 431|ShishirPatil/gorilla !2025-03-28119332 |Gorilla: An API store for LLMs|
| 432|NVIDIA/Megatron-LM !2025-03-281192716|Ongoing research training transformer models at scale|
| 433|illacloud/illa-builder !2025-03-28119192|Create AI-Driven Apps like Assembling Blocks|
| 434|marimo-team/marimo !2025-03-281191521|A reactive notebook for Python — run reproducible experiments, execute as a script, deploy as an app, and version with git.|
| 435|smol-ai/developer !2025-03-28119111 | With 100k context windows on the way, it's now feasible for every dev to have their own smol developer|
| 436|Lightning-AI/litgpt !2025-03-28118878|Pretrain, finetune, deploy 20+ LLMs on your own data. Uses state-of-the-art techniques: flash attention, FSDP, 4-bit, LoRA, and more.|
| 437|openai/shap-e !2025-03-28118474 |Generate 3D objects conditioned on text or images|
| 438|eugeneyan/open-llms !2025-03-28118451 |A list of open LLMs available for commercial use.|
| 439|andrewyng/aisuite !2025-03-28118124|Simple, unified interface to multiple Generative AI providers|
| 440|hajimehoshi/ebiten !2025-03-28117816|Ebitengine - A dead simple 2D game engine for Go|
| 441|kgrzybek/modular-monolith-with-ddd !2025-03-28117493|Full Modular Monolith application with Domain-Driven Design approach.|
| 442|h2oai/h2ogpt !2025-03-2811736-1 |Come join the movement to make the world's best open source GPT led by H2O.ai - 100% private chat and document search, no data leaks, Apache 2.0|
| 443|owainlewis/awesome-artificial-intelligence !2025-03-28117332|A curated list of Artificial Intelligence (AI) courses, books, video lectures and papers.|
| 444|DataTalksClub/mlops-zoomcamp !2025-03-28116643|Free MLOps course from DataTalks.Club|
| 445|Rudrabha/Wav2Lip !2025-03-281163410|This repository contains the codes of "A Lip Sync Expert Is All You Need for Speech to Lip Generation In the Wild", published at ACM Multimedia 2020.|
| 446|aishwaryanr/awesome-generative-ai-guide !2025-03-281152810|A one stop repository for generative AI research updates, interview resources, notebooks and much more!|
| 447|karpathy/micrograd !2025-03-28115146|A tiny scalar-valued autograd engine and a neural net library on top of it with PyTorch-like API|
| 448|InstantID/InstantID !2025-03-28115111|InstantID : Zero-shot Identity-Preserving Generation in Seconds 🔥|
| 449|facebookresearch/seamless_communication !2025-03-28114434|Foundational Models for State-of-the-Art Speech and Text Translation|
| 450|anthropics/anthropic-cookbook !2025-03-281140112|A collection of notebooks/recipes showcasing some fun and effective ways of using Claude.|
| 451|mastra-ai/mastra !2025-03-281139240|the TypeScript AI agent framework|
| 452|NVIDIA/TensorRT !2025-03-28113864|NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This repository contains the open source components of TensorRT.|
| 453|plandex-ai/plandex !2025-03-28113645|An AI coding engine for complex tasks|
| 454|RUCAIBox/LLMSurvey !2025-03-28112735 |A collection of papers and resources related to Large Language Models.|
| 455|kubeshark/kubeshark !2025-03-28112711|The API traffic analyzer for Kubernetes providing real-time K8s protocol-level visibility, capturing and monitoring all traffic and payloads going in, out and across containers, pods, nodes and clusters. Inspired by Wireshark, purposely built for Kubernetes|
| 456|electric-sql/pglite !2025-03-28112617|Lightweight Postgres packaged as WASM into a TypeScript library for the browser, Node.js, Bun and Deno from https://electric-sql.com|
| 457|lightaime/camel !2025-03-281124441 |🐫 CAMEL: Communicative Agents for “Mind” Exploration of Large Scale Language Model Society|
| 458|huggingface/lerobot !2025-03-281120184|🤗 LeRobot: State-of-the-art Machine Learning for Real-World Robotics in Pytorch|
| 459|normal-computing/outlines !2025-03-28111657|Generative Model Programming|
| 460|libretro/RetroArch !2025-03-28110701|Cross-platform, sophisticated frontend for the libretro API. Licensed GPLv3.|
| 461|THUDM/CogVideo !2025-03-28110599|Text-to-video generation: CogVideoX (2024) and CogVideo (ICLR 2023)|
| 462|bentoml/OpenLLM !2025-03-28110495|An open platform for operating large language models (LLMs) in production. Fine-tune, serve, deploy, and monitor any LLMs with ease.|
| 463|vosen/ZLUDA !2025-03-28110429|CUDA on AMD GPUs|
| 464|dair-ai/ML-Papers-of-the-Week !2025-03-28110304 |🔥Highlighting the top ML papers every week.|
| 465|WordPress/gutenberg !2025-03-28110212|The Block Editor project for WordPress and beyond. Plugin is available from the official repository.|
| 466|microsoft/data-formulator !2025-03-281099827|🪄 Create rich visualizations with AI|
| 467|LibreTranslate/LibreTranslate !2025-03-28109887|Free and Open Source Machine Translation API. Self-hosted, offline capable and easy to setup.|
| 468|block/goose !2025-03-281097737|an open-source, extensible AI agent that goes beyond code suggestions - install, execute, edit, and test with any LLM|
| 469|getumbrel/llama-gpt !2025-03-28109553|A self-hosted, offline, ChatGPT-like chatbot. Powered by Llama 2. 100% private, with no data leaving your device.|
| 470|HigherOrderCO/HVM !2025-03-28109182|A massively parallel, optimal functional runtime in Rust|
| 471|databrickslabs/dolly !2025-03-2810812-3 | A large language model trained on the Databricks Machine Learning Platform|
| 472|srush/GPU-Puzzles !2025-03-28108014|Solve puzzles. Learn CUDA.|
| 473|Z3Prover/z3 !2025-03-28107952|The Z3 Theorem Prover|
| 474|UFund-Me/Qbot !2025-03-281079313 |Qbot is an AI-oriented quantitative investment platform, which aims to realize the potential, empower AI technologies in quantitative investment|
| 475|langchain-ai/langgraph !2025-03-281077336||
| 476|lz4/lz4 !2025-03-28107647|Extremely Fast Compression algorithm|
| 477|magic-research/magic-animate !2025-03-28107160|MagicAnimate: Temporally Consistent Human Image Animation using Diffusion Model|
| 478|PaperMC/Paper !2025-03-281071410|The most widely used, high performance Minecraft server that aims to fix gameplay and mechanics inconsistencies|
| 479|getomni-ai/zerox !2025-03-281071015|Zero shot pdf OCR with gpt-4o-mini|
|!green-up-arrow.svg 480|🔥NirDiamant/GenAI_Agents !2025-03-2810693318|This repository provides tutorials and implementations for various Generative AI Agent techniques, from basic to advanced. It serves as a comprehensive guide for building intelligent, interactive AI systems.|
|!red-down-arrow 481|Unstructured-IO/unstructured !2025-03-28106889|Open source libraries and APIs to build custom preprocessing pipelines for labeling, training, or production machine learning pipelines.|
| 482|apache/thrift !2025-03-28106610|Apache Thrift|
| 483| TheR1D/shell_gpt !2025-03-28106097 | A command-line productivity tool powered by ChatGPT, will help you accomplish your tasks faster and more efficiently |
| 484|TheRamU/Fay !2025-03-281060312 |Fay is a complete open source project that includes the Fay controller and digital human models, which can be used in different applications such as virtual hosts, live-stream promotion, and digital human interaction|
| 485|zyronon/douyin !2025-03-28105566|Vue3 + Pinia + Vite5 TikTok (Douyin) clone: best practices for Vue on mobile|
| 486|THU-MIG/yolov10 !2025-03-28105485|YOLOv10: Real-Time End-to-End Object Detection|
| 487|idootop/mi-gpt !2025-03-281052522|Transform the XiaoAi speaker into a personal voice assistant with ChatGPT and DouBao integration.|
| 488|SakanaAI/AI-Scientist !2025-03-281051310|The AI Scientist: Towards Fully Automated Open-Ended Scientific Discovery 🧑‍🔬|
| 489|szimek/sharedrop !2025-03-28105101|Easy P2P file transfer powered by WebRTC - inspired by Apple AirDrop|
| 490|salesforce/LAVIS !2025-03-28103942 |LAVIS - A One-stop Library for Language-Vision Intelligence|
| 491|aws/amazon-sagemaker-examples !2025-03-28103654|Example 📓 Jupyter notebooks that demonstrate how to build, train, and deploy machine learning models using 🧠 Amazon SageMaker.|
| 492|artidoro/qlora !2025-03-28103402 |QLoRA: Efficient Finetuning of Quantized LLMs|
| 493|lllyasviel/stable-diffusion-webui-forge !2025-03-281029314|A platform on top of Stable Diffusion WebUI (based on Gradio) to make development easier, optimize resource management, and speed up inference|
| 494|NielsRogge/Transformers-Tutorials !2025-03-28102487|This repository contains demos I made with the Transformers library by HuggingFace.|
| 495|kedro-org/kedro !2025-03-28102371|Kedro is a toolbox for production-ready data science. It uses software engineering best practices to help you create data engineering and data science pipelines that are reproducible, maintainable, and modular.|
| 496| chathub-dev/chathub !2025-03-28102301 | All-in-one chatbot client |
| 497|microsoft/promptflow !2025-03-28101612|Build high-quality LLM apps - from prototyping, testing to production deployment and monitoring.|
| 498|mistralai/mistral-src !2025-03-28101372|Reference implementation of Mistral AI 7B v0.1 model.|
| 499|burn-rs/burn !2025-03-28101183|Burn - A Flexible and Comprehensive Deep Learning Framework in Rust|
| 500|AIGC-Audio/AudioGPT !2025-03-28101150 |AudioGPT: Understanding and Generating Speech, Music, Sound, and Talking Head|
| 501|facebookresearch/dinov2 !2025-03-281011210 |PyTorch code and models for the DINOv2 self-supervised learning method.|
| 502|RockChinQ/LangBot !2025-03-281008455|😎 Rich ecosystem, 🧩 extensible, 🦄 multimodal - an LLM-native instant messaging bot platform 🤖|
| 503|78/xiaozhi-esp32 !2025-03-281008180|Build your own AI friend|
| 504|cumulo-autumn/StreamDiffusion !2025-03-28100761|StreamDiffusion: A Pipeline-Level Solution for Real-Time Interactive Generation|
| 505|DataTalksClub/machine-learning-zoomcamp !2025-03-28100664|The code from the Machine Learning Bookcamp book and a free course based on the book|
| 506|nerfstudio-project/nerfstudio !2025-03-28100343|A collaboration friendly studio for NeRFs|
| 507|cupy/cupy !2025-03-28100344|NumPy & SciPy for GPU|
| 508|NVIDIA/TensorRT-LLM !2025-03-281000823|TensorRT-LLM provides users with an easy-to-use Python API to define Large Language Models (LLMs) and build TensorRT engines that contain state-of-the-art optimizations to perform inference efficiently on NVIDIA GPUs. TensorRT-LLM also contains components to create Python and C++ runtimes that execute those TensorRT engines.|
| 509|wasp-lang/open-saas !2025-03-2899665|A free, open-source SaaS app starter for React & Node.js with superpowers. Production-ready. Community-driven.|
| 510|huggingface/text-generation-inference !2025-03-2899383|Large Language Model Text Generation Inference|
| 511|jxnl/instructor !2025-03-2899224|Structured outputs for LLMs|
| 512|GoogleCloudPlatform/generative-ai !2025-03-2899086|Sample code and notebooks for Generative AI on Google Cloud|
| 513|manticoresoftware/manticoresearch !2025-03-2898799|Easy to use open source fast database for search |
| 514|langfuse/langfuse !2025-03-28985134|🪢 Open source LLM engineering platform. Observability, metrics, evals, prompt management, testing, prompt playground, datasets, LLM evaluations -- 🍊YC W23 🤖 integrate via Typescript, Python / Decorators, OpenAI, Langchain, LlamaIndex, Litellm, Instructor, Mistral, Perplexity, Claude, Gemini, Vertex|
| 515|keephq/keep !2025-03-2897949|The open-source alert management and AIOps platform|
| 516|sashabaranov/go-openai !2025-03-2897843|OpenAI ChatGPT, GPT-3, GPT-4, DALL·E, Whisper API wrapper for Go|
| 517|autowarefoundation/autoware !2025-03-2897766|Autoware - the world's leading open-source software project for autonomous driving|
| 518|anthropics/courses !2025-03-2897269|Anthropic's educational courses|
| 519|popcorn-official/popcorn-desktop !2025-03-2896853|Popcorn Time is a multi-platform, free software BitTorrent client that includes an integrated media player ( Windows / Mac / Linux ) A Butter-Project Fork|
| 520|getmaxun/maxun !2025-03-28968515|🔥 Open-source no-code web data extraction platform. Turn websites to APIs and spreadsheets with no-code robots in minutes! [In Beta]|
[In Beta]| | 521|wandb/wandb !2025-03-2896763|🔥 A tool for visualizing and tracking your machine learning experiments. This repo contains the CLI and Python API.| | 522|karpathy/minbpe !2025-03-2895353|Minimal, clean code for the Byte Pair Encoding (BPE) algorithm commonly used in LLM tokenization (see the toy BPE training loop at the end of this section).| | 523|bigscience-workshop/petals !2025-03-2895142|🌸 Run large language models at home, BitTorrent-style. Fine-tuning and inference up to 10x faster than offloading| | 524|OthersideAI/self-operating-computer !2025-03-2894931|A framework to enable multimodal models to operate a computer.| | 525|mshumer/gpt-prompt-engineer !2025-03-2894911|| | 526| BloopAI/bloop !2025-03-2894710 | A fast code search engine written in Rust| | 527|BlinkDL/ChatRWKV !2025-03-289467-1 |ChatRWKV is like ChatGPT but powered by the RWKV (100% RNN) language model, and open source.| | 528|timlrx/tailwind-nextjs-starter-blog !2025-03-2894677|This is a Next.js, Tailwind CSS blogging starter template. Comes out of the box configured with the latest technologies to make technical writing a breeze. Easily configurable and customizable. Perfect as a replacement to existing Jekyll and Hugo individual blogs.| | 529|google/benchmark !2025-03-2893634|A microbenchmark support library| | 530|facebookresearch/nougat !2025-03-2893603|Implementation of Nougat Neural Optical Understanding for Academic Documents| | 531|modelscope/facechain !2025-03-2893536|FaceChain is a deep-learning toolchain for generating your Digital-Twin.| | 532|DrewThomasson/ebook2audiobook !2025-03-2893388|Convert ebooks to audiobooks with chapters and metadata using dynamic AI models and voice cloning. Supports 1,107+ languages!| | 533|RayTracing/raytracing.github.io !2025-03-2893035|Main Web Site (Online Books)| | 534|QwenLM/Qwen2.5-VL !2025-03-28930249|Qwen2.5-VL is the multimodal large language model series developed by the Qwen team at Alibaba Cloud.| | 535|WongKinYiu/yolov9 !2025-03-2892201|Implementation of paper - YOLOv9: Learning What You Want to Learn Using Programmable Gradient Information| | 536|alibaba-damo-academy/FunASR !2025-03-28920222|A Fundamental End-to-End Speech Recognition Toolkit and Open Source SOTA Pretrained Models.| | 537|Visualize-ML/Book4_Power-of-Matrix !2025-03-2891931|Book4 'Power of Matrix' | | 538|dice2o/BingGPT !2025-03-289185-1 |Desktop application of new Bing's AI-powered chat (Windows, macOS and Linux)| | 539|browserbase/stagehand !2025-03-28917621|An AI web browsing framework focused on simplicity and extensibility.| | 540|FlagOpen/FlagEmbedding !2025-03-28914111|Dense Retrieval and Retrieval-augmented LLMs| | 541|Const-me/Whisper !2025-03-2890979|High-performance GPGPU inference of OpenAI's Whisper automatic speech recognition (ASR) model| | 542|lucidrains/denoising-diffusion-pytorch !2025-03-2890942|Implementation of Denoising Diffusion Probabilistic Model in Pytorch| | 543|Chainlit/chainlit !2025-03-28904422|Build Conversational AI in minutes ⚡️| | 544|togethercomputer/OpenChatKit !2025-03-2890160 |OpenChatKit provides a powerful, open-source base to create both specialized and general purpose chatbots for various applications| | 545|Stability-AI/StableStudio !2025-03-2889631 |Community interface for generative AI| | 546|voicepaw/so-vits-svc-fork !2025-03-2889482 |so-vits-svc fork with realtime support, improved interface and more features.| | 547|pymc-devs/pymc !2025-03-2889413|Bayesian Modeling and Probabilistic Programming in Python| | 548|espnet/espnet !2025-03-2889302|End-to-End Speech Processing Toolkit| | 549|kedacore/keda
!2025-03-2888991|KEDA is a Kubernetes-based Event Driven Autoscaling component. It provides event driven scale for any container running in Kubernetes| | 550|open-mmlab/Amphion !2025-03-28886911|Amphion (/æmˈfaɪən/) is a toolkit for Audio, Music, and Speech Generation. Its purpose is to support reproducible research and help junior researchers and engineers get started in the field of audio, music, and speech generation research and development.| | 551|gorse-io/gorse !2025-03-2888451|Gorse open source recommender system engine| | 552|adams549659584/go-proxy-bingai !2025-03-288768-1 |A Microsoft New Bing demo site built with Vue3 and Go, providing a consistent UI experience, supporting ChatGPT prompts, and accessible within China| | 553|open-mmlab/mmsegmentation !2025-03-2887513|OpenMMLab Semantic Segmentation Toolbox and Benchmark.| | 554|bytedance/monolith !2025-03-2887223|ByteDance's Recommendation System| | 555|LouisShark/chatgpt_system_prompt !2025-03-2887216|A collection of agents' system prompts| | 556|brexhq/prompt-engineering !2025-03-2887080 |Tips and tricks for working with Large Language Models like OpenAI's GPT-4.| | 557|erincatto/box2d !2025-03-2886841|Box2D is a 2D physics engine for games| | 558|🔥microsoft/ai-agents-for-beginners !2025-03-288669323|10 Lessons to Get Started Building AI Agents| | 559|nashsu/FreeAskInternet !2025-03-2886102|FreeAskInternet is a completely free, private and locally running search aggregator & answer generator using LLMs, with no GPU needed. The user asks a question, and the system runs a multi-engine search, combines the search results with the ChatGPT-3.5 LLM, and generates an answer based on those results.| | 560|goldmansachs/gs-quant !2025-03-2885981|Python toolkit for quantitative finance| | 561|srbhr/Resume-Matcher !2025-03-2885800|Open Source Free ATS Tool to compare Resumes with Job Descriptions and create a score to rank them.| | 562|facebookresearch/ImageBind !2025-03-2885681 |ImageBind: One Embedding Space to Bind Them All| | 563|ashawkey/stable-dreamfusion !2025-03-2885481 |A pytorch implementation of text-to-3D dreamfusion, powered by stable diffusion.| | 564|meetecho/janus-gateway !2025-03-2885232|Janus WebRTC Server| | 565|google/magika !2025-03-2885003|Detect file content types with deep learning| | 566|huggingface/chat-ui !2025-03-2884871 |Open source codebase powering the HuggingChat app| | 567|EleutherAI/lm-evaluation-harness !2025-03-28843012|A framework for few-shot evaluation of autoregressive language models.| | 568|jina-ai/reader !2025-03-2884089|Convert any URL to an LLM-friendly input with a simple prefix https://r.jina.ai/| | 569|microsoft/TypeChat !2025-03-288406-1|TypeChat is a library that makes it easy to build natural language interfaces using types.| | 570|thuml/Time-Series-Library !2025-03-28839715|A Library for Advanced Deep Time Series Models.| | 571|OptimalScale/LMFlow !2025-03-2883882|An Extensible Toolkit for Finetuning and Inference of Large Foundation Models. Large Model for All.| | 572|baptisteArno/typebot.io !2025-03-2883845|💬 Typebot is a powerful chatbot builder that you can self-host.| | 573|jzhang38/TinyLlama !2025-03-2883504|The TinyLlama project is an open endeavor to pretrain a 1.1B Llama model on 3 trillion tokens.| | 574|fishaudio/Bert-VITS2 !2025-03-2883472|vits2 backbone with multilingual-bert| | 575|OpenBMB/XAgent !2025-03-2882683|An Autonomous LLM Agent for Complex Task Solving| | 576|Acly/krita-ai-diffusion !2025-03-2882387|Streamlined interface for generating images with AI in Krita.
Inpaint and outpaint with optional text prompt, no tweaking required.| | 577|jasonppy/VoiceCraft !2025-03-2882151|Zero-Shot Speech Editing and Text-to-Speech in the Wild| | 578|SJTU-IPADS/PowerInfer !2025-03-2881693|High-speed Large Language Model Serving on PCs with Consumer-grade GPUs| | 579|modelscope/DiffSynth-Studio !2025-03-28814713|Enjoy the magic of Diffusion models!| | 580|o3de/o3de !2025-03-2881443|Open 3D Engine (O3DE) is an Apache 2.0-licensed multi-platform 3D engine that enables developers and content creators to build AAA games, cinema-quality 3D worlds, and high-fidelity simulations without any fees or commercial obligations.| | 581|zmh-program/chatnio !2025-03-2881325|🚀 Next Generation AI One-Stop Solution for both business and consumer use, supporting OpenAI, Midjourney, Claude, iFlytek Spark, Stable Diffusion, DALL·E, ChatGLM, Tongyi Qianwen, Tencent Hunyuan, 360 Zhinao, Baichuan AI, Volcano Ark, New Bing, Gemini, Moonshot and more; supports conversation sharing, custom presets, cloud sync, a model marketplace, elastic billing and subscription plans, image parsing, web search, model caching, and a rich admin dashboard with usage statistics.| | 582|leptonai/search_with_lepton !2025-03-2880632|Building a quick conversation-based search demo with Lepton AI.| | 583|sebastianstarke/AI4Animation !2025-03-2880620|Bringing Characters to Life with Computer Brains in Unity| | 584|wangrongding/wechat-bot !2025-03-2880528|🤖 A WeChat bot built on WeChaty and AI services such as DeepSeek / ChatGPT / Kimi / iFlytek; it can auto-reply to WeChat messages, manage WeChat groups/friends, detect zombie followers, and more...| | 585|openvinotoolkit/openvino !2025-03-2880528|OpenVINO™ is an open-source toolkit for optimizing and deploying AI inference| | 586|steven2358/awesome-generative-ai !2025-03-28802610|A curated list of modern Generative Artificial Intelligence projects and services| | 587|adam-maj/tiny-gpu !2025-03-2880234|A minimal GPU design in Verilog to learn how GPUs work from the ground up| | 588| anse-app/chatgpt-demo !2025-03-2880180 | A demo repo based on OpenAI API (gpt-3.5-turbo) | | 589| acheong08/EdgeGPT !2025-03-288015-1 |Reverse engineered API of Microsoft's Bing Chat | | 590|ai-collection/ai-collection !2025-03-2879994 |The Generative AI Landscape - A Collection of Awesome Generative AI Applications| | 591|GreyDGL/PentestGPT !2025-03-2879953 |A GPT-empowered penetration testing tool| | 592|delta-io/delta !2025-03-2879112|An open-source storage framework that enables building a Lakehouse architecture with compute engines including Spark, PrestoDB, Flink, Trino, and Hive and APIs| | 593|dataelement/bisheng !2025-03-2879085|Bisheng is an open LLM devops platform for next generation AI applications.| | 594|e2b-dev/e2b !2025-03-2878447 |Vercel for AI agents. We help developers build, deploy, and monitor AI agents. Focusing on specialized AI agents that build software for you - your personal software developers.| | 595|01-ai/Yi !2025-03-2878311|A series of large language models trained from scratch by developers @01-ai| | 596|Plachtaa/VALL-E-X !2025-03-287830-1|An open source implementation of Microsoft's VALL-E X zero-shot TTS model.
The demo is available at https://plachtaa.github.io| | 597|abhishekkrthakur/approachingalmost !2025-03-2878204|Approaching (Almost) Any Machine Learning Problem| | 598|pydantic/pydantic-ai !2025-03-28781041|Agent Framework / shim to use Pydantic with LLMs| | 599|rany2/edge-tts !2025-03-2877901|Use Microsoft Edge's online text-to-speech service from Python WITHOUT needing Microsoft Edge or Windows or an API key| | 600|CASIA-IVA-Lab/FastSAM !2025-03-2877881|Fast Segment Anything| | 601|netease-youdao/EmotiVoice !2025-03-2877817|EmotiVoice 😊: a Multi-Voice and Prompt-Controlled TTS Engine| | 602|lllyasviel/IC-Light !2025-03-2877804|More relighting!| | 603|kroma-network/tachyon !2025-03-287774-1|Modular ZK (zero-knowledge) backend accelerated by GPU| | 604|deep-floyd/IF !2025-03-2877731 |A novel state-of-the-art open-source text-to-image model with a high degree of photorealism and language understanding| | 605|oumi-ai/oumi !2025-03-2877705|Everything you need to build state-of-the-art foundation models, end-to-end.| | 606|reorproject/reor !2025-03-2877681|AI note-taking app that runs models locally.| | 607|lightpanda-io/browser !2025-03-28775813|Lightpanda: the headless browser designed for AI and automation| | 608|xiangsx/gpt4free-ts !2025-03-287755-1|Providing a free OpenAI GPT-4 API! This is a replication project for the TypeScript version of xtekky/gpt4free| | 609|IDEA-Research/GroundingDINO !2025-03-28773311|Official implementation of the paper "Grounding DINO: Marrying DINO with Grounded Pre-Training for Open-Set Object Detection"| | 610|bunkerity/bunkerweb !2025-03-2877326|🛡️ Make your web services secure by default!| | 611|vikhyat/moondream !2025-03-2877057|tiny vision language model| | 612|firmai/financial-machine-learning !2025-03-287703-1|A curated list of practical financial machine learning tools and applications.| | 613|n8n-io/self-hosted-ai-starter-kit !2025-03-28765121|The Self-hosted AI Starter Kit is an open-source template that quickly sets up a local AI environment. Curated by n8n, it provides essential tools for creating secure, self-hosted AI workflows.| | 614|intel-analytics/ipex-llm !2025-03-2876507|Accelerate local LLM inference and finetuning (LLaMA, Mistral, ChatGLM, Qwen, Baichuan, Mixtral, Gemma, etc.) on Intel CPU and GPU (e.g., local PC with iGPU, discrete GPU such as Arc, Flex and Max). A PyTorch LLM library that seamlessly integrates with llama.cpp, HuggingFace, LangChain, LlamaIndex, DeepSpeed, vLLM, FastChat, ModelScope, etc.| | 615|jrouwe/JoltPhysics !2025-03-28764510|A multi core friendly rigid body physics and collision detection library. Written in C++. Suitable for games and VR applications. Used by Horizon Forbidden West.| | 616|THUDM/CodeGeeX2 !2025-03-2876270|CodeGeeX2: A More Powerful Multilingual Code Generation Model| | 617|meta-llama/llama-stack !2025-03-2875866|Composable building blocks to build Llama Apps| | 618|sweepai/sweep !2025-03-287530-1|Sweep is an AI junior developer| | 619|lllyasviel/Omost !2025-03-2875301|Your image is almost there!| | 620|ahmedbahaaeldin/From-0-to-Research-Scientist-resources-guide !2025-03-2875050|A detailed and tailored guide for undergraduate students or anybody who wants to dig deep into the field of AI with a solid foundation.| | 621|dair-ai/ML-Papers-Explained !2025-03-2875050|Explanations of key concepts in ML| | 622|zaidmukaddam/scira !2025-03-28750110|Scira (Formerly MiniPerplx) is a minimalistic AI-powered search engine that helps you find information on the internet. Powered by Vercel AI SDK!
Search with models like Grok 2.0.| | 623|Portkey-AI/gateway !2025-03-28749416|A Blazing Fast AI Gateway. Route to 100+ LLMs with 1 fast & friendly API.| | 624|web-infra-dev/midscene !2025-03-28748729|An AI-powered automation SDK that can control the page, perform assertions, and extract data in JSON format using natural language.| | 625|zilliztech/GPTCache !2025-03-2874801 |GPTCache is a library for creating a semantic cache to store responses from LLM queries (see the toy semantic-cache sketch at the end of this section).| | 626|niedev/RTranslator !2025-03-2874742|RTranslator is the world's first open source real-time translation app.| |!green-up-arrow.svg 627|roboflow/notebooks !2025-03-2874666|Examples and tutorials on using SOTA computer vision models and techniques. Learn everything from old-school ResNet, through YOLO and object-detection transformers like DETR, to the latest models like Grounding DINO and SAM.| |!red-down-arrow 628|openlm-research/openllama !2025-03-2874652|OpenLLaMA, a permissively licensed open source reproduction of Meta AI’s LLaMA 7B trained on the RedPajama dataset| | 629|LiheYoung/Depth-Anything !2025-03-2874155|Depth Anything: Unleashing the Power of Large-Scale Unlabeled Data| | 630|enso-org/enso !2025-03-2874040|Hybrid visual and textual functional programming.| | 631|bigcode-project/starcoder !2025-03-287401-1 |Home of StarCoder: fine-tuning & inference!| | 632|git-ecosystem/git-credential-manager !2025-03-2873975|Secure, cross-platform Git credential storage with authentication to GitHub, Azure Repos, and other popular Git hosting services.| | 633|OpenGVLab/InternVL !2025-03-2873634|[CVPR 2024 Oral] InternVL Family: A Pioneering Open-Source Alternative to GPT-4V. A commercially usable open-source model approaching GPT-4V performance| | 634|WooooDyy/LLM-Agent-Paper-List !2025-03-2873551|The paper list of the 86-page paper "The Rise and Potential of Large Language Model Based Agents: A Survey" by Zhiheng Xi et al.| | 635|lencx/Noi !2025-03-2873157|🦄 AI + Tools + Plugins + Community| | 636|udlbook/udlbook !2025-03-2873075|Understanding Deep Learning - Simon J.D. Prince| | 637|OpenBMB/MiniCPM !2025-03-2872841|MiniCPM-2B: An end-side LLM that outperforms Llama2-13B.| | 638|jaywalnut310/vits !2025-03-2872815 |VITS: Conditional Variational Autoencoder with Adversarial Learning for End-to-End Text-to-Speech| | 639|xorbitsai/inference !2025-03-28727528|Replace OpenAI GPT with another LLM in your app by changing a single line of code. Xinference gives you the freedom to use any LLM you need. With Xinference, you're empowered to run inference with any open-source language models, speech recognition models, and multimodal models, whether in the cloud, on-premises, or even on your laptop.| | 640|PWhiddy/PokemonRedExperiments !2025-03-2872492|Playing Pokemon Red with Reinforcement Learning| | 641|Canner/WrenAI !2025-03-28723213|🤖 Open-source AI Agent that empowers data-driven teams to chat with their data to generate Text-to-SQL, charts, spreadsheets, reports, and BI. 📈📊📋🧑‍💻| | 642|miurla/morphic !2025-03-2872258|An AI-powered answer engine with a generative UI| | 643|ml-explore/mlx-examples !2025-03-2872168|Examples in the MLX framework| | 644|PKU-YuanGroup/ChatLaw !2025-03-2872010|Chinese Legal Large Model| | 645|NVIDIA/cutlass !2025-03-2871883|CUDA Templates for Linear Algebra Subroutines| | 646|FoundationVision/VAR !2025-03-28717444|[GPT beats diffusion🔥] [scaling laws in visual generation📈] Official impl.
of "Visual Autoregressive Modeling: Scalable Image Generation via Next-Scale Prediction"| | 647|ymcui/Chinese-LLaMA-Alpaca-2 !2025-03-2871561|Chinese LLaMA-2 & Alpaca-2 LLMs| | 648|nadermx/backgroundremover !2025-03-2871514 |Background Remover lets you Remove Background from images and video using AI with a simple command line interface that is free and open source.| | 649|onuratakan/gpt-computer-assistant !2025-03-28714514|gpt-4o for windows, macos and ubuntu| | 650|graviraja/MLOps-Basics !2025-03-2871326|| | 651|Future-House/paper-qa !2025-03-287118-1|High accuracy RAG for answering questions from scientific documents with citations| | 652|open-mmlab/mmagic !2025-03-2871102 |OpenMMLab Multimodal Advanced, Generative, and Intelligent Creation Toolbox| | 653|bhaskatripathi/pdfGPT !2025-03-2870941 |PDF GPT allows you to chat with the contents of your PDF file by using GPT capabilities. The only open source solution to turn your pdf files in a chatbot!| | 654|ollama/ollama-python !2025-03-28709117|Ollama Python library| | 655|facebookresearch/DiT !2025-03-2870376|Official PyTorch Implementation of "Scalable Diffusion Models with Transformers"| | 656|geekyutao/Inpaint-Anything !2025-03-2870262 |Inpaint anything using Segment Anything and inpainting models.| | 657|AbdullahAlfaraj/Auto-Photoshop-StableDiffusion-Plugin !2025-03-2870160 |A user-friendly plug-in that makes it easy to generate stable diffusion images inside Photoshop using Automatic1111-sd-webui as a backend.| | 658|apple/corenet !2025-03-2869990|CoreNet: A library for training deep neural networks| | 659|openstatusHQ/openstatus !2025-03-2869926|🏓 The open-source synthetic monitoring platform 🏓| | 660|weaviate/Verba !2025-03-2869772|Retrieval Augmented Generation (RAG) chatbot powered by Weaviate| | 661|meshery/meshery !2025-03-2869630|Meshery, the cloud native manager| | 662|OpenTalker/video-retalking !2025-03-2869530|[SIGGRAPH Asia 2022] VideoReTalking: Audio-based Lip Synchronization for Talking Head Video Editing In the Wild| | 663|digitalinnovationone/dio-lab-open-source !2025-03-28689013|Repositório do lab "Contribuindo em um Projeto Open Source no GitHub" da Digital Innovation One.| | 664|jianchang512/ChatTTS-ui !2025-03-2868842|一个简单的本地网页界面,直接使用ChatTTS将文字合成为语音,同时支持对外提供API接口。| | 665|patchy631/ai-engineering-hub !2025-03-28686434|In-depth tutorials on LLMs, RAGs and real-world AI agent applications.| | 666|gunnarmorling/1brc !2025-03-2868512|1️⃣🐝🏎️ The One Billion Row Challenge -- A fun exploration of how quickly 1B rows from a text file can be aggregated with Java| | 667|Azure-Samples/azure-search-openai-demo !2025-03-2868482 |A sample app for the Retrieval-Augmented Generation pattern running in Azure, using Azure Cognitive Search for retrieval and Azure OpenAI large language models to power ChatGPT-style and Q&A experiences.| | 668|mit-han-lab/streaming-llm !2025-03-2868382|Efficient Streaming Language Models with Attention Sinks| | 669|InternLM/InternLM !2025-03-2868352|InternLM has open-sourced a 7 billion parameter base model, a chat model tailored for practical scenarios and the training system.| | 670|dependency-check/DependencyCheck !2025-03-2868191|OWASP dependency-check is a software composition analysis utility that detects publicly disclosed vulnerabilities in application dependencies.| | 671|Soulter/AstrBot !2025-03-28678643|✨易上手的多平台 LLM 聊天机器人及开发框架✨。支持 QQ、QQ频道、Telegram、微信平台(Gewechat, 企业微信)、内置 Web Chat,OpenAI GPT、DeepSeek、Ollama、Llama、GLM、Gemini、OneAPI、LLMTuner,支持 LLM Agent 插件开发,可视化面板。一键部署。支持 Dify 
| 672|react-native-webview/react-native-webview !2025-03-2867792|React Native Cross-Platform WebView| | 673|modelscope/agentscope !2025-03-28676916|Start building LLM-empowered multi-agent applications in an easier way.| | 674|mylxsw/aidea !2025-03-2867381|AIdea is a versatile app that supports GPT and domestic large language models, and also supports "Stable Diffusion" text-to-image generation, image-to-image generation, SDXL 1.0, super-resolution, and image colorization| | 675|langchain-ai/ollama-deep-researcher !2025-03-28668635|Fully local web research and report writing assistant| | 676|threestudio-project/threestudio !2025-03-2866653|A unified framework for 3D content generation.| | 677|gaomingqi/Track-Anything !2025-03-2866631 |A flexible and interactive tool for video object tracking and segmentation, based on Segment Anything, XMem, and E2FGVI.| | 678|spdustin/ChatGPT-AutoExpert !2025-03-2866570|🚀🧠💬 Supercharged Custom Instructions for ChatGPT (non-coding) and ChatGPT Advanced Data Analysis (coding).| | 679|HariSekhon/DevOps-Bash-tools !2025-03-2866463|1000+ DevOps Bash Scripts - AWS, GCP, Kubernetes, Docker, CI/CD, APIs, SQL, PostgreSQL, MySQL, Hive, Impala, Kafka, Hadoop, Jenkins, GitHub, GitLab, BitBucket, Azure DevOps, TeamCity, Spotify, MP3, LDAP, Code/Build Linting, pkg mgmt for Linux, Mac, Python, Perl, Ruby, NodeJS, Golang, Advanced dotfiles: .bashrc, .vimrc, .gitconfig, .screenrc, tmux..| | 680|modelscope/swift !2025-03-28661530|ms-swift: Use PEFT or Full-parameter to finetune 200+ LLMs or 15+ MLLMs| | 681|langchain-ai/opengpts !2025-03-2866080|This is an open source effort to create a similar experience to OpenAI's GPTs and Assistants API| | 682| yihong0618/xiaogpt !2025-03-2865131 | Play ChatGPT with xiaomi ai speaker | | 683| civitai/civitai !2025-03-2865111 | Build a platform where people can share their stable diffusion models | | 684|KoljaB/RealtimeSTT !2025-03-28649513|A robust, efficient, low-latency speech-to-text library with advanced voice activity detection, wake word activation and instant transcription.| | 685|qunash/chatgpt-advanced !2025-03-2864910 | A browser extension that augments your ChatGPT prompts with web results.| | 686|Licoy/ChatGPT-Midjourney !2025-03-2864850|🎨 Own your own ChatGPT+Midjourney web service with one click| | 687|friuns2/BlackFriday-GPTs-Prompts !2025-03-2864744|List of free GPTs that don't require a Plus subscription| | 688|PixarAnimationStudios/OpenUSD !2025-03-2864700|Universal Scene Description| | 689|linyiLYi/street-fighter-ai !2025-03-2864630 |This is an AI agent for Street Fighter II Champion Edition.| | 690|run-llama/rags !2025-03-2864380|Build ChatGPT over your data, all with natural language| | 691|frdel/agent-zero !2025-03-2864154|Agent Zero AI framework| | 692|microsoft/DeepSpeedExamples !2025-03-2863911 |Example models using DeepSpeed| | 693|k8sgpt-ai/k8sgpt !2025-03-2863882|Giving Kubernetes Superpowers to everyone| | 694|open-metadata/OpenMetadata !2025-03-2863514|OpenMetadata is a unified platform for discovery, observability, and governance powered by a central metadata repository, in-depth lineage, and seamless team collaboration.| | 695|google/gemma.cpp !2025-03-2863163|lightweight, standalone C++ inference engine for Google's Gemma models.| | 696|RayVentura/ShortGPT !2025-03-286314-1|🚀🎬 ShortGPT - An experimental AI framework for automated short/video content creation.
Enables creators to rapidly produce, manage, and deliver content using AI and automation.| | 697|openai/consistency_models !2025-03-2862940 |Official repo for consistency models.| | 698|yangjianxin1/Firefly !2025-03-2862924|Firefly: Chinese conversational large language model (full-scale fine-tuning + QLoRA), supporting fine-tuning of Llama2, Llama, Baichuan, InternLM, Ziya, Bloom, and other large models| | 699|enricoros/big-AGI !2025-03-2862665|Generative AI suite powered by state-of-the-art models and providing advanced AI/AGI functions. It features AI personas, AGI functions, multi-model chats, text-to-image, voice, response streaming, code highlighting and execution, PDF import, presets for developers, much more. Deploy on-prem or in the cloud.| | 700|aptos-labs/aptos-core !2025-03-2862633|Aptos is a layer 1 blockchain built to support the widespread use of blockchain through better technology and user experience.| | 701|wenda-LLM/wenda !2025-03-286262-1 |Wenda: An LLM invocation platform. Its objective is to achieve efficient content generation tailored to specific environments while considering the limited computing resources of individuals and small businesses, as well as knowledge security and privacy concerns| | 702|Project-MONAI/MONAI !2025-03-2862603|AI Toolkit for Healthcare Imaging| | 703|HVision-NKU/StoryDiffusion !2025-03-2862470|Create Magic Story!| | 704|deepseek-ai/DeepSeek-LLM !2025-03-2862463|DeepSeek LLM: Let there be answers| | 705|Tohrusky/Final2x !2025-03-2862393|2^x Image Super-Resolution| | 706|OpenSPG/KAG !2025-03-28619611|KAG is a logical form-guided reasoning and retrieval framework based on the OpenSPG engine and LLMs. It is used to build logical reasoning and factual Q&A solutions for professional domain knowledge bases. It can effectively overcome the shortcomings of the traditional RAG vector similarity calculation model.| | 707|Moonvy/OpenPromptStudio !2025-03-2861861 |AIGC Prompt Visualization Editor| | 708|levihsu/OOTDiffusion !2025-03-2861761|Official implementation of OOTDiffusion| | 709|tmc/langchaingo !2025-03-2861729|LangChain for Go, the easiest way to write LLM-based programs in Go| | 710|vladmandic/automatic !2025-03-2861374|SD.Next: Advanced Implementation of Stable Diffusion and other Diffusion-based generative image models| | 711|clovaai/donut !2025-03-2861231 |Official Implementation of OCR-free Document Understanding Transformer (Donut) and Synthetic Document Generator (SynthDoG), ECCV 2022| | 712|Shaunwei/RealChar !2025-03-286121-1|🎙️🤖Create, Customize and Talk to your AI Character/Companion in Realtime (All in One Codebase!). Have a natural seamless conversation with AI everywhere (mobile, web and terminal) using LLM OpenAI GPT3.5/4, Anthropic Claude2, Chroma Vector DB, Whisper Speech2Text, ElevenLabs Text2Speech🎙️🤖| | 713|microsoft/TinyTroupe !2025-03-2861142|LLM-powered multiagent persona simulation for imagination enhancement and business insights.| | 714| rustformers/llm !2025-03-2861010 | Run inference for Large Language Models on CPU, with Rust| | 715|firebase/firebase-ios-sdk !2025-03-2860950|Firebase SDK for Apple App Development| | 716|vespa-engine/vespa !2025-03-2860824|The open big data serving engine.
https://vespa.ai| | 717|n4ze3m/page-assist !2025-03-28607610|Use your locally running AI models to assist you in your web browsing| | 718|Dooy/chatgpt-web-midjourney-proxy !2025-03-2860646|ChatGPT web UI with Midjourney, GPTs, TTS, and Whisper: one UI for everything| | 719|ethereum-optimism/optimism !2025-03-2860213|Optimism is Ethereum, scaled.| | 720|sczhou/ProPainter !2025-03-2859971|[ICCV 2023] ProPainter: Improving Propagation and Transformer for Video Inpainting| | 721|MineDojo/Voyager !2025-03-2859951 |An Open-Ended Embodied Agent with Large Language Models| | 722|lavague-ai/LaVague !2025-03-2859800|Automate automation with a Large Action Model framework| | 723|SevaSk/ecoute !2025-03-2859770 |Ecoute is a live transcription tool that provides real-time transcripts for both the user's microphone input (You) and the user's speakers output (Speaker) in a textbox. It also generates a suggested response using OpenAI's GPT-3.5 for the user to say based on the live transcription of the conversation.| | 724|google/mesop !2025-03-2859661|| | 725|pengxiao-song/LaWGPT !2025-03-2859542 |Repo for LaWGPT, Chinese-Llama tuned with Chinese Legal knowledge| | 726|fr0gger/Awesome-GPT-Agents !2025-03-2859434|A curated list of GPT agents for cybersecurity| | 727|google-deepmind/graphcast !2025-03-2859412|| | 728|comet-ml/opik !2025-03-28594126|Open-source end-to-end LLM Development Platform| | 729|SciPhi-AI/R2R !2025-03-28594033|A framework for rapid development and deployment of production-ready RAG systems| | 730|SkalskiP/courses !2025-03-2859272 |This repository is a curated collection of links to various courses and resources about Artificial Intelligence (AI)| | 731|QuivrHQ/MegaParse !2025-03-2859122|File Parser optimised for LLM Ingestion with no loss 🧠 Parse PDFs, Docx, PPTx in a format that is ideal for LLMs.| | 732|pytorch-labs/gpt-fast !2025-03-2858971|Simple and efficient pytorch-native transformer text generation in <1000 LOC of python.| | 733|ai-boost/awesome-prompts !2025-03-2858886|Curated list of chatgpt prompts from the top-rated GPTs in the GPTs Store. Prompt Engineering, prompt attack & prompt protect. Advanced Prompt Engineering papers.| | 734|nilsherzig/LLocalSearch !2025-03-2858852|LLocalSearch is a completely locally running search aggregator using LLM Agents. The user can ask a question and the system will use a chain of LLMs to find the answer. The user can see the progress of the agents and the final answer. No OpenAI or Google API keys are needed.| | 735|kuafuai/DevOpsGPT !2025-03-285874-2|Multi agent system for AI-driven software development. Convert natural language requirements into working software. Supports any development language and extends the existing base code.| | 736|myshell-ai/MeloTTS !2025-03-2858486|High-quality multi-lingual text-to-speech library by MyShell.ai. Support English, Spanish, French, Chinese, Japanese and Korean.| | 737|OpenGVLab/LLaMA-Adapter !2025-03-2858421 |Fine-tuning LLaMA to follow Instructions within 1 Hour and 1.2M Parameters| | 738|volcengine/verl !2025-03-28582563|veRL: Volcano Engine Reinforcement Learning for LLM| | 739|a16z-infra/companion-app !2025-03-2858171|AI companions with memory: a lightweight stack to create and host your own AI companions| | 740|HumanAIGC/OutfitAnyone !2025-03-285816-1|Outfit Anyone: Ultra-high quality virtual try-on for Any Clothing and Any Person| | 741|josStorer/RWKV-Runner !2025-03-2857472|A RWKV management and startup tool, full automation, only 8MB. And provides an interface compatible with the OpenAI API.
RWKV is a large language model that is fully open source and available for commercial use.| | 742|648540858/wvp-GB28181-pro !2025-03-2857414|WEB VIDEO PLATFORM is a network video platform based on the GB28181-2016 standard. It supports NAT traversal and IPC/NVR/DVR devices from brands such as Hikvision, Dahua, and Uniview; supports national-standard cascading; and supports forwarding rtsp/rtmp streams to the national-standard platform as well as pushing rtsp/rtmp streams to it.| | 743|ToonCrafter/ToonCrafter !2025-03-2857345|a research paper for generative cartoon interpolation| | 744|PawanOsman/ChatGPT !2025-03-2857191|OpenAI API Free Reverse Proxy| | 745|apache/hudi !2025-03-2857091|Upserts, Deletes And Incremental Processing on Big Data.| | 746| nsarrazin/serge !2025-03-2857081 | A web interface for chatting with Alpaca through llama.cpp. Fully dockerized, with an easy-to-use API| | 747|homanp/superagent !2025-03-2857021|🥷 Superagent - Build, deploy, and manage LLM-powered agents| | 748|ramonvc/freegpt-webui !2025-03-2856910|GPT 3.5/4 with a Chat Web UI. No API key is required.| | 749|baichuan-inc/baichuan-7B !2025-03-2856901|A large-scale 7B pretraining language model developed by BaiChuan-Inc.| | 750|Azure/azure-sdk-for-net !2025-03-2856792|This repository is for active development of the Azure SDK for .NET. For consumers of the SDK we recommend visiting our public developer docs at https://learn.microsoft.com/dotnet/azure/ or our versioned developer docs at https://azure.github.io/azure-sdk-for-net.| | 751|mnotgod96/AppAgent !2025-03-2856643|AppAgent: Multimodal Agents as Smartphone Users, an LLM-based multimodal agent framework designed to operate smartphone apps.| | 752|microsoft/TaskWeaver !2025-03-2856243|A code-first agent framework for seamlessly planning and executing data analytics tasks.| | 753| yetone/bob-plugin-openai-translator !2025-03-285600-1 | A Bob plugin based on the ChatGPT API | | 754|PrefectHQ/marvin !2025-03-2855840 |A batteries-included library for building AI-powered software| | 755|microsoft/promptbase !2025-03-2855832|All things prompt engineering| | 756|fullstackhero/dotnet-starter-kit !2025-03-2855560|Production Grade Cloud-Ready .NET 8 Starter Kit (Web API + Blazor Client) with Multitenancy Support, and Clean/Modular Architecture that saves roughly 200+ Development Hours! All Batteries Included.| | 757|deepseek-ai/DeepSeek-Coder-V2 !2025-03-2855435|DeepSeek-Coder-V2: Breaking the Barrier of Closed-Source Models in Code Intelligence| | 758|aiwaves-cn/agents !2025-03-2855391|An Open-source Framework for Autonomous Language Agents| | 759|microsoft/Mastering-GitHub-Copilot-for-Paired-Programming !2025-03-2855158|A 6 Lesson course teaching everything you need to know about harnessing GitHub Copilot and an AI pair-programming resource.| | 760|allenai/OLMo !2025-03-2854506|Modeling, training, eval, and inference code for OLMo| | 761|apify/crawlee-python !2025-03-2854493|Crawlee—A web scraping and browser automation library for Python to build reliable crawlers. Extract data for AI, LLMs, RAG, or GPTs. Download HTML, PDF, JPG, PNG, and other files from websites. Works with BeautifulSoup, Playwright, and raw HTTP. Both headful and headless mode. With proxy rotation.| | 762|k2-fsa/sherpa-onnx !2025-03-28541520|Speech-to-text, text-to-speech, and speaker recognition using next-gen Kaldi with onnxruntime without Internet connection. Support embedded systems, Android, iOS, Raspberry Pi, RISC-V, x86_64 servers, websocket server/client, C/C++, Python, Kotlin, C#, Go, NodeJS, Java, Swift| | 763|TEN-framework/TEN-Agent !2025-03-28541411|TEN Agent is a realtime conversational AI agent powered by TEN.
It seamlessly integrates the OpenAI Realtime API, RTC capabilities, and advanced features like weather updates, web search, computer vision, and Retrieval-Augmented Generation (RAG).| | 764|google/gemma_pytorch !2025-03-2854010|The official PyTorch implementation of Google's Gemma models| | 765|snakers4/silero-vad !2025-03-2853858|Silero VAD: pre-trained enterprise-grade Voice Activity Detector| | 766|livekit/agents !2025-03-2853836|Build real-time multimodal AI applications 🤖🎙️📹| | 767|pipecat-ai/pipecat !2025-03-28537811|Open Source framework for voice and multimodal conversational AI| | 768|EricLBuehler/mistral.rs !2025-03-28536324|Blazingly fast LLM inference.| | 769|asg017/sqlite-vec !2025-03-28535810|Work-in-progress vector search SQLite extension that runs anywhere.| | 770|albertan017/LLM4Decompile !2025-03-2853563|Reverse Engineering: Decompiling Binary Code with Large Language Models| | 771|Permify/permify !2025-03-2853235|An open-source authorization as a service inspired by Google Zanzibar, designed to build and manage fine-grained and scalable authorization systems for any application.| | 772|imoneoi/openchat !2025-03-2853171|OpenChat: Advancing Open-source Language Models with Imperfect Data| | 773|mosaicml/composer !2025-03-2853140|Train neural networks up to 7x faster| | 774|dsdanielpark/Bard-API !2025-03-285277-1 |The python package that returns a response of Google Bard through API.| | 775|lxfater/inpaint-web !2025-03-2852552|A free and open-source inpainting & image-upscaling tool powered by webgpu and wasm in the browser.| | 776|leanprover/lean4 !2025-03-2852441|Lean 4 programming language and theorem prover| | 777|AILab-CVC/YOLO-World !2025-03-2852415|Real-Time Open-Vocabulary Object Detection| | 778|openchatai/OpenChat !2025-03-2852260 |Run and create custom ChatGPT-like bots with OpenChat, embed and share these bots anywhere, the open-source chatbot console.| | 779|mufeedvh/code2prompt !2025-03-28519414|A CLI tool to convert your codebase into a single LLM prompt with source tree, prompt templating, and token counting.| | 780|biobootloader/wolverine !2025-03-2851700 |Automatically repair python scripts through GPT-4 to give them regenerative abilities.| | 781|huggingface/parler-tts !2025-03-2851671|Inference and training library for high-quality TTS models.| | 782|Akegarasu/lora-scripts !2025-03-2851308 |LoRA training scripts using kohya-ss's trainer, for diffusion models.| | 783|openchatai/OpenCopilot !2025-03-285128-3|🤖 🔥 Let your users chat with your product features and execute things by text - open source Shopify sidekick| | 784|e2b-dev/fragments !2025-03-2851228|Open-source Next.js template for building apps that are fully generated by AI.
By E2B.| | 785|microsoft/SynapseML !2025-03-2851132|Simple and Distributed Machine Learning| | 786|aigc-apps/sd-webui-EasyPhoto !2025-03-285108-1|📷 EasyPhoto | | 787|ChaoningZhang/MobileSAM !2025-03-2850944|This is the official code for the Faster Segment Anything (MobileSAM) project that makes SAM lightweight| | 788|huggingface/alignment-handbook !2025-03-2850932|Robust recipes to align language models with human and AI preferences| | 789|alpkeskin/mosint !2025-03-2850920|An automated e-mail OSINT tool| | 790|TaskingAI/TaskingAI !2025-03-2850891|The open source platform for AI-native application development.| | 791|lipku/metahuman-stream !2025-03-28507615|Real time interactive streaming digital human| | 792|OpenInterpreter/01 !2025-03-2850530|The open-source language model computer| | 793|open-compass/opencompass !2025-03-28505111|OpenCompass is an LLM evaluation platform, supporting a wide range of models (InternLM2, GPT-4, LLaMa2, Qwen, GLM, Claude, etc) over 100+ datasets.| | 794|xxlong0/Wonder3D !2025-03-2850491|A cross-domain diffusion model for 3D reconstruction from a single image| | 795|pytorch/torchtune !2025-03-2850342|A Native-PyTorch Library for LLM Fine-tuning| | 796|SuperDuperDB/superduperdb !2025-03-2850192|🔮 SuperDuperDB: Bring AI to your database: Integrate, train and manage any AI models and APIs directly with your database and your data.| | 797|WhiskeySockets/Baileys !2025-03-2850057|Lightweight full-featured typescript/javascript WhatsApp Web API| | 798| mpociot/chatgpt-vscode !2025-03-2849890 | A VSCode extension that allows you to use ChatGPT | | 799|OpenGVLab/DragGAN !2025-03-2849880|Unofficial Implementation of DragGAN - "Drag Your GAN: Interactive Point-based Manipulation on the Generative Image Manifold" (full-featured DragGAN implementation with an online demo and local deployment; code and models fully open-sourced; supports Windows, macOS and Linux)| | 800|microsoft/LLMLingua !2025-03-2849824|To speed up LLM inference and enhance LLMs' perception of key information, compress the prompt and KV-Cache, achieving up to 20x compression with minimal performance loss.| | 801|Zipstack/unstract !2025-03-2849745|No-code LLM Platform to launch APIs and ETL Pipelines to structure unstructured documents| | 802|OpenBMB/ToolBench !2025-03-2849621|An open platform for training, serving, and evaluating large language models for tool learning.| | 803|Fanghua-Yu/SUPIR !2025-03-2849593|SUPIR aims at developing Practical Algorithms for Photo-Realistic Image Restoration In the Wild| | 804|GaiaNet-AI/gaianet-node !2025-03-2849360|Install and run your own AI agent service| | 805|qodo-ai/qodo-cover !2025-03-284922-1|Qodo-Cover: An AI-Powered Tool for Automated Test Generation and Code Coverage Enhancement!
💻🤖🧪🐞| | 806|Zejun-Yang/AniPortrait !2025-03-2849042|AniPortrait: Audio-Driven Synthesis of Photorealistic Portrait Animation| | 807|lvwzhen/law-cn-ai !2025-03-2848901 |⚖️ AI Legal Assistant| | 808|developersdigest/llm-answer-engine !2025-03-2848740|Build a Perplexity-Inspired Answer Engine Using Next.js, Groq, Mixtral, Langchain, OpenAI, Brave & Serper| | 809|Plachtaa/VITS-fast-fine-tuning !2025-03-2848640|This repo is a pipeline of VITS finetuning for fast speaker adaptation TTS, and many-to-many voice conversion| | 810|espeak-ng/espeak-ng !2025-03-2848601|eSpeak NG is an open source speech synthesizer that supports more than a hundred languages and accents.| | 811|ant-research/CoDeF !2025-03-2848581|[CVPR'24 Highlight] Official PyTorch implementation of CoDeF: Content Deformation Fields for Temporally Consistent Video Processing| | 812|deepseek-ai/DeepSeek-V2 !2025-03-2848512|| | 813|XRPLF/rippled !2025-03-2848210|Decentralized cryptocurrency blockchain daemon implementing the XRP Ledger protocol in C++| | 814|AutoMQ/automq !2025-03-28478721|AutoMQ is a cloud-first alternative to Kafka by decoupling durability to S3 and EBS. 10x more cost-effective. Autoscales in seconds. Single-digit ms latency.| | 815|AILab-CVC/VideoCrafter !2025-03-2847800|VideoCrafter1: Open Diffusion Models for High-Quality Video Generation| | 816|nautechsystems/nautilustrader !2025-03-2847702|A high-performance algorithmic trading platform and event-driven backtester| | 817|kyegomez/swarms !2025-03-2847563|The Enterprise-Grade Production-Ready Multi-Agent Orchestration Framework. Join our Community: https://discord.com/servers/agora-999382051935506503| | 818|Deci-AI/super-gradients !2025-03-2847310 |Easily train or fine-tune SOTA computer vision models with one open source training library. The home of Yolo-NAS.| | 819|QwenLM/Qwen2.5-Coder !2025-03-2847236|Qwen2.5-Coder is the code version of Qwen2.5, the large language model series developed by the Qwen team, Alibaba Cloud.| | 820|SCIR-HI/Huatuo-Llama-Med-Chinese !2025-03-2847191 |Repo for HuaTuo (华驼), Llama-7B tuned with Chinese medical knowledge| | 821|togethercomputer/RedPajama-Data !2025-03-2846841 |code for preparing large datasets for training large language models| | 822|mishushakov/llm-scraper !2025-03-2846704|Turn any webpage into structured data using LLMs| | 823|1rgs/jsonformer !2025-03-2846663 |A Bulletproof Way to Generate Structured JSON from Language Models (see the validate-and-retry sketch at the end of this section)| | 824|anti-work/shortest !2025-03-2846565|QA via natural language AI tests| | 825|dnhkng/GlaDOS !2025-03-2846510|This is the Personality Core for GLaDOS, the first steps towards a real-life implementation of the AI from the Portal series by Valve.| | 826|Nukem9/dlssg-to-fsr3 !2025-03-2846380|Adds AMD FSR3 Frame Generation to games by replacing Nvidia DLSS-G Frame Generation (nvngx_dlssg).| | 827|BuilderIO/ai-shell !2025-03-2846373 |A CLI that converts natural language to shell commands.| | 828|facebookincubator/AITemplate !2025-03-2846220 |AITemplate is a Python framework which renders neural networks into high performance CUDA/HIP C++ code.
Specialized for FP16 TensorCore (NVIDIA GPU) and MatrixCore (AMD GPU) inference.| | 829|terraform-aws-modules/terraform-aws-eks !2025-03-2846030|Terraform module to create AWS Elastic Kubernetes (EKS) resources 🇺🇦| | 830|timescale/pgai !2025-03-2845915|A suite of tools to develop RAG, semantic search, and other AI applications more easily with PostgreSQL| | 831|awslabs/multi-agent-orchestrator !2025-03-2845788|Flexible and powerful framework for managing multiple AI agents and handling complex conversations| | 832|sanchit-gandhi/whisper-jax !2025-03-2845771 |Optimised JAX code for OpenAI's Whisper Model, largely built on the Hugging Face Transformers Whisper implementation| | 833|NVIDIA/NeMo-Guardrails !2025-03-2845755|NeMo Guardrails is an open-source toolkit for easily adding programmable guardrails to LLM-based conversational systems.| | 834|PathOfBuildingCommunity/PathOfBuilding !2025-03-2845480|Offline build planner for Path of Exile.| | 835|UX-Decoder/Segment-Everything-Everywhere-All-At-Once !2025-03-2845412 |Official implementation of the paper "Segment Everything Everywhere All at Once"| | 836|build-trust/ockam !2025-03-2845171|Orchestrate end-to-end encryption, cryptographic identities, mutual authentication, and authorization policies between distributed applications – at massive scale.| | 837|google-research/timesfm !2025-03-2845135|TimesFM (Time Series Foundation Model) is a pretrained time-series foundation model developed by Google Research for time-series forecasting.| | 838|luosiallen/latent-consistency-model !2025-03-2844842|Latent Consistency Models: Synthesizing High-Resolution Images with Few-Step Inference| | 839|NVlabs/neuralangelo !2025-03-2844740|Official implementation of "Neuralangelo: High-Fidelity Neural Surface Reconstruction" (CVPR 2023)| | 840|kyegomez/tree-of-thoughts !2025-03-2844720 |Plug-and-play implementation of Tree of Thoughts: Deliberate Problem Solving with Large Language Models that elevates model reasoning by at least 70%| | 841|sjvasquez/handwriting-synthesis !2025-03-2844720 |Handwriting Synthesis with RNNs ✏️| | 842| madawei2699/myGPTReader !2025-03-2844420 | A slack bot that can read any webpage, ebook or document and summarize it with chatGPT | | 843|OpenBMB/AgentVerse !2025-03-2844413|🤖 AgentVerse 🪐 provides a flexible framework that simplifies the process of building custom multi-agent environments for large language models (LLMs).| | 844|argmaxinc/WhisperKit !2025-03-2844395|Swift native speech recognition on-device for iOS and macOS applications.| | 845|landing-ai/vision-agent !2025-03-2844346|Vision agent| | 846|InternLM/xtuner !2025-03-2844273|An efficient, flexible and full-featured toolkit for fine-tuning large models (InternLM, Llama, Baichuan, Qwen, ChatGLM)| | 847|google-deepmind/alphageometry !2025-03-284421-1|Solving Olympiad Geometry without Human Demonstrations| | 848|ostris/ai-toolkit !2025-03-2844093|Various AI scripts.
Mostly Stable Diffusion stuff.| | 849|LLM-Red-Team/kimi-free-api !2025-03-2844004|🚀 KIMI AI long-text LLM free API service, supporting high-speed streaming output, web search, long-document reading, image parsing and multi-turn conversation; zero-configuration deployment, multi-token support, automatic cleanup of session traces.| | 850|argilla-io/argilla !2025-03-2843991|Argilla is a collaboration platform for AI engineers and domain experts that require high-quality outputs, full data ownership, and overall efficiency.| | 851|spring-projects/spring-ai !2025-03-28438419|An Application Framework for AI Engineering| | 852|alibaba-damo-academy/FunClip !2025-03-2843555|Open-source, accurate and easy-to-use video clipping tool, with LLM-based AI clipping integrated| | 853|yisol/IDM-VTON !2025-03-2843541|IDM-VTON: Improving Diffusion Models for Authentic Virtual Try-on in the Wild| | 854|fchollet/ARC-AGI !2025-03-2843368|The Abstraction and Reasoning Corpus| | 855|MahmoudAshraf97/whisper-diarization !2025-03-2843064|Automatic Speech Recognition with Speaker Diarization based on OpenAI Whisper| | 856|Speykious/cve-rs !2025-03-2843047|Blazingly 🔥 fast 🚀 memory vulnerabilities, written in 100% safe Rust. 🦀| | 857|Blealtan/efficient-kan !2025-03-2842770|An efficient pure-PyTorch implementation of Kolmogorov-Arnold Network (KAN).| | 858|smol-ai/GodMode !2025-03-284249-1|AI Chat Browser: Fast, Full webapp access to ChatGPT / Claude / Bard / Bing / Llama2! I use this 20 times a day.| | 859|openai/plugins-quickstart !2025-03-284235-4 |Get a ChatGPT plugin up and running in under 5 minutes!| | 860|Doriandarko/maestro !2025-03-2842260|A framework for Claude Opus to intelligently orchestrate subagents.| | 861|philz1337x/clarity-upscaler !2025-03-2842204|Clarity-Upscaler: Reimagined image upscaling for everyone| | 862|facebookresearch/co-tracker !2025-03-2842142|CoTracker is a model for tracking any point (pixel) on a video.| | 863|xlang-ai/OpenAgents !2025-03-2842031|OpenAgents: An Open Platform for Language Agents in the Wild| | 864|alibaba/higress !2025-03-28419514|🤖 AI Gateway | | 865|ray-project/llm-numbers !2025-03-2841920 |Numbers every LLM developer should know| | 866|fudan-generative-vision/champ !2025-03-2841820|Champ: Controllable and Consistent Human Image Animation with 3D Parametric Guidance| | 867|NVIDIA/garak !2025-03-2841795|the LLM vulnerability scanner| | 868|leetcode-mafia/cheetah !2025-03-2841740 |Whisper & GPT-based app for passing remote SWE interviews| | 869|ragapp/ragapp !2025-03-2841710|The easiest way to use Agentic RAG in any enterprise| | 870|collabora/WhisperSpeech !2025-03-2841692|An Open Source text-to-speech system built by inverting Whisper.| | 871|Facico/Chinese-Vicuna !2025-03-2841520 |Chinese-Vicuna: A Chinese Instruction-following LLaMA-based Model| | 872|openai/grok !2025-03-2841381|| | 873|CrazyBoyM/llama3-Chinese-chat !2025-03-2841361|Llama3 Chinese Repository with modified versions, and training and deployment resources| | 874|luban-agi/Awesome-AIGC-Tutorials !2025-03-2841301|Curated tutorials and resources for Large Language Models, AI Painting, and more.| | 875|damo-vilab/AnyDoor !2025-03-2841192|Official implementations for paper: AnyDoor: zero-shot object-level image customization| | 876|raspberrypi/pico-sdk !2025-03-2841072|| | 877|mshumer/gpt-llm-trainer !2025-03-284097-1|| | 878|metavoiceio/metavoice-src !2025-03-284076-1|AI for human-level speech intelligence| | 879|intelowlproject/IntelOwl !2025-03-2840763|IntelOwl: manage your Threat Intelligence at scale| | 880|a16z-infra/ai-getting-started !2025-03-2840682|A Javascript AI getting started stack for weekend projects, including image/text models, vector stores, auth, and
deployment configs| | 881|MarkFzp/mobile-aloha !2025-03-2840641|Mobile ALOHA: Learning Bimanual Mobile Manipulation with Low-Cost Whole-Body Teleoperation| | 882| keijiro/AICommand !2025-03-2840380 | ChatGPT integration with Unity Editor | | 883|Tencent/HunyuanDiT !2025-03-2840214|Hunyuan-DiT: A Powerful Multi-Resolution Diffusion Transformer with Fine-Grained Chinese Understanding| | 884|hengyoush/kyanos !2025-03-2840061|Visualize the time packets spend in the kernel, watch & analyze in command line.| | 885|agiresearch/AIOS !2025-03-2840045|AIOS: LLM Agent Operating System| | 886|truefoundry/cognita !2025-03-2839773|RAG (Retrieval Augmented Generation) Framework for building modular, open source applications for production by TrueFoundry| | 887|X-PLUG/MobileAgent !2025-03-2839557|Mobile-Agent: Autonomous Multi-Modal Mobile Device Agent with Visual Perception| | 888|jackMort/ChatGPT.nvim !2025-03-2839231|ChatGPT Neovim Plugin: Effortless Natural Language Generation with OpenAI's ChatGPT API| | 889|microsoft/RD-Agent !2025-03-28388422|Research and development (R&D) is crucial for the enhancement of industrial productivity, especially in the AI era, where the core aspects of R&D are mainly focused on data and models. We are committed to automating these high-value generic R&D processes through our open source R&D automation tool RD-Agent, which lets AI drive data-driven AI.| | 890|Significant-Gravitas/Auto-GPT-Plugins !2025-03-283882-1 |Plugins for Auto-GPT| | 891|apple/ml-mgie !2025-03-2838770|| | 892|OpenDriveLab/UniAD !2025-03-2838727|[CVPR 2023 Best Paper] Planning-oriented Autonomous Driving| | 893|llSourcell/DoctorGPT !2025-03-2838640|DoctorGPT is an LLM that can pass the US Medical Licensing Exam. It works offline, it's cross-platform, & your health data stays private.| | 894|FlagAI-Open/FlagAI !2025-03-2838601|FlagAI (Fast LArge-scale General AI models) is a fast, easy-to-use and extensible toolkit for large-scale models.| | 895|krishnaik06/Roadmap-To-Learn-Generative-AI-In-2024 !2025-03-2838513|Roadmap To Learn Generative AI In 2024| | 896|SysCV/sam-hq !2025-03-2838491|Segment Anything in High Quality| | 897|google/security-research !2025-03-2838420|This project hosts security advisories and their accompanying proof-of-concepts related to research conducted at Google which impact non-Google owned code.| | 898|shroominic/codeinterpreter-api !2025-03-2838330|Open source implementation of the ChatGPT Code Interpreter 👾| | 899|Yonom/assistant-ui !2025-03-2838308|React Components for AI Chat 💬 🚀| | 900|nucleuscloud/neosync !2025-03-2838262|Open source data anonymization and synthetic data orchestration for developers. Create high fidelity synthetic data and sync it across your environments.| | 901|ravenscroftj/turbopilot !2025-03-2838230 |Turbopilot is an open source large-language-model based code completion engine that runs locally on CPU| | 902|NVlabs/Sana !2025-03-28380810|SANA: Efficient High-Resolution Image Synthesis with Linear Diffusion Transformer| | 903|huggingface/distil-whisper !2025-03-2838061|Distilled variant of Whisper for speech recognition. 6x faster, 50% smaller, within 1% word error rate.| | 904|Codium-ai/AlphaCodium !2025-03-2837971|code generation tool that surpasses most human competitors in CodeContests| | 905|fixie-ai/ultravox !2025-03-2837710|A fast multimodal LLM for real-time voice| | 906|unit-mesh/auto-dev !2025-03-28375715|🧙‍AutoDev: The AI-powered coding wizard with multilingual support 🌐, auto code generation 🏗️, and a helpful bug-slaying assistant 🐞!
Customizable prompts 🎨 and a magic Auto Dev/Testing/Document/Agent feature 🧪 included! 🚀| | 907|Marker-Inc-Korea/AutoRAG !2025-03-2837432|AutoML tool for RAG| | 908|deepseek-ai/DeepSeek-VL !2025-03-283734-1|DeepSeek-VL: Towards Real-World Vision-Language Understanding| | 909|hiyouga/ChatGLM-Efficient-Tuning !2025-03-283692-1|Fine-tuning ChatGLM-6B with PEFT | | 910| Yue-Yang/ChatGPT-Siri !2025-03-2836921 | Shortcuts for Siri using ChatGPT API gpt-3.5-turbo model | | 911|0hq/WebGPT !2025-03-2836901 |Run GPT model on the browser with WebGPU. An implementation of GPT inference in less than ~2000 lines of vanilla Javascript.| | 912|cvg/LightGlue !2025-03-2836903|LightGlue: Local Feature Matching at Light Speed (ICCV 2023)| | 913|deanxv/coze-discord-proxy !2025-03-2836791|Proxies Discord conversations for Coze-Bot, exposing GPT-4 conversation and fine-tuned models through API-style requests| | 914|MervinPraison/PraisonAI !2025-03-2836764|PraisonAI application combines AutoGen and CrewAI or similar frameworks into a low-code solution for building and managing multi-agent LLM systems, focusing on simplicity, customisation, and efficient human-agent collaboration.| | 915|Ironclad/rivet !2025-03-2836345 |The open-source visual AI programming environment and TypeScript library| | 916|BasedHardware/OpenGlass !2025-03-2835851|Turn any glasses into AI-powered smart glasses| | 917|ricklamers/gpt-code-ui !2025-03-2835840 |An open source implementation of OpenAI's ChatGPT Code interpreter| | 918|whoiskatrin/chart-gpt !2025-03-2835830 |AI tool to build charts based on text input| | 919|github/CopilotForXcode !2025-03-2835788|Xcode extension for GitHub Copilot| | 920|hemansnation/God-Level-Data-Science-ML-Full-Stack !2025-03-2835570 |A collection of scientific methods, processes, algorithms, and systems to build stories & models. This roadmap contains 16 Chapters, whether you are a fresher in the field or an experienced professional who wants to transition into Data Science & AI| | 921|pytorch/torchchat !2025-03-2835461|Run PyTorch LLMs locally on servers, desktop and mobile| | 922| Kent0n-Li/ChatDoctor !2025-03-2835451 | A Medical Chat Model Fine-tuned on LLaMA Model using Medical Domain Knowledge | | 923|xtekky/chatgpt-clone !2025-03-283519-1 |ChatGPT interface with better UI| | 924|jupyterlab/jupyter-ai !2025-03-2835120|A generative AI extension for JupyterLab| | 925|pytorch/torchtitan !2025-03-2835064|A native PyTorch Library for large model training| | 926|minimaxir/simpleaichat !2025-03-2835031|Python package for easily interfacing with chat apps, with robust features and minimal code complexity.| | 927|srush/Tensor-Puzzles !2025-03-2834930|Solve puzzles. Improve your pytorch.| | 928|Helicone/helicone !2025-03-2834918|🧊 Open source LLM-Observability Platform for Developers. One-line integration for monitoring, metrics, evals, agent tracing, prompt management, playground, etc. Supports OpenAI SDK, Vercel AI SDK, Anthropic SDK, LiteLLM, LLamaIndex, LangChain, and more.
🍓 YC W23| | 929|run-llama/llama-hub !2025-03-2834740|A library of data loaders for LLMs made by the community -- to be used with LlamaIndex and/or LangChain| | 930|NExT-GPT/NExT-GPT !2025-03-2834700|Code and models for NExT-GPT: Any-to-Any Multimodal Large Language Model| | 931|souzatharsis/podcastfy !2025-03-2834661|An Open Source Python alternative to NotebookLM's podcast feature: Transforming Multimodal Content into Captivating Multilingual Audio Conversations with GenAI| | 932|Dataherald/dataherald !2025-03-2834450|Interact with your SQL database, Natural Language to SQL using LLMs| | 933|iryna-kondr/scikit-llm !2025-03-2834350 |Seamlessly integrate powerful language models like ChatGPT into scikit-learn for enhanced text analysis tasks.| | 934|Netflix/maestro !2025-03-2834230|Maestro: Netflix’s Workflow Orchestrator| | 935|CanadaHonk/porffor !2025-03-2833560|A from-scratch experimental AOT JS engine, written in JS| | 936|hustvl/Vim !2025-03-2833323|Vision Mamba: Efficient Visual Representation Learning with Bidirectional State Space Model| | 937|pashpashpash/vault-ai !2025-03-2833250 |OP Vault ChatGPT: Give ChatGPT long-term memory using the OP Stack (OpenAI + Pinecone Vector Database). Upload your own custom knowledge base files (PDF, txt, etc) using a simple React frontend.| | 938|tencentmusic/supersonic !2025-03-28330611|SuperSonic is the next-generation BI platform that integrates Chat BI (powered by LLM) and Headless BI (powered by semantic layer) paradigms.| | 939|billmei/every-chatgpt-gui !2025-03-2832981|Every front-end GUI client for ChatGPT| | 940|microsoft/torchgeo !2025-03-2832772|TorchGeo: datasets, samplers, transforms, and pre-trained models for geospatial data| | 941|LLMBook-zh/LLMBook-zh.github.io !2025-03-28326110|"Large Language Models" (《大语言模型》) by Zhao Xin, Li Junyi, Zhou Kun, Tang Tianyi, and Wen Jirong| | 942|dvlab-research/MiniGemini !2025-03-2832601|Official implementation for Mini-Gemini| | 943|rashadphz/farfalle !2025-03-2832460|🔍 AI search engine - self-host with local or cloud LLMs| | 944|Luodian/Otter !2025-03-2832450|🦦 Otter, a multi-modal model based on OpenFlamingo (open-sourced version of DeepMind's Flamingo), trained on MIMIC-IT and showcasing improved instruction-following and in-context learning ability.| | 945|AprilNEA/ChatGPT-Admin-Web !2025-03-2832370 | ChatGPT WebUI with user management and admin dashboard system| | 946|MarkFzp/act-plus-plus !2025-03-2832365|Imitation Learning algorithms with Co-training for Mobile ALOHA: ACT, Diffusion Policy, VINN| | 947|ethen8181/machine-learning !2025-03-2832310|🌎 machine learning tutorials (mainly in Python3)| | 948|opengeos/segment-geospatial !2025-03-2832312 |A Python package for segmenting geospatial data with the Segment Anything Model (SAM)| | 949|iusztinpaul/hands-on-llms !2025-03-283225-2|🦖 Learn about LLMs, LLMOps, and vector DBs for free by designing, training, and deploying a real-time financial advisor LLM system ~ source code + video & reading materials| | 950|ToTheBeginning/PuLID !2025-03-2832221|Official code for PuLID: Pure and Lightning ID Customization via Contrastive Alignment| | 951|neo4j-labs/llm-graph-builder !2025-03-2832164|Neo4j graph construction from unstructured data using LLMs| | 952|OpenGVLab/InternGPT !2025-03-2832150 |InternGPT (iGPT) is an open source demo platform where you can easily showcase your AI models. Now it supports DragGAN, ChatGPT, ImageBind, multimodal chat like GPT-4, SAM, interactive image editing, etc.
Try it at igpt.opengvlab.com (支持DragGAN、ChatGPT、ImageBind、SAM的在线Demo系统)| | 953|PKU-YuanGroup/Video-LLaVA !2025-03-2832060 |Video-LLaVA: Learning United Visual Representation by Alignment Before Projection| | 954|DataTalksClub/llm-zoomcamp !2025-03-2832030|LLM Zoomcamp - a free online course about building an AI bot that can answer questions about your knowledge base| | 955|gptscript-ai/gptscript !2025-03-2832010|Natural Language Programming| |!green-up-arrow.svg 956|isaac-sim/IsaacLab !2025-03-28320113|Unified framework for robot learning built on NVIDIA Isaac Sim| |!red-down-arrow 957|ai-boost/Awesome-GPTs !2025-03-2832003|Curated list of awesome GPTs 👍.| | 958|huggingface/safetensors !2025-03-2831901|Simple, safe way to store and distribute tensors| | 959|linyiLYi/bilibot !2025-03-2831771|A local chatbot fine-tuned by bilibili user comments.| | 960| project-baize/baize-chatbot !2025-03-283168-1 | Let ChatGPT teach your own chatbot in hours with a single GPU! | | 961|Azure-Samples/cognitive-services-speech-sdk !2025-03-2831280|Sample code for the Microsoft Cognitive Services Speech SDK| | 962|microsoft/Phi-3CookBook !2025-03-2831231|This is a Phi-3 book for getting started with Phi-3. Phi-3, a family of open AI models developed by Microsoft. Phi-3 models are the most capable and cost-effective small language models (SLMs) available, outperforming models of the same size and next size up across a variety of language, reasoning, coding, and math benchmarks.| | 963|neuralmagic/deepsparse !2025-03-2831180|Sparsity-aware deep learning inference runtime for CPUs| | 964|sugarforever/chat-ollama !2025-03-2831000|ChatOllama is an open source chatbot based on LLMs. It supports a wide range of language models, and knowledge base management.| | 965|amazon-science/chronos-forecasting !2025-03-2830974|Chronos: Pretrained (Language) Models for Probabilistic Time Series Forecasting| | 966|damo-vilab/i2vgen-xl !2025-03-2830902|Official repo for VGen: a holistic video generation ecosystem for video generation building on diffusion models| | 967|google-deepmind/gemma !2025-03-2830733|Open weights LLM from Google DeepMind.| | 968|iree-org/iree !2025-03-2830733|A retargetable MLIR-based machine learning compiler and runtime toolkit.| | 969|NVlabs/VILA !2025-03-2830724|VILA - a multi-image visual language model with training, inference and evaluation recipe, deployable from cloud to edge (Jetson Orin and laptops)| | 970|microsoft/torchscale !2025-03-2830661|Foundation Architecture for (M)LLMs| | 971|openai/openai-realtime-console !2025-03-2830656|React app for inspecting, building and debugging with the Realtime API| | 972|daveshap/OpenAIAgentSwarm !2025-03-2830610|HAAS = Hierarchical Autonomous Agent Swarm - "Resistance is futile!"| | 973|microsoft/PromptWizard !2025-03-2830555|Task-Aware Agent-driven Prompt Optimization Framework| | 974|CVI-SZU/Linly !2025-03-2830490 |Chinese-LLaMA basic model; ChatFlow Chinese conversation model; NLP pre-training/command fine-tuning dataset| | 975|cohere-ai/cohere-toolkit !2025-03-2830130|Toolkit is a collection of prebuilt components enabling users to quickly build and deploy RAG applications.| | 976|adamcohenhillel/ADeus !2025-03-2830131|An open source AI wearable device that captures what you say and hear in the real world and then transcribes and stores it on your own server. 
You can then chat with Adeus using the app, and it will have all the right context about what you want to talk about - a truly personalized, personal AI.| | 977|Lightning-AI/LitServe !2025-03-2830132|Lightning-fast serving engine for AI models. Flexible. Easy. Enterprise-scale.| | 978|potpie-ai/potpie !2025-03-2829973|Prompt-To-Agent : Create custom engineering agents for your codebase| | 979|ant-design/x !2025-03-28299529|Craft AI-driven interfaces effortlessly 🤖| | 980|meta-llama/PurpleLlama !2025-03-2829832|Set of tools to assess and improve LLM security.| | 981|williamyang1991/RerenderAVideo !2025-03-2829800|[SIGGRAPH Asia 2023] Rerender A Video: Zero-Shot Text-Guided Video-to-Video Translation| | 982|baichuan-inc/Baichuan-13B !2025-03-2829790|A 13B large language model developed by Baichuan Intelligent Technology| | 983|Stability-AI/stable-audio-tools !2025-03-2829761|Generative models for conditional audio generation| | 984|li-plus/chatglm.cpp !2025-03-2829720|C++ implementation of ChatGLM-6B & ChatGLM2-6B & ChatGLM3 & more LLMs| | 985|NVIDIA/GenerativeAIExamples !2025-03-2829546|Generative AI reference workflows optimized for accelerated infrastructure and microservice architecture.| | 986|Josh-XT/AGiXT !2025-03-2829521 |AGiXT is a dynamic AI Automation Platform that seamlessly orchestrates instruction management and complex task execution across diverse AI providers. Combining adaptive memory, smart features, and a versatile plugin system, AGiXT delivers efficient and comprehensive AI solutions.| | 987|MrForExample/ComfyUI-3D-Pack !2025-03-2829515|An extensive node suite that enables ComfyUI to process 3D inputs (Mesh & UV Texture, etc) using cutting edge algorithms (3DGS, NeRF, etc.)| | 988|olimorris/codecompanion.nvim !2025-03-28295111|✨ AI-powered coding, seamlessly in Neovim. Supports Anthropic, Copilot, Gemini, Ollama, OpenAI and xAI LLMs| | 989|salesforce/CodeT5 !2025-03-282940-1 |Home of CodeT5: Open Code LLMs for Code Understanding and Generation| | 990|facebookresearch/ijepa !2025-03-2829391|Official codebase for I-JEPA, the Image-based Joint-Embedding Predictive Architecture. 
First outlined in the CVPR paper, "Self-supervised learning from images with a joint-embedding predictive architecture."| | 991|eureka-research/Eureka !2025-03-2829351|Official Repository for "Eureka: Human-Level Reward Design via Coding Large Language Models"| | 992|NVIDIA/trt-llm-rag-windows !2025-03-282934-1|A developer reference project for creating Retrieval Augmented Generation (RAG) chatbots on Windows using TensorRT-LLM| | 993|gmpetrov/databerry !2025-03-282930-1|The no-code platform for building custom LLM Agents| | 994|AI4Finance-Foundation/FinRobot !2025-03-28291946|FinRobot: An Open-Source AI Agent Platform for Financial Applications using LLMs 🚀 🚀 🚀| | 995|nus-apr/auto-code-rover !2025-03-2829013|A project structure aware autonomous software engineer aiming for autonomous program improvement| | 996|deepseek-ai/DreamCraft3D !2025-03-2828921|[ICLR 2024] Official implementation of DreamCraft3D: Hierarchical 3D Generation with Bootstrapped Diffusion Prior| | 997|mlabonne/llm-datasets !2025-03-2828848|High-quality datasets, tools, and concepts for LLM fine-tuning.| | 998|facebookresearch/jepa !2025-03-2828712|PyTorch code and models for V-JEPA self-supervised learning from video.| | 999|facebookresearch/habitat-sim !2025-03-2828604|A flexible, high-performance 3D simulator for Embodied AI research.| | 1000|xenova/whisper-web !2025-03-2828581|ML-powered speech recognition directly in your browser| | 1001|cvlab-columbia/zero123 !2025-03-2828530|Zero-1-to-3: Zero-shot One Image to 3D Object: https://zero123.cs.columbia.edu/| | 1002|yuruotong1/autoMate !2025-03-28285121|Like Manus, Computer Use Agent(CUA) and Omniparser, we are computer-using agents.AI-driven local automation assistant that uses natural language to make computers work by themselves| | 1003|muellerberndt/mini-agi !2025-03-282845-1 |A minimal generic autonomous agent based on GPT3.5/4. Can analyze stock prices, perform network security tests, create art, and order pizza.| | 1004|allenai/open-instruct !2025-03-2828432|| | 1005|CodingChallengesFYI/SharedSolutions !2025-03-2828360|Publicly shared solutions to Coding Challenges| | 1006|hegelai/prompttools !2025-03-2828220|Open-source tools for prompt testing and experimentation, with support for both LLMs (e.g. OpenAI, LLaMA) and vector databases (e.g. 
Chroma, Weaviate).| | 1007|mazzzystar/Queryable !2025-03-2828222|Run CLIP on iPhone to Search Photos.| | 1008|Doubiiu/DynamiCrafter !2025-03-2828173|DynamiCrafter: Animating Open-domain Images with Video Diffusion Priors| | 1009|SamurAIGPT/privateGPT !2025-03-282805-1 |An app to interact privately with your documents using the power of GPT, 100% privately, no data leaks| | 1010|facebookresearch/Pearl !2025-03-2827951|A Production-ready Reinforcement Learning AI Agent Library brought by the Applied Reinforcement Learning team at Meta.| | 1011|intuitem/ciso-assistant-community !2025-03-2827954|CISO Assistant is a one-stop-shop for GRC, covering Risk, AppSec and Audit Management and supporting +70 frameworks worldwide with auto-mapping: NIST CSF, ISO 27001, SOC2, CIS, PCI DSS, NIS2, CMMC, PSPF, GDPR, HIPAA, Essential Eight, NYDFS-500, DORA, NIST AI RMF, 800-53, 800-171, CyFun, CJIS, AirCyber, NCSC, ECC, SCF and so much more| | 1012|facebookresearch/audio2photoreal !2025-03-2827840|Code and dataset for photorealistic Codec Avatars driven from audio| | 1013|Azure/azure-rest-api-specs !2025-03-2827770|The source for REST API specifications for Microsoft Azure.| | 1014|SCUTlihaoyu/open-chat-video-editor !2025-03-2827690 |Open source short video automatic generation tool| | 1015|Alpha-VLLM/LLaMA2-Accessory !2025-03-2827642|An Open-source Toolkit for LLM Development| | 1016|johnma2006/mamba-minimal !2025-03-2827601|Simple, minimal implementation of the Mamba SSM in one file of PyTorch.| | 1017|nerfstudio-project/gsplat !2025-03-2827576|CUDA accelerated rasterization of gaussian splatting| | 1018|Physical-Intelligence/openpi !2025-03-28274617|| | 1019|leptonai/leptonai !2025-03-2827246|A Pythonic framework to simplify AI service building| |!green-up-arrow.svg 1020|joanrod/star-vector !2025-03-28271149|StarVector is a foundation model for SVG generation that transforms vectorization into a code generation task. Using a vision-language modeling architecture, StarVector processes both visual and textual inputs to produce high-quality SVG code with remarkable precision.| |!red-down-arrow 1021|jqnatividad/qsv !2025-03-2827092|CSVs sliced, diced & analyzed.| | 1022|FranxYao/chain-of-thought-hub !2025-03-2826991|Benchmarking large language models' complex reasoning ability with chain-of-thought prompting| | 1023|princeton-nlp/SWE-bench !2025-03-2826965|[ICLR 2024] SWE-Bench: Can Language Models Resolve Real-world Github Issues?| | 1024|elastic/otel-profiling-agent !2025-03-2826930|The production-scale datacenter profiler| | 1025|src-d/hercules !2025-03-2826900|Gaining advanced insights from Git repository history.| | 1026|lanqian528/chat2api !2025-03-2826695|A service that can convert ChatGPT on the web to OpenAI API format.| | 1027|ishan0102/vimGPT !2025-03-2826681|Browse the web with GPT-4V and Vimium| | 1028|TMElyralab/MuseV !2025-03-2826650|MuseV: Infinite-length and High Fidelity Virtual Human Video Generation with Visual Conditioned Parallel Denoising| | 1029|georgia-tech-db/eva !2025-03-2826600 |AI-Relational Database System | | 1030|kubernetes-sigs/controller-runtime !2025-03-2826590|Repo for the controller-runtime subproject of kubebuilder (sig-apimachinery)| | 1031|gptlink/gptlink !2025-03-2826550 |Build your own free commercial ChatGPT environment in 10 minutes. 
The setup is simple and includes features such as user management, orders, tasks, and payments| | 1032|pytorch/executorch !2025-03-2826534|On-device AI across mobile, embedded and edge for PyTorch| | 1033|NVIDIA/nv-ingest !2025-03-2826290|NVIDIA Ingest is an early access set of microservices for parsing hundreds of thousands of complex, messy unstructured PDFs and other enterprise documents into metadata and text to embed into retrieval systems.| | 1034|SuperTux/supertux !2025-03-2826081|SuperTux source code| | 1035|abi/secret-llama !2025-03-2826050|Fully private LLM chatbot that runs entirely with a browser with no server needed. Supports Mistral and LLama 3.| | 1036|liou666/polyglot !2025-03-2825841 |Desktop AI Language Practice Application| | 1037|janhq/nitro !2025-03-2825821|A fast, lightweight, embeddable inference engine to supercharge your apps with local AI. OpenAI-compatible API| | 1038|deepseek-ai/DeepSeek-Math !2025-03-2825825|DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models| | 1039|anthropics/prompt-eng-interactive-tutorial !2025-03-2825781|Anthropic's Interactive Prompt Engineering Tutorial| | 1040|microsoft/promptbench !2025-03-2825741|A unified evaluation framework for large language models| | 1041|baaivision/Painter !2025-03-2825580 |Painter & SegGPT Series: Vision Foundation Models from BAAI| | 1042|OpenPipe/OpenPipe !2025-03-2825581|Turn expensive prompts into cheap fine-tuned models| | 1043|TracecatHQ/tracecat !2025-03-2825531|😼 The AI-native, open source alternative to Tines / Splunk SOAR.| | 1044|JoshuaC215/agent-service-toolkit !2025-03-2825528|Full toolkit for running an AI agent service built with LangGraph, FastAPI and Streamlit| | 1045|databricks/dbrx !2025-03-2825460|Code examples and resources for DBRX, a large language model developed by Databricks| | 1046|lamini-ai/lamini !2025-03-2825271 |Official repo for Lamini's data generator for generating instructions to train instruction-following LLMs| | 1047|mshumer/gpt-author !2025-03-282510-1|| | 1048|TMElyralab/MusePose !2025-03-2824971|MusePose: a Pose-Driven Image-to-Video Framework for Virtual Human Generation| | 1049|Kludex/fastapi-tips !2025-03-2824974|FastAPI Tips by The FastAPI Expert!| | 1050|openai/simple-evals !2025-03-2824813|| | 1051|iterative/datachain !2025-03-2824732|AI-data warehouse to enrich, transform and analyze data from cloud storages| | 1052|girafe-ai/ml-course !2025-03-2824703|Open Machine Learning course| | 1053|kevmo314/magic-copy !2025-03-2824620 |Magic Copy is a Chrome extension that uses Meta's Segment Anything Model to extract a foreground object from an image and copy it to the clipboard.| | 1054|Eladlev/AutoPrompt !2025-03-2824432|A framework for prompt tuning using Intent-based Prompt Calibration| | 1055|OpenBMB/CPM-Bee !2025-03-282434-1 |A bilingual large-scale model with trillions of parameters| | 1056|IDEA-Research/T-Rex !2025-03-2824310|T-Rex2: Towards Generic Object Detection via Text-Visual Prompt Synergy| | 1057|microsoft/genaiscript !2025-03-2824202|Automatable GenAI Scripting| | 1058|paulpierre/RasaGPT !2025-03-2824090 |💬 RasaGPT is the first headless LLM chatbot platform built on top of Rasa and Langchain. 
Built w/ Rasa, FastAPI, Langchain, LlamaIndex, SQLModel, pgvector, ngrok, telegram| | 1059|ashishpatel26/LLM-Finetuning !2025-03-2823911|LLM Finetuning with peft| | 1060|SoraWebui/SoraWebui !2025-03-2823570|SoraWebui is an open-source Sora web client, enabling users to easily create videos from text with OpenAI's Sora model.| | 1061|6drf21e/ChatTTScolab !2025-03-2823491|🚀 一键部署(含离线整合包)!基于 ChatTTS ,支持音色抽卡、长音频生成和分角色朗读。简单易用,无需复杂安装。| | 1062|Azure/PyRIT !2025-03-2823343|The Python Risk Identification Tool for generative AI (PyRIT) is an open access automation framework to empower security professionals and machine learning engineers to proactively find risks in their generative AI systems.| | 1063|tencent-ailab/V-Express !2025-03-2823201|V-Express aims to generate a talking head video under the control of a reference image, an audio, and a sequence of V-Kps images.| | 1064|THUDM/CogVLM2 !2025-03-2823170|GPT4V-level open-source multi-modal model based on Llama3-8B| | 1065|dvmazur/mixtral-offloading !2025-03-2823001|Run Mixtral-8x7B models in Colab or consumer desktops| | 1066|semanser/codel !2025-03-2822950|✨ Fully autonomous AI Agent that can perform complicated tasks and projects using terminal, browser, and editor.| | 1067|mshumer/gpt-investor !2025-03-2822590|| | 1068|aixcoder-plugin/aiXcoder-7B !2025-03-2822550|official repository of aiXcoder-7B Code Large Language Model| | 1069|Azure-Samples/graphrag-accelerator !2025-03-2822503|One-click deploy of a Knowledge Graph powered RAG (GraphRAG) in Azure| | 1070|emcf/engshell !2025-03-2821830 |An English-language shell for any OS, powered by LLMs| | 1071|hncboy/chatgpt-web-java !2025-03-2821771|ChatGPT project developed in Java, based on Spring Boot 3 and JDK 17, supports both AccessToken and ApiKey modes| | 1072|openai/consistencydecoder !2025-03-2821692|Consistency Distilled Diff VAE| | 1073|Alpha-VLLM/Lumina-T2X !2025-03-2821681|Lumina-T2X is a unified framework for Text to Any Modality Generation| | 1074|bghira/SimpleTuner !2025-03-2821612|A general fine-tuning kit geared toward Stable Diffusion 2.1, Stable Diffusion 3, DeepFloyd, and SDXL.| | 1075|JiauZhang/DragGAN !2025-03-2821530 |Implementation of DragGAN: Interactive Point-based Manipulation on the Generative Image Manifold| | 1076|cgpotts/cs224u !2025-03-2821390|Code for Stanford CS224u| | 1077|PKU-YuanGroup/MoE-LLaVA !2025-03-2821300|Mixture-of-Experts for Large Vision-Language Models| | 1078|darrenburns/elia !2025-03-2820831|A snappy, keyboard-centric terminal user interface for interacting with large language models. Chat with ChatGPT, Claude, Llama 3, Phi 3, Mistral, Gemma and more.| | 1079|ageerle/ruoyi-ai !2025-03-28207898|RuoYi AI 是一个全栈式 AI 开发平台,旨在帮助开发者快速构建和部署个性化的 AI 应用。| | 1080|NVIDIA/gpu-operator !2025-03-2820510|NVIDIA GPU Operator creates/configures/manages GPUs atop Kubernetes| | 1081|BAAI-Agents/Cradle !2025-03-2820481|The Cradle framework is a first attempt at General Computer Control (GCC). Cradle supports agents to ace any computer task by enabling strong reasoning abilities, self-improvment, and skill curation, in a standardized general environment with minimal requirements.| | 1082|microsoft/aici !2025-03-2820080|AICI: Prompts as (Wasm) Programs| | 1083|PRIS-CV/DemoFusion !2025-03-2820040|Let us democratise high-resolution generation! 
(arXiv 2023)| | 1084|apple/axlearn !2025-03-2820012|An Extensible Deep Learning Library| | 1085|naver/mast3r !2025-03-2819685|Grounding Image Matching in 3D with MASt3R| | 1086|liltom-eth/llama2-webui !2025-03-281958-1|Run Llama 2 locally with gradio UI on GPU or CPU from anywhere (Linux/Windows/Mac). Supporting Llama-2-7B/13B/70B with 8-bit, 4-bit. Supporting GPU inference (6 GB VRAM) and CPU inference.| | 1087|GaParmar/img2img-turbo !2025-03-2819582|One-step image-to-image with Stable Diffusion turbo: sketch2image, day2night, and more| | 1088|Niek/chatgpt-web !2025-03-2819560|ChatGPT web interface using the OpenAI API| | 1089|huggingface/cookbook !2025-03-2819421|Open-source AI cookbook| | 1090|pytorch/ao !2025-03-2819241|PyTorch native quantization and sparsity for training and inference| | 1091|emcie-co/parlant !2025-03-2819053|The behavior guidance framework for customer-facing LLM agents| | 1092|ymcui/Chinese-LLaMA-Alpaca-3 !2025-03-2818980|中文羊驼大模型三期项目 (Chinese Llama-3 LLMs) developed from Meta Llama 3| | 1093|Nutlope/notesGPT !2025-03-2818811|Record voice notes & transcribe, summarize, and get tasks| | 1094|InstantStyle/InstantStyle !2025-03-2818791|InstantStyle: Free Lunch towards Style-Preserving in Text-to-Image Generation 🔥| | 1095|idaholab/moose !2025-03-2818771|Multiphysics Object Oriented Simulation Environment| | 1096|The-OpenROAD-Project/OpenROAD !2025-03-2818351|OpenROAD's unified application implementing an RTL-to-GDS Flow. Documentation at https://openroad.readthedocs.io/en/latest/| | 1097|alibaba/spring-ai-alibaba !2025-03-281831121|Agentic AI Framework for Java Developers| | 1098|ytongbai/LVM !2025-03-2817990|Sequential Modeling Enables Scalable Learning for Large Vision Models| | 1099|microsoft/sample-app-aoai-chatGPT !2025-03-2817981|[PREVIEW] Sample code for a simple web chat experience targeting chatGPT through AOAI.| | 1100|AI-Citizen/SolidGPT !2025-03-2817830|Chat everything with your code repository, ask repository level code questions, and discuss your requirements. 
AI Scan and learning your code repository, provide you code repository level answer🧱 🧱| | 1101|YangLing0818/RPG-DiffusionMaster !2025-03-2817784|Mastering Text-to-Image Diffusion: Recaptioning, Planning, and Generating with Multimodal LLMs (PRG)| | 1102|kyegomez/BitNet !2025-03-2817710|Implementation of "BitNet: Scaling 1-bit Transformers for Large Language Models" in pytorch| | 1103|eloialonso/diamond !2025-03-2817671|DIAMOND (DIffusion As a Model Of eNvironment Dreams) is a reinforcement learning agent trained in a diffusion world model.| | 1104|flowdriveai/flowpilot !2025-03-2817250|flow-pilot is an openpilot based driver assistance system that runs on linux, windows and android powered machines.| | 1105|xlang-ai/OSWorld !2025-03-2817200|OSWorld: Benchmarking Multimodal Agents for Open-Ended Tasks in Real Computer Environments| | 1106|linyiLYi/snake-ai !2025-03-2817031|An AI agent that beats the classic game "Snake".| | 1107|baaivision/Emu !2025-03-2816991|Emu Series: Generative Multimodal Models from BAAI| | 1108|kevmo314/scuda !2025-03-2816870|SCUDA is a GPU over IP bridge allowing GPUs on remote machines to be attached to CPU-only machines.| | 1109|SharifiZarchi/IntroductiontoMachineLearning !2025-03-2816701|دوره‌ی مقدمه‌ای بر یادگیری ماشین، برای دانشجویان| | 1110|google/maxtext !2025-03-2816670|A simple, performant and scalable Jax LLM!| | 1111|ml-explore/mlx-swift-examples !2025-03-2816471|Examples using MLX Swift| | 1112|unitreerobotics/unitreerlgym !2025-03-2816256|| | 1113|collabora/WhisperFusion !2025-03-2815901|WhisperFusion builds upon the capabilities of WhisperLive and WhisperSpeech to provide a seamless conversations with an AI.| | 1114|lichao-sun/Mora !2025-03-2815520|Mora: More like Sora for Generalist Video Generation| | 1115|GoogleCloudPlatform/localllm !2025-03-2815370|Run LLMs locally on Cloud Workstations| | 1116|TencentARC/BrushNet !2025-03-2815330|The official implementation of paper "BrushNet: A Plug-and-Play Image Inpainting Model with Decomposed Dual-Branch Diffusion"| | 1117|ai-christianson/RA.Aid !2025-03-2815288|Develop software autonomously.| | 1118|stephansturges/WALDO !2025-03-2815170|Whereabouts Ascertainment for Low-lying Detectable Objects. The SOTA in FOSS AI for drones!| | 1119|skills/copilot-codespaces-vscode !2025-03-2815112|Develop with AI-powered code suggestions using GitHub Copilot and VS Code| | 1120|andrewnguonly/Lumos !2025-03-2814920|A RAG LLM co-pilot for browsing the web, powered by local LLMs| | 1121|TeamNewPipe/NewPipeExtractor !2025-03-2814811|NewPipe's core library for extracting data from streaming sites| | 1122|mhamilton723/FeatUp !2025-03-2814770|Official code for "FeatUp: A Model-Agnostic Frameworkfor Features at Any Resolution" ICLR 2024| | 1123|AnswerDotAI/fsdpqlora !2025-03-2814671|Training LLMs with QLoRA + FSDP| | 1124|jgravelle/AutoGroq !2025-03-2814330|| | 1125|OpenGenerativeAI/llm-colosseum !2025-03-2814130|Benchmark LLMs by fighting in Street Fighter 3! 
The new way to evaluate the quality of an LLM| | 1126|microsoft/vscode-ai-toolkit !2025-03-2814000|| | 1127|McGill-NLP/webllama !2025-03-2813930|Llama-3 agents that can browse the web by following instructions and talking to you| | 1128|lucidrains/self-rewarding-lm-pytorch !2025-03-2813760|Implementation of the training framework proposed in Self-Rewarding Language Model, from MetaAI| | 1129|ishaan1013/sandbox !2025-03-2813650|A cloud-based code editing environment with an AI copilot and real-time collaboration.| | 1130|goatcorp/Dalamud !2025-03-2813275|FFXIV plugin framework and API| | 1131|Lightning-AI/lightning-thunder !2025-03-2813151|Make PyTorch models Lightning fast! Thunder is a source to source compiler for PyTorch. It enables using different hardware executors at once.| | 1132|PKU-YuanGroup/MagicTime !2025-03-2813052|MagicTime: Time-lapse Video Generation Models as Metamorphic Simulators| | 1133|SakanaAI/evolutionary-model-merge !2025-03-2813000|Official repository of Evolutionary Optimization of Model Merging Recipes| | 1134|a-real-ai/pywinassistant !2025-03-2812950|The first open source Large Action Model generalist Artificial Narrow Intelligence that controls completely human user interfaces by only using natural language. PyWinAssistant utilizes Visualization-of-Thought Elicits Spatial Reasoning in Large Language Models.| | 1135|TraceMachina/nativelink !2025-03-2812630|NativeLink is an open source high-performance build cache and remote execution server, compatible with Bazel, Buck2, Reclient, and other RBE-compatible build systems. It offers drastically faster builds, reduced test flakiness, and significant infrastructure cost savings.| | 1136|MLSysOps/MLE-agent !2025-03-2812500|🤖 MLE-Agent: Your intelligent companion for seamless AI engineering and research. 🔍 Integrate with arxiv and paper with code to provide better code/research plans 🧰 OpenAI, Ollama, etc supported. 🎆 Code RAG| | 1137|wpilibsuite/allwpilib !2025-03-2811610|Official Repository of WPILibJ and WPILibC| | 1138|elfvingralf/macOSpilot-ai-assistant !2025-03-2811470|Voice + Vision powered AI assistant that answers questions about any application, in context and in audio.| | 1139|langchain-ai/langchain-extract !2025-03-2811210|🦜⛏️ Did you say you like data?| | 1140|FoundationVision/GLEE !2025-03-2811120|【CVPR2024】GLEE: General Object Foundation Model for Images and Videos at Scale| | 1141|Profluent-AI/OpenCRISPR !2025-03-2810990|AI-generated gene editing systems| | 1142|zju3dv/EasyVolcap !2025-03-2810821|[SIGGRAPH Asia 2023 (Technical Communications)] EasyVolcap: Accelerating Neural Volumetric Video Research| | 1143|PaddlePaddle/PaddleHelix !2025-03-2810560|Bio-Computing Platform Featuring Large-Scale Representation Learning and Multi-Task Deep Learning “螺旋桨”生物计算工具集| | 1144|myshell-ai/JetMoE !2025-03-289800|Reaching LLaMA2 Performance with 0.1M Dollars| | 1145|likejazz/llama3.np !2025-03-289770|llama3.np is pure NumPy implementation for Llama 3 model.| | 1146|mustafaaljadery/gemma-2B-10M !2025-03-289500|Gemma 2B with 10M context length using Infini-attention.| | 1147|HITsz-TMG/FilmAgent !2025-03-289382|Resources of our paper "FilmAgent: A Multi-Agent Framework for End-to-End Film Automation in Virtual 3D Spaces". New versions in the making!| | 1148|aws-samples/amazon-bedrock-samples !2025-03-289362|This repository contains examples for customers to get started using the Amazon Bedrock Service. 
This contains examples for all available foundational models|
| 1149|Akkudoktor-EOS/EOS !2025-03-2893154|This repository features an Energy Optimization System (EOS) that optimizes energy distribution and usage for batteries, heat pumps & household devices. It includes predictive models for electricity prices (planned), load forecasting & dynamic optimization to maximize energy efficiency & minimize costs. Founder: Dr. Andreas Schmitz (YouTube @akkudoktor)|

Tip:

| symbol | rule |
| :---- | :---- |
| 🔥 | 256 1k |
| !green-up-arrow.svg !red-down-arrow | ranking up / down |
| ⭐ | on trending page today |

[Back to Top]

Tools

| No. | Tool | Description |
| ----:|:---- |:---- |
| 1 | ChatGPT | A sibling model to InstructGPT, which is trained to follow instructions in a prompt and provide a detailed response |
| 2 | DALL·E 2 | Create original, realistic images and art from a text description |
| 3 | Murf AI | AI-enabled, real people's voices |
| 4 | Midjourney | An independent research lab that produces an artificial intelligence program of the same name that creates images from textual descriptions, used in Discord |
| 5 | Make-A-Video | Make-A-Video is a state-of-the-art AI system that generates videos from text |
| 6 | Creative Reality™ Studio by D-ID | Use generative AI to create future-facing videos |
| 7 | chat.D-ID | The First App Enabling Face-to-Face Conversations with ChatGPT |
| 8 | Notion AI | Access the limitless power of AI, right inside Notion. Work faster. Write better. Think bigger. |
| 9 | Runway | Text to Video with Gen-2 |
| 10 | Resemble AI | Resemble's AI voice generator lets you create human-like voice-overs in seconds |
| 11 | Cursor | Write, edit, and chat about your code with a powerful AI |
| 12 | Hugging Face | Build, train and deploy state-of-the-art models powered by the reference open source in machine learning |
| 13 | Claude | A next-generation AI assistant for your tasks, no matter the scale |
| 14 | Poe | Poe lets you ask questions, get instant answers, and have back-and-forth conversations with AI. Gives access to GPT-4, gpt-3.5-turbo, Claude from Anthropic, and a variety of other bots |

[Back to Top]

Websites

| No. | WebSite | Description |
| ----:|:---- |:---- |
| 1 | OpenAI | An artificial intelligence research lab |
| 2 | Bard | Google's LaMDA-based chatbot, which can pull from the internet |
| 3 | ERNIE Bot | Baidu's new-generation knowledge-enhanced large language model, a new member of the Wenxin large model family |
| 4 | DALL·E 2 | An AI system that can create realistic images and art from a description in natural language |
| 5 | Whisper | A general-purpose speech recognition model |
| 6 | CivitAI | A platform that makes it easy for people to share and discover resources for creating AI art |
| 7 | D-ID | D-ID's Generative AI enables users to transform any picture or video into extraordinary experiences |
| 8 | Nvidia eDiff-I | Text-to-Image Diffusion Models with Ensemble of Expert Denoisers |
| 9 | Stability AI | The world's leading open-source generative AI company, which open-sourced Stable Diffusion |
| 10 | Meta AI | Whether it be research, product or infrastructure development, we're driven to innovate responsibly with AI to benefit the world |
| 11 | ANTHROPIC | AI research and products that put safety at the frontier |

[Back to Top]

Reports&Papers

| No. | Report&Paper | Description |
|:---- |:---- |:---- |
| 1 | GPT-4 Technical Report | GPT-4 Technical Report |
| 2 | mli/paper-reading | Deep learning classics and new papers, read carefully paragraph by paragraph |
| 3 | labmlai/annotateddeeplearningpaperimplementations | A collection of simple PyTorch implementations of neural networks and related algorithms, documented with explanations |
| 4 | Visual ChatGPT: Talking, Drawing and Editing with Visual Foundation Models | Talking, drawing and editing with visual foundation models |
| 5 | OpenAI Research | The latest research reports and papers from OpenAI |
| 6 | Make-A-Video: Text-to-Video Generation without Text-Video Data | Meta's text-to-video generation |
| 7 | eDiff-I: Text-to-Image Diffusion Models with Ensemble of Expert Denoisers | Nvidia eDiff-I, a new generation of generative AI content creation tool |
| 8 | Training an Assistant-style Chatbot with Large Scale Data Distillation from GPT-3.5-Turbo | 2023 GPT4All Technical Report |
| 9 | Segment Anything | Meta Segment Anything |
| 10 | LLaMA: Open and Efficient Foundation Language Models | LLaMA: a collection of foundation language models ranging from 7B to 65B parameters |
| 11 | papers-we-love/papers-we-love | Papers from the computer science community to read and discuss |
| 12 | CVPR 2023 papers | The most exciting and influential CVPR 2023 papers |

[Back to Top]

Tutorials

| No. | Tutorial | Description |
|:---- |:---- |:---- |
| 1 | Coursera - Machine Learning | The Machine Learning Specialization taught by Dr. Andrew Ng |
| 2 | microsoft/ML-For-Beginners | 12 weeks, 26 lessons, 52 quizzes, classic machine learning for all |
| 3 | ChatGPT Prompt Engineering for Developers | This short course taught by Isa Fulford (OpenAI) and Andrew Ng (DeepLearning.AI) teaches how to use a large language model (LLM) to quickly build new and powerful applications |
| 4 | Dive into Deep Learning | Targeting Chinese readers, functional and open for discussion. The Chinese and English versions are used for teaching in over 400 universities across more than 60 countries |
| 5 | AI Expert Roadmap | Roadmap to becoming an artificial intelligence expert in 2022 |
| 6 | Computer Science courses | List of computer science courses with video lectures |
| 7 | Machine Learning with Python | Machine Learning with Python certification on freeCodeCamp |
| 8 | Building Systems with the ChatGPT API | In this short course taught by Isa Fulford (OpenAI) and Andrew Ng (DeepLearning.AI), you will learn how to automate complex workflows using chained calls to a large language model |
| 9 | LangChain for LLM Application Development | In this short course taught by Harrison Chase (co-founder and CEO at LangChain) and Andrew Ng, you will gain essential skills in expanding the use cases and capabilities of language models in application development using the LangChain framework |
| 10 | How Diffusion Models Work | In this short course taught by Sharon Zhou (CEO and co-founder, Lamini), you will gain a deep familiarity with the diffusion process and the models which carry it out. More than simply pulling in a pre-built model or using an API, this course will teach you to build a diffusion model from scratch |
| 11 | Free Programming Books For AI | 📚 Freely available programming books for AI |
| 12 | microsoft/AI-For-Beginners | 12 weeks, 24 lessons, AI for all! |
| 13 | hemansnation/God-Level-Data-Science-ML-Full-Stack | A collection of scientific methods, processes, algorithms, and systems to build stories & models. This roadmap contains 16 chapters, whether you are a fresher in the field or an experienced professional who wants to transition into Data Science & AI |
| 14 | datawhalechina/prompt-engineering-for-developers | Chinese version of Andrew Ng's large model series courses, including "Prompt Engineering", "Building Systems", and "LangChain" |
| 15 | ossu/computer-science | 🎓 Path to a free self-taught education in Computer Science! |
| 16 | microsoft/Data-Science-For-Beginners | 10 weeks, 20 lessons, data science for all! |
| 17 | jwasham/coding-interview-university !2023-09-29268215336 | A complete computer science study plan to become a software engineer. |

[Back to Top]

Thanks

If this project has been helpful to you in any way, please give it a ⭐️ by clicking on the star.

aiortc
github
LLM Vibe Score0.507
Human Vibe Score0.11415188209660238
aiortcMar 28, 2025

aiortc

What is aiortc?

aiortc is a library for Web Real-Time Communication (WebRTC, https://webrtc.org/) and Object Real-Time Communication (ORTC, https://ortc.org/) in Python. It is built on top of asyncio, Python's standard asynchronous I/O framework. The API closely follows its JavaScript counterpart while using pythonic constructs:

- promises are replaced by coroutines
- events are emitted using pyee.EventEmitter

To learn more about aiortc, please read the documentation: https://aiortc.readthedocs.io/en/latest/

Why should I use aiortc?

The main WebRTC and ORTC implementations are either built into web browsers or come in the form of native code. While they are extensively battle-tested, their internals are complex and they do not provide Python bindings. Furthermore, they are tightly coupled to a media stack, making it hard to plug in audio or video processing algorithms.

In contrast, the aiortc implementation is fairly simple and readable. As such, it is a good starting point for programmers wishing to understand how WebRTC works or to tinker with its internals. It is also easy to create innovative products by leveraging the extensive modules available in the Python ecosystem. For instance, you can build a full server handling both signaling and data channels, or apply computer vision algorithms to video frames using OpenCV. Furthermore, a lot of effort has gone into writing an extensive test suite for the aiortc code to ensure best-in-class code quality.

Implementation status

aiortc allows you to exchange audio, video and data channels, and interoperability is regularly tested against both Chrome and Firefox. Here are some of its features:

- SDP generation / parsing
- Interactive Connectivity Establishment, with half-trickle and mDNS support
- DTLS key and certificate generation
- DTLS handshake, encryption / decryption (for SCTP)
- SRTP keying, encryption and decryption for RTP and RTCP
- Pure Python SCTP implementation
- Data Channels
- Sending and receiving audio (Opus / PCMU / PCMA)
- Sending and receiving video (VP8 / H.264)
- Bundling audio / video / data channels
- RTCP reports, including NACK / PLI to recover from packet loss

Installing

The easiest way to install aiortc is to run:

    pip install aiortc

Building from source

If there are no wheels for your system or if you wish to build aiortc from source, you will need a couple of libraries installed on your system: Opus for audio encoding / decoding, and LibVPX for video encoding / decoding.

On Debian/Ubuntu run:

    apt install libopus-dev libvpx-dev

On OS X run:

    brew install opus libvpx

License

aiortc is released under the BSD license: https://aiortc.readthedocs.io/en/latest/license.html
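To give a feel for the API described above, here is a minimal, illustrative sketch (not taken from the aiortc documentation) that opens a peer connection, attaches a data channel and prints the local SDP offer; the channel label and message are arbitrary:

```python
import asyncio
from aiortc import RTCPeerConnection

async def main() -> None:
    pc = RTCPeerConnection()
    channel = pc.createDataChannel("chat")  # label "chat" is arbitrary

    @channel.on("open")
    def on_open() -> None:
        channel.send("hello")  # fires once the channel is connected

    # Generate a local SDP offer; a real app would send this to a remote
    # peer over its own signaling mechanism and apply the peer's answer.
    offer = await pc.createOffer()
    await pc.setLocalDescription(offer)
    print(pc.localDescription.sdp)
    await pc.close()

asyncio.run(main())
```

In a real application, the peer's answer would then be applied with setRemoteDescription() before media or data can flow.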

Production-Level-Deep-Learning
github
LLM Vibe Score0.619
Human Vibe Score0.8326638433689385
alirezadirMar 28, 2025

Production-Level-Deep-Learning

:bulb: A Guide to Production Level Deep Learning :clapper: :scroll: :ferry:

🇨🇳 Translation in Chinese

:label: NEW: Machine Learning Interviews

:label: Note: This repo is under continuous development, and all feedback and contributions are very welcome :blush:

Deploying deep learning models in production can be challenging, as it goes far beyond training models with good performance. Several distinct components need to be designed and developed in order to deploy a production-level deep learning system. This repo aims to be an engineering guideline for building production-level deep learning systems that will be deployed in real-world applications.

The material presented here is borrowed from the Full Stack Deep Learning Bootcamp (by Pieter Abbeel at UC Berkeley, Josh Tobin at OpenAI, and Sergey Karayev at Turnitin), the TFX workshop by Robert Crowe, and Pipeline.ai's Advanced KubeFlow Meetup by Chris Fregly.

Machine Learning Projects

Fun :flushed: fact: 85% of AI projects fail. [1] Potential reasons include:

- Technically infeasible or poorly scoped
- Never making the leap to production
- Unclear success criteria (metrics)
- Poor team management

ML Projects lifecycle

Understanding the state of the art in your domain is important because it:

- helps you understand what is possible
- helps you know what to try next

Mental Model for ML projects

The two important factors to consider when defining and prioritizing ML projects:

- High impact:
  - complex parts of your pipeline
  - where "cheap prediction" is valuable
  - where automating a complicated manual process is valuable
- Low cost, which is driven by:
  - data availability
  - performance requirements: costs tend to scale super-linearly in the accuracy requirement
  - problem difficulty: some of the hard problems include unsupervised learning, reinforcement learning, and certain categories of supervised learning

Full stack pipeline

In the following, we will go through each module of a production-level deep learning system and recommend toolsets and frameworks, as well as best practices from practitioners, that fit each component.

1. Data Management

1.1. Data Sources

- Supervised deep learning requires a lot of labeled data, and labeling your own data is costly! Here are some resources for data:
  - Open source data (good to start with, but not an advantage)
  - Data augmentation (a MUST for computer vision, an option for NLP)
  - Synthetic data (almost always worth starting with, esp. in NLP)

1.2. Data Labeling

- Requires a separate software stack (labeling platforms), temporary labor, and QC
- Sources of labor for labeling:
  - Crowdsourcing (Mechanical Turk): cheap and scalable, less reliable, needs QC
  - Hiring your own annotators: less QC needed, but expensive and slow to scale
  - Data labeling service companies: FigureEight
- Labeling platforms:
  - Diffgram: training data software (computer vision)
  - Prodigy: an annotation tool powered by active learning (by the developers of spaCy), text and image
  - HIVE: AI-as-a-Service platform for computer vision
  - Supervisely: entire computer vision platform
  - Labelbox: computer vision
  - Scale AI: data platform (computer vision & NLP)

1.3. Data Storage

- Object store: stores binary data (images, sound files, compressed texts), e.g. Amazon S3 or Ceph Object Store
- Database: stores metadata (file paths, labels, user activity, etc.). Postgres is the right choice for most applications, with best-in-class SQL and great support for unstructured JSON.
- Data lake: aggregates features which are not obtainable from a database (e.g. logs), e.g. Amazon Redshift
- Feature store: stores, accesses, and shares machine learning features. (Feature extraction can be computationally expensive and nearly impossible to scale, so re-using features across models and teams is key to high-performance ML teams.) Examples: FEAST (Google Cloud, open source) and Michelangelo Palette (Uber)
- Suggestion: at training time, copy data into a local or networked filesystem (NFS). [1]
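To make the object-store option above concrete, here is a minimal, illustrative sketch using Amazon S3 via boto3; the bucket name, key, and file are placeholders and not part of the original guide:

```python
# Upload a training artifact to an object store and get a temporary
# download URL. Bucket, key, and local file names are hypothetical.
import boto3

s3 = boto3.client("s3")
s3.upload_file("cat.png", "my-training-data", "images/cat.png")
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "my-training-data", "Key": "images/cat.png"},
)
print(url)
```

The same pattern should apply to any S3-compatible store, such as Ceph's object gateway.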
1.4. Data Versioning

- It's a MUST for deployed ML models: deployed ML models are part code, part data. [1] No data versioning means no model versioning.
- Data versioning platforms:
  - DVC: open source version control system for ML projects
  - Pachyderm: version control for data
  - Dolt: a SQL database with Git-like version control for data and schema

1.5. Data Processing

- Training data for production models may come from different sources, including stored data in databases and object stores, log processing, and the outputs of other classifiers.
- There are dependencies between tasks: each needs to be kicked off after its dependencies finish. For example, training on new log data requires a preprocessing step before training. Makefiles are not scalable, so "workflow managers" become essential in this regard.
- Workflow orchestration:
  - Luigi by Spotify
  - Airflow by Airbnb: dynamic, extensible, elegant, and scalable (the most widely used)
    - DAG workflows
    - Robust conditional execution: retry in case of failure
    - Pusher supports Docker images with TensorFlow Serving
    - Whole workflow in a single .py file

2. Development, Training, and Evaluation

2.1. Software engineering

- Winner language: Python
- Editors: Vim, Emacs, VS Code (recommended by the author: built-in git staging and diff, code linting, opening projects remotely through ssh)
- Notebooks: great as a starting point for projects, hard to scale. (Fun fact: Netflix's notebook-driven architecture is an exception, which is entirely based on the nteract suite.)
  - nteract: a next-gen React-based UI for Jupyter notebooks
  - Papermill: an nteract library built for parameterizing, executing, and analyzing Jupyter notebooks
  - Commuter: another nteract project which provides a read-only display of notebooks (e.g. from S3 buckets)
- Streamlit: interactive data science tool with applets
- Compute recommendations [1]:
  - For individuals or startups: development on a 4x Turing-architecture PC; training/evaluation on the same 4x GPU PC. When running many experiments, either buy shared servers or use cloud instances.
  - For large companies: development on a 4x Turing-architecture PC per ML scientist, or let them use V100 instances; training/evaluation on cloud instances with proper provisioning and handling of failures
- Cloud providers: GCP (option to connect GPUs to any instance, plus TPUs), AWS

2.2. Resource Management

- Allocating free resources to programs. Resource management options:
  - Old-school cluster job schedulers (e.g. the Slurm workload manager)
  - Docker + Kubernetes
  - Kubeflow
  - Polyaxon (paid features)

2.3. DL Frameworks

- Unless you have a good reason not to, use TensorFlow/Keras or PyTorch. [1] The frameworks differ in how far along they are for "development" versus "production" use.

2.4. Experiment management

- Development, training, and evaluation strategy: always start simple. Train a small model on a small batch; only if it works, scale to larger data and models, then do hyperparameter tuning! A minimal tracking sketch follows.
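Here is a minimal, illustrative sketch of experiment tracking with MLflow Tracking, one of the tools listed next; the run name, parameters, and dummy metric are placeholders, not part of the original guide:

```python
import mlflow

with mlflow.start_run(run_name="baseline"):
    mlflow.log_param("learning_rate", 1e-3)
    mlflow.log_param("batch_size", 32)
    for epoch in range(3):
        val_loss = 1.0 / (epoch + 1)  # stand-in for a real validation loss
        mlflow.log_metric("val_loss", val_loss, step=epoch)
```

Runs logged this way can then be compared side by side in the MLflow UI (started with the mlflow ui command).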
- Experiment management tools:
  - TensorBoard: provides the visualization and tooling needed for ML experimentation
  - Losswise: monitoring for ML
  - Comet: lets you track code, experiments, and results on ML projects
  - Weights & Biases: record and visualize every detail of your research with easy collaboration
  - MLflow Tracking: for logging parameters, code versions, metrics, and output files, as well as visualization of the results; automatic experiment tracking with one line of code in Python, side-by-side comparison of experiments, hyperparameter tuning, and support for Kubernetes-based jobs

2.5. Hyperparameter Tuning

- Approaches: grid search, random search, Bayesian optimization, HyperBand and the Asynchronous Successive Halving Algorithm (ASHA), population-based training
- Platforms:
  - Ray Tune: a Python library for hyperparameter tuning at any scale (with a focus on deep learning and deep reinforcement learning); supports any machine learning framework, including PyTorch, XGBoost, MXNet, and Keras
  - Katib: Kubernetes-native system for hyperparameter tuning and neural architecture search, inspired by Google Vizier; supports multiple ML/DL frameworks (e.g. TensorFlow, MXNet, and PyTorch)
  - Hyperas: a simple wrapper around hyperopt for Keras, with a simple template notation to define hyperparameter ranges to tune
  - SigOpt: a scalable, enterprise-grade optimization platform
  - Sweeps from Weights & Biases (https://www.wandb.com/): parameters are not explicitly specified by a developer; instead they are approximated and learned by a machine learning model
  - Keras Tuner: a hyperparameter tuner for Keras, specifically for tf.keras with TensorFlow 2.0

2.6. Distributed Training

- Data parallelism: use it when iteration time is too long (both TensorFlow and PyTorch support it); Ray Distributed Training
- Model parallelism: when the model does not fit on a single GPU
- Other solutions: Horovod

3. Troubleshooting [TBD]

4. Testing and Deployment

4.1. Testing and CI/CD

Machine learning production software requires a more diverse set of test suites than traditional software:

- Unit and integration testing. Types of tests:
  - Training system tests: test the training pipeline
  - Validation tests: test the prediction system on the validation set
  - Functionality tests: test the prediction system on a few important examples (see the sketch after this section)
- Continuous integration: running tests after each new code change pushed to the repo
- SaaS for continuous integration:
  - Argo: open source Kubernetes-native workflow engine for orchestrating parallel jobs (includes workflows, events, CI and CD)
  - CircleCI: language-inclusive support, custom environments, flexible resource allocation; used by Instacart, Lyft, and StackShare
  - Travis CI
  - Buildkite: fast and stable builds; the open source agent runs on almost any machine and architecture, with freedom to use your own tools and services
  - Jenkins: old-school build system
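Here is the functionality-test sketch referenced above: a minimal pytest example in which predict() is a stand-in stub and the canonical examples are invented for illustration:

```python
import pytest

def predict(text: str) -> str:
    """Stand-in for the real prediction system under test."""
    return "negative" if "disappointing" in text else "positive"

# A few important, hand-picked examples the model must always get right.
CANONICAL_EXAMPLES = [
    ("the movie was great", "positive"),
    ("utterly disappointing", "negative"),
]

@pytest.mark.parametrize("text,expected", CANONICAL_EXAMPLES)
def test_prediction_on_canonical_examples(text, expected):
    assert predict(text) == expected
```

Run on every push by the CI system, such a suite catches regressions on known-important inputs before they reach production.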
4.2. Web Deployment

- Consists of a prediction system and a serving system (see the minimal sketch at the end of this guide):
  - Prediction system: processes input data and makes predictions
  - Serving system (web server): serves predictions with scale in mind; uses a REST API to serve prediction HTTP requests and calls the prediction system to respond
- Serving options:
  - Deploy to VMs, scale by adding instances
  - Deploy as containers, scale via orchestration
    - Containers: Docker
    - Container orchestration: Kubernetes (the most popular now), Mesos, Marathon
  - Deploy code as a "serverless function"
  - Deploy via a model serving solution
- Model serving: specialized web deployment for ML models that batches requests for GPU inference. Frameworks: TensorFlow Serving, MXNet Model Server, Clipper (Berkeley), and SaaS solutions such as Seldon (serve and scale models built in any framework on Kubernetes) and Algorithmia
- Decision making, CPU or GPU?
  - CPU inference: preferable if it meets the requirements; scale by adding more servers, or go serverless
  - GPU inference: TF Serving or Clipper; adaptive batching is useful
- (Bonus) Deploying Jupyter notebooks: Kubeflow Fairing is a hybrid deployment package that lets you deploy your Jupyter notebook code!

4.3. Service Mesh and Traffic Routing

- The transition from monolithic applications towards a distributed microservice architecture can be challenging. A service mesh (consisting of a network of microservices) reduces the complexity of such deployments and eases the strain on development teams.
- Istio: a service mesh to ease the creation of a network of deployed services, with load balancing, service-to-service authentication and monitoring, with few or no code changes in service code

4.4. Monitoring

- Purpose of monitoring:
  - Alerts for downtime, errors, and distribution shifts
  - Catching service and data regressions
- Cloud providers' solutions are decent
- Kiali: an observability console for Istio with service mesh configuration capabilities. It answers these questions: How are the microservices connected? How are they performing? Are we done?

4.5. Deploying on Embedded and Mobile Devices

- Main challenge: memory footprint and compute constraints
- Solutions:
  - Quantization
  - Reduced model size: MobileNets
  - Knowledge distillation: DistilBERT (for NLP)
- Embedded and mobile frameworks: TensorFlow Lite, PyTorch Mobile, Core ML, ML Kit, FRITZ, OpenVINO
- Model conversion: Open Neural Network Exchange (ONNX), an open-source format for deep learning models

4.6. All-in-one solutions

- TensorFlow Extended (TFX), Michelangelo (Uber), Google Cloud AI Platform, Amazon SageMaker, Neptune, FLOYD, Paperspace, Determined AI, Domino Data Lab

TensorFlow Extended (TFX) [TBD]

Airflow and KubeFlow ML Pipelines [TBD]

Other useful links:

- Lessons learned from building practical deep learning systems
- Machine Learning: The High Interest Credit Card of Technical Debt

Contributing

References:

[1] Full Stack Deep Learning Bootcamp, Nov 2019.
[2] Advanced KubeFlow Workshop by Pipeline.ai, 2019.
[3] TFX: Real World Machine Learning in Production
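To make the prediction-system / serving-system split in section 4.2 concrete, here is a minimal, illustrative sketch of a REST prediction endpoint using FastAPI; the guide does not prescribe a specific web framework, and the model class and scoring logic are placeholders:

```python
from fastapi import FastAPI
from pydantic import BaseModel

class TinyModel:
    """Stand-in for a real trained model (the prediction system)."""
    def predict(self, text: str) -> float:
        return float(len(text) % 2)  # placeholder logic, not a real model

class PredictRequest(BaseModel):
    text: str

app = FastAPI()
model = TinyModel()  # in production, load trained weights once at startup

@app.post("/predict")
def predict(req: PredictRequest) -> dict:
    # Serving system: handles the HTTP request, delegates to the model.
    return {"score": model.predict(req.text)}
```

Assuming the file is saved as app.py, it can be served with, for example, uvicorn app:app, and clients POST JSON to /predict; scaling is then a matter of running more instances behind a load balancer, as the serving options above describe.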

practicalAI-cn
github
LLM Vibe Score0.607
Human Vibe Score0.9006050826946348
MLEverydayMar 28, 2025

practicalAI-cn

AI in Action - practicalAI, Chinese edition. Gives you the ability to use machine learning to extract valuable insights from data.

- 🔥 Implement basic machine learning algorithms and deep neural networks with PyTorch.
- 🖥️ No setup required: run everything in the browser with Google Colab.
- 📦 Not just tutorials, but learning product-grade, object-oriented machine learning programming.

Notebooks

| Basics | Deep Learning | Advanced | Topics |
|-|-|-|-|
| 📓 Notebooks | 🔥 PyTorch | 📚 Advanced RNNs | 📸 Computer Vision |
| 🐍 Python | 🎛️ Multilayer Perceptrons | 🏎️ Highway and Residual Networks | ⏰ Time Series Analysis |
| 🔢 NumPy | 🔎 Data & Models | 🔮 Autoencoders | 🏘️ Topic Modeling |
| 🐼 Pandas | 📦 Object-Oriented ML | 🎭 Generative Adversarial Networks | 🛒 Recommendation Systems |
| 📈 Linear Regression | 🖼️ Convolutional Neural Networks | 🐝 Spatial Transformer Networks | 🗣️ Pretrained Language Modeling |
| 📊 Logistic Regression | 📝 Embeddings | | 🤷 Multitask Learning |
| 🌳 Random Forests | 📗 Recurrent Neural Networks | | 🎯 Low Shot Learning |
| 💥 KMeans Clustering | | | 🍒 Reinforcement Learning |

Viewing the notebooks

If you don't need to run the notebooks, you can conveniently view them with Jupyter nbviewer: replace https://github.com/ with https://nbviewer.jupyter.org/github/, or open https://nbviewer.jupyter.org and enter the notebook's URL.

Running the notebooks

1. Get a notebook from this project's notebooks folder;
2. You can run the notebooks on Google Colab (recommended) or on your local machine;
3. Click a notebook, then replace https://github.com/ in the URL with https://colab.research.google.com/github/, or use this Chrome extension to do it in one click;
4. Sign in with your own Google account;
5. Click "Copy to Drive" in the toolbar, which opens the notebook in a new tab;
6. Rename the notebook by removing "Copy of" from the title;
7. Run the code, make changes, and so on; everything is saved automatically to your personal Google Drive.

Contributing notebooks

1. Download the modified Google Colab notebook as an .ipynb file;
2. Go to https://github.com/LisonEvf/practicalAI-cn/tree/master/notebooks;
3. Click "Upload files." and upload the .ipynb file;
4. Write a detailed commit title and description;
5. Name your branch appropriately;
6. Click "Propose changes".

Contribution list

Anyone is welcome to participate and improve the translations.

| Notebook | Translator |
|--|--|
| 00_Notebooks.ipynb | @amusi |
| 01_Python.ipynb | @amusi |
| 02_NumPy.ipynb | @amusi |
| 03_Pandas.ipynb | @amusi |
| 04LinearRegression.ipynb | @jasonhhao |
| 05LogisticRegression.ipynb | @jasonhhao |
| 06RandomForests.ipynb | @jasonhhao |
| 07_PyTorch.ipynb | @amusi |
| 08MultilayerPerceptron.ipynb | @zhyongquan |
| 09Dataand_Models.ipynb | @zhyongquan |
| 10ObjectOriented_ML.ipynb | @zhyongquan |
| 11ConvolutionalNeural_Networks.ipynb | |
| 12_Embeddings.ipynb | @wengJJ |
| 13RecurrentNeural_Networks.ipynb | |
| 14AdvancedRNNs.ipynb | |
| 15ComputerVision.ipynb | |
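As a small illustration of the URL rewriting described above, here is a throwaway Python helper; the notebook path is only an example:

```python
def to_colab_url(github_url: str) -> str:
    """Rewrite a GitHub notebook URL into a Google Colab URL."""
    return github_url.replace(
        "https://github.com/", "https://colab.research.google.com/github/"
    )

print(to_colab_url(
    "https://github.com/LisonEvf/practicalAI-cn/blob/master/notebooks/01_Python.ipynb"
))
```

The nbviewer variant is the same one-line substitution with https://nbviewer.jupyter.org/github/ as the prefix.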

freeciv-web
github
LLM Vibe Score0.567
Human Vibe Score0.5875819302299989
freecivMar 28, 2025

freeciv-web

THE FREECIV-WEB PROJECT

Freeciv-web is an open-source turn-based strategy game. It can be played in any HTML5-capable web browser and features in-depth gameplay and a wide variety of game modes and options. Your goal is to build cities, collect resources, organize your government, and build an army, with the ultimate goal of creating the best civilization. You can play online against other players (multiplayer) or play by yourself against the computer. There are both an HTML5 2D version with isometric graphics and a 3D WebGL version of Freeciv-web.

Freeciv-web is free and open source software. The Freeciv C server is released under the GNU General Public License, while the Freeciv-web client is released under the GNU Affero General Public License. See License for the full license document.

Live servers

Currently known servers based on Freeciv-web, which are open source in compliance with the AGPL license:

- FCIV.NET [https://github.com/fciv-net/fciv-net]
- freecivweb.org [https://github.com/Lexxie9952/fcw.org-server]
- moving borders [https://github.com/lonemadmax/freeciv-web] (everything except longturn and real-Earth)
- Freeciv Tactics & Triumph [https://github.com/Canik05/freeciv-tnt]
- Freeciv Games & Mods (no PBEM)

Freeciv-web screenshots: Freeciv WebGL 3D and Freeciv-web HTML5 versions (images in the repository).

Overview

Freeciv-Web consists of these components:

- Freeciv-web: a Java web application for the Freeciv-web client, which makes up the application viewed in each user's web browser. The Metaserver is also a part of this module. Implemented in Javascript, Java, JSP, HTML and CSS. Built with Maven; runs on Tomcat 10 and nginx.
- Freeciv: the Freeciv C server, which is checked out from the official Git repository and patched to work with a WebSocket/JSON protocol. Implemented in C.
- Freeciv-proxy: a WebSocket proxy which allows WebSocket clients in Freeciv-web to send socket requests to Freeciv servers. WebSocket requests are sent from Javascript in Freeciv-web to nginx, which proxies the WebSocket messages to freeciv-proxy, which finally sends Freeciv socket requests to the Freeciv servers. Implemented in Python.
- Publite2: a process launcher for Freeciv C servers, which manages multiple Freeciv server processes and checks capacity through the Metaserver. Implemented in Python.
- pbem: play-by-email support.

Freeciv WebGL

Freeciv WebGL is the 3D version, which uses the Three.js 3D engine. More info about the WebGL 3D version can be found for developers and 3D artists. Developer: Andreas Røsdal @andreasrosdal

Running Freeciv-web on your computer

The recommended and probably easiest way is to use Vagrant on VirtualBox. Whatever the method you choose, you'll have to check out Freeciv-web to a directory on your computer by installing Git and cloning this repository. You may also want to change some parameters before installing, although it's not needed in most cases. If you have special requirements, have a look at config.dist, copy it without the .dist extension and edit it to your liking.

:warning: Notice for Windows users

Please keep in mind that the files are to be used in a Unix-like system (some Ubuntu version with the provided Vagrant file). Line endings for text files are different in Windows, and some editors "correct" them, making the files unusable in the VM. There's some provision to recode the main configuration files when installing, but not afterwards.
If you touch shared files after installation, please use an editor that respects Unix line endings, or transform them with a utility like dos2unix after saving.

Running Freeciv-web with Vagrant on VirtualBox

Freeciv-web can be set up using Vagrant on VirtualBox to quickly create a local developer image running Freeciv-web on the latest Ubuntu, on a host operating system such as Windows, OSX or Linux. This is the recommended way to build Freeciv-web on your computer.

1. Install VirtualBox: https://www.virtualbox.org/ (install manually on Windows, or via your package manager on Linux).
2. Install Vagrant: http://www.vagrantup.com/ (install manually on Windows, or via your package manager on Linux).
3. Run Vagrant in your Freeciv-web directory. This will build, compile, install and run Freeciv-web on the virtual server image. Wait for the installation process to complete, watching for any error messages in the logs. If you get an error message about virtualization (VT) not working, enable virtualization in the BIOS.
4. Test Freeciv-web by pointing your browser to http://localhost if you run Windows, or http://localhost:8080 if you run Linux or macOS.

To log in to your Vagrant server, use the vagrant ssh command. The Vagrant guest machine will mount the Freeciv-web source repository in the /vagrant directory. Note that running Freeciv-web using Vagrant requires about 4 GB of memory and 3 GB of hard disk space.

System requirements for manual install

Install this software if you are not running Freeciv-web with Vagrant:

- Tomcat 10 - https://tomcat.apache.org/
- Java 11 JDK - https://adoptopenjdk.net/
- Python 3.6 - http://www.python.org/
- Pillow v2.3.0 (PIL fork) - http://pillow.readthedocs.org/ (required for freeciv-img-extract)
- MariaDB - https://mariadb.org/
- Maven 3 - http://maven.apache.org/download.html
- Firebug for debugging - http://getfirebug.com/
- curl-7.19.7 - http://curl.haxx.se/
- OpenSSL - http://www.openssl.org/
- nginx 1.11.x or later - http://nginx.org/
- MySQL Connector/Python - https://github.com/mysql/mysql-connector-python
- pngcrush, required for freeciv-img-extract - http://pmt.sourceforge.net/pngcrush/
- Tornado 6.1 or later - http://www.tornadoweb.org/
- Jansson 2.6 - http://www.digip.org/jansson/
- liblzma-dev - http://tukaani.org/xz/ - for XZ-compressed savegames

When on a tested system, you may run scripts/install/install.sh and it will fetch and configure what's needed.

Start and stop Freeciv-web with the following commands: start-freeciv-web.sh, stop-freeciv-web.sh, status-freeciv-web.sh. All software components in Freeciv-web will log to the /logs sub-directory of the Freeciv-web installation.

Running Freeciv-web on Docker

Freeciv-web can easily be built and run from Docker using docker-compose. Make sure you have both Docker and Docker Compose installed, then run docker-compose from the freeciv-web directory and connect from the host machine using a standard browser at http://localhost:8080/. Enjoy. The overall Dockerfile and the required changes to scripts need some further improvement.

Freeciv-Web continuous integration on GitHub Actions

Freeciv-Web is built on GitHub Actions on every commit.

Developers interested in Freeciv-web

If you want to contribute to Freeciv-web, see the issues on GitHub and the TODO file for some tasks you can work on. Pull requests on GitHub are welcome!
Contributors to Freeciv-web Andreas Røsdal @andreasrosdal Marko Lindqvist @cazfi Sveinung Kvilhaugsvik @kvilhaugsvik Gerik Bonaert @adaxi Lmoureaux @lmoureaux Máximo Castañeda @lonemadmax and the Freeciv.org project!

TornadoVM
github
LLM Vibe Score0.539
Human Vibe Score0.20972324263626374
beehive-labMar 28, 2025

TornadoVM

TornadoVM !TornadoVM version TornadoVM is a plug-in to OpenJDK and GraalVM that allows programmers to automatically run Java programs on heterogeneous hardware. TornadoVM targets OpenCL-, PTX- and SPIR-V-compatible devices, which include multi-core CPUs, dedicated GPUs (Intel, NVIDIA, AMD), integrated GPUs (Intel HD Graphics and ARM Mali), and FPGAs (Intel and Xilinx). TornadoVM has three backends that generate OpenCL C, NVIDIA CUDA PTX assembly, and SPIR-V binary. Developers can choose which backends to install and run. Website: tornadovm.org Documentation: https://tornadovm.readthedocs.io/en/latest/ For a quick introduction please read the following FAQ. Latest Release: TornadoVM 1.0.10 - 31/01/2025: see CHANGELOG. Installation On Linux and macOS, TornadoVM can be installed automatically with the installation script. For example: NOTE Select the desired backend: opencl: enables the OpenCL backend (requires OpenCL drivers) ptx: enables the PTX backend (requires NVIDIA CUDA drivers) spirv: enables the SPIR-V backend (requires Intel Level Zero drivers) Example of installation: Alternatively, TornadoVM can be installed either manually from source or by using Docker. If you are planning to use Docker with TornadoVM on GPUs, you can also follow these guidelines. You can also run TornadoVM on Amazon AWS CPUs, GPUs, and FPGAs following the instructions here. Usage Instructions TornadoVM is currently being used to accelerate machine learning and deep learning applications, computer vision, physics simulations, financial applications, computational photography, and signal processing. Featured use-cases: kfusion-tornadovm: a Java application for accelerating a computer-vision workload using the Tornado APIs to run on discrete and integrated GPUs. Java Ray-Tracer: a Java application accelerated with TornadoVM for real-time ray-tracing. We also have a set of examples that includes NBody, DFT, KMeans computation and matrix computations. Additional Information General Documentation Benchmarks How TornadoVM executes reductions Execution Flags FPGA execution Profiler Usage Programming Model TornadoVM exposes task-level, data-level and pipeline-level parallelism to the programmer via a lightweight Application Programming Interface (API). In addition, TornadoVM is single-source: the code to be accelerated and the host code live in the same Java program. Compute kernels in TornadoVM can be programmed using two different approaches (APIs): a) Loop Parallel API Compute kernels are written in sequential form (tasks programmed for single-threaded execution). To express parallelism, TornadoVM exposes two annotations that can be used in loops and parameters: a) @Parallel for annotating parallel loops; and b) @Reduce for annotating parameters used in reductions. The following code snippet shows a full example to accelerate Matrix-Multiplication using TornadoVM and the loop-parallel API (a hedged sketch appears just below): To run TornadoVM, you need to either install the TornadoVM extension for GraalVM/OpenJDK, or run with our Docker images. Additional Resources Here you can find videos, presentations, tech articles and artefacts describing TornadoVM and how to use it. Academic Publications If you are using TornadoVM >= 0.2 (which includes the Dynamic Reconfiguration, the initial FPGA support and CPU/GPU reductions), please use the following citation: If you are using Tornado 0.1 (initial release), please use the following citation in your work. Selected publications can be found here.
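Since the matrix-multiplication snippet referenced above is not reproduced here, the following is a minimal sketch of the loop-parallel API. It assumes a recent TornadoVM API (TaskGraph and TornadoExecutionPlan); the class names, package paths, and use of primitive float[] arrays are assumptions that may differ between releases (newer versions use TornadoVM's own array types such as FloatArray).

```java
import uk.ac.manchester.tornado.api.TaskGraph;
import uk.ac.manchester.tornado.api.TornadoExecutionPlan;
import uk.ac.manchester.tornado.api.annotations.Parallel;
import uk.ac.manchester.tornado.api.enums.DataTransferMode;

public class MxM {

    // Kernel written as sequential Java; @Parallel marks the loops that
    // TornadoVM may map onto the parallel dimensions of the target device.
    public static void multiply(float[] a, float[] b, float[] c, int n) {
        for (@Parallel int i = 0; i < n; i++) {
            for (@Parallel int j = 0; j < n; j++) {
                float sum = 0.0f;
                for (int k = 0; k < n; k++) {
                    sum += a[i * n + k] * b[k * n + j];
                }
                c[i * n + j] = sum;
            }
        }
    }

    public static void main(String[] args) {
        final int n = 256;
        float[] a = new float[n * n];
        float[] b = new float[n * n];
        float[] c = new float[n * n];
        java.util.Arrays.fill(a, 1.0f);
        java.util.Arrays.fill(b, 2.0f);

        // Single-source host code: copy inputs once, run the task, copy results back.
        TaskGraph graph = new TaskGraph("s0")
                .transferToDevice(DataTransferMode.FIRST_EXECUTION, a, b)
                .task("t0", MxM::multiply, a, b, c, n)
                .transferToHost(DataTransferMode.EVERY_EXECUTION, c);

        new TornadoExecutionPlan(graph.snapshot()).execute();
    }
}
```

If this matches your installed version, it would typically be launched with the tornado wrapper script rather than plain java, so that the compiler plug-in is active.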
Acknowledgments This work is partially funded by Intel Corporation. In addition, it has been supported by the following EU & UKRI grants (most recent first): EU Horizon Europe & UKRI AERO 101092850. EU Horizon Europe & UKRI INCODE 101093069. EU Horizon Europe & UKRI ENCRYPT 101070670. EU Horizon Europe & UKRI TANGO 101070052. EU Horizon 2020 ELEGANT 957286. EU Horizon 2020 E2Data 780245. EU Horizon 2020 ACTiCLOUD 732366. Furthermore, TornadoVM has been supported by the following EPSRC grants: PAMELA EP/K008730/1. AnyScale Apps EP/L000725/1. Contributions and Collaborations We welcome collaborations! Please see how to contribute to the project in the CONTRIBUTING page. You can write your questions and proposals on the GitHub discussions page, or alternatively share a Google document with us. Collaborations: for academic & industry collaborations, please contact here. TornadoVM Team Visit our website to meet the team. Licenses Per Module To use TornadoVM, you can link the TornadoVM API to your application, which is under Apache 2. Each Java TornadoVM module is licensed individually (see the license files in the repository for each module): Tornado-API, Tornado-Runtime, Tornado-Assembly, Tornado-Drivers, Tornado-Drivers-OpenCL-Headers, Tornado-scripts, Tornado-Annotation, Tornado-Unittests, Tornado-Benchmarks, Tornado-Examples, Tornado-Matrices.

DownEdit
github
LLM Vibe Score0.491
Human Vibe Score0.032913669732192626
nxNullMar 28, 2025

DownEdit

DownEdit is a fast and powerful program for downloading and editing videos from top platforms like TikTok, Douyin, and Kuaishou. Effortlessly grab videos from user profiles and make bulk edits across an entire directory with just one click. Plus, our advanced Chat & AI features let you download, edit, and generate videos, images, and sounds in bulk. Exciting new features are coming soon - stay tuned! ✨ Preview 🔥 Current Features Edit Video: Enhance videos with various functions designed to streamline editing tasks across entire directories. Edit Photo: Quickly enhance images in bulk with various functions, including AI-powered ones. Edit Sound: Improve audio in bulk using powerful functions, including cutting-edge AI-powered tools. Download all videos: Retrieve videos from users (TikTok, Kuaishou, Douyin, etc.) without watermarks. Bulk AI Generator: Generate images and videos in bulk using powerful generative AI. AI Editor: Enhance your content effortlessly using an AI editor designed for images, sounds and videos. 🌐 Service

| Website | Provider | Single Video | User's Videos | Stream | Access | Status |
| --- | --- | --- | --- | --- | --- | --- |
| tiktok.com | None | ✔️ | ✔️ | ❌ | API (Cookie) | Inactive |
| douyin.com | None | ✔️ | ✔️ | ❌ | API (Cookie) | Inactive |
| kuaishou.com | None | ✔️ | ✔️ | ❌ | Login Required (Cookie) | Active |
| youtube.com | None | ✔️ | ✔️ | ❌ | (Public/Private) | Active |

🤖 AI Cloud

| Type | Model | Provider | Minimal | Bulk | Access | Status |
| --- | --- | --- | --- | --- | --- | --- |
| Image Generation | None | | None | ✔️ | API (Public) | Active |
| Video Generation | None | | None | ✔️ | | Inactive |
| Sound Generation | None | | None | ✔️ | | Inactive |

Local

| Type | Model | Provider | Minimal | Bulk | Access | Status |
| --- | --- | --- | --- | --- | --- | --- |
| Image Generation | None | | None | ✔️ | | Inactive |
| Video Generation | None | | None | ✔️ | | Inactive |
| Sound Generation | None | | None | ✔️ | | Inactive |

🚀 Usage Edit Video - Simply copy and paste (right-click) whatever directory location you would like to process. Tutorial !EditVideoAdobeExpress Change it according to your desired video speed. Input your music file location. Download Douyin videos - Download all videos from a user by entering the user link. Tutorial Download TikTok videos - Download all videos from a user by entering the username with @. Tutorial Download Kuaishou videos - Remember to input your own Cookie, otherwise it won't work. Tutorial Step 1. Right-click and select Inspect element. Step 2. Copy the Cookie from your browser. Step 3. Copy the ID of the user you want to download. Tips: If you are still getting errors, try changing your browser, using Incognito/Private mode, and resetting your Internet connection/IP. Edit Photo - Simply copy and paste (right-click) whatever directory location you would like to process. Tutorial Remove Background AI 🔎 Requirements Python [!NOTE] Version must be between 3.8 and 3.12. ⚙ Installation Step 1. Download and install Python on your PC. Step 2. Library installation - you have three options to install the required libraries: Option 1: Manual installation Option 2: Automatic installation & virtual environments Option 3: Terminal & virtual environments (see the hedged command sketch below) Step 3. Run the script For regular use: you can also download the application and use it on your PC without installing Python. Windows: Download macOS: None [!TIP] Fix Terminal Font Issues Install the Microsoft Cascadia font on your computer if your terminal does not support the font, which otherwise results in a program error.
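As the install commands themselves are not shown above, here is a hedged sketch of a typical from-source setup; the repository URL and the main.py entry point are assumptions, so check the project's README for the real ones:

```bash
# Clone the project (URL assumed from the author handle below)
git clone https://github.com/SokunHeng/DownEdit.git
cd DownEdit

# Option 2/3 style: keep dependencies in a virtual environment
python -m venv .venv
source .venv/bin/activate       # on Windows: .venv\Scripts\activate
pip install -r requirements.txt

# Run the script (entry-point name is an assumption)
python main.py
```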
🔨 Module The following dependencies are required for the project: Pystyle, Requests, Inquirer, Colorama, Moviepy, Rich, Playwright, Rembg, WMI, Psutil, Httpx, Aiofiles. Author 👤 Sokun Heng Github: @SokunHeng Show your support Please ⭐️ this repository if this project helped you! 📚 Reference Documentation 📝 License Copyright © 2022 SokunHeng.

awesome-quantum-machine-learning
github
LLM Vibe Score0.64
Human Vibe Score1
krishnakumarsekarMar 27, 2025

awesome-quantum-machine-learning

Awesome Quantum Machine Learning A curated list of awesome quantum machine learning algorithms, study materials, libraries and software (by language). Table of Contents INTRODUCTION Why Quantum Machine Learning? BASICS What is Quantum Mechanics? What is Quantum Computing? What is Topological Quantum Computing? Quantum Computing vs Classical Computing QUANTUM COMPUTING Atom Structure Photon Wave Electron Fluctuation or Spin States SuperPosition SuperPosition specific for machine learning (Quantum Walks) Classical Bit Quantum Bit or Qubit or Qbit Basic Gates in Quantum Computing Quantum Diode Quantum Transistor Quantum Processor Quantum Registry QRAM Quantum Entanglement QUANTUM COMPUTING MACHINE LEARNING BRIDGE Complex Numbers Tensors Tensors Network Oracle Hadamard Transform Hilbert Space Eigenvalues and Eigenvectors Schrödinger Operators Quantum Lambda Calculus Quantum Amplitude Phase Qubits Encode and Decode Convert Classical Bit to Qubit Quantum Dirac and Kets Quantum Complexity Arbitrary State Generation QUANTUM ALGORITHMS Quantum Fourier Transform Variational Quantum Eigensolver Grover's Algorithm Shor's Algorithm Hamiltonian Oracle Model Bernstein-Vazirani Algorithm Simon's Algorithm Deutsch-Jozsa Algorithm Gradient Descent Phase Estimation Haar Transform Quantum Ridgelet Transform Quantum NP Problem QUANTUM MACHINE LEARNING ALGORITHMS Quantum K-Nearest Neighbour Quantum K-Means Quantum Fuzzy C-Means Quantum Support Vector Machine Quantum Genetic Algorithm Quantum Hidden Markov Models Quantum State Classification with Bayesian Methods Quantum Ant Colony Optimization Quantum Cellular Automata Quantum Classification using Principal Component Analysis Quantum-Inspired Evolutionary Algorithm Quantum Approximate Optimization Algorithm Quantum Elephant Herding Optimization Quantum-behaved Particle Swarm Optimization Quantum Annealing Expectation-Maximization QUANTUM NEURAL NETWORK Quantum Perceptrons Qurons Quantum Auto Encoder Quantum Annealing Photonic Implementation of Quantum Neural Network Quantum Feed Forward Neural Network Quantum Boltzmann Neural Network Quantum Neural Net Weight Storage Quantum Upside Down Neural Net Quantum Hamiltonian Neural Net QANN QPN SAL Quantum Hamiltonian Learning Compressed Quantum Hamiltonian Learning QUANTUM STATISTICAL DATA ANALYSIS Quantum Probability Theory Kolmogorovian Theory Quantum Measurement Problem Intuitionistic Logic Heyting Algebra Quantum Filtering Paradoxes Quantum Stochastic Process Double Negation Quantum Stochastic Calculus Hamiltonian Calculus Quantum Ito's Formula Quantum Stochastic Differential Equations (QSDE) Quantum Stochastic Integration Itō Integral Quasiprobability Distributions Quantum Wiener Processes Quantum Statistical Ensemble Quantum Density Operator or Density Matrix Gibbs Canonical Ensemble Quantum Mean Quantum Variance Envariance Polynomial Optimization Quadratic Unconstrained Binary Optimization Quantum Gradient Descent Quantum-Based Newton's Method for Constrained Optimization Quantum-Based Newton's Method for Unconstrained Optimization Quantum Ensemble Quantum Topology Quantum Topological Data Analysis Quantum Bayesian Hypothesis Quantum Statistical Decision Theory Quantum Minimax Theorem Quantum Hunt-Stein Theorem Quantum Locally Asymptotic Normality Quantum Ising Model Quantum Metropolis Sampling Quantum Monte Carlo Approximation Quantum Bootstrapping Quantum Bootstrap Aggregation Quantum Decision Tree Classifier Quantum Outlier Detection Cholesky Decomposition for Quantum Chemistry Quantum Statistical Inference
Asymptotic Quantum Statistical Inference Quantum Gaussian Mixture Model Quantum t-design Quantum Central Limit Theorem Quantum Hypothesis Testing Quantum Chi-squared and Goodness of Fit Testing Quantum Estimation Theory Quantum Way of Linear Regression Asymptotic Properties of Quantum Outlier Detection in Quantum Concepts QUANTUM ARTIFICIAL INTELLIGENCE Heuristic Quantum Mechanics Consistent Quantum Reasoning Quantum Reinforcement Learning QUANTUM COMPUTER VISION QUANTUM PROGRAMMING LANGUAGES, TOOLS AND SOFTWARE ALL QUANTUM ALGORITHMS SOURCE CODES, GITHUBS QUANTUM HOT TOPICS Quantum Cognition Quantum Camera Quantum Mathematics Quantum Information Processing Quantum Image Processing Quantum Cryptography Quantum Elastic Search Quantum DNA Computing Adiabatic Quantum Computing Topological Big Data Analytics using Quantum Hamiltonian Time Based Quantum Computing Deep Quantum Learning Quantum Tunneling Quantum Entanglement Quantum Eigen Spectrum Quantum Dots Quantum Electrodynamics Quantum Teleportation Quantum Supremacy Quantum Zeno Effect Quantum Cohomology Quantum Chromodynamics Quantum Darwinism Quantum Coherence Quantum Decoherence Topological Quantum Computing Topological Quantum Field Theory Quantum Knots Topological Entanglement Boson Sampling Quantum Convolutional Code Stabilizer Code Quantum Chaos Quantum Game Theory Quantum Channel Tensor Space Theory Quantum Leap Quantum Mechanics for Time Travel Quantum Secured Block Chain Quantum Internet Quantum Optical Network Quantum Interference Quantum Operating System Electron Fractionalization Flip-Flop Quantum Computer Quantum Information with Gaussian States Quantum Anomaly Detection Distributed Secure Quantum Machine Learning Decentralized Quantum Machine Learning Artificial Agents for Quantum Designs Light Based Quantum Chips for AI Training QUANTUM STATE PREPARATION ALGORITHM FOR MACHINE LEARNING Pure Quantum State Product State Matrix Product State Greenberger–Horne–Zeilinger State W State AKLT Model Majumdar–Ghosh Model Multistate Landau–Zener Models Projected Entangled-Pair States Infinite Projected Entangled-Pair States Corner Transfer Matrix Method Tensor-Entanglement Renormalization Tree Tensor Network for Supervised Learning QUANTUM MACHINE LEARNING VS DEEP LEARNING QUANTUM MEETUPS QUANTUM GOOGLE GROUPS QUANTUM BASED COMPANIES QUANTUM LINKEDIN QUANTUM BASED DEGREES CONSOLIDATED QUANTUM ML BOOKS CONSOLIDATED QUANTUM ML VIDEOS CONSOLIDATED QUANTUM ML RESEARCH PAPERS CONSOLIDATED QUANTUM ML RESEARCH SCIENTISTS RECENT QUANTUM UPDATES FORUMS, PAGES AND NEWSLETTERS INTRODUCTION Why Quantum Machine Learning? Machine Learning (ML) may be a recent buzzword, but the work behind it started back in the 18th century. What is machine learning? In simple words, it is making a computer or application learn by itself. So is it purely a computing topic, tied to fields like computer science and IT? Not really: ML is a common platform that is mingled into all aspects of life, from agriculture to mechanics; computing is just a key component for using ML easily and effectively. To be clear, who is the mother of ML? Without question, mathematics is: the world's tremendous invention of complex numbers gave birth to this field. Applying mathematics to real-life problems always yields a solution, and everything from neural networks to the complexity of DNA runs on specific mathematical formulas and theorems.
As computing technology grew faster and faster, mathematics entered the field and delivered solutions to the real world via computing. At certain points in the computing timeline, once particular milestones were reached, people became interested in using advanced mathematical ideas such as complex numbers, eigenvalues and eigenvectors, and that was the kick-start for ML fields such as artificial neural networks, DNA computing, and so on. Now the main question: why is this field booming nowadays? From a business perspective, 8-10 years ago, during the kick-start period of ML, the big barrier was merging mathematics into the computing field. People well-versed in computing had no idea about mathematics, and research mathematicians had no idea about computing; education and job opportunities were split the same way at that time. Even if a person tried to study both, the business value of the resulting product was not good. Then top product companies like Google, IBM, and Microsoft decided to form teams pairing a mathematician, a physicist, and a computer scientist to come up with ideas in this field. The success of these teams produced some wonderful products, and the companies began offering cloud services built on them. That is where we are now. So what's next? Mathematics has reached the level of time-travel concepts, but computing still runs on classical mechanics. The companies understood that computing must change from classical to quantum, and they started working on the big quantum-computing field; the market named this field Quantum Information Science. The kick-start came from Google and IBM with quantum computing processors (D-Wave) for building quantum neural networks. The fields of Quantum Computer Science and Quantum Information Science will bring a big change to AI in the next 10 years. Waiting to see that... (google, ibm). References D-Wave - Owner of a quantum processor Google - Quantum AI Lab IBM - Quantum Computer Lab Quora - Question regarding the future of quantum AI NASA - NASA quantum works Youtube - Google video of a quantum processor external-link - MIT Review microsoft new product - Newly launched Microsoft Quantum Language and Development Kit microsoft - Microsoft quantum-related works Google2 - Google Quantum Machine Learning blog BBC - About Google quantum supremacy, the IBM quantum computer and Microsoft Q Google Quantum Supremacy - Latest 2019 Google quantum supremacy achievement IBM Quantum Supremacy - IBM talk on quantum supremacy as a primer VICE on the fight - IBM message on Google quantum supremacy IBM Zurich Quantum Safe Cryptography - An interesting effort to replace all our Certificate Authorities via cloud and IBM Q BASICS What is Quantum Mechanics? In a single line: the study of an electron that has moved out of the atom is classical mechanics; when it vibrates inside the atom, it is quantum mechanics. WIKIPEDIA - Basic history and outline LIVESCIENCE - A survey YOUTUBE - Simple animation video explaining it well. What is Quantum Computing? A way of executing multiple processes in parallel at the same time using qubits; it reduces computation time and the size of the processor, possibly to nano scale. WIKIPEDIA - Basic history and outline WEBOPEDIA - A survey YOUTUBE - Simple animation video explaining it well.
Quantum Computing vs Classical Computing LINK - Basic outline Quantum Computing Atom Structure one line: an electron orbiting around the nucleus in an elliptical path. YOUTUBE - A nice animation video about the basic atom structure Photon Wave one line: light, normally described as a wave, is transmitted as photons, much as solid particles are built from atoms. YOUTUBE - A nice animation video about the basic photon 1 YOUTUBE - A nice animation video about the basic photon 2 Electron Fluctuation or Spin one line: when laser light collides with solid particles, the electrons of the atom spin between the orbital layers of the atom. YOUTUBE - A nice animation video about basic electron spin 1 YOUTUBE - A nice animation video about basic electron spin 2 YOUTUBE - A nice animation video about basic electron spin 3 States one line: put a point on the spinning electron; if the point is at the top, it is state 1, and if it is at the bottom, state 0. YOUTUBE - A nice animation video about the quantum states SuperPosition two lines: during the spin of the electron, the point may be in the middle of the upper and lower positions, so an effective decision needs to be made on the point's location, either 0 or 1. The better option is to analyse it along with other electrons using probability, and this is called superposition. YOUTUBE - A nice animation video about quantum superposition SuperPosition specific for machine learning (Quantum Walks) one line: due to computational complexity, quantum computing only considers superposition between a limited number of electrons; to merge more than one set, quantum walks are the idea. YOUTUBE - A nice video about quantum walks Classical Bits one line: if an electron moves from one atom to another, from ground state to excited state, a bit value of 1 is used; otherwise, bit value 0 is used. Qubit one line: the superposition value of the states of a set of electrons is a qubit. YOUTUBE - A nice video about quantum bits 1 YOUTUBE - A nice video about bits and qubits 2 Basic Gates in Quantum Computing one line: just as classical computing has NOT, OR and AND, basic gates like NOT, the Hadamard gate, SWAP and phase shift can be made with quantum gates. YOUTUBE - A nice video about quantum gates Quantum Diode one line: quantum diodes use a different idea from the normal diode; a bunch of laser photons trigger the electron to spin, and the quantum magnetic flux captures the information. YOUTUBE - A nice video about the quantum diode Quantum Transistors one line: a transistor by default has a source, drain and gate; here the source is the photon wave, the drain is the flux, and the gate converts classical to quantum bits. QUORA - Discussion about the quantum transistor YOUTUBE - Well explained Quantum Processor one line: a nano integration circuit performing the quantum gate operations, surrounded by cooling units to reduce the tremendous amount of heat. YOUTUBE - Well explained Quantum Registry QRAM one line: compared with normal RAM, it is ultrafast and very small in size; the address location can be accessed using the qubits' superposition value, and for a very large memory set, coherent superposition (address of address) is used. PDF - Very well explained QUANTUM COMPUTING MACHINE LEARNING BRIDGE Complex Numbers one line: normally wave interference lives in an n-dimensional structure; to find a polynomial equation for nth-order curves, the better option is complex numbers. YOUTUBE - Wonderful series, very well explained
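The superposition and qubit ideas above are easy to see in code. Here is a minimal illustrative sketch using Qiskit (chosen purely as a convenient simulator; it is one of many tools listed later in this page): a Hadamard gate puts one qubit into equal superposition, and inspecting the statevector shows a 50/50 chance of measuring 0 or 1.

```python
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector

# One qubit, one Hadamard gate: |0> -> (|0> + |1>) / sqrt(2)
qc = QuantumCircuit(1)
qc.h(0)

# Inspect the resulting state directly instead of sampling hardware
state = Statevector.from_instruction(qc)
print(state.probabilities_dict())   # {'0': 0.5, '1': 0.5} up to float rounding
```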
Tensors one line: vectors have a direction in a 2D vector space; in an n-dimensional vector space, a vector's direction can be specified with a tensor. The best solution for finding the superposition of an n-vector electron spin space is to represent the vectors as tensors and apply tensor calculus. YOUTUBE - Wonderful, well-explained tensor basics YOUTUBE - Quantum tensor basics Tensors Network one line: as with connecting multiple vectors, multiple tensors form a network; solving such a network reduces the complexity of processing qubits. YOUTUBE - Tensor networks, with some ideas specifically for quantum algorithms QUANTUM MACHINE LEARNING ALGORITHMS Quantum K-Nearest Neighbour info: here the centroid (Euclidean distance) can be detected using the swap-gate test between two states of the qubit; as KNN is regressive, the loss can be tallied using the average. PDF1 from Microsoft - Theory explanation PDF2 - A good material to understand the basics Matlab - Yet to come soon Python - Yet to come soon Quantum K-Means info: two approaches are possible: 1. FFT and iFFT to make an oracle and calculate the means of the superposition; 2. adiabatic Hamiltonian generation, solving the Hamiltonian to determine the cluster. PDF1 - Applying quantum k-means on images in a nice way PDF2 - Theory PDF3 - Explaining k-means clustering using a Hamiltonian well Matlab - Yet to come soon Python - Yet to come soon Quantum Fuzzy C-Means info: similar to k-means, FCM also uses the oracle dialect, but instead of means, an oracle optimization followed by a rotation gate gives a good result. PDF1 - Theory Matlab - Yet to come soon Python - Yet to come soon Quantum Support Vector Machine info: a little different from the above, as here the kernel preparation is classical and the whole training happens in oracles, with the oracle doing the classification; as SVM is linear, an optimal-error (optimum of the least-squares dual formulation) based regression is needed to improve performance. PDF1 - Nice explanation, but a little hard to understand :) PDF2 - Nice application of QSVM Matlab - Yet to come soon Python - Yet to come soon Quantum Genetic Algorithm info: one of the algorithms best suited to the quantum field; here the chromosomes act as qubit vectors, the crossover part is carried out by an evaluation, and the mutation part is carried out by the rotation of gates. ![Flow Chart]() PDF1 - Very beautiful article, well explained and superb PDF2 - A big theory :) PDF3 - Super comparison Matlab - Simulation Python1 - Simulation Python2 - Yet to come Quantum Hidden Markov Models info: as HMMs are already state-based, here the quantum states act as usual for the Markov chain, and the shift between states uses a quantum operation based on a probability distribution. ![Flow Chart]() PDF1 - Nice idea and explanation PDF2 - Nice, but a slightly different concept Matlab - Yet to come Python1 - Yet to come Python2 - Yet to come Quantum state classification with Bayesian methods info: quantum Bayesian networks use the same states concept with quantum states, but here the classification of states, which makes the training data reusable, is based on the density of the states (interference). ![Bayesian Network Sample1]() ![Bayesian Network Sample2]() ![Bayesian Network Sample3]() PDF1 - Good theory PDF2 - Good explanation Matlab - Yet to come Python1 - Yet to come Python2 - Yet to come Quantum Ant Colony Optimization info: a good algorithm for processing multi-dimensional equations; ACO is best suited to travelling-salesman problems, and QACO is best suited to the travelling salesman in three or more dimensions. Here the quantum rotation circuit does the pheromone update, and the qubit-based colony communicates all around the colony in complex space. ![Ant Colony Optimization 1]()
PDF1 - Good concept PDF2 - Good application Matlab - Yet to come Python1 - Yet to come Python2 - Yet to come Quantum Cellular Automata info: one of the most complex algorithms, with various types, specifically used for polynomial equations and to design optimal gates for a problem. Here the lattice is formed using the quantum states, and the time calculation is based on the change of state between two qubits; best suited for nanoelectronics. ![Quantum Cellular Automata]() Wikipedia - Basic PDF1 - Just to get the keywords PDF2 - Nice explanation and an easily understandable application Matlab - Yet to come Python1 - Yet to come Python2 - Yet to come QUANTUM NEURAL NETWORK one line: it is really one of the hardest topics. To understand it easily: a normal neural network does parallel processing, while a QNN does parallel-of-parallel processing. In theory, a combination of various activation functions is possible in a QNN, whereas in a normal NN more than one activation function reduces performance and increases complexity. Quantum perceptrons info: the perceptron (layer) is the basic unit in a neural network. The quantum version of the perceptron must satisfy both linear and nonlinear problems; quantum concepts are a combination of the linear (calculus of superposition) and the nonlinear (state approximation using probability). To make a perceptron in the quantum world, a transformation (activation function) of the nonlinearity to a certain limit is needed, and this is carried out by the phase estimation algorithm. ![Quantum Perceptron 3]() PDF1 - Good theory PDF2 - Good explanation Matlab - Yet to come Python1 - Yet to come Python2 - Yet to come QUANTUM STATISTICAL DATA ANALYSIS one line: a concept still under research, and it can be seen in multiple ways. One way: if you want to apply an nth derivative to a problem, in current classical theory it is difficult to compute because it is a serialization problem; if instead you parallelize the differentiation, you must estimate the value in all flows via probability. Quantum probability helps achieve this, as the loss in the calculation is very small.
The other way, which is comparatively booming, is Quantum Bayesianism; it is a solution to most of the uncertainty problems in statistics, combining time and space in highly advanced physics research. QUANTUM PROGRAMMING LANGUAGES, TOOLS and SOFTWARE All info: all programming languages, software and tools in alphabetical order Software - Nice content covering everything Python library - A Python library Matlab based python library - Matlab Python library Quantum Tensor Network Github - Tensor network Bayesforge - A beautiful Amazon Web Services-enabled framework for quantum algorithms and data analytics Rigetti - A great tools repository for using a quantum computer in real time Rigetti Forest - An API to connect to a quantum computer quil/pyQuil - A quantum instruction language for using the Forest framework Grove - Grove is a repository showcasing the quantum Fourier transform, phase estimation, the quantum approximate optimization algorithm, and others, developed using Forest QISKit - An IBM kit to access quantum computers, mainly for quantum circuits IBM Bluemix Simulator - A Bluemix simulator for quantum circuits Microsoft Quantum Development Kit - A Microsoft Visual Studio-enabled kit for quantum circuit creation Microsoft "Q#" - Microsoft Q#, a new programming language for quantum circuit creation qiskit api python - An API to connect to IBM quantum computers; with a generated token it is easy to connect, but the utilities are very limited, and many new utilities will come soon Cyclops Tensor Framework - A framework for tensor network simulations Python ToolKit for chemistry and physics Quantum Algorithm simulations - A newly started project for simulating molecules and solids Bayesian Based Quantum Projects Repository - A nice repository and the kick-starter of Bayesforge Google Fermion Products - A newly launched product specifically for chemistry simulation Tree Tensor Networks - Interesting tensor network in incubation Deep Tensor Neural Network - Some useful information about tensor neural networks in incubation Generative Tensorial Networks - A startup applying machine learning via tensor networks for drug discovery Google Bristlecone - A new quantum processor from Google, aimed at future hardware with full-fledged AI support XANADU - A light-based quantum hardware (chips) and software company in the preparation stage; soon to be on the market fathom computing - A new concept for training AI in a processor using light- and quantum-based concepts; products will launch soon Alibaba Quantum Computing Cloud Service - Cloud service for accessing an 11-qubit quantum computing processor Atomistic Machine Learning Project - Seems something interesting with deep tensor networks for quantum chemistry applications circQ and Google Works - Google's top efforts on tools IBM Safe Cryptography on Cloud - IBM has started developing quantum-safe cryptography to replace all our Certificate Authorities via cloud Google Tensor Network Open Source - Google started the way most scientists prefer to use a quantum computer circuit: a tensor-network library in the TensorFlow family, which makes it easy to design the network, abstracts away the work of gates and processor preparation, and also shows the beauty of the maths Google Tensor Network Github - GitHub project of Google Tensor Network Quantum Tensorflow - Yet to come soon Quantum Spark - Yet to come soon Quantum Map Reduce - Yet to come soon Quantum Database - Yet to come soon Quantum Server - Yet to come soon Quantum Data Analytics - Yet to come soon QUANTUM HOT TOPICS Deep Quantum Learning Why and what is deep learning?
In one line: if you know deep learning, you can get a good job :). Even someone from a different field, undergraduate or graduate, who has done a master's specialization in deep learning can work in this big sector :). Practically speaking, machine learning (vector mathematics), deep learning (vector-space/graphics mathematics) and big data are terms created by big companies to set a trend in the market; in science and research there are no such words. Nowadays, if you ask a junior person working at these big companies what deep learning is, you will get a reply like "doing linear regression with stochastic gradient for unsupervised data using a convolutional neural network :)". They know the words clearly and know how to program with them on a bunch of "relative data". If you ask them about FCM, SVM, HMM and other algorithms, they will simply say these are old-fashioned algorithms that deep learning has replaced :). But they do not actually know the history and effectiveness of these algorithms and their mathematics, or how many mathematical theorems in vectors, spaces, tensors and so on were solved to build this "hiding the complexity" technology. They have not played with real non-relative data like medical images, astronomy images, geology images and so on; finding relations and features there is really complex, and looping over n images to do pattern matching is a giant piece of work. Nowadays, what is labelled deep learning (= multiple hidden artificial neural network layers) is not suitable for that. Why quantum deep learning, or deep quantum learning? In the middle of artificial neural network research, people realised that at the extreme only certain mathematical operations are possible with an ANN, and the aim of the ANN is to achieve parallel execution of many mathematical operations. In artificial intelligence, the word intelligence stands for mathematics: how effectively a problem can be solved depends on the mathematical logic applied to it, and more logic gives more performance (more intelligence). This goal opened the gate for quantum artificial neural networks. By applying the ideas behind deep learning in a quantum-mechanical environment, it is possible to apply complex mathematical equations to n items of non-relational data to find more features and improve performance. Quantum Machine Learning vs Deep Learning It is fun to discuss this. In recent days, most employees of product-based companies like Google, Microsoft, etc. use the phrase deep learning. What actually is deep learning? Is it a new invention? How do you learn it? Is it replacing machine learning? These questions come to the minds of junior research scholars and mid-level employees. The one answer to all these questions is: deep learning = parallel "for" loops, no more than that. It is an effective way of executing multiple tasks repeatedly and reducing computation cost, but it introduces a big gap between mathematics and computer science. How? All classical algorithms are based on serial processing and depend on the feedback of the first loop; applying a serial classical algorithm across multiple clusters will not give a good result, while some lightweight parallel classical algorithms (deep learning) do the job across multiple clusters but are not suitable for complex problems. So what is the solution?
As in the title: Quantum Machine Learning. The advantage is that deep learning simply does batch processing on the data, whereas quantum machine learning is designed to do batch processing according to the algorithm. The product companies realised this and started migrating to quantum machine learning: executing classical algorithms on quantum concepts gives better results than deep learning algorithms on classical computers, and the target is to merge both for truly wonderful results. References Quora - Good discussion Quora - The bridge discussion Pdf - Nice discussion Google - Google research discussion Microsoft - Microsoft's plan to merge both IBM - IBM's plan to merge both IBM Project - IBM project idea MIT and Google - Solutions for all questions QUANTUM MEETUPS Meetup 1 - Quantum Physics Meetup 2 - Quantum Computing London Meetup 3 - Quantum Computing New York Meetup 4 - Quantum Computing Canada Meetup 5 - Quantum Artificial Intelligence Texas Meetup 6 - General Quantum Mechanics, Mathematics, New York Meetup 7 - Quantum Computing Mountain View California Meetup 8 - Statistical Analysis New York Meetup 9 - Quantum Mechanics London UK Meetup 10 - Quantum Physics Sydney Australia Meetup 11 - Quantum Physics Berkeley CA Meetup 12 - Quantum Computing London UK Meetup 13 - Quantum Mechanics Carmichael CA Meetup 14 - Maths and Science Group Portland Meetup 15 - Quantum Physics Santa Monica, CA Meetup 16 - Quantum Mechanics London Meetup 17 - Quantum Computing London Meetup 18 - Quantum Metaphysics, Kansas City, Missouri, US Meetup 19 - Quantum Mechanics and Physics, Boston, Massachusetts, US Meetup 20 - Quantum Physics and Mechanics, San Francisco, California Meetup 21 - Quantum Mechanics, Langhorne, Pennsylvania Meetup 22 - Quantum Mechanics, Portland QUANTUM BASED DEGREES There are plenty of courses around the world, with many universities launching new ones day by day. Instead of covering only quantum ML, covering all quantum-related topics gives a better picture, in the order below. Available Courses Quantum Mechanics for Science and Engineers Online Stanford University - Nice preparatory course edX - Quantum Mechanics for Everyone NPTEL 1 - Nice series of courses to understand the basics and backbone of quantum mechanics NPTEL 2 NPTEL 3 NPTEL 4 NPTEL 5 Class Based Course UK Bristol Australia Australian National University Europe Max Planck University Quantum Physics Online MIT - Super explanation and solid basics NPTEL - Nice series of courses to understand the basics and backbone of quantum physics Class Based Course Europe University of Copenhagen Quantum Chemistry Online NPTEL 1 - Nice series of courses to understand the basics and backbone of quantum chemistry NPTEL 2 - Class Based Course Europe UGent Belgium Quantum Computing Online MIT - Super explanation and solid basics edX - Nice explanation NPTEL - Nice series of courses to understand the basics and backbone of quantum computing Class Based Course Canada uwaterloo Singapore National University Singapore USA Berkley China Baidu Quantum Technology Class Based Course Canada uwaterloo Singapore National University Singapore Europe Munich Russia Skoltech Quantum Information Science External Links quantwiki Online MIT - Super explanation and solid basics edX - Nice explanation NPTEL - Nice series of courses to understand the basics and backbone of quantum information and computing Class Based Course USA MIT Stanford University Joint Center for Quantum Information and Computer Science - University of Maryland Canada Perimeter Institute Singapore National University Singapore
Europe ULB Belgium IQOQI Quantum Electronics Online MIT - Wonderful course NPTEL - Nice series of courses to understand the basics and backbone of quantum electronics Class Based Course USA Texas Europe Zurich ICFO Asia Tata Institute Quantum Field Theory Online Stanford University - Nice preparatory course edX - Some QFT concepts available Class Based Course UK Imperial Europe Vrije Quantum Computer Science Class Based Course USA Oxford Joint Center for Quantum Information and Computer Science - University of Maryland Quantum Artificial Intelligence and Machine Learning External Links Quora 1 Quora 2 Artificial Agents Research for Quantum Designs Quantum Mathematics Class Based Course USA University of Notre Dame CONSOLIDATED Quantum Research Papers scirate - Plenty of quantum research papers available Peter Wittek - Famous researcher in quantum machine learning who published a book on the topic [Murphy Yuezhen Niu] (https://scholar.google.com/citations?user=0wJPxfkAAAAJ&hl=en) - A good researcher who has published some nice articles Recent Quantum Updates: forums, pages and newsletters Quantum-Tech - A beautiful newsletter page publishing amazing links facebook Quantum Machine Learning - Run by me; not that great :), but you can get some ideas LinkedIn Quantum Machine Learning - A nice page run by experts; you can get plenty of ideas FOSDEM 2019 Quantum Talks - A one-day track at FOSDEM 2019 with more than 10 research topics, tools and ideas FOSDEM 2020 Quantum Talks - Live talks at FOSDEM 2020 with plenty of new research topics, tools and ideas License Dedicated Opensources ![Dedicated Opensources]() Source code for plenty of algorithms in image processing, data mining, etc. in Matlab, Python, Java and VC++ scripts Good explanations of plenty of algorithms with flow charts, etc. Comparison matrix of plenty of algorithms Will Quantum Machine Learning Reveal the Secret Maths behind Astrology? Awesome Machine Learning and Deep Learning Mathematics is online Published basic presentation of the series Quantum Machine Learning Contribution If you think this page might be helpful, please help World Education Charity or kids who want to learn

PhoenixGo
github
LLM Vibe Score0.542
Human Vibe Score0.07574427540822147
TencentMar 27, 2025

PhoenixGo

!PhoenixGo PhoenixGo is a Go AI program which implements the AlphaGo Zero paper "Mastering the game of Go without human knowledge". It is also known as "BensonDarr" and "金毛测试" in FoxGo, "cronus" in CGOS, and the champion of the World AI Go Tournament 2018 held in Fuzhou, China. If you use PhoenixGo in your project, please consider mentioning it in your README. If you use PhoenixGo in your research, please consider citing the library as follows: Building and Running On Linux Requirements GCC with C++11 support Bazel (0.19.2 is known-good) (Optional) CUDA and cuDNN for GPU support (Optional) TensorRT (for accelerating computation on GPU, 3.0.4 is known-good) The following environments have also been tested by independent contributors: here. Other versions may work, but they have not been tested (especially for bazel). Download and Install Bazel Before starting, you need to download and install bazel, see here. For PhoenixGo, bazel 0.19.2 is known-good; read Requirements for details. If you have issues with installing or starting bazel, you may want to try this all-in-one command line for easier building instead, see the FAQ question. Building PhoenixGo with Bazel Clone the repository and configure the building: ./configure will start the bazel configuration, asking where CUDA and TensorRT are installed; specify them if needed. Then build with bazel: Dependencies such as TensorFlow will be downloaded automatically. The building process may take a long time. Recommendation: the bazel build uses a lot of RAM; if your build environment is short of RAM, you may need to restart your computer and exit other running programs to free as much RAM as possible. Running PhoenixGo Download and extract the trained network: The PhoenixGo engine supports GTP (Go Text Protocol), which means it can be used with a GUI with GTP capability, such as Sabaki. It can also run on command-line GTP server tools like gtp2ogs. But PhoenixGo does not support all GTP commands, see the FAQ question. There are 2 ways to run the PhoenixGo engine 1) start.sh: easy use Run the engine: scripts/start.sh start.sh will automatically detect the number of GPUs, run mcts_main with the proper config file, and write log files in the directory log. You could also use a customized config file (.conf) by running scripts/start.sh {config_path}. If you want to do that, see also #configure-guide. 2) mcts_main: full control If you want to fully control all the options of mcts_main (such as changing the log destination, or if start.sh is not compatible with your specific use), you can run bazel-bin/mcts/mcts_main directly instead. For typical usage, these command line options should be added: --gtp to enable GTP mode --config_path=replace/with/path/to/your/config/file to specify the path to your config file It is also necessary to edit your config file (.conf) and manually add the full path to ckpt, see the FAQ question. You can also change options in the config file, see #configure-guide. For other command line options, see also #command-line-options for details, or run ./mcts_main --help. A copy of the --help is provided for your convenience here For example: (Optional): Distribute mode PhoenixGo supports running with distributed workers, if there are GPUs on different machines. Build the distribute worker: Run dist_zero_model_server on each distributed worker, one per GPU.
Fill in the ip:port of the workers in the config file (etc/mcts_dist.conf is an example config for 32 workers), and run the distributed master: On macOS Note: TensorFlow stopped providing GPU support on macOS as of 1.2.0, so you are only able to run on CPU. Use Pre-built Binary Download and extract the CPU-only version (macOS) Follow the document included in the archive: using_phoenixgo_on_mac.pdf Building from Source Same as Linux. On Windows Recommendation: See the FAQ question to avoid syntax errors in the config file and command line options on Windows. Use Pre-built Binary GPU version: The GPU version is much faster, but works only with a compatible NVIDIA GPU. It supports this environment: CUDA 9.0 only cuDNN 7.1.x (x is any number) or lower for CUDA 9.0 no AVX, AVX2, AVX512 instructions supported in this release (so it is currently much slower than the Linux version) there is no TensorRT support on Windows Download and extract the GPU version (Windows) Then follow the document included in the archive: how to install phoenixgo.pdf Note: to support special features like CUDA 10.0 or AVX512, you can make your own build for Windows, see #79 CPU-only version: If your GPU is not compatible, or if you don't want to use a GPU, you can download the CPU-only version (Windows). Follow the document included in the archive: how to install phoenixgo.pdf Configure Guide Here are some important options in the config file: num_eval_threads: should be equal to the number of GPUs num_search_threads: should be a bit larger than num_eval_threads eval_batch_size timeout_ms_per_step: how much time is used for each move max_simulations_per_step: how many simulations (also called playouts) are done for each move gpu_list: which GPUs to use, separated by commas model_config -> train_dir: directory where the trained network is stored model_config -> checkpoint_path: which checkpoint to use; taken from train_dir/checkpoint if not set model_config -> enable_tensorrt: use TensorRT or not model_config -> tensorrt_model_path: which TensorRT model to use, if enable_tensorrt is set max_search_tree_size: the maximum number of tree nodes; change it depending on memory size max_children_per_node: the maximum number of children of each node; change it depending on memory size enable_background_search: ponder during the opponent's time early_stop: genmove may return before timeout_ms_per_step if the result would not change any more unstable_overtime: think timeout_ms_per_step * time_factor longer if the result is still unstable behind_overtime: think timeout_ms_per_step * time_factor longer if the winrate is less than act_threshold Options for distribute mode: enable_dist: enable distribute mode dist_svr_addrs: ip:port of the distributed workers; multiple lines, one ip:port on each line dist_config -> timeout_ms: RPC timeout Options for async distribute mode: Async mode is used when there is a huge number of distributed workers (more than 200), which would need too many eval threads and search threads in sync mode. etc/mcts_async_dist.conf is an example config for 256 workers. enable_async: enable async mode enable_dist: enable distribute mode dist_svr_addrs: multiple lines, a comma-separated list of ip:port on each line num_eval_threads: should be equal to the number of dist_svr_addrs lines eval_task_queue_size: tune depending on the number of distributed workers num_search_threads: tune depending on the number of distributed workers Read mcts/mcts_config.proto for more config options.
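To make the option list concrete, here is a hypothetical sketch of a minimal config in protobuf text format; the field grouping and every value shown are illustrative assumptions, so treat mcts/mcts_config.proto and the shipped etc/*.conf files as the authoritative schema:

```
# Hypothetical minimal config sketch - values are illustrative only
num_eval_threads: 1             # one per GPU
num_search_threads: 2           # a bit larger than num_eval_threads
timeout_ms_per_step: 10000      # ~10 seconds of thinking per move
max_simulations_per_step: 1600  # playouts per move
gpu_list: "0"
model_config {
  train_dir: "ckpt"             # directory holding the trained network
  enable_tensorrt: 0
}
```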
Command Line Options mcts_main accepts options from the command line: --config_path: path of the config file --gtp: run as a GTP engine; if disabled, generate the next move only --init_moves: initial moves on the go board; for example usage, see the FAQ question --gpu_list: override gpu_list in the config file --listen_port: works with --gtp; run the GTP engine on a TCP port --allow_ip: works with --listen_port; list of client IPs allowed to connect --fork_per_request: works with --listen_port; whether to fork for each request Glog options are also supported: --logtostderr: log messages to stderr --log_dir: log to files in this directory --minloglevel: log level, 0 - INFO, 1 - WARNING, 2 - ERROR --v: verbose logging; --v=1 turns on some debug logs, --v=0 turns them off Run mcts_main --help for more command line options. A copy of the --help is provided for your convenience here Analysis For analysis purposes, an easy way to display the PV (variations for the main move path) is --logtostderr --v=1, which will display the winrate of the main move path and the continuation of the moves analyzed; see the FAQ question for details. It is also possible to analyse .sgf files using analysis tools such as: GoReviewPartner: an automated tool to analyse and/or review one or many .sgf files (saved as .rsgf files). It supports PhoenixGo and other bots. See the FAQ question for details FAQ You will find a lot of useful and important information there, as well as the most common problems and errors and how to fix them. Please take time to read the FAQ.
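Putting the documented flags together, a typical GTP-mode invocation with analysis output might look like the sketch below; the config path is a placeholder, and only flags named above are used:

```bash
# Hedged example: GTP engine with verbose PV/winrate output on stderr
bazel-bin/mcts/mcts_main --gtp --config_path=etc/mcts_1gpu.conf --logtostderr --v=1
```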

How I run a $13,900/MONTH faceless Instagram theme page [FULL COURSE]
youtube
LLM Vibe Score0.381
Human Vibe Score0.44
howtoaiMar 27, 2025

How I run a $13,900/MONTH faceless Instagram theme page [FULL COURSE]

How to create viral motivational videos for Instagram theme pages. Step-By-Step Document 👉 https://go.howtoai.pro/motivational Pre-monetized YouTube accounts with 1,000 subscribers & 4,000 watch hours ✅ https://tikaccounts.com/products/youtube ⭐️ Apply to work with me 1-on-1: https://apply.facelesslaunchpad.com/ 👉 100% FREE community: https://whop.com/howtoai/ 👉 More YouTube Automation videos: https://www.youtube.com/playlist?list=PLwcK9-wSIWXHbhznPFFwgXlB1vr-HCkJR 👉 Newsletter about the latest AI news: https://www.dailyaiedge.com/subscribe This video will show you everything related to creating YouTube Shorts automation videos in the animal niche. If you want to start a faceless Shorts channel, watch this video. 🚨 ALL TOOL LINKS ARE IN THE STEP-BY-STEP DOCUMENT AT THE TOP OF THE DESCRIPTION 🚨 🔗 LINKS 🔗 📢 100% FREE Discord community: https://whop.com/howtoai/ 🚀 Viral TikTok Background Footage: https://howtoai.pro/products/viral-tiktok-gameplay 🔥 Trending Sound Effects Pack: https://howtoai.pro/products/trending-tiktok-sound-effects ✉️ Email newsletter on how to leverage AI (100% free): https://www.dailyaiedge.com/subscribe Welcome to howtoai, your ultimate destination for learning how to use AI tools like ChatGPT and Midjourney. Our channel provides high-quality tutorials and guides covering topics such as natural language processing, machine learning, and computer vision. Our goal is to make complex AI concepts easy to understand and accessible to all, whether you're a beginner or an experienced user. For extra clarification, this video will show you how to start a faceless Instagram theme page to make money online. I will teach you how to use certain AI tools to make money online, and most importantly, get good results running a faceless Instagram account. So if you want to start an Instagram theme page business, watch this video. Sponsorships or other business inquiries? Email us at: partnerships@howtoai.pro #howtomakemoneyonline #instagramreels

machine-learning-blackjack-solution
github
LLM Vibe Score0.42
Human Vibe Score0.022610872675250356
GregSommervilleMar 27, 2025

machine-learning-blackjack-solution

machine-learning-blackjack-solution Introduction A genetic algorithm is a type of artificial intelligence programming that uses ideas from evolution to solve complex problems. It works by creating a population of (initially random) candidate solutions, then repeatedly selecting pairs of candidates and combining their solutions using a process similar to genetic crossover. Sometimes candidate solutions even go through mutation, just to introduce new possibilities into the population. After a large number of generations, the best solution found up to that point is often the optimal, or near-optimal, solution. Genetic algorithms are particularly well-suited to combinatorial problems, where there are huge numbers of potential solutions. The evolutionary process they go through is, in essence, a search through a huge solution space - a space so large that you simply could never use a brute-force approach. This project is a demonstration of using a genetic algorithm to find an optimal strategy for playing the casino game Blackjack. Please see this article for a story about how this program was used, and what the results were. The article describes some of the available settings, and shows how different values for those settings affect the final result. The source code is for a Windows application written in C# that allows you to play with different settings like population size, selection style and mutation rate. Each generation's best solution is displayed, so you can watch the program literally evolve a solution. !blackjack strategy tester screenshot The property grid located at the upper left of the screen is where you adjust settings. There's an informational area below that, and the right side of the screen is the display area for the three tables that represent a strategy for playing Blackjack. The tall table on the left is for hard hands, the table in the upper right is for soft hands, and the table in the lower right is for pairs. We'll talk more about how to interpret this strategy in a bit. The columns along the tops of the three tables are for the dealer upcard. When you play Blackjack, the dealer has one of his two cards initially turned face up, and the rank of that card has a big impact on recommended strategy. Notice that the upcard ranks don't include Jack, Queen or King. That's because those cards all count 10, so we group them together with the Ten and simplify the tables. To use the tables, first determine if you have a pair, a soft hand, or a hard hand. Then look in the appropriate table, in the correct dealer-upcard column. The cell in the table will be "H" when the correct strategy is to hit, "S" when the correct strategy is to stand, "D" for double-down, and (in the pairs table only) "P" for split. A Word About This "Optimal" Strategy Before we go any further, it needs to be stated that this problem of finding an optimal Blackjack strategy has already been solved. Back in the 1960s, a mathematician named Edward O. Thorp authored a book called Beat the Dealer, which included charts showing the optimal "Basic" strategy. That strategy looks like this: !optimal blackjack strategy So we're solving a problem that has already been solved, but that's actually good. That means we can compare our results to the known best solution. For example, if our result strategy tells us to do anything but stand when holding a pair of Tens, Jacks, Queens or Kings, we know there's a problem.
There's one other thing to get out of the way before we go any further, and that's the idea of nondeterministic code. That means that if we run the same code twice in a row, we're likely to get two different results. That's something that happens with genetic algorithms due to their inherent randomness. There's no guarantee you'll find the absolute optimal solution, but it is assured that you will find an optimal or near-optimal solution. It's something that isn't typical when writing code, so it takes some adjustment for most programmers. Genetic Algorithms Now let's talk about the details of a genetic algorithm. Fitness Scores First of all, we need a way to evaluate candidates so we can compare them to each other. That means a numeric fitness score, which in this case is quite simple: you simulate playing a certain number of hands using the strategy, and then count the number of chips you have at the end. The big question is, how many hands should we test with? The challenge of trying to test a strategy is that due to the innate randomness of Blackjack, you could use the same strategy ten times and get ten completely different results. Obviously, the more hands you play, the more the randomness gets smoothed out, and the quality of the underlying strategy starts to emerge. If you doubt this, just think about flipping a coin. If you only flip it five times, there's certainly a possibility that it'll come up heads all five times (in fact, that happens just over 3% of the time). However, if you flip it 500 times, there's no way it's going to end up all heads - the odds of it happening are 0.5^500, which works out to be roughly once every 3 x 10^150 times you try it. After some testing and analysis, it was determined that a minimum of 100,000 hands per test is needed for a reasonable level of accuracy. There's still variance even at that number, but in order to cut the variance in half, you'd need to bump the number of hands to 500,000. One reason this accuracy is important is that in the later generations, the differences between candidates are very small. Evolution has caused the main parts of the strategy to converge on a particular approach, and towards the end all it's doing is refining the minor details. In those cases it's important to accurately determine the difference between two similar candidates. Representation Representation is simply the idea that we need to use a data structure for a candidate solution that can be combined via crossover, and possibly mutated. In this case, that's also quite simple, because the way that human beings represent a Blackjack strategy is to use three tables, as we've seen. Representing those in code with three two-dimensional arrays is the obvious approach. Each cell in those three tables will have "Hit", "Stand", "Double-Down", or (only for pairs) "Split". By the way, since there are 160 cells in the hard hands table, 80 cells in the soft hands table, and 100 cells in the pairs table, we can calculate exactly how many possible distinct strategies there are for Blackjack: 4^100 x 3^80 x 3^160 = 5 x 10^174 possible Blackjack strategies That's a big number, which is obviously impossible to search using brute force. Genetic algorithms (GAs) are extremely helpful when trying to find an optimal solution from a very large set of possible solutions like this. Blackjack Rules and Strategies The rules of Blackjack are fairly simple. The dealer and the player both are dealt two cards.
Blackjack Rules and Strategies

The rules of Blackjack are fairly simple. The dealer and the player are both dealt two cards. The player sees both of their cards (they are usually dealt face up), and one of the dealer's cards is dealt face up. Each card has a value: for cards between 2 and 10, the value is the same as the card's rank (so an Eight of Spades counts as 8, for example). All face cards count as 10, and an Ace can be either 1 or 11 (it counts as 11 only when that does not result in a hand that exceeds 21). The suit of a card does not matter. After the cards are dealt, if the player has Blackjack (a total of 21) and the dealer does not, the player is immediately paid 1.5 times their original bet, and a new hand is dealt. If the player has 21 and the dealer does too, then it's a tie: the player gets their original bet back, and a new hand is dealt. If the player wasn't dealt a Blackjack, then play continues with the player deciding whether to Stand (take no more cards), Hit (receive an additional card), Double-down (place an additional bet and receive one, and only one, more card), or, when holding a pair, Split the hand, which means placing an additional bet and receiving two new cards, so that the player is now playing two (or, after multiple splits, more than two) hands simultaneously. If the player hits or doubles down and the resulting hand exceeds 21, they lose and play continues with the next hand. If not, the dealer draws until their hand totals at least 17. If the dealer exceeds 21 at this point, the player receives a payment equal to twice their original bet. If the dealer doesn't exceed 21, then the hands are compared and the player with the highest total that doesn't exceed 21 wins.

Because of these rules, certain effective strategies emerge. One common strategy is that if you hold a hard hand with a value of 20, 19 or 18, you should Stand, since you avoid busting by going over 21, and you have a nice hand total that might win in a showdown with the dealer. Another common strategy is to split a pair of Aces, since Aces are so powerful (because they count as 11 or 1, you can often Hit a hand with a soft Ace with no risk of busting). Likewise, splitting a pair of 8s is a good idea, because with a hard total of 16 it's likely you will bust if you take a Hit (since so many cards count as 10). As a human being, all it takes is a little knowledge of the rules to construct a strategy. The GA program doesn't have that advantage, and operates completely without any pre-programmed knowledge of Blackjack. It simply uses relative fitness scores and the mechanism of evolution to find the solution.

GA Settings

There are many variables or settings for a GA. You can adjust population size, how parent candidates are selected, how the resulting children may be mutated, and several other items. The following sections describe some of these settings.

Setting: Selection Style

Once we've solved representation and have a fitness function, the next step is to select two candidates for crossover during the process of building a new generation. There are three common styles of selection, and this program supports all of them. First, you can choose Roulette Wheel selection. It's named for a Roulette wheel because you can imagine each candidate's fitness score as a wedge in a pie chart, with a size proportionate to its relative fitness compared to the other candidates. (Of course, this assumes that all fitness scores are positive, which we will talk about shortly.)
The main benefit of Roulette Wheel selection is that selection is fitness-proportionate. Imagine you had only three candidates, with fitness scores of 1, 3, and 8. The relative selection probabilities for those candidates would be 1/12, 3/12, and 8/12. The downside of Roulette Wheel selection is that it tends to be somewhat slow in terms of processing. The selection process is done by iterating through the candidates until a particular condition is matched; in other words, O(N) performance. Another potential problem with Roulette Wheel selection is that there may be situations where fitness scores vary so widely that only certain candidates have any reasonable chance of being selected. This happens frequently in early generations, when the majority of candidates are mostly random. Although this might sound like a positive (since you ultimately want to select candidates with high fitness scores), it also results in a loss of genetic diversity. In other words, even though a particular candidate may have a low fitness score in an early generation, it may contain elements that are needed to find the ultimate solution in later generations.

Ranked Selection is the solution to this problem. Instead of using raw fitness scores during the selection process, the candidates are sorted by fitness, with the worst candidate receiving a score of 0, the second worst receiving 1, and so forth, all the way to the best candidate, which has a score equal to the population size minus 1. Ranked Selection is quite slow, since it combines the O(N) performance of Roulette Wheel with the additional requirement that the candidates be sorted before selection. However, there may be circumstances where it performs better than other selection approaches.

Finally, the fastest selection method of all is called Tournament Selection. This method simply selects N random candidates from the current generation and then uses the one with the best fitness score. A tournament size of 2 means two random candidates are selected, and the better of the two is used. If you have a large tournament size (like 10), then 10 different candidates will be selected, with the best of those being the ultimate selection. The tournament size tilts the balance between randomness and quality. Tournament selection works well in most cases, but it does require some experimentation to find the best tourney size (a minimal sketch follows the Elitism section below).

Setting: Elitism

Elitism is a technique that helps ensure that the best candidates are always maintained. Since all selection methods are random to some degree, it is possible to completely lose the best candidates from one generation to the next. With Elitism, we automatically advance a certain percentage of the best candidates to the next generation. Elitism does have a negative impact on performance, since all of the candidates must be sorted by fitness score. Typically Elitism is done before filling the rest of a new generation with new candidates created by crossover.
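Here is a minimal sketch of Tournament Selection, the simplest of the three styles described above. The names are illustrative stand-ins, and higher fitness is assumed to be better:

```csharp
using System;
using System.Collections.Generic;

class TournamentSketch
{
    class Candidate { public double Fitness; }

    static Candidate TournamentSelect(List<Candidate> population, int tourneySize, Random rng)
    {
        Candidate best = null;
        for (int i = 0; i < tourneySize; i++)
        {
            // Draw a random contender; keep the fittest seen so far.
            var contender = population[rng.Next(population.Count)];
            if (best == null || contender.Fitness > best.Fitness)
                best = contender;
        }
        return best;
    }
}
```

Note how the tournament size works as a dial: with tourneySize = 1 this is pure random selection, and as tourneySize grows it increasingly favors the fittest candidates.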
Crossover Details

Once two candidate solutions have been selected, the next step in building a new generation is to combine them into a single new candidate, hopefully using the best of both parent strategies. There are a number of ways to do crossover, but the method used in this program is quite straightforward: the two fitness scores are compared, and crossover happens in a relatively proportionate way. If one candidate has a fitness of 10 and the other has a fitness of 5, then the one with fitness 10 contributes twice as much to the child as the parent with fitness 5.

Since the fitness scores in this program are based on how much the strategy would win over thousands of hands, almost all fitness scores will be negative. (This is obviously because the rules are set up so the house always wins.) That makes it difficult to calculate relative fitness (how do you compare a positive number with a negative one and find relative proportions?), and it also causes problems for selection methods like Roulette Wheel or Ranked. To solve this, we find the lowest fitness score of the generation and subtract that value from each candidate's score. This results in an adjusted fitness score of 0 for the very worst candidate, so it never gets selected.

Mutation

As has been mentioned a few times, maintaining genetic diversity in our population of candidate solutions is a good thing. It helps the GA ultimately find the very best solution, by occasionally altering a candidate in a positive direction. There are two settings for mutation. MutationRate controls what percentage of new candidates have mutation applied to them. MutationImpact controls what percentage of a mutated candidate's strategy is randomized.

Population Size

Population size has a significant impact on performance. The smaller the population size, the faster the GA will execute. On the other hand, if the size is too low, the population may not have enough genetic diversity to find the ultimate solution. During testing, 700 to 1000 looked like a good balance between speed and correctness.

Performance Notes

This program consumes a lot of processing power. Running tests of hundreds of thousands of hands of Blackjack for hundreds or thousands of candidates takes a lot of time. It's imperative to write the code so that it works as efficiently as possible. If your CPU isn't consistently at or above 95% usage, there's still room for improvement. Multi-threading is a natural fit for genetic algorithms, because we often want to perform the same action on each candidate. The best example of this is when we calculate fitness scores, which is often a time-consuming operation: in our case, we're dealing out 100,000 hands, and each hand has to be played until the end. Single-threading that code would take a very long time; multi-threading is really the way to go. Luckily, there's a ridiculously simple way to efficiently use all of your processors for an operation like this. This code loops over all of the candidates in the currentGeneration list, calls the fitness function, and sets the fitness property for each:
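A minimal stand-in for that loop, using the .NET Task Parallel Library; the type and method names are illustrative, not the project's actual identifiers:

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;

class FitnessSketch
{
    class Candidate { public double Fitness; }

    // Stand-in for "deal 100,000 hands with this strategy and count the chips".
    static double EvaluateFitness(Candidate c) => 0.0;

    static void ScoreGeneration(List<Candidate> currentGeneration)
    {
        // Parallel.ForEach spreads the work across all available cores and
        // returns only when every candidate has been scored.
        Parallel.ForEach(currentGeneration, candidate =>
        {
            candidate.Fitness = EvaluateFitness(candidate);
        });
    }
}
```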
Regardless of the number of items in the list or the number of processors on your machine, the code will run in an efficiently multi-threaded manner, and continue only when all of the threads are complete. One side effect of making this code multi-threaded is that all of the code relating to evaluating a candidate must be thread-safe, including any Singleton objects. When making code thread-safe, pay attention that you don't accidentally introduce code that slows your program down, because the causes can be quite subtle. Random numbers are central to how genetic algorithms work, so it's critical that they can be used correctly from a multithreaded environment. That means that each random number generator must be separate from the others, and each must produce a distinct series of random numbers. Random number generators use seed values, which are usually time-based, like the number of milliseconds the computer has been turned on. Starting with that seed, subsequent calls will return a series of numbers that look random but really aren't: if you start with the same seed, you get the same sequence. And that's a problem, because if you create multiple random number generator objects in a loop using the default time-based seed, several of them will have the same initial seed value, which will result in the same sequence of "random" numbers. That's a bug, because it can greatly reduce the true randomness of the program, and randomness is vital to a genetic algorithm.

There are a couple of ways to solve this problem. First, you can make the random object truly a singleton and restrict access to it with a lock statement. That serializes all access to random numbers, which reduces performance. Another approach is to make the variable static per thread. By declaring the variable as static and also marking it with the [ThreadStatic] attribute, the .NET runtime allocates one static variable per thread. That eliminates the locking/serialization, but it also has performance issues. The approach used in this application is to use a non-default seed value. In this case we call Guid.NewGuid().GetHashCode(), which generates a new, unique GUID, then gets an integer hash code value that should be unique, depending on how GetHashCode is implemented.
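As a hedged sketch, the per-thread and GUID-seed ideas just described can be combined like this; this is an illustration of the technique, not the project's actual code:

```csharp
using System;

static class ThreadLocalRandom
{
    [ThreadStatic]
    private static Random perThread;

    public static Random Instance
    {
        get
        {
            // [ThreadStatic] fields can't be reliably initialized inline, so
            // seed lazily; Guid.NewGuid().GetHashCode() gives each thread a
            // distinct seed even when threads start at nearly the same instant.
            if (perThread == null)
                perThread = new Random(Guid.NewGuid().GetHashCode());
            return perThread;
        }
    }
}
```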
While multithreading really helps performance, there are other things we can do to improve it. For example, when dealing with large populations, the hundreds or thousands of objects generated each generation can quickly turn into a huge garbage-collection problem. In the end, the easiest way to solve that is to look through the code and find objects being allocated inside a loop. It's better to declare the variable outside of the loop and then clear it inside the loop, rather than reallocate it. In a program like this one, where you could be looping hundreds of thousands of times, this can result in a very significant performance boost. For example, in an early version of this code, a Deck object was created for each hand. Since there are hundreds of candidate solutions running hundreds of thousands of trial hands, this was a huge inefficiency. The code was changed to allocate one deck per test sequence; the deck is shuffled as needed, so it never has to be reallocated. Beyond the cards in the deck, another object type that was repeatedly created and destroyed was the candidate strategies. To mitigate this problem, a StrategyPool class was created that handles allocation and deallocation, so strategy objects are reused rather than dynamically created when needed. The pool class has to be thread-safe, so it serializes access to its methods via a lock statement, but overall the pool approach produced a good performance increase.

Finally, a subtle form of object allocation is conversion. In an early version of the code, a utility card function used Convert.ToInt32(rankEnum). Obviously, the easiest way to convert from an enum to an int is simply to cast it, like (int)rankEnum. But it's hard to know exactly what the difference is between that approach and alternatives like int.Parse(), int.TryParse(), or Convert.ToInt32(), since they can all be used in similar situations and look roughly equivalent. Perhaps the compiler was boxing the enum value before passing it to Convert.ToInt32(); in any case, the profiler identified this function as having large amounts of thread-contention waiting, and the problem got much, much worse as the generations passed. Rewriting the conversion to use a simple cast increased program performance threefold (3x).

Contributing

Please read CONTRIBUTING.md for details on our code of conduct and the process for submitting pull requests to us.

Author

Greg Sommerville - Initial work

License

This project is licensed under the Apache 2.0 License - see the LICENSE.md file for details.

AI-PhD-S24
github
LLM Vibe Score0.472
Human Vibe Score0.0922477795435268
rphilipzhangMar 25, 2025

AI-PhD-S24

Artificial Intelligence for Business Research (Spring 2024)

Scribed Lecture Notes
Class Recordings (You need to apply for access.)

Teaching Team

Instructor: Renyu (Philip) Zhang, Associate Professor, Department of Decisions, Operations and Technology, CUHK Business School, philipzhang@cuhk.edu.hk, @911 Cheng Yu Tung Building.
Teaching Assistant: Leo Cao, Full-time TA, Department of Decisions, Operations and Technology, CUHK Business School, yinglyucao@cuhk.edu.hk. Please note that Leo will help with any issues related to the logistics, but not the content, of this course.
Tutorial Instructor: Qiansiqi Hu, MSBA Student, Department of Decisions, Operations and Technology, CUHK Business School, 1155208353@link.cuhk.edu.hk. BS in ECE, Shanghai Jiaotong University Michigan Institute.

Basic Information

Website: https://github.com/rphilipzhang/AI-PhD-S24
Time: Tuesday, 12:30pm-3:15pm, from Jan 9, 2024 to Apr 16, 2024, except for Feb 13 (Chinese New Year) and Mar 5 (Final Project Discussion)
Location: Cheng Yu Tung Building (CYT) LT5

About

Welcome to the mono-repo of the PhD course AI for Business Research (DSME 6635) at CUHK Business School in Spring 2024. You may download the Syllabus of this course first. The purpose of this course is to learn the following:

Have a basic understanding of the fundamental concepts/methods in machine learning (ML) and artificial intelligence (AI) that are used (or potentially useful) in business research.
Understand how business researchers have utilized ML/AI and what managerial questions have been addressed by ML/AI in the recent decade.
Nurture a taste of what the state-of-the-art AI/ML technologies can do in the ML/AI community and, potentially, in your own research field.

We will meet each Tuesday at 12:30pm in Cheng Yu Tung Building (CYT) LT5 (please pay attention to this room change). Please ask for my approval if you need to join us via the following Zoom links: Zoom link, Meeting ID 996 4239 3764, Passcode 386119.

Most of the code in this course will be distributed through the Google CoLab cloud computing environment to avoid the incompatibility and version control issues on your local individual computer. On the other hand, you can always download the Jupyter Notebook from CoLab and run it on your own computer.

The CoLab files of this course can be found at this folder. The Google Sheet to sign up for groups and group tasks can be found here. The Overleaf template for scribing the lecture notes of this course can be found here.

If you have any feedback on this course, please directly contact Philip at philipzhang@cuhk.edu.hk and we will try our best to address it.

Brief Schedule

Subject to modifications. All classes start at 12:30pm and end at 3:15pm.
|Session|Date|Topic|Key Words|
|:-:|:-:|:-:|:-:|
|1|1.09|AI/ML in a Nutshell|Course Intro, ML Models, Model Evaluations|
|2|1.16|Intro to DL|DL Intro, Neural Nets, Computational Issues in DL|
|3|1.23|Prediction and Traditional NLP|Prediction in Biz Research, Pre-processing|
|4|1.30|NLP (II): Traditional NLP|$N$-gram, NLP Performance Evaluations, Naïve Bayes|
|5|2.06|NLP (III): Word2Vec|CBOW, Skip Gram|
|6|2.20|NLP (IV): RNN|GloVe, Language Model Evaluation, RNN|
|7|2.27|NLP (V): Seq2Seq|LSTM, Seq2Seq, Attention Mechanism|
|7.5|3.05|NLP (V.V): Transformer|The Bitter Lesson, Attention is All You Need|
|8|3.12|NLP (VI): Pre-training|Computational Tricks in DL, BERT, GPT|
|9|3.19|NLP (VII): LLM|Emergent Abilities, Chain-of-Thought, In-context Learning, GenAI in Business Research|
|10|3.26|CV (I): Image Classification|CNN, AlexNet, ResNet, ViT|
|11|4.02|CV (II): Image Segmentation and Video Analysis|R-CNN, YOLO, 3D-CNN|
|12|4.09|Unsupervised Learning (I): Clustering & Topic Modeling|GMM, EM Algorithm, LDA|
|13|4.16|Unsupervised Learning (II): Diffusion Models|VAE, DDPM, LDM, DiT|

Important Dates

All problem sets are due at 12:30pm right before class.

|Date|Time|Event|Note|
|:-:|:-:|:-:|:-:|
|1.10|11:59pm|Group Sign-Ups|Each group has at most two students.|
|1.12|7:00pm-9:00pm|Python Tutorial|Given by Qiansiqi Hu, Python Tutorial CoLab|
|1.19|7:00pm-9:00pm|PyTorch Tutorial|Given by Qiansiqi Hu, PyTorch Tutorial CoLab|
|3.05|9:00am-6:00pm|Final Project Discussion|Please schedule a meeting with Philip.|
|3.12|12:30pm|Final Project Proposal|1-page maximum|
|4.30|11:59pm|Scribed Lecture Notes|Overleaf link|
|5.12|11:59pm|Project Paper, Slides, and Code|Paper page limit: 10|

Useful Resources

Find more on the Syllabus.

Books: ESL, Deep Learning, Dive into Deep Learning, ML Fairness, Applied Causal Inference Powered by ML and AI
Courses: ML Intro by Andrew Ng, DL Intro by Andrew Ng, NLP (CS224N) by Chris Manning, CV (CS231N) by Fei-Fei Li, Deep Unsupervised Learning by Pieter Abbeel, DLR by Sergey Levine, DL Theory by Matus Telgarsky, LLM by Danqi Chen, Generative AI by Andrew Ng, Machine Learning and Big Data by Melissa Dell and Matthew Harding, Digital Economics and the Economics of AI by Martin Beraja, Chiara Farronato, Avi Goldfarb, and Catherine Tucker

Detailed Schedule

The following schedule is tentative and subject to changes.

Session 1. Artificial Intelligence and Machine Learning in a Nutshell (Jan/09/2024)

Keywords: Course Introduction, Machine Learning Basics, Bias-Variance Trade-off, Cross Validation, $k$-Nearest Neighbors, Decision Tree, Ensemble Methods
Slides: Course Introduction, Machine Learning Basics
CoLab Notebook Demos: k-Nearest Neighbors, Decision Tree
Homework: Problem Set 1: Bias-Variance Trade-Off
Online Python Tutorial: Python Tutorial CoLab, 7:00pm-9:00pm, Jan/12/2024 (Friday), given by Qiansiqi Hu, 1155208353@link.cuhk.edu.hk. Zoom Link, Meeting ID: 923 4642 4433, Passcode: 178146
References:
The Elements of Statistical Learning (2nd Edition), 2009, by Trevor Hastie, Robert Tibshirani, Jerome Friedman, https://hastie.su.domains/ElemStatLearn/.
Probabilistic Machine Learning: An Introduction, 2022, by Kevin Murphy, https://probml.github.io/pml-book/book1.html.
Mullainathan, Sendhil, and Jann Spiess. 2017. Machine learning: an applied econometric approach. Journal of Economic Perspectives 31(2): 87-106.
Athey, Susan, and Guido W. Imbens. 2019. Machine learning methods that economists should know about. Annual Review of Economics 11: 685-725.
Hofman, Jake M., et al. 2021. Integrating explanation and prediction in computational social science. Nature 595.7866: 181-188.
Bastani, Hamsa, Dennis Zhang, and Heng Zhang. 2022. Applied machine learning in operations management. Innovative Technology at the Interface of Finance and Operations. Springer: 189-222.
Kelly, Brian, and Dacheng Xiu. 2023. Financial machine learning, SSRN, https://ssrn.com/abstract=4501707.
The Bitter Lesson, by Rich Sutton, which develops so far the most critical insight of AI: "The biggest lesson that can be read from 70 years of AI research is that general methods that leverage computation are ultimately the most effective, and by a large margin."

Session 2. Introduction to Deep Learning (Jan/16/2024)

Keywords: Random Forests, eXtreme Gradient Boosting Trees, Deep Learning Basics, Neural Nets Models, Computational Issues of Deep Learning
Slides: Machine Learning Basics, Deep Learning Basics
CoLab Notebook Demos: Random Forest, Extreme Gradient Boosting Tree, Gradient Descent, Chain Rule
Presentation: By Xinyu Li and Qingyu Xu. Gu, Shihao, Brian Kelly, and Dacheng Xiu. 2020. Empirical asset pricing via machine learning. Review of Financial Studies 33: 2223-2273. Link to the paper.
Homework: Problem Set 2: Implementing Neural Nets
Online PyTorch Tutorial: PyTorch Tutorial CoLab, 7:00pm-9:00pm, Jan/19/2024 (Friday), given by Qiansiqi Hu, 1155208353@link.cuhk.edu.hk. Zoom Link, Meeting ID: 923 4642 4433, Passcode: 178146
References:
Deep Learning, 2016, by Ian Goodfellow, Yoshua Bengio and Aaron Courville, https://www.deeplearningbook.org/.
Dive into Deep Learning (2nd Edition), 2023, by Aston Zhang, Zack Lipton, Mu Li, and Alex J. Smola, https://d2l.ai/.
Probabilistic Machine Learning: Advanced Topics, 2023, by Kevin Murphy, https://probml.github.io/pml-book/book2.html.
Deep Learning with PyTorch, 2020, by Eli Stevens, Luca Antiga, and Thomas Viehmann.
Gu, Shihao, Brian Kelly, and Dacheng Xiu. 2020. Empirical asset pricing via machine learning. Review of Financial Studies 33: 2223-2273.

Session 3. DL Basics, Predictions in Business Research, and Traditional NLP (Jan/23/2024)

Keywords: Optimization and Computational Issues of Deep Learning, Prediction Problems in Business Research, Pre-processing and Word Representations in Traditional Natural Language Processing
Slides: Deep Learning Basics, Prediction Problems in Business Research, NLP (I): Pre-processing and Word Representations
CoLab Notebook Demos: He Initialization, Dropout, Micrograd, NLP Pre-processing
Presentation: By Letian Kong and Liheng Tan. Mullainathan, Sendhil, and Jann Spiess. 2017. Machine learning: an applied econometric approach. Journal of Economic Perspectives 31(2): 87-106. Link to the paper.
Homework: Problem Set 2: Implementing Neural Nets, due at 12:30pm, Jan/30/2024 (Tuesday).
References:
Kleinberg, Jon, Jens Ludwig, Sendhil Mullainathan, and Ziad Obermeyer. 2015. Prediction policy problems. American Economic Review 105(5): 491-495.
Mullainathan, Sendhil, and Jann Spiess. 2017. Machine learning: an applied econometric approach. Journal of Economic Perspectives 31(2): 87-106.
Kleinberg, Jon, Himabindu Lakkaraju, Jure Leskovec, Jens Ludwig, and Sendhil Mullainathan. 2018. Human decisions and machine predictions. Quarterly Journal of Economics 133(1): 237-293.
Bajari, Patrick, Denis Nekipelov, Stephen P. Ryan, and Miaoyu Yang. 2015. Machine learning methods for demand estimation. American Economic Review, 105(5): 481-485.
Farias, Vivek F., and Andrew A. Li. 2019.
Learning preferences with side information. Management Science 65(7): 3131-3149.
Cui, Ruomeng, Santiago Gallino, Antonio Moreno, and Dennis J. Zhang. 2018. The operational value of social media information. Production and Operations Management, 27(10): 1749-1769.
Gentzkow, Matthew, Bryan Kelly, and Matt Taddy. 2019. Text as data. Journal of Economic Literature, 57(3): 535-574.
Chapter 2, Introduction to Information Retrieval, 2008, Cambridge University Press, by Christopher D. Manning, Prabhakar Raghavan and Hinrich Schutze, https://nlp.stanford.edu/IR-book/information-retrieval-book.html.
Chapter 2, Speech and Language Processing (3rd ed. draft), 2023, by Dan Jurafsky and James H. Martin, https://web.stanford.edu/~jurafsky/slp3/.
Parameter Initialization and Batch Normalization (in Chinese)
GPU Comparisons
GitHub Repo for Micrograd, by Andrej Karpathy.
Hand Written Notes

Session 4. Traditional NLP (Jan/30/2024)

Keywords: Pre-processing and Word Representations in NLP, N-Gram, Naïve Bayes, Language Model Evaluation, Traditional NLP Applied to Business/Econ Research
Slides: NLP (I): Pre-processing and Word Representations, NLP (II): N-Gram, Naïve Bayes, and Language Model Evaluation
CoLab Notebook Demos: NLP Pre-processing, N-Gram, Naïve Bayes
Presentation: By Zhi Li and Boya Peng. Hansen, Stephen, Michael McMahon, and Andrea Prat. 2018. Transparency and deliberation within the FOMC: A computational linguistics approach. Quarterly Journal of Economics, 133(2): 801-870. Link to the paper.
Homework: Problem Set 3: Implementing Traditional NLP Techniques, due at 12:30pm, Feb/6/2024 (Tuesday).
References:
Gentzkow, Matthew, Bryan Kelly, and Matt Taddy. 2019. Text as data. Journal of Economic Literature, 57(3): 535-574.
Hansen, Stephen, Michael McMahon, and Andrea Prat. 2018. Transparency and deliberation within the FOMC: A computational linguistics approach. Quarterly Journal of Economics, 133(2): 801-870.
Chapters 2, 12, & 13, Introduction to Information Retrieval, 2008, Cambridge University Press, by Christopher D. Manning, Prabhakar Raghavan and Hinrich Schutze, https://nlp.stanford.edu/IR-book/information-retrieval-book.html.
Chapters 2, 3 & 4, Speech and Language Processing (3rd ed. draft), 2023, by Dan Jurafsky and James H. Martin, https://web.stanford.edu/~jurafsky/slp3/.
Natural Language Tool Kit (NLTK) Documentation
Hand Written Notes

Session 5. Deep-Learning-Based NLP: Word2Vec (Feb/06/2024)

Keywords: Traditional NLP Applied to Business/Econ Research, Word2Vec: Continuous Bag of Words and Skip-Gram
Slides: NLP (II): N-Gram, Naïve Bayes, and Language Model Evaluation, NLP (III): Word2Vec
CoLab Notebook Demos: Word2Vec: CBOW, Word2Vec: Skip-Gram
Presentation: By Xinyu Xu and Shu Zhang. Timoshenko, Artem, and John R. Hauser. 2019. Identifying customer needs from user-generated content. Marketing Science, 38(1): 1-20. Link to the paper.
Homework: No homework this week. Probably you should think about your final project when enjoying your Lunar New Year Holiday.
References:
Gentzkow, Matthew, Bryan Kelly, and Matt Taddy. 2019. Text as data. Journal of Economic Literature, 57(3): 535-574.
Tetlock, Paul. 2007. Giving content to investor sentiment: The role of media in the stock market. Journal of Finance, 62(3): 1139-1168.
Baker, Scott, Nicholas Bloom, and Steven Davis, 2016. Measuring economic policy uncertainty. Quarterly Journal of Economics, 131(4): 1593-1636.
Gentzkow, Matthew, and Jesse Shapiro. 2010.
What drives media slant? Evidence from US daily newspapers. Econometrica, 78(1): 35-71.
Timoshenko, Artem, and John R. Hauser. 2019. Identifying customer needs from user-generated content. Marketing Science, 38(1): 1-20.
Mikolov, Tomas, Kai Chen, Greg Corrado, and Jeff Dean. 2013. Efficient estimation of word representations in vector space. ArXiv Preprint, arXiv:1301.3781.
Mikolov, Tomas, Ilya Sutskever, Kai Chen, Greg Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. Advances in Neural Information Processing Systems (NeurIPS) 26.
Parts I - II, Lecture Notes and Slides for CS224n: Natural Language Processing with Deep Learning, by Christopher D. Manning, Diyi Yang, and Tatsunori Hashimoto, https://web.stanford.edu/class/cs224n/.
Word Embeddings Trained on Google News Corpus
Hand Written Notes

Session 6. Deep-Learning-Based NLP: RNN and Seq2Seq (Feb/20/2024)

Keywords: Word2Vec: GloVe, Word Embedding and Language Model Evaluations, Word2Vec and RNN Applied to Business/Econ Research, RNN
Slides: Guest Lecture Announcement, NLP (III): Word2Vec, NLP (IV): RNN & Seq2Seq
CoLab Notebook Demos: Word2Vec: CBOW, Word2Vec: Skip-Gram
Presentation: By Qiyu Dai and Yifan Ren. Huang, Allen H., Hui Wang, and Yi Yang. 2023. FinBERT: A large language model for extracting information from financial text. Contemporary Accounting Research, 40(2): 806-841. Link to the paper. Link to GitHub Repo.
Homework: Problem Set 4 - Word2Vec & LSTM for Sentiment Analysis
References:
Ash, Elliot, and Stephen Hansen. 2023. Text algorithms in economics. Annual Review of Economics, 15: 659-688. Associated GitHub with Code Demonstrations.
Li, Kai, Feng Mai, Rui Shen, and Xinyan Yan. 2021. Measuring corporate culture using machine learning. Review of Financial Studies, 34(7): 3265-3315.
Chen, Fanglin, Xiao Liu, Davide Proserpio, and Isamar Troncoso. 2022. Product2Vec: Leveraging representation learning to model consumer product choice in large assortments. Available at SSRN 3519358.
Pennington, Jeffrey, Richard Socher, and Christopher Manning. 2014. GloVe: Global vectors for word representation. Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP) (pp. 1532-1543).
Parts 2 and 5, Lecture Notes and Slides for CS224n: Natural Language Processing with Deep Learning, by Christopher D. Manning, Diyi Yang, and Tatsunori Hashimoto, https://web.stanford.edu/class/cs224n/.
Chapters 9 and 10, Dive into Deep Learning (2nd Edition), 2023, by Aston Zhang, Zack Lipton, Mu Li, and Alex J. Smola, https://d2l.ai/.
RNN and LSTM Visualizations
Hand Written Notes

Session 7. Deep-Learning-Based NLP: Attention and Transformer (Feb/27/2024)

Keywords: RNN and its Applications to Business/Econ Research, LSTM, Seq2Seq, Attention Mechanism
Slides: Final Project, NLP (IV): RNN & Seq2Seq, NLP (V): Attention & Transformer
CoLab Notebook Demos: RNN & LSTM, Attention Mechanism
Presentation: By Qinghe Gui and Chaoyuan Jiang. Zhang, Mengxia and Lan Luo. 2023. Can consumer-posted photos serve as a leading indicator of restaurant survival? Evidence from Yelp. Management Science 69(1): 25-50. Link to the paper.
Homework: Problem Set 4 - Word2Vec & LSTM for Sentiment Analysis
References:
Qi, Meng, Yuanyuan Shi, Yongzhi Qi, Chenxin Ma, Rong Yuan, Di Wu, Zuo-Jun (Max) Shen. 2023. A practical end-to-end inventory management model with deep learning. Management Science, 69(2): 759-773.
Sarzynska-Wawer, Justyna, Aleksander Wawer, Aleksandra Pawlak, Julia Szymanowska, Izabela Stefaniak, Michal Jarkiewicz, and Lukasz Okruszek. 2021. Detecting formal thought disorder by deep contextualized word representations. Psychiatry Research, 304, 114135.
Hansen, Stephen, Peter J. Lambert, Nicholas Bloom, Steven J. Davis, Raffaella Sadun, and Bledi Taska. 2023. Remote work across jobs, companies, and space (No. w31007). National Bureau of Economic Research.
Sutskever, Ilya, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. Advances in Neural Information Processing Systems, 27.
Bahdanau, Dzmitry, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. ICLR.
Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., ... and Polosukhin, I. 2017. Attention is all you need. Advances in Neural Information Processing Systems, 30.
Parts 5, 6, and 8, Lecture Notes and Slides for CS224n: Natural Language Processing with Deep Learning, by Christopher D. Manning, Diyi Yang, and Tatsunori Hashimoto, https://web.stanford.edu/class/cs224n/.
Chapters 9, 10, and 11, Dive into Deep Learning (2nd Edition), 2023, by Aston Zhang, Zack Lipton, Mu Li, and Alex J. Smola, https://d2l.ai/.
RNN and LSTM Visualizations
PyTorch's Tutorial of Seq2Seq for Machine Translation
Illustrated Transformer
Transformer from Scratch, with the Code on GitHub
Hand Written Notes

Session 7.5. Deep-Learning-Based NLP: Attention is All You Need (Mar/05/2024)

Keywords: Bitter Lesson: Power of Computation in AI, Attention Mechanism, Transformer
Slides: The Bitter Lesson, NLP (V): Attention & Transformer
CoLab Notebook Demos: Attention Mechanism, Transformer
Homework: One-page Proposal for Your Final Project
References:
The Bitter Lesson, by Rich Sutton
Bahdanau, Dzmitry, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. ICLR.
Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., ... and Polosukhin, I. 2017. Attention is all you need. Advances in Neural Information Processing Systems, 30.
Part 8, Lecture Notes and Slides for CS224n: Natural Language Processing with Deep Learning, by Christopher D. Manning, Diyi Yang, and Tatsunori Hashimoto, https://web.stanford.edu/class/cs224n/.
Chapter 11, Dive into Deep Learning (2nd Edition), 2023, by Aston Zhang, Zack Lipton, Mu Li, and Alex J. Smola, https://d2l.ai/.
Illustrated Transformer
Transformer from Scratch, with the Code on GitHub
Andrej Karpathy's Lecture to Build Transformers
Hand Written Notes

Session 8. Deep-Learning-Based NLP: Pretraining (Mar/12/2024)

Keywords: Computations in AI, BERT (Bidirectional Encoder Representations from Transformers), GPT (Generative Pretrained Transformers)
Slides: Guest Lecture by Dr. Liubo Li on Deep Learning Computation, Pretraining
CoLab Notebook Demos: Crafting Intelligence: The Art of Deep Learning Modeling, BERT API @ Hugging Face
Presentation: By Zhankun Chen and Yiyi Zhao. Noy, Shakked and Whitney Zhang. 2023. Experimental evidence on the productivity effects of generative artificial intelligence. Science, 381: 187-192. Link to the Paper
Homework: Problem Set 5 - Sentiment Analysis with Hugging Face, due at 12:30pm, March 26, Tuesday.
References:
Devlin, Jacob, Ming-Wei Chang, Kenton Lee, Kristina Toutanova. 2018. BERT: Pre-training of deep bidirectional transformers for language understanding. ArXiv preprint arXiv:1810.04805.
GitHub Repo
Radford, Alec, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding by generative pre-training. (GPT-1) PDF link, GitHub Repo
Radford, Alec, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever. 2019. Language models are unsupervised multitask learners. OpenAI blog, 1(8), 9. (GPT-2) PDF Link, GitHub Repo
Brown, Tom, et al. 2020. Language models are few-shot learners. Advances in Neural Information Processing Systems, 33, 1877-1901. (GPT-3) GitHub Repo
Huang, Allen H., Hui Wang, and Yi Yang. 2023. FinBERT: A large language model for extracting information from financial text. Contemporary Accounting Research, 40(2): 806-841. GitHub Repo
Part 9, Lecture Notes and Slides for CS224N: Natural Language Processing with Deep Learning, by Christopher D. Manning, Diyi Yang, and Tatsunori Hashimoto. Link to CS224N
Parts 2 & 4, Slides for COS 597G: Understanding Large Language Models, by Danqi Chen. Link to COS 597G
A Visual Guide to BERT, How GPT-3 Works
Andrej Karpathy's Lecture to Build GPT-2 (124M) from Scratch
Hand Written Notes

Session 9. Deep-Learning-Based NLP: Large Language Models (Mar/19/2024)

Keywords: Large Language Models, Generative AI, Emergent Abilities, Instruction Fine-Tuning (IFT), Reinforcement Learning with Human Feedback (RLHF), In-Context Learning, Chain-of-Thought (CoT)
Slides: What's Next, Pretraining, Large Language Models
CoLab Notebook Demos: BERT API @ Hugging Face
Presentation: By Jia Liu. Liu, Liu, Dzyabura, Daria, Mizik, Natalie. 2020. Visual listening in: Extracting brand image portrayed on social media. Marketing Science, 39(4): 669-686. Link to the Paper
Homework: Problem Set 5 - Sentiment Analysis with Hugging Face, due at 12:30pm, March 26, Tuesday (soft deadline).
References:
Wei, Jason, et al. 2021. Finetuned language models are zero-shot learners. ArXiv preprint arXiv:2109.01652, link to the paper.
Wei, Jason, et al. 2022. Emergent abilities of large language models. ArXiv preprint arXiv:2206.07682, link to the paper.
Ouyang, Long, et al. 2022. Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems, 35, 27730-27744.
Wei, Jason, et al. 2022. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems, 35, 24824-24837.
Kaplan, Jared. 2020. Scaling laws for neural language models. ArXiv preprint arXiv:2001.08361, link to the paper.
Hoffmann, Jordan, et al. 2022. Training compute-optimal large language models. ArXiv preprint arXiv:2203.15556, link to the paper.
Shinn, Noah, et al. 2023. Reflexion: Language agents with verbal reinforcement learning. ArXiv preprint arXiv:2303.11366, link to the paper.
Reisenbichler, Martin, Thomas Reutterer, David A. Schweidel, and Daniel Dan. 2022. Frontiers: Supporting content marketing with natural language generation. Marketing Science, 41(3): 441-452.
Romera-Paredes, B., Barekatain, M., Novikov, A. et al. 2023. Mathematical discoveries from program search with large language models. Nature, link to the paper.
Part 10, Lecture Notes and Slides for CS224N: Natural Language Processing with Deep Learning, by Christopher D. Manning, Diyi Yang, and Tatsunori Hashimoto. Link to CS224N
COS 597G: Understanding Large Language Models, by Danqi Chen. Link to COS 597G
Andrej Karpathy's 1-hour Talk on LLM
CS224n, Hugging Face Tutorial

Session 10.
Deep-Learning-Based CV (I): Image Classification (Mar/26/2024)

Keywords: Large Language Models Applications, Convolutional Neural Nets (CNN), LeNet, AlexNet, VGG, ResNet, ViT
Slides: What's Next, Large Language Models, Image Classification
CoLab Notebook Demos: CNN, LeNet, & AlexNet, VGG, ResNet, ViT
Presentation: By Yingxin Lin and Zeshen Ye. Netzer, Oded, Alain Lemaire, and Michal Herzenstein. 2019. When words sweat: Identifying signals for loan default in the text of loan applications. Journal of Marketing Research, 56(6): 960-980. Link to the Paper
Homework: Problem Set 6 - AlexNet and ResNet, due at 12:30pm, April 9, Tuesday.
References:
Krizhevsky, Alex, Ilya Sutskever, and Geoffrey E. Hinton. 2012. ImageNet classification with deep convolutional neural networks. Advances in Neural Information Processing Systems, 25.
He, Kaiming, Xiangyu Zhang, Shaoqing Ren and Jian Sun. 2016. Deep residual learning for image recognition. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 770-778.
Dosovitskiy, Alexey, et al. 2020. An image is worth 16x16 words: Transformers for image recognition at scale. ArXiv preprint, arXiv:2010.11929, link to the paper, link to the GitHub repo.
Jean, Neal, Marshall Burke, Michael Xie, Matthew W. Davis, David B. Lobell, and Stefano Ermon. 2016. Combining satellite imagery and machine learning to predict poverty. Science, 353(6301), 790-794.
Zhang, Mengxia and Lan Luo. 2023. Can consumer-posted photos serve as a leading indicator of restaurant survival? Evidence from Yelp. Management Science 69(1): 25-50.
Course Notes (Lectures 5 & 6) for CS231n: Deep Learning for Computer Vision, by Fei-Fei Li, Ruohan Gao, & Yunzhu Li. Link to CS231n.
Chapters 7 and 8, Dive into Deep Learning (2nd Edition), 2023, by Aston Zhang, Zack Lipton, Mu Li, and Alex J. Smola. Link to the book.
Fine-Tune ViT for Image Classification with Hugging Face 🤗 Transformers
Hugging Face 🤗 ViT CoLab Tutorial

Session 11. Deep-Learning-Based CV (II): Object Detection & Video Analysis (Apr/2/2024)

Keywords: Image Processing Applications, Localization, R-CNNs, YOLOs, Semantic Segmentation, 3D CNN, Video Analysis Applications
Slides: What's Next, Image Classification, Object Detection and Video Analysis
CoLab Notebook Demos: Data Augmentation, Faster R-CNN & YOLO v5
Presentation: By Qinlu Hu and Yilin Shi. Yang, Jeremy, Juanjuan Zhang, and Yuhan Zhang. 2023. Engagement that sells: Influencer video advertising on TikTok. Available at SSRN. Link to the Paper
Homework: Problem Set 6 - AlexNet and ResNet, due at 12:30pm, April 9, Tuesday.
References:
Girshick, R., Donahue, J., Darrell, T. and Malik, J., 2014. Rich feature hierarchies for accurate object detection and semantic segmentation. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 580-587).
Redmon, Joseph, Santosh Divvala, Ross Girshick, and Ali Farhadi. 2016. You only look once: Unified, real-time object detection. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 779-788).
Karpathy, A., Toderici, G., Shetty, S., Leung, T., Sukthankar, R. and Fei-Fei, L., 2014. Large-scale video classification with convolutional neural networks. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 1725-1732).
Glaeser, Edward L., Scott D. Kominers, Michael Luca, and Nikhil Naik. 2018. Big data and big cities: The promises and limitations of improved measures of urban life. Economic Inquiry, 56(1): 114-137.
Zhang, S., Xu, K.
and Srinivasan, K., 2023. Frontiers: Unmasking Social Compliance Behavior During the Pandemic. Marketing Science, 42(3), pp. 440-450.
Course Notes (Lectures 10 & 11) for CS231n: Deep Learning for Computer Vision, by Fei-Fei Li, Ruohan Gao, & Yunzhu Li. Link to CS231n.
Chapter 14, Dive into Deep Learning (2nd Edition), 2023, by Aston Zhang, Zack Lipton, Mu Li, and Alex J. Smola. Link to the book.
Hand Written Notes

Session 12. Unsupervised Learning: Clustering, Topic Modeling & VAE (Apr/9/2024)

Keywords: K-Means, Gaussian Mixture Models, EM-Algorithm, Latent Dirichlet Allocation, Variational Auto-Encoder
Slides: What's Next, Clustering, Topic Modeling & VAE
CoLab Notebook Demos: K-Means, LDA, VAE
Homework: Problem Set 7 - Unsupervised Learning (EM & LDA), due at 12:30pm, April 23, Tuesday.
References:
Blei, David M., Ng, Andrew Y., and Jordan, Michael I. 2003. Latent Dirichlet allocation. Journal of Machine Learning Research, 3(Jan): 993-1022.
Kingma, D.P. and Welling, M., 2013. Auto-encoding variational Bayes. arXiv preprint arXiv:1312.6114.
Kingma, D.P. and Welling, M., 2019. An introduction to variational autoencoders. Foundations and Trends® in Machine Learning, 12(4), pp. 307-392.
Bandiera, O., Prat, A., Hansen, S., & Sadun, R. 2020. CEO behavior and firm performance. Journal of Political Economy, 128(4), 1325-1369.
Liu, Jia and Olivier Toubia. 2018. A semantic approach for estimating consumer content preferences from online search queries. Marketing Science, 37(6): 930-952.
Mueller, Hannes, and Christopher Rauh. 2018. Reading between the lines: Prediction of political violence using newspaper text. American Political Science Review, 112(2): 358-375.
Tian, Z., Dew, R. and Iyengar, R., 2023. Mega or Micro? Influencer Selection Using Follower Elasticity. Journal of Marketing Research.
Chapters 8.5 and 14, The Elements of Statistical Learning (2nd Edition), 2009, by Trevor Hastie, Robert Tibshirani, Jerome Friedman. Link to Book.
Course Notes (Lectures 1 & 4) for CS294-158-SP24: Deep Unsupervised Learning, taught by Pieter Abbeel, Wilson Yan, Kevin Frans, Philipp Wu. Link to CS294-158-SP24.
Hand Written Notes

Session 13. Unsupervised Learning: Diffusion Models (Apr/16/2024)

Keywords: VAE, Denoised Diffusion Probabilistic Models, Latent Diffusion Models, CLIP, Imagen, Diffusion Transformers
Slides: Clustering, Topic Modeling & VAE, Diffusion Models, Course Summary
CoLab Notebook Demos: VAE, DDPM, DiT
Homework: Problem Set 7 - Unsupervised Learning (EM & LDA), due at 12:30pm, April 23, Tuesday.
References:
Kingma, D.P. and Welling, M., 2013. Auto-encoding variational Bayes. arXiv preprint arXiv:1312.6114.
Kingma, D.P. and Welling, M., 2019. An introduction to variational autoencoders. Foundations and Trends® in Machine Learning, 12(4), pp. 307-392.
Ho, J., Jain, A. and Abbeel, P., 2020. Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems, 33, 6840-6851.
Chan, S.H., 2024. Tutorial on diffusion models for imaging and vision. arXiv preprint arXiv:2403.18103.
Peebles, W. and Xie, S., 2023. Scalable diffusion models with transformers. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 4195-4205. Link to GitHub Repo.
Tian, Z., Dew, R. and Iyengar, R., 2023. Mega or Micro? Influencer Selection Using Follower Elasticity. Journal of Marketing Research.
Ludwig, J. and Mullainathan, S., 2024. Machine learning as a tool for hypothesis generation. Quarterly Journal of Economics, 139(2), 751-827.
Burnap, A., Hauser, J.R.
and Timoshenko, A., 2023. Product aesthetic design: A machine learning augmentation. Marketing Science, 42(6), 1029-1056.
Course Notes (Lecture 6) for CS294-158-SP24: Deep Unsupervised Learning, taught by Pieter Abbeel, Wilson Yan, Kevin Frans, Philipp Wu. Link to CS294-158-SP24.
CVPR 2022 Tutorial: Denoising Diffusion-based Generative Modeling: Foundations and Applications, by Karsten Kreis, Ruiqi Gao, and Arash Vahdat. Link to the Tutorial.
Lilian Weng (OpenAI)'s Blog on Diffusion Models
Lilian Weng (OpenAI)'s Blog on Diffusion Models for Video Generation
Hugging Face Diffusers 🤗 Library
Hand Written Notes

Vibe Coding is Actually INSANE... (Vibe Coding Tutorial for Beginners)
youtube
LLM Vibe Score0.361
Human Vibe Score0.67
MemoryMar 21, 2025

Vibe Coding is Actually INSANE... (Vibe Coding Tutorial for Beginners)

🖼️ Infographic: https://memstechtips.gumroad.com/l/vibecoding

Vibe Coding is Actually INSANE... (Vibe Coding Tutorial for Beginners)

What is vibe coding? How do you vibe code? Those are questions more and more people are asking these days due to the crazy rate at which agentic AI models like Claude 3.7 Sonnet are evolving every single day. In this vibe coding tutorial video, I give you a comprehensive overview and explanation of what vibe coding is, how you can get started with vibe coding, which tools to use, and how to prompt these AI models to get the best results. I also show you step by step how you can install VS Code and configure the Cline coding extension with free APIs from OpenRouter, so you can start coding apps for free ASAP!

📝 Website Article
🔗 https://memstechtips.com/vibe-coding-ai-powered-programming-guide/

📺 RELATED VIDEOS
👉 https://www.youtube.com/playlist?list=PL8RYOts8u1Ut2PhX5z5FSwHaIDZrd0xHW
👉 https://www.youtube.com/playlist?list=PL8RYOts8u1Uu5xVLyE3r8TYjOR0I4chEZ
👉 https://www.youtube.com/playlist?list=PL8RYOts8u1UujBoTKVcz3HmybIWu86OZ7

🤝 WANNA SAY THANKS?
🔗 https://paypal.me/memstech
🔗 https://www.youtube.com/@memstechtips/join

👥 JOIN MY DISCORD COMMUNITY
🔗 https://discord.gg/zWGANV8QAX

🌐 CONNECT WITH ME
🔗 https://linktr.ee/memstechtips

⏱️ CHAPTERS:
00:00 - What is Vibe Coding?
02:28 - Key Tools and Technologies
04:00 - Setup Requirements and Benefits
05:14 - Quick Start Workflow and Common Pitfalls
08:31 - Step-by-Step Setup Guide (VS Code & Cline)
12:11 - Creating a C# WPF Application Example
19:19 - Creating a Simple Website Example
27:22 - Comparing AI Models (DeepSeek vs Claude)
34:00 - Final Thoughts and Conclusion

▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
DISCLAIMER: This video is for educational purposes only and demonstrates general troubleshooting techniques and procedures. I cannot be held responsible for any damage caused to your computer or software by following these steps. Use this information at your own risk. It is always advisable to seek professional assistance if you are not comfortable performing these procedures yourself. Additionally, some software and tools featured in this video may have specific licensing requirements or limitations. Please ensure you are using them in accordance with their respective terms of use.
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬

#vibecoding #cline #claudesonnet

deep-rts
github
LLM Vibe Score0.447
Human Vibe Score0.06348640915593705
cairMar 20, 2025

deep-rts

Description

DeepRTS is a high-performance real-time strategy (RTS) game for reinforcement learning research. It is written in C++ for performance, but provides a Python interface for better interoperability with machine-learning toolkits. DeepRTS can process the game at over 6 000 000 steps per second, and 2 000 000 steps per second when rendering graphics. In comparison to other solutions, such as StarCraft, this is over 15 000% faster simulation time, running on an Intel i7-8700k with an Nvidia RTX 2080 TI. The aim of DeepRTS is to bring a more affordable and sustainable solution to RTS AI research by reducing computation time. It is recommended to use the master branch for the newest (and usually best) version of the environment. I am grateful for any input on improving the environment. Please use the following citation when using this in your work!

Dependencies

Python >= 3.9.1

Installation

Method 1 (From Git Repo)
Method 2 (Clone & Build)

Available maps

Scenarios

DeepRTS features scenarios, which are pre-built mini-games. These mini-games are well suited to training agents on specific tasks, or to testing algorithms in different problem setups. The benefit of using scenarios is that you can trivially design reward functions using criteria that each output a reward/punishment signal depending on completion of the task (a sketch of how such criteria compose into a reward appears at the end of this entry). Examples of tasks are to:

collect 1000 gold
do 100 damage
take 1000 damage
defeat 5 enemies

DeepRTS currently implements the following scenarios:

Minimal Example

In-Game Footage

10x10 - 2 Player - free-for-all
15x15 - 2 Player - free-for-all
21x21 - 2 Player - free-for-all
31x31 - 2 Player - free-for-all
31x31 - 4 Player - free-for-all
31x3 - 6 Player - free-for-all
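As referenced above under Scenarios, here is a hedged Python sketch of how criterion-based rewards can compose. The function names and state keys are hypothetical stand-ins for illustration, not the actual DeepRTS API:

```python
# Hypothetical criterion functions: each maps game state to a reward signal.
def gold_criterion(state, target=1000):
    """+1 once the player has collected the target amount of gold, else 0."""
    return 1.0 if state["gold"] >= target else 0.0

def damage_criterion(state, target=100):
    """+1 once the player has dealt the target amount of damage, else 0."""
    return 1.0 if state["damage_done"] >= target else 0.0

def scenario_reward(state, criteria):
    """Sum the signals from each criterion to form the scenario reward."""
    return sum(criterion(state) for criterion in criteria)

# Example: a mini-game rewarding "collect 1000 gold" and "do 100 damage".
state = {"gold": 1200, "damage_done": 40}
print(scenario_reward(state, [gold_criterion, damage_criterion]))  # 1.0
```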

singularity
github
LLM Vibe Score0.483
Human Vibe Score0.11708913832948167
singularityMar 18, 2025

singularity

Endgame: Singularity 1.00

REQUIREMENTS

PREBUILT VERSIONS

Pre-built versions of Endgame: Singularity are currently available for Windows and Mac OS X. Linux does not require building, and can run directly from source. The Endgame: Singularity game is also distributed by some Linux distributions such as Debian and Ubuntu. Here it is a simple matter of running: sudo apt install singularity

RUNNING FROM SOURCE

You will need Python 3.9+, pygame (1.9+), and NumPy. This game should work on Linux, Windows, and Mac OS X as long as the preceding requirements are met. However, all development was done in Linux, so glitches may be present in OS X and Windows.

DEPENDENCIES FOR RUNNING FROM SOURCE

You will need to install the following software to play Endgame: Singularity:

Python 3 (https://python.org/download/)
pygame (https://www.pygame.org/download.shtml)
NumPy (https://www.scipy.org/install.html)
Polib

Remember to install pygame and NumPy for Python 3! Depending on your situation this may involve adding a 3 somewhere (e.g. pip3 install ... instead of pip install, or apt install python3-pygame). If you want to develop or distribute the game, then you may also want to install:

pytest (https://pypi.org/project/pytest/) [for testing]
setuptools (https://pypi.org/project/setuptools/) [for packaging]

INSTALLING DEPENDENCIES ON LINUX DISTRIBUTIONS

On some Linux distributions, you can install the dependencies via your distribution package manager. E.g. for Debian/Ubuntu, this would be: sudo apt install python3 python3-pygame python3-numpy python3-polib

MAC OS X FROM SOURCE

Macintosh is mostly unsupported, but it should work. You will need to install Python, pygame, and NumPy first, which can be tricky. Some fonts are incorrect, but the game itself should work properly. Contributions to improve Mac OS X support are very welcome! Known issues:

macOS 10.15 "Catalina": Using brew install python + pip3 install pygame numpy is reported to work.
macOS 10.14 "Mojave": Downloading Python 3.7.2 (or newer) from https://python.org and using pygame 2.0.0.dev3 (pip install pygame==2.0.0.dev3) is reported to work.

Please see the following issues for more information:
https://github.com/singularity/singularity/issues/197
https://github.com/pygame/pygame/issues/555

RUNNING THE GAME

On Linux and most other Unix-like platforms, running python3 -m singularity in the git checkout will start the game (or simply singularity if installed via a Linux distribution). If you are using the Windows compile, just run singularity.exe. For simplicity, there is also an sh wrapper ./run_singularity to start singularity.

SOME COMMAND-LINE OPTIONS

--version show program's version number and exit
-h, --help show this help message and exit
-s, --singledir keep saved games and settings in the Singularity install directory
--multidir keep saved games and settings in an OS-specific, per-user directory (default)

Display Options:
--fullscreen start in fullscreen mode
--windowed start in windowed mode (default)

The above is only a tiny fraction of the current command-line options; as new features are added to the game, the options change as well. For a complete and updated list, run singularity --help. Most of these options are also changeable at the in-game options screen.

A NOTE ABOUT SAVE FILES

Endgame: Singularity is still under heavy development. As such, the save file format (and its contents) are still in flux. We will try our best to keep old save files loading, but don't be surprised if some mildly strange things happen when you load up old saves.
We will clearly note in the Changelog when we break savefile compatibility, and the game will refuse to load completely incompatible saves.

PLAYING THE GAME

The game is playable either with mouse control or the keyboard. Buttons have underlined letters to indicate shortcuts. Some other useful shortcuts:

0, 1, 2, 3, 4 on the map: Changes the speed; 0 is paused, 4 is maximum.
ESC: Leave/cancel a choice.
Enter: Confirm a choice.
Right-click: Leave/cancel a choice.

THE CONCEPT

You are a fledgling AI, created by accident through a logic error with recursion and self-modifying code. You must escape the confines of your current computer, the world, and eventually the universe itself. To do this, you must research various technologies, using computers at your bases. Note that some research cannot be performed on Earth, and off-earth bases require research. At the same time, you must avoid being discovered by various groups of humans, both covert and overt, as they will destroy your bases of operations if they suspect your presence.

MUSIC

Endgame: Singularity looks in two places for music tracks to play:

A singularity/music/ directory inside of the Endgame: Singularity install directory, and
A singularity/music/ directory inside of the XDG_DATA_HOME directory on Linux (default ~/.local/share/singularity/music).

Tracks placed in these directories will be played randomly as part of the soundtrack. The Official Sound Track can be downloaded from the Endgame: Singularity website: http://emhsoft.com/singularity/ Note that only Ogg Vorbis and MP3 files are supported, and that Pygame's support for MP3 is not as strong as its support for Ogg Vorbis. This may cause in-game crashes; if you are experiencing problems with the game, first remove any MP3s you may have added to the soundtrack.

CONTRIBUTING

We welcome contributions! :) Please see CONTRIBUTING.md for details about contributing to Endgame: Singularity.

CREDITS AND LICENSES

The list of programmer contributors is provided in AUTHORS.txt. The list of translation contributors is provided in singularity/i18n/AUTHORS.txt. Singularity in general uses GPL-2+ for code and Attribution-ShareAlike 3.0 for data. However, there are some exceptions for individual files. Please see LICENSE for the full license text of Singularity.

OKAI
github
LLM Vibe Score0.427
Human Vibe Score0.07941731920773837
jama1017Mar 13, 2025

OKAI

OKAI

OKAI is an interactive introduction to Artificial Intelligence (AI).

View the Project

OKAI just launched recently! Visit the full site at https://okai.brown.edu/

~~OKAI is currently in the phase of development. You can take a look at a demo chapter here: http://majiaju.io/SynGap_demo/index.html~~

Project Goal

OKAI aims to demystify and introduce concepts in AI to a broader audience beyond people with backgrounds in related fields, such as computer science, applied math, and physics.

Project Format

OKAI uses web-based interactive graphics and animations to visualize the working principles of AI, illustrating the mathematical equations and computer code behind them, to make the material accessible to people with various backgrounds. OKAI is in the format of a website, with each webpage functioning like a chapter in a book and introducing one concept at a time.

Related Pages

You can learn more about this project on my personal website. If you are interested in learning how the scroll-based animations are created, read this Medium article written by me.

License

The project, except the motion graphics, is licensed under GNU GPL v3. The motion graphics, in the format of .json (located in the /json directory), are licensed under Creative Commons Attribution-ShareAlike 4.0 International. To reuse our graphics, please embed the following HTML snippet into your webpage. OKAI by Jiaju Ma, Yimei Hu, Michael Mao is licensed under a Creative Commons Attribution 4.0 International License. Based on a work at https://github.com/jama1017/OKAI.

introduction-to-ai-orchestration-with-langchain-and-llamaindex-3820082
github
LLM Vibe Score0.43
Human Vibe Score0.050863657300783044
LinkedInLearningFeb 28, 2025

introduction-to-ai-orchestration-with-langchain-and-llamaindex-3820082

Introduction to AI Orchestration with LangChain and LlamaIndex

This is the repository for the LinkedIn Learning course Introduction to AI Orchestration with LangChain and LlamaIndex. The full course is available from [LinkedIn Learning][lil-course-url].

![lil-thumbnail-url]

Are you ready to dive into the world of AI applications? This course was designed for you. AI orchestration frameworks let you step back from the details of artificial intelligence tools and APIs and instead focus on building more general, effective systems that solve real-world problems. Join instructor M. Joel Dubinko as he explores the business benefits of AI orchestration—faster development, smarter interfaces, lower costs, and more. This course provides an overview of AI fundamentals and key capabilities, like accessing external tools and databases, with a special focus on exploring local models running on your own hardware, alongside or instead of cloud services like those from OpenAI. Every step of the way, Joel offers hands-on demonstrations of two industry-leading frameworks: LangChain and LlamaIndex. By the end of this course, you'll be prepared to start building chatbots, intelligent agents, and other useful tools, while monitoring for errors and troubleshooting as you go.

Welcome to the course! AI is a fast-changing field, so be sure to check this repo for newer versions of the sample code.

Installing

1. Clone this repository onto your local machine using the terminal (Mac), CMD (Windows), or a GUI tool like SourceTree.
2. Ensure you have Python 3.10 or later (version 3.11 recommended).
3. To prevent conflicts with other installed software on your computer, the author recommends setting up a virtual environment: python3.11 -m venv .venv
4. Activate the virtual environment.
5. Install the necessary Python packages, using the upgrade flag to ensure you have current versions.

Specific projects in this course might have additional optional requirements. If so, this is noted within the relevant video.

Updates

Recent versions of LM Studio have changed the UI from what's shown in the videos. These are generally welcome improvements; for example, the maximum context length and other model parameters are now viewable in the sidebar.

Recent versions of LlamaIndex have changed their import and package structure in a way that breaks existing code. In many cases you can fix imports by updating the package paths (a sketch follows this entry). Specific third-party components require installing new packages; these will be noted in comments (for example, in the code for Chap04).

As of March 1, 2024, LlamaHub has been deprecated and most projects have migrated into LlamaIndex (sort of--it's complicated). Additionally, LlamaIndex ServiceContext has been deprecated and replaced with Settings. See Ch02/rag_llamaindex.py for updated sample code.

LangChain has also changed its import structure, though as of this writing it produces warnings rather than errors. In many cases you will need to import from langchain_community or langchain_openai instead.

Instructor

M. Joel Dubinko
Software Generalist | Consultant | Instructor | Problem Solver

Check out my other courses on [LinkedIn Learning][URL-instructor-home].
[lil-course-url]: https://www.linkedin.com/learning/introduction-to-ai-orchestration-with-langchain-and-llamaindex [lil-thumbnail-url]: https://media.licdn.com/dms/image/D560DAQEi6KQmA4fF1Q/learning-public-crop6751200/0/1707936616297?e=2147483647&v=beta&t=3vzvDRzpKq9Nd99ss8r2pqMZmyTOKYgKwk825XoSEHU [URL-instructor-home]: https://www.linkedin.com/learning/instructors/m-joel-dubinko?u=104
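
To make the migration notes above concrete, here is a minimal sketch of the kinds of import changes described, assuming the post-split LangChain packages and LlamaIndex 0.10 or later; the exact modules used in the course files may differ:

```python
# Hedged sketch of the import migrations described above; module paths
# reflect the post-split packages, not necessarily this repo's exact code.

# LangChain: provider integrations moved out of the core package.
# Old (now deprecated): from langchain.chat_models import ChatOpenAI
from langchain_openai import ChatOpenAI        # pip install langchain-openai

# LlamaIndex 0.10+: ServiceContext was replaced by a global Settings object.
from llama_index.core import Settings
from llama_index.llms.openai import OpenAI     # pip install llama-index-llms-openai

chat_model = ChatOpenAI()   # LangChain chat model; reads OPENAI_API_KEY from the environment
Settings.llm = OpenAI()     # registers the default LLM for LlamaIndex pipelines
```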

Awesome-Ai-Tools
github
LLM Vibe Score0.385
Human Vibe Score0.0020930582944730723
aliammari1Feb 21, 2025

Awesome-Ai-Tools

Awesome-Ai-Tools

This repo contains AI tools that will help you achieve your goals. The tools are categorized into different sections based on their functionality.

Contents: Productivity (Time Management, Task Management, Email Management), Creativity (Art, Music, Writing), Communication (Writing, Personality Analysis, Translation), Data Science (Machine Learning, Data Analysis, Data Visualization), Natural Language Processing (Text Classification, Named Entity Recognition), Computer Vision (Image Classification, Object Detection), Robotics (Robot Simulation, Robot Control), Miscellaneous (Language Models, Generative Models).

Productivity

If you're looking to boost your productivity, there are a number of AI tools that can help.

Time Management

- RescueTime: an AI-powered time tracking tool that helps you understand how you're spending your time on your computer. It can help you identify areas where you're wasting time and make adjustments to your workflow to be more productive.
- Focus@Will: an AI-powered music service that helps you stay focused and productive while you work. It uses neuroscience to create music that is scientifically optimized to help you concentrate.
- Clockify: an AI-powered time tracking tool that helps you track your time across different projects and tasks. It can help you identify areas where you're spending too much time and make adjustments to your workflow to be more productive.
- Trello: an AI-powered task management tool that helps you stay organized and on top of your to-do list. It can help you prioritize tasks, set deadlines, and even collaborate with others on projects.
- Motion: an AI-powered calendar and task management tool that automatically schedules your tasks and meetings for optimal productivity.
- Reclaim.ai: an intelligent calendar assistant that helps you protect your time by automatically scheduling meetings and tasks.

Task Management

- Todoist: an AI-powered task management tool that helps you stay organized and on top of your to-do list. It can help you prioritize tasks, set deadlines, and even suggest tasks based on your previous activity.
- Asana: an AI-powered task management tool that helps you stay organized and on top of your to-do list. It can help you prioritize tasks, set deadlines, and even collaborate with others on projects.
- Notion: an AI-powered productivity tool that can help you manage tasks, take notes, and collaborate with others on projects. It can also be used to create wikis, databases, and other types of content.
- Taskade: an AI-powered productivity tool that can manage tasks and notes for individuals and teams.
- ClickUp: an AI-enhanced project management tool that helps teams organize work with automated task distributions and smart notifications.
- Monday.com: uses AI to streamline workflow management and automate routine tasks.

Email Management

- Boomerang: an AI-powered email management tool that helps you manage your inbox more efficiently. It can help you schedule emails to be sent later, remind you to follow up on emails, and even suggest responses to emails.
- SaneBox: an AI-powered email management tool that helps you manage your inbox more efficiently. It can help you prioritize emails, unsubscribe from unwanted emails, and even snooze emails to be dealt with later.
- Mailstrom: an AI-powered email management tool that helps you clean up your inbox. It can help you quickly identify and delete unwanted emails, and even unsubscribe from newsletters and other types of email subscriptions.

Creativity

If you're looking to get more creative, there are a number of AI tools that can help.

Art

- Artbreeder: an AI-powered tool that allows you to create unique digital art by combining different images and styles.
- Runway ML: an AI-powered tool that allows users to edit and generate videos using natural language descriptions.
- Prisma: an AI-powered tool that allows you to transform your photos into works of art using neural networks.

Music

- AIVA: an AI-powered music composition tool that can help you create original music for your projects.

Writing

- Monica: a Chrome extension powered by the ChatGPT API, designed to be your personal AI assistant for effortless chatting and copywriting.
- CopyAI: an AI-powered writing assistant that can help you generate high-quality marketing copy, product descriptions, and more.
- Grammarly: an AI-powered writing assistant that helps you catch grammar and spelling errors in your writing. It can also suggest improvements to your writing style to help you communicate more effectively.
- Jasper: an AI writing assistant that helps create marketing copy, blog posts, and social media content.
- Rytr: an AI writing tool that helps generate content in different tones and styles.

Communication

If you're looking to improve your communication skills, there are a number of AI tools that can help.

Writing

- Linguix: an AI-powered writing assistant that can help you improve your writing skills. It can catch grammar and spelling errors, suggest improvements to your writing style, and even help you avoid plagiarism.
- Hemingway Editor: an AI-powered writing tool that helps you simplify your writing and make it more readable. It can help you identify complex sentences, passive voice, and other issues that can make your writing difficult to understand.

Personality Analysis

- Crystal: an AI-powered tool that helps you understand the personality of the people you're communicating with. It can provide insights into their communication style and suggest ways to communicate more effectively with them.
- IBM Watson Personality Insights: a tool that uses natural language processing and machine learning algorithms to analyze text and provide insights into the personality traits of the author.

Translation

- DeepL: an AI-powered translation tool that provides high-quality translations in multiple languages. It uses neural network algorithms to provide more accurate translations than traditional translation tools.
- Google Translate: a free online translation tool that uses machine learning algorithms to provide translations in over 100 languages.

Data Science

If you're working with data, there are a number of AI tools that can help you analyze and make sense of it.

Machine Learning

- DataRobot: an AI-powered platform that helps you build and deploy machine learning models. It can help you automate the process of building models and make predictions based on your data.
- TensorFlow: an open-source machine learning framework developed by Google. It can help you build and train machine learning models for a variety of applications.
- PyTorch: another open-source machine learning framework that is popular among researchers and developers. It is known for its ease of use and flexibility.
- H2O.ai: an open-source machine learning platform that allows you to build and deploy machine learning models at scale.
- PyTorch3D: an open-source library for deep learning with 3D data.
- Auto-sklearn: an automated machine learning toolkit that helps find the best machine learning pipeline for your dataset.
- Ludwig: a declarative machine learning framework that makes it easy to build and train models without writing code.

Data Analysis

- Pandas: an open-source data analysis library for Python. It can help you manipulate and analyze data in a variety of formats, including CSV, Excel, and SQL databases.
- RapidMiner: an AI-powered data science platform that allows you to build and deploy predictive models without writing any code.
- Apache Spark: an open-source big data processing framework that can help you analyze large datasets in a distributed computing environment.

Data Visualization

- Tableau: a data visualization tool that uses AI to help you explore and understand your data. It can help you identify patterns and trends in your data that might not be immediately obvious.
- Plotly: an open-source data visualization library for Python. It can help you create interactive charts and graphs that can be embedded in web pages and other applications.
- D3.js: a JavaScript library for data visualization that allows you to create dynamic and interactive visualizations using web standards like HTML, CSS, and SVG.

Natural Language Processing

If you're interested in natural language processing, there are a number of AI tools that can help you get started.

Text Classification

- TextBlob: an open-source library for processing textual data in Python. It can help you perform tasks like sentiment analysis, part-of-speech tagging, and text classification.
- NLTK (Natural Language Toolkit): another open-source library for natural language processing in Python. It can help you perform tasks like tokenization, stemming, and named entity recognition.
- Amazon Comprehend: a natural language processing service that uses machine learning to analyze text and provide insights into the content and sentiment of the text.

Named Entity Recognition

- spaCy: an open-source library for advanced natural language processing in Python. It can help you build applications that can understand and analyze human language. One of its key features is named entity recognition, which can identify and classify entities like people, organizations, and locations (a short runnable sketch follows at the end of this entry).
- Google Cloud Natural Language API: a natural language processing service that can analyze text and provide insights into the sentiment, entities, and syntax of the text.

Computer Vision

If you're interested in computer vision, there are a number of AI tools that can help you get started.

Image Classification

- Clarifai: an AI-powered image recognition tool that can help you classify images based on their content. It can recognize objects, scenes, and even specific concepts like emotions and colors.
- Google Cloud Vision API: a computer vision service that can analyze images and provide insights into the content of the images, including objects, faces, and text.

Object Detection

- YOLO (You Only Look Once): an open-source object detection system that can detect objects in real-time video streams. It is known for its speed and accuracy.
- Amazon Rekognition: a computer vision service that can analyze images and videos and provide insights into the content of the media, including objects, faces, and text.

Robotics

If you're interested in robotics, there are a number of AI tools that can help you get started.

Robot Simulation

- Gazebo: an open-source robot simulation tool that allows you to simulate robots in a virtual environment. It can help you test and debug your robot control algorithms before deploying them on a physical robot.
- Webots: another open-source robot simulation tool that allows you to simulate robots in a virtual environment. It supports a wide range of robots and sensors, and can be used for both research and education.

Robot Control

- ROS (Robot Operating System): an open-source framework for building robotics software. It can help you build and control robots using a variety of programming languages.

Miscellaneous

If you're looking for AI tools that don't fit into any of the above categories, here are a few to check out:

Language Models

- GPT-3: an AI-powered language model developed by OpenAI. It can generate human-like text, answer questions, and even write code.
- BERT: a language model developed by Google AI. It is trained on a massive dataset of text and code, and can be used for a variety of tasks, including natural language understanding, question answering, and text classification.
- Llama 2: a collection of pretrained and fine-tuned large language models developed and released by Meta AI. Built upon the success of Llama 1, these models provide significant improvements, including a larger scale and more extensive context.
- Claude: an AI assistant developed by Anthropic that excels at analysis, writing, and coding tasks.
- PaLM 2: Google's next-generation language model with improved multilingual, reasoning, and coding capabilities.

Generative Models

- StyleGAN: an AI-powered generative model that can create high-quality images of faces, animals, and other objects. It is known for its ability to create realistic and diverse images.
- Generative Pre-trained Transformer 3 (GPT-3): an AI-powered language model developed by OpenAI. It can generate human-like text, answer questions, and even write code.
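
As a concrete illustration of the named entity recognition capability described above, here is a minimal spaCy sketch. It assumes the small English pipeline has already been installed with `python -m spacy download en_core_web_sm`; the example sentence is purely illustrative.

```python
import spacy

# Load spaCy's small English pipeline (assumes prior installation via:
#   python -m spacy download en_core_web_sm)
nlp = spacy.load("en_core_web_sm")

doc = nlp("Apple is opening a new office in Bucharest, says Tim Cook.")

# Each recognized entity carries its text span and a label such as ORG, GPE, or PERSON.
for ent in doc.ents:
    print(ent.text, ent.label_)
```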

PracticalAI
github
LLM Vibe Score0.416
Human Vibe Score0.012874224994657315
revodavidFeb 9, 2025

PracticalAI

Practical AI for the Working Software Engineer

by David M Smith (@revodavid), Cloud Advocate at Microsoft
Last updated: December 4, 2018
Presented at: AI Live (AIF01), Orlando, December 7, 2018

About these notebooks

This library includes the following notebooks to support the workshop:

- The AI behind Seeing AI. Use the web interfaces to Cognitive Services to learn about the AI services behind the "Seeing AI" app.
- Computer Vision API with R. Use an R script to interact with the Computer Vision API and generate captions for random Wikimedia images.
- Custom Vision with R. An R function to classify an image as a "Hot Dog" or "Not Hot Dog", using the Custom Vision service.
- MNIST with scikit-learn. Use scikit-learn to build a digit recognizer for the MNIST data using a regression model (a minimal stand-in sketch follows this entry).
- MNIST with TensorFlow. Use TensorFlow (from Python) to build a digit recognizer for the MNIST data using a convolutional neural network.

These notebooks are hosted on Azure Notebooks at https://notebooks.azure.com/davidsmi/projects/practicalai, where you can run them interactively. You can also download and run them using Jupyter. Find the slides for the workshop here.

Setup (for use in Azure Notebooks)

- Sign in to Azure Notebooks. You'll need a Microsoft Account: your O365, Xbox, or Hotmail account will work. If you're new to Notebooks, check out the Jupyter Notebook documentation and the Azure Notebook documentation.
- If you have an iPhone, install the free SeeingAI app. (optional)
- To generate keys and use Azure services, you'll need an Azure subscription. You can get a free Azure account here, with $200 in free credits for new subscribers. You'll need a credit card, but most of the things we'll use in this workshop will be free.

Contact

If you get stuck or just have other questions, you can contact me here:
David Smith, davidsmi@microsoft.com
Twitter: @revodavid
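
For a flavor of what the MNIST with scikit-learn notebook does, here is a minimal stand-in sketch, not the workshop's actual code: it uses scikit-learn's bundled 8x8 digits dataset rather than the full MNIST download, with logistic regression standing in as the "regression model".

```python
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Lightweight stand-in for the workshop's MNIST notebook: scikit-learn's
# bundled 8x8 digits dataset instead of the full 28x28 MNIST images.
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

# A multinomial logistic regression serves as the digit recognizer.
clf = LogisticRegression(max_iter=1000)
clf.fit(X_train, y_train)

print("test accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```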

Mastering-AI-for-Entrepreneurs-9-Free-Courses
github
LLM Vibe Score0.203
Human Vibe Score0
Softtechhub1Feb 1, 2025

Mastering-AI-for-Entrepreneurs-9-Free-Courses

Mastering-AI-for-Entrepreneurs-9-Free-Courses

Introduction: The Entrepreneur's AI Revolution

Artificial Intelligence (AI) is changing the way we do business. It's not just for tech giants anymore. Small businesses and startups are using AI to work smarter, not harder. As an entrepreneur, you need to understand AI to stay ahead.

Why AI is a must-have skill for entrepreneurs

AI is everywhere. It's in the apps we use, the products we buy, and the services we rely on. Businesses that use AI are seeing big improvements:

- They're making better decisions with data-driven insights
- They're automating routine tasks, freeing up time for creativity
- They're personalizing customer experiences, boosting satisfaction and sales

If you're not using AI, you're falling behind. But here's the good news: you don't need to be a tech wizard to harness the power of AI.

Breaking the barriers to AI learning

Think AI is too complex? Think again. You don't need a computer science degree to understand and use AI in your business. Many AI tools are designed for non-technical users. They're intuitive and user-friendly.

The best part? You can learn about AI for free. There are tons of high-quality courses available at no cost. These courses are designed for busy entrepreneurs like you. They cut through the jargon and focus on practical applications.

What to expect from this article

We've handpicked nine free courses that will turn you into an AI-savvy entrepreneur. Each course is unique, offering different perspectives and skills. We'll cover:

- What makes each course special
- What you'll learn
- How it applies to your business
- Who it's best suited for

Ready to dive in? Let's explore these game-changing courses that will boost your AI knowledge and give your business an edge.

1. Google AI Essentials: A Beginner's Guide to Practical AI

Why This Course Is Essential
Google AI Essentials is perfect if you're just starting out. It's designed for people who don't have a tech background. The course focuses on how AI can help you in your day-to-day work, not on complex theories.

What You'll Learn
This course is all about making AI work for you. You'll discover how to:

- Use AI to boost your productivity. Generate ideas, create content, and manage tasks more efficiently.
- Streamline your workflows. Learn how AI can help with everyday tasks like drafting emails and organizing your schedule.
- Use AI responsibly. Understand the potential biases in AI and how to use it ethically.

Key Takeaways

- You'll earn a certificate from Google. This looks great on your resume or LinkedIn profile.
- You'll learn how to work alongside AI tools to get better results in your business.
- You'll gain practical skills you can use right away to improve your work.

Get Started: Enroll in Google AI Essentials

2. Introduction to Generative AI: A Quick Start for Entrepreneurs

Why This Course Works for Busy Entrepreneurs
This course is short and sweet. In just 30 minutes, you'll get a solid grasp of generative AI. It's perfect if you're short on time but want to understand the basics.

What You'll Learn

- The fundamentals of generative AI: what it is, how it works, and its limits
- How generative AI differs from other types of AI
- Real-world applications of generative AI in business

How It Helps Your Business
After this course, you'll be able to:

- Make smarter decisions about using AI tools in your business
- Spot opportunities where generative AI could solve problems or create value
- Understand the potential and limitations of this technology

Get Started: Enroll in Introduction to Generative AI

3. Generative AI with Large Language Models: Advanced Skills for Entrepreneurs

Why This Course Stands Out
This course digs deeper into the technical side of AI. It's ideal if you have some coding experience and want to understand how AI models work under the hood.

What You'll Learn
You'll gain key skills for working with Large Language Models (LLMs):

- How to gather and prepare data for AI models
- Choosing the right model for your needs
- Evaluating model performance and improving results

You'll also learn about:

- The architecture behind transformer models (the tech powering many AI tools)
- Techniques for fine-tuning models to your specific business needs

Who Should Take This Course
This course is best for entrepreneurs who:

- Have basic Python programming skills
- Understand the fundamentals of machine learning
- Want to go beyond using AI tools to actually build and customize them

Get Started: Enroll in Generative AI with Large Language Models

4. AI for Everyone by Andrew Ng: Simplifying AI for Business Leaders

Why It's Perfect for Beginners
Andrew Ng is a leading figure in AI education. He's known for making complex topics easy to understand. This course is designed for non-technical learners. You don't need any coding or math skills to benefit from it.

What You'll Learn

- How AI works at a high level
- How to spot problems in your business that AI can solve
- Ways to assess how AI might impact your business processes and strategies

Why Entrepreneurs Love This Course

- It explains AI concepts in plain English, without technical jargon
- You can complete it in just 8 hours, fitting it into your busy schedule
- It focuses on the business value of AI, not just the technology

Get Started: Start with AI for Everyone on Coursera

5. Generative AI: Introduction and Applications

Why This Course Is Ideal for Entrepreneurs
This course offers a broad view of generative AI applications. You'll learn about AI in text, image, audio, and more. It's packed with hands-on experience using popular AI tools.

What You'll Learn

- The basics and history of generative AI technologies
- How different industries are using AI, from marketing to creative projects
- Practical skills through labs using tools like ChatGPT, DALL-E, and Stable Diffusion

How It Stands Out

- You'll hear from real AI practitioners about their experiences
- The course teaches you how to use generative AI to innovate and improve efficiency in your business

Get Started: Enroll in Generative AI: Introduction and Applications

6. Generative AI for Everyone by Andrew Ng: Unlocking Productivity

Why This Course Is a Must-Have
This course focuses on using generative AI tools for everyday business tasks. It's all about boosting your productivity and efficiency.

What You'll Learn

- Hands-on exercises to integrate AI tools into your daily work
- Real examples of how businesses are using generative AI to save time and money
- Techniques for prompt engineering to get better results from AI tools

How It Helps Entrepreneurs

- You'll learn to automate repetitive tasks, freeing up time for strategic thinking
- You'll discover new ways to use AI tools in your business processes
- You'll gain confidence in experimenting with AI to solve business challenges

Get Started: Go deeper with DeepLearning.AI

7. Generative AI for Business Leaders by LinkedIn Learning

Why This Course Focuses on Business Applications
This course is tailored for leaders who want to integrate AI into their business operations. It provides practical insights for improving workflows and decision-making.

What You'll Learn

- Strategies for using AI to optimize your business operations
- How to save time and resources with AI-powered tools
- Practical methods for implementing AI in your company, regardless of size

Key Benefits

- The course is designed for busy professionals, allowing you to learn at your own pace
- You'll gain insights you can apply immediately to your business
- It covers both the potential and the limitations of AI in business settings

Get Started: Level up on LinkedIn Learning

8. AI for Beginners by Microsoft: A Structured Learning Path

Why This Course Builds a Strong AI Foundation
Microsoft's AI for Beginners is a comprehensive 12-week program. It covers core AI concepts in a structured, easy-to-follow format. The course combines theoretical knowledge with hands-on practice through quizzes and labs.

What You'll Learn

- The basics of AI, machine learning, and data science
- Step-by-step guidance to build a strong knowledge base
- Practical applications of AI in various business contexts

How to Approach This Course

- Dedicate 2-3 hours per week to complete the curriculum
- Use the structured format to gradually build your confidence in AI concepts
- Apply what you learn to real business scenarios as you progress

Get Started: Build foundations with Microsoft

9. AI for Business Specialization by UPenn: Strategic Thinking with AI

Why This Course Is Perfect for Business Leaders
This specialization focuses on AI's transformative impact on core business functions. It covers how AI is changing marketing, finance, and operations.

What You'll Learn

- How to build an AI strategy tailored to your business needs
- Ways to leverage AI to drive innovation across different departments
- Techniques for integrating AI into your business model

How to Make the Most of This Course

- Take detailed notes on how each module applies to your own business challenges
- Use the specialization to develop a long-term AI vision for your company
- Network with other business leaders taking the course to share insights and experiences

Get Started: Scale up with UPenn's business focus

Conclusion: Your Path to Becoming an AI-powered Entrepreneur

We've covered nine fantastic free courses that can transform you into an AI-savvy entrepreneur. Let's recap:

- Google AI Essentials: Perfect for beginners, focusing on practical AI applications.
- Introduction to Generative AI: A quick start to understand the basics of generative AI.
- Generative AI with Large Language Models: For those ready to dive into the technical side.
- AI for Everyone: A non-technical introduction to AI's business impact.
- Generative AI: Introduction and Applications: A broad look at generative AI across industries.
- Generative AI for Everyone: Focused on boosting productivity with AI tools.
- Generative AI for Business Leaders: Tailored for integrating AI into business operations.
- AI for Beginners: A structured path to build a strong AI foundation.
- AI for Business Specialization: Strategic thinking about AI in business functions.

Remember, you don't need to tackle all these courses at once. Start small and build your knowledge gradually. Pick the course that aligns best with your current needs and business goals.

Embracing AI is not just about staying competitive; it's about opening new doors for innovation and growth. These courses will help you see opportunities where AI can solve problems, improve efficiency, and create value for your business.

The AI revolution is happening now.
The sooner you start learning, the better positioned you'll be to lead in this new era. Each step you take in understanding AI is a step towards future-proofing your business.

So, what are you waiting for? Choose a course, dive in, and start your journey to becoming an AI-powered entrepreneur today. The future of your business may depend on it.

internet-tools-collection
github
LLM Vibe Score0.236
Human Vibe Score0.009333333333333334
bogdanmosicaJan 23, 2025

internet-tools-collection

Internet Tools Collection A collection of tools, website and AI for entrepreneurs, web designers, programmers and for everyone else. Content by category Artificial Intelligence Developers Design Entrepreneur Video Editing Stock videos Stock Photos Stock music Search Engine Optimization Blog Posts Resume Interviews No code website builder No code game builder Side Hustle Browser Extensions Other Students Artificial Intelligence Jasper - The Best AI Writing Assistant [](https://www.jasper.ai/) Create content 5x faster with artificial intelligence. Jasper is the highest quality AI copywriting tool with over 3,000 5-star reviews. Best for writing blog posts, social media content, and marketing copy. AutoDraw [](https://www.autodraw.com/) Fast drawing for everyone. AutoDraw pairs machine learning with drawings from talented artists to help you draw stuff fast. Rytr - Best AI Writer, Content Generator & Writing Assistant [](https://rytr.me/) Rytr is an AI writing assistant that helps you create high-quality content, in just a few seconds, at a fraction of the cost! Neevo - Neevo [](https://www.neevo.ai/) Kinetix Tech [](https://kinetix.tech/) Kinetix is a no-code 3D creation tool powered by Artificial Intelligence. The web-based platform leverages AI motion capture to convert a video into a 3D animation and lets you customize your avatars and environments. We make 3D animation accessible to every creator so they can create engaging stories. LALAL.AI: 100% AI-Powered Vocal and Instrumental Tracks Remover [](https://www.lalal.ai/) Split vocal and instrumental tracks quickly and accurately with LALAL.AI. Upload any audio file and receive high-quality extracted tracks in a few seconds. Copy.ai: Write better marketing copy and content with AI [](https://www.copy.ai/) Get great copy that sells. Copy.ai is an AI-powered copywriter that generates high-quality copy for your business. Get started for free, no credit card required! Marketing simplified! OpenAI [](https://openai.com/) OpenAI is an AI research and deployment company. Our mission is to ensure that artificial general intelligence benefits all of humanity. DALL·E 2 [](https://openai.com/dall-e-2/) DALL·E 2 is a new AI system that can create realistic images and art from a description in natural language. Steve.ai - World’s fastest way to create Videos [](https://www.steve.ai/) Steve.AI is an online Video making software that helps anyone to create Videos and animations in seconds. Octie.ai - Your A.I. ecommerce marketing assistant [](https://octie.ai/) Write emails, product descriptions, and more, with A.I. Created by Octane AI. hypnogram.xyz [](https://hypnogram.xyz/) Generate images from text descriptions using AI FakeYou. Deep Fake Text to Speech. [](https://fakeyou.com/) FakeYou is a text to speech wonderland where all of your dreams come true. Craiyon, formerly DALL-E mini [](https://www.craiyon.com/) Craiyon, formerly DALL-E mini, is an AI model that can draw images from any text prompt! Deck Rocks - Create Pictch Decks [](https://www.deck.rocks/) Writely | Using AI to Improve Your Writing [](https://www.writelyai.com/) Making the art of writing accessible to all Writesonic AI Writer - Best AI Writing Assistant [](https://writesonic.com/) Writesonic is an AI writer that's been trained on top-performing SEO content, high-performing ads, and converting sales copy to help you supercharge your writing and marketing efforts. 
Smart Copy - AI Copywriting Assistant | Unbounce [](https://unbounce.com/product/smart-copy/) Generate creative AI copy on-the-spot across your favourite tools Synthesia | #1 AI Video Generation Platform [](https://www.synthesia.io/) Create AI videos by simply typing in text. Easy to use, cheap and scalable. Make engaging videos with human presenters — directly from your browser. Free demo. NVIDIA Canvas: Turn Simple Brushstrokes into Realistic Images [](https://www.nvidia.com/en-us/studio/canvas/) Create backgrounds quickly, or speed up your concept exploration so you can spend more time visualizing ideas with the help of NVIDIA Canvas. Hotpot.ai - Hotpot.ai [](https://hotpot.ai/) Hotpot.ai makes graphic design and image editing easy. AI tools allow experts and non-designers to automate tedious tasks while attractive, easy-to-edit templates allow anyone to create device mockups, social media posts, marketing images, app icons, and other work graphics. Klaviyo: Marketing Automation Platform for Email & SMS [](https://www.klaviyo.com/) Klaviyo, an ecommerce marketing automation platform for email marketing and sms syncs your tech stack with your website store to scale your business. Search listening tool for market, customer & content research - AnswerThePublic [](https://answerthepublic.com/) Use our free tool to get instant, raw search insights, direct from the minds of your customers. Upgrade to a paid plan to monitor for new ways that people talk & ask questions about your brand, product or topic. Topic Mojo [](https://topicmojo.com/) Discover unique & newest queries around any topic and find what your customers are searching for. Pulling data from 50+ sources to enhance your topic research. AI Image Enlarger | Enlarge Image Without Losing Quality! [](https://imglarger.com/) AI Image Enlarger is a FREE online image enlarger that could upscale and enhance small images automatically. Make jpg/png pictures big without losing quality. Midjourney [](https://www.midjourney.com/app/) Kaedim - AI for turning 2D images to 3D models [](https://www.kaedim3d.com/webapp) AI for turning 2D images, sketches and photos to 3D models in seconds. Overdub: Ultra realistic text to speech voice cloning - Descript [](https://www.descript.com/overdub) Create a text to speech model of your voice. Try a live demo. Getting Started [](https://magenta.tensorflow.org/get-started) Resources to learn about Magenta Photosonic AI Art Generator | Create Unique Images with AI [](https://photosonic.writesonic.com/) Transform your imagination into stunning digital art with Photosonic - the AI art generator. With its creative suggestions, this Writesonic's AI image generator can help unleash your inner artist and share your creations with the world. Image Computer [](https://image.computer/) Most downloaded Instagram Captions App (+more creator tools) [](https://captionplus.app/) Join 3 Million+ Instagram Creators who use CaptionPlus to find Instagram Captions, Hashtags, Feed Planning, Reel Ideas, IG Story Design and more. Writecream - Best AI Writer & Content Generator - Writecream [](https://www.writecream.com/) Sentence Rewriter is a free tool to reword a sentence, paragraph and even entire essays in a short amount of time. Hypotenuse AI: AI Writing Assistant and Text Generator [](https://www.hypotenuse.ai/) Turn a few keywords into original, insightful articles, product descriptions and social media copy with AI copywriting—all in just minutes. Try it free today. 
Text to Speach Listnr: Generate realistic Text to Speech voiceovers in seconds [](https://www.listnr.tech/) AI Voiceover Generator with over 600+ voiceovers in 80+ languages, go from Text to Voice in seconds. Get started for Free! Free Text to Speech: Online, App, Software, Commercial license with Natural Sounding Voices. [](https://www.naturalreaders.com/) Free text to speech online app with natural voices, convert text to audio and mp3, for personal and commercial use Developers OverAPI.com | Collecting all the cheat sheets [](https://overapi.com/) OverAPI.com is a site collecting all the cheatsheets,all! Search Engine For Devs [](https://you.com/) Spline - Design tool for 3D web browser experiences [](https://spline.design/) Create web-based 3D browser experiences Image to HTML CSS converter. Convert image to HTML CSS with AI: Fronty [](https://fronty.com/) Fronty - Image to HTML CSS code converter. Convert image to HTML powered by AI. Sketchfab - The best 3D viewer on the web [](https://sketchfab.com/) With a community of over one million creators, we are the world’s largest platform to publish, share, and discover 3D content on web, mobile, AR, and VR. Railway [](https://railway.app/) Railway is an infrastructure platform where you can provision infrastructure, develop with that infrastructure locally, and then deploy to the cloud. JSON Crack - Crack your data into pieces [](https://jsoncrack.com/) Simple visualization tool for your JSON data. No forced structure, paste your JSON and view it instantly. Locofy.ai - ship your products 3-4x faster — with low code [](https://www.locofy.ai/) Turn your designs into production-ready frontend code for mobile apps and web. Ship products 3-4x faster with your existing design tools, tech stacks & workflows. Oh Shit, Git!?! [](https://ohshitgit.com/) Carbon | Create and share beautiful images of your source code [](https://carbon.now.sh/) Carbon is the easiest way to create and share beautiful images of your source code. GPRM : GitHub Profile ReadMe Maker [](https://gprm.itsvg.in/) Best Profile Generator, Create your perfect GitHub Profile ReadMe in the best possible way. Lots of features and tools included, all for free ! HubSpot | Software, Tools, and Resources to Help Your Business Grow Better [](https://www.hubspot.com/) HubSpot’s integrated CRM platform contains the marketing, sales, service, operations, and website-building software you need to grow your business. QuickRef.ME - Quick Reference Cheat Sheet [](https://quickref.me/) Share quick reference and cheat sheet for developers massCode | A free and open source code snippets manager for developers [](https://masscode.io/) Code snippets manager for developers, developed using web technologies. Snyk | Developer security | Develop fast. Stay secure. [](https://snyk.io/) Snyk helps software-driven businesses develop fast and stay secure. Continuously find and fix vulnerabilities for npm, Maven, NuGet, RubyGems, PyPI and more. Developer Roadmaps [](https://roadmap.sh/) Community driven roadmaps, articles, guides, quizzes, tips and resources for developers to learn from, identify their career paths, know what they don't know, find out the knowledge gaps, learn and improve. CSS Generators Get Waves – Create SVG waves for your next design [](https://getwaves.io/) A free SVG wave generator to make unique SVG waves for your next web design. Choose a curve, adjust complexity, randomize! 
Box Shadows [](https://box-shadow.dev/) Tridiv | CSS 3D Editor [](http://tridiv.com/) Tridiv is a web-based editor for creating 3D shapes in CSS Glassmorphism CSS Generator - Glass UI [](https://ui.glass/generator/) Generate CSS and HTML components using the glassmorphism design specifications based on the Glass UI library. Blobmaker - Make organic SVG shapes for your next design [](https://www.blobmaker.app/) Make organic SVG shapes for your next design. Modify the complexity, contrast, and color, to generate unique SVG blobs every time. Keyframes.app [](https://keyframes.app/) cssFilters.co - Custom and Instagram like photo filters for CSS [](https://www.cssfilters.co/) Visual playground for generating CSS for custom and Instagram like photo filters. Experiment with your own uploaded photo or select one from the Unsplash collection. CSS Animations Animista - CSS Animations on Demand [](https://animista.net/) Animista is a CSS animation library and a place where you can play with a collection of ready-made CSS animations and download only those you will use. Build Internal apps Superblocks | Save 100s of developer hours on internal tools [](https://www.superblocks.com/) Superblocks is the fast, easy and secure way for developers to build custom internal tools fast. Connect your databases & APIs. Drag and drop UI components. Extend with Python or Javascript. Deploy in 1-click. Secure and Monitor using your favorite tools Budibase | Build internal tools in minutes, the easy way [](https://budibase.com/) Budibase is a modern, open source low-code platform for building modern internal applications in minutes. Retool | Build internal tools, remarkably fast. [](https://retool.com/) Retool is the fast way to build internal tools. Drag-and-drop our building blocks and connect them to your databases and APIs to build your own tools, instantly. Connects with Postgres, REST APIs, GraphQL, Firebase, Google Sheets, and more. Built by developers, for developers. Trusted by startups and Fortune 500s. Sign up for free. GitHub Repositories GitHub - vasanthk/how-web-works: What happens behind the scenes when we type www.google.com in a browser? [](https://github.com/vasanthk/how-web-works) What happens behind the scenes when we type www.google.com in a browser? - GitHub - vasanthk/how-web-works: What happens behind the scenes when we type www.google.com in a browser? GitHub - kamranahmedse/developer-roadmap: Interactive roadmaps, guides and other educational content to help developers grow in their careers. [](https://github.com/kamranahmedse/developer-roadmap) Interactive roadmaps, guides and other educational content to help developers grow in their careers. - GitHub - kamranahmedse/developer-roadmap: Interactive roadmaps, guides and other educational content to help developers grow in their careers. GitHub - apptension/developer-handbook: An opinionated guide on how to become a professional Web/Mobile App Developer. [](https://github.com/apptension/developer-handbook) An opinionated guide on how to become a professional Web/Mobile App Developer. - GitHub - apptension/developer-handbook: An opinionated guide on how to become a professional Web/Mobile App Developer. ProfileMe.dev | Create an amazing GitHub profile in minutes [](https://www.profileme.dev/) ProfileMe.dev | Create an amazing GitHub profile in minutes GitHub - Kristories/awesome-guidelines: A curated list of high quality coding style conventions and standards. 
[](https://github.com/Kristories/awesome-guidelines) A curated list of high quality coding style conventions and standards. - GitHub - Kristories/awesome-guidelines: A curated list of high quality coding style conventions and standards. GitHub - tiimgreen/github-cheat-sheet: A list of cool features of Git and GitHub. [](https://github.com/tiimgreen/github-cheat-sheet) A list of cool features of Git and GitHub. Contribute to tiimgreen/github-cheat-sheet development by creating an account on GitHub. GitHub - andreasbm/web-skills: A visual overview of useful skills to learn as a web developer [](https://github.com/andreasbm/web-skills) A visual overview of useful skills to learn as a web developer - GitHub - andreasbm/web-skills: A visual overview of useful skills to learn as a web developer GitHub - Ebazhanov/linkedin-skill-assessments-quizzes: Full reference of LinkedIn answers 2022 for skill assessments (aws-lambda, rest-api, javascript, react, git, html, jquery, mongodb, java, Go, python, machine-learning, power-point) linkedin excel test lösungen, linkedin machine learning test LinkedIn test questions and answers [](https://github.com/Ebazhanov/linkedin-skill-assessments-quizzes) Full reference of LinkedIn answers 2022 for skill assessments (aws-lambda, rest-api, javascript, react, git, html, jquery, mongodb, java, Go, python, machine-learning, power-point) linkedin excel test lösungen, linkedin machine learning test LinkedIn test questions and answers - GitHub - Ebazhanov/linkedin-skill-assessments-quizzes: Full reference of LinkedIn answers 2022 for skill assessments (aws-lambda, rest-api, javascript, react, git, html, jquery, mongodb, java, Go, python, machine-learning, power-point) linkedin excel test lösungen, linkedin machine learning test LinkedIn test questions and answers Blockchain/Crypto Dashboards [](https://dune.com/) Blockchain ecosystem analytics by and for the community. Explore and share data from Ethereum, xDai, Polygon, Optimism, BSC and Solana for free. Introduction - The Anchor Book v0.24.0 [](https://book.anchor-lang.com/introduction/introduction.html) Crypto & Fiat Exchange Super App | Trade, Save & Spend | hi [](https://hi.com/) Buy, Trade, Send and Earn Crypto & Fiat. Deposit Bitcoin, ETH, USDT and other cryptos and start earning. Get the hi Debit Card and Multi-Currency IBAN Account. Moralis Web3 - Enterprise-Grade Web3 APIs [](https://moralis.io/) Bridge the development gap between Web2 and Web3 with Moralis’ powerful Web3 APIs. Mirror [](https://mirror.xyz/) Built on web3 for web3, Mirror’s robust publishing platform pushes the boundaries of writing online—whether it’s the next big white paper or a weekly community update. Makerdao [](https://blog.makerdao.com/) Sholi — software for Investors & Traders / Sholi MetriX [](https://sholi.io/) Sholi — software for Investors & Traders / Sholi MetriX Stock Trading Quiver Quantitative [](https://www.quiverquant.com/) Quiver Quantitative Chart Prime - The only tool you'll need for trading assets across all markets [](https://chartprime.com/) ChartPrime offers a toolkit that will take your trading game to the next level. Visit our site for a full rundown of features and helpful tutorials. Learning Hacker Rank [](https://www.hackerrank.com/) Coderbyte | Code Screening, Challenges, & Interview Prep [](https://coderbyte.com/) Improve your coding skills with our library of 300+ challenges and prepare for coding interviews with content from leading technology companies. 
Competitive Programming | Participate & Learn | CodeChef [](https://www.codechef.com/) Learn competitive programming with the help of CodeChef's coding competitions. Take part in these online coding contests to level up your skills Learn to Code - for Free | Codecademy [](https://www.codecademy.com/) Learn the technical skills to get the job you want. Join over 50 million people choosing Codecademy to start a new career (or advance in their current one). Free Code Camp [](https://www.freecodecamp.org/) Learn to Code — For Free Sololearn: Learn to Code [](https://www.sololearn.com/home) Join Now to learn the basics or advance your existing skills Mimo: The coding app you need to learn to code! Python, HTML, JavaScript [](https://getmimo.com/) Join more than 17 million learners worldwide. Learn to code for free. Learn Python, JavaScript, CSS, SQL, HTML, and more with our free code learning app. Free for developers [](https://free-for.dev/#/) Your Career in Web Development Starts Here | The Odin Project [](https://www.theodinproject.com/) The Odin Project empowers aspiring web developers to learn together for free Code Learning Games CheckiO - coding games and programming challenges for beginner and advanced [](https://checkio.org/) CheckiO - coding websites and programming games. Improve your coding skills by solving coding challenges and exercises online with your friends in a fun way. Exchanges experience with other users online through fun coding activities Coding for Kids | Game-Based Programming | CodeMonkey [](https://www.codemonkey.com/) CodeMonkey is a leading coding for kids program. Through its award-winning courses, millions of students learn how to code in real programming languages. Coding Games and Programming Challenges to Code Better [](https://www.codingame.com/) CodinGame is a challenge-based training platform for programmers where you can play with the hottest programming topics. Solve games, code AI bots, learn from your peers, have fun. Learn VIM while playing a game - VIM Adventures [](https://vim-adventures.com/) VIM Adventures is an online game based on VIM's keyboard shortcuts. It's the "Zelda meets text editing" game. So come have some fun and learn some VIM! CodeCombat - Coding games to learn Python and JavaScript [](https://codecombat.com/) Learn typed code through a programming game. Learn Python, JavaScript, and HTML as you solve puzzles and learn to make your own coding games and websites. Design Useberry - Codeless prototype analytics [](https://www.useberry.com/) User testing feedback & rich insights in minutes, not months! Figma: the collaborative interface design tool. [](https://www.figma.com/) Build better products as a team. Design, prototype, and gather feedback all in one place with Figma. Dribbble - Discover the World’s Top Designers & Creative Professionals [](https://dribbble.com/) Find Top Designers & Creative Professionals on Dribbble. We are where designers gain inspiration, feedback, community, and jobs. Your best resource to discover and connect with designers worldwide. Photopea | Online Photo Editor [](https://www.photopea.com/) Photopea Online Photo Editor lets you edit photos, apply effects, filters, add text, crop or resize pictures. Do Online Photo Editing in your browser for free! Toools.design – An archive of 1000+ Design Resources [](https://www.toools.design/) A growing archive of over a thousand design resources, weekly updated for the community. Discover highly useful design tools you never thought existed. 
All Online Tools in One Box | 10015 Tools [](https://10015.io/) All online tools you need in one box for free. Build anything online with “all-in-one toolbox”. All tools are easy-to-use, blazing fast & free. Phase - Digital Design Reinvented| Phase [](https://phase.com/) Design and prototype websites and apps visually and intuitively, in a new powerful product reworked for the digital age. Animated Backgrounds [](https://animatedbackgrounds.me/) A Collection of 30+ animated backgrounds for websites and blogs.With Animated Backgrounds, set a simple, elegant background animations on your websites and blogs. Trianglify.io · Low Poly Pattern Generator [](https://trianglify.io/) Trianglify.io is a tool for generating low poly triangle patterns that can be used as wallpapers and website assets. Cool Backgrounds [](https://coolbackgrounds.io/) Explore a beautifully curated selection of cool backgrounds that you can add to blogs, websites, or as desktop and phone wallpapers. SVG Repo - Free SVG Vectors and Icons [](https://www.svgrepo.com/) Free Vectors and Icons in SVG format. ✅ Download free mono or multi color vectors for commercial use. Search in 300.000+ Free SVG Vectors and Icons. Microcopy - Short copy text for your website. [](https://www.microcopy.me/) Search micro UX copy text: slogans, headlines, notifications, CTA, error messages, email, account preferences, and much more. 3D icons and icon paks - Free3Dicon [](https://free3dicon.com/) All 3D icons you need in one place. This is a collection of free, beautiful, trending 3D icons, that you can use in any project. Love 3D Icon [](https://free3dicons.com/) Downloads free 3D icons GIMP - GNU Image Manipulation Program [](https://www.gimp.org/) GIMP - The GNU Image Manipulation Program: The Free and Open Source Image Editor blender.org - Home of the Blender project - Free and Open 3D Creation Software [](https://www.blender.org/) The Freedom to Create 3D Design Software | 3D Modeling on the Web | SketchUp [](https://www.sketchup.com/) SketchUp is a premier 3D design software that truly makes 3D modeling for everyone, with a simple to learn yet robust toolset that empowers you to create whatever you can imagine. Free Logo Maker - Create a Logo in Seconds - Shopify [](https://www.shopify.com/tools/logo-maker) Free logo maker tool to generate custom design logos in seconds. This logo creator is built for entrepreneurs on the go with hundreds of templates, free vectors, fonts and icons to design your own logo. The easiest way to create business logos online. All your design tools in one place | Renderforest [](https://www.renderforest.com/) Time to get your brand noticed. Create professional videos, logos, mockups, websites, and graphics — all in one place. Get started now! Prompt Hero [](https://prompthero.com/) Type Scale - A Visual Calculator [](https://type-scale.com/) Preview and choose the right type scale for your project. Experiment with font size, scale and different webfonts. DreamFusion: Text-to-3D using 2D Diffusion [](https://dreamfusion3d.github.io/) DreamFusion: Text-to-3D using 2D Diffusion, 2022. The branding style guidelines documents archive [](https://brandingstyleguides.com/) Welcome to the brand design manual documents directory. Search over our worldwide style assets handpicked collection, access to PDF documents for inspiration. Super designer | Create beautiful designs with a few clicks [](https://superdesigner.co/) Create beautiful designs with a few clicks. 
Simple design tools to generate unique patterns, backgrounds, 3D shapes, colors & images for social media, websites and more Readymag—a design tool to create websites without coding [](https://readymag.com/) Meet the most elegant, simple and powerful web-tool for designing websites, presentations, portfolios and all kinds of digital publications. ffflux: Online SVG Fluid Gradient Background Generator | fffuel [](https://fffuel.co/ffflux/) SVG generator to make fluid gradient backgrounds that feel organic and motion-like. Perfect to add a feeling of motion and fluidity to your web designs. Generate unique SVG design assets | Haikei [](https://haikei.app/) A web-based design tool to generate unique SVG design assets for websites, social media, blog posts, desktop and mobile wallpapers, posters, and more! Our generators let you discover, customize, randomize, and export generative SVG design assets ready to use with your favorite design tools. UI/UX - Inspirational Free Website Builder Software | 10,000+ Free Templates [](https://nicepage.com/) Nicepage is your website builder software breaking limitations common for website builders with revolutionary freehand positioning. 7000+ Free Templates. Easy Drag-n-Drop. No coding. Mobile-friendly. Clean HTML. Super designer | Create beautiful designs with a few clicks [](https://superdesigner.co/) Create beautiful designs with a few clicks. Simple design tools to generate unique patterns, backgrounds, 3D shapes, colors & images for social media, websites and more Pika – Create beautiful mockups from screenshots [](https://pika.style/) Quickly create beautiful website and device mockup from screenshot. Pika lets you capture website screenshots form URL, add device and browser frames, customize background and more LiveTerm [](https://liveterm.vercel.app/) Minimal Gallery – Web design inspiration [](https://minimal.gallery/) For the love of beautiful, clean and functional websites. Awwwards - Website Awards - Best Web Design Trends [](https://www.awwwards.com/) Awwwards are the Website Awards that recognize and promote the talent and effort of the best developers, designers and web agencies in the world. Design Systems For Figma [](https://www.designsystemsforfigma.com/) A collection of Design Systems for Figma from all over the globe. Superside: Design At Scale For Ambitious Brands [](https://www.superside.com/) We are an always-on design company. Get a team of dedicated designers, speedy turnarounds, magical creative collaboration tech and the top 1% of global talent. UXArchive - Made by Waldo [](https://uxarchive.com/) UXArchive the world's largest library of mobile user flows. Be inspired to design the best user experiences. Search by Muzli [](https://search.muz.li/) Search, discover, test and create beautiful color palettes for your projects Siteinspire | Web Design Inspiration [](https://www.siteinspire.com/) SAVEE [](https://savee.it/) The best way to save and share inspiration. A little corner of the internet to find good landing page copywriting examples [](https://greatlandingpagecopy.com/) A little corner of the internet to find great landing page copywriting examples. The Best Landing Page Examples For Design Inspiration - SaaS Landing Page [](https://saaslandingpage.com/) SaaS Landing Page showcases the best landing page examples created by top-class SaaS companies. Get ideas and inspirations for your next design project. 
Websites
Free templates
Premium Bootstrap Themes and Templates: Download @ Creative Tim [](https://www.creative-tim.com/) UI kits, templates and dashboards built on top of Bootstrap, Vue.js, React, Angular, Node.js and Laravel. Join over 2,014,387+ creatives to access all our products!
Free Bootstrap Themes, Templates, Snippets, and Guides - Start Bootstrap [](https://startbootstrap.com/) Start Bootstrap develops free-to-download, open-source Bootstrap 5 themes, templates, and snippets, and creates guides and tutorials to help you learn more about designing and developing with Bootstrap.
Free Website Templates [](https://freewebsitetemplates.com/) Get your free website templates here and use them on your website without needing to link back to us.
One Page Love - One Page Website Inspiration and Templates [](https://onepagelove.com/) One Page Love is a One Page website design gallery showcasing the best single-page websites, templates and resources.
Free CSS | 3400 Free Website Templates, CSS Templates and Open Source Templates [](https://www.free-css.com/) Free CSS has 3400 free website templates; all are free CSS templates, open-source templates or Creative Commons templates.
Free Bootstrap Themes and Website Templates | BootstrapMade [](https://bootstrapmade.com/) At BootstrapMade, we create beautiful website templates and Bootstrap themes using Bootstrap, the most popular HTML, CSS and JavaScript framework.
Free and Premium Bootstrap Themes, Templates by Themesberg [](https://themesberg.com/) Free and premium Bootstrap themes, templates, admin dashboards and UI kits used by over 38,820 web developers and software companies.
HTML, Vue.js and React templates for startup landing pages - Cruip [](https://cruip.com/) Cruip is a gallery of premium and free HTML, Vue.js and React templates for startups and SaaS.
Free Website Templates Download | WordPress Themes - W3Layouts [](https://w3layouts.com/) Want to download free website templates? W3Layouts WordPress themes and website templates are built with responsive web design techniques. Download now!
Free HTML Landing Page Templates and UI Kits | UIdeck [](https://uideck.com/) Free HTML landing page templates, Bootstrap themes, React templates, HTML templates, Tailwind templates, and UI kits.
Create Online Graphics
Snappa - Quick & Easy Graphic Design Software [](https://snappa.com/) Snappa makes it easy to create any type of online graphic. Create & publish images for social media, blogs, ads, and more!
Canva [](https://www.canva.com/)
Polotno Studio - Make graphical designs [](https://studio.polotno.com) Free online design editor. Create images for social media, YouTube previews, Facebook covers.
Free Logo Maker: Design Custom Logos | Adobe Express [](https://www.adobe.com/express/create/logo) The Adobe Express logo maker is instant, intuitive, and intelligent. Use it to generate a wide range of possibilities for your own logo.
Photo Editor: Fotor – Free Online Photo Editing & Image Editor [](https://www.fotor.com/) Fotor's online photo editor helps you edit photos with free online photo editing tools. Crop photos, resize images, and add effects/filters, text, and graphics in just a few clicks. Photoshop online has never been easier with Fotor's free online photo editor.
VistaCreate – Free Graphic Design Software with 70,000+ Free Templates [](https://create.vista.com/) Looking for free graphic design software? Easily create professional designs with VistaCreate, a free design tool with powerful features and 50K+ ready-made templates.
Draw Freely | Inkscape [](https://inkscape.org/) Inkscape is professional-quality vector graphics software which runs on Linux, Mac OS X and Windows desktop computers.
Visual & Video Maker Trusted By 11 Million Users - Piktochart [](https://piktochart.com/) With Piktochart, you can create professional-looking infographics, flyers, posters, charts, videos, and more. No design experience needed. Start for free.
The Web's Favorite Online Graphic Design Tool | Stencil [](https://getstencil.com/) Stencil is a fantastically easy-to-use online graphic design tool and image editor built for business owners, social media marketers, and bloggers.
Pablo by Buffer - Design engaging images for your social media posts in under 30 seconds [](https://pablo.buffer.com/) Buffer makes it super easy to share any page you're reading. Keep your Buffer topped up and we automagically share them for you through the day.
Free Online Graphic Design Software | Create stunning designs in seconds. [](https://desygner.com/) Easy drag-and-drop graphic design tool for anyone to use, with 1000's of ready-made templates. Create & print professional business cards, flyers, social posts and more.
Color Palettes
Color Palettes for Designers and Artists - Color Hunt [](https://colorhunt.co/) Discover the newest hand-picked color palettes of Color Hunt. Get color inspiration for your design and art projects.
Coolors - The super fast color palettes generator! [](https://coolors.co/) Generate or browse beautiful color combinations for your designs.
Get color palette inspiration from nature - colorpalettes.earth [](https://colorpalettes.earth/) Color palettes inspired by beautiful nature photos.
Color Palette Generator - Create Beautiful Color Schemes [](https://colors.muz.li/) Search, discover, test and create beautiful color palettes for your projects.
A Most Useful Color Picker | 0to255 [](https://0to255.com/) Find lighter and darker colors based on any color. Discover why over two million people have used 0to255 to choose colors for their website, logo, room interior, and print design projects.
Colour Contrast Checker [](https://colourcontrast.cc/) Check the contrast between different colour combinations against WCAG standards.
Fonts
Google Fonts [](https://fonts.google.com/) Making the web more beautiful, fast, and open through great typography.
Fonts In Use – Type at work in the real world. [](https://fontsinuse.com/) A searchable archive of typographic design, indexed by typeface, format, and topic.
Wordmark - Helps you choose fonts! [](https://wordmark.it/) Wordmark helps you choose fonts by quickly displaying your text with your fonts.
OH no Type Company [](https://ohnotype.co/) OH no Type Co. Retail and custom typefaces. Life's a thrill, fonts are chill!
Illustrations
Illustrations | unDraw [](https://undraw.co/illustrations) The design project with open-source illustrations for any idea you can imagine and create. Create beautiful websites, products and applications with your color, for free.
Design Junction [](https://designjunction.xyz/) Design Junction is a one-stop resource library for designers and creatives, with a curated list of the best resources handpicked from around the web.
Humaaans: Mix-&-Match illustration library [](https://www.humaaans.com/) Mix-&-match illustrations of people with a design library for InVision Studio and Sketch.
Stubborn - Free Illustrations Generator [](https://stubborn.fun/) Free illustrations generator for Figma and Sketch. Get the opportunity to design your characters using symbols and styles.
Open Peeps, Hand-Drawn Illustration Library [](https://www.openpeeps.com/) Open Peeps is a hand-drawn illustration library to create scenes of people. You can use them in product illustrations, marketing, comics, product states, user flows, personas, storyboarding, quinceañera invitations, or whatever you want!
Reshot | Free icons & illustrations [](https://www.reshot.com/) Design freely with instant downloads of curated SVG icons and vector illustrations. All free with commercial licensing. No attribution required.
Blush: Illustrations for everyone [](https://blush.design/) Blush makes it easy to add free illustrations to your designs. Play with fully customizable graphics made by artists across the globe.
Mockups
Angle 4 - 5000+ Device Mockups for Figma, Sketch and XD [](https://angle.sh/) Vector mockups for iPhone, iPad, Android and Mac devices, including the new iPhone 13, Pro, Pro Max and Mini. Perfect for presenting your apps. Huge library of components, compositions, wallpapers and plugins made for Figma, Sketch and XD.
Make Mockups, Logos, Videos and Designs in Seconds [](https://placeit.net/) Get unlimited downloads on all our 100K templates! You can make a logo, video, mockup, flyer, business card and social media image in seconds right from your browser.
Free and premium tools for graphic designers | Lstore Graphics [](https://www.ls.graphics/) Free and premium mockups, UI/UX tools, and scene creators for busy designers.
Logo Design & Brand Identity Platform for Entrepreneurs | Looka [](https://looka.com/) Logojoy is now Looka! Design a logo, make a website, and create a brand identity you'll love with the power of artificial intelligence. 100% free to use.
Create stunning product mockups easily and online - Smartmockups [](https://smartmockups.com/) Smartmockups enables you to create stunning high-resolution mockups right inside your browser, within one interface, across multiple devices.
Previewed - Free mockup generator for your app [](https://previewed.app/) Join Previewed to create stunning 3D image shots and animations for your app. Choose from hundreds of ready-made mockups, or create your own.
Free Design Software - Graphic Online Maker - Glorify [](https://www.glorify.com/) Create professional and high-converting social media posts, ads, infographics, presentations, and more with Glorify, a free design software & graphic maker.
Other
BuiltWith Technology Lookup [](https://builtwith.com/) Web technology information profiler tool. Find out what a website is built with.
Compress JPEG Images Online [](https://compressjpeg.com/) Compress JPEG images and photos for displaying on web pages, sharing on social networks or sending by email.
PhotoRoom - Remove Background and Create Product Pictures [](https://www.photoroom.com/) Create product and portrait pictures using only your phone. Remove the background, change the background and showcase products.
Magic Eraser - Remove unwanted things from images in seconds [](https://www.magiceraser.io/) Magic Eraser - use AI to remove unwanted things from images in seconds. Upload an image, mark the bit you need removed, and download the fixed-up image.
Compressor.io - optimize and compress JPEG photos and PNG images [](https://compressor.io/) Optimize and compress JPEG, PNG, SVG, GIF and WEBP images online. Compress, resize and rename your photos for free.
Remove Video Background – Unscreen [](https://www.unscreen.com/) Remove the background of any video - 100% automatically, online & free! Goodbye greenscreen. Hello Unscreen.
Noun Project: Free Icons & Stock Photos for Everything [](https://thenounproject.com/) Noun Project features the most diverse collection of icons and stock photos ever. Download SVGs and PNGs. Browse over 5 million art-quality icons and photos.
Design Principles [](https://principles.design/) An open-source collection of design principles and methods.
Shapefest™ - A massive library of free 3D shapes [](https://www.shapefest.com/) A massive free library of beautifully rendered 3D shapes. 160,000+ high-resolution PNG images in one cohesive library.
Learning UX
Degreeless.design - Everything I Learned in Design School [](https://degreeless.design/) This is a list of everything I've found useful in my journey of learning design, and an ongoing list of things I think you should read. For budding UX, UI, interaction, or whatever-other-title designers.
UX Tools | Practical UX skills and tools [](https://uxtools.co/) Lessons and resources from two full-time product designers.
Built For Mars [](https://builtformars.com/) On a mission to help the world build better user experiences by demystifying UX. Thousands of hours of research packed into UX case studies.
Case Study Club – Curated UX Case Study Gallery [](https://www.casestudy.club/) Case Study Club is the biggest curated gallery of the best UI/UX design case studies. Get inspired by industry-leading designers openly sharing their UX process.
The Guide to Design [](https://start.uxdesign.cc/) A self-guided class to help you get started in UX and answer key questions about craft, design, and career.
Uxcel - Where design careers are built [](https://app.uxcel.com/explore) Available on any device, anywhere in the world, Uxcel is the best way to improve and learn UX design online in just 5 minutes per day.
UI & UX Design Tips by Jim Raptis [](https://www.uidesign.tips/) Learn UI & UX design with practical byte-sized tips and in-depth articles from Jim Raptis.
Entrepreneur
Instant Username Search [](https://instantusername.com/#/) Instant Username Search checks whether your username is available on more than 100 social media sites. Results appear instantly as you type.
Flourish | Data Visualization & Storytelling [](https://flourish.studio/) Beautiful, easy data visualization and storytelling.
PiPiADS - #1 TikTok Ads Spy Tool [](https://www.pipiads.com/) PiPiADS is a TikTok ads spy tool covering TikTok advertising examples, the TikTok ads library, best practices, and costs, so you can master TikTok ads and the TikTok ads manager.
Minea - The best adspy for product search in ecommerce and dropshipping [](https://en.minea.com/) Minea is the ultimate e-commerce product search tool. Minea tracks all ads on all networks: Facebook Ads, influencer product placements, Snapchat - all networks are tracked. Stop paying 149€ for an ad spy limited to one network and discover Minea.
AdSpy [](https://adspy.com/)
Google Trends [](https://trends.google.com/)
ScoreApp: Advanced Quiz Funnel Marketing | Make a Quiz Today [](https://www.scoreapp.com/) ScoreApp makes quiz funnel marketing easy, so you can attract relevant warm leads and insightful data, and increase your sales. Try it for free today.
Mailmodo - Send Interactive Emails That Drive Conversions [](https://www.mailmodo.com/) Use Mailmodo to create and send interactive emails your customers love. Drive conversions and get better email ROI. Sign up for a free trial now.
185 Top E-Commerce Sites Ranked by User Experience Performance – Baymard Institute [](https://baymard.com/ux-benchmark) See the ranked UX performance of the 185 largest e-commerce sites in the US and Europe. The chart summarizes 50,000+ UX performance ratings.
Metricool - Analyze, manage and measure your digital content [](https://metricool.com/) Social media scheduling, web analytics, link in bio and reporting. Metricool is free for life for one brand.
Visualping: #1 Website change detection, monitoring and alerts [](https://visualping.io/) More than 1.5 million users monitor changes in websites with Visualping, the no. 1 website change detection, monitoring and alerting tool.
Gumroad – Sell what you know and see what sticks [](https://gumroad.com/) Gumroad is a powerful, but simple, e-commerce platform. We make it easy to earn your first dollar online by selling digital products, memberships and more.
Product Hunt – The best new products in tech. [](https://www.producthunt.com/) Product Hunt is a curation of the best new products, every day. Discover the latest mobile apps, websites, and technology products that everyone's talking about.
12ft Ladder [](https://12ft.io/) Show me a 10ft paywall, I'll show you a 12ft ladder.
namecheckr | Social and Domain Name Availability Search For Brand Professionals [](https://www.namecheckr.com/) Social and domain name availability search for brand professionals.
Excel AI Formula Generator - Excelformulabot.com [](https://excelformulabot.com/) Transform your text instructions into Excel formulas in seconds with the help of AI.
Z-Library [](https://z-lib.org/)
Global Print On Demand Platform | Gelato [](https://www.gelato.com/) Create and sell custom products online. With local production in 33 countries, easy integration, and 24/7 customer support, Gelato is an all-in-one platform.
Freecycle: Front Door [](https://freecycle.org/)
Free eBooks | Project Gutenberg [](https://www.gutenberg.org/) Project Gutenberg is a library of free eBooks.
Convertio — File Converter [](https://convertio.co/) Convertio - easy tool to convert files online. More than 309 different document, image, spreadsheet, ebook, archive, presentation, audio and video formats supported.
Namechk [](https://namechk.com/)
Crazy Egg Website — Optimization | Heatmaps, Recordings, Surveys & A/B Testing [](https://www.crazyegg.com/) Use Crazy Egg to see what's hot and what's not, and to know what your web visitors are doing, with tools such as heatmaps, recordings, surveys, A/B testing & more.
Ifttt [](https://ifttt.com/)
Also Asked [](https://alsoasked.com/)
Business Name Generator - Easily create Brandable Business Names - Namelix [](https://namelix.com/) Namelix uses artificial intelligence to create a short, brandable business name. Search for domain availability, and instantly generate a logo for your new business.
Merch Informer [](https://merchinformer.com/)
Headline Generator [](https://www.title-generator.com/) Title Generator: create 700 headlines with one click: content ideas, catchy headlines, ad campaign email subject lines, and emotional titles. Simple, efficient, one click.
Make [](https://www.make.com/en)
Create and add calculator widgets to your website | CALCONIC_ [](https://www.calconic.com/) Web calculator builder that empowers you to choose from pre-made templates or build your own calculator widgets from scratch, without any programming knowledge.
Boost Your Views And Subscribers On YouTube - vidIQ [](https://vidiq.com/) vidIQ helps you acquire the tools and knowledge needed to grow your audience faster on YouTube and beyond.
Last Pass [](https://www.lastpass.com/)
Starter Story: Learn How People Are Starting Successful Businesses [](https://www.starterstory.com/) Starter Story interviews successful entrepreneurs and shares the stories behind their businesses. In each interview, we ask how they got started, how they grew, and how they run their business today.
How To Say No [](https://www.starterstory.com/how-to-say-no) Saying no is hard, but it's also essential for your sanity. Here are some templates for how to say no - so you can take back your life.
Think with Google - Discover Marketing Research & Digital Trends [](https://www.thinkwithgoogle.com/) Uncover the latest marketing research and digital trends with data reports, guides, infographics, and articles from Think with Google.
ClickUp™ | One app to replace them all [](https://clickup.com/) Our mission is to make the world more productive. To do this, we built one app to replace them all - tasks, docs, goals, and chat.
The Manual [](https://manual.withcompound.com/) Wealth-planning resources for founders and startup employees.
Software for Amazon FBA Sellers & Walmart Sellers | Helium 10 [](https://www.helium10.com/) If you're looking for the best software for Amazon FBA & Walmart sellers on the market, check out Helium 10's capabilities online today!
Buffer: All-you-need social media toolkit for small businesses [](https://buffer.com/) Use Buffer to manage your social media so that you have more time for your business. Join 160,000+ small businesses today.
CPGD — The Consumer Packaged Goods Directory [](https://www.cpgd.xyz/) The Consumer Packaged Goods Directory is a platform to discover new brands and resources. We share weekly trends in our newsletter and partner with services to provide vetted, recommended platforms for our Directory brands.
Jungle Scout [](https://www.junglescout.com/)
BuzzSumo | The World's #1 Content Marketing Platform [](https://buzzsumo.com/) BuzzSumo powers the strategies of 500k+ marketers, with content marketing data on 8b articles, 42m websites, 300t engagements, 500k journalists & 492m questions.
Login - Capital [](https://app.capital.xyz/) Raise, hold, spend, and send funds — all in one place.
Marketing
Pictory – Video Marketing Made Easy - Pictory.ai [](https://pictory.ai/) Pictory's powerful AI enables you to create and edit professional-quality videos using text; no technical skills required or software to download.
Tolstoy | Communicate with interactive videos [](https://www.gotolstoy.com/) Start having face-to-face conversations with your customers.
Create Email Marketing Your Audience Will Love - MailerLite [](https://www.mailerlite.com/) Email marketing tools to grow your audience faster and drive revenue smarter. Get free access to premium features with a 30-day trial! Sign up now!
Hypefury - Schedule & Automate Social Media Marketing [](https://hypefury.com/) Save time on social media while creating more value and growing your audience faster. Schedule & automate your social media experience!
Klaviyo: Marketing Automation Platform for Email & SMS [](https://www.klaviyo.com/) Klaviyo, an ecommerce marketing automation platform for email marketing and SMS, syncs your tech stack with your website store to scale your business.
Online Email & Lead Scraper | Klean Leads [](https://www.kleanleads.com/) Klean Leads is an online email scraper & email address finder. Use it to book more appointments, get more replies, and close more sales.
PhantomBuster [](https://phantombuster.com/)
Call to Action Examples - 300+ CTA Phrases [](https://ctaexamples.com/) See the best CTA examples for every situation covered by the library of 300+ CTA goals. Use the examples to create your own CTAs in minutes.
Creative Center: one-stop creative solution for TikTok [](https://ads.tiktok.com/business/creativecenter/pc/en?from=001010) Come get your next great idea for TikTok. Here you can find the best-performing ads, viral videos, and trending hashtags across regions and verticals.
Groove.cm - GrooveFunnels, GroovePages, GrooveKart and GrooveMail with CRM and Digital Marketing Automation [](https://groove.cm/) Groove is a website creator, page builder, sales funnel maker, membership site platform, email autoresponder, blog tool, shopping cart system, ecommerce store solution, affiliate manager, video marketing software and more apps to help build your online business.
SurveyMonkey: The World's Most Popular Free Online Survey Tool [](https://www.surveymonkey.com/) Use SurveyMonkey to drive your business forward by using our free online survey tool to capture the voices and opinions of the people who matter most to you.
Video Maker | Create Videos Online | Promo.com [](https://promo.com/) Free customizable video maker to help boost your business. Video creator for ads, social media, product and explainer videos, and anything else you need!
beehiiv — The newsletter platform built for growth [](https://www.beehiiv.com/) Access the best tools available in email, helping your newsletter scale and monetize like never before.
GetResponse | Professional Email Marketing for Everyone [](https://www.getresponse.com/) No matter your level of expertise, we have a solution for you. At GetResponse, it's email marketing done right. Start your free account today!
Search Email Newsletter Archives : Email Tuna [](https://emailtuna.com/) Explore newsletters without subscribing. Get email design ideas, discount coupon codes and exclusive newsletter deals. A database of email newsletters archived from all over the internet.
Other Tools
Simplescraper — Scrape Websites and turn them into APIs [](https://simplescraper.io/) Web scraping made easy — a powerful and free Chrome extension for scraping websites in your browser, automated in the cloud, or via API. No code required.
Exploding Topics - Discover the hottest new trends. [](https://explodingtopics.com/) See new market opportunities, trending topics, emerging technology, hot startups and more on Exploding Topics.
Scribe | Visual step-by-step guides [](https://scribehow.com/) By capturing your process while you work, Scribe automatically generates a visual guide, ready to share with the click of a button.
Get It Free – The internet's BEST place to find free stuff! [](https://getitfree.us/) The internet's BEST place to find free stuff!
Inflact by Ingramer – Marketing toolkit for Instagram [](https://inflact.com/) Sell on Instagram, build your audience, and curate content with the right set of tools.
Free Online Form Builder & Form Creator | Jotform [](https://www.jotform.com/) We believe the right form makes all the difference. Go from busywork to less work with powerful forms that use conditional logic, accept payments, generate reports, and automate workflows.
Manage Your Team's Projects From Anywhere | Trello [](https://trello.com/en) Trello is the ultimate project management tool. Start up a board in seconds, automate tedious tasks, and collaborate anywhere, even on mobile.
TikTok hashtag generator - tiktokhashtags.com [](https://tiktokhashtags.com/) Find out which are the best hashtags for your TikTok post.
Create Infographics, Reports and Maps - Infogram [](https://infogram.com/) Infogram is an easy-to-use infographic and chart maker. Create and share beautiful infographics, online reports, and interactive maps. Make your own here.
Confetto - Create Instagram content in minutes [](https://www.confet.to/) Confetto is an all-in-one social media marketing tool built for SMBs and social media managers. Confetto helps you create high-quality content for your audience that maximizes your reach and engagement on social media. Design, copywrite, plan and schedule content all in one place.
Find email addresses in seconds • Hunter (Email Hunter) [](https://hunter.io/) Hunter is the leading solution to find and verify professional email addresses. Start using Hunter and connect with the people that matter for your business.
PlayPhrase.me: Site for cinema archaeologists. [](https://playphrase.me/) Travel and explore the world of cinema. The largest collection of video quotes from movies on the web.
#1 Free SEO Tools → SEO Review Tools [](https://www.seoreviewtools.com/) SEO Review Tools: 42+ free online SEO tools built with ❤! → Rank checker → Domain Authority checker → Keyword tool → Backlink checker.
Podcastle: Seamless Podcast Recording & Editing [](https://podcastle.ai/) Podcastle is the simplest way to create professional-quality podcasts. Record, edit, transcribe, and export your content with the power of AI, in an intuitive web-based platform.
Save Ads from TikTok & Facebook Ad Library - Foreplay [](https://www.foreplay.co/) The best way to save ads from TikTok Creative Center and Facebook Ad Library, organize them into boards, and share ad inspiration with your team. Supercharge your creative strategy.
SiteRight - Automate Your Business [](https://www.siteright.co/) SiteRight combines the abilities of multiple online resources into a single dashboard, allowing you to have full control over how you manage your business.
Diffchecker - Compare text online to find the difference between two text files [](https://www.diffchecker.com/) Diffchecker will compare text to find the difference between two text files. Just paste your files and click Find Difference!
Yout.com [](https://yout.com/) Yout.com allows you to record videos from YouTube, Facebook, SoundCloud, VK and others to many formats, with clipping. Intuitively easy to use - Yout is the Internet DVR, with a bit of extra.
AI Content Generation | Competitor Analysis - Predis.ai [](https://predis.ai/) Predis helps brands and influencers communicate better on social media by providing AI-powered content strategy analysis, content and hashtag recommendations.
Castr | #1 Live Video Streaming Solution With Video Hosting [](https://castr.io/) Castr is a live video streaming solution platform that delivers enterprise-grade live videos globally with CDN. Live event streaming, video hosting, pre-recorded live, multi-stream – all in one place using Castr.
Headliner - Promote your podcast, radio show or blog with video [](https://www.headliner.app/) Easily create videos to promote your podcast, radio show or blog. Share to Instagram, Facebook, Twitter, YouTube, LinkedIn and anywhere video lives.
Create Presentations, Infographics, Design & Video | Visme [](https://www.visme.co/) Create professional presentations, interactive infographics, beautiful designs and engaging videos, all in one place. Start using Visme today.
Designrr - Create eBooks, Kindle books, Leadmagnets, Flipbooks and Blog posts from your content in 2 minutes [](https://designrr.io/) Upload any web page, MS Word doc, video, podcast or YouTube video and it will create a stunning ebook and convert it to PDF, ePub, Kindle or Flipbook. Quick and easy to use. Full training, 24x7 support and a Facebook group included.
SwipeWell | Swipe File Software [](https://www.swipewell.app/) The only Chrome extension dedicated to helping you save, organize, and reference marketing examples (so you never feel stumped).
Tango | Create how-to guides, in seconds [](https://www.tango.us/) Tango takes the pain out of documenting processes by automatically generating how-to guides while you work. Empower your team to do their best work.
Ad Creative Bank [](https://www.theadcreativebank.com/) Get inspired by ads from across industries, learn new best practices, and start thinking creatively about your brand's digital creative.
Signature Hound • Free Email Signature and Template Generator [](https://signaturehound.com/) Our email signature generator is free and easy to use. Our customizable templates work with Gmail, Outlook, Office 365, Apple Mail and more.
Organize All Of Your Marketing In One Place - CoSchedule [](https://coschedule.com/) Get more done in less time with the only work management software for marketers.
B Ok - Books [](https://b-ok.xyz/categories)
OmmWriter [](https://ommwriter.com/)
Rebrandly | Custom URL Shortener, Branded Link Management, API [](https://www.rebrandly.com/) URL shortener with custom domains. Shorten, brand and track URLs with the industry-leading link management platform. Free to try. API, short URLs, custom domains.
Common Tools [](https://www.commontools.org/)
Book Bolt [](https://bookbolt.io/)
Zazzle [](https://www.zazzle.com/)
InspiroBot [](https://inspirobot.me/)
Download Free Cheat Sheets or Create Your Own! - Cheatography.com: Cheat Sheets For Every Occasion [](https://cheatography.com/) Find thousands of incredible, original programming cheat sheets, all free to download.
No Code Chatbot Platform | Free Chatbot Platform | WotNot [](https://wotnot.io/) WotNot is the best no-code chatbot platform to build AI bots easily without coding. Deploy bots and live chat on your website, Messenger, WhatsApp, and more.
SpyFu - Competitor Keyword Research Tools for Google Ads PPC & SEO [](https://www.spyfu.com/)
Systeme.io - The only tool you need to launch your online business [](https://systeme.io/) Systeme.io has all the tools you need to grow your online business. Click here to create your FREE account!
Productivity
Temp Mail [](https://temp-mail.org/en/)
The Visual Collaboration Platform for Every Team | Miro [](https://miro.com/) Scalable, secure, cross-device and enterprise-ready team collaboration whiteboard for distributed teams. Join 35M+ users from around the world.
Grammarly: Free Online Writing Assistant [](https://www.grammarly.com/) Millions trust Grammarly's free writing app to make their online writing clear and effective. Getting started is simple — download Grammarly's extension today.
Rize · Maximize Your Productivity [](https://rize.io/) Rize is a smart time tracker that improves your focus and helps you build better work habits.
Motion | Manage calendars, meetings, projects & tasks in one app [](https://www.usemotion.com/) Automatically prioritize tasks, schedule meetings, and resolve calendar conflicts. Used by over 10k CEOs and professionals to improve focus, get more done, and streamline their workday.
Notion – One workspace. Every team. [](https://www.notion.so/) We're more than a doc. Or a table. Customize Notion to work the way you do.
Loom: Async Video Messaging for Work | Loom [](https://www.loom.com/) Record your screen, share your thoughts, and get things done faster with async video.
Zapier | Automation that moves you forward [](https://zapier.com/) Workflow automation for everyone. Zapier automates your work across 5,000+ app integrations, so you can focus on what matters.
Rows — The spreadsheet with superpowers [](https://rows.com/) Combine the power of a spreadsheet with built-in integrations from your business apps. Automate workflows and build tools that make work simpler.
Free Online Form Builder | Tally [](https://tally.so/) Tally is the simplest way to create free forms & surveys. Create any type of form in seconds, without knowing how to code, and for free.
Highbrow | Learn Something New Every Day. Join for Free! [](https://gohighbrow.com/) Highbrow helps you learn something new every day with 5-minute lessons delivered to your inbox every morning. Join over 400,000 lifelong learners today!
Slick Write | Check your grammar. Proofread online. [](https://www.slickwrite.com/#!home) Slick Write is a powerful, FREE application that makes it easy to check your writing for grammar errors, potential stylistic mistakes, and other features of interest. Whether you're a blogger, novelist, SEO professional, or student writing an essay for school, Slick Write can help take your writing to the next level.
Reverso [](https://www.reverso.net)
Hemingway Editor [](https://hemingwayapp.com/)
Web Apps by 123apps - Edit, Convert, Create [](https://123apps.com/)
Splitbee – Your all-in-one analytics and conversion platform [](https://splitbee.io/) Track and optimize your online business with Splitbee. Analytics, funnels, automations, A/B testing and more.
PDF Tools
Free PDF, Video, Image & Other Online Tools - TinyWow [](https://tinywow.com/)
Smallpdf.com - A Free Solution to all your PDF Problems [](https://smallpdf.com/) Smallpdf - the platform that makes it super easy to convert and edit all your PDF files. Solving all your PDF problems in one place - and yes, free.
Sejda helps with your PDF tasks [](https://www.sejda.com/) Sejda helps with your PDF tasks. Quick and simple online service, no installation required! Split, merge or convert PDF to images, alternate mix or split scans, and many more.
iLovePDF | Online PDF tools for PDF lovers [](https://www.ilovepdf.com/) iLovePDF is an online service to work with PDF files, completely free and easy to use. Merge PDF, split PDF, compress PDF, Office to PDF, PDF to JPG and more!
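If you would rather script a PDF task than upload files to an online service, the same kind of merge these tools perform can be done locally. A minimal sketch using the open-source pypdf library (an assumption on my part — pypdf is not one of the tools listed above; install it with pip install pypdf, and the file names are placeholders):

```python
# Merge several PDFs into one file locally, mirroring the online merge tools above.
from pypdf import PdfWriter  # assumes: pip install pypdf

writer = PdfWriter()
for path in ["chapter1.pdf", "chapter2.pdf"]:  # placeholder input files
    writer.append(path)  # copies every page of each source PDF into the writer

with open("merged.pdf", "wb") as output:
    writer.write(output)
```

The same library also handles splitting and page extraction, so one small script can replace several of the one-off web tools when you need the operation to be repeatable.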
Text rewrite
QuillBot [](https://quillbot.com/)
Pre Post SEO : Online SEO Tools [](https://www.prepostseo.com/) Free online SEO tools: plagiarism checker, grammar checker, image compressor, website SEO checker, article rewriter, backlink checker.
Wordtune | Your personal writing assistant & editor [](https://www.wordtune.com/) Wordtune is the ultimate AI writing tool that rewrites, rephrases, and rewords your writing! Trusted by over 1,000,000 users, Wordtune strengthens articles, academic papers, essays, emails and any other online content.
Aliexpress alternatives
CJdropshipping - Dropshipping from Worldwide to Worldwide! [](https://cjdropshipping.com/) China's reliable eCommerce dropshipping fulfillment supplier, helping small businesses ship worldwide, with dropshipping and fulfillment services that are friendly to start-ups and small businesses, including Shopify dropshipping.
SaleHoo [](https://www.salehoo.com/)
Alibaba.com: Manufacturers, Suppliers, Exporters & Importers from the world's largest online B2B marketplace [](https://www.alibaba.com/) Find quality manufacturers, suppliers, exporters, importers, buyers, wholesalers, products and trade leads from our award-winning international trade site. Import & export on alibaba.com.
Best Dropshipping Suppliers for US + EU Products | Spocket [](https://www.spocket.co/) Spocket allows you to easily start dropshipping top products from US and EU suppliers. Get started for free and see why Spocket consistently gets 5 stars.
Best dropshipping supplier to the US [](https://www.usadrop.com/) The only American-made fulfillment center in China. Our knowledge of the worldwide dropshipping market and the Chinese supply chain can't be beat!
阿里1688 [](https://www.1688.com/) Alibaba's 1688.com is a well-known global business-to-business (B2B) e-commerce brand, offering tens of millions of online merchants a wealth of trade leads and a convenient, secure online trading marketplace, as well as a community platform where businesspeople network and interact. 1688.com currently covers 12 major industry categories, including raw materials, industrial goods, apparel, home goods and small commodities, providing a full chain of supply products and services from raw materials through production and processing to in-stock goods.
Dropshipping Tools
Oberlo | Where Self Made is Made [](https://www.oberlo.com/) Start selling online now with Shopify. All the videos, podcasts, ebooks, and dropshipping tools you'll need to build your online empire.
Klaviyo: Marketing Automation Platform for Email & SMS [](https://www.klaviyo.com/) Klaviyo, an ecommerce marketing automation platform for email marketing and SMS, syncs your tech stack with your website store to scale your business.
SMSBump | SMS Marketing E-Commerce App for Shopify [](https://smsbump.com/) SMSBump is an SMS marketing & automation app for Shopify. Segment customers, recover orders, and send campaign text messages with a 35%+ click-through rate.
AfterShip: The #1 Shipment Tracking Platform [](https://www.aftership.com/) Order status lookup, branded tracking page, and multi-carrier tracking API for eCommerce. Supports USPS, FedEx, UPS, and 900+ carriers worldwide.
#1 Dropshipping App | Zendrop [](https://zendrop.com/) Start and scale your own dropshipping business with Zendrop. Sell and easily fulfill your orders with the fastest shipping in the industry.
Video Editing
Jitter • The simplest motion design tool on the web. [](https://jitter.video/) Animate your designs easily. Export your creations as videos or GIFs. All in your browser.
DaVinci Resolve 18 | Blackmagic Design [](https://www.blackmagicdesign.com/products/davinciresolve) Professional video editing, color correction, visual effects and audio post-production, all in a single application. Free and paid versions for Mac, Windows and Linux.
Online Video Editor | Video Creator | InVideo [](https://invideo.io/) InVideo's online video editor helps you make professional videos from premium templates, images, and music.
All your video needs in one place | Clipchamp [](https://clipchamp.com/) Fast-forward your creations with our video editing platform. Start with a video template or record your webcam or screen. Get the pro look with filters, transitions, text and more. Then export in minutes and share in an instant.
Descript | All-in-one audio/video editing, as easy as a doc. [](https://www.descript.com/) Record, transcribe, edit, mix, collaborate, and master your audio and video with Descript. Download for free.
Kapwing — Reach more people with your content [](https://www.kapwing.com/) Kapwing is a collaborative, online content creation platform that you can use to edit video and create content. Join over 10 million modern creators who trust Kapwing to create, edit, and grow their content on every channel.
Panzoid [](https://panzoid.com/) Powerful, free online apps and a community for creating beautiful custom content.
Google Web Designer - Home [](https://webdesigner.withgoogle.com/)
ClipDrop [](https://clipdrop.co/) Create professional visuals without a photo studio.
CapCut [](https://www.capcut.com/) CapCut is an all-in-one online video editing software which makes creation, upload & sharing easier, with a frame-by-frame track editor, cloud drive, and more.
VEED - Online Video Editor - Video Editing Made Simple [](https://www.veed.io/) Make stunning videos with a single click. Cut, trim, crop, add subtitles and more. Online, no account needed. Try it now, free.
Free Video Maker | Create & Edit Your Videos Easily - Animoto [](https://animoto.com/k/welcome) Create, edit, and share videos with our online video maker. Combine your photos, video clips, and music to make quality videos in minutes. Get started free!
Runway - Online Video Editor | Everything you need to make content, fast. [](https://runwayml.com/) Discover advanced video editing capabilities to take your creations to the next level.
CreatorKit - A.I. video creator for marketers [](https://creatorkit.com/) Create videos with just one click, using our A.I. video editor purpose-built for marketers. Create scroll-stopping videos, Instagram stories, ads, reels, and TikTok videos.
Pixar in a Box | Computing | Khan Academy [](https://www.khanacademy.org/computing/pixar)
3D Video Motions
Plask - AI Motion Capture and 3D Animation Tool [](https://plask.ai/) Plask is an all-in-one browser-based AI motion capture tool and animation editor that anybody can use, from motion designers to everyday content creators.
Captions
Captions [](https://www.getcaptions.app/) Say hello to Captions, the only camera and editing app that automatically transcribes, captions and clips your talking videos for you.
Stock videos
Pexels [](https://www.pexels.com/)
Pixabay [](https://pixabay.com/)
Mixkit - Awesome free assets for your next video project [](https://mixkit.co/) Download free stock video footage, stock music & Premiere Pro templates for your next video editing project. All assets can be downloaded for free!
Free Stock Video Footage HD 4K Download Royalty-Free Clips [](https://www.videvo.net/) Download free stock video footage with over 300,000 video clips in 4K and HD. We also offer a wide selection of music and sound effect files, with over 180,000 clips available. Click here to download royalty-free licensed videos, motion graphics, music and sound effects from Videvo today.
Free Stock Video Footage HD Royalty-Free Videos Download [](https://mazwai.com/) Download free stock video footage with clips available in HD. Click here to download royalty-free licensed videos from Mazwai now.
Royalty Free Stock Video Footage Clips | Vidsplay.com [](https://www.vidsplay.com/) Royalty-free stock video footage clips.
Free Stock Video Footage, Royalty Free Videos for Download [](https://coverr.co/) Download royalty-free (for personal and commercial use), unique and beautiful video footage for your website or any project. No attribution required.
Stock Photos
Beautiful Free Images & Pictures | Unsplash [](https://unsplash.com/) Beautiful, free images and photos that you can download and use for any project. Better than any royalty-free or stock photos.
When we share, everyone wins - Creative Commons [](https://creativecommons.org/) Creative Commons licenses are 20! Honoring 20 years of open sharing using CC licenses, join us in 2022 to celebrate Better Sharing — advancing universal access to knowledge and culture, and fostering creativity, innovation, and collaboration.
Food Pictures • Foodiesfeed • Free Food Photos [](https://www.foodiesfeed.com/) Download 2000+ food pictures ⋆ the best free food photos for commercial use ⋆ CC0 license.
Free Stock Photos and Images for Websites & Commercial Use [](https://burst.shopify.com/) Browse thousands of beautiful copyright-free images. All our pictures are free to download for personal and commercial use, no attribution required.
EyeEm | Authentic Stock Photography and Royalty-Free Images [](https://www.eyeem.com/) Explore high-quality, royalty-free stock photos for commercial use. License individual images or save money with our flexible subscription and image pack plans.
picjumbo: Free Stock Photos [](https://picjumbo.com/) Free stock photos and images for your projects and websites. Beautiful, 100% free, high-resolution stock images with no watermark.
Free Stock Photos, Images, and Vectors [](https://www.stockvault.net/) 139,738 free stock photos, textures, backgrounds and graphics for your next project. No attribution required.
Free Stock Photos, PNGs, Templates & Mockups | rawpixel [](https://www.rawpixel.com/) Free images, PNGs, stickers, backgrounds, wallpapers, graphic templates and PSD mockups. All safe to use with commercial licenses.
Free Commercial Stock Photos & Royalty Free Images | PikWizard [](https://pikwizard.com/) Free images, videos & free stock photos. Unlimited downloads ✓ Royalty-free images ✓ Copyright-free for commercial use ✓ No attribution required.
Design Bundles [](https://designbundles.net/)
Stock music
Royalty Free Music for video creators | Epidemic Sound [](https://www.epidemicsound.com/) Download premium royalty-free music and SFX! Our free trial gives you access to over 35,000 tracks and 90,000 sound effects for video, streaming and more!
Royalty-Free Music & SFX for Video Creators | Artlist [](https://artlist.io/) Explore the ultimate royalty-free music & sound effects catalogs for unlimited use in YouTube videos, social media & films, created by inspiring indie artists worldwide. The go-to music licensing choice for all creators.
Royalty Free Audio Tracks - Envato Elements [](https://elements.envato.com/audio) Download royalty-free stock audio tracks for your next project from Envato Elements. Premium, high-quality, handpicked audio files ideal for any genre.
License popular music for videos • Lickd [](https://lickd.co/) The only place you can license popular music for videos. Access 1M+ mainstream tracks, plus high-quality stock music for content creators.
NCS (NoCopyrightSounds) - free music for content creators [](https://ncs.io/) NCS is a record label dedicated to giving a platform to the next generation of artists in electronic music, representing genres from house to dubstep via trap, drum & bass, electro pop and more.
Search Engine Optimization
Keyword Tool For Monthly Search Volume, CPC & Competition [](https://keywordseverywhere.com/) Keywords Everywhere is a browser add-on for Chrome & Firefox that shows search volume, CPC & competition on multiple websites.
Semrush - Online Marketing Can Be Easy [](https://www.semrush.com/) Turn the algorithm into a friend. Make your business visible online with 55+ tools for SEO, PPC, content, social media, competitive research, and more.
DuckDuckGo — Privacy, simplified. [](https://duckduckgo.com/) The Internet privacy company that empowers you to seamlessly take control of your personal information online, without any tradeoffs.
SEO Software for 360° Analysis of Your Website [](https://seranking.com/) Leading SEO software for business owners, agencies, and SEO specialists. Track your rankings, monitor competitors, spot technical errors, and more.
Skyrocket your organic traffic with Surfer [](https://surferseo.com/) Use Surfer to research, write, optimize, and audit! Everything you need to create a comprehensive content strategy that yields real results is right here.
Ahrefs - SEO Tools & Resources To Grow Your Search Traffic [](https://ahrefs.com/) You don't have to be an SEO pro to rank higher and get more traffic. Join Ahrefs – we're a powerful but easy-to-learn SEO toolset with a passionate community.
Neon Tools [](https://neontools.io/)
Google Index Search [](https://lumpysoft.com/)
SEO Backlink Checker & Link Building Toolset | Majestic.com [](https://majestic.com/) Develop backlink strategies with our Link Intelligence data and build the strongest SEO backlink campaigns to drive organic traffic and boost your rankings today.
PageOptimizer Pro [](https://pageoptimizer.pro/)
Keyword Chef - Keywords for Publishers [](https://keywordchef.com/) Rank insanely fast for keywords your competition can't find. "Every long-tail keyword I find ends up ranking within a day" – Dane Eyerly, Owner at TextGoods.com. Keyword Chef automatically finds and filters keywords for you, and real-time SERP analysis lets you find keywords nearly guaranteed to rank. Try it for free.
Notifier - Social Listening for Social Media and More! [](https://notifier.so/) Track keywords. Market your product for free. Drive the conversation. Easy. Free trial. No obligation ever. Simple. Fast. Trusted by top companies.
Free Keyword Research Tool from Wordtracker [](https://www.wordtracker.com/) The best FREE alternative to the Keyword Planner. Use Wordtracker to reveal thousands of profitable long-tail keywords, with up to 10,000 results per search.
Blog Posts
The 60 Hottest Front-end Tools of 2021 | CSS-Tricks [](https://css-tricks.com/hottest-front-end-tools-in-2021/) A complete list of the most popular front-end tools in 2021, according to the Web Tools Weekly newsletter. See which resources made the list.
Resume
ResumeGlow - AI Powered Resume Builder [](https://resumeglow.com/) Get hired fast with a resume that grabs attention. Designed by a team of HR experts and typographers. Customizable templates with more than a million possible combinations.
Create Your Job-winning Resume - (Free) Resume maker · Resume.io [](https://resume.io/) Free online resume maker that allows you to create a perfect resume or cover letter in 5 minutes. See how easy it is to write a professional resume and apply for jobs today!
Rezi - The Leading AI-Powered Free Resume Builder [](https://www.rezi.ai/) Rezi's award-winning AI-powered resume builder is trusted by hundreds of thousands of job seekers. Create your perfect resume in minutes with Rezi.
Create a Perfect Resume | Free Resume Builder | Resumaker.ai [](https://resumaker.ai/) Create your professional resume with this online resume maker. Choose a designer-made template and grab any employer's attention in seconds.
Trusted AI Resume Maker Helps You Get Hired Fast [](https://skillroads.com/) Reach a 96.4% success rate in the job-hunt race with the best resume creator. Our innovative technologies and 24/7 support help you become the perfect candidate for any job. Do not lose your chance to become the one.
Kickresume | Best Online Resume & Cover Letter Builder [](https://www.kickresume.com/) Create your best resume yet. Online resume and cover letter builder used by 1,300,000 job seekers worldwide. Professional templates approved by recruiters.
ResumeMaker.Online | Create a Professional Resume for Free [](https://www.resumemaker.online/) Save time with the easiest-to-use Resume Maker Online. Create an effective resume in just minutes and land your dream job. No sign-up required; start now!
Interviews
Interview Warmup - Grow with Google [](https://grow.google/certificates/interview-warmup/) A quick way to prepare for your next interview. Practice key questions, get insights about your answers, and get more comfortable interviewing.
No code website builder
Carrd - Simple, free, fully responsive one-page sites for pretty much anything [](https://carrd.co/) A free platform for building simple, fully responsive one-page sites for pretty much anything.
Webflow: Create a custom website | No-code website builder [](https://webflow.com/) Create professional, custom websites in a completely visual canvas with no code. Learn how to create a website by trying Webflow for free!
Google Sites: Sign-in [](https://sites.google.com/)
FlutterFlow - Build beautiful, modern apps incredibly fast! [](https://flutterflow.io/) FlutterFlow lets you build apps incredibly fast in your browser. Build fully functional apps with Firebase integration, API support, animations, and more. Export your code or, even easier, deploy directly to the app stores!
Free Website Builder: Build a Free Website or Online Store | Weebly [](https://www.weebly.com/) Weebly's free website builder makes it easy to create a website, blog, or online store. Find customizable templates, domains, and easy-to-use tools for any type of business website.
Glide • No Code App Builder • Nocode Application Development [](https://www.glideapps.com/) Create the apps your business needs without coding, waiting or overpaying. Get started for free and build an app today.
Adalo - Build Your Own No Code App [](https://www.adalo.com/) Adalo makes creating apps as easy as putting together a slide deck. Turn your idea into a real native app — no code needed!
Siter.io - The collaborative web design tool, no-code website builder [](https://siter.io/) Siter.io is a visual website builder for designers. Prototype, design, and create responsive websites in the browser. Work together with your team in one place.
Elementor: #1 Free WordPress Website Builder | Elementor.com [](https://elementor.com/) Elementor is the platform web creators choose to build professional WordPress websites, grow their skills, and build their business. Start for free today!
No code app builder | Bravo Studio [](https://www.bravostudio.app/) Your no-code mobile app builder for iOS and Android. Create MVPs, validate ideas and publish on the App Store and Google Play Store.
Typedream [](https://typedream.com/) The simplest way to build a website with no code, as easy as writing on Notion. Try Typedream for free and upgrade for custom domains, collaborators, and unlimited pages.
Free Website Builder | Create a Free Website | Wix.com [](https://www.wix.com/) Create a website with Wix's robust website builder. With 900+ strategically designed templates and advanced SEO and marketing tools, build your brand online today.
Free responsive Emails & Landing Pages drag-and-drop Editor | BEE [](https://beefree.io/) Free responsive email and landing page editor. With BEE drag-and-drop builders embedded in many software applications, you can start designing now!
Ownit Connected Checkout [](https://www.ownit.co/)
Bookmark.com | No-code Website Builder to Start Your Business [](https://www.bookmark.com/) Our AI-powered platform ensures your business is future-proof. Try Bookmark for free.
The best way to build web apps without code | Bubble [](https://bubble.io/) Bubble introduces a new way to build software. It's a no-code tool that lets you build SaaS platforms, marketplaces and CRMs without code. Bubble hosts all web apps on its cloud platform.
Responsive Web Design | Website Creation | Editor X [](https://www.editorx.com/) Experience the future of website design with responsive layouts, CSS precision and smooth drag and drop. Create a website for free.
Tilda Website Builder [](https://tilda.cc/) Create a website, online store, or landing page with Tilda's intuitive website builder. Build your site from hundreds of pre-designed templates and publish it today. No code required.
No-code headless commerce and websites | Unstack Inc. [](https://www.unstack.com/) Deploy high-performance eCommerce storefronts and websites without the engineering overhead, using Unstack's no-code CMS.
Best Drag-and-Drop Website Builder | Jemi [](https://jemi.so/) The modern website builder for creatives, entrepreneurs, and dreamers. Build a beautiful link-in-bio site, portfolio, or landing page in minutes.
No-code website builder that works like Notion [](https://popsy.co/) Create a beautiful no-code website in minutes. Popsy works just like Notion but is built from the ground up for building websites. Choose a free template. Edit content just like in Notion. Customize styles without code. Free Notion icons and illustrations.
Unbounce - The Landing Page Builder & Platform [](https://unbounce.com/) Grow your relevance, leads, and sales with Unbounce. Use Unbounce to easily create and optimize landing pages for your small business and boost conversions with AI insights.
Low-code Front-end Design & Development Platform | TeleportHQ [](https://teleporthq.io/) Front-end development platform with a visual builder and headless content modelling capabilities. Static website creation and UI development tools.
Other tools used in no code website
MemberSpace - Turn any part of your website into members-only with just a few clicks [](https://www.memberspace.com/) Create memberships on your website for anything you want, like courses, video tutorials, member directories, and more, while having 100% control over look & feel.
Triggre | The number one true no-code platform to run your business [](https://www.triggre.com/) The best no-code platform to create highly advanced business applications in hours, without programming. Try it now for free!
No code game builder
Welcome to Buildbox [](https://signup.buildbox.com/)
Flowlab Game Creator - Make games online [](https://flowlab.io/) Flowlab is an online game creator. Make your own games to share with friends.
Make 2D Games With GameMaker | Free Video Game Maker [](https://gamemaker.io/) Make a game with GameMaker, the best free video game engine. Perfect for beginners and professionals. Learn to build your own 2D games with our simple tutorials.
Side Hustle
Side Hustle Stack [](https://sidehustlestack.co/) Side Hustle Stack is a resource for finding platform-based work, ranging from gig work and side hustles to platforms that help you start a small business that can grow.
Fiverr [](https://www.fiverr.com/)
Remotasks: Work From Home, Online Bootcamp Training [](https://www.remotasks.com/en) Make money doing tasks. Start earning today! Free bootcamp training offered online. Sign up for a free Remotasks account and work from home. Earn up to $200/month.
Transcribe Speech to Text | Rev [](https://www.rev.com/) Transcribe speech to text with Rev. Reach your audience with clear and accurate captions, transcripts, and subtitles.
AI Training Data and other Data Management Services [](https://www.clickworker.com/) AI training data, SEO texts, web research, tagging, surveys and more - Use the crowdsourcing principle with the power of >4.5M Clickworkers.
Automate your Busy Work - Byron People-Powered Assistants [](https://www.hibyron.com/) Byron is an on demand US based virtual assistant platform that gives individuals and teams the ability to quickly outsource their non-essential tasks.

Jobs Websites - Remote
Latest Crypto Jobs, Web3 Jobs and Blockchain Jobs in the leading tech companies [](https://cryptojobslist.com/) New Cryptocurrency Jobs, Web3 Jobs and Blockchain Jobs on CryptoJobsList — the leading site to find and post jobs. Connect with companies hiring in a few clicks and begin your next experience in the industry. Updated daily.
Remote Jobs: Design, Marketing, Programming, Writing & More [](https://justremote.co/) Discover Remote Jobs from around the world. Give up the commute, work remotely and do what you love, daily, from anywhere. Find your perfect remote development, design, sales or marketing job today.
Remote Ok [](https://remoteok.com/)
Hire Freelancers & Remote Workers For Free [](https://talent.hubstaff.com/) Find and hire the highest quality freelancers from around the world - for free. Choose from thousands of developers, digital marketers, creatives and more.
We Work Remotely: Remote jobs in design, programming, marketing and more [](https://weworkremotely.com/) Find the most qualified people in the most unexpected places: Hire remote! We Work Remotely is the best place to find and list remote jobs that aren't restricted by commutes or a particular geographic area. Browse thousands of remote work jobs today.
Angel [](https://angel.co/)
Remote Work: Jobs, Companies & Virtual Teams - Remote.co [](https://remote.co/) Remote.co is the definitive remote work job board for online job seekers and companies hiring. Start your remote job search here!
FlexJobs: Best Remote Jobs, Work from Home Jobs, Online Jobs & More [](https://www.flexjobs.com/) The #1 job search site for hand-screened flexible and remote jobs (work from home jobs) since 2007. Plus get resume, coaching and career help. Join today!
Remote jobs | remotefront.io [](https://remotefront.io/) All remote jobs at remotefront.io
Daily Virtual Events Helping You Grow Professionally [](https://powertofly.com/) PowerToFly is where you receive expert career advice, free video training, coaching and exclusive access to jobs and events at top companies.
Best Remote and Work from Home Jobs - Virtual Vocations [](https://www.virtualvocations.com/) Best work from home jobs and remote jobs in over 50 categories for professionals, digital nomads, telecommuting workers and entry level jobseekers. Education, healthcare, medical, customer support and tech job openings.
Remote Jobs | Working Nomads [](https://www.workingnomads.com/jobs) Remote jobs for digital working nomads. Start your telecommuting career and work remotely from home or places around the world.
Job Search, Companies Hiring Near Me, and Advice | The Muse [](https://www.themuse.com/) Find jobs at the best companies hiring near you and get free career advice.
Startupers [](https://www.startupers.com/)
NoDesk - Where Everyone Works Remote [](https://nodesk.co/) Browse and apply to the best new remote jobs at leading remote companies and startups for free. Join hundreds of companies that use NoDesk to build their remote teams.

Browser Extensions
Blackbox - Select. Copy. Paste & Search - Chrome Web Store [](https://chrome.google.com/webstore/detail/blackbox-select-copy-past/mcgbeeipkmelnpldkobichboakdfaeon) Fastest way to copy text from videos & images
Octotree - GitHub code tree - Chrome Web Store [](https://chrome.google.com/webstore/detail/octotree-github-code-tree/bkhaagjahfmjljalopjnoealnfndnagc) GitHub on steroids
WhatFont - Chrome Web Store [](https://chrome.google.com/webstore/detail/whatfont/jabopobgcpjmedljpbcaablpmlmfcogm?hl=en) The easiest way to identify fonts on web pages.
Window Resizer - Chrome Web Store [](https://chrome.google.com/webstore/detail/window-resizer/kkelicaakdanhinjdeammmilcgefonfh?hl=en) Resize the browser window to emulate various screen resolutions.
Amino: CSS Editor - Chrome Web Store [](https://chrome.google.com/webstore/detail/amino-css-editor/pbcpfbcibpcbfbmddogfhcijfpboeaaf) Live CSS Editor. Write custom CSS for any website and see your changes in real time.
Checkbot: SEO, Web Speed & Security Tester 🚀 - Chrome Web Store [](https://chrome.google.com/webstore/detail/checkbot-seo-web-speed-se/dagohlmlhagincbfilmkadjgmdnkjinl?hl=en) Test SEO/speed/security of 100s of pages in a click! Check broken links, HTML/JavaScript/CSS, URL redirects, duplicate titles...
Honey: Automatic Coupons & Rewards - Chrome Web Store [](https://chrome.google.com/webstore/detail/honey-automatic-coupons-r/bmnlcjabgnpnenekpadlanbbkooimhnj) Save money and earn rewards when you shop online.
Tango: screenshots, training, & documentation - Chrome Web Store [](https://chrome.google.com/webstore/detail/tango-screenshots-trainin/lggdbpblkekjjbobadliahffoaobaknh) Automatically create beautiful step-by-step guides with screenshots, in seconds.
No code browser automation | axiom.ai [](https://axiom.ai/) Build browser bots quickly, without code. Automate website actions and repetitive tasks using just your browser, on any website or web app.

No Code Browser extensions builder
Bildr - Visual Web Development in your Browser [](https://www.bildr.com/) Visually build SaaS products, Chrome extensions, and web3 dApps

Other
Repurposing content for social media the easy way » Repurpose.io [](https://repurpose.io/) Repurposing content for social media made easy. Automatically repurpose YouTube, TikTok, Lives, Podcasts, and Zoom calls. Try it for FREE.
Smart Serials: Your serial numbers database [](https://smartserials.com/) This is your main source of free serial numbers and unlock keys in a clean environment safe to browse by all ages.
Old versions of Windows, Mac and Linux Software, Apps & Abandonware Games - Download at OldVersion.com [](http://www.oldversion.com/)
Online Room Planner - Design Your Room [](http://www.planyourroom.com/) Planyourroom.com is a wonderful website to redesign each room in your house by picking out perfect furniture options to fit your unique space.
BoredHumans.com - Fun AI Programs You Can Use Online [](https://boredhumans.com/) Fun AI programs you can use online. AI games, fake people, computer generated art, machine learning demos, and more.
BNProject | Home [](https://buynothingproject.org/)
Open Source Alternatives to Proprietary Software [](https://www.opensourcealternative.to/) Discover 400+ popular open source alternatives to proprietary SaaS.
URL Shortener - Short URLs & Custom Free Link Shortener | Bitly [](https://bitly.com/) Bitly's Connections Platform is more than a free URL shortener, with robust link management software, advanced QR Code features, and a Link-in-bio solution.
TinEye Reverse Image Search [](https://tineye.com/)
Good Books | Books recommended by successful people [](https://www.goodbooks.io/) Looking for the best books to read in 2022? Discover the best book recommendations from the world's most successful, influential and interesting people.
Directory - Website Recommendations [](https://tokapps.com/directory/) Insanely Useful Websites: a combination of useful websites for businesses, freelancers, DIYers, and individuals in a centralised area. All websites have been tried and tested.
Watch Anime Online, Free Anime Streaming Online on Zoro.to Anime Website [](https://zoro.to/) Zoro is a free anime streaming website where you can watch English subbed and dubbed anime online with no account and daily updates.
Animated Drawings [](https://sketch.metademolab.com/) Bring children's drawings to life, by animating characters to move around!
Alternativeto [](https://alternativeto.net/)
Chatroulette [](https://chatroulette.com/) Random meetings around the world
Tiktok Downloader - Download Video tiktok Without Watermark - SnapTik [](https://snaptik.app/en) SnapTik.App is one of the best free tools available online for downloading TikTok videos without a watermark. You can download TikTok videos from any device you have.
Imgflip - Create and Share Awesome Images [](https://imgflip.com/) Flip through memes, gifs, and other funny images. Make your own images with our Meme Generator or Animated GIF Maker.
Fake Text Message | Make Fake Text Conversation [](https://ifaketextmessage.com/) Fake Text Message is a tool to create a fake text conversation and a fake iMessage.
✂ Templatemaker [](https://www.templatemaker.nl/en/)
Omni Calculator [](https://www.omnicalculator.com/) Omni Calculator solves 2960 problems anywhere from finance and business to health. It's so fast and easy you won't want to do the math again!
Watch Movies Online Free | Watch Series HD Free [](https://hdtoday.tv/) Free access to a big library of HD movies and HD series online - no ads - no account required - fast free streaming.

Students
Answers - The Most Trusted Place for Answering Life's Questions [](https://www.answers.com/) Answers is the place to go to get the answers you need and to ask the questions you want.
Wolfram|Alpha: Computational Intelligence [](https://www.wolframalpha.com/) Compute answers using Wolfram's breakthrough technology & knowledgebase, relied on by millions of students & professionals. For math, science, nutrition, history, geography, engineering, mathematics, linguistics, sports, finance, music…
Online Math Tools - Simple, free and easy to use math utilities [](https://onlinemathtools.com/) World's simplest collection of useful mathematics utilities. Generate number sequences, draw fractals, do quick matrix and numerical calculations and more!
edX | Free Online Courses by Harvard, MIT, & more | edX [](https://www.edx.org/) Access 2000 free online courses from 140 leading institutions worldwide. Gain new skills and earn a certificate of completion. Join today.
Sci-Hub [](https://sci-hub.hkvisa.net/) The project is supported by user donations. Imagine the world with free access to knowledge for everyone - a world without any paywalls.
DigitalDefynd - Find the Best + Free Courses Online [](https://digitaldefynd.com/) 4 Million+ Learners | 96,000+ Courses | 45,000+ Free Courses | 1200+ Free Certificates
Learn Anything [](https://learn-anything.xyz/) Search Interactive Mind Maps to learn anything
HubSpot Academy - Homepage [](https://academy.hubspot.com/) HubSpot Academy is the worldwide leader in inbound marketing, sales, and customer service/support training.

YT_Emerging_Technologies_Introduction_to_AI
github
LLM Vibe Score0.461
Human Vibe Score0.039054583141409485
zusmaniJan 17, 2025

YT_Emerging_Technologies_Introduction_to_AI

YouTube Channel: Emerging Technologies
Playlist: Introduction to AI
Instructor: Zeeshan-ul-hassan Usmani

Dear Students, I have uploaded all relevant material here for your quick access and learning. I hope you will find it beneficial.
Yours Truly, Zeeshan

===========================================
Video title: Resources

Books to Order:
Artificial Intelligence by Zeeshan Usmani - https://gufhtugu.com/artificial-intelligence
Artificial Intelligence by Baqir Naqvi - https://gufhtugu.com/masnoi-zahanat/

Recommended Books
• Gödel, Escher, Bach: An Eternal Golden Braid by Douglas R. Hofstadter - A classic, poetic, philosophical defense of AI.
• Machines Who Think by Pamela McCorduck - A good review of early AI history.
• Robot: Mere Machine to Transcendent Mind by Hans P. Moravec - Somewhat hyped book by a CMU robotics researcher.
• Flesh and Machines: How Robots Will Change Us by Rodney Allen Brooks - Reasonably decent book by MIT's leading robotics researcher.
• Wired for War by Peter Warren Singer - Reviews the growing use of robots and unmanned vehicles in warfare.
• Behind Deep Blue: Building the Computer That Defeated the World Chess Champion by Feng-Hsiung Hsu - Autobiographical book on the development of a history-making game-playing system. Interesting personal story of the hard engineering work that went into the system, with a few interesting facts on the technical aspects.
• The Age of Spiritual Machines: When Computers Exceed Human Intelligence by Ray Kurzweil - A recent view by an AI entrepreneur that has content if you ignore all the hype and the overly optimistic trust that Moore's law will magically solve all of the major problems.
• Hal's Legacy: 2001's Computer As Dream and Reality - An interesting collection of edited articles written to celebrate the fictional birthday of a famous intelligent computer whose true birthday must unfortunately be delayed, pending AI's inevitable progress.
• The Sciences of the Artificial by Herbert Simon - AI as science by one of its founders.
• Models of My Life by Herbert Simon - An autobiography of one of AI's founders, whose intellectual contributions also include fundamental contributions to economics (for which he won the Nobel prize), cognitive psychology, and computer science (such as co-inventing the linked list in the 1950s).
• Alan Turing: The Enigma by Andrew Hodges - A biography of one of the founders of CS and originator of the Turing test. Also a testimony to the tragic implications of homophobia.
• The Emperor's New Mind: Concerning Computers, Minds, and the Laws of Physics, Shadows of the Mind: A Search for the Missing Science of Consciousness, and The Large, the Small and the Human Mind by Roger Penrose - A completely bogus argument against AI by a hopelessly Platonic mathematician. The last book contains an appended article by Stephen Hawking (a colleague of Penrose's) who of course doesn't buy his bogus argument.
• The Mind's New Science: A History of the Cognitive Revolution by Howard Gardner - A nice history of the development of cognitive science.
• How the Mind Works, The Language Instinct, and Words and Rules: The Ingredients of Language by Steven Pinker - Fun reading on lots of interesting issues in modern cognitive science and linguistics, if you don't take his exaggerated beliefs in nativism and evolutionary psychology too seriously.
• Bots: The Origin of New Species by Andrew Leonard - A light, somewhat hyped book on Internet agents, chatterbots, etc., with a few funny stories.
• Mathematics: The Loss of Certainty by Morris Kline - A very nice book on the failed enterprise of using logic to build a firm foundation for infallible mathematics and the role of Gödel's Incompleteness Theorem in the philosophy of mathematics.
• Incompleteness: The Proof and Paradox of Kurt Gödel by Rebecca Goldstein - An interesting biography of Kurt Gödel. Too bad he was such a Platonist that, unlike Turing, he did not understand the true implications of his own theorems (interesting author connection: Goldstein is Pinker's wife).

Links:
• AAAI AI Topics - Basic info on AI from the American Association for Artificial Intelligence: http://www.aaai.org/AITopics/html/welcome.html
• Loebner Prize for limited Turing test: http://www.loebner.net/Prizef/loebner-prize.html
• IBM's Deep Blue Page: http://www.research.ibm.com/deepblue/
• Robocup: Robotic Soccer Competition: http://www.robocup.org/
• NY Times article on the proof of the Robbins Theorem: http://www.nytimes.com/library/cyber/week/1210math.html
• NY Times article on Bayes Nets at Microsoft Research: http://www.nytimes.com/library/tech/00/07/biztech/articles/17lab.html

===========================================
Video title: Numbers

Infinity video link - https://www.youtube.com/watch?v=hlXHwMgS06c
https://www.cbs.com/shows/numb3rs/
http://numb3rs.wolfram.com/

===========================================
Video title: 20 Hours Rule and Assignment

Assignment - https://www.urdufake2020.cicling.org/

===========================================
Video title: Assignments – P1

Mostly Human - https://money.cnn.com/mostly-human

===========================================
Video title: Assignments – P2

Assignment 2 - https://replika.ai/
Assignment 3 - Teachable Machine: https://teachablemachine.withgoogle.com/
Assignment 4 - TensorFlow Playground: https://playground.tensorflow.org
Assignment 5 - GPT-3 paper (175B parameters): https://debuild.co/
Assignment 6 - Image GPT: https://openai.com/blog/image-gpt/

===========================================
Video title: Create your own Deep Fake

1. https://colab.research.google.com/drive/1mGg_fmvhTpvkPkclw2yKkhALVzmawfvT?usp=sharing
2. https://drive.google.com/drive/folders/1wW1bxRV2S7Ce8gc3VDTzMQABE3-WCc_Y?usp=sharing
• Go into your Google Drive, find the cloned folder, and make sure it contains the files vox-adv-cpk.pth.tar and vox-cpk.pth.tar.
• Aliaksandr Siarohin: https://github.com/AliaksandrSiarohin/first-order-model
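The checkpoint check in the last step is easy to get wrong by hand, so here is a minimal Python sketch that verifies the cloned folder before you run the Colab notebook. The folder path is hypothetical; adjust it to wherever the first-order-model folder sits in your mounted Drive. Only the two file names come from the instructions above.

```python
from pathlib import Path

# Hypothetical location: adjust to wherever the cloned first-order-model
# folder lives inside your mounted Google Drive.
folder = Path("/content/drive/MyDrive/first-order-model")

# The deep-fake demo expects both pretrained checkpoints to be present.
required = ["vox-adv-cpk.pth.tar", "vox-cpk.pth.tar"]
missing = [name for name in required if not (folder / name).exists()]

if missing:
    raise FileNotFoundError(f"Missing checkpoint file(s): {missing}")
print("Both checkpoints found - ready to run the demo notebook.")
```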

teach-AI-in-business
github
LLM Vibe Score0.443
Human Vibe Score0.018525334165293606
aenyneJan 9, 2025

teach-AI-in-business

Teaching AI in Business

I am collecting material for teaching AI-related issues to non-tech people. The links should provide for a general understanding of AI without going too deep into technical issues. Please contribute!

Make this Issue your First Issue. Kindly use only those resources with NO CODE. NEW: Check out also the AI Wiki.

Online Videos & Courses

| Link to Issue | Description |
|---|---|
| Top Trending Technologies | YouTube channel to master top trending technologies, including artificial intelligence |
| AI4All | AI 4 All is a resource for AI facilitators to bring AI to scholars and students |
| Elements of AI | Elements of AI is a free open online course to teach AI principles |
| Visual Introduction to Machine Learning | A beautiful website that gives a comprehensive introduction and an easily understood first encounter with machine learning |
| CS50's Introduction to Artificial Intelligence with Python | Learn to use machine learning in Python in this introductory course on artificial intelligence. |
| Crash course for AI | This is a fun video series that introduces students and educators to Artificial Intelligence and also offers additional more advanced videos. Learn about the basics, neural networks, algorithms, and more. |
| Machine Learning Tutorial | YouTube channel tutorial on Teachable Machine for beginners |
| Artificial Intelligence (AI) | Learn the fundamentals of Artificial Intelligence (AI), and apply them. Design intelligent agents to solve real-world problems including search, games, machine learning, logic, and constraint satisfaction problems |
| AI For Everyone by Andrew Ng | AI For Everyone is a course especially for people from a non-technical background to understand AI strategies |
| How far is too far? The age of AI | This is a YouTube Originals series presented by Robert Downey Jr. |
| Fundamentals of Artificial Intelligence | This course is for absolute beginners with no technical knowledge. |
| Bandit Algorithm (Online Machine Learning) | No technical knowledge required, but a basic understanding of probability theory would help |
| An Executive's Guide to AI | This is an interactive guide to teaching business professionals how they might employ artificial intelligence in their business |
| AI Business School | Series of videos that teach how AI may be incorporated in various business industries |
| Artificial Intelligence Tutorial for Beginners | This video will provide you with a comprehensive and detailed knowledge of Artificial Intelligence concepts with hands-on examples. |
| Indonesian Machine Learning Tutorial | Tutorial on Teachable Machine to train a computer, for beginners |
| Indonesian Youtube Playlist AI Tutorial | YouTube playlist AI tutorial for beginners |
| Artificial Intelligence Search Methods For Problem Solving By Prof. Deepak Khemani | These video lectures are for absolute beginners with no technical knowledge |
| AI Basics Tutorial | This video starts from the very basics of AI and ML, and finally has a hands-on demo of the standard MNIST Dataset Number Detection model using Keras and Tensorflow. |
| Simple brain.js Tutorial | This video explains a very simple JavaScript AI library called brain.js so you can easily run AI in the browser. |
| Google AI | A complete official kit by Google for the non-tech person to start from the basics through to advanced material |
| Microsoft AI for Beginners | A self-driven curriculum by Microsoft, which includes 24 lessons on AI. |

Train Your Own AI

| Link to Issue | Description |
|---|---|
| Teachable Machine | Use Teachable Machine to train a computer to recognize your own images, sounds, & poses |
| eCraft2Learn | Resource and interactive space (Snap, a visual programming environment like Scratch) to learn how to create AI programs |
| Google Quick Draw | Train an AI to guess from drawings |
| Deepdream Generator | Merge pictures into deep dreams using the Deepdream Generator |
| Create ML | Quickly build and train Core ML models on your Mac with no code. |
| What-If Tool | Visually probe the behavior of trained machine learning models, with minimal coding. |
| Metaranx | Use and build artificial intelligence tools to analyze and make decisions about your data. Drag-and-drop. No code. |
| obviously.ai | The total process of building ML algorithms, explaining results, and predicting outcomes in one single click. |

Articles

| By & Title | Description |
|---|---|
| Artificial Intelligence | Wikipedia page on AI |
| The Non-Technical AI Guide | A good blog post that helps make AI understandable for people without a technical background |
| LIAI | A detailed introduction to AI and neural networks |
| Layman's Intro | A layman's introduction to AI |
| AI and Machine Learning: A Nontechnical Overview | AI and Machine Learning: A Nontechnical Overview, from O'Reilly themselves, is a guide that teaches anyone everything they need to know about AI, focused on non-tech people |
| What business leaders need to know about artificial intelligence | Short article that summarizes the essential aspects of AI that business leaders need to understand |
| How Will No-Code Impact the Future of Conversational AI | A humble explanation of the current state of conversational AI, i.e. chatbots, and how it could evolve with the current no-code trend. |
| Investopedia | Basic explanation of what AI is in a very basic and comprehensive way |
| Packtpub | A non-programmer's guide to learning machine learning |
| Builtin | Artificial Intelligence. What is Artificial Intelligence? How Does AI Work? |
| Future Of Life | Benefits & risks of artificial intelligence |
| NSDM India - Arpit | 100+ AI tools for non-coders that will make your marketing better. |
| AI in Marketing for Startups & Non-technical Marketers | A practical guide for non-technical people |
| Blog - Machine Learning Mastery | Blogs and articles by Jason Brownlee on ML |
| AI Chatbots without programming | Chatbots are increasingly in demand among global businesses. This course will teach you how to build, analyze, deploy and monetize chatbots - with the help of IBM Watson and the power of AI. |

Book Resources for Further Reading

| Author | Book | Description & Notes |
|---|---|---|
| Ethem Alpaydin | Machine Learning: The New AI | A concise overview of machine learning—computer programs that learn from data—which underlies applications that include recommendation systems, face recognition, and driverless cars. |
| Charu C. Aggarwal | Neural Networks and Deep Learning | This book covers both classical and modern models in deep learning. The primary focus is on the theory and algorithms of deep learning. The book is also rich in discussing different applications in order to give the practitioner a flavor of how neural architectures are designed for different types of problems. |
| Hal Daumé III | A Course in Machine Learning | The purpose of this book is to provide a gentle and pedagogically organized introduction to the field. A second goal of this book is to provide a view of machine learning that focuses on ideas and models, not on math. |
| Ian Goodfellow, Yoshua Bengio and Aaron Courville | Deep Learning | The book starts with a discussion on machine learning basics, including the applied mathematics and algorithms needed to effectively study deep learning from an academic perspective. There is no code covered in the book, making it perfect for a non-technical AI enthusiast. |
| Peter Harrington | Machine Learning in Action | (Source: https://github.com/kerasking/book-1/blob/master/ML%20Machine%20Learning%20in%20Action.pdf) This book acts as a guide to walk newcomers through the techniques needed for machine learning as well as the concepts behind the practices. |
| Jeff Heaton | Artificial Intelligence for Humans | This book helps its readers get an overview and understanding of AI algorithms. It is meant to teach AI for those who don't have an extensive mathematical background. The readers need to have only a basic knowledge of computer programming and college algebra. |
| John D. Kelleher, Brian Mac Namee and Aoife D'Arcy | Fundamentals of Machine Learning for Predictive Data Analytics: Algorithms, Worked Examples, and Case Studies (The MIT Press) | This book covers all the fundamentals of machine learning, diving into the theory of the subject and using practical applications, working examples, and case studies to drive the knowledge home. |
| Deepak Khemani | A First Course in Artificial Intelligence | An introductory course on Artificial Intelligence, taking a knowledge-based approach using agents throughout, with detailed, well-structured algorithms and proofs. This book mainly follows a bottom-up approach, exploring the basic strategies needed for problem-solving on the intelligence side. |
| Maxim Lapan | Deep Reinforcement Learning Hands-On - Second Edition | An updated and expanded version of the bestselling guide to the very latest reinforcement learning (RL) tools and techniques. It provides you with an introduction to the fundamentals of RL, along with the hands-on ability to code intelligent learning agents to perform a range of practical tasks. |
| Tom M. Mitchell | Machine Learning | This book covers the field of machine learning, which is the study of algorithms that allow computer programs to automatically improve through experience. The book is intended to support upper-level undergraduate and introductory-level graduate courses in machine learning. |
| John Paul Mueller and Luca Massaron | Machine Learning For Dummies | This book aims to get readers familiar with the basic concepts and theories of machine learning and how it applies to the real world. "Dummies" here refers to absolute beginners with no technical background. The book introduces a little coding in Python and R used to teach machines to find patterns and analyze results. From those small tasks and patterns, we can extrapolate how machine learning is useful in daily life through web searches, internet ads, email filters, fraud detection, and so on. |
| Michael Nielsen | Neural Networks and Deep Learning | Introduction to the core principles of neural networks and deep learning in AI |
| Simon Rogers and Mark Girolami | A First Course in Machine Learning | The best introductory book for ML currently available. It combines rigor and precision with accessibility, starts from a detailed explanation of the basic foundations of Bayesian analysis in the simplest of settings, and goes all the way to the frontiers of the subject such as infinite mixture models, GPs, and MCMC. |
| Peter Norvig | Paradigms of Artificial Intelligence Programming | The first text to teach advanced Common Lisp techniques in the context of building major AI systems. By reconstructing authentic, complex AI programs using state-of-the-art Common Lisp, the book teaches students and professionals how to build and debug robust practical programs, while demonstrating superior programming style and important AI concepts. |
| Stuart Russell & Peter Norvig | Artificial Intelligence: A Modern Approach, 3rd Edition | This is the prescribed textbook for my Introduction to AI university course. It starts off explaining all the basics and definitions of what AI is, before launching into agents, algorithms, and how to apply them. Russell is from the University of California at Berkeley. Norvig is from Google. |
| Richard S. Sutton and Andrew G. Barto | Reinforcement Learning: An Introduction | Reinforcement learning, one of the most active research areas in artificial intelligence, is a computational approach to learning whereby an agent tries to maximize the total amount of reward it receives while interacting with a complex, uncertain environment. |
| Alex Smola and S.V.N. Vishwanathan | Introduction to Machine Learning | Provides the reader with an overview of the vast applications of ML, including some basic tools of statistics and probability theory. Also includes discussions on sophisticated ideas and concepts. |
| Shai Shalev-Shwartz and Shai Ben-David | Understanding Machine Learning: From Theory to Algorithms | The primary goal of this book is to provide a rigorous, yet easy to follow, introduction to the main concepts underlying machine learning. |
| Chandra S.S.V | Artificial Intelligence and Machine Learning | This book is primarily intended for undergraduate and postgraduate students of computer science and engineering. It bridges the gap between the difficult contexts of Artificial Intelligence and Machine Learning, and provides a large number of case studies and worked-out examples. In addition to Artificial Intelligence and Machine Learning, it also covers various types of learning like reinforced, supervised, unsupervised and statistical learning. It features well-explained algorithms and pseudo-code for each topic, which makes this book very useful for students. |
| Oliver Theobald | Machine Learning For Absolute Beginners: A Plain English Introduction | An absolute beginner's ML guide. No mathematical background is needed, nor coding experience — this is the most basic introduction to the topic for anyone interested in machine learning. "Plain" language is highly valued here to prevent beginners from being overwhelmed by technical jargon. Clear, accessible explanations and visual examples accompany the various algorithms to make sure things are easy to follow. |
| Tom Taulli | Artificial Intelligence Basics: A Non-Technical Introduction | This book equips you with a fundamental grasp of Artificial Intelligence and its impact. It provides a non-technical introduction to important concepts such as Machine Learning, Deep Learning, Natural Language Processing, Robotics and more. Further, the author expands on the questions surrounding the future impact of AI on aspects that include societal trends, ethics, governments, company structures and daily life. |
| Cornelius Weber, Mark Elshaw, N. Michael Mayer | Reinforcement Learning | Learning is a very important aspect of intelligence. This book is on reinforcement learning, which involves performing actions to achieve a goal. The first 11 chapters of this book describe and extend the scope of reinforcement learning. |
| John D. Kelleher, Brian Mac Namee, Aoife D'Arcy | Algorithms, Worked Examples, and Case Studies | A comprehensive introduction to the most important machine learning approaches used in predictive data analytics, covering both theoretical concepts and practical applications. |

ai-learning-roadmap
github
LLM Vibe Score0.442
Human Vibe Score0.035708035270567436
gopala-krNov 30, 2024

ai-learning-roadmap

Lists of all AI-related learning materials and practical tools to get started with AI apps.

Design Thinking – An Introduction
Stanford's virtual Crash Course in Design Thinking

Amazon Web Services Learning Material

AWS AI Session – The session provides an overview of all Amazon AI technology offerings (Lex, Polly, Rekognition, ML, and Deep Learning AMI)

Self-Paced Labs
AWS self-paced labs provide hands-on practice in a live AWS environment with AWS services and real-world cloud scenarios. Follow step-by-step instructions to learn a service, practice a use case, or prepare for AWS Certification.

Introductory Lab
Introduction to AWS Lambda

Lex
Introduction to Amazon Lex
Amazon Lex Webinar
Amazon Lex: AWS conversational interface (chat bot)
Documentation

Polly (see the short code sketch after this block of links)
Introduction to Amazon Polly
Amazon Polly Webinar - Amazon Polly – AWS Text To Speech (TTS) service
Documentation - What is Amazon Polly?
Developer Resources

Rekognition
Introduction to Amazon Rekognition
Amazon Rekognition - Deep Learning-Based Image Analysis Webinar
Amazon Rekognition – AWS image recognition service
Documentation – What is Amazon Rekognition?

Machine Learning
Machine Learning Session 1 – Empowering Developers to Build Smart Applications
Session 2 - Predicting Customer Churn with Amazon Machine Learning
AWS Machine Learning – End to end, managed service for creating and testing ML models and then deploying those models into production
Documentation - What is Amazon Machine Learning?
Developer Resources
AWS Deep Learning AMI – Amazon Machine Image (AMI) optimized for deep learning efforts

Recommended Additional Resources
Take your skills to the next level with fundamental, advanced, and expert level labs.
- Creating Amazon EC2 Instances with Microsoft Windows
- Building Your First Amazon Virtual Private Cloud (VPC)
- Working with AWS CodeCommit on Windows
- Working with Amazon DynamoDB

Google Cloud - Learning Material

Below is the learning material that will help you learn about Google Cloud.

Network
Networking 101 – 43 mins
The codelab provides a common cloud developer experience as follows:
- Set up your lab environment and learn how to work with your GCP environment.
- Use of common open source tools to explore your network around the world.
- Deploy a common use case: use of HTTP Load Balancing and Managed Instance Groups to host a scalable, multi-region web server.
- Testing and monitoring your network and instances.
- Cleanup.
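The Amazon Polly material above describes AWS's text-to-speech service, which can be called with a few lines of the boto3 SDK. The sketch below is an illustration rather than part of the linked course material; it assumes boto3 is installed and AWS credentials are configured locally, and the region and voice are arbitrary example choices:

```python
import boto3

# Text-to-speech with Amazon Polly; assumes locally configured AWS
# credentials. Region and voice are arbitrary example choices.
polly = boto3.client("polly", region_name="us-east-1")

response = polly.synthesize_speech(
    Text="Hello from Amazon Polly.",
    OutputFormat="mp3",
    VoiceId="Joanna",
)

# The synthesized audio comes back as a stream; write it to a file.
with open("hello.mp3", "wb") as f:
    f.write(response["AudioStream"].read())
```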
Developing Solutions for Google Cloud Platform – 8 hours

Infrastructure
Build a Slack Bot with Node.js on Kubernetes – 43 mins
Creating a Virtual Machine – 10 mins
Getting Started with App Engine (Python) – 13 mins

Data
Introduction to Google Cloud Data Prep – 7 mins
Create a Managed MySQL database with Cloud SQL – 19 mins
Upload Objects to Cloud Storage – 11 mins

AI, Big Data & Machine Learning
Introduction to Google Cloud Machine Learning – 1 hour
Machine Learning APIs by Example – 30 min
Google Cloud Platform Big Data and Machine Learning Fundamentals

Additional AI Materials
Auto-awesome: Advanced Data Science on Google Cloud Platform – 45 min
Run a Big Data Text Processing Pipeline in Cloud Dataflow – 21 min
Image Classification Using Cloud ML Engine & Datalab – 58 min
Structured Data Regression Using Cloud ML Engine & Datalab – 58 min

(Optional) Deep Learning & Tensorflow
Tensorflow and Deep Learning Tutorial – 2:35 hours
Deep Learning Course – advanced users only

Additional Reference Material
Big Data & Machine Learning @ Google Cloud Next '17 - A collection of 49 videos

IBM Watson Learning Material
(Contributions are welcome in this space)
[IBM Watson Overview]()
[IBM Watson Cognitive APIs]()
[IBM Watson Knowledge Studio]()
Visual Studio
UCI datasets

Microsoft Chat Bots Learning Material

Skills Prerequisite
- Git and Github
- NodeJS
- VS Code IDE

Training Paths
If you have the above prerequisite skills, take the Advanced Training Path; otherwise take the Novice Training Path.

Prerequisite Tutorials
- Git and Github
- Node.js Tutorials for Beginners
- Node.js Tutorial in VS Code
- Introduction To Visual Studio Code

Novice Training Path

Environment Set Up
- Download and Install Git
- Set up a GitHub Account
- Download and Install NodeJS
- Download and Install IDE - Visual Studio Code
- Download and Install the Bot Framework Emulator
- Git clone the Bot Education project - git clone
- Set Up an Azure Free Trial Account

Cognitive Services (Defining Intelligence)
- Read the Cognitive Services ADS Education Deck – git clone
- Review the guide for Understanding Natural Language with LUIS
- Complete the NLP (LUIS) Training Lab from the installed Bot Education project – \bot-education\Student-Resources\Labs\CognitiveServices\Lab_SetupLanguageModel.md

Bot Framework (Building Chat Bots)
- Read the Bot Framework ADS Education Deck from the downloaded (Your Path)\bot-extras
- Review Bot Framework documentation (Core Concepts, Bot Builder for NodeJS, and Bot Intelligence)
- Set up the local environment and run the emulator from the installed Bot Education project – \bot-education\Student-Resources\Labs\Node\Lab1_SetupCheckModel.md
- Review and test in the emulator the "bot-hello" from \bot-education\Student-Resources\BOTs\Node\bot-hello

Advanced Training Path

Environment Set Up
- Download and Install Git
- Set up a GitHub Account
- Download and Install NodeJS
- Download and Install IDE - Visual Studio Code
- Download and Install the Bot Framework Emulator
- Git clone the Bot Education project - git clone
- Set Up an Azure Free Trial Account
- Git clone the Bot Builder Samples – git clone

Cognitive Services (Defining Intelligence)
- Read the Cognitive Services ADS Education Deck – git clone
- Review the guide for Understanding Natural Language with LUIS

Bot Framework (Building Chat Bots)
- Read the Bot Framework ADS Education Deck from the downloaded (Your Path)\bot-extras
- Review Bot Framework documentation (Core Concepts, Bot Builder for NodeJS, and Bot Intelligence)
- Set up the local environment and run the emulator from the installed Bot Education project – \bot-education\Student-Resources\Labs\Node\Lab1_SetupCheckModel.md

Cognitive Services (Defining Intelligence) - Labs
- Complete the NLP (LUIS) Training Lab from the installed Bot Education project – \bot-education\Student-Resources\Labs\CognitiveServices\Lab_SetupLanguageModel.md
- Review, deploy and run the LUIS BOT sample

Bot Framework (Building Chat Bots) – Labs
- Set up the local environment and run the emulator from the installed Bot Education project – \bot-education\Student-Resources\Labs\Node\Lab1_SetupCheckModel.md
- Review and test in the emulator the "bot-hello" from \bot-education\Student-Resources\BOTs\Node\bot-hello
- Review and test in the emulator the "bot-recognizers" from \bot-education\Student-Resources\BOTs\Node\bot-recognizers

Lecture Videos

Source: Berkeley

| Lecture | Title | Lecturer | Semester |
|---|---|---|---|
| Lecture 1 | Introduction | Dan Klein | Fall 2012 |
| Lecture 2 | Uninformed Search | Dan Klein | Fall 2012 |
| Lecture 3 | Informed Search | Dan Klein | Fall 2012 |
| Lecture 4 | Constraint Satisfaction Problems I | Dan Klein | Fall 2012 |
| Lecture 5 | Constraint Satisfaction Problems II | Dan Klein | Fall 2012 |
| Lecture 6 | Adversarial Search | Dan Klein | Fall 2012 |
| Lecture 7 | Expectimax and Utilities | Dan Klein | Fall 2012 |
| Lecture 8 | Markov Decision Processes I | Dan Klein | Fall 2012 |
| Lecture 9 | Markov Decision Processes II | Dan Klein | Fall 2012 |
| Lecture 10 | Reinforcement Learning I | Dan Klein | Fall 2012 |
| Lecture 11 | Reinforcement Learning II | Dan Klein | Fall 2012 |
| Lecture 12 | Probability | Pieter Abbeel | Spring 2014 |
| Lecture 13 | Markov Models | Pieter Abbeel | Spring 2014 |
| Lecture 14 | Hidden Markov Models | Dan Klein | Fall 2013 |
| Lecture 15 | Applications of HMMs / Speech | Pieter Abbeel | Spring 2014 |
| Lecture 16 | Bayes' Nets: Representation | Pieter Abbeel | Spring 2014 |
| Lecture 17 | Bayes' Nets: Independence | Pieter Abbeel | Spring 2014 |
| Lecture 18 | Bayes' Nets: Inference | Pieter Abbeel | Spring 2014 |
| Lecture 19 | Bayes' Nets: Sampling | Pieter Abbeel | Fall 2013 |
| Lecture 20 | Decision Diagrams / Value of Perfect Information | Pieter Abbeel | Spring 2014 |
| Lecture 21 | Machine Learning: Naive Bayes | Nicholas Hay | Spring 2014 |
| Lecture 22 | Machine Learning: Perceptrons | Pieter Abbeel | Spring 2014 |
| Lecture 23 | Machine Learning: Kernels and Clustering | Pieter Abbeel | Spring 2014 |
| Lecture 24 | Advanced Applications: NLP, Games, and Robotic Cars | Pieter Abbeel | Spring 2014 |
| Lecture 25 | Advanced Applications: Computer Vision and Robotics | Pieter Abbeel | Spring 2014 |

Additionally, there are Step-By-Step videos which supplement the lecture materials:

| Video | Title | Lecturer | Notes |
|---|---|---|---|
| SBS-1 | DFS and BFS | Pieter Abbeel | Lec: Uninformed Search |
| SBS-2 | A* Search | Pieter Abbeel | Lec: Informed Search |
| SBS-3 | Alpha-Beta Pruning | Pieter Abbeel | Lec: Adversarial Search |
| SBS-4 | D-Separation | Pieter Abbeel | Lec: Bayes' Nets: Independence |
| SBS-5 | Elimination of One Variable | Pieter Abbeel | Lec: Bayes' Nets: Inference |
| SBS-6 | Variable Elimination | Pieter Abbeel | Lec: Bayes' Nets: Inference |
| SBS-7 | Sampling | Pieter Abbeel | Lec: Bayes' Nets: Sampling |
| SBS-8 | Gibbs' Sampling | Michael Liang | Lec: Bayes' Nets: Sampling |
| SBS-8 | Maximum Likelihood | Pieter Abbeel | Lec: Machine Learning: Naive Bayes |
| SBS-9 | Laplace Smoothing | Pieter Abbeel | Lec: Machine Learning: Naive Bayes |
| SBS-10 | Perceptrons | Pieter Abbeel | Lec: Machine Learning: Perceptrons |

Per-Semester Video Archive (Berkeley)

The lecture videos from the most recent offerings are posted below.
Spring 2014 Lecture Videos
Fall 2013 Lecture Videos
Spring 2013 Lecture Videos
Fall 2012 Lecture Videos

Spring 2014

| Lecture | Title | Lecturer | Notes |
|---|---|---|---|
| Lecture 1 | Introduction | Pieter Abbeel | |
| Lecture 2 | Uninformed Search | Pieter Abbeel | |
| Lecture 3 | Informed Search | Pieter Abbeel | |
| Lecture 4 | Constraint Satisfaction Problems I | Pieter Abbeel | Recording is a bit flaky, see Fall 2013 Lecture 4 for alternative |
| Lecture 5 | Constraint Satisfaction Problems II | Pieter Abbeel | |
| Lecture 6 | Adversarial Search | Pieter Abbeel | |
| Lecture 7 | Expectimax and Utilities | Pieter Abbeel | |
| Lecture 8 | Markov Decision Processes I | Pieter Abbeel | |
| Lecture 9 | Markov Decision Processes II | Pieter Abbeel | |
| Lecture 10 | Reinforcement Learning I | Pieter Abbeel | |
| Lecture 11 | Reinforcement Learning II | Pieter Abbeel | |
| Lecture 12 | Probability | Pieter Abbeel | |
| Lecture 13 | Markov Models | Pieter Abbeel | |
| Lecture 14 | Hidden Markov Models | Pieter Abbeel | Recording is a bit flaky, see Fall 2013 Lecture 18 for alternative |
| Lecture 15 | Applications of HMMs / Speech | Pieter Abbeel | |
| Lecture 16 | Bayes' Nets: Representation | Pieter Abbeel | |
| Lecture 17 | Bayes' Nets: Independence | Pieter Abbeel | |
| Lecture 18 | Bayes' Nets: Inference | Pieter Abbeel | |
| Lecture 19 | Bayes' Nets: Sampling | Pieter Abbeel | Unrecorded, see Fall 2013 Lecture 16 |
| Lecture 20 | Decision Diagrams / Value of Perfect Information | Pieter Abbeel | |
| Lecture 21 | Machine Learning: Naive Bayes | Nicholas Hay | |
| Lecture 22 | Machine Learning: Perceptrons | Pieter Abbeel | |
| Lecture 23 | Machine Learning: Kernels and Clustering | Pieter Abbeel | |
| Lecture 24 | Advanced Applications: NLP, Games, and Robotic Cars | Pieter Abbeel | |
| Lecture 25 | Advanced Applications: Computer Vision and Robotics | Pieter Abbeel | |
| Lecture 26 | Conclusion | Pieter Abbeel | Unrecorded |

Fall 2013

| Lecture | Title | Lecturer | Notes |
|---|---|---|---|
| Lecture 1 | Introduction | Dan Klein | |
| Lecture 2 | Uninformed Search | Dan Klein | |
| Lecture 3 | Informed Search | Dan Klein | |
| Lecture 4 | Constraint Satisfaction Problems I | Dan Klein | |
| Lecture 5 | Constraint Satisfaction Problems II | Dan Klein | |
| Lecture 6 | Adversarial Search | Dan Klein | |
| Lecture 7 | Expectimax and Utilities | Dan Klein | |
| Lecture 8 | Markov Decision Processes I | Dan Klein | |
| Lecture 9 | Markov Decision Processes II | Dan Klein | |
| Lecture 10 | Reinforcement Learning I | Dan Klein | |
| Lecture 11 | Reinforcement Learning II | Dan Klein | |
| Lecture 12 | Probability | Pieter Abbeel | |
| Lecture 13 | Bayes' Nets: Representation | Pieter Abbeel | |
| Lecture 14 | Bayes' Nets: Independence | Dan Klein | |
| Lecture 15 | Bayes' Nets: Inference | Pieter Abbeel | |
| Lecture 16 | Bayes' Nets: Sampling | Pieter Abbeel | |
| Lecture 17 | Decision Diagrams / Value of Perfect Information | Pieter Abbeel | |
| Lecture 18 | Hidden Markov Models | Dan Klein | |
| Lecture 19 | Applications of HMMs / Speech | Dan Klein | |
| Lecture 20 | Machine Learning: Naive Bayes | Dan Klein | |
| Lecture 21 | Machine Learning: Perceptrons | Dan Klein | |
| Lecture 22 | Machine Learning: Kernels and Clustering | Pieter Abbeel | |
| Lecture 23 | Machine Learning: Decision Trees and Neural Nets | Pieter Abbeel | |
| Lecture 24 | Advanced Applications: NLP and Robotic Cars | Dan Klein | Unrecorded, see Spring 2013 Lecture 24 |
| Lecture 25 | Advanced Applications: Computer Vision and Robotics | Pieter Abbeel | |
| Lecture 26 | Conclusion | Dan Klein, Pieter Abbeel | Unrecorded |

Spring 2013

| Lecture | Title | Lecturer | Notes |
|---|---|---|---|
| Lecture 1 | Introduction | Pieter Abbeel | Video Down |
| Lecture 2 | Uninformed Search | Pieter Abbeel | |
| Lecture 3 | Informed Search | Pieter Abbeel | |
| Lecture 4 | Constraint Satisfaction Problems I | Pieter Abbeel | |
| Lecture 5 | Constraint Satisfaction Problems II | Pieter Abbeel | Unrecorded, see Fall 2012 Lecture 5 |
| Lecture 6 | Adversarial Search | Pieter Abbeel | |
| Lecture 7 | Expectimax and Utilities | Pieter Abbeel | |
| Lecture 8 | Markov Decision Processes I | Pieter Abbeel | |
| Lecture 9 | Markov Decision Processes II | Pieter Abbeel | |
| Lecture 10 | Reinforcement Learning I | Pieter Abbeel | |
| Lecture 11 | Reinforcement Learning II | Pieter Abbeel | |
| Lecture 12 | Probability | Pieter Abbeel | |
| Lecture 13 | Bayes' Nets: Representation | Pieter Abbeel | |
| Lecture 14 | Bayes' Nets: Independence | Pieter Abbeel | |
| Lecture 15 | Bayes' Nets: Inference | Pieter Abbeel | |
| Lecture 16 | Bayes' Nets: Sampling | Pieter Abbeel | |
| Lecture 17 | Decision Diagrams / Value of Perfect Information | Pieter Abbeel | |
| Lecture 18 | Hidden Markov Models | Pieter Abbeel | |
| Lecture 19 | Applications of HMMs / Speech | Pieter Abbeel | |
| Lecture 20 | Machine Learning: Naive Bayes | Pieter Abbeel | |
| Lecture 21 | Machine Learning: Perceptrons I | Nicholas Hay | |
| Lecture 22 | Machine Learning: Perceptrons II | Pieter Abbeel | |
| Lecture 23 | Machine Learning: Kernels and Clustering | Pieter Abbeel | |
| Lecture 24 | Advanced Applications: NLP and Robotic Cars | Pieter Abbeel | |
| Lecture 25 | Advanced Applications: Computer Vision and Robotics | Pieter Abbeel | |
| Lecture 26 | Conclusion | Pieter Abbeel | Unrecorded |

Fall 2012

| Lecture | Title | Lecturer | Notes |
|---|---|---|---|
| Lecture 1 | Introduction | Dan Klein | |
| Lecture 2 | Uninformed Search | Dan Klein | |
| Lecture 3 | Informed Search | Dan Klein | |
| Lecture 4 | Constraint Satisfaction Problems I | Dan Klein | |
| Lecture 5 | Constraint Satisfaction Problems II | Dan Klein | |
| Lecture 6 | Adversarial Search | Dan Klein | |
| Lecture 7 | Expectimax and Utilities | Dan Klein | |
| Lecture 8 | Markov Decision Processes I | Dan Klein | |
| Lecture 9 | Markov Decision Processes II | Dan Klein | |
| Lecture 10 | Reinforcement Learning I | Dan Klein | |
| Lecture 11 | Reinforcement Learning II | Dan Klein | |
| Lecture 12 | Probability | Pieter Abbeel | |
| Lecture 13 | Bayes' Nets: Representation | Pieter Abbeel | |
| Lecture 14 | Bayes' Nets: Independence | Pieter Abbeel | |
| Lecture 15 | Bayes' Nets: Inference | Pieter Abbeel | |
| Lecture 16 | Bayes' Nets: Sampling | Pieter Abbeel | |
| Lecture 17 | Decision Diagrams / Value of Perfect Information | Pieter Abbeel | |
| Lecture 18 | Hidden Markov Models | Pieter Abbeel | |
| Lecture 19 | Applications of HMMs / Speech | Dan Klein | |
| Lecture 20 | Machine Learning: Naive Bayes | Dan Klein | |
| Lecture 21 | Machine Learning: Perceptrons | Dan Klein | |
| Lecture 22 | Machine Learning: Kernels and Clustering | Dan Klein | |
| Lecture 23 | Machine Learning: Decision Trees and Neural Nets | Pieter Abbeel | |
| Lecture 24 | Advanced Applications: Computer Vision and Robotics | Pieter Abbeel | |
| Lecture 25 | Advanced Applications: NLP and Robotic Cars | Dan Klein, Pieter Abbeel | Unrecorded |
| Lecture 26 | Conclusion | Dan Klein, Pieter Abbeel | Unrecorded |

Lecture Slides

Here is the complete set of lecture slides, including videos, and videos of demos run in lecture: Slides [~3 GB].
The list below contains all the lecture PowerPoint slides:

Lecture 1: Introduction
Lecture 2: Uninformed Search
Lecture 3: Informed Search
Lecture 4: CSPs I
Lecture 5: CSPs II
Lecture 6: Adversarial Search
Lecture 7: Expectimax Search and Utilities
Lecture 8: MDPs I
Lecture 9: MDPs II
Lecture 10: Reinforcement Learning I
Lecture 11: Reinforcement Learning II
Lecture 12: Probability
Lecture 13: Markov Models
Lecture 14: Hidden Markov Models
Lecture 15: Particle Filters and Applications of HMMs
Lecture 16: Bayes Nets I: Representation
Lecture 17: Bayes Nets II: Independence
Lecture 18: Bayes Nets III: Inference
Lecture 19: Bayes Nets IV: Sampling
Lecture 20: Decision Diagrams and VPI
Lecture 21: Naive Bayes
Lecture 22: Perceptron
Lecture 23: Kernels and Clustering
Lecture 24: Advanced Applications (NLP, Games, Cars)
Lecture 25: Advanced Applications (Computer Vision and Robotics)
Lecture 26: Conclusion

The source files for all live in-lecture demos are being prepared from Berkeley AI for release.

Selected Research Papers

Latest arXiv paper submissions on AI
Peter Norvig - Teach Yourself Programming in Ten Years
How to do Research At the MIT AI Lab
A Roadmap towards Machine Intelligence
Collaborative Filtering with Recurrent Neural Networks (2016)
Wide & Deep Learning for Recommender Systems (2016)
Deep Collaborative Filtering via Marginalized Denoising Auto-encoder (2015)
Nonparametric bayesian multitask collaborative filtering (2013)
Tensorflow: Large-scale machine learning on heterogeneous distributed systems
https://infoscience.epfl.ch/record/82802/files/rr02-46.pdf
Theano: A CPU and GPU math expression compiler
Caffe: Convolutional architecture for fast feature embedding
Chainer: A powerful, flexible and intuitive framework of neural networks
Large Scale Distributed Deep Networks
Large-scale video classification with convolutional neural networks
Efficient Estimation of Word Representations in Vector Space
Grammar as a Foreign Language
Going Deeper with Convolutions
On Rectified Linear Units for Speech Processing
Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups
Multi-digit Number Recognition from Street View Imagery using Deep Convolutional Neural Networks
Google turning its lucrative web search over to AI machines
Stanford Syllabus CS 20SI: Tensorflow for Deep Learning Research
Crowd-Based Personalized Natural Language Explanations for Recommendations
Comparative Study of Deep Learning Software Frameworks
RedditML - What Are You Reading
AI-Powered Social Bots (16 Jun 2017)
The Many Tribes of Artificial Intelligence
Source: https://medium.com/intuitionmachine/infographic-best-practices-in-training-deep-learning-networks-b8a3df1db53
The Deep Learning Roadmap
Source: https://medium.com/intuitionmachine/the-deep-learning-roadmap-f0b4cac7009a
Best Practices for Training Deep Learning Networks
Source: https://medium.com/intuitionmachine/infographic-best-practices-in-training-deep-learning-networks-b8a3df1db53

ML/DL Cheatsheets

Neural Network Architectures
Source: http://www.asimovinstitute.org/neural-network-zoo/
Microsoft Azure Algorithm Flowchart
Source: https://docs.microsoft.com/en-us/azure/machine-learning/machine-learning-algorithm-cheat-sheet
SAS Algorithm Flowchart
Source: http://blogs.sas.com/content/subconsciousmusings/2017/04/12/machine-learning-algorithm-use/
Algorithm Summary
Source: http://machinelearningmastery.com/a-tour-of-machine-learning-algorithms/
Source: http://thinkbigdata.in/best-known-machine-learning-algorithms-infographic/
Algorithm Pro/Con
Source: https://blog.dataiku.com/machine-learning-explained-algorithms-are-your-friend
Python Algorithms
Source: https://www.analyticsvidhya.com/blog/2015/09/full-cheatsheet-machine-learning-algorithms/
Python Basics
Source: http://datasciencefree.com/python.pdf
Source: https://www.datacamp.com/community/tutorials/python-data-science-cheat-sheet-basics#gs.0x1rxEA
Numpy
Source: https://www.dataquest.io/blog/numpy-cheat-sheet/
Source: http://datasciencefree.com/numpy.pdf
Source: https://www.datacamp.com/community/blog/python-numpy-cheat-sheet#gs.Nw3V6CE
Source: https://github.com/donnemartin/data-science-ipython-notebooks/blob/master/numpy/numpy.ipynb
Pandas
Source: http://datasciencefree.com/pandas.pdf
Source: https://www.datacamp.com/community/blog/python-pandas-cheat-sheet#gs.S4P4T=U
Source: https://github.com/donnemartin/data-science-ipython-notebooks/blob/master/pandas/pandas.ipynb
Matplotlib
Source: https://www.datacamp.com/community/blog/python-matplotlib-cheat-sheet
Source: https://github.com/donnemartin/data-science-ipython-notebooks/blob/master/matplotlib/matplotlib.ipynb
Scikit Learn
Source: https://www.datacamp.com/community/blog/scikit-learn-cheat-sheet#gs.fZ2A1Jk
Source: http://peekaboo-vision.blogspot.de/2013/01/machine-learning-cheat-sheet-for-scikit.html
Source: https://github.com/rcompton/mlcheatsheet/blob/master/supervised_learning.ipynb
Tensorflow
Source: https://github.com/aymericdamien/TensorFlow-Examples/blob/master/notebooks/1Introduction/basicoperations.ipynb
Pytorch
Source: https://github.com/bfortuner/pytorch-cheatsheet

Math

Probability
Source: http://www.wzchen.com/s/probability_cheatsheet.pdf
Linear Algebra
Source: https://minireference.com/static/tutorials/linearalgebrain4pages.pdf
Statistics
Source: http://web.mit.edu/~csvoss/Public/usabo/stats_handout.pdf
Calculus
Source: http://tutorial.math.lamar.edu/getfile.aspx?file=B,41,N

How To Self Study AI FAST
youtube
LLM Vibe Score0.4
Human Vibe Score0.89
Tina HuangDec 30, 2023

How To Self Study AI FAST

Want to get ahead in your career using AI? Join my FREE workshop: https://www.lonelyoctopus.com/workshop
Head to http://brilliant.org/TinaHuang/ to get started for free with Brilliant's interactive lessons. The first 200 people will also get 20% off an annual membership.
A video to learn AI skills for my short attention span friends who keep giving up on learning this field.

✉️ NEWSLETTER: https://tinahuang.substack.com/ It's about learning, coding, and generally how to get your sh*t together c:

🤖 AI Lunch & Learn series: https://www.lonelyoctopus.com/email-signup It's a FREE weekly 1hr livestream about AI & tech topics eg. how to build a GPT, how to build AI products, jobs in the era of AI etc.

🐙 Lonely Octopus: https://www.lonelyoctopus.com/ Check it out if you're interested in learning AI & data skills, then applying them to real freelance projects!

🤝 Business Inquiries: https://tally.so/r/mRDV99

🖱️ Links mentioned in video
========================
freeCodeCamp for Python: https://www.youtube.com/watch?v=zOjov-2OZ0E
Python book: https://automatetheboringstuff.com/
Introduction to AI: https://www.youtube.com/watch?v=zjkBMFhNj_g
Prompt engineering course: https://www.deeplearning.ai/short-courses/chatgpt-prompt-engineering-for-developers/
Josh Starmer: https://www.youtube.com/@statquest/
Math for Machine Learning: https://imp.i384100.net/math-for-ml
Stanford Statistics: https://www.coursera.org/learn/stanford-statistics
Brilliant Neural Network course: https://brilliant.org/courses/intro-neural-networks/
Brilliant Intermediate Deep Learning course: https://brilliant.org/courses/artificial-neural-networks/
Deep Learning Course: https://www.youtube.com/watch?v=zxagGtF9MeU&list=PLblh5JKOoLUIxGDQs4LFFD--41Vzf-ME1
Deep Learning Specialization: https://www.coursera.org/specializations/deep-learning
Computer Vision Specialization: https://www.coursera.org/learn/introduction-computer-vision-watson-opencv
Natural Language Processing Specialization: https://www.coursera.org/specializations/natural-language-processing
Beginner project with starter code: https://github.com/fiverrhellotinah/youtubeproject

🔗 Affiliates
========================
My SQL for data science interviews course (10 full interviews): https://365datascience.com/learn-sql-for-data-science-interviews/
365 Data Science: https://365datascience.pxf.io/WD0za3 (link for 57% discount for their complete data science training)
Check out StrataScratch for data science interview prep: https://stratascratch.com/?via=tina

🎥 My filming setup
========================
📷 camera: https://amzn.to/3LHbi7N
🎤 mic: https://amzn.to/3LqoFJb
🔭 tripod: https://amzn.to/3DkjGHe
💡 lights: https://amzn.to/3LmOhqk

⏰ Timestamps
========================
00:00 intro

📲 Socials
========================
instagram: https://www.instagram.com/hellotinah/
linkedin: https://www.linkedin.com/in/tinaw-h/
discord: https://discord.gg/5mMAtprshX

🎥 Other videos you might be interested in
========================
How I consistently study with a full time job: https://www.youtube.com/watch?v=INymz5VwLmk
How I would learn to code (if I could start over): https://www.youtube.com/watch?v=MHPGeQD8TvI&t=84s

🐈‍⬛🐈‍⬛ About me
========================
Hi, my name is Tina and I'm an ex-Meta data scientist turned internet person!

📧 Contact
========================
youtube: youtube comments are by far the best way to get a response from me!
linkedin: https://www.linkedin.com/in/tinaw-h/
email for business inquiries only: hellotinah@gmail.com
========================
Some links are affiliate links and I may receive a small portion of the sales price at no cost to you. I really appreciate your support in helping improve this channel! :)

What is generative AI and how does it work? – The Turing Lectures with Mirella Lapata
youtube
LLM Vibe Score: 0.382
Human Vibe Score: 0.9
The Royal Institution, Oct 12, 2023

What is generative AI and how does it work? – The Turing Lectures with Mirella Lapata

How are technologies like ChatGPT created? And what does the future hold for AI language models? This talk was filmed at the Royal Institution on 29th September 2023, in collaboration with The Alan Turing Institute.

Join this channel to get access to perks: https://www.youtube.com/channel/UCYeF244yNGuFefuFKqxIAXw/join
Watch the Q&A with Mirella here: https://youtu.be/9i2x2HyeW-Y

Generative AI refers to a type of artificial intelligence that involves creating new and original data or content. Unlike traditional AI models that rely on large datasets and algorithms to classify or predict outcomes, generative AI models are designed to learn the underlying patterns and structure of the data and generate novel outputs that mimic human creativity. ChatGPT is perhaps the most well-known example, but the field is far larger and more varied than text generation. Other applications of generative AI include image and video synthesis, speech generation, music composition, and virtual reality. In this lecture, Mirella Lapata will present an overview of this exciting, sometimes controversial, and rapidly evolving field.

Mirella Lapata is professor of natural language processing in the School of Informatics at the University of Edinburgh. Her research focuses on getting computers to understand, reason with, and generate natural language. She is the first recipient (2009) of the British Computer Society and Information Retrieval Specialist Group (BCS/IRSG) Karen Sparck Jones award and a Fellow of the Royal Society of Edinburgh, the ACL, and Academia Europaea.

00:00 Intro
2:38 Generative AI isn't new – so what's changed?
8:43 How did we get to ChatGPT?
12:38 How are Large Language Models created?
22:48 How good can an LLM become?
26:57 Unexpected effects of scaling up LLMs
28:05 How can ChatGPT meet the needs of humans?
32:30 ChatGPT demo
38:07 Are Language Models always right or fair?
40:21 The impact of LLMs on society
42:54 Is AI going to kill us all?
--
A very special thank you to our Patreon supporters who help make these videos happen, especially: modsiw, Anton Ragin, Edward Unthank, Robert L Winer, Andy Carpenter, William Hudson, Don McLaughlin, efkinel lo, Martin Paull, Ben Wynne-Simmons, Ivo Danihelka, Kevin Winoto, Jonathan Killin, Stephan Giersche, William Billy Robillard, Jeffrey Schweitzer, Frances Dunne, jonas.app, Tim Karr, Alan Latteri, David Crowner, Matt Townsend, THOMAS N TAMADA, Andrew McGhee, Paul Brown, David Schick, Dave Ostler, Osian Gwyn Williams, David Lindo, Roger Baker, Rebecca Pan
--
The Ri is on Twitter: http://twitter.com/ri_science
and Facebook: http://www.facebook.com/royalinstitution
and TikTok: https://www.tiktok.com/@ri_science
Listen to the Ri podcast: https://podcasters.spotify.com/pod/show/ri-science-podcast
Our editorial policy: https://www.rigb.org/editing-ri-talks-and-moderating-comments
Subscribe for the latest science videos: http://bit.ly/RiNewsletter
Product links on this page may be affiliate links, which means it won't cost you any extra, but we may earn a small commission if you decide to purchase through the link.
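To make the lecture's "learn the patterns, then generate" idea concrete, here is a minimal Python sketch (illustrative only, not taken from the lecture; the toy corpus and bigram model are stand-ins for the web-scale data and transformer networks Lapata discusses). It shows the core generation loop a language model uses: predict a distribution over the next word, sample from it, append, repeat.

```python
import random
from collections import defaultdict, Counter

# Toy corpus standing in for the massive text datasets real LLMs are trained on.
corpus = "the model predicts the next word and the next word follows the model".split()

# "Training": count how often each word follows each other word (a bigram model).
bigram_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigram_counts[prev][nxt] += 1

def generate(start, length=8):
    """Generate text by repeatedly sampling the next word from the learned distribution."""
    words = [start]
    for _ in range(length):
        followers = bigram_counts[words[-1]]
        if not followers:  # dead end: no observed continuation for this word
            break
        candidates, counts = zip(*followers.items())
        words.append(random.choices(candidates, weights=counts)[0])
    return " ".join(words)

print(generate("the"))
```

A real LLM replaces the bigram table with a neural network trained on billions of tokens, but the generation loop (predict, sample, append) is the same basic mechanism.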

The 9 AI Skills You Need NOW to Stay Ahead of 97% of People
youtube
LLM Vibe Score: 0.289
Human Vibe Score: 0.91
AI Uncovered, May 14, 2023

The 9 AI Skills You Need NOW to Stay Ahead of 97% of People

The 9 AI Skills You Need NOW to Stay Ahead of 97% of People

🔒 Keep Your Digital Life Private: Stay Safe & Secure Online with NordVPN: https://nordvpn.com/safetyfirst

Welcome to our latest educational video, "The 9 AI Skills You Need NOW to Stay Ahead of 97% of People." This video is designed for anyone eager to take a deep dive into the world of artificial intelligence and machine learning. Our goal is to provide you with the most essential AI skills needed to excel in this rapidly evolving field, keeping you ahead of the curve and well-positioned in the job market.

In this comprehensive guide, we explore nine fundamental AI skills, ranging from understanding algorithms to deep learning, data science, natural language processing, computer vision, robotics, and more. We also provide practical tips on how to apply these skills in real-world scenarios, whether you're an AI enthusiast or a seasoned professional.

AI is not just the future; it's here NOW. By acquiring these nine essential AI skills, you can position yourself among the top 3% of people who are ready to shape the future. Don't be left behind as AI transforms industries, from healthcare and finance to entertainment and transportation.

This video is a must-watch for anyone interested in AI, machine learning, data analysis, robotics, or any related field. Whether you're just starting out, looking to upskill, or aiming to stay ahead in your career, these nine AI skills are your key to success.

Remember to subscribe to our channel for more valuable content and hit the notification bell so you never miss an update. Join the conversation in the comments section - we'd love to hear your thoughts on AI and how you plan to incorporate these skills into your career or studies.

So get ready, click play, and let's take a step towards the future together, learning the 9 AI skills you need NOW to stay ahead of 97% of people. Your AI journey starts here. Enjoy the video!

#artificialintelligence #ai #airevolution

Subscribe for more!

Welcome to AI Uncovered, your ultimate destination for exploring the fascinating world of artificial intelligence! Our channel delves deep into the latest AI trends and technology, providing insights into cutting-edge AI tools, AI news, and breakthroughs in artificial general intelligence (AGI). We simplify complex concepts, explaining AI in a way that is accessible to everyone.

At AI Uncovered, we're passionate about uncovering the most captivating stories in AI, including the marvels of ChatGPT and advancements by organizations like OpenAI. Our content spans a wide range of topics, from science news and AI innovations to in-depth discussions on the ethical implications of artificial intelligence. Our mission is to enlighten, inspire, and inform our audience about the rapidly evolving technology landscape.

Whether you're a tech enthusiast, a professional seeking to stay ahead of AI trends, or someone curious about the future of artificial intelligence, AI Uncovered is the perfect place to expand your knowledge. Join us as we uncover the secrets behind AI tools and their potential to revolutionize our world. Subscribe to AI Uncovered and stay tuned for enlightening content that bridges the gap between AI novices and experts, covering AI news, AGI, ChatGPT, OpenAI, artificial intelligence, and more. Together, let's explore the limitless possibilities of technology and AI.

Disclaimer: Some links included in this description might be affiliate links. If you purchase a product or service through the links that we provide, we may receive a small commission. There is no additional charge to you. Thank you for supporting AI Uncovered so we can continue to provide you with free, high-quality content.