VibeBuilders.ai

Details

Explore resources related to details to help implement AI solutions for your business.

How do you learn details / potential strategy about technically important new laws in the jurisdictions you operate in?
reddit
LLM Vibe Score: 0
Human Vibe Score: 1
friendofherschel · This week

How do you learn details / potential strategy about technically important new laws in the jurisdictions you operate in?

I am reading “The Entrepreneur’s Guide to Law and Strategy” and so far it gives a really great overview of these aspects of business. It was published by Wiley (a reputable textbook publisher) in 2018. In one chapter, the authors go into the EU’s “right to be forgotten”, which got me thinking about complying with laws like that. Unfortunately, the latest edition of the book is still nearly 7 years old and was written pre-COVID, pre-genAI, pre-social-network-and-privacy pushback, etc. I assume that every time a new law comes out that can impact my business (say, a random privacy law in California), businesses aren’t just telling their lawyers “use as many hours as you need to read the San Jose papers every day, and all the other papers across the world, and then write me a one-paragraph brief with an outline and the potential changes needed to our business”. They’d spend a fortune. There has to be something I’m missing. Is there a law review for business that I should be following? I operate in the US only at this time. A more technical newspaper (I take the WSJ, but it’s not technical enough for this sort of thing; it might give the “what”, but won’t give a small business owner the “what to do with it”)? PS: I’m the type of person who read every word of my mortgage. I am aware the answer might be “don’t worry about it”. But I do worry about it, and I’m trying to fix that. For example, the insanely popular new lawsuits about website accessibility: I want to avoid things like that (essentially low-hanging lawsuit fruit) before they happen to me.

I fell into the builder's trap and need help getting out
reddit
LLM Vibe Score: 0
Human Vibe Score: 1
stellarcitizen · This week

I fell into the builder's trap and need help getting out

Hi r/startups, First-time technical founder here. Two years ago, I decided to leave the 9-5 grind and build something meaningful. Now, I have (what I believe is) a brilliant technical solution but no clear business case. I’m seeking a cofounder with product and marketing expertise to help pivot my project into a viable business - or start a new one. Details below.

About Me
- 36yo, born in Berlin and moved to San Francisco 8 years ago
- Master's in Software Engineering with 15 years of experience
- Worked with early-stage startups in Berlin and a venture studio in SF
- Spent the past years leading a team of 12 shipping enterprise software

The tech I've built
An AI engine that makes it easy for developers to automate their workflows. It works with code, issues, PRs and integrates with 3rd party systems like error trackers, wikis, ticketing systems, etc. It takes natural language instructions, fulfills them autonomously and responds with a result. The functionality is served as a platform, with an API and an SDK. On top of it, I've built a CLI and a web application with productivity tools for developers.

Who and what I'm looking for
My main goal is to leave my current job and build a company around a problem that matters to me, ideally with considerable equity. I’m looking for:
- A cofounder with product and marketing expertise who sees potential in my tech and can help turn it into a successful business—or someone with a strong business case who needs a technical founder.
- Mentorship from someone experienced in dev tool startups or as a successful solo founder. I’d love to learn from your journey and would be happy to offer my technical expertise or collaborate on projects in return.

Happy to answer any questions or provide more details. Cheers!
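To make the platform shape described above more concrete, here is a purely hypothetical sketch of what a client call to an engine like this could look like. The endpoint, payload fields, and response keys are invented for illustration and do not reflect the author's actual API.

```python
# Purely hypothetical sketch: what submitting a task to an engine like the one
# described above might look like. The endpoint, auth scheme, payload fields,
# and response keys are all invented for illustration.
import requests

API_URL = "https://api.example-devagent.dev/v1/tasks"  # placeholder endpoint
API_KEY = "sk-placeholder"                              # placeholder credential

task = {
    # Natural-language instruction, as described in the post
    "instruction": "Summarize all open PRs touching the billing module and open a tracking issue",
    # Third-party systems the engine is allowed to consult
    "integrations": ["github", "sentry"],
}

resp = requests.post(
    API_URL,
    json=task,
    headers={"Authorization": f"Bearer {API_KEY}"},
    timeout=60,
)
resp.raise_for_status()
print(resp.json().get("result"))  # hypothetical response field
```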

What questions to ask to evaluate an offer from a startup?
reddit
LLM Vibe Score: 0
Human Vibe Score: 1
xcitech · This week

What questions to ask to evaluate an offer from a startup?

Hello! I am presently working as a Data Scientist at a medium-sized company. Last year my boss left the company to start his own. Very recently his non-solicitation clause expired, and he asked me to join his startup. While I know almost everything about the product idea and the technical side of the startup, I have very little information on more critical points like funding, equity sharing, etc. He has made a verbal, unofficial offer, and I have asked for a week to prepare a list of questions so that I can evaluate it. Since I have no knowledge of the startup scene, I would like some help with the questions I should put forward to him. Here is what I know so far about the company and the offer: The company was started by two people, both working full time on it. I would be the third person on the team. The startup aims to introduce AI in a field which has lagged behind in the adoption of technology by at least 2 decades. The big players in this field are conservative, but they are now opening up to embracing new technology. Personally, I have confidence in their idea and feel this will be a sustainable and profitable company. The offered salary is about 60% of what I make right now. The equity offered is 2%. I do not know the details of the funding they have received so far or the equity split. Any pointers to help me frame my questions for evaluating the offer would be very helpful! Thank you
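One way to start framing those questions is a back-of-envelope comparison of the salary being given up against what the 2% would have to be worth. The sketch below is only illustrative: the current salary, vesting period, and dilution figures are placeholder assumptions, not numbers from the post.

```python
# Back-of-envelope math for the trade-off described above.
# All inputs are placeholders; dilution, vesting, and exit odds are guesses to plug in.
current_salary = 150_000                    # hypothetical current pay
offered_salary = 0.60 * current_salary      # 60% of current, as in the post
equity_pct = 0.02                           # 2% equity, as in the post
vesting_years = 4                           # assumed standard vesting
expected_dilution = 0.50                    # assume future rounds halve the stake

forgone_pay = (current_salary - offered_salary) * vesting_years
diluted_stake = equity_pct * (1 - expected_dilution)

# Exit value at which the equity merely pays back the salary given up.
breakeven_exit = forgone_pay / diluted_stake
print(f"Forgone salary over {vesting_years} years: ${forgone_pay:,.0f}")
print(f"Company must exit above ~${breakeven_exit:,.0f} for the diluted 2% to break even")
```

With these placeholder numbers the break-even exit is around $24M, which suggests questions about current valuation, the cap table, expected dilution, and vesting terms.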

The Drawing of the Three - Once you look through the veil, nothing is the same again. (I will not promote)
reddit
LLM Vibe Score: 0
Human Vibe Score: 0.333
Tim-Sylvester · This week

The Drawing of the Three - Once you look through the veil, nothing is the same again. (I will not promote)

Originally published Nov 5, 2024

In my last post, I talked about assembling a series of filters to use to view the startup landscape, which led me to a few conclusions about what opportunities I should pursue. What did I see through those filters? What I saw through the moiré pattern of those two lists overlaid on one another is what I think will be the third great monetization strategy for the internet, matching the pattern of:
web1 => ad monetization
web2 => subscription monetization
web3 => for AI, neither of those works anymore, which demands something new.
But what? Well, that’s the important part, isn’t it? Should I just up and tell you? Yawn. The climax of a movie is at the climax; if they tell you the crux at the beginning, it’s a lot less fun (usually). The standard bearer for web1 and ads was Google (with countless followers), and essentially every website adopted that model for their first pass at content monetization. Google has been… let’s call it fairly successful… so it’s not a bad way to look at things. How many websites live and die by selling advertising? The standard bearers for web2 and subscriptions were Salesforce (for B2B SaaS) and Netflix (for B2C SaaS), with countless followers, to the extent that SaaS has been the dominant startup monetization thesis for the last 15+ years. It’s older and more tired by now than most American politicians, but how many websites live and die by people entering payment details for a monthly or annual subscription? Evidence proves those models for web1 and web2 worked well enough that countless businesses depend on them, and countless fortunes have been made and lost surfing the waves, or crashing against the shorelines, of ads and subs. But it’s also apparent (to me, at least) that now that AI is the dominant startup thesis, neither ads nor subs are going to prevail in an AI-centered world, and for one simple reason: those monetization strategies are for humans, and AI bots are not humans.

Changing Environments Require Changing Strategies
Every so often, there’s a fundamental shift that demands everything in the ecosystem adapt to a new habitation strategy to survive. We’ve seen this repeatedly across Earth’s ecology (for instance, the introduction of free oxygen to the atmosphere, which made respiration possible while destroying the life forms that existed before oxygen permeated the atmosphere), and across human society (for example, how nuclear bombs changed war, and how drones are changing it again; for less violent examples, consider the adoption of computers and the subsequent adoption of smartphones). Now the ecosystem of the internet has changed irrevocably, opening up countless new and interesting niches to occupy. Humans may see an ad and buy something stupid (or, occasionally, not-stupid), but an AI won’t unless it’s programmed to. And subscriptions are designed for humans to consume content at a human rate, not for an AI that can choke down an entire database of content (whatever it may be) at whatever speed the servers can manage. Changing conditions require changing strategies. It was clear to me that:
- The introduction of AI bots to the internet ecosystem was, is, and will be massively disruptive for a very long time
- The internet population of bots already exceeds humans and is growing faster than the human population
- The two dominant monetization strategies are not relevant to bots
That disruption of expectations across the ecosystem demands a third strategy, a new strategy to handle a massive change in an existing system.
And that strategy needs to accommodate, support, and monetize the new demands from the vast armies of new participants in the internet ecology. A method that converts bots from an expense into a revenue source would therefore become a dominant monetization strategy, and whoever owns that strategy will be a dominant player in the internet ecosystem. Setting the realization of semi-practical, semi-useful AI against a backdrop of technology cycles that, in the distant past (in internet terms), produced ads and subs, and more recently produced enormous investment in fintech and crypto, I started to see a path that felt like it would grow over time to become a new monetization strategy that works in the AI ecosystem.

Sun Tzu had a couple drinks, saw a couple things…
There are at least, and possibly only, two things I know about fighting: you cannot fight the tide, and it’s much harder to fight an uphill battle. If my whole thesis on this go-around was to go with the flow, and that trickle of insight was leading me from my overlook along a roaring flow of cash coursing through a valley filled with AI startups, where exactly would it lead me? Most rivers lead to the sea eventually, but they can take winding paths, and sometimes the quickest route from the mountain to the sea isn’t to follow the river, but to understand where the river leads and go there instead. Getting a view from on high can save you a lot of time on your journey. But before I get to where the path has led (or is leading), which will explain the objective I’ve identified and the deliverables I have to produce to reach it, let’s talk about a few of the steps on the path I’ve been taking that highlight the process I followed. I figure if I explain the steps I’m taking, as I’m taking them, it may be easier for people who haven’t trod this route before to follow me and understand how to carve their own course towards their own objectives. And maybe the real treasure will be the friends we make along the way. (I will not promote)

How a founder built a B2B AI startup serving 65+ global brands (including Fortune 500 companies) (I will not promote)
reddit
LLM Vibe Score: 0
Human Vibe Score: 1
Royal_Rest8409 · This week

How a founder built a B2B AI startup serving 65+ global brands (including Fortune 500 companies) (I will not promote)

AI Palette is an AI-driven platform that helps food and beverage companies predict emerging product trends. I had the opportunity recently to sit down with the founder to get his advice on building an AI-first startup, which he'll be going through in this post. (I will not promote)

About AI Palette:
- Co-founders: 2 (Somsubhra GanChoudhuri, Himanshu Upreti)
- 100+
- $12.7M USD
- AI-powered predictive analytics for the CPG (Consumer Packaged Goods) industry
- Signed first paying customer in the first year
- 65+ global brands, including Cargill, Diageo, Ajinomoto, Symrise, Mondelez, and L’Oréal, use AI Palette
- Every new product launched has secured a paying client within months
- Expanded into Beauty & Personal Care (BPC), onboarding one of India’s largest BPC companies within weeks
- Launched multiple new product lines in the last two years, creating a unified suite for brand innovation

Identify the pain points in your industry for ideas
When I was working in the flavour and fragrance industry, I noticed a major issue CPG companies faced: launching a product took at least one to two years. For instance, if a company decided today to launch a new juice, it wouldn’t hit the market until 2027. This long timeline made it difficult to stay relevant and on top of trends. Another big problem I noticed was that companies relied heavily on market research to determine what products to launch. While this might work for current consumer preferences, it was highly inefficient since the product wouldn’t actually reach the market for several years. By the time the product launched, the consumer trends had already shifted, making that research outdated. That’s where AI can play a crucial role. Instead of looking at what consumers like today, we realised that companies should use AI to predict what they will want next. This allows businesses to create products that are ahead of the curve. Right now, the failure rate for new product launches is alarmingly high, with 8 out of 10 products failing. By leveraging AI, companies can avoid wasting resources on products that won’t succeed, leading to better, more successful launches.

Start by talking to as many industry experts as possible to identify the real problems
When we first had the idea for AI Palette, it was just a hunch, a gut feeling—we had no idea whether people would actually pay for it. To validate the idea, we reached out to as many people as we could within the industry. Since our focus area was all about consumer insights, we spoke to professionals in the CPG sector, particularly those in the insights departments of CPG companies. Through these early conversations, we began to see a common pattern emerge and identified the exact problem we wanted to solve.

Don’t tell people what you’re building—listen to their frustrations and challenges first.
Going into these early customer conversations, our goal was to listen and understand their challenges without telling them what we were trying to build. This is crucial as it ensures that you can gather as much data about the problem to truly understand it and that you aren't biasing their answers by showing your solution. This process helped us in two key ways: First, it validated that there was a real problem in the industry through the number of people who spoke about experiencing the same problem. Second, it allowed us to understand the exact scale and depth of the problem—e.g., how much money companies were spending on consumer research, what kind of tools they were currently using, etc.
Narrow down your focus to a small, actionable area to solve initially. Once we were certain that there was a clear problem worth solving, we didn’t try to tackle everything at once. As a small team of two people, we started by focusing on a specific area of the problem—something big enough to matter but small enough for us to handle. Then, we approached customers with a potential solution and asked them for feedback. We learnt that our solution seemed promising, but we wanted to validate it further. If customers are willing to pay you for the solution, it’s a strong validation signal for market demand. One of our early customer interviewees even asked us to deliver the solution, which we did manually at first. We used machine learning models to analyse the data and presented the results in a slide deck. They paid us for the work, which was a critical moment. It meant we had something with real potential, and we had customers willing to pay us before we had even built the full product. This was the key validation that we needed. By the time we were ready to build the product, we had already gathered crucial insights from our early customers. We understood the specific information they wanted and how they wanted the results to be presented. This input was invaluable in shaping the development of our final product. Building & Product Development Start with a simple concept/design to validate with customers before building When we realised the problem and solution, we began by designing the product, but not by jumping straight into coding. Instead, we created wireframes and user interfaces using tools like InVision and Figma. This allowed us to visually represent the product without the need for backend or frontend development at first. The goal was to showcase how the product would look and feel, helping potential customers understand its value before we even started building. We showed these designs to potential customers and asked for feedback. Would they want to buy this product? Would they pay for it? We didn’t dive into actual development until we found a customer willing to pay a significant amount for the solution. This approach helped us ensure we were on the right track and didn’t waste time or resources building something customers didn’t actually want. Deliver your solution using a manual consulting approach before developing an automated product Initially, we solved problems for customers in a more "consulting" manner, delivering insights manually. Recall how I mentioned that when one of our early customer interviewees asked us to deliver the solution, we initially did it manually by using machine learning models to analyse the data and presenting the results to them in a slide deck. This works for the initial stages of validating your solution, as you don't want to invest too much time into building a full-blown MVP before understanding the exact features and functionalities that your users want. However, after confirming that customers were willing to pay for what we provided, we moved forward with actual product development. This shift from a manual service to product development was key to scaling in a sustainable manner, as our building was guided by real-world feedback and insights rather than intuition. Let ongoing customer feedback drive iteration and the product roadmap Once we built the first version of the product, it was basic, solving only one problem. 
But as we worked closely with customers, they requested additional features and functionalities to make it more useful. As a result, we continued to evolve the product to handle more complex use cases, gradually developing new modules based on customer feedback. Product development is a continuous process. Our early customers pushed us to expand features and modules, from solving just 20% of their problems to tackling 50–60% of their needs. These demands shaped our product roadmap and guided the development of new features, ultimately resulting in a more complete solution. Revenue and user numbers are key metrics for assessing product-market fit. However, critical mass varies across industries Product-market fit (PMF) can often be gauged by looking at the size of your revenue and the number of customers you're serving. Once you've reached a certain critical mass of customers, you can usually tell that you're starting to hit product-market fit. However, this critical mass varies by industry and the type of customers you're targeting. For example, if you're building an app for a broad consumer market, you may need thousands of users. But for enterprise software, product-market fit may be reached with just a few dozen key customers. Compare customer engagement and retention with other available solutions on the market for product-market fit Revenue and the number of customers alone isn't always enough to determine if you're reaching product-market fit. The type of customer and the use case for your product also matter. The level of engagement with your product—how much time users are spending on the platform—is also an important metric to track. The more time they spend, the more likely it is that your product is meeting a crucial need. Another way to evaluate product-market fit is by assessing retention, i.e whether users are returning to your platform and relying on it consistently, as compared to other solutions available. That's another key indication that your solution is gaining traction in the market. Business Model & Monetisation Prioritise scalability Initially, we started with a consulting-type model where we tailor-made specific solutions for each customer use-case we encountered and delivered the CPG insights manually, but we soon realized that this wasn't scalable. The problem with consulting is that you need to do the same work repeatedly for every new project, which requires a large team to handle the workload. That is not how you sustain a high-growth startup. To solve this, we focused on building a product that would address the most common problems faced by our customers. Once built, this product could be sold to thousands of customers without significant overheads, making the business scalable. With this in mind, we decided on a SaaS (Software as a Service) business model. The benefit of SaaS is that once you create the software, you can sell it to many customers without adding extra overhead. This results in a business with higher margins, where the same product can serve many customers simultaneously, making it much more efficient than the consulting model. Adopt a predictable, simplistic business model for efficiency. Look to industry practices for guidance When it came to monetisation, we considered the needs of our CPG customers, who I knew from experience were already accustomed to paying annual subscriptions for sales databases and other software services. We decided to adopt the same model and charge our customers an annual upfront fee. 
This model worked well for our target market, aligning with industry standards and ensuring stable, recurring revenue. Moreover, our target CPG customers were already used to this business model and didn't have to choose from a huge variety of payment options, making closing sales a straightforward and efficient process. Marketing & Sales Educate the market to position yourself as a thought leader When we started, AI was not widely understood, especially in the CPG industry. We had to create awareness around both AI and its potential value. Our strategy focused on educating potential users and customers about AI, its relevance, and why they should invest in it. This education was crucial to the success of our marketing efforts. To establish credibility, we adopted a thought leadership approach. We wrote blogs on the importance of AI and how it could solve problems for CPG companies. We also participated in events and conferences to demonstrate our expertise in applying AI to the industry. This helped us build our brand and reputation as leaders in the AI space for CPG, and word-of-mouth spread as customers recognized us as the go-to company for AI solutions. It’s tempting for startups to offer products for free in the hopes of gaining early traction with customers, but this approach doesn't work in the long run. Free offerings don’t establish the value of your product, and customers may not take them seriously. You should always charge for pilots, even if the fee is minimal, to ensure that the customer is serious about potentially working with you, and that they are committed and engaged with the product. Pilots/POCs/Demos should aim to give a "flavour" of what you can deliver A paid pilot/POC trial also gives you the opportunity to provide a “flavour” of what your product can deliver, helping to build confidence and trust with the client. It allows customers to experience a detailed preview of what your product can do, which builds anticipation and desire for the full functionality. During this phase, ensure your product is built to give them a taste of the value you can provide, which sets the stage for a broader, more impactful adoption down the line. Fundraising & Financial Management Leverage PR to generate inbound interest from VCs When it comes to fundraising, our approach was fairly traditional—we reached out to VCs and used connections from existing investors to make introductions. However, looking back, one thing that really helped us build momentum during our fundraising process was getting featured in Tech in Asia. This wasn’t planned; it just so happened that Tech in Asia was doing a series on AI startups in Southeast Asia and they reached out to us for an article. During the interview, they asked if we were fundraising, and we mentioned that we were. As a result, several VCs we hadn’t yet contacted reached out to us. This inbound interest was incredibly valuable, and we found it far more effective than our outbound efforts. So, if you can, try to generate some PR attention—it can help create inbound interest from VCs, and that interest is typically much stronger and more promising than any outbound strategies because they've gone out of their way to reach out to you. Be well-prepared and deliberate about fundraising. Keep trying and don't lose heart When pitching to VCs, it’s crucial to be thoroughly prepared, as you typically only get one shot at making an impression. If you mess up, it’s unlikely they’ll give you a second chance. 
You need to have key metrics at your fingertips, especially if you're running a SaaS company. Be ready to answer questions like: What’s your retention rate? What are your projections for the year? How much will you close? What’s your average contract value? These numbers should be at the top of your mind. Additionally, fundraising should be treated as a structured process, not something you do on the side while juggling other tasks. When you start, create a clear plan: identify 20 VCs to reach out to each week. By planning ahead, you’ll maintain momentum and speed up the process. Fundraising can be exhausting and disheartening, especially when you face multiple rejections. Remember, you just need one investor to say yes to make it all worthwhile. When using funds, prioritise profitability and grow only when necessary. Don't rely on funding to survive. In the past, the common advice for startups was to raise money, burn through it quickly, and use it to boost revenue numbers, even if that meant operating at a loss. The idea was that profitability wasn’t the main focus, and the goal was to show rapid growth for the next funding round. However, times have changed, especially with the shift from “funding summer” to “funding winter.” My advice now is to aim for profitability as soon as possible and grow only when it's truly needed. For example, it’s tempting to hire a large team when you have substantial funds in the bank, but ask yourself: Do you really need 10 new hires, or could you get by with just four? Growing too quickly can lead to unnecessary expenses, so focus on reaching profitability as soon as possible, rather than just inflating your team or burn rate. The key takeaway is to spend your funds wisely and only when absolutely necessary to reach profitability. You want to avoid becoming dependent on future VC investments to keep your company afloat. Instead, prioritize reaching break-even as quickly as you can, so you're not reliant on external funding to survive in the long run. Team-Building & Leadership Look for complementary skill sets in co-founders When choosing a co-founder, it’s important to find someone with a complementary skill set, not just someone you’re close to. For example, I come from a business and commercial background, so I needed someone with technical expertise. That’s when I found my co-founder, Himanshu, who had experience in machine learning and AI. He was a great match because his technical knowledge complemented my business skills, and together we formed a strong team. It might seem natural to choose your best friend as your co-founder, but this can often lead to conflict. Chances are, you and your best friend share similar interests, skills, and backgrounds, which doesn’t bring diversity to the table. If both of you come from the same industry or have the same strengths, you may end up butting heads on how things should be done. Having diverse skill sets helps avoid this and fosters a more collaborative working relationship. Himanshu (left) and Somsubhra (right) co-founded AI Palette in 2018 Define roles clearly to prevent co-founder conflict To avoid conflict, it’s essential that your roles as co-founders are clearly defined from the beginning. If your co-founder and you have distinct responsibilities, there is no room for overlap or disagreement. This ensures that both of you can work without stepping on each other's toes, and there’s mutual respect for each other’s expertise. 
This is another reason as to why it helps to have a co-founder with a complementary skillset to yours. Not only is having similar industry backgrounds and skillsets not particularly useful when building out your startup, it's also more likely to lead to conflicts since you both have similar subject expertise. On the other hand, if your co-founder is an expert in something that you're not, you're less likely to argue with them about their decisions regarding that aspect of the business and vice versa when it comes to your decisions. Look for employees who are driven by your mission, not salary For early-stage startups, the first hires are crucial. These employees need to be highly motivated and excited about the mission. Since the salary will likely be low and the work demanding, they must be driven by something beyond just the paycheck. The right employees are the swash-buckling pirates and romantics, i.e those who are genuinely passionate about the startup’s vision and want to be part of something impactful beyond material gains. When employees are motivated by the mission, they are more likely to stick around and help take the startup to greater heights. A litmus test for hiring: Would you be excited to work with them on a Sunday? One of the most important rounds in the hiring process is the culture fit round. This is where you assess whether a candidate shares the same values as you and your team. A key question to ask yourself is: "Would I be excited to work with this person on a Sunday?" If there’s any doubt about your answer, it’s likely not a good fit. The idea is that you want employees who align with the company's culture and values and who you would enjoy collaborating with even outside of regular work hours. How we structure the team at AI Palette We have three broad functions in our organization. The first two are the big ones: Technical Team – This is the core of our product and technology. This team is responsible for product development and incorporating customer feedback into improving the technology Commercial Team – This includes sales, marketing, customer service, account managers, and so on, handling everything related to business growth and customer relations. General and Administrative Team – This smaller team supports functions like finance, HR, and administration. As with almost all businesses, we have teams that address the two core tasks of building (technical team) and selling (commercial team), but given the size we're at now, having the administrative team helps smoothen operations. Set broad goals but let your teams decide on execution What I've done is recruit highly skilled people who don't need me to micromanage them on a day-to-day basis. They're experts in their roles, and as Steve Jobs said, when you hire the right person, you don't have to tell them what to do—they understand the purpose and tell you what to do. So, my job as the CEO is to set the broader goals for them, review the plans they have to achieve those goals, and periodically check in on progress. For example, if our broad goal is to meet a certain revenue target, I break it down across teams: For the sales team, I’ll look at how they plan to hit that target—how many customers they need to sell to, how many salespeople they need, and what tactics and strategies they plan to use. 
For the technical team, I’ll evaluate our product offerings—whether they think we need to build new products to attract more customers, and whether they think it's scalable for the number of customers we plan to serve. This way, the entire organization's tasks are cascaded in alignment with our overarching goals, with me setting the direction and leaving the details of execution to the skilled team members that I hire.
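As a purely illustrative aside (not AI Palette's actual method), the "predict what consumers will want next" idea described at the top of this post can be pictured as extrapolating a trend from how often a flavour or ingredient gets mentioned over time. The numbers below are made up; a real system would mine social, retail, and search data rather than a hand-typed series.

```python
# Toy illustration of trend extrapolation, not AI Palette's actual models.
# Mention counts are fabricated placeholders for e.g. "yuzu flavour" posts per month.
import numpy as np

months = np.arange(12)                                                   # last 12 months
mentions = np.array([40, 42, 45, 50, 48, 55, 60, 63, 70, 74, 80, 88])    # fabricated counts

slope, intercept = np.polyfit(months, mentions, 1)        # fit a simple linear trend
forecast_next_6 = slope * np.arange(12, 18) + intercept   # extrapolate 6 months ahead

print(f"Growth: ~{slope:.1f} extra mentions per month")
print("6-month forecast:", np.round(forecast_next_6).astype(int))
```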

Upselling from $8/mo to $2k/mo
reddit
LLM Vibe Score: 0
Human Vibe Score: 1
Afraid-Astronomer130 · This week

Upselling from $8/mo to $2k/mo

I just closed a client for $1947/mo. But 5 months ago he was spending only $8/mo. Most customers have way more purchasing power than you think. Unlock it with the power of stacking. Here's my 3-step stacking formula:

Step 1 - Build trust with a low-ticket product
In a world full of scams and deceit, building trust is damn hard. The best way to combat skepticism is through a free or low-ticket product, where you can go above and beyond to demonstrate your credibility. When I first onboarded this client onto my SaaS, an AI to help you with HARO link-building, my product was at a very early stage with many rough edges. He gave me lots of great feedback. I implemented his suggestions the same day and got more feedback from him. After a couple of back-and-forths, I established myself as a trustworthy hustler, instead of just a stranger online. This is easy to do for an agile startup but impossible for big companies, so make good use of opportunities like this to build long-term relationships. Turn your customers into raving fans.

Step 2 - Validate a mid-ticket offer
Three months into his subscription, he told me he wanted to cancel. When digging into the why, he suggested a performance-based DFY service to remove all the work on his end. Inspired by his suggestion, I took him on, along with 6 other clients, for $237, a one-time package for 1 backlink. It was sold through my newsletter email blast to 300 subscribers, with a total CAC of $0. I wrote about the details of this launch in another long-form post. At this price range, impulsive purchases can still happen if you have a strong offer and good copywriting. Use this mid-ticket offer to validate your offer and positioning, build out a team, and establish trust. We went beyond the 1 link for almost all our clients, including this one in particular. For $237, we got him on Forbes, HubSpot, 2 DR50+ sites, and a few other smaller media outlets. By doing this, we further built trust into the relationship and established authority in what we do.

Step 3 - Create a high-ticket subscription-based offer
By now, you'll hopefully have built enough trust to get through the skepticism filter for something high-ticket. Now, it's time to develop an offer that amplifies your previous one. Something that allows your clients to achieve their goals to the maximum extent. For me, this is pitching every relevant media query on every platform for this client every day, to leverage HARO link-building to its full extent, all for a fixed price of $1947/mo. This customized offer is based on direct client feedback and isn't publicized on our website, but we're confident it will directly contribute to achieving this client's goal. A subscription-based offer is much superior because it allows you to create a stable source of revenue, especially at the early stage.

That's how I created 3 different offers to solve the same problem for one client. By stacking each offer on top of the previous one, I was able to guide clients from one option to the next. This formula isn't some new rocket science I came up with. It's proven over and over again by other agency owners building in public, like Nick from Baked Design, who started with a $9 design kit and now sells $9k/mo design subscriptions at $1M ARR. By stacking offers, you position yourself as a committed partner in your client's long-term success. Lastly, I want to address a common objection: "My customers can't afford $2k/month." But consider this: most people are reading your site on their $3000 MacBook or $1000 iPhone. It's not that they lack the funds, it's more likely that your service isn't meeting their expectations. Talk to them to discover the irresistible offer they'll gladly pay for.

Update: lots of DMs asking for more specifics, so I wrote about it here: https://coldstartblueprint.com/p/ai-agent-email-list-building

Looking for a technical cofounder with experience in building websites and marketplaces
reddit
LLM Vibe Score: 0
Human Vibe Score: 1
SlideZealousideal540 · This week

Looking for a technical cofounder with experience in building websites and marketplaces

Are you passionate about revolutionizing traditional processes? Do you have the expertise to build scalable platforms and want to be part of something transformative? I’m a second-year Economics student at the University of Warwick with a deep drive for creating impactful solutions. I’m seeking a technical co-founder to join me in building a startup dedicated to transforming how startups hire entry-level talent.

About the Project
I’m developing a recruitment marketplace that connects early-stage and growing startups with talented students and graduates. Our goal is to streamline the hiring process, making it hassle-free for startups while creating meaningful career opportunities for the next generation of talent.

What I’m Looking For in a Technical Co-Founder
I need someone who can complement my non-technical skills and help take this project to the next level. The ideal co-founder will have:
- A strong background in programming online marketplace platforms.
- Experience managing large databases efficiently.
- Knowledge in machine learning and AI, with a vision to integrate these in future features.
- Skills in scaling online platforms for a larger audience.
- The ability to work in synergy with me to shape and execute the vision.
- A passion for the idea—I’m happy to share more details in a meeting!
Key responsibilities will include platform development, handling backend work, deploying the MVP, aiding in design, and collaborating on product iterations.

About Me
I bring experience in business strategy, operations, finance, product/project management, marketing, and sales—essentially, I cover everything except the technical aspects of development. I previously worked on a social communication platform for school students during high school. I also gained valuable experience as a business analyst in another startup.

Why Join Me?
This is an exciting opportunity to build a product from the ground up, make an impact in the startup ecosystem, and grow alongside a venture poised to redefine hiring. We need:
- A seamless MVP launch.
- Networking efforts to onboard startups and expand our reach.
Together, we can create something transformative, fostering innovation and enabling career growth for students while helping startups find the talent they need to succeed. If you’re excited about the prospect of building something revolutionary and have the technical skills to complement my business acumen, I’d love to connect. Let’s discuss how we can work together to create the next generation of hiring solutions. Please DM if you are interested in getting to know more about this project! Looking forward to hearing from you!

We built a tool to help you find relevant grants. Would you pay for it?
reddit
LLM Vibe Score: 0
Human Vibe Score: 1
Cliznitch · This week

We built a tool to help you find relevant grants. Would you pay for it?

Hi everyone, About a year ago, I asked you guys whether it would make sense to develop a tool to help entrepreneurs find relevant grants. Many of you provided incredibly valuable feedback, which we used to refine the concept. With this concept, we went through Techstars and finally launched a beta version of our grant scan tool last week! Along the way, we realized something interesting: when you ask a grant advisor which grants might be a great fit for you, they almost always recommend the ones they know well. This makes sense since most work on a success fee basis, and referring you to lesser-known grants (which take more time to write and have lower success rates) isn’t worth it for them. Plus, memorizing the details of 20,000+ grants is, understandably, pretty tough. Our platform uses AI to scan and analyze thousands of grants. It identifies the best matches, estimates your chances of success, and calculates how much time you might need for the application and reporting phases. We can then match you with a grant advisor with relevant expertise—whether to write the application for you or provide feedback on your draft. We’re considering launching both a free and a paid version. The free version would provide basic insights, while the paid version would include more comprehensive results, expert comments (such as explaining why certain grants are a good fit), and updates when new relevant grants become available. Both versions will allow you to connect with relevant experts. Would you pay for the paid version? And if so, which features should it include? Also, any general feedback is much appreciated! Thanks!
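For readers curious what "scanning and analyzing thousands of grants" can look like in its simplest form, here is an illustrative sketch that ranks grant descriptions against a company profile using TF-IDF similarity. The grant texts and profile are made-up placeholders, and the actual product's matching, success-probability, and effort-estimation logic is not described in the post.

```python
# Illustrative sketch only: a naive way to rank grants against a company profile.
# Grant descriptions and the company profile below are fabricated placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

grants = {
    "GreenTech Pilot Fund": "Grants for SMEs piloting energy-efficient manufacturing processes.",
    "Digital Health Voucher": "Support for startups validating digital health applications with hospitals.",
    "Export Readiness Grant": "Funding for companies preparing to enter new export markets.",
}

company_profile = "Early-stage startup building AI software to cut energy use in factories."

# Vectorize the profile together with every grant description.
vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform([company_profile] + list(grants.values()))

# Similarity of the company profile (row 0) to each grant description.
scores = cosine_similarity(matrix[0], matrix[1:]).ravel()
for name, score in sorted(zip(grants, scores), key=lambda x: -x[1]):
    print(f"{score:.2f}  {name}")
```

A production system would presumably go well beyond this (embeddings, eligibility rules, deadlines, success-rate data), but the ranking idea is the same.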

36 startup ideas found by analyzing podcasts (problem, solution & source episode)
reddit
LLM Vibe Score: 0
Human Vibe Score: 1
joepigeon · This week

36 startup ideas found by analyzing podcasts (problem, solution & source episode)

Hey, I've been a bit of a podcast nerd for a long time. Around a year ago I began experimenting with transcription of podcasts for a SaaS I was running. I realized pretty quickly that there's a lot of knowledge and value in podcast discussions that is, for all intents and purposes, entirely unsearchable and undiscoverable to most people. I ended up stopping work on that SaaS product (partly for lack of product/market fit, and partly because podcasting was far more interesting), and focusing on the podcast technology full-time instead. I'm a long-time lurker and poster on r/startups and thought this would make for some interesting content and inspiration for folks. Given I'm in this space, have millions of transcripts, and transcribe thousands daily... I've been exploring fun ways to expose some of the interesting knowledge and conversations taking place that utilize our own data/API. I'm a big fan of the usual startup podcasts (My First Million, Greg Isenberg, etc.) and so I built an automation that turns all of the startup ideas discussed into a weekly email digest. I always struggle to listen to as many episodes as I'd actually like to, so I thought I'd summarise the stuff I care about instead (startup opportunities being discussed). I thought it would be interesting to post some of the ideas extracted so far. They range from being completely whacky and blue sky, to pretty boring but realistic. A word of warning before anyone complains: this is a big mixture of tech, AI, non-tech, local services, etc. ideas:
- Some of the ideas are completely mundane, but realistic (e.g. local window cleaning service)
- Some of the ideas are completely insane, blue sky, but sound super interesting

Here's the latest 36 ideas:

|Idea Name|Problem|Solution|Source|
|:-|:-|:-|:-|
|SalesForce-as-a-Service - White Label Enterprise Sales Teams|White-label enterprise sales teams for B2B SaaS. Companies need sales but can't hire/train. Recruit retail sellers, train for tech, charge 30% of deals closed.|Create a white-label enterprise sales team by recruiting natural salespeople from retail and direct sales backgrounds (e.g. mall kiosks, cutco knives). Train them specifically in B2B SaaS sales techniques and processes. Offer this trained sales force to tech companies on a contract basis.|My First Million - "Life Hacks From The King of Introverts + 7 Business Ideas|
|TechButler - Mobile Device Maintenance Service|Mobile tech maintenance service. Clean/optimize devices, improve WiFi, basic support. $100/visit to homes. Target affluent neighborhoods.|Mobile tech support service providing in-home device cleaning, optimization, and setup. Focus on common issues like WiFi improvement, device maintenance, and basic tech support.|My First Million - "Life Hacks From The King of Introverts + 7 Business Ideas|
|MemoryBox - At-Home Video Digitization Service|Door-to-door VHS conversion service. Parents have boxes of old tapes. Pick up, digitize, deliver. $30/tape with minimum order. Going extinct.|Door-to-door VHS to digital conversion service that handles everything from pickup to digital delivery. Make it extremely convenient for customers to preserve their memories.|My First Million - "Life Hacks From The King of Introverts + 7 Business Ideas|
|Elite Match Ventures - Success-Based Luxury Matchmaking|High-end matchmaking for 50M+ net worth individuals. Only charge $1M+ when they get married. No upfront fees. Extensive vetting process.|Premium matchmaking service exclusively for ultra-high net worth individuals with a pure contingency fee model - only get paid ($1M+) upon successful marriage. Focus on quality over quantity with extensive vetting and personalized matching.|My First Million - "Life Hacks From The King of Introverts + 7 Business Ideas|
|LocalHost - Simple Small Business Websites|Simple WordPress sites for local businesses. $50/month includes hosting, updates, security. Target restaurants and shops. Recurring revenue play.|Simplified web hosting and WordPress management service targeting local small businesses. Focus on basic sites with standard templates, ongoing maintenance, and reliable support for a fixed monthly fee.|My First Million - "Life Hacks From The King of Introverts + 7 Business Ideas|
|VoiceJournal AI - Voice-First Smart Journaling|Voice-to-text journaling app with AI insights. 8,100 monthly searches. $15/month subscription. Partners with journaling YouTubers.|AI-powered journaling app that combines voice recording, transcription, and intelligent insights. Users can speak their thoughts, which are automatically transcribed and analyzed for patterns, emotions, and actionable insights.|Where It Happens - "7 $1M+ AI startup ideas you can launch tomorrow with $0"|
|AIGenAds - AI-Generated UGC Content Platform|AI platform turning product briefs into UGC-style video ads. Brands spending $500/video for human creators. Generate 100 variations for $99/month.|AI platform that generates UGC-style video ads using AI avatars and scripting. System would allow rapid generation of multiple ad variations at a fraction of the cost. Platform would use existing AI avatar technology combined with script generation to create authentic-looking testimonial-style content.|Where It Happens - "7 $1M+ AI startup ideas you can launch tomorrow with $0"|
|InfographAI - Automated Infographic Generation Platform|AI turning blog posts into branded infographics. Marketers spending hours on design. $99/month unlimited generation.|AI-powered platform that automatically converts blog posts and articles into visually appealing infographics. System would analyze content, extract key points, and generate professional designs using predefined templates and brand colors.|Where It Happens - "7 $1M+ AI startup ideas you can launch tomorrow with $0"|
|KidFinance - Children's Financial Education Entertainment|Children's media franchise teaching financial literacy. Former preschool teacher creating 'Dora for money'. Books, videos, merchandise potential.|Character-driven financial education content for kids, including books, videos, and potentially TV show. Focus on making money concepts fun and memorable.|The Side Hustle Show - "How a Free Challenge Turned Into a $500,000 a Year Business (Greatest Hits)"|
|FinanceTasker - Daily Financial Task Challenge|Free 30-day financial challenge with daily action items. People overwhelmed by money management. Makes $500k/year through books, speaking, and premium membership.|A free 30-day financial challenge delivering one simple, actionable task per day via email. Each task includes detailed scripts and instructions. Participants join a Facebook community for support and accountability. The program focuses on quick wins to build momentum. Automated delivery allows scaling.|The Side Hustle Show - "How a Free Challenge Turned Into a $500,000 a Year Business (Greatest Hits)"|
|FinanceAcademy - Expert Financial Training Platform|Premium financial education platform. $13/month for expert-led courses and live Q&As. 4000+ members generating $40k+/month.|Premium membership site with expert-led courses, live Q&As, and community support. Focus on specific topics like real estate investing, business creation, and advanced money management.|The Side Hustle Show - "How a Free Challenge Turned Into a $500,000 a Year Business (Greatest Hits)"|
|SecurityFirst Compliance - Real Security + Compliance Platform|Security-first compliance platform built by hackers. Companies spending $50k+ on fake security. Making $7M/year showing why current solutions don't work.|A compliance platform built by security experts that combines mandatory compliance requirements with real security measures. The solution includes hands-on security testing, expert guidance, and a focus on actual threat prevention rather than just documentation. It merges traditional compliance workflows with practical security implementations.|In the Pit with Cody Schneider|
|LinkedInbound - Automated Professional Visibility Engine|LinkedIn automation for inbound job offers. Professionals spending hours on manual outreach. $99/month per job seeker.|Automated system for creating visibility and generating inbound interest on LinkedIn through coordinated profile viewing and engagement. Uses multiple accounts to create visibility patterns that trigger curiosity and inbound messages.|In the Pit with Cody Schneider|
|ConvoTracker - Community Discussion Monitoring Platform|Community discussion monitoring across Reddit, Twitter, HN. Companies missing sales opportunities. $499/month per brand tracked.|Comprehensive monitoring system that tracks competitor mentions and industry discussions across multiple platforms (Reddit, Twitter, Hacker News, etc.) with automated alerts and engagement suggestions.|In the Pit with Cody Schneider|
|ContentAds Pro - Smart Display Ad Implementation|Display ad implementation service for content creators. Bloggers losing thousands in ad revenue monthly. Makes $3-5k per site setup plus ongoing optimization fees.|Implementation of professional display advertising through networks like Mediavine that specialize in optimizing ad placement and revenue while maintaining user experience. Include features like turning off ads for email subscribers and careful placement to minimize impact on core metrics.|The Side Hustle Show - "636: Is Business Coaching Worth It? A Look Inside the last 12 months of Side Hustle Nation"|
|MoneyAppReviews - Professional Side Hustle App Testing|Professional testing service for money-making apps. People wasting time on low-paying apps. Makes $20k/month from affiliate commissions and ads.|Professional app testing service that systematically reviews money-making apps and creates detailed, honest reviews including actual earnings data, time investment, and practical tips.|The Side Hustle Show - "636: Is Business Coaching Worth It? A Look Inside the last 12 months of Side Hustle Nation"|
|LightPro - Holiday Light Installation Service|Professional Christmas light installation service. Homeowners afraid of ladders. $500-2000 per house plus storage.|Professional Christmas light installation service targeting residential and commercial properties. Full-service offering including design, installation, maintenance, removal and storage. Focus on safety and premium aesthetic results.|The Side Hustle Show - "639: 30 Ways to Make Extra Money for the Holidays"|
|FocusMatch - Research Participant Marketplace|Marketplace connecting companies to paid research participants. Companies spending weeks finding people. $50-150/hour per study.|Online platform connecting companies directly with paid research participants. Participants create detailed profiles and get matched to relevant studies. Companies get faster access to their target demographic while participants earn money sharing opinions.|The Side Hustle Show - "639: 30 Ways to Make Extra Money for the Holidays"|
|SolarShine Pro - Specialized Solar Panel Cleaning Service|Solar panel cleaning service using specialized equipment. Panels lose 50% efficiency when dirty. $650 per job, automated scheduling generates $18k/month from repeat customers.|Professional solar panel cleaning service using specialized deionized water system and European cleaning equipment. Includes automated 6-month scheduling, professional liability coverage, and warranty-safe cleaning processes. Service is bundled with inspection and performance monitoring.|The UpFlip Podcast - "156. $18K/Month with This ONE Service — Niche Business Idea"|
|ExteriorCare Complete - One-Stop Exterior Maintenance Service|One-stop exterior home cleaning service (solar, windows, gutters, bird proofing). Automated scheduling. $650 average ticket. 60% repeat customers on 6-month contracts.|All-in-one exterior cleaning service offering comprehensive maintenance packages including solar, windows, gutters, roof cleaning and bird proofing. Single point of contact, consistent quality, and automated scheduling for all services.|The UpFlip Podcast - "156. $18K/Month with This ONE Service — Niche Business Idea"|
|ContentMorph - Automated Cross-Platform Content Adaptation|AI platform converting blog posts into platform-optimized social content. Marketing teams spending 5hrs/post on manual adaptation. $199/mo per brand with 50% margins.|An AI-powered platform that automatically transforms long-form content (blog posts, podcasts, videos) into platform-specific formats (Instagram reels, TikToks, tweets). The system would preserve brand voice while optimizing for each platform's unique requirements and best practices.|Entrepreneurs on Fire - "Digital Threads: The Entrepreneur Playbook for Digital-First Marketing with Neal Schaffer"|
|MarketerMatch - Verified Digital Marketing Talent Marketplace|Marketplace for pre-vetted digital marketing specialists. Entrepreneurs spending 15hrs/week on marketing tasks. Platform takes 15% commission averaging $900/month per active client.|A specialized marketplace exclusively for digital marketing professionals, pre-vetted for specific skills (video editing, social media, SEO, etc.). Platform includes skill verification, portfolio review, and specialization matching.|Entrepreneurs on Fire - "Digital Threads: The Entrepreneur Playbook for Digital-First Marketing with Neal Schaffer"|
|Tiger Window Cleaning - Premium Local Window Service|Local window cleaning service targeting homeowners. Traditional companies charging 2x market rate. Making $10k/month from $200 initial investment.|Local window cleaning service combining competitive pricing ($5/pane), excellent customer service, and quality guarantees. Uses modern tools like water-fed poles for efficiency. Implements systematic approach to customer communication and follow-up.|The Side Hustle Show - "630: How this College Student’s Side Hustle Brings in $10k a Month"|
|RealViz3D - Real Estate Visualization Platform|3D visualization service turning architectural plans into photorealistic renderings for real estate agents. Agents struggling with unbuilt property sales. Making $30-40k/year per operator.|Professional 3D modeling and rendering service that creates photorealistic visualizations of properties before they're built or renovated. The service transforms architectural plans into immersive 3D representations that show lighting, textures, and realistic details. This helps potential buyers fully understand and connect with the space before it physically exists.|Side Hustle School - "#2861 - TBT: An Architect’s Side Hustle in 3D Real Estate Modeling"|
|Somewhere - Global Talent Marketplace|Platform connecting US companies with vetted overseas talent. Tech roles costing $150k locally filled for 50% less. Grew from $15M to $52M valuation in 9 months.|Platform connecting US companies with pre-vetted overseas talent at significantly lower rates while maintaining high quality. Handles payments, contracts, and quality assurance to remove friction from global hiring.|My First Million - "I Lost Everything Twice… Then Made $26M In 18 Months|
|GymLaunch - Rapid Gym Turnaround Service|Consultants flying to struggling gyms to implement proven member acquisition systems. Gym owners lacking sales expertise. Made $100k in first 21 days.|Expert consultants fly in to implement proven member acquisition systems, train staff, and rapidly fill gyms with new members. The service combines sales training, marketing automation, and proven conversion tactics to transform struggling gyms into profitable businesses within weeks.|My First Million - "I Lost Everything Twice… Then Made $26M In 18 Months|
|PublishPlus - Publishing Backend Monetization|Backend monetization system for publishing companies. One-time customers becoming recurring revenue. Grew business from $2M to $110M revenue.|Add complementary backend products and services to increase customer lifetime value. Develop software tools and additional services that naturally extend from the initial publishing product. Focus on high-margin recurring revenue streams.|My First Million - "I Lost Everything Twice… Then Made $26M In 18 Months|
|WelcomeBot - Automated Employee Onboarding Platform|Automated employee welcome platform. HR teams struggling with consistent onboarding. $99/month per 100 employees.|An automated onboarding platform that creates personalized welcome experiences through pre-recorded video messages, scheduled check-ins, and automated swag delivery. The platform would ensure consistent high-quality onboarding regardless of timing or location.|Entrepreneurs on Fire - "Free Training on Building Systems and Processes to Scale Your Business with Chris Ronzio: An EOFire Classic from 2021"|
|ProcessBrain - Business Knowledge Documentation Platform|SaaS platform turning tribal knowledge into documented processes. Business owners spending hours training new hires. $199/month per company.|A software platform that makes it easy to document and delegate business processes and procedures. The platform would include templates, guided documentation flows, and tools to easily share and update procedures. It would help businesses create a comprehensive playbook of their operations.|Entrepreneurs on Fire - "Free Training on Building Systems and Processes to Scale Your Business with Chris Ronzio: An EOFire Classic from 2021"|
|TradeMatch - Modern Manufacturing Job Marketplace|Modern job board making manufacturing sexy again. Factory jobs paying $40/hr but can't recruit. $500 per successful referral.|A specialized job marketplace and recruitment platform focused exclusively on modern manufacturing and trade jobs. The platform would combine TikTok-style content marketing, referral programs, and modern UX to make manufacturing jobs appealing to Gen Z and young workers. Would leverage existing $500 referral fees and industry demand.|My First Million - "He Sold His Company For $15M, Then Got A Job At McDonald’s"|
|GroundLevel - Executive Immersion Program|Structured program putting CEOs in front-line jobs. Executives disconnected from workers. $25k per placement.|A structured program that places executives and founders in front-line jobs (retail, warehouse, service) for 2-4 weeks with documentation and learning framework. Similar to Scott Heiferman's McDonald's experience but productized.|My First Million - "He Sold His Company For $15M, Then Got A Job At McDonald’s"|
|OneStepAhead - Micro-Mentorship Marketplace|Marketplace for 30-min mentorship calls with people one step ahead. Professionals seeking specific guidance. Takes 15% of session fees.|MicroMentor Marketplace - Platform connecting people with mentors who are just one step ahead in their journey for focused, affordable micro-mentorship sessions.|Entrepreneurs on Fire - "How to Create an Unbroken Business with Michael Unbroken: An EOFire Classic from 2021"|
|VulnerableLeader - Leadership Authenticity Training Platform|Leadership vulnerability training platform. Leaders struggling with authentic communication. $2k/month per company subscription.|Leadership Vulnerability Platform - A digital training platform combining assessment tools, guided exercises, and peer support to help leaders develop authentic communication skills. The platform would include real-world scenarios, video coaching, and measurable metrics for tracking leadership growth through vulnerability.|Entrepreneurs on Fire - "How to Create an Unbroken Business with Michael Unbroken: An EOFire Classic from 2021"|
|NetworkAI - Smart Network Intelligence Platform|AI analyzing your network to find hidden valuable connections. Professionals missing opportunities in existing contacts. $49/month per user.|AI Network Navigator - Smart tool that analyzes your professional network across platforms, identifies valuable hidden connections, and suggests specific actionable ways to leverage relationships for mutual benefit.|Entrepreneurs on Fire - "How to Create an Unbroken Business with Michael Unbroken: An EOFire Classic from 2021"|
|Porch Pumpkins - Seasonal Decoration Service|Full-service porch pumpkin decoration. Homeowners spend $300-1350 per season. One operator making $1M in 8 weeks seasonal revenue.|Full-service seasonal porch decoration service focused on autumn/Halloween, including design, installation, maintenance, and removal. Offering premium curated pumpkin arrangements with various package tiers.|My First Million - "The guy who gets paid $80K/yr to do nothing"|
|Silent Companion - Professional Presence Service|Professional silent companions for lonely people. Huge problem in Japan/globally. $68/session, $80k/year per companion. Non-sexual, just presence.|A professional companion service where individuals can rent a non-judgmental, quiet presence for various activities. The companion provides silent company without the pressure of conversation or social performance. They accompany clients to events, meals, or just sit quietly together.|My First Million - "The guy who gets paid $80K/yr to do nothing"|

Hope this is useful. If anyone would like to ensure I include any particular podcasts or episodes etc. in future posts, very happy to do so.
I'll generally send ~5 ideas per week in a short weekly digest format (you can see the format I'd usually use in here: podcastmarketwatch.beehiiv.com). I find it mind-blowing that the latest models with large context windows make it possible to analyze full transcripts at this scale. It's a very exciting time we're living through! Would love some feedback on this stuff, happy to iterate and improve the analysis/ideas... or create a new newsletter on a different topic if anyone would like. Cheers!
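For anyone curious what "analyzing full transcripts at this scale" can look like in practice, here is a minimal sketch of one way to do it with a large-context model. The client library, model name, prompt, and file names are illustrative assumptions only, not a description of how this particular newsletter is produced:

```python
# Rough sketch: feed a full podcast transcript to a large-context model and
# ask for structured business ideas. Assumes the model replies with bare JSON.
import json
import anthropic

client = anthropic.Anthropic()  # assumes ANTHROPIC_API_KEY is set in the environment

def extract_ideas(transcript: str, episode: str) -> list[dict]:
    prompt = (
        "Below is a full podcast transcript. List the concrete business ideas "
        "discussed, as a JSON array of objects with keys: name, pitch, details.\n\n"
        f"Episode: {episode}\n\nTranscript:\n{transcript}"
    )
    resp = client.messages.create(
        model="claude-3-5-sonnet-latest",  # placeholder: any large-context model works
        max_tokens=2000,
        messages=[{"role": "user", "content": prompt}],
    )
    return json.loads(resp.content[0].text)

ideas = extract_ideas(open("episode_156_transcript.txt").read(),
                      "The UpFlip Podcast - 156")
```

The same loop can run over every new episode in a feed and feed the results into a weekly digest template.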

Why you should consider using small open source fine-tuned models
reddit
LLM Vibe Score0
Human Vibe Score0.929
hamada0001This week

Why you should consider using small open source fine-tuned models

Context I want to start off by giving some context on what fine-tuning is, why it's useful and who it would be useful for: What is fine-tuning? When controlling the output of an LLM there are, broadly, three levels: prompt engineering, RAG and fine-tuning. Most of you are likely familiar with the first two. Prompt engineering is when you try to optimize the prompt to get the model to do what you want better. RAG (retrieval augmented generation) is when you first do a search on some data (usually stored in a vector database which allows you to search by similarity), then you insert the results into the prompt so that the model can use that context to more accurately answer any questions. It's like letting the LLM access external information right before answering, using that additional context to improve its response. Fine-tuning is when you want to fundamentally teach a model something new or teach it to behave in a particular way. You would provide the model with high quality data (i.e. inputs and outputs) which it will train on. Why is it useful? At the moment, many of you use the largest and best LLMs because they give the best results. However, for a lot of use cases you are likely using a sledgehammer for a small nail. Does it do a great job? Damn yeah! Well... why not use a smaller hammer? Because it might miss or hit your finger. The solution shouldn't be to use a sledgehammer, but rather to learn how to use a smaller hammer properly so you never miss! That's exactly what fine-tuning a smaller model is like. Once you fine-tune it on a specific task with good high quality data, it can surpass even the best models at that specific task. It'll be 10x cheaper to run, much faster and, if you use an open source model, you'll own the model (no vendor lock-in!). If you run a SaaS and your biggest expense is AI costs, then you should definitely consider fine-tuning. It'll take some time to set up but it'll be well worth it in the medium/long term (a bit like SEO). You can always resort to the best models for more complex tasks. How to fine-tune? I'm going to give you a breakdown of the process from beginning to end. You do need to be (a bit) technical in order to do this. Getting the data Let's suppose we want to fine-tune a model to make high-quality SEO content. At the moment, you might be using a large sophisticated prompt or using multiple large LLMs to write different parts or utilizing RAG. This is all slow and expensive but might be giving you great results. Our goal is to replace this with a fine-tuned model that is great at one thing: writing high-quality SEO content quickly at a much lower cost. The first step is gathering the appropriate data. If you want the model to write 3 or 4 paragraphs based on a prompt that contains the topic and a few keywords, then your data should match that. There are a few ways you can do this: You can manually gather high-quality SEO content. You'd write the prompt and the response that the model should give. You can use a larger more powerful LLM to generate the content for you (also known as synthetic data). It'll be expensive but remember that it'll be a larger one-off cost to get the data. If you already have a pipeline that works great then you can use the prompts and the generated content that you already have from that pipeline. You can buy a high-quality dataset or get someone to make it for you. The data is the most important part of this process. Remember, garbage in garbage out. Your data needs to have a good variety and should not contain any bad examples. You should aim for around 1000 examples. The more the better!
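To make the "getting the data" step concrete, here is a minimal sketch of what assembling such a dataset could look like. The file name, field names, and prompt template are assumptions for illustration; use whatever format your fine-tuning tooling expects:

```python
# Illustrative sketch: building a tiny SEO fine-tuning dataset as JSONL.
# Each record pairs the prompt you would normally send with the answer you want back.
import json

examples = [
    {
        "prompt": "Write 3-4 SEO-optimized paragraphs on: home solar panels. "
                  "Keywords: solar panel cost, solar installation, energy savings.",
        "completion": "Installing home solar panels is one of the most cost-effective "
                      "upgrades a homeowner can make... (full target article goes here)",
    },
    # ...aim for ~1000 examples, holding 100-200 back as a validation set
]

with open("seo_train.jsonl", "w") as f:
    for ex in examples:
        # Render each pair into a single "text" field; real projects usually apply
        # the base model's chat template here instead of a hand-rolled one.
        text = f"### Instruction:\n{ex['prompt']}\n\n### Response:\n{ex['completion']}"
        f.write(json.dumps({"text": text}) + "\n")
```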
The actual fine-tuning. At this stage you are now ready to choose a model and set up the fine-tuning. If you are unsure I'd stick to the Llama 3.1 family of models. They are great and reliable. There are three models: 8b, 70b and 405b. Depending on the complexity of the task you should select an appropriate size. However, to really reap the cost saving benefits and the speed you should try to stick with the 8b model, or the 70b model if the 8b is not good enough. For our SEO example, let's use the 8b model. Important note on selecting a model: You might see multiple models with the 8b flag. You might see 4bit-bnb or instruct. The instruct versions of the models have basically been trained to be chatbots. So if you want to keep the chatbot-like instruction-following functionality then you should use the instruct version as the base. The non-instruct version simply generates text. It won't 'act' like a chatbot, which is better for use cases like creative writing. The 4bit-bnb means that the model has been 'quantized'. Basically it has been made 4x smaller (the original is in 16 bits) so that it is faster to download and faster to fine-tune. This slightly reduces the accuracy of the model but it's usually fine for most use cases :) Fine-tuning should be done on a good GPU. CPUs aren't good enough. So you can't spin up a droplet on Digital Ocean and use that. You'll specifically need to spin up a GPU. One website that I think is great is Runpod.io (I am not affiliated with them). You simply pay for the GPU by the hour. If you want the training to be fast you can use the H100; if you want something cheaper but slower you can use the A40, although the A40 won't be good enough to run the 70b parameter model. For the 405b model you'll need multiple H100s, but let's leave that for more advanced use cases. Once you've spun up your H100 and ssh-ed into it, I would recommend using the unsloth open source library to do the fine-tuning. They have great docs and good boilerplate code. You want to train using a method called QLoRA. This won't train the entire model but only "part of it". I don't want to get into the technical details as that isn't important, but essentially it's a very efficient and effective way of fine-tuning models. When fine-tuning you can provide something called a 'validation set'. As your model is training it will be tested against the 'validation set' to see how well it's doing. You'll get an 'eval loss' which basically means how well your model is doing when compared with the unseen validation data. If you have 1000 training examples I'd recommend taking out 100-200 so they can act as the validation set. Your model may start off with an eval loss of 1.1 and by the end of the training (e.g. 3 epochs - the number of epochs is the number of times your model will be trained on the entire dataset. It's like reading a book more than once so you can understand it better. Usually 3-5 epochs is enough) the eval loss would drop to 0.6 or 0.7, which means your model has made great progress in learning your dataset! You don't want it to be too low as that means it is literally memorizing, which isn't good.
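For orientation, here is a rough sketch of what the unsloth QLoRA boilerplate tends to look like for the SEO example. The model name, hyperparameters, and file names are assumptions, and exact argument names vary between unsloth/TRL/transformers versions, so treat this as a sketch rather than copy-paste-ready code and follow unsloth's own notebooks for your setup:

```python
# Minimal QLoRA fine-tune sketch using unsloth + TRL's SFTTrainer (illustrative only).
from unsloth import FastLanguageModel
from datasets import load_dataset
from trl import SFTTrainer
from transformers import TrainingArguments

# Load a 4-bit quantized Llama 3.1 8B instruct base (placeholder repo name).
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit",
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters: only these small matrices are trained, not the full model.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    lora_dropout=0,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# Assumes the JSONL files from the earlier sketch, each record with a "text" field.
dataset = load_dataset("json", data_files={"train": "seo_train.jsonl",
                                           "validation": "seo_val.jsonl"})

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset["train"],
    eval_dataset=dataset["validation"],  # reports the 'eval loss' discussed above
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        num_train_epochs=3,
        learning_rate=2e-4,
        eval_strategy="epoch",  # named evaluation_strategy in older transformers
        output_dir="outputs",
    ),
)
trainer.train()

# Export adapters + base model as GGUF for cheap local or serverless inference.
model.save_pretrained_gguf("seo_model_gguf", tokenizer, quantization_method="q4_k_m")
```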
Post fine-tuning. You'll want to save the model with the best eval loss. You actually won't have the whole model, just something called the "QLoRA adapters". These are basically like the new neurons that contain the "understanding" of the data you trained the model on. You can combine these with the base model (using unsloth again) to prompt the model. You can also (and I recommend this) convert the model to GGUF format (using unsloth again). This basically packages the QLoRA adapters and model together into an optimized format so you can easily and efficiently run it and prompt it (using unsloth again... lol). I would then recommend running some evaluations on the new model. You can do this by simply prompting the new model and a more powerful model (or using your old pipeline) and then asking a powerful model, e.g. Claude, to judge which is better. If your model consistently does better then you've hit a winner! You can then use Runpod again to deploy the model to their serverless AI endpoint so you only pay when it's actually running inference. (Again, I'm not affiliated with them.) I hope this was useful and you at least got a good idea of what fine-tuning is and how you might go about doing it. By the way, I've just launched a website where you can easily fine-tune Llama 3.1 models. I'm actually hoping to eventually automate this entire process as I believe small fine-tuned models will be much more common in the future. If you want more info, feel free to DM me :)
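As an illustration of the evaluation step described above (pairwise judging with a stronger model), here is one possible sketch. The judge model, prompt wording, and client library are assumptions; any capable model and API would work the same way:

```python
# Illustrative sketch: compare the fine-tuned model's output against the old
# pipeline's output by asking a stronger model to pick the better one.
import anthropic

client = anthropic.Anthropic()  # assumes ANTHROPIC_API_KEY is set in the environment

def judge(brief: str, output_a: str, output_b: str) -> str:
    """Return 'A' or 'B' depending on which SEO draft the judge prefers."""
    question = (
        "You are judging two SEO articles written for the same brief.\n"
        f"Brief: {brief}\n\nArticle A:\n{output_a}\n\nArticle B:\n{output_b}\n\n"
        "Reply with exactly one letter: A or B."
    )
    resp = client.messages.create(
        model="claude-3-5-sonnet-latest",  # placeholder judge model
        max_tokens=5,
        messages=[{"role": "user", "content": question}],
    )
    return resp.content[0].text.strip()

# pairs = [(brief, fine_tuned_output, old_pipeline_output), ...]
# wins = sum(judge(p, a, b) == "A" for p, a, b in pairs)
# In practice, also swap the A/B order on half the pairs to avoid position bias.
```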

A Structured Approach to Ideation and Validation (I will not promote)
reddit
LLM Vibe Score0
Human Vibe Score1
Royal_Rest8409This week

A Structured Approach to Ideation and Validation (I will not promote)

Hi all, I used to work in VC and wanted to share some startup knowledge and insights from startup founders I know. Recently, I interviewed a friend of mine who built an AI Robotics startup ("Hivebotics") that creates automated toilet-cleaning robots. I can't post the full article because of Reddit's word limit, so I'll be posting it in sections here instead. This first section of the transcript goes through his approach to ideation and validation. Enjoy and let me know what you think! (I will not promote)

(1) Ideation and Validation

Problem-Market-Solution Framework
I like to think of startup ideation and validation using this framework:
Problem – What exactly are you solving?
- Observation – How you identify a problem to work on
- User Research – How you further understand that problem
Market – Is there a large enough market for solving this problem?
- Size – How many people experience this same problem?
- Demand – How many of those people are willing to pay for the solution?
Solution – Your answer to the problem
- Desirability – Whether people actually want your solution
- Feasibility – Whether building the solution is practical and realistic
- Viability – Whether your solution can generate revenue

Problem
You always need to start problem-first, which is something that was really drilled into me during my time at Stanford. Too often, founders rush to build solutions first—apps or products they find exciting—without confirming whether there's any real demand for it. The first step is always to identify a specific problem, then further understand its scale, urgency and further details by talking to potential users.
Observation – To find problems, observation is key. People may not even realise the inefficiencies in their processes until you point them out. That’s why interviews and field research are so important. There are problems all around us, so it's simply a matter of going out, paying attention and being attuned to them as they occur.
User Research – To further understand the problem, conducting user research by interviewing potential customers is essential. Personally, I like to use the "Mom Test" when I conduct interviews to avoid biased and generic feedback. Don’t just ask theoretical questions and avoid being too specific—observe how your potential users work, ask about pain points, and use broad, open-ended questions to ensure you aren't leading them to a specific answer.

Market
Once you've found an actual problem and talked to enough potential users to really understand its specific pain points, the next step is to determine the market size and demand for a solution.
Size – Determining the market size is essential because it determines whether or not it's commercially worthwhile to pursue the problem and develop a solution for it. You need to determine if there are enough potential customers out there experiencing this problem to gauge the market size. There's no secret strategy for this; you have to interview as many potential users as possible to confirm that it's a widespread problem in the industry.
Demand – Make sure that you're working on a problem that people will gladly pay to have solved. Even if the problem is large enough, you have to make sure it's painful enough to warrant a paid solution. If many people experience the same problem, but aren't willing to pay for a solution, then you don't have a market and should look for a different problem to validate.
Another way of looking at it is that your true market size is the number of potential customers actually willing to pay for the solution to the problem, not the number of people simply experiencing the same problem.

Solution
When validating a potential solution to the problem, I would look at the 3 factors of desirability, feasibility and viability.
Desirability – the degree to which a solution appeals to people and fulfills their wants and needs. Without strong desirability, even the most technically advanced or economically practical product is unlikely to succeed. The best way to test this is to secure financial commitments early on during the proof-of-concept stage. Most people are polite, so they may simply tell you that your startup's product is good even if it's not. However, if they're actually willing to pay for the solution, this is actual evidence of your product's desirability. Don't just ask people if they would pay for it; actually see whether they will pay for it.
Feasibility – whether a product can be built using existing technical capabilities. A lack of feasibility makes it challenging or impossible to develop the product, no matter how appealing it might be to users or how promising its financial prospects are. This is just a matter of conducting initial research and actually trying to build a prototype, which will inform you whether the fully-realised product is truly feasible.
Viability – the product's ability to generate sustainable financial returns. Without financial viability, the business supporting the product cannot endure, even if the product is highly appealing to users and technically achievable. Here, you need to look at your unit economics, development costs and other expenses to determine the viability of your solution.

Hope you enjoyed reading this; let me know your honest thoughts in the comments and I'll try to improve how I interview founders based on those!

How a founder built a B2B AI startup to serve 65+ global brands (including Fortune 500 companies) (I will not promote)
reddit
LLM Vibe Score0
Human Vibe Score1
Royal_Rest8409This week

How a founder built a B2B AI startup to serve 65+ global brands (including Fortune 500 companies) (I will not promote)

AI Palette is an AI-driven platform that helps food and beverage companies predict emerging product trends. I had the opportunity recently to sit down with the founder to get his advice on building an AI-first startup, which he'll be going through in this post. (I will not promote)

About AI Palette:
- Co-founders: 2 (Somsubhra GanChoudhuri, Himanshu Upreti)
- 100+
- $12.7M USD
- AI-powered predictive analytics for the CPG (Consumer Packaged Goods) industry
- Signed first paying customer in the first year
- 65+ global brands, including Cargill, Diageo, Ajinomoto, Symrise, Mondelez, and L’Oréal, use AI Palette
- Every new product launched has secured a paying client within months
- Expanded into Beauty & Personal Care (BPC), onboarding one of India’s largest BPC companies within weeks
- Launched multiple new product lines in the last two years, creating a unified suite for brand innovation

Identify the pain points in your industry for ideas
When I was working in the flavour and fragrance industry, I noticed a major issue CPG companies faced: launching a product took at least one to two years. For instance, if a company decided today to launch a new juice, it wouldn’t hit the market until 2027. This long timeline made it difficult to stay relevant and on top of trends. Another big problem I noticed was that companies relied heavily on market research to determine what products to launch. While this might work for current consumer preferences, it was highly inefficient since the product wouldn’t actually reach the market for several years. By the time the product launched, the consumer trends had already shifted, making that research outdated. That’s where AI can play a crucial role. Instead of looking at what consumers like today, we realised that companies should use AI to predict what they will want next. This allows businesses to create products that are ahead of the curve. Right now, the failure rate for new product launches is alarmingly high, with 8 out of 10 products failing. By leveraging AI, companies can avoid wasting resources on products that won’t succeed, leading to better, more successful launches.
Start by talking to as many industry experts as possible to identify the real problems
When we first had the idea for AI Palette, it was just a hunch, a gut feeling—we had no idea whether people would actually pay for it. To validate the idea, we reached out to as many people as we could within the industry. Since our focus area was all about consumer insights, we spoke to professionals in the CPG sector, particularly those in the insights departments of CPG companies. Through these early conversations, we began to see a common pattern emerge and identified the exact problem we wanted to solve.
Don’t tell people what you’re building—listen to their frustrations and challenges first.
Going into these early customer conversations, our goal was to listen and understand their challenges without telling them what we were trying to build. This is crucial as it ensures that you can gather as much data about the problem as possible to truly understand it and that you aren't biasing their answers by showing your solution. This process helped us in two key ways: First, it validated that there was a real problem in the industry through the number of people who spoke about experiencing the same problem. Second, it allowed us to understand the exact scale and depth of the problem—e.g., how much money companies were spending on consumer research, what kind of tools they were currently using, etc.
Narrow down your focus to a small, actionable area to solve initially. Once we were certain that there was a clear problem worth solving, we didn’t try to tackle everything at once. As a small team of two people, we started by focusing on a specific area of the problem—something big enough to matter but small enough for us to handle. Then, we approached customers with a potential solution and asked them for feedback. We learnt that our solution seemed promising, but we wanted to validate it further. If customers are willing to pay you for the solution, it’s a strong validation signal for market demand. One of our early customer interviewees even asked us to deliver the solution, which we did manually at first. We used machine learning models to analyse the data and presented the results in a slide deck. They paid us for the work, which was a critical moment. It meant we had something with real potential, and we had customers willing to pay us before we had even built the full product. This was the key validation that we needed. By the time we were ready to build the product, we had already gathered crucial insights from our early customers. We understood the specific information they wanted and how they wanted the results to be presented. This input was invaluable in shaping the development of our final product. Building & Product Development Start with a simple concept/design to validate with customers before building When we realised the problem and solution, we began by designing the product, but not by jumping straight into coding. Instead, we created wireframes and user interfaces using tools like InVision and Figma. This allowed us to visually represent the product without the need for backend or frontend development at first. The goal was to showcase how the product would look and feel, helping potential customers understand its value before we even started building. We showed these designs to potential customers and asked for feedback. Would they want to buy this product? Would they pay for it? We didn’t dive into actual development until we found a customer willing to pay a significant amount for the solution. This approach helped us ensure we were on the right track and didn’t waste time or resources building something customers didn’t actually want. Deliver your solution using a manual consulting approach before developing an automated product Initially, we solved problems for customers in a more "consulting" manner, delivering insights manually. Recall how I mentioned that when one of our early customer interviewees asked us to deliver the solution, we initially did it manually by using machine learning models to analyse the data and presenting the results to them in a slide deck. This works for the initial stages of validating your solution, as you don't want to invest too much time into building a full-blown MVP before understanding the exact features and functionalities that your users want. However, after confirming that customers were willing to pay for what we provided, we moved forward with actual product development. This shift from a manual service to product development was key to scaling in a sustainable manner, as our development was guided by real-world feedback and insights rather than intuition. Let ongoing customer feedback drive iteration and the product roadmap Once we built the first version of the product, it was basic, solving only one problem.
But as we worked closely with customers, they requested additional features and functionalities to make it more useful. As a result, we continued to evolve the product to handle more complex use cases, gradually developing new modules based on customer feedback. Product development is a continuous process. Our early customers pushed us to expand features and modules, from solving just 20% of their problems to tackling 50–60% of their needs. These demands shaped our product roadmap and guided the development of new features, ultimately resulting in a more complete solution. Revenue and user numbers are key metrics for assessing product-market fit. However, critical mass varies across industries Product-market fit (PMF) can often be gauged by looking at the size of your revenue and the number of customers you're serving. Once you've reached a certain critical mass of customers, you can usually tell that you're starting to hit product-market fit. However, this critical mass varies by industry and the type of customers you're targeting. For example, if you're building an app for a broad consumer market, you may need thousands of users. But for enterprise software, product-market fit may be reached with just a few dozen key customers. Compare customer engagement and retention with other available solutions on the market for product-market fit Revenue and the number of customers alone isn't always enough to determine if you're reaching product-market fit. The type of customer and the use case for your product also matter. The level of engagement with your product—how much time users are spending on the platform—is also an important metric to track. The more time they spend, the more likely it is that your product is meeting a crucial need. Another way to evaluate product-market fit is by assessing retention, i.e whether users are returning to your platform and relying on it consistently, as compared to other solutions available. That's another key indication that your solution is gaining traction in the market. Business Model & Monetisation Prioritise scalability Initially, we started with a consulting-type model where we tailor-made specific solutions for each customer use-case we encountered and delivered the CPG insights manually, but we soon realized that this wasn't scalable. The problem with consulting is that you need to do the same work repeatedly for every new project, which requires a large team to handle the workload. That is not how you sustain a high-growth startup. To solve this, we focused on building a product that would address the most common problems faced by our customers. Once built, this product could be sold to thousands of customers without significant overheads, making the business scalable. With this in mind, we decided on a SaaS (Software as a Service) business model. The benefit of SaaS is that once you create the software, you can sell it to many customers without adding extra overhead. This results in a business with higher margins, where the same product can serve many customers simultaneously, making it much more efficient than the consulting model. Adopt a predictable, simplistic business model for efficiency. Look to industry practices for guidance When it came to monetisation, we considered the needs of our CPG customers, who I knew from experience were already accustomed to paying annual subscriptions for sales databases and other software services. We decided to adopt the same model and charge our customers an annual upfront fee. 
This model worked well for our target market, aligning with industry standards and ensuring stable, recurring revenue. Moreover, our target CPG customers were already used to this business model and didn't have to choose from a huge variety of payment options, making closing sales a straightforward and efficient process. Marketing & Sales Educate the market to position yourself as a thought leader When we started, AI was not widely understood, especially in the CPG industry. We had to create awareness around both AI and its potential value. Our strategy focused on educating potential users and customers about AI, its relevance, and why they should invest in it. This education was crucial to the success of our marketing efforts. To establish credibility, we adopted a thought leadership approach. We wrote blogs on the importance of AI and how it could solve problems for CPG companies. We also participated in events and conferences to demonstrate our expertise in applying AI to the industry. This helped us build our brand and reputation as leaders in the AI space for CPG, and word-of-mouth spread as customers recognized us as the go-to company for AI solutions. It’s tempting for startups to offer products for free in the hopes of gaining early traction with customers, but this approach doesn't work in the long run. Free offerings don’t establish the value of your product, and customers may not take them seriously. You should always charge for pilots, even if the fee is minimal, to ensure that the customer is serious about potentially working with you, and that they are committed and engaged with the product. Pilots/POCs/Demos should aim to give a "flavour" of what you can deliver A paid pilot/POC trial also gives you the opportunity to provide a “flavour” of what your product can deliver, helping to build confidence and trust with the client. It allows customers to experience a detailed preview of what your product can do, which builds anticipation and desire for the full functionality. During this phase, ensure your product is built to give them a taste of the value you can provide, which sets the stage for a broader, more impactful adoption down the line. Fundraising & Financial Management Leverage PR to generate inbound interest from VCs When it comes to fundraising, our approach was fairly traditional—we reached out to VCs and used connections from existing investors to make introductions. However, looking back, one thing that really helped us build momentum during our fundraising process was getting featured in Tech in Asia. This wasn’t planned; it just so happened that Tech in Asia was doing a series on AI startups in Southeast Asia and they reached out to us for an article. During the interview, they asked if we were fundraising, and we mentioned that we were. As a result, several VCs we hadn’t yet contacted reached out to us. This inbound interest was incredibly valuable, and we found it far more effective than our outbound efforts. So, if you can, try to generate some PR attention—it can help create inbound interest from VCs, and that interest is typically much stronger and more promising than any outbound strategies because they've gone out of their way to reach out to you. Be well-prepared and deliberate about fundraising. Keep trying and don't lose heart When pitching to VCs, it’s crucial to be thoroughly prepared, as you typically only get one shot at making an impression. If you mess up, it’s unlikely they’ll give you a second chance. 
You need to have key metrics at your fingertips, especially if you're running a SaaS company. Be ready to answer questions like: What’s your retention rate? What are your projections for the year? How much will you close? What’s your average contract value? These numbers should be at the top of your mind. Additionally, fundraising should be treated as a structured process, not something you do on the side while juggling other tasks. When you start, create a clear plan: identify 20 VCs to reach out to each week. By planning ahead, you’ll maintain momentum and speed up the process. Fundraising can be exhausting and disheartening, especially when you face multiple rejections. Remember, you just need one investor to say yes to make it all worthwhile. When using funds, prioritise profitability and grow only when necessary. Don't rely on funding to survive. In the past, the common advice for startups was to raise money, burn through it quickly, and use it to boost revenue numbers, even if that meant operating at a loss. The idea was that profitability wasn’t the main focus, and the goal was to show rapid growth for the next funding round. However, times have changed, especially with the shift from “funding summer” to “funding winter.” My advice now is to aim for profitability as soon as possible and grow only when it's truly needed. For example, it’s tempting to hire a large team when you have substantial funds in the bank, but ask yourself: Do you really need 10 new hires, or could you get by with just four? Growing too quickly can lead to unnecessary expenses, so focus on reaching profitability as soon as possible, rather than just inflating your team or burn rate. The key takeaway is to spend your funds wisely and only when absolutely necessary to reach profitability. You want to avoid becoming dependent on future VC investments to keep your company afloat. Instead, prioritize reaching break-even as quickly as you can, so you're not reliant on external funding to survive in the long run. Team-Building & Leadership Look for complementary skill sets in co-founders When choosing a co-founder, it’s important to find someone with a complementary skill set, not just someone you’re close to. For example, I come from a business and commercial background, so I needed someone with technical expertise. That’s when I found my co-founder, Himanshu, who had experience in machine learning and AI. He was a great match because his technical knowledge complemented my business skills, and together we formed a strong team. It might seem natural to choose your best friend as your co-founder, but this can often lead to conflict. Chances are, you and your best friend share similar interests, skills, and backgrounds, which doesn’t bring diversity to the table. If both of you come from the same industry or have the same strengths, you may end up butting heads on how things should be done. Having diverse skill sets helps avoid this and fosters a more collaborative working relationship. Himanshu (left) and Somsubhra (right) co-founded AI Palette in 2018 Define roles clearly to prevent co-founder conflict To avoid conflict, it’s essential that your roles as co-founders are clearly defined from the beginning. If your co-founder and you have distinct responsibilities, there is no room for overlap or disagreement. This ensures that both of you can work without stepping on each other's toes, and there’s mutual respect for each other’s expertise. 
This is another reason as to why it helps to have a co-founder with a complementary skillset to yours. Not only is having similar industry backgrounds and skillsets not particularly useful when building out your startup, it's also more likely to lead to conflicts since you both have similar subject expertise. On the other hand, if your co-founder is an expert in something that you're not, you're less likely to argue with them about their decisions regarding that aspect of the business and vice versa when it comes to your decisions. Look for employees who are driven by your mission, not salary For early-stage startups, the first hires are crucial. These employees need to be highly motivated and excited about the mission. Since the salary will likely be low and the work demanding, they must be driven by something beyond just the paycheck. The right employees are the swash-buckling pirates and romantics, i.e those who are genuinely passionate about the startup’s vision and want to be part of something impactful beyond material gains. When employees are motivated by the mission, they are more likely to stick around and help take the startup to greater heights. A litmus test for hiring: Would you be excited to work with them on a Sunday? One of the most important rounds in the hiring process is the culture fit round. This is where you assess whether a candidate shares the same values as you and your team. A key question to ask yourself is: "Would I be excited to work with this person on a Sunday?" If there’s any doubt about your answer, it’s likely not a good fit. The idea is that you want employees who align with the company's culture and values and who you would enjoy collaborating with even outside of regular work hours. How we structure the team at AI Palette We have three broad functions in our organization. The first two are the big ones: Technical Team – This is the core of our product and technology. This team is responsible for product development and incorporating customer feedback into improving the technology Commercial Team – This includes sales, marketing, customer service, account managers, and so on, handling everything related to business growth and customer relations. General and Administrative Team – This smaller team supports functions like finance, HR, and administration. As with almost all businesses, we have teams that address the two core tasks of building (technical team) and selling (commercial team), but given the size we're at now, having the administrative team helps smoothen operations. Set broad goals but let your teams decide on execution What I've done is recruit highly skilled people who don't need me to micromanage them on a day-to-day basis. They're experts in their roles, and as Steve Jobs said, when you hire the right person, you don't have to tell them what to do—they understand the purpose and tell you what to do. So, my job as the CEO is to set the broader goals for them, review the plans they have to achieve those goals, and periodically check in on progress. For example, if our broad goal is to meet a certain revenue target, I break it down across teams: For the sales team, I’ll look at how they plan to hit that target—how many customers they need to sell to, how many salespeople they need, and what tactics and strategies they plan to use. 
For the technical team, I’ll evaluate our product offerings—whether they think we need to build new products to attract more customers, and whether they think it's scalable for the number of customers we plan to serve. This way, the entire organization's tasks are cascaded in alignment with our overarching goals, with me setting the direction and leaving the details of execution to the skilled team members that I hire.

Looking for a Marketing Partner for an Innovative AI Mobile App [i will not promote]
reddit
LLM Vibe Score0
Human Vibe Score1
Altruistic-Flan-8222This week

Looking for a Marketing Partner for an Innovative AI Mobile App [i will not promote]

Hello everyone! I'm a software engineer and AI developer working on something great in the mobile AI space. If you have been following the trends on TikTok and similar platforms, you have probably noticed the explosion of AI apps (like Rizz AI and similar) that follow the simple "scan → solve" concept. These apps have been massively successful because they solve specific problems with minimal user friction. Here's what makes my project different: I have identified a unique market where there is currently zero competition for the app idea I'm creating, and the potential user base is massive - we are talking about 200M+ potential users in the US alone (60% of the US population could use this app). Even capturing just 0.05% of this market could generate significant revenue, considering similar apps typically charge $4-6 per user. What I'm looking for: A marketing partner (preferably US-based or someone familiar with the US market/audience) who can help grow this app. Initially, it requires about 30–60 minutes per day for content creation and posting. No experience is required. If you don't have marketing experience, don't worry. In today's marketing, passion is often more important than skills (and a bit of luck, haha). What I'm offering: For now, it's a revenue share partnership. I have invested my savings into the development of the app and the necessary equipment and I'm offering a revenue share until we generate enough profit for paid positions. Once we gain traction, the goal is to transition this into a part-time or full-time role. If you have zero creativity skills, I can provide you with my automated content generation tool to assist with marketing. It is basically a script that generates the type of content that gets the most views on other AI apps promoted on social media platforms. This is also a long-term partnership: if we achieve some results with one app but they're not good enough, we can try a new niche or just continue with this one. About the project: The app is almost complete and will likely launch in mid-February. It is a self-funded venture, meaning all profits will be reinvested into growth, including ads, revenue sharing and potentially useful tools to improve marketing. Also, the app is unique; I did deep research and there is no similar app in this niche, and it is very easy to promote. Overall, it follows a simple and effective business model with a clear monetization strategy. If you're interested in being part of something with genuine growth potential and want to learn more, DM me. We can discuss details on Reddit, Discord, LinkedIn, anything you like. The app launches in mid-February so I'm looking to bring someone on board soon to help out. Note: I will share specific details about the niche and app functionality in private messages to protect the idea before launch.

I fell into the builder's trap and need help getting out
reddit
LLM Vibe Score0
Human Vibe Score1
stellarcitizenThis week

I fell into the builder's trap and need help getting out

Hi r/startups, First-time technical founder here. Two years ago, I decided to leave the 9-5 grind and build something meaningful. Now, I have (what I believe is) a brilliant technical solution but no clear business case. I’m seeking a cofounder with product and marketing expertise to help pivot my project into a viable business - or start a new one. Details below. About Me 36yo, born in Berlin and moved to San Francisco 8 years ago Master's in Software Engineering with 15 years of experience Worked with early-stage startups in Berlin and a venture studio in SF Spent the past years leading a team of 12 shipping enterprise software The tech I've built An AI engine that makes it easy for developers to automate their workflows. It works with code, issues, PRs and integrates with 3rd party systems like error trackers, wikis, ticketing systems, etc. It takes natural language instructions, fulfills them autonomously and responds with a result. The functionality is served as a platform, with an API and an SDK. On top of it, I've built a CLI and a web application with productivity tools for developers. Who and what I'm looking for My main goal is to leave my current job and build a company around a problem that matters to me, ideally with considerable equity. I’m looking for: A cofounder with product and marketing expertise who sees potential in my tech and can help turn it into a successful business—or someone with a strong business case who needs a technical founder. Mentorship from someone experienced in dev tool startups or as a successful solo founder. I’d love to learn from your journey and would be happy to offer my technical expertise or collaborate on projects in return. Happy to answer any questions or provide more details. Cheers!

Seeking Your Feedback: SeedHustle and Your Small Business Journey✨
reddit
LLM Vibe Score0
Human Vibe Score1
EntryElectronicThis week

Seeking Your Feedback: SeedHustle and Your Small Business Journey✨

Hello, everyone, I'm one of the co-founders of SeedHustle, and I wanted to have an authentic discussion with you about our recent developments. SeedHustle is a project dear to us, with the aim of simplifying the often complex process of connecting startups with venture capitalists. 🌟 Why did we embark on this journey? Well, we've been in your shoes, experiencing the frustration of the never-ending search for the right VC partner and the challenges of establishing meaningful connections. This shared experience led to the creation of SeedHustle (https://seedhustle.ai/). So, what's the deal with SeedHustle? It's our effort to streamline the process of finding the ideal VC match. You provide us with your company details, and our AI system goes to work, suggesting potential VCs and explaining why they might be a good fit based on their past investments and backgrounds. We also provide real-time data on their funds. We're currently in the private beta phase and want to extend an invitation to join our Discord community. It's a space where founders can share their stories and possibly make introductions to VCs. As founders who thrive on AI challenges, we believe this could be a game-changer. 👂 I'm here to have an open dialogue. Is there anything you'd like to discuss? Whether it's SeedHustle, our journey, or your own small business experiences, we're all ears. Here are a few conversation starters:
- Does SeedHustle align with your small business journey?
- Do you have any suggestions for how we can improve our platform?
- Is there anything about what we're doing that's unclear or not quite resonating with you?
Your feedback is incredibly valuable to us, so please feel free to reach out. Thank you for being a part of this journey, and we hope to see you in our Discord community for a chat! 😊🚀

Idea Validation Post: Seeking Feedback on My AI-Driven Quick Launch Application! 🚀
reddit
LLM Vibe Score0
Human Vibe Score1
Awkward_Ad_9605This week

Idea Validation Post: Seeking Feedback on My AI-Driven Quick Launch Application! 🚀

Hey Members! I’m excited to share an idea for a new application I’m planning to build: Quick Launch. This AI-driven platform is designed to assist solopreneurs or anyone with an idea in launching their Minimum Viable Products (MVPs) by taking on the roles of the entire team needed for the process. Goal: Assistance in quickly moving from Idea to MVP. Before I dive into the details, I’d love to hear your thoughts and feedback. Key Features: Product Creation: From Idea to Product Detailing AI-Generated Q&A: Real-time question generation, one question at a time, to define the product requirements based on the user's knowledge level and convert an idea into a product. Market Research Reports: In-depth analysis that identifies product-market fit, competitive landscape, and potential marketing strategies. Sentiment Analysis: Evaluation of user feedback and reactions across multiple subreddits to gauge public opinion on ideas. Product Development: Product Detailing to Actual Product User Story Generation: Identification and creation of comprehensive user stories, tasks, and sub-tasks to facilitate development. AI Project Management: AI agents assume the roles of project managers and UI/UX designers to streamline product detailing and development. Integration Capabilities: Seamless integration with popular project management tools like Jira, Asana, and Trello for better workflow management. Target Audience: Solopreneurs: Individuals looking to bring their business ideas to life without extensive resources. Indie Hackers: Entrepreneurs focused on building small projects or startups with minimal overhead. Idea Validators: Anyone with a concept seeking initial validation and market feedback before committing significant resources. If you’re interested in learning more, check out our teaser website: Quick Launch. Discussion Question: What features would you find most valuable in an application like this? Are there specific pain points you face when launching an MVP? Your insights would be incredibly helpful as I refine this idea! Looking forward to your thoughts! 🙌

Here’s How Chatbots Can Boost Your Small Business
reddit
LLM Vibe Score0
Human Vibe Score1
smanwerThis week

Here’s How Chatbots Can Boost Your Small Business

Chatbots are the next big thing in the tech world for business use. Almost every business can benefit from chatbots in one way or another. They are now everywhere – these fast-rising stars are basically computer-operated programs that can play a variety of roles such as customer service representative, social media manager, personal assistant and much more. Virtually every industry is seemingly investing in them. Chatbots became the flavor of the season because of their task management and problem solving skills, which is why companies are aggressively adding chatbots to their business strategy. What are Chatbots – How Can They Benefit Your Small Business? In essence, chatbots are simply computer programs tailor-made to mimic conversations with the help of artificial intelligence (AI). These computer-based programs are capable of responding to natural language text and voice inputs in a human way. Chatbots can take over a lot of time consuming tasks, allowing project managers to focus on other important matters and take high level decisions. Chatbots are not just the next big thing for digital and tech brands; small businesses can also get the most out of them. Small businesses should get into chatbots to streamline their routine project management practices and support other business operations – thereby saving budget, time, and energy, while improving ROI. If you are not sure where to start, here are some ways to deploy this rising technology in order to boost your small business strategy. Instant Customer Support One of the most effective ways small businesses can implement a chatbot is for immediate customer support. If you belong to an industry that offers products and services, chances are you get many phone calls and emails from people looking for information. Rather than allowing customers to clog up your inbox with unlimited queries, try using a chatbot that will save your valuable time. You can simply create an immediate customer support presence for customers who engage with your chatbot. Craft answers for all the popular queries so that your project management team can focus on other complex and important issues while the chatbot addresses the most commonly asked questions. Moreover, it will add consistency to your brand voice. You can control the tone and ensure that the chatbot will deliver your crafted messages. Boost Sales Lead Generation Chatbots are not just about sharing or collecting information. They can actually boost sales. But, how? Though they can’t replace your sales and marketing team, they can smartly assist them by being an immediate point of contact. Create an automated conversation for a new visitor and it can directly influence sales. Chatbots run on artificial intelligence that is capable of gathering the data required to curate a specific set of products for customers. For instance, if a user asks the chatbot for a blue shirt in cotton, the chatbot can pull items with those particular details for the user. This process is cumulative: the next time the user communicates with the chatbot, it will consider their preferences. Increase Your Business Efficiency Though chatbots can’t perform every business operation, what they can do is eliminate a few of the menial but important operations. Consider all the important tasks that your employees need to perform, such as answering customer queries, compiling data for a user, filling out forms, etc.
Most of these tasks are monotonous in nature that allows you to train your chatbot to manage all these repetitive tasks with a low risk and high return of your valuable time. Reducing Cost and Resource Consumption Like any online task management system , chatbots are great to reduce manpower. From performing as a personal assistant to a customer sales representative, you can easily cut down the total number of resources that deal with customer complaints and feedback. You can utilize a chatbot, as it can do this work easily a human would usually do. Read Full article here
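For illustration only, here is a minimal sketch of the "craft answers for the popular queries" idea: a tiny Python FAQ responder that matches an incoming message against pre-written answers and falls back to a human handoff. The questions, answers, and threshold are hypothetical; a production chatbot would typically sit behind a messaging platform or use an LLM rather than simple string matching.

```python
from difflib import SequenceMatcher

# Hypothetical set of crafted answers for a small business's most common questions.
FAQ = {
    "what are your opening hours": "We are open Monday to Saturday, 9am to 6pm.",
    "do you offer refunds": "Yes, unused items can be returned within 30 days with a receipt.",
    "how can i track my order": "Use the tracking link in your confirmation email, or reply with your order number.",
}

def answer(question: str, threshold: float = 0.6) -> str:
    """Return the crafted answer whose known question is most similar to the user's message."""
    question = question.lower().strip("?! .")
    best_q, best_score = None, 0.0
    for known_q in FAQ:
        score = SequenceMatcher(None, question, known_q).ratio()
        if score > best_score:
            best_q, best_score = known_q, score
    if best_q is not None and best_score >= threshold:
        return FAQ[best_q]
    return "Thanks for reaching out! A member of our team will get back to you shortly."

if __name__ == "__main__":
    print(answer("What are your opening hours?"))
    print(answer("Can I get a refund on this?"))
```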

Looking to streamline and update family business
reddit
LLM Vibe Score0
Human Vibe Score1
JohACNHThis week

Looking to streamline and update family business

Hey r/smallbusiness, I’ve been working at my family’s business for six years now—joined right after college—and I’ve realized that we’re long overdue for an overhaul. I handle advertising sales, and while the business itself is solid, the way we operate is extremely outdated. Without revealing too much, we print about 180 publications, and businesses pay to have their ads featured. As a sales rep, my job includes: Renewing current advertisers Finding new customers and making sales Collecting artwork for ads Gathering billing info Laying out the ad grid with all advertisers The Problem: Everything is still done with pen and paper. We use carbon copy paper to record business details, billing info, and ad costs. One copy goes to the graphic designers, the other to billing. The billing team manually enters everything into QuickBooks, prints invoices, stuffs envelopes, and mails them out. We recently got new software that lets us send invoices via email and text through QuickBooks, which is a step in the right direction, but it’s just a small fix to a much bigger problem. What I Want to Change: Move everything onto an app or website—no more paper. Digitally lay out the ad grid instead of doing it manually. (For graphics team) Collect billing info online instead of writing it down. (Obviously to get paid faster and reduce wasted labor) Automate renewal emails instead of calling every single customer. (Save time) Find more efficient ways to generate leads for new business. (Work smarter, not harder) Honestly, the company still runs like my grandma set it up in the '90s, and it’s overwhelming trying to figure out where to start. If anyone has been through something similar or has advice on modernizing a business, I’d love to hear your thoughts! Happy to provide more details if needed. I’ve explored some CRMs and AI tools, but I’m sure someone here has better insights or more experience with this than I do. There are other parts of the business that need improvement, but I believe this would be a big step in the right direction. Thanks in advance!
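One of the asks above, automating renewal emails instead of calling every customer, is a good example of a quick win. Here is a minimal, hypothetical sketch (the file name, column names, and SMTP settings are assumptions for illustration, not anything from the post) that reads advertisers due for renewal from a CSV export and sends a templated reminder.

```python
import csv
import smtplib
from datetime import date, timedelta
from email.message import EmailMessage

# Assumed CSV export with columns: business_name, contact_email, renewal_date (YYYY-MM-DD).
ADVERTISERS_CSV = "advertisers.csv"
SMTP_HOST, SMTP_PORT = "smtp.example.com", 587            # placeholder mail server
SENDER, PASSWORD = "sales@example.com", "app-password"    # placeholder credentials

def due_soon(renewal_date: str, days: int = 30) -> bool:
    """True if the renewal date falls within the next `days` days."""
    y, m, d = map(int, renewal_date.split("-"))
    return date.today() <= date(y, m, d) <= date.today() + timedelta(days=days)

with open(ADVERTISERS_CSV, newline="") as f, smtplib.SMTP(SMTP_HOST, SMTP_PORT) as smtp:
    smtp.starttls()
    smtp.login(SENDER, PASSWORD)
    for row in csv.DictReader(f):
        if not due_soon(row["renewal_date"]):
            continue
        msg = EmailMessage()
        msg["Subject"] = f"Your ad renewal is coming up, {row['business_name']}"
        msg["From"], msg["To"] = SENDER, row["contact_email"]
        msg.set_content(
            f"Hi {row['business_name']},\n\n"
            f"Your ad placement renews on {row['renewal_date']}. "
            "Reply to this email or call us to confirm your spot.\n"
        )
        smtp.send_message(msg)
```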

40% Of SMBs Still Can't Pay Their Rent, Extending High Delinquency From September Into October
reddit
LLM Vibe Score0
Human Vibe Score1
Aegidius25This week

40% Of SMBs Still Can't Pay Their Rent, Extending High Delinquency From September Into October

https://www.alignable.com/forum/q4s-off-to-a-rough-start-40-of-smbs-still-cant-pay-their-rent October 31, 2023: While the federal government reported a surge in economic growth for the U.S. last week, that news doesn't hold true for many small business owners. In fact, in October polling by Alignable, only 12% said their companies are experiencing significant growth this month. Beyond that, Alignable’s October Rent Report, released today, shows that a whopping 40% of SMBs couldn't even pay their October rent in full and on time. This marks the second consecutive month of a 40% rent delinquency rate -- extending 2023's record high from September through October. These findings are based on responses from 4,246 randomly selected small business owners surveyed from 10/1/23 to 10/30/23, as well as input from 44,000+ other respondents over the past year. As the chart below shows, October's SMB rent delinquency rate is 10 percentage points higher than it was in January, reflecting cumulative economic struggles: increased rents, high interest rates, still-stifling inflation, rising labor costs, and revenues that have declined since this time last year. Rent delinquency rates among small businesses during 2023 based on Alignable surveys So, Why's Rent Delinquency At 40% For A 2nd Month? Here’s the current list of problems contributing to two months' worth of the highest delinquency rate 2023 has seen so far: Consumer Spending Declines On Main Street: Quarterly, we ask about customer spending habits at retailers. This month, 45% of independent Mom and Pop Shops said spending has been down over the last 30 days. Some said it was due to more people spending money online with big retailers like Amazon. This figure is quite high, especially considering that back in July, only 24% reported a drop in consumer spending -- 21 percentage points less severe than it is now. Revenue Troubles: 42% are making half or less of the income they generated monthly prior to COVID. For businesses that are less than three years old, this situation is even worse: 53% of this group reports making half or less of what they generated this time last year. High Interest Rates: Over half of all SMB owners polled said the past 19 months of high interest rates have hurt their margins, reduced revenues, and put their expansion plans on hold, as they don't want to apply for loans. Increased Rent Prices: 50% say they’re being charged more for rent now than they were six months ago, with 15% saying rent has increased by 20% or more. At present, only 37% of pre-COVID businesses have recovered financially from the pandemic era, leaving 63% still striving to make up for time they lost due to COVID, inflationary pressures, and high interest rates. There's a slight silver lining here, though, as the 37% figure is three percentage points higher than it was in September. But, with that said, a recovery rate of 37% after more than three and a half years is still very low and speaks volumes about the ongoing list of troubles small business owners face looking into the rest of 2023. Tech, Manufacturing, Gyms, Beauty & Retail Struggle Examining the rent delinquency landscape in terms of sectors, there's quite a negative shift occurring among some industries in October. Let's look at the charts below to see what's really happening. 
Charts: sectors most affected by rent delinquency, including tech and retail, with details on each sector's October rates. This is alarming for a few reasons: The countless technology layoffs at larger companies over the past year appear to be affecting the small companies now, too, who are often dependent on the larger ones as clients. Right now, 54% of science/technology small companies couldn't pay their October rent, up 10 percentage points from September and 16 percentage points since August. There are also some comments in the surveys of technology roles being reduced or replaced by ChatGPT and other AI, which can write software programs. Gyms have been struggling for a while now, and 50% of them can't afford the rent, up 8 percentage points from September. The biggest shift between October and September occurred among manufacturers, partially due to ongoing fluctuation in the price of gas and other inflationary issues. For quite some time, manufacturers were improving a lot in terms of their rent delinquency rates, but in October, they jumped 25 percentage points, doubling their rate, which is now 50%. This is also a record high for manufacturers in 2023. We hope this is just a blip, but we'll see in November. Also due, in part, to fluctuating gas prices and costs of vehicles, 45% of transportation companies couldn't pay October rent in full and on time. That's up 6 percentage points from last month. Sadly, 47% of salon owners couldn't cover October rent, after showing a lot of stability over the past few months. But that stability ended this month, as salons' rent delinquency rates jumped nine percentage points. Though rates have dropped three percentage points in October, a high percentage of retailers are still having trouble paying the rent. Last month, it was 47%. This month, it's better, but is still over 40%, landing at 44%. This is worrisome, especially since Q4 is a "make it or break it" time for many Main Street merchants. Looking more closely at the industries, there was some good news, in that a few others experienced lower delinquency rates in October, including restaurants, which dipped to 40% from 44% in September. Travel/lodging dropped seven percentage points to 38% (from 45% last month), as did education, which is also at 38%, down from 43%. When looking at rent delinquency from the vantage point of the states that are most affected, many surges can be seen between October and September, while a few states saw some dramatic, encouraging declines, too. Rent Troubles Increase For IL, VA, TX, MA, FL, & CO Looking at the states' charts, you can see how tumultuous the rent story has become this fall. Let's first talk about those with significant jumps in their delinquency rates. Here's the rundown: Illinois leads the list once again. After having a better month in September, its delinquency rate has soared, once more, landing at 54% for October (up from 46% last month). In fact, the 54% figure is the highest rate IL-based SMBs have seen in 2023. Virginia was in great shape last month, with a delinquency rate of just 19%. But Virginia-based small business owners have had a very rough month, at least in terms of rent. Now, 50% of them who took our poll say they couldn't cover rent (an increase of 31 percentage points). Texas is third on the list, with an 11-percentage-point lift from 38% in September to 49% in October. MA is next up at 48%, which marks the largest jump on the chart -- 32 percentage points from a low of just 16% in September.
Small businesses in Florida have also experienced two challenging months in terms of rent delinquency. Right now, 45% of SMBs there couldn't afford to pay, up nine percentage points from September and 15 percentage points from August. Colorado's businesses regressed in October, hitting a new record high of 40%. That rent delinquency rate jumped 13 percentage points from September to October. While we just covered states with some very high delinquency rates, there were also several more positive swings that have occurred in October. Though encouraging, we'll have to see how long those delinquency rates continue. Here are the most remarkable: New York -- After reaching a record rate of 55% last month, New York's small business owners now report a more stable number: just 29%. That's down 26 percentage points. New Jersey -- New York's neighbor has an even more impressive story in October: only 20% of New Jersey's SMBs couldn't pay rent this month, a record low over at least the past 14 months, down 34 percentage points from a record high of 54%. Michigan -- Similarly, Michigan's small business owners boast a rate of just 20%, down from 45% in September.

MIT Introduction to Data-Centric AI
reddit
LLM Vibe Score0
Human Vibe Score1
anishathalyeThis week

MIT Introduction to Data-Centric AI

Announcing the first-ever course on Data-Centric AI. Learn how to train better ML models by improving the data. Course homepage | Lecture videos on YouTube | Lab Assignments The course covers: Data-Centric AI vs. Model-Centric AI Label Errors Dataset Creation and Curation Data-centric Evaluation of ML Models Class Imbalance, Outliers, and Distribution Shift Growing or Compressing Datasets Interpretability in Data-Centric ML Encoding Human Priors: Data Augmentation and Prompt Engineering Data Privacy and Security MIT, like most universities, has many courses on machine learning (6.036, 6.867, and many others). Those classes teach techniques to produce effective models for a given dataset, and the classes focus heavily on the mathematical details of models rather than practical applications. However, in real-world applications of ML, the dataset is not fixed, and focusing on improving the data often gives better results than improving the model. We’ve personally seen this time and time again in our applied ML work as well as our research. Data-Centric AI (DCAI) is an emerging science that studies techniques to improve datasets in a systematic/algorithmic way — given that this topic wasn’t covered in the standard curriculum, we (a group of PhD candidates and grads) thought that we should put together a new class! We taught this intensive 2-week course in January over MIT’s IAP term, and we’ve just published all the course material, including lecture videos, lecture notes, hands-on lab assignments, and lab solutions, in hopes that people outside the MIT community would find these resources useful. We’d be happy to answer any questions related to the class or DCAI in general, and we’d love to hear any feedback on how we can improve the course material. Introduction to Data-Centric AI is open-source opencourseware, so feel free to make improvements directly: https://github.com/dcai-course/dcai-course.
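As a small, self-contained illustration of one topic from the course (label error detection), here is a sketch that ranks examples by how little confidence a cross-validated model assigns to their given labels, a simplified version of the confident-learning idea. The toy data, model choice, and cutoff are placeholders for illustration, not course material.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict

# Toy data: 200 points, 2 classes, with a handful of labels flipped on purpose.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2)) + np.repeat([[0, 0], [3, 3]], 100, axis=0)
y = np.repeat([0, 1], 100)
flipped = rng.choice(200, size=10, replace=False)
y[flipped] = 1 - y[flipped]

# Out-of-sample predicted probabilities via cross-validation (avoids self-confirmation).
pred_probs = cross_val_predict(LogisticRegression(), X, y, cv=5, method="predict_proba")

# Self-confidence: the probability the model assigns to each example's *given* label.
self_confidence = pred_probs[np.arange(len(y)), y]

# The lowest-confidence examples are the most likely label errors.
suspects = np.argsort(self_confidence)[:15]
print("Indices flagged as likely label errors:", sorted(suspects))
print("Actually flipped indices:             ", sorted(flipped))
```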

MMML | Deploy HuggingFace training model rapidly based on MetaSpore
reddit
LLM Vibe Score0
Human Vibe Score1
qazmkoppThis week

MMML | Deploy HuggingFace training model rapidly based on MetaSpore

A few days ago, HuggingFace announced a $100 million Series C funding round, which was big news in open source machine learning and could be a sign of where the industry is headed. Two days before the HuggingFace funding announcement, the open-source machine learning platform MetaSpore released a demo showing rapid deployment of HuggingFace pre-trained models. As deep learning makes breakthroughs in computer vision, natural language processing, speech understanding, and other fields, more and more unstructured data can be perceived, understood, and processed by machines. These advances are mainly due to the powerful learning ability of deep models: by pre-training on massive data, the models capture internal data patterns that help many downstream tasks. With industry and academia investing more and more energy in pre-training research, model hubs such as HuggingFace and Timm have emerged one after another, and the open-source community is releasing significant pre-trained models at an unprecedented speed. In recent years, the data that machines model and understand has gradually evolved from single-modal to multimodal, and the semantic gap between modalities is being closed, making it possible to retrieve data across modalities. Take CLIP, OpenAI's open-source work, as an example: it pre-trains twin towers for images and texts on a dataset of 400 million image-text pairs and connects the semantics of pictures and text, and many academic researchers have since built on this technology to tackle multimodal problems such as image generation and retrieval. Although this frontier technology bridges the semantic gap between modalities, putting it into production still involves many heavy and complicated steps - model tuning, offline data processing, high-performance online inference architecture design, heterogeneous computing, and online algorithm deployment - which keeps cutting-edge multimodal retrieval from reaching broad, practical use. DMetaSoul targets these pain points by abstracting and unifying steps such as model training optimization, online inference, and algorithm experimentation into a set of solutions that can quickly take an offline pre-trained model online. This article introduces how to use pre-trained models from the HuggingFace community for online inference and algorithm experiments on the MetaSpore technology stack, so that the benefits of pre-trained models can be fully released to specific businesses and industries, including small and medium-sized enterprises. We also provide two multimodal retrieval demos - text-to-text search and text-to-image search - for reference. Multimodal semantic retrieval The sample architecture of multimodal retrieval is as follows: Our multimodal retrieval system supports both text-to-text and text-to-image search scenarios and consists of offline processing, model inference, online services, and other core modules: https://preview.redd.it/mdyyv1qmdz291.png?width=1834&format=png&auto=webp&s=e9e10710794c78c64cc05adb75db385aa53aba40 Offline processing, including the offline data pipelines for the text-to-text and text-to-image scenarios: model tuning, model export, index database construction, data push, etc. Model inference.
After the offline model training, we deployed our NLP and CV large models based on the MetaSpore Serving framework. MetaSpore Serving helps us conveniently perform online inference, elastic scheduling, load balancing, and resource scheduling in heterogeneous environments. Online services. Based on MetaSpore’s online algorithm application framework, MetaSpore has a complete set of reusable online search services, including Front-end retrieval UI, multimodal data preprocessing, vector recall and sorting algorithm, AB experimental framework, etc. MetaSpore also supports text search by text and image scene search by text and can be migrated to other application scenarios at a low cost. The HuggingFace open source community has provided several excellent baseline models for similar multimodal retrieval problems, which are often the starting point for actual optimization in the industry. MetaSpore also uses the pre-training model of the HuggingFace community in its online services of searching words by words and images by words. Searching words by words is based on the semantic similarity model of the question and answer field optimized by MetaSpore, and searching images by words is based on the community pre-training model. These community open source pre-training models are exported to the general ONNX format and loaded into MetaSpore Serving for online reasoning. The following sections will provide a detailed description of the model export and online retrieval algorithm services. The reasoning part of the model is standardized SAAS services with low coupling with the business. Interested readers can refer to my previous post: The design concept of MetaSpore, a new generation of the one-stop machine learning platform. 1.1 Offline Processing Offline processing mainly involves the export and loading of online models and index building and pushing of the document library. You can follow the step-by-step instructions below to complete the offline processing of text search and image search and see how the offline pre-training model achieves reasoning at MetaSpore. 1.1.1 Search text by text Traditional text retrieval systems are based on literal matching algorithms such as BM25. Due to users’ diverse query words, a semantic gap between query words and documents is often encountered. For example, users misspell “iPhone” as “Phone,” and search terms are incredibly long, such as “1 \~ 3 months old baby autumn small size bag pants”. Traditional text retrieval systems will use spelling correction, synonym expansion, search terms rewriting, and other means to alleviate the semantic gap but fundamentally fail to solve this problem. Only when the retrieval system fully understands users’ query terms and documents can it meet users’ retrieval demands at the semantic level. With the continuous progress of pre-training and representational learning technology, some commercial search engines continue to integrate semantic vector retrieval methods based on symbolic learning into the retrieval ecology. Semantic retrieval model This paper introduces a set of semantic vector retrieval applications. MetaSpore built a set of semantic retrieval systems based on encyclopedia question and answer data. MetaSpore adopted the Sentence-Bert model as the semantic vector representation model, which fine-tunes the twin tower BERT in supervised or unsupervised ways to make the model more suitable for retrieval tasks. 
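Before the production details that follow, here is a rough, generic sketch of the text-to-text idea (not MetaSpore's actual code; the model name and documents are placeholders): encode a small Q&A corpus with a Sentence-BERT-style model and answer a query by cosine similarity.

```python
import numpy as np
from sentence_transformers import SentenceTransformer

# Placeholder model and corpus; the article's production setup uses its own fine-tuned
# Chinese Q&A model and a Milvus index instead of in-memory numpy arrays.
model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")
docs = [
    "How do I renew my ID card?",
    "What documents are needed to open a bank account?",
    "How can I apply for a passport?",
]

doc_vecs = model.encode(docs, normalize_embeddings=True)            # offline: build the index
query_vec = model.encode(["renewing an identity card"], normalize_embeddings=True)[0]

scores = doc_vecs @ query_vec                                       # cosine similarity (unit vectors)
best = int(np.argmax(scores))
print(f"Best match: {docs[best]!r} (score={scores[best]:.3f})")
```

MetaSpore's actual pipeline replaces the in-memory arrays with an offline-built Milvus index and serves the encoder through MetaSpore Serving, as described below.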
The model structure is as follows: The Query-Doc symmetric two-tower model is used for text search and question-and-answer retrieval. The online Query and the offline Doc share the same vector representation model, so it is necessary to keep the offline library-building model and the online Query inference model consistent. This case uses MetaSpore's text representation model Sbert-Chinese-QMC-domain-V1, optimized on open-source semantic similarity datasets. The model encodes the question-and-answer data as vectors during offline database construction and encodes the user query as a vector during online retrieval; because query and document live in the same semantic space, users' semantic retrieval needs can be met by vector similarity calculations. Since the text representation model encodes the Query online, we need to export the model for use by the online service. Go to the Q&A data library code directory and export the model following the documentation. In the script, PyTorch tracing is used to export the model. The models are exported to the "./export" directory. The exported artifacts are mainly the ONNX model used for online inference, the Tokenizer, and related configuration files. They are loaded into MetaSpore Serving by the online serving system described below for model inference. Since the exported model will be copied to cloud storage, you need to configure the related variables in env.sh. Build the library for text-to-text search The retrieval database is built on a million-scale encyclopedia question-and-answer dataset. Following the description document, you need to download the data and complete the database construction. The question-and-answer data are encoded as vectors by the offline model, and the resulting index data are pushed to the service components. The whole database construction process is as follows: Preprocessing, converting the original data into a more general JSON Lines format for database construction; Build index, using the same model as online ("sbert-Chinese-qmc-domain-v1") to index documents (one document object per line); Push inverted (vector) and forward (document field) data to each component server. The following is an example of the database data format. After offline database construction is completed, the data are pushed to the corresponding service components, such as Milvus, which stores the vector representations of documents, and MongoDB, which stores the summary information of documents. The online retrieval algorithm services will use these components to obtain relevant data. 1.1.2 Search images by text Text and images are easy for humans to relate semantically but difficult for machines. First, in terms of data form, text is discrete, one-dimensional data made of words and characters, while images are continuous two-dimensional or three-dimensional data. Second, text is a subjective human creation with a rich expressive range, full of twists, metaphors, and other figures of speech, while images are machine representations of the objective world. In short, bridging the semantic gap between text and image data is much more complex than text-to-text search.
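The sections below describe how the system handles this with CLIP. As a generic, hedged illustration of the two-tower idea (the model checkpoint and image files here are placeholders, not the article's demo assets), the following sketch embeds a text query and a few images into the same space and ranks the images:

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Placeholder checkpoint and image paths; the article's demo uses a Chinese-capable CLIP
# variant and a Milvus index built offline from the Unsplash Lite dataset.
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

images = [Image.open(p) for p in ["cat.jpg", "dog.jpg", "bed.jpg"]]
query = "a black cat on the bed"

with torch.no_grad():
    img_inputs = processor(images=images, return_tensors="pt")
    txt_inputs = processor(text=[query], return_tensors="pt", padding=True)
    img_vecs = model.get_image_features(**img_inputs)   # offline side: index these vectors
    txt_vec = model.get_text_features(**txt_inputs)     # online side: encode the query

# Cosine similarity between the query and each image embedding.
img_vecs = img_vecs / img_vecs.norm(dim=-1, keepdim=True)
txt_vec = txt_vec / txt_vec.norm(dim=-1, keepdim=True)
scores = (img_vecs @ txt_vec.T).squeeze(1)
print("Ranking (best first):", scores.argsort(descending=True).tolist())
```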
The traditional text search image retrieval technology generally relies on the external text description data of the image or the nearest neighbor retrieval technology and carries out the retrieval through the image associated text, which in essence degrades the problem to text search. However, it will also face many issues, such as obtaining the associated text of pictures and whether the accuracy of text search by text is high enough. The depth model has gradually evolved from single-mode to multi-mode in recent years. Taking the open-source project of OpenAI, CLIP, as an example, train the model through the massive image and text data of the Internet and map the text and image data into the same semantic space, making it possible to implement the text and image search technology based on semantic vector. CLIP graphic model The text search pictures introduced in this paper are implemented based on semantic vector retrieval, and the CLIP pre-training model is used as the two-tower retrieval architecture. Because the CLIP model has trained the semantic alignment of the twin towers’ text and image side models on the massive graphic and text data, it is particularly suitable for the text search graph scene. Due to the different image and text data forms, the Query-Doc asymmetric twin towers model is used for text search image retrieval. The image-side model of the twin towers is used for offline database construction, and the text-side model is used for the online return. In the final online retrieval, the database data of the image side model will be searched after the text side model encodes Query, and the CLIP pre-training model guarantees the semantic correlation between images and texts. The model can draw the graphic pairs closer in vector space by pre-training on a large amount of visual data. Here we need to export the text-side model for online MetaSpore Serving inference. Since the retrieval scene is based on Chinese, the CLIP model supporting Chinese understanding is selected. The exported content includes the ONNX model used for online reasoning and Tokenizer, similar to the text search. MetaSpore Serving can load model reasoning through the exported content. Build library on Image search You need to download the Unsplash Lite library data and complete the construction according to the instructions. The whole process of database construction is described as follows: Preprocessing, specify the image directory, and then generate a more general JSOnline file for library construction; Build index, use OpenAI/Clip-Vit-BASE-Patch32 pre-training model to index the gallery, and output one document object for each line of index data; Push inverted (vector) and forward (document field) data to each component server. Like text search, after offline database construction, relevant data will be pushed to service components, called by online retrieval algorithm services to obtain relevant data. 1.2 Online Services The overall online service architecture diagram is as follows: ​ https://preview.redd.it/nz8zrbbpdz291.png?width=1280&format=png&auto=webp&s=28dae7e031621bc8819519667ed03d8d085d8ace Multi-mode search online service system supports application scenarios such as text search and text search. The whole online service consists of the following parts: Query preprocessing service: encapsulate preprocessing logic (including text/image, etc.) 
of the pre-trained models, and provide services through a gRPC interface; Retrieval algorithm service: the whole algorithm processing chain, including A/B experiment traffic-splitting configuration, MetaSpore Serving calls, vector recall, ranking, document summaries, etc.; User entry service: provides a Web UI for users to debug and track down problems in the retrieval service. From the perspective of a user request, these services form invocation dependencies from back to front, so to build up a multimodal sample you need to run each service from front to back first. Before doing this, remember to export the offline models, put them online, and build the library first. This article will introduce the various parts of the online service system and build up the whole system step by step following the guidance below. See the ReadME at the end of this article for more details. 1.2.1 Query preprocessing service Deep learning models operate on tensors, but NLP/CV models often have a preprocessing part that translates raw text and images into tensors that the models can accept. For example, NLP models often have a tokenizer that transforms string-type text data into discrete tensor data, and CV models have similar logic that handles cropping, scaling, and other transformations of input images. Because this preprocessing logic is decoupled from the tensor inference of the deep model, and because deep model inference has its own independent, ONNX-based stack, MetaSpore split the preprocessing logic out into its own service. The NLP preprocessing Tokenizer has been integrated into the Query preprocessing service, and the split follows a fairly general convention: users only need to provide preprocessing logic files that implement the loading and prediction interface and export the necessary data and configuration files, which are then loaded into the preprocessing service. Subsequent CV preprocessing logic will also be integrated in this manner. The preprocessing service currently exposes a gRPC interface externally and is invoked through the Query preprocessing (QP) module in the retrieval algorithm service. After a user request reaches the retrieval algorithm service, it is forwarded to this service to complete data preprocessing before the subsequent processing continues. The ReadMe provides details on how the preprocessing service is started, how the preprocessing model exported offline to cloud storage enters the service, and how to debug the service. To further improve the efficiency and stability of model inference, MetaSpore Serving implements a Python preprocessing submodule. MetaSpore can thus provide gRPC services through a user-specified preprocessor.py, complete the Tokenizer or CV-related preprocessing, and translate requests into tensors that the deep models can handle; model inference is then carried out by MetaSpore Serving's subsequent submodules. The related code is presented here: https://github.com/meta-soul/MetaSpore/compare/add\python\preprocessor 1.2.2 Retrieval algorithm services The retrieval algorithm service is the core of the whole online service system. It is responsible for experiment triage, the assembly of algorithm chains (preprocessing, recall, ranking), and the invocation of dependent component services.
The whole retrieval algorithm service is developed on the Java Spring framework and supports the multimodal retrieval scenarios of text-to-text and text-to-image search. Thanks to good internal abstraction and modular design, it is highly flexible and can be migrated to similar application scenarios at a low cost. Here's a quick guide to configuring the environment and setting up the retrieval algorithm service; see the ReadME for more details: Install dependent components. Use Maven to install the online-Serving component. Configure the search service. Copy the template configuration file and replace the MongoDB, Milvus, and other settings based on the development/production environment. Install and configure Consul. Consul lets you synchronize the search service configuration in real time, including experiment traffic-splitting, recall parameters, and ranking parameters. The project's configuration file shows the current parameters for text-to-text and text-to-image search; the modelName parameter in the preprocessing and recall stages refers to the corresponding model exported during offline processing. Start the service. Once the above configuration is complete, the retrieval service can be started from the entry script. Once the service is started, you can test it! For example, a user with userId=10 who wants to ask "How to renew ID card" can access the text-to-text search service. 1.2.3 User Entry Service Because the retrieval algorithm service is exposed only as an API, it can be difficult to locate and trace problems, and a UI that intuitively displays retrieval results - especially for the text-to-image scenario - makes it much easier to iterate on and optimize the retrieval algorithm. This article therefore provides a lightweight Web UI for text and image search, with a search input box and a results display page. Developed with Flask, the service can easily be integrated with other retrieval applications. It calls the retrieval algorithm service and displays the returned results on the page. It's also easy to install and start; once you're done, go to http://127.0.0.1:8090 to see whether the search UI service is working correctly. See the ReadME at the end of this article for details. Multimodal system demonstration The multimodal retrieval service can be started once offline processing and the online service environment configuration have been completed following the instructions above. Examples of text searches are shown below. Open the text-to-image search application and enter "cat" first; you can see that the first three returned results are cats: https://preview.redd.it/d7syq47rdz291.png?width=1280&format=png&auto=webp&s=b43df9abd380b7d9a52e3045dd787f4feeb69635 If you add a color constraint to "cat" and search for "black cat," you can see that it does return a black cat: https://preview.redd.it/aa7pxx8tdz291.png?width=1280&format=png&auto=webp&s=e3727c29d1bde6eea2e1cccf6c46d3cae3f4750e Strengthening the constraint further and changing the query to "black cat on the bed" returns results containing pictures of a black cat climbing on the bed: https://preview.redd.it/2mw4qpjudz291.png?width=1280&format=png&auto=webp&s=1cf1db667892b9b3a40451993680fbd6980b5520 The cat can still be found by the text-to-image search system after the color and scene constraints in the example above.
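As a rough idea of what such a user entry service looks like (a hedged sketch only; the endpoint URL, query parameters, and response shape are assumptions, not MetaSpore's actual API), a minimal Flask front end could forward the query to the retrieval service and render the returned results:

```python
import requests
from flask import Flask, request, render_template_string

app = Flask(__name__)
RETRIEVAL_API = "http://127.0.0.1:8080/search"   # hypothetical retrieval algorithm endpoint

PAGE = """
<form method="get">
  <input name="q" value="{{ q }}" placeholder="Search..."><button>Go</button>
</form>
<ul>{% for r in results %}<li>{{ r }}</li>{% endfor %}</ul>
"""

@app.route("/")
def search():
    q = request.args.get("q", "")
    results = []
    if q:
        # Assumed response shape: {"results": ["doc text or image url", ...]}
        resp = requests.get(RETRIEVAL_API, params={"query": q, "userId": 10}, timeout=5)
        results = resp.json().get("results", [])
    return render_template_string(PAGE, q=q, results=results)

if __name__ == "__main__":
    app.run(port=8090)
```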
Conclusion The cutting-edge pre-training technology can bridge the semantic gap between different modes, and the HuggingFace community can greatly reduce the cost for developers to use the pre-training model. Combined with the technological ecology of MetaSpore online reasoning and online microservices provided by DMetaSpore, the pre-training model is no longer mere offline dabbling. Instead, it can truly achieve end-to-end implementation from cutting-edge technology to industrial scenarios, fully releasing the dividends of the pre-training large model. In the future, DMetaSoul will continue to improve and optimize the MetaSpore technology ecosystem: More automated and wider access to HuggingFace community ecology. MetaSpore will soon release a common model rollout mechanism to make HuggingFace ecologically accessible and will later integrate preprocessing services into online services. Multi-mode retrieval offline algorithm optimization. For multimodal retrieval scenarios, MetaSpore will continuously iteratively optimize offline algorithm components, including text recall/sort model, graphic recall/sort model, etc., to improve the accuracy and efficiency of the retrieval algorithm. For related code and reference documentation in this article, please visit: https://github.com/meta-soul/MetaSpore/tree/main/demo/multimodal/online Some images source: https://github.com/openai/CLIP/raw/main/CLIP.png https://www.sbert.net/examples/training/sts/README.html

What if… Employers Employ AI Agents to Get 360° Feedback from Employees?
reddit
LLM Vibe Score0
Human Vibe Score0
AssistanceOk2217This week

What if… Employers Employ AI Agents to Get 360° Feedback from Employees?

AI Agent powered Comprehensive 360° Feedback Collection & Analysis Full Article ​ https://i.redd.it/1ieczv6pud1d1.gif ⚪ What is this Article About? ● This article demonstrates how AI agents can be used in the real-world for gathering feedback from employees ● It explores using AI agents to collect insights on employee experiences, job satisfaction, and suggestions for improvement ● By leveraging AI agents and language models, organizations can better understand their workforce's needs and concerns ⚪Why Read this Article? ● Learn about the potential benefits of using AI agents for comprehensive feedback collection ● Understand how to build practical, real-world solutions by combining AI agents with other technologies ● Stay ahead of the curve by exploring cutting-edge applications of AI agents ⚪What are we doing in this Project? \> Part 1: AI Agents to Coordinate and Gather Feedback ● AI agents collaborate to collect comprehensive feedback from employees through surveys and interviews ● Includes a Feedback Collector Agent, Feedback Analyst Agent, and Feedback Reporter Agent \> Part 2: Analyze Feedback Data with Pandas AI and Llama3 ● Use Pandas AI and Llama3 language model to easily analyze the collected feedback data ● Extract insights, identify patterns, strengths, and areas for improvement from the feedback ⚪ Let's Design Our AI Agent System for 360° Feedback \> Feedback Collection System: ● Collect feedback from employees (simulated) ● Analyze the feedback data ● Report findings and recommendations \> Feedback Analysis System: ● Upload employee feedback CSV file ● Display uploaded data ● Perform natural language analysis and queries ● Generate automated insights and visual graphs ⚪ Let's get Cooking ● Explanation of the code for the AI agent system and feedback analysis system ● Includes code details for functions, classes, and streamlit interface ⚪ Closing Thoughts ● AI agents can revolutionize how businesses operate and tackle challenges ● Their ability to coordinate, collaborate, and perform specialized tasks is invaluable ● AI agents offer versatile and scalable solutions for optimizing processes and uncovering insights ⚪ Future Work ● This project is a demo to show the potential real-world use cases of AI Agents. To achieve the results seen here, I went through multiple iterations and changes. AI Agents are not fully ready yet (although they are making huge progress every day). AI Agents still need to go through an improvement cycle to reach their full potential in real-world settings. ​
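The article's full implementation is linked above. As a rough, framework-agnostic sketch of the collector/analyst/reporter split it describes (the role prompts and the call_llm helper below are hypothetical placeholders, not the article's code), the coordination can be as simple as three functions passing each other's output along:

```python
from typing import Callable

# Placeholder for whatever LLM backend you use (OpenAI, Llama 3 via a local server, etc.).
def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your LLM client here")

def feedback_collector(employee_answers: list[str], ask: Callable[[str], str] = call_llm) -> str:
    """Organize raw survey/interview answers into a structured feedback document."""
    joined = "\n".join(f"- {a}" for a in employee_answers)
    return ask(f"Organize these employee feedback notes into themes:\n{joined}")

def feedback_analyst(collected: str, ask: Callable[[str], str] = call_llm) -> str:
    """Extract strengths, pain points, and suggested actions from the collected feedback."""
    return ask(f"From this feedback, list strengths, pain points, and 3 concrete actions:\n{collected}")

def feedback_reporter(analysis: str, ask: Callable[[str], str] = call_llm) -> str:
    """Turn the analysis into a short report for leadership."""
    return ask(f"Write a one-page 360-degree feedback report for leadership based on:\n{analysis}")

def run_pipeline(employee_answers: list[str]) -> str:
    return feedback_reporter(feedback_analyst(feedback_collector(employee_answers)))
```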

Let’s Build One Person Business Using 100% AI
reddit
LLM Vibe Score0
Human Vibe Score1
AssistanceOk2217This week

Let’s Build One Person Business Using 100% AI

AI made it possible for 9-to-5 workers to start a one-person business without quitting their jobs. Full Article https://preview.redd.it/tynb9y6z695d1.png?width=1309&format=png&auto=webp&s=b490d3676a63adcc01faff8c476056cb7d420022 https://i.redd.it/9x3okti0795d1.gif The Opportunities for Starting a Business ○ There are huge opportunities to start your own business by leveraging valuable skills to attract paying audiences. ○ New software and AI platforms make it easier to distribute products/services and automate tasks that were previously time-consuming. Our One Person Book Publication House ○ This article explores building a one-person AI-powered business focused on publishing books. ○ Users input data on a topic, and AI generates a comprehensive book structure and content based on that. ○ The generated content can be formatted, designed, and published digitally or in print easily. Why Read This Article? ○ It presents an innovative AI-powered approach to streamline the book publishing process. ○ It provides technical implementation details using LLM, Python and the Streamlit library as a reference. ○ It highlights AI's potential in automating creative tasks like writing and content creation. Approaching the One Person Business ○ Reflect on areas where you overcame personal struggles and gained valuable skills. ○ Leverage that expertise to build an AI business serving others facing similar obstacles. ○ Use AI tools to create content, automate processes, and efficiently scale your offerings. The Publication Business Idea ○ Focus on writing and publishing small books using AI writing assistants. ○ AI can streamline research, writing drafts, outlines, and ideas across genres. ○ Concentrate efforts on editing, formatting, and marketing while AI handles writing. The Book Generation Process ○ Users input structured topic data like outlines, key points, and references. ○ Advanced AI language models generate flowing book content from that data. ○ Minimal human effort is needed beyond initial inputs and refinement. ○ AI systems automatically handle formatting, design, and publishing. Technical Implementation ○ Includes a Book class to represent a book's hierarchical structure in Python. ○ Functions to generate book structures and section content using AI models. ○ Integrates with a Streamlit app for user input and output. ○ Allows downloading the final book in Markdown format. Closing Thoughts ○ This AI-powered approach makes book writing and publishing more accessible to individuals. ○ AI handles the heavy lifting, with humans providing quality control through editing. ○ It opens up possibilities for innovative knowledge sharing as technology evolves.
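The article links its full implementation; as a minimal sketch of the idea (the Book structure and the generate_section helper here are illustrative stand-ins, not the article's actual classes), the core loop is: describe the structure, ask a language model to fill each section, and write Markdown out.

```python
from dataclasses import dataclass, field

# Placeholder LLM call; swap in your preferred client (OpenAI, a local model, etc.).
def generate_section(book_title: str, section_title: str, key_points: list[str]) -> str:
    raise NotImplementedError("plug in an LLM call that drafts the section text")

@dataclass
class Section:
    title: str
    key_points: list[str]
    content: str = ""

@dataclass
class Book:
    title: str
    sections: list[Section] = field(default_factory=list)

    def write(self) -> None:
        for s in self.sections:
            s.content = generate_section(self.title, s.title, s.key_points)

    def to_markdown(self) -> str:
        parts = [f"# {self.title}"]
        for s in self.sections:
            parts.append(f"## {s.title}\n\n{s.content}")
        return "\n\n".join(parts)

book = Book("Getting Unstuck: A Short Guide", [
    Section("Why We Procrastinate", ["fear of failure", "unclear goals"]),
    Section("Small Wins", ["two-minute rule", "tracking streaks"]),
])
# book.write(); open("book.md", "w").write(book.to_markdown())
```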

Let’s Build Small AI Buzz, Offer ‘Claim Processing’ to Mid/Big Companies
reddit
LLM Vibe Score0
Human Vibe Score1
AssistanceOk2217This week

Let’s Build Small AI Buzz, Offer ‘Claim Processing’ to Mid/Big Companies

Discover How AI Can Transform Businesses, Every Detail Spelled Out. Full Article https://preview.redd.it/jp0vc5g6e86d1.png?width=1421&format=png&auto=webp&s=efa43e2a9b04b6996b00adac4e4947a3b21c7e63 Artificial Intelligence (AI) is rapidly reshaping business landscapes, promising unprecedented efficiency and accuracy across industries. In this article, we delve into how Aniket Insurance Inc. (Imaginary) leverages AI to revolutionize its claim processing operations, offering insights into the transformative power of AI in modern business environments. ➡️ What’s This Article About? \* The article explores how Aniket Insurance Inc. uses AI to transform its claim processing. \* It details the three main workflows: User claim submission, Admin + AI claim processing, and Executive + AI claim analysis. https://preview.redd.it/ql0ec20ae86d1.png?width=769&format=png&auto=webp&s=4b6889dd85f848194d6adfc92c9c699138eb1fe7 ➡️ Why Read This Article \* Readers can see practical ways AI boosts efficiency in business, using Aniket Insurance as an example. \* AI speeds up routine tasks, like data entry, freeing up humans for more strategic work. It shows how AI-driven data analysis can lead to smarter business decisions. ➡️Let’s Design: Aniket Insurance Inc. has implemented an AI architecture that encompasses three pivotal workflows: User Claim Submission Flow, Admin + AI Claim Processing Flow, and Executive + AI Claim Analysis Flow. Powered by AI models and integrated with a data store, this architecture ensures seamless automation and optimization of the entire claim processing lifecycle. By leveraging AI technologies like machine learning models and data visualization tools, Aniket Insurance shows how a business can enhance operational efficiency and strategic decision-making capabilities. https://preview.redd.it/qgdmzs3ee86d1.png?width=733&format=png&auto=webp&s=445295beb52a56d826e5527859cf62879116ddb0 ➡️Closing Thoughts: Looking ahead, the prospects of AI adoption across various industries are incredibly exciting. Imagine manufacturing plants where AI optimizes production lines, predicts maintenance needs, and ensures quality control. Envision healthcare facilities where AI assists in diagnosis, treatment planning, and drug discovery. Picture retail operations where AI personalizes product recommendations, streamlines inventory management, and enhances customer service. The possibilities are endless, as AI’s capabilities in pattern recognition, predictive modeling, and automation can be leveraged to tackle complex challenges and uncover valuable insights in virtually any domain. https://preview.redd.it/w3hr913ge86d1.png?width=754&format=png&auto=webp&s=d839a7703f5b28314a3278c8d628ae5f05d3668f
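As a rough illustration of the "Admin + AI claim processing" step (a sketch under assumptions: the claim fields, scoring rules, and threshold below are invented for the example, not Aniket Insurance's actual workflow), an AI-assisted triage pass might score incoming claims and route only the risky ones to a human adjuster:

```python
from dataclasses import dataclass

@dataclass
class Claim:
    claim_id: str
    policy_active: bool
    amount: float
    prior_claims_12m: int
    description: str

def risk_score(c: Claim) -> float:
    """Toy scoring: higher means more likely to need human review."""
    score = 0.0
    if not c.policy_active:
        score += 1.0
    score += min(c.amount / 10_000, 1.0) * 0.5          # large claims get more scrutiny
    score += min(c.prior_claims_12m, 3) * 0.15          # frequent claimants get more scrutiny
    if any(w in c.description.lower() for w in ("fire", "theft", "total loss")):
        score += 0.25
    return score

def route(c: Claim, threshold: float = 0.6) -> str:
    return "human_review" if risk_score(c) >= threshold else "auto_approve"

claims = [
    Claim("C-1001", True, 850.0, 0, "Cracked windshield"),
    Claim("C-1002", True, 24_000.0, 2, "Vehicle theft reported overnight"),
]
for c in claims:
    print(c.claim_id, route(c))
```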

Browser Agents Real Example
reddit
LLM Vibe Score0
Human Vibe Score1
No_Information6299This week

Browser Agents Real Example

I made a Browser Price Matching Tool that uses browser automation and some clever skills to adjust your product prices based on real-time web search data. If you're into scraping, automation, or just love playing with the latest in ML-powered tools like OpenAI's GPT-4, this one's for you. What My Project Does The tool takes your current product prices (think CSV) and finds similar products online (targeting Amazon for demo purposes). It then compares prices, allowing you to adjust your prices competitively. The magic happens in a multi-step pipeline: Generate Clean Search Queries: Uses a learned skill to convert messy product names (like "Apple iPhone14!<" or "Dyson! V11!!// VacuumCleaner") into clean, Google-like search queries. Browser Data Extraction: Launches asynchronous browser agents (leveraging Playwright) to search for those queries on Amazon, retrieves the relevant data, and scrapes the page text. Parse & Structure Results: Another custom skill parses the browser output into structured info: product name, price, and a short description. Enrich Your Data: Finally, the tool combines everything to enrich your original data with live market insights! Full code link: Full code File Rundown learn_skill.py Learns how to generate polished search queries from your product names with GPT-4o-mini. It outputs a JSON file: make_query.json. learn_skill_select_best_product.py Trains another skill to parse web-scraped data and select the best matching product details. Outputs select_product.json. make_query.json The skill definition file for generating search queries (produced by learn_skill.py). select_product.json The skill definition file for extracting product details from scraped results (produced by learn_skill_select_best_product.py). product_price_matching.py The main pipeline script that orchestrates the entire process—from loading product data, running browser agents, to enriching your CSV. Setup & Installation Install Dependencies: pip install python-dotenv openai langchain_openai flashlearn requests pytest-playwright Install Playwright Browsers: playwright install Configure OpenAI API: Create a .env file in your project directory with: OPENAI_API_KEY="sk-your_api_key_here" Running the Tool Train the Query Skill: Run learn_skill.py to generate make_query.json. Train the Product Extraction Skill: Run learn_skill_select_best_product.py to generate select_product.json. Execute the Pipeline: Kick off the whole process by running product_price_matching.py. The script will load your product data (sample data is included for demo, but easy to swap with your CSV), generate search queries, run browser agents asynchronously, scrape and parse the data, then output the enriched product listings. Target Audience I built this project to automate price matching—a huge pain point for anyone running an e-commerce business. The idea was to minimize the manual labor of checking competitor prices while integrating up-to-date market insights. Plus, it was a fun way to combine automation, skill training, and browser automation! Customization Tweak the concurrency in product_price_matching.py to manage browser agent load. Replace the sample product list with your own CSV for a real-world scenario. Extend the skills if you need more data points or different parsing logic. Adjust skill definitions as needed Comparison With existing approaches you need to manually write parsing logic and data transformation logic - here the AI does it for you. If you like the tutorial - leave a star on GitHub
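The post's pipeline combines learned skills with Playwright browser agents. As a stripped-down, hedged sketch of just the "Browser Data Extraction" step (the search URL pattern and the selector-free text grab are simplifications for illustration, not the project's exact code), an async Playwright worker might look like this:

```python
import asyncio
from playwright.async_api import async_playwright

async def fetch_page_text(query: str) -> str:
    """Open a search results page for `query` and return its visible text."""
    url = f"https://www.amazon.com/s?k={query.replace(' ', '+')}"
    async with async_playwright() as p:
        browser = await p.chromium.launch(headless=True)
        page = await browser.new_page()
        await page.goto(url, wait_until="domcontentloaded")
        text = await page.inner_text("body")      # raw page text, parsed by a later skill
        await browser.close()
        return text

async def main():
    queries = ["apple iphone 14", "dyson v11 vacuum"]
    # Run a few browser agents concurrently; cap concurrency in real use.
    pages = await asyncio.gather(*(fetch_page_text(q) for q in queries))
    for q, text in zip(queries, pages):
        print(q, "->", len(text), "characters scraped")

asyncio.run(main())
```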

I created leadsnavi that helps small businesses find quality leads without breaking the bank
reddit
LLM Vibe Score0
Human Vibe Score1
BrightCook5861This week

I created leadsnavi that helps small businesses find quality leads without breaking the bank

Hey Redditors, I’m excited to share LeadsNavi, a tool I built specifically to help small businesses and B2B professionals automatically generate leads and reach potential customers in a smarter way. After talking to a lot of small business owners, I realized how tough it is to juggle lead generation with limited resources. So, I decided to create a tool that could simplify the process and make it more accessible to those who don’t have the budget to invest in expensive solutions. What Exactly Is LeadsNavi? LeadsNavi is an intuitive, cost-effective platform that automates the process of lead generation. It's designed to make it easy for small businesses and entrepreneurs to identify quality leads and grow their customer base without the need for manual prospecting. Here’s what makes it stand out: Automatic Lead Tracking: Tracks visitors to your website and matches them with company data, so you get real insights into who’s interested in your business. AI-Powered Lead Recommendations: Based on your website’s traffic, LeadsNavi uses AI to suggest similar companies that could be interested in your product or service, helping you find new leads faster and more accurately. Affordable & Scalable: For only $49/month, you can use a highly effective tool that scales with your business. It’s designed to be affordable even for small businesses. CRM Integration: Connect your CRM to directly import leads and sync your outreach efforts. How Does It Work? LeadsNavi uses advanced algorithms to track website visitors' IP addresses and match them with a comprehensive business database. It provides details like company names, contact information, and helps you identify potential leads for follow-up. The best part? It works automatically, saving you hours of manual work and effort. Lead Identification: Get insights into which companies are visiting your website. AI-Driven Lead Recommendations: The AI analyzes your site’s traffic and suggests other companies in the same industry or with similar needs that might be a great fit for your product or service. Data-Enriched Leads: Gather real-time, actionable data on these leads to make your outreach more targeted. Easy Setup: Simply integrate with your website and CRM to start getting quality leads in minutes. Who’s It For? Small Businesses: You don’t have to be a marketing expert to generate quality leads. B2B Sales Teams: Perfect for anyone looking to target other businesses with a streamlined and automated approach. Entrepreneurs & Startups: Focus on scaling your business without worrying about lead generation overhead. Why Try It? LeadsNavi gives you the power to focus on what really matters—connecting with potential customers and scaling your business. If you’ve been struggling with finding quality leads, or if you’re just getting started, I believe LeadsNavi can help you save time, effort, and money. I’m offering a 14-day free trial, so you can see the tool in action before committing to anything. Give it a try and let me know what you think! I’d love to hear your thoughts, suggestions, and how it works for your business. https://preview.redd.it/fdwil4rssgle1.png?width=1867&format=png&auto=webp&s=eb73b41a2b7665ae1b651fe2a6b7459df6990530

I built a library to visualize and edit audio filters
reddit
LLM Vibe Score0
Human Vibe Score1
AlexStreletsThis week

I built a library to visualize and edit audio filters

Hey everyone! TLDR: No fancy AI Agents or trendy micro-SaaS here — just an old-school library. Scroll down for the demo link! 🙃 App Demo The Story Behind Several years ago, I deep-dived into reverse engineering the parameter system used in VAG (Volkswagen, Audi, Porsche, etc) infotainment units. I managed to decode their binary format for storing settings for each car type and body style. To explain it simply - their firmware contains equalizer settings for each channel of the on-board 5.1 speaker system based on cabin volume and other parameters, very similar to how home theater systems are configured (gains, delays, limiters, etc). I published this research for the car enthusiast community. While the interest was huge, the reach remained small since most community members weren't familiar with hex editors. Only a few could really replicate what I documented. After some time, I built a web application that visualized these settings and allowed users to unpack, edit and repack that data back into the binary format. Nowadays The original project was pretty messy (spaghetti code, honestly) and had a very narrow focus. But then I realized the visualization library itself could be useful for any audio processing software. When I first tried to visualize audio filters with that project, I hit a wall. Most charting libraries are built for business data, all those "enterprise-ready visualization solutions". But NONE of them is designed for audio-specific needs. D3.js is the only real option here — it’s powerful but requires days of digging through docs just to get basic styling right. And if you want interactive features like drag-and-drop? Good luck with that. (Fun fact: due to D3's multiple abstraction layers, the very same filter calculations in DSSSP are 1.4-2x faster than D3's implementation). So, I built a custom vector-based graph from scratch with a modern React stack. The library focuses on one thing - audio filters. No unnecessary abstractions, no enterprise bloat, just fast and convenient (I hope!) tools for audio processing software. Core Features Logarithmic frequency response visualization Interactive biquad filter manipulation Custom audio calculation engine Drag-and-drop + Mouse wheel controls Flexible theming API Technical Details Built with React + SVG (no Canvas) Zero external dependencies besides React Full TypeScript support Live Demo & Docs & GitHub This is the first public release: the landing page is missing, the backlog is huge, and the docs don't yet cover some aspects. (You know, there's never a perfect timing - I just had to stop implementing my ideas and make it community driven). I'd love to see what you could build with these components. What's missing? What could be improved? I'm still figuring out how the project could generate some cash flow while staying open-source. Any ideas?
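For readers who haven't worked with biquads: the curve such a library draws is just a filter's magnitude response evaluated on a logarithmic frequency axis. DSSSP itself is React/TypeScript, so the snippet below is only a language-agnostic illustration of the underlying math in Python, using the standard Audio EQ Cookbook peaking-EQ formulas and scipy, not the library's own calculation engine.

```python
# Illustrative sketch (not DSSSP code): magnitude response of a peaking-EQ biquad
# on a log frequency axis, the same kind of curve the library visualizes.
import numpy as np
from scipy.signal import freqz

def peaking_biquad(fs, f0, gain_db, q):
    """Return (b, a) coefficients for a peaking EQ filter (Audio EQ Cookbook)."""
    a_lin = 10 ** (gain_db / 40)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * a_lin, -2 * np.cos(w0), 1 - alpha * a_lin])
    a = np.array([1 + alpha / a_lin, -2 * np.cos(w0), 1 - alpha / a_lin])
    return b / a[0], a / a[0]

fs = 48_000
b, a = peaking_biquad(fs, f0=1_000, gain_db=6.0, q=1.0)

# Log-spaced frequencies from 20 Hz to 20 kHz, like an audio analyzer.
freqs = np.logspace(np.log10(20), np.log10(20_000), 256)
_, h = freqz(b, a, worN=freqs, fs=fs)
magnitude_db = 20 * np.log10(np.abs(h))

print(f"Gain at 1 kHz: {magnitude_db[np.argmin(np.abs(freqs - 1_000))]:.2f} dB")
```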

ChatPDF and PDF.ai are making millions using open source tech... here's the code
reddit
LLM Vibe Score0
Human Vibe Score1
Level-Thought6152This week

ChatPDF and PDF.ai are making millions using open source tech... here's the code

Why "copy" an existing product? The best SaaS products weren’t the first of their kind - think Slack, Shopify, Zoom, Dropbox, or HubSpot. They didn’t invent team communication, e-commerce, video conferencing, cloud storage, or marketing tools; they just made them better. What is a "Chat with PDF" SaaS? These are AI-powered PDF assistants that let you upload a PDF and ask questions about its content. You can summarize articles, extract key details from a contract, analyze a research paper, and more. To see this in action or dive deeper into the tech behind it, check out this YouTube video. Let's look at the market Made possible by advances in AI like ChatGPT and Retrieval-Augmented Generation (RAG), PDF chat tools started gaining traction in early 2023 and have seen consistent growth in market interest, which is currently at an all-time high (source:google trends) Keywords like "chat PDF" and "PDF AI" get between 1 to 10 million searches every month (source:keyword planner), with a broad target audience that includes researchers, students, and professionals across various industries. Leaders like PDF.ai and ChatPDF have already gained millions of users within a year of launch, driven by the growing market demand, with paid users subscribing at around $20/month. Alright, so how do we build this with open source? The core tech for most PDF AI tools are based on the same architecture. You generate text embeddings (AI-friendly text representations; usually via OpenAI APIs) for the uploaded PDF’s chapters/topics and store them in a vector database (like Pinecone). Now, every time the user asks a question, a similarity search is performed to find the most similar PDF topics from the vector database. The selected topic contents are then sent to an LLM (like ChatGPT) along with the question, which generates a contextual answer! Here are some of the best open source implementations for this process: GPT4 & LangChain Chatbot for large PDF docs by Mayo Oshin MultiPDF Chat App by Alejandro AO PDFToChat by Hassan El Mghari Worried about building signups, user management, payments, etc.? Here are my go-to open-source SaaS boilerplates that include everything you need out of the box: SaaS Boilerplate by Remi Wg Open SaaS by wasp-lang A few ideas to stand out from the noise: Here are a few strategies that could help you differentiate and achieve product market fit (based on the pivot principles from The Lean Startup by Eric Ries): Narrow down your target audience for a personalized UX: For instance, an exam prep assistant for students with study notes and quiz generator; or a document due diligence and analysis tool for lawyers. Add unique features to increase switching cost: You could autogenerate APIs for the uploaded PDFs to enable remote integrations (eg. support chatbot knowledge base); or build in workflow automation features for bulk analyses of PDFs. Offer platform level advantages: You could ship a native mobile/desktop apps for a more integrated UX; or (non-trivial) offer private/offline support by replacing the APIs with local open source deployments (eg. llama for LLM, an embedding model from the MTEB list, and FAISS for vector search). TMI? I’m an ex-AI engineer and product lead, so don’t hesitate to reach out with any questions! P.S. I've started a free weekly newsletter to share open-source/turnkey resources behind popular products (like this one). If you’re a founder looking to launch your next product without reinventing the wheel, please subscribe :)

I retired at 32 from my side project. Here's the path I took.
reddit
LLM Vibe Score0
Human Vibe Score1
inputoriginThis week

I retired at 32 from my side project. Here's the path I took.

EDIT 2: Thanks for the award kind stranger! I've stopped responding to reddit comments for this post. I'm adding an FAQ to the original post based on the most common high quality questions. If you have a question that you're dying to know the answer to and that only I can help you with (vs. Google, ChatGPT, etc.), DM me. EDIT: I love how controversial this post has become (50% upvote rate), and only in this subreddit (vs. other subreddits that I posted the same content in). I trust that the open-minded half of you will find something useful in this post and my other posts and comments. I retired at 32 years old, in large part thanks to a B2C SaaS app that I developed on my own. Now, I don't have to work in order to cover my living expenses, and wouldn't have to work for quite a while. In other words, I can finally sip mai tais at the beach. I've condensed how I got there into this post. First, a super simplified timeline of events, followed by some critical details. Timeline 2013 Graduated college in the US 2013 Started first corporate job 2013 Started side project (B2C app) that would eventually lead to my retirement 2020 Started charging for use of my B2C app (was free, became freemium) 2021 Quit my last corporate job 2022 Retired: time freedom attained Details First, some summary statistics of my path to retirement: 9 years: time between graduating college and my retirement. 8 years: total length of my career where I worked at some corporate day job. 7 years: time it took my B2C app to make its first revenue dollar. 2 years: time between my first dollar of SaaS revenue and my retirement. "Something something overnight success a decade in the making". I got extremely lucky on my path to retirement, both in terms of the business environment I was in and who I am as a person. I'd also like to think that some of the conscious decisions I made along the way contributed to my early retirement. Lucky Breaks Was born in the US middle class. Had a natural affinity for computer programming and entrepreneurial mindset (initiative, resourcefulness, pragmatism, courage, growth mindset). Had opportunities to develop these mindsets throughout life. Got into a good college which gave me the credentials to get high paying corporate jobs. Was early to a platform that saw large adoption (see "barnacle on whale" strategy). Business niche is shareworthy: my SaaS received free media. Business niche is relatively stable, and small enough to not be competitive. "Skillful" Decisions I decided to spend the nights and weekends of my early career working on side projects in the hopes that one would hit. I also worked a day job to support myself and build my savings. My launch funnel over roughly 7 years of working on side projects: Countless side projects prototyped. 5 side projects publicly launched. 2 side projects made > $0. 1 side project ended up becoming the SaaS that would help me retire. At my corporate day jobs, I optimized for learning and work-life balance. My learning usually stalled after a year or two at one company, so I’d quit and find another job. I invested (and continue to do so) in physical and mental wellbeing via regular workouts, meditation, journaling, traveling, and good food. My fulfilling non-work-life re-energized me for my work-life, and my work-life supported my non-work-life: a virtuous cycle. I automated the most time-consuming aspects of my business (outside of product development). Nowadays, I take long vacations and work at most 20 hours a week / a three-day work week.
I decided to keep my business entirely owned and operated by me. It's the best fit for my work-style (high autonomy, deep focus, fast decision-making) and need to have full creative freedom and control. I dated and married a very supportive and inspiring partner. I try not to succumb to outrageous lifestyle creep, which keeps my living expenses low and drastically extends my runway. Prescription To share some aphorisms I’ve learned with the wantrepreneurs or those who want to follow a similar path: Maximize your at bats, because you only need one hit. Bias towards action. Launch quickly. Get your ideas out into the real world for feedback. Perfect is the enemy of good. If you keep swinging and improving, you'll hit the ball eventually. Keep the big picture in mind. You don't necessarily need a home-run to be happy: a base hit will often do the job. Think about what matters most to you in life: is it a lot of money or status? Or is it something more satisfying, and often just as if not more attainable, like freedom, loving relationships, or fulfillment? Is what you’re doing now a good way to get what you want? Or is there a better way? At more of a micro-level of "keep the big picture in mind", I often see talented wantrepreneurs get stuck in the weeds of lower-level optimizations, usually around technical design choices. They forget (or maybe subconsciously avoid) the higher-level and more important questions of customer development, user experience, and distribution. For example: “Are you solving a real problem?” or “Did you launch an MVP and what did your users think?” Adopt a growth mindset. Believe that you are capable of learning whatever you need to learn in order to do what you want to do. The pain of regret is worse than the pain of failure. I’ve noticed that fear of failure is the greatest thing holding people back from taking action towards their dreams. Unless failure means death in your case, a debilitating fear of failure is a surmountable mental block. You miss 100% of the shots you don't take. When all is said and done, we often regret the things we didn't do in life more than the things we did. There’s more to life than just work. Blasphemous (at least among my social circle)! But the reality is that many of the dying regret having worked too much in their lives. As Miss Frizzle from The Magic Schoolbus says: "Take chances, make mistakes, get messy!" Original post

I made a bunch of side projects over the last 9 months, and even accrued 500+ accounts and some donations!
reddit
LLM Vibe Score0
Human Vibe Score1
firebird8541154This week

I made a bunch of side projects over the last 9 months, and even accrued 500+ accounts and some donations!

I just stumbled upon this subreddit and have a bunch of fun projects I'd like to present; any thoughts/feedback/criticism, etc. all welcome. So, first things first, a little about me: I work full time in an unrelated job, but have picked up full stack and mobile programming. I have two roommates who help a bit in their own way, one is a server expert and happened to have a server in our apartment basement, and the other is my brother and he picked up some frontend programming. We're all avid cyclists and decided to start building about 9 months ago. Our first idea was https://sherpa-map.com, an SPA website allowing users to create cycling routes, send them to their Garmin devices, download them as GPX files, etc. This site uses the open-source software Graphhopper on the backend, which I've augmented to send back surface type information. This site has a loooonnnggg list of features, from the simple, like a live weather radar, to the extreme, like this functionality: AI surface classification This video demonstrates the ability to classify road surface types in real time using high-resolution satellite imagery of road portions with unknown surface types! I trained a PyTorch ResNet-50 model with tuned hyperparameters and 10 epochs on 200,000 satellite images of roads with known surface types! (We host an OSM Postgres server with coordinates of roads and their associated surface types; I made a script to pull images of said roads for training). I built the model into a secondary backend written in Flask and piped the images being used back through live WebSockets to my Node.js backend and on to the person who is logged in! Okay, on to the next side project, a cycling physics simulator! https://sherpa-map.com/cycling-route-calculator.html Cycling Physics Simulation This site lets users enter information about their bike setup, upload or use a preset route, and enter in their physical information to see how different changes in their setup might affect how fast they will be throughout a course! It can also pull complex weather information throughout the course and give a full suite of nutrition details! Okay, next project! The Activity Racer! https://sherpa-map.com/activity-racer.html Activity Racer This site lets users upload their own or competitors' GPX activity files and line them up against each other at any point in an event, to see who was faster where! It's great if you've done the same event year after year with differing setups, allowing you to get insights as to which might have done better at what point. Okay, final project, this one's pretty half-baked as I'm still in the process of implementing so many other things, a podcast creation app! (I was bored and just started working on this a week or so ago, for no good reason). Currently, this one lives on https://sherpa-map.com/podcast.html This podcasting web app creates a peer to peer to peer... mesh network using WebRTC so small groups can communicate with the highest level of fidelity both in audio and video! Simply enter a room name and have other users enter the room name as well and they're connected! I've already used TensorFlow.js AI to allow a blur background option, similar to MS Teams, whereby the BodyPix classifier picks out the person and I apply a blur on a JS canvas behind them.
I also went a little bit off the deep end and managed to implement the RNNoise background noise suppressor on the frontend; it's written in C, but I was able to use Windows Subsystem for Linux + emscripten to compile it in just the right way, with exposed malloc and free and a JS wrapper to use on the frontend in WASM. I actually use WASM (typically Rust) in many fun ways throughout all of these projects. I'm also in the middle of recreating the first site in React Native + MapLibre for iOS and Android as individual apps. In addition, I'm also working on the integration of my main site into a different project for a different group. So, I have a fun collection of side projects with slightly different GUIs, across different platforms, with no coherent landing page as of yet, but I've been having a blaaaast putting them together. As a final note, I even have a bit of an easter egg in the automated email system I use for account verifications and password resets: do_not_reply@sherpa-map.com. I hooked it up to the ChatGPT API and told it it is a disgruntled worker whose sole task in life is to watch a do_not_reply email box and respond sarcastically/snarkily to anyone who dares send a message to it; if AI comes for humanity, I bet I'll be on a list for this one lol.
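The post doesn't include the surface-classification training script, so here is a rough sketch of the general approach it describes: fine-tuning a torchvision ResNet-50 on labeled satellite tiles of roads. The folder layout, class names, and hyperparameters other than the 10 epochs are placeholders, not the author's actual setup.

```python
# Illustrative sketch (not the author's code): fine-tune ResNet-50 to classify
# road surface types from satellite image tiles.
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

# Expects tiles/{paved,gravel,dirt}/*.png - a hypothetical layout.
dataset = datasets.ImageFolder("tiles", transform=transform)
loader = DataLoader(dataset, batch_size=32, shuffle=True)

device = "cuda" if torch.cuda.is_available() else "cpu"
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(dataset.classes))  # new classification head
model = model.to(device)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(10):  # the author mentions 10 epochs
    total, correct = 0, 0
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
        correct += (model(images).argmax(1) == labels).sum().item()
        total += labels.size(0)
    print(f"epoch {epoch}: training accuracy {correct / total:.2%}")
```

Serving the trained model from a small Flask endpoint, as the author describes, is then just a matter of loading the saved weights and running single-image inference per request.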

Day 1 of my BIP for my AdonisJS Boilerplate (turbosaas) [Built in public]
reddit
LLM Vibe Score0
Human Vibe Score0.5
Ok_Bread_6005This week

Day 1 of my BIP for my AdonisJS Boilerplate (turbosaas) [Built in public]

Hello everyone, here is day 1 (not really, I started a bit earlier) of my project: A boilerplate using AdonisJS, Inertia What technologies are used/present? AdonisJS Inertia Stripe OpenAI TailwindCSS Vite (React) Why? Firstly, I want to save time when launching my projects, and I think you do too, so I've included as many relevant features as possible. I'm tired of seeing attitudes like 'develop your SaaS in 1 hour and produce terrible code!' The purpose of this codebase is to provide the highest quality code possible and to maintain that standard throughout the development process. You might spend an extra 20 minutes doing things right, but you'll save 2 hours on refactoring. And no, you won't have to pay for updates. (WTF by the way?) Why these technologies? I've seen a lot of NextJS for boilerplates, and I've also used NextJS before, but I quickly abandoned it. It quickly becomes a mess: you lose track of what is what and start doing anything, and every update breaks your application. Whereas with AdonisJS, life is beautiful. There are plenty of community packages already available, and everything you need is here. What am I offering? Authentication: Social authentication, OTP, Magic Links, and credentials, along with complete account management features like password recovery. Payment & Mailing Integration: Seamless integration from start to finish, with multiple options to choose from. Detailed Documentation: Thorough explanations of every aspect, covering even the smallest, potentially confusing details in the code. Maintainable & Scalable Code: Organized by features, allowing you to easily drag and drop features to extend functionality. Developer Tools: Handy commands for generating new features and automatically adding necessary imports; a complete config to enable/disable a feature in less than 10 seconds... Pre-made Pages: Ready-to-use pages such as an admin dashboard for tasks like automatically updating products on Stripe. Extensive Component Library: A variety of components to streamline development. I've designed this boilerplate to be as developer-friendly and robust as possible, aiming to support maintainability and scalability from the get-go. Summary of today and previous days Day 2 Stripe is a nightmare to set up if you've never done it before; it quickly becomes tedious. But I've finally finished setting everything up: one-time payments, subscriptions, and subscription updates. It was complicated. Today I finally implemented the 'forgot password' option, and I've completed all the authentication by adding magic links (working with OTP). I also set up automatic deployment with GitHub Actions, and everything works well. The build runs in the action to ensure everything goes smoothly, then, using SSH, I pull the project, build it, and launch it. Tomorrow: What I want to do tomorrow Tomorrow, I want to create the blog (yes, I want to include a blog as well) and complete it as soon as possible so it can be available on turbosaas(dot)dev, where I'll write my build in public. It will probably use markdown. Thank you for reading this short build in public; you can also check out how it's going on turbosaas(dot)dev.

I Made $20K in 2 Months by Building in Public on X
reddit
LLM Vibe Score0
Human Vibe Score1
nebulasyncThis week

I Made $20K in 2 Months by Building in Public on X

Hey everyone, I wanted to share my journey of making $20K in just 2 months by leveraging Twitter (X) and building in public. It’s been an exciting ride, and I hope my story inspires others to take action on their ideas. Here’s exactly what I did: Building in Public I started sharing everything about my work openly. My wins, struggles, and process. I showed: How I build MVPs for clients. The tools I use (Next.js, Supabase, Cursor AI, etc.). The challenges I face and how I solve them. Transparency builds trust, and trust brings clients. Consistency is Key For the past 2 months, I’ve posted consistently on X, even when I felt like no one was watching. Here’s what I focused on: Sharing value (pro tips, workflows, tools). Asking for advice and engaging with my community. Highlighting my projects and client work. Building an audience takes time, but showing up daily pays off. Personal Brand = Inbound Clients I never did any “engagement farming” or gimmicky posts. I just shared my knowledge, and it led to over 35M views on my tweets and 7K followers. Many of these followers turned into inbound client leads. I’ve always believed: Share value for free, and charge for implementation. The Power of Community Engaging with my community on X has been game-changing. People have: Helped refine my processes. Shared valuable tools and advice. Connected me to opportunities I wouldn’t have found otherwise. Key Takeaway: You don’t need a perfect process or a huge following to start. Be consistent. Build in public. Share your journey. In 2 months, I’ve gone from wondering if this would work to making $20K by simply showing up and adding value. If you’re thinking about building in public or starting a personal brand, DO IT. It works. Feel free to ask me anything. I’m happy to share more details about my process, tools, or lessons learned! Let’s build together.

Introducing Vest: Your AI-Powered Due Diligence Partner - Looking for feedback!
reddit
LLM Vibe Score0
Human Vibe Score1
nervousslinkyThis week

Introducing Vest: Your AI-Powered Due Diligence Partner - Looking for feedback!

TLDR; We are introducing Vest, an AI powered due-diligence and stock recommendation platform. We have bootstrapped ourselves so far and want to get as much feedback from Reddit as we can to see where we can improve, but also what we are doing right. So please have a look around, give us feedback and if you like it, feel free to use it. Hi Reddit, My name is Drian and I'm one of the founders of Vest. We believe we are crafting something special at Vest and we want to get the word out and gather as much feedback as possible! Our major goal at Vest is to help new retail investors make sense of the investment landscape and get AI powered assistance, or even help experienced investors get confirmation of their potential moves. Overall, we want people to start their journey to financial freedom and not be daunted by the complexity of it. So how do we do this? Vest is a user-friendly service that harnesses fundamental metrics, social and news sentiment, and technical analysis, which we feed into some advanced AI models to generate clear buy, sell, or hold signals for US-based (for now!) stocks, offering our users transparent due-diligence for confident investing. The service is currently free with no ads - however, at some point we do plan on adding a paid tier. What's included: Financial Metrics. Our financial metrics take all the potentially complex mathematical equations and present the fundamentals of a company to users in a simple one-pager, with a score indicating whether the metric is positive for a stock. We also provide publicly available analyst ratings from investment banks as well as price targets they have set. News Sentiment. We take publications about a specific stock from news articles, journals and socials and give these all a rating to determine if social sentiment is positive around a stock or not. Each article and its rating is visible to our users through our dashboard. AI assisted Stock Signals. We have developed an algorithm to take all the metrics, sentiment and technical analysis we collate and analyze this with historic performance data for every stock to attempt to figure out if a stock is undervalued (great time to buy) or overvalued (great time to sell). 155 US stock tickers and counting. We currently have trained our models for around 155 US based stocks on the NASDAQ and NYSE exchanges. As we get more funding/runway we do plan on adding more, with the eventual goal to expand to more exchanges, countries and securities. Knowledge base and community. Our knowledge base & community contains explanations and articles for all metrics and the other good stuff behind Vest. We don’t want to just tell users what to do, but to also assist in their financial education. We hope our knowledge base can also become a thriving community where users can interact with us and each other, ask questions about investing and keep gaining knowledge. Is it 100% accurate? Absolutely not. While we do a pretty great job at tracking and surfacing signals, we are not presenting a fool-proof, silver bullet with a guarantee here - rather a starting point for users to make more informed decisions, find potential new investment opportunities and hopefully learn about investing as they do so. We encourage our users to do their own research and due-diligence and not just take our signals as gospel - we know each and every person has a different risk appetite and goals, and we encourage you to use Vest in a way that fits with your own financial goals and risk appetite.
We also display our win rates, average returns, and comparisons with buy and hold for each stock - and we are transparent about it when we’ve fallen short. Next steps: Hop over to vestapp.ai and sign up. From the dashboard, play around, inspect our stock information and add some stocks to your watchlist. If you like what you see, and you’ve done your homework - use your favourite brokerage account to make an investment and watch Vest for changes in a stock's signals. If you don’t have one, we have a pop-up when you click buy/sell on any given stock with some non-affiliated brokerage options for the US, Australia and New Zealand - we don’t get a kickback from these brokerages, they are just what we’ve personally been using. FEEDBACK - We’re just getting started and we know the value of a fresh pair of eyes - our current mission is to get as much feedback as possible - anything you think of, please send it through here or on the dedicated feedback form on our website in the sidebar on the left. Features we’re working on We're quietly thrilled about the direction Vest is headed, and we want to give you a sneak peek of what's in store for the next couple of quarters. Some of these may roll out as premium features, but we're diligently fine-tuning the details. Here's what you can expect: Insider Trading Insights: Get daily reports on major stock moves by whales and company insiders. Institutional Holders: We're adding daily reports on institutional holders, keeping you informed about their moves. Lobbying Activity: We're actively working on daily updates about lobbying activities, so you can stay informed. Government Contracts Data: We'll provide a quarterly snapshot of government contract values for the companies you're tracking. US Congress Stock Activity: Keep an eye on daily trading actions of House and Senate members. Daily Summaries & Signal Alerts: We're currently hard at work on this feature. Soon, receive daily email summaries covering signals, watchlist updates, and key news. Personalized Risk Management: Tailor signals to match your unique risk management strategy. Your investments, your way. AI Assistant: Our LLM integration is almost ready, allowing you to ask it straightforward questions about particular securities in plain English. It will provide you with real-time context on fundamentals, news, and all the metrics and data points we monitor.
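Vest's actual models are not public, so as a reader's aid here is a deliberately simplified sketch of the general idea described above: normalize a few fundamental, sentiment, and technical inputs and blend them into a buy/hold/sell label. Every weight, threshold, and input in this snippet is made up for illustration and is not financial advice or Vest's methodology.

```python
# Toy illustration of blending fundamentals, sentiment, and technicals into a
# buy/hold/sell signal. All weights and thresholds are invented for the example.
from dataclasses import dataclass

@dataclass
class StockInputs:
    pe_ratio: float        # fundamental: price/earnings
    sector_pe: float       # fundamental: sector average P/E
    news_sentiment: float  # -1 (very negative) .. +1 (very positive)
    rsi: float             # technical: relative strength index, 0..100

def score(inputs: StockInputs) -> tuple[float, str]:
    # Normalize each component to roughly -1..+1.
    valuation = max(-1.0, min(1.0, (inputs.sector_pe - inputs.pe_ratio) / inputs.sector_pe))
    sentiment = inputs.news_sentiment
    momentum = (50 - inputs.rsi) / 50  # oversold -> positive, overbought -> negative

    composite = 0.5 * valuation + 0.3 * sentiment + 0.2 * momentum
    if composite > 0.25:
        label = "buy"
    elif composite < -0.25:
        label = "sell"
    else:
        label = "hold"
    return composite, label

print(score(StockInputs(pe_ratio=18, sector_pe=25, news_sentiment=0.4, rsi=42)))
```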

How I went from $27 to $3K as a solopreneur still in a 9-5
reddit
LLM Vibe Score0
Human Vibe Score1
jottrledThis week

How I went from $27 to $3K as a solopreneur still in a 9-5

My journey started back in November 2023. I was scrolling through Twitter and YouTube and saw a word that I had never come across before. Solopreneur. The word caught my eye. Mainly because I was pretty sure I knew what it meant even though it's not a word you'll find in the dictionary. I liked what it was describing. A solo entrepreneur. A one man business. It completely resonated with me. As a software engineer by trade I'm used to working alone, especially since the pandemic hit and we were forced to work remotely. See, I always wanted to ditch the 9-5 thing but thought that was too big and too scary for a single person to do. Surely you would need a lot of money to get started, right? Surely you would need investors? The whole concept seemed impossible to me. That was until I found all the success stories. I became obsessed with the concept of solopreneurship. As I went further down the rabbit hole I found people like Justin Welsh, Kieran Drew and Marc Louvion to name a few. All of whom have one person businesses making huge money every year. So I thought, if they can do it, why can't I? People like this have cleared the pathway for those looking to escape the 9-5 grind. I decided 2024 would be the year I try this out. My main goal for the year? Build a one man business, earn my first $ online and learn a sh*t ton along the way. My main goal in general? Build my business to $100K per year, quit my 9-5 and live with freedom. From December 2023 to February 2024 I began brainstorming ideas. I was like a lost puppy looking for his ball. How on earth did people find good ideas? I began writing everything and anything that came to mind down in my notes app on my phone. By February I would have approximately 70 ideas. Each as weird and whacky as the other. I was skeptical though. If I went through all the trouble of building a product for one of these ideas how would I know if anyone would even be interested in using it? I got scared and took a break for a week. All these ideas seemed too big and the chance that they would take off into the atmosphere was slim (in my mind anyways). I was learning more and more about solopreneurship as the weeks went on so I decided to build a product centered around everything I was learning about. The idea was simple. Enter a business idea and use AI to give the user details about how to market it, who their target customers were, what to write on their landing page, etc. All for a measly $27 per use. I quickly built it and launched on March 3rd 2024. I posted about it on Indie Hackers, Reddit and Hacker News. I was so excited about the prospect of earning my first internet $! Surely everyone wanted to use my product! Nope... all I got was crickets. I was quickly brought back down to earth. That was until 5 days later. I looked at my phone and had a new Stripe notification! Cha-ching! My first internet $. What a feeling! That was goal number 1 complete. It would be another 6 days before I would get my second sale... and then another 15 days to get my third. It was an emotional rollercoaster. I went from feeling like quitting the 9-5 was actually possible to thinking that maybe the ups and downs aren't worth it. On one hand I had made my first internet dollar so I should have been ecstatic, and don't get me wrong, I was, but I wanted more. More validation that I could do this long term. By May I was starting to give up on the product. I had learned so much in the past few months about marketing, SEO, building an audience, etc.,
and I wanted to build something that I thought could have more success, so I focused on one critical thing that I had learned about. What was it? Building a product that had SEO potential. A product that I knew hundreds of people were looking for. See, this was my thinking: if I could find a keyword that people were searching for on Google hundreds or thousands of times every month, and it was easy to rank high on search engines for it, then I would go all in (in SEO land, this roughly equates to a keyword with a low Keyword Difficulty and a search volume of 500 or more). I began researching and found that the keyword "micro saas ideas" was being searched for around 600 times each month. Micro SaaS was something that really interested me. It was perfect for solopreneurs. Small software products that 1 person could build. What's not to like if you're in the game of software and solopreneurship? Researching keywords like this became like a game for me. I was hooked. I was doing it every day, finding gems that were being searched for hundreds and thousands of times every month that still had potential. That's when I came up with my next product idea. I decided to create a database of Micro SaaS Ideas, all with this sort of SEO potential. See, if you can build a product that you know people are looking for, then that's all the validation you need. So I put this theory to the test. I created a database of Micro SaaS Ideas with SEO Potential and launched it in June 2024. This time it was different. I made $700 in the first week of launching. A large contrast to my previous failed attempt at becoming the world's greatest solopreneur. Since launch I have grown the product to $3K and I couldn't be happier. I know what you're saying, $3K isn't a lot. But it's validation. It's validation that I can earn $ online. Validation that I can grow a business and it gives me hope that one day I'll be able to quit that 9-5 grind. My plan is to keep growing the business. I expect there to be a few challenges up ahead but I'll tackle them as I go and learn from the failures and successes. I have a newsletter where I share Micro SaaS Ideas with SEO potential every week, which I'll leave below in the first comment. Feel free to come along for the ride. If not, I hope this post brings you some value. If you're thinking about starting as a solopreneur, stop thinking and start doing; you won't regret it.
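The author doesn't share his exact research tooling, but the filtering step he describes can be approximated in a few lines. The snippet below is a hypothetical sketch: the CSV file, column names, and thresholds are assumptions, standing in for whatever export a keyword tool provides and whatever cut-offs you choose.

```python
# Hypothetical sketch of the keyword-research filter described above: keep
# "gems" with meaningful monthly search volume but low ranking difficulty.
import pandas as pd

# e.g. an export from a keyword tool with columns: keyword, volume, difficulty
df = pd.read_csv("keywords.csv")

gems = df[(df["volume"] >= 500) & (df["difficulty"] <= 20)]
gems = gems.sort_values("volume", ascending=False)

print(gems.head(20).to_string(index=False))
```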

Solo Entrepreneurs, This One’s for You! After Studying 15+ AI Directories, I’m Building a New Hub for AI, SaaS, and Tools (but the concept is unique)—Submit Yours for FREE 🚀 (Big Companies, Please Stay Away)
reddit
LLM Vibe Score0
Human Vibe Score0
foundertanmayThis week

Solo Entrepreneurs, This One’s for You! After Studying 15+ AI Directories, I’m Building a New Hub for AI, SaaS, and Tools (but the concept is unique)—Submit Yours for FREE 🚀 (Big Companies, Please Stay Away)

I’ve been in your shoes—tight budgets, limited resources, and a constant search for marketing solutions that actually work. Lately, I’ve been checking out more than 15 AI directories here on Reddit, and honestly, they all seem to have the same issues. They’re cluttered, confusing, and often filled with sponsored listings that don’t really help anyone. This got me thinking: if these tools aren’t helping users, how can any of our tools succeed? After a lot of thought (and some serious brainstorming), I’ve come up with an idea that I think could be a game-changer. This isn’t just another directory. I’m aiming to build something that’s genuinely useful for solo entrepreneurs and regular users alike. My goal is to create a platform that people actually want to use, because when that happens, your tools get natural, organic exposure. I’m also planning to integrate AI into the platform to make it even more powerful. I can’t spill all the details just yet. If you want to get in early, I’m offering to add your tools to the platform for free, especially if you’re a solo entrepreneur. I’m still working out the details, but I’m aiming to launch within the next 1-2 months. Here’s how you can get involved: comment below with the name of your SaaS, AI, or tool, along with a short description of why it’s helpful and why it should be included. I haven’t finalized the domain yet, but for now, I’m planning to host it on my subdomain: toolkit dot unwiring dot tech

I’ve built a gaming recommendation and exploration platform called Which Game Next
reddit
LLM Vibe Score0
Human Vibe Score0.714
kasperooThis week

I’ve built a gaming recommendation and exploration platform called Which Game Next

Hello there! Me and a few of my best friends are software engineers, and we’ve been working part-time on developing a side project for the past 12 months. It’s called www.whichgamenext.com, and we’ve recently launched into open beta for everyone to check out. Your feedback would be invaluable to us! Our aim has been to build a gaming recommendation engine, alongside providing market oversight for where you can legally and officially purchase or obtain modern games from multiple stores and/or subscriptions. It’s often difficult to figure out what you have access to if you only have a single specific subscription, like Game Pass PC, or if you’re only interested in games on GOG/Nintendo (what a mix!). We started by identifying the available digital stores and subscriptions and slowly compiling our database using multiple automated services to gather data on these games. Think JustWatch, but for games! One major service we’ve partnered with is IGDB, which has been supplying us with JSON data dumps that served as the initial seed for our game data. A massive thank you to them for their continued support! With the data in place, we’ve been focusing on exploring new features. So far, this has included private and public user-generated lists, personal backlog tracking, and the ability to like or dislike games. We’re now improving our recommendation engine, tackling the complexities that come with it, and having a lot of fun along the way. We’re utilising modern AI strategies and solving fascinating problems related to large-scale data aggregation. We truly can’t wait to share this fantastic work! In addition to this, you can soon expect curated collections, articles about games, and supporting links to help you make informed, unbiased purchasing decisions. Your shared data will drive the recommendations. But it doesn’t stop there—we have plenty of other features on our radar, such as importing games from your favourite stores, syncing your gameplay time, surfacing data like “How Long to Beat,” and creating new and exciting ways to interact with this growing community! This is a passion project created by a group of gamers who want to spend their time and money wisely, without purchasing biases. Since it’s a side project, we mostly work on it at night, but we’re excited to grow the community, share our vision, and, who knows, maybe one day make it our full-time job! Let’s dive into the technical details: • Monorepo architecture: This speeds up development by sharing libraries, living style guides, configs, etc. Nx.js has been brilliant, enabling us to create a dependency graph of changes and only build/deploy what’s modified in a PR. • AWS: We’re using the free tier (with a few exceptions where we pay for smaller services). Achieving self-sufficiency is critical for us. Additionally, we applied to the AWS Startup Foundation programme and received $1,000 in AWS credits, which has been incredibly helpful! • Infrastructure: Fully deployed as code with Terraform. • Backends: Built using Express and Nest.js, split into around 40 projects and counting! Each project plays a unique role in gathering and syncing game data. • Scalability: Designed from the ground up, utilising AWS Lambdas with auto-scaling and load balancing. • Databases: We use Postgres with RDS and DynamoDB for storing various data. • Frontend stack: Built with React, Next.js, Tailwind, Zustand, TanStack Query, Jest, and Storybook. • CI/CD: Managed with GitHub Actions and Amplify hooks for deploying the frontends. 
• Admin portal: We’ve built a bespoke CMS to control the main website. It synchronises with external services, tracks game data changes, and allows us to selectively apply ‘patches’ from sites like IGDB. The system also includes data override and rollback capabilities, ensuring we maintain control over game data. • Automation: Partially automated, so manual intervention is rarely needed. • Scraping tools: Fully integrated into the admin portal with log trail capabilities. • Cloudflare: Used for on-the-fly image transformations; we’re considering moving to it full-time as our CDN for free WebP conversions. • Authentication: Handled by Cognito, with a custom frontend built from scratch. Key learnings so far: • AWS cold starts: Not ideal! While the platform is still new, we ping endpoints to keep them responsive. This won’t be an issue once traffic increases. • Lambda memory matters: We learned the hard way that low-memory configurations can delay responses by 2-3 seconds. • DynamoDB partition keys: If not designed correctly from the start, you might have to start over (yes, we’ve been there!). • GitHub Actions: Setting up node_modules cache reuse takes time, but it’s worth it—don’t give up! We don’t know where this project will take us yet, but it’s been a fantastic journey so far. We’ve learned a lot, explored technologies we don’t typically use in our day jobs, and built something we’re genuinely passionate about. Your feedback would mean the world to us. What do you think of what we’ve done so far? What would you like to see added? Is this a service you’d use? Do you see the value in it as we do? Thanks for reading, and we hope to see you in the comments! (or our newly created /r/whichgamenext)
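As a footnote to the cold-start learning above: the team mentions pinging endpoints to keep Lambdas responsive. A minimal keep-warm pinger might look like the sketch below; the endpoint URLs are hypothetical, and in practice this would run from a cron job or an EventBridge-scheduled Lambda rather than a long-lived loop.

```python
# Minimal scheduled "keep-warm" pinger; endpoint list is a placeholder.
import time
import requests

ENDPOINTS = [
    "https://api.example.com/health",      # hypothetical API Gateway routes
    "https://api.example.com/games/ping",
]

def warm_once() -> None:
    for url in ENDPOINTS:
        try:
            r = requests.get(url, timeout=5)
            print(f"{url} -> {r.status_code} in {r.elapsed.total_seconds():.2f}s")
        except requests.RequestException as exc:
            print(f"{url} -> failed: {exc}")

if __name__ == "__main__":
    while True:        # pinging every few minutes keeps most Lambdas from going cold
        warm_once()
        time.sleep(300)
```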

I built an AI social monitoring that looks for relevant posts, not just keywords
reddit
LLM Vibe Score0
Human Vibe Score1
Chunky_CheezeThis week

I built an AI social monitoring that looks for relevant posts, not just keywords

Hey everyone! I've been working on a side project that I'm excited to share with you all—it's called BillyBuzz What is BillyBuzz? BillyBuzz is an AI-powered social monitoring tool that helps businesses spot and analyze relevant conversations on social media platforms, starting with Reddit. It surfaces the most promising leads directly to your Slack channels, email, or Discord, so you don't have to spend hours scrolling through threads. Why I Built It I was spending a ton of time searching for relevant posts in niche subreddits for another product I was working to get off the ground. It was not only time-consuming but also distracting (you know how easy it is to fall into a Reddit rabbit hole). I couldn't find any existing tool that did more than basic keyword searches—which wasn't enough, especially if your brand name has multiple meanings (like "Apple"). So, I decided to build BillyBuzz. It uses AI to understand your business, products, target audience, and value proposition, alongside specific keywords you might want to include. This way, it finds posts where you can genuinely contribute by introducing your product. I used BillyBuzz for a previous product launch and managed to grow it to over $80k/month in volume within about 3 months, purely through Reddit engagement. How It Works Add Information About Your Business: Input details about your business and products. Select Subreddits to Monitor: Choose the subreddits relevant to your niche. Receive Timely Alerts: Get notified via Slack, email, or Discord when relevant posts are identified. Features AI-Powered Relevancy Scoring: Goes beyond keywords by understanding the context to identify truly relevant opportunities. Subreddit Tracking: Monitor specific subreddits with AI-recommended keywords tailored to your company's needs. Real-Time Alerts: Checks for new relevant conversations every 15 minutes, so you can engage at the perfect time. Automated Categorization (Coming Soon): The AI will categorize conversations into topics like competitors, customer complaints, and more. Who It's For BillyBuzz is designed for startup founders, growth marketers, and small business owners who are tech-savvy and focused on scaling their operations. If you're looking to save time and engage more effectively with your target audience on social media, this might be up your alley. Looking for Feedback I'm sharing this here because I'd love to get your thoughts, feedback, or any suggestions you might have. If you're interested in checking it out, you can find more info here: https://billybuzz.com. Feel free to ask me anything or share your experiences with similar challenges!
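BillyBuzz's internals aren't public, but the core idea it describes, scoring a post's relevance to a business rather than matching keywords, can be sketched with a single LLM call. In the snippet below the business description, prompt wording, model name, and alert threshold are all assumptions made for illustration.

```python
# Small sketch of AI relevancy scoring (not BillyBuzz's actual code): ask an LLM
# whether a Reddit post is a genuine opportunity for a given business.
import json
from openai import OpenAI

client = OpenAI()

BUSINESS = (
    "We sell an AI social monitoring tool that alerts startups when relevant "
    "conversations appear on Reddit."
)

def score_post(title: str, body: str) -> dict:
    prompt = (
        f"Business: {BUSINESS}\n\n"
        f"Reddit post title: {title}\nReddit post body: {body}\n\n"
        "Return JSON with fields 'relevance' (0-100) and 'reason' (one sentence): "
        "how relevant is this post as a lead for the business?"
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        response_format={"type": "json_object"},
        messages=[{"role": "user", "content": prompt}],
    )
    return json.loads(resp.choices[0].message.content)

result = score_post(
    "How do you find customer conversations without doomscrolling Reddit?",
    "I spend hours a week searching subreddits for people asking about tools like mine.",
)
if result["relevance"] >= 70:  # alert threshold is arbitrary
    print("Alert:", result)
```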

I recreated a voice AI that 2x’d booked calls in 30 days for a business
reddit
LLM Vibe Score0
Human Vibe Score1
cowanscorpThis week

I recreated a voice AI that 2x’d booked calls in 30 days for a business

I’ve been fascinated by AI and specifically how different businesses have leveraged it to eliminate time-consuming tasks. I recently came across a case study where a voice agent helped a business double their booked calls and conversions in 30 days and wanted to try and recreate something similar. I’ve added the case study below along with a number to the demo voice agent I created to see if this is something people would really be interested in. This tech is improving really fast and I’m looking to dive deeper into this space. Case Study A family-owned HVAC company was having challenges managing the volume of customer calls, including after-hours and weekend calls, leading to missed opportunities and unmanaged leads. Building a call support team would have proved more expensive than they’d like. Solution With some help, the company implemented an AI system to autonomously handle calls, collect customer needs, and alert service technicians via SMS, with capabilities for live call transfers. Impact Within the first week, the company saw a 20% increase in bookings and conversions. The system's efficiency in capturing leads and managing tasks enabled the staff to handle more leads and outsource overflow. Details The AI integration included custom features like a Service Titan integration, live call transfers, SMS/email alerts, calendar and CRM integration, and Zapier automation. Results The company doubled its booked calls and conversions in 30 days through these AI call agents. With the average service visit in the U.S. being around $250, and the average unit install being around $4500, this quickly led to increased revenue as well as time savings and reduced churn. Here’s the number to the demo agent I created: +1 (714) 475-7285 I’d love to hear some honest thoughts on it and what industry you think could benefit the most from something like this.
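The post doesn't share implementation details, so here is a hypothetical sketch of just one step it describes: receiving the details a voice agent collected and texting the on-call technician. It assumes a Flask webhook and Twilio's Python client; the phone numbers, route, and field names are placeholders.

```python
# Hypothetical sketch: webhook that relays a voice agent's call summary to a
# technician via SMS. Numbers and field names are placeholders.
import os
from flask import Flask, request, jsonify
from twilio.rest import Client

app = Flask(__name__)
twilio = Client(os.environ["TWILIO_ACCOUNT_SID"], os.environ["TWILIO_AUTH_TOKEN"])

TECHNICIAN_PHONE = "+15550000000"   # placeholder
FROM_PHONE = "+15551111111"         # placeholder Twilio number

@app.post("/call-summary")
def call_summary():
    data = request.get_json(force=True)
    message = (
        f"New HVAC lead: {data.get('caller_name')} at {data.get('callback_number')}. "
        f"Issue: {data.get('issue')}. Urgency: {data.get('urgency', 'normal')}."
    )
    twilio.messages.create(body=message, from_=FROM_PHONE, to=TECHNICIAN_PHONE)
    return jsonify({"status": "technician notified"})

if __name__ == "__main__":
    app.run(port=8000)
```

In a full system the live-transfer and CRM steps mentioned above would hang off this same webhook, but the SMS relay is the core of the "alert technicians" feature.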

I recreated an AI Phone Agent that saved $20,000 in lost revenue in 30 days for a business
reddit
LLM Vibe Score0
Human Vibe Score1
Mammoth_Sherbet7689This week

I recreated an AI Phone Agent that saved $20,000 in lost revenue in 30 days for a business

I've been intrigued by AI and its ability to help businesses streamline time-consuming tasks. Recently, I discovered a case study where a voice agent was able to earn a business $20,000 in booked calls in a month. Below, I've shared the case study and a demo number for a voice agent I developed. This technology is advancing rapidly, and I want to explore its potential further. Case Study A family-owned HVAC company struggled with managing a high volume of customer calls, including after-hours and weekend inquiries, resulting in missed opportunities and unmanaged leads. Hiring a dedicated call support team was not cost-effective. Solution The company implemented an AI system to handle calls autonomously, gather customer information, and notify service technicians via SMS, with options for live call transfers. Details The AI integration featured custom capabilities such as Service Titan integration, live call transfers, SMS/email alerts, calendar and CRM integration, and Zapier automation. Results In the first week, the company experienced a 20% increase in bookings and conversions. The system efficiently captured leads and managed tasks, enabling staff to handle more inquiries and outsource overflow. Within 30 days, the company saved $20,000 in lost revenue due to the elimination of calls that went to voicemail, or lost leads. The voice agent's ability to answer calls 24/7 led to significant revenue growth, time savings, and reduced churn. Here's the demo number for the voice agent I created: +1 (651) 372 2045 I believe this tech has strong use cases in a variety of industries, from home service, to dental clinics, to wedding photographers. This article studied the effect of missed calls in different businesses, if you're interested in learning more. I'd love to hear your thoughts and industries you think this could be the most beneficial for. Thank you!

I recreated an AI phone calling agent that increased booked calls by 30% for a plumbing business in 30 days
reddit
LLM Vibe Score0
Human Vibe Score1
Will_feverThis week

I recreated an AI phone calling agent that increased booked calls by 30% for a plumbing business in 30 days

AI has always intrigued me, especially when it comes to automating repetitive tasks and streamlining business operations. Recently, I found a compelling case study about a voice agent that significantly enhanced customer service and lead capture for a plumbing company. Motivated by the potential of this technology, I decided to build a similar system to see how it could be adapted for other industries. I’ve added the case study below along with a number to the demo voice agent I created to see if this is something people would really be interested in. AI technology is advancing rapidly, and I’m excited to dive deeper into this space. Case Study A family-owned plumbing business was facing challenges managing a high volume of customer calls. They were missing potential leads, particularly during after-hours and weekends, which meant lost revenue opportunities. Hiring a dedicated call support team was considered but deemed too expensive and hard to scale. Solution To solve these issues, the company deployed an AI-powered voice agent capable of handling calls autonomously. The system collected essential customer information, identified service needs, and sent real-time alerts to service technicians via SMS. It also had the ability to transfer calls to human agents if necessary, ensuring a seamless experience for customers. Impact The AI voice agent quickly proved its worth by streamlining call management and improving response times. With the AI handling routine inquiries and initial call filtering, the plumbing business saw a noticeable improvement in how quickly they could respond to customer needs. Details The AI-powered voice agent included several advanced features designed to optimize customer service: Answer Calls Anytime: Ensured every call received a friendly and professional response, regardless of the time of day. Spot Emergencies Fast: Quickly identified high-priority issues that required urgent attention. Collect Important Info: Accurately recorded critical customer details to facilitate seamless follow-ups and service scheduling. Send Alerts Right Away: Immediately notified service technicians about emergencies, enabling faster response times. Live Transfers: Live call transfer options when human assistance was needed. Results The AI-powered voice agent delivered measurable improvements across key performance metrics: 100% Call Answer Rate: No missed calls ensured that every customer inquiry was addressed promptly. 5-minute Emergency Response Time: The average response time for urgent calls was reduced significantly. 30% Increase in Lead Capture: The business saw more qualified leads, improving their chances of conversion. 25% Improvement in Resource Efficiency: Better allocation of resources allowed the team to focus on high-priority tasks. By implementing the AI-powered voice agent, the plumbing business enhanced its ability to capture more leads and provide better service to its customers. The improved call handling efficiency helped reduce missed opportunities and boosted overall customer satisfaction. Here’s the number to the demo agent I created: +1 (210) 405-0982 I’d love to hear some honest thoughts on it and which industries you think could benefit the most from this technology.

Disorganized: The note taking app for busy people (no AI inside)
reddit
LLM Vibe Score0
Human Vibe Score0
DisorganizedAppThis week

Disorganized: The note taking app for busy people (no AI inside)

https://preview.redd.it/27qoz7ihlnpe1.png?width=1774&format=png&auto=webp&s=1658d7a4c619df46cd76c5ff639b6c6c7b65fc50 About one year ago I had enough and set out to create my own note-taking app, and I have been working on it in my spare time since summer. I had two main goals when creating Disorganized: - Less friction. If I'm walking around and a thought pops up in my head, there should be zero friction to writing it down. That's why Disorganized doesn't ask you to write a title, sort it into the correct folder, etc. You write exactly your thoughts and nothing else. - A better solution than templates. I wanted one app that I could use to track my workouts, my recipes, and one-off notes. Other apps accomplish this with templates, but I find templates too rigid - I don't want to create a "recipe" template because a "recipe" is not always the same thing. It's usually a table of ingredients and some instructions in text, but other times it's multiple tables of ingredients, or something else entirely. Templates are too rigid. In Disorganized, you "clone" notes to create a new note with the same structure. This way, you can reuse previous setups, but you're completely free to evolve your "template" as you go. Please try it out and tell me what you think! iOS, three months premium: https://apps.apple.com/redeem/?ctx=offercodes&id=6738280174&code=THREEMONTHS Android: https://play.google.com/store/apps/details?id=com.disorganized.disorganized&pli=1 Use code "THREEMONTHS" at checkout for three months. Web version: https://app.getdisorganized.com/

[P] Building a Reinforcement Learning Agent to play The Legend of Zelda
reddit
LLM Vibe Score0
Human Vibe Score1
DarkAutumnThis week

[P] Building a Reinforcement Learning Agent to play The Legend of Zelda

A year ago I started trying to use PPO to play the original Legend of Zelda, and I was able to train a model to beat the first boss after a few months of work. I wanted to share the project just for show and tell. I'd love to hear feedback and suggestions, as this is just a hobby project; I don't do this for a living. The code for that lives in the original-design branch of my Triforce repo. I'm currently tinkering with new designs, so the main branch is much less stable. Here's a video of the agent beating the first dungeon, which was trained with 5,000,000+ steps. At 38 seconds, you can see it learned that it's invulnerable at the screen edge, and it exploits that to avoid damage from a projectile. At 53 seconds it steps up to avoid damage from an unblockable projectile, even though it takes a -0.06 penalty for moving the wrong way (taking damage would be a larger penalty). At 55 seconds it walks towards the rock projectile to block it. And so on; lots of little things the model does are easy to miss if you don't know the game inside and out. As a TLDR, here's an early version of my new (single) model. This doesn't make it quite as far, but if you watch closely its combat is already far better, and it's only trained on 320,000 steps (~6% of the steps the first model was trained on). This is pretty far along from my very first model. Original Design I got the original project working using stable-baselines' PPO and default neural network (Shared NatureCNN, I believe). SB was great to get started with but ultimately stifling. In the new version of the project I've implemented PPO from scratch in torch with my own simple neural network similar to stable-baselines' default. I'm playing with all kinds of changes and designs now that I have more flexibility and control. Here is my rough original design: Overall Strategy My first pass through this project was basically "imagine playing Zelda with your older sibling telling you where to go and what to do". I give the model an objective vector which points to where I want it to go on the screen (as the bird flies; the agent still had to learn pathfinding to avoid damage and navigate around the map). This is either a pointer at the nearest enemy I want it to kill or an NSEW vector if it's supposed to move to the next room. Due to a few limitations with stable-baselines (especially around action masking), I ended up training unique models for traversing the overworld vs the dungeon (since they have entirely different tilesets). I also trained a different model for when we have sword beams vs not. In the video above you can see which model is being used onscreen. In my current project I've removed this objective vector as it felt too much like cheating. Instead I give it a one-hot encoded objective (move north to the next room, pick up items, kill enemies, etc). So far it's working quite well without that crutch. The new project also does a much better job at combat even without multiple models to handle beams vs not. Observation/Action Space Image - The standard neural network had a really tough time being fed the entire screen. No amount of training seemed to help. I solved this by creating a viewport around Link that keeps him centered. This REALLY helped the model learn. I also had absolutely zero success with stacking frames to give Link a way to see enemy/projectile movement. The model simply never trained with stable-baselines when I implemented frame stacking, and I never figured out why.
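To make the viewport idea concrete, here is a minimal sketch of cropping the observation around the player. The frame shape, channel layout, and how Link's pixel position is obtained are assumptions for illustration, not the actual code in the Triforce repo:

    import numpy as np

    def crop_viewport(frame: np.ndarray, link_xy: tuple[int, int], size: int = 56) -> np.ndarray:
        """Return a size x size crop centered on Link, zero-padded at screen edges."""
        half = size // 2
        # Pad the frame so crops near the border stay in bounds.
        padded = np.pad(frame, ((half, half), (half, half), (0, 0)), mode="constant")
        x, y = link_xy
        # After padding, the pixel that was at (y, x) now sits at (y + half, x + half),
        # so slicing [y:y + size, x:x + size] is centered on Link.
        return padded[y:y + size, x:x + size]

Keeping the agent centered means the network never has to learn translation invariance across the whole screen, which is presumably a big part of why the smaller input trains so much faster.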
I just added frame stacking to my current neural network and it seems to be working... though my early experiments show that giving it 3 frames (skipping two in between, so frames curr, curr-3, curr-6) doesn't really give us that much better performance. It might if I took away some of the vectors. We'll see. Vectors - Since the model cannot see beyond its little viewport, I gave the model a vector to the closest item, enemy, and projectile onscreen. This made it so the model can shoot enemies across the room outside of its viewport. My new model gives it multiple enemies/items/projectiles, and I plan to try an attention mechanism as part of the network to see if I can just feed it all of that data. Information - It also gets a couple of one-off datapoints like whether it currently has sword beams. The new model also gives it a "source" room (to help better understand dungeons where we have to backtrack), and a one-hot encoded objective. Action Space My original project just has a few actions: 4 for moving in the cardinal directions and 4 for attacking in each direction (I also added bombs but never spent any time training with them). I had an idea to use masking to help speed up training, i.e., if Link bumps into a wall, don't let him move in that direction again until he moves elsewhere, as the model would often spend an entire memory buffer running headlong straight into a wall before an update... better to do it once and get a huge negative penalty, which is essentially the same result but faster. Unfortunately SB made it really annoying architecturally to pass that info down to the policy layer. I could have hacked it together, but eventually I just reimplemented PPO and my own neural network so I could properly mask actions in the new version. For example, when we start training a fresh model, it cannot attack when there aren't enemies on screen, and I can disallow it from leaving certain areas. The new model actually splits swinging the sword at short range vs firing sword beams into two different actions, though I haven't had a chance to fully train with the split yet. Frameskip/Cooldowns - In the game I don't use a fixed frame skip for actions. Instead I use the internal RAM state of the game to know when Link is animation-locked or not, and only allow the agent to take actions when it's actually possible to give meaningful input to the game. This greatly sped up training. We also force movement to be between tiles on the game map. This means that when the agent decides to move, it loses control for longer than a player would... a player can make more split-second decisions. This made it easier to implement movement rewards, though, and might be something to clean up in the future. Other interesting details Pathfinding - To facilitate rewards, the original version of this project used A* to pathfind from Link to what he should be doing. Here's a video of it in action. This information wasn't given to the model directly; instead the agent would only be given the rewards if it exactly followed that path or the transposed version of it. It would also pathfind around enemies and not walk through them. This was a nightmare though. The corner cases were significant, and pushing Link towards enemies but not into them was really tricky. The new version just uses a wavefront algorithm: I calculate a wave outwards from the tiles we want to get to, then make sure we are following the gradient. Also, calculating the A* around enemies every frame (even with caching) was super slow.
Wavefront was faster, especially because I give the new model no special rewards for walking around enemies... faster to compute, and it has to learn from taking damage or not. Either way, both the old and new models successfully learned how to pathfind around danger and obstacles, with or without the cheaty objective vector. Rewards - I programmed very dense rewards in both the old and new models. At basically every step, the model is getting rewarded or punished for something. I actually have some ideas I can't wait to try out to make the rewards more sparse. Or maybe we start with dense rewards for the first training, then fine-tune the model with sparser rewards. We'll see. Predicting the Future - Speaking of rewards: one interesting wrinkle is that the agent can do a lot of things that will eventually deal damage but not on that frame. For example, when Link sets a bomb it takes several seconds before it explodes, killing things. This can be a massive reward or penalty, since he spent an extremely valuable resource but may have done massive damage. PPO and other RL algorithms propagate rewards backwards, of course, but that spike in reward could land on a weird frame where we took damage or moved in the wrong direction. I probably could have just not solved that problem and let it shake out over time, but instead I used the fact that we are in an emulator to just see what the outcome of every decision is. When planting a bomb, shooting sword beams, etc, we let the game run forward until impact, then rewind time and reward the agent appropriately, continuing on from when we first paused. This greatly speeds up training, even if it's expensive to do this savestate, play forward, restore state. Neural Networks - When I first started this project (knowing very little about ML and RL), I thought most of my time would be spent tuning the shape of the neural network that we are using. In reality, the default provided by stable-baselines and my eventual reimplementation has been enough to make massive progress. Now that I have a solid codebase though, I really want to revisit this. I'd like to see if trying CoordConvs and similar networks might make the viewport unnecessary. Less interesting details/thoughts Hyperparameters - Setting the entropy coefficient way lower helped a TON in training stable models. My new PPO implementation is way less stable than stable-baselines (ha, imagine that), but still converges most of the time. Infinite Rewards - As with all reinforcement learning, if you give the model some way to get infinite rewards, it will do just that and nothing else. I spent days, or maybe weeks, tweaking reward functions to just get it to train and not find a spot on the wall it could hump for infinite rewards. Even just neutral rewards, like +0.5 for moving forward and -0.5 for moving backwards, would often result in a model that just stepped left, then right, infinitely. There has to be a real reward or punishment (non-neutral) for forward progress. Debugging Rewards - In fact, building a rewards debugger was the only way I made progress in this project. If you are tackling something this big, do that very early. Stable-Retro is pretty great - Couldn't be happier with the clean design for implementing emulation for AI. Torch is Awesome - My early versions heavily used numpy and relied on stable-baselines, with its multiproc parallelization support. It worked great. Moving the project over to torch was night and day though.
It gave me so much more flexibility, instant multithreading for matrix operations. I have a pretty beefy computer and I'm almost at the same steps per second as 20 proc stable-retro/numpy. Future Ideas This has already gone on too long. I have some ideas for future projects, but maybe I'll just make them another post when I actually do them. Special Thanks A special thanks to Brad Flaugher for help with the early version of this, Fiskbit from the Zelda1 speedrunning community for help pulling apart the raw assembly to build this thing, and MatPoliquin for maintaining Stable-Retro. Happy to answer any questions, really I just love nerding out about this stuff.
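For readers curious what the wavefront reward shaping described above might look like, here is a minimal sketch: a BFS distance map over walkable tiles, with the movement reward taken from the change in distance. The grid encoding, tile coordinates, and reward scale are assumptions for illustration, not the Triforce repo's actual code:

    from collections import deque
    import numpy as np

    def wavefront(walkable: np.ndarray, goals: list[tuple[int, int]]) -> np.ndarray:
        """Distance-to-goal for every walkable tile; unreachable tiles stay at infinity."""
        dist = np.full(walkable.shape, np.inf)
        queue = deque(goals)
        for g in goals:
            dist[g] = 0.0
        while queue:
            y, x = queue.popleft()
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ny, nx = y + dy, x + dx
                if (0 <= ny < walkable.shape[0] and 0 <= nx < walkable.shape[1]
                        and walkable[ny, nx] and np.isinf(dist[ny, nx])):
                    dist[ny, nx] = dist[y, x] + 1
                    queue.append((ny, nx))
        return dist

    def movement_reward(dist: np.ndarray, old_tile: tuple[int, int],
                        new_tile: tuple[int, int], scale: float = 0.05) -> float:
        """Positive when the agent moves down the gradient (closer to the goal), negative otherwise."""
        return scale * float(dist[old_tile] - dist[new_tile])

The map only needs to be recomputed when the objective or the room changes, which is what makes it so much cheaper than running A* around moving enemies every frame.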

[D] Misuse of Deep Learning in Nature Journal’s Earthquake Aftershock Paper
reddit
LLM Vibe Score0
Human Vibe Score0.333
milaworldThis week

[D] Misuse of Deep Learning in Nature Journal’s Earthquake Aftershock Paper

Recently, I saw a post by Rajiv Shah, Chicago-based data-scientist, regarding an article published in Nature last year called Deep learning of aftershock patterns following large earthquakes, written by scientists at Harvard in collaboration with Google. Below is the article: Stand Up for Best Practices: Misuse of Deep Learning in Nature’s Earthquake Aftershock Paper The Dangers of Machine Learning Hype Practitioners of AI, machine learning, predictive modeling, and data science have grown enormously over the last few years. What was once a niche field defined by its blend of knowledge is becoming a rapidly growing profession. As the excitement around AI continues to grow, the new wave of ML augmentation, automation, and GUI tools will lead to even more growth in the number of people trying to build predictive models. But here’s the rub: While it becomes easier to use the tools of predictive modeling, predictive modeling knowledge is not yet a widespread commodity. Errors can be counterintuitive and subtle, and they can easily lead you to the wrong conclusions if you’re not careful. I’m a data scientist who works with dozens of expert data science teams for a living. In my day job, I see these teams striving to build high-quality models. The best teams work together to review their models to detect problems. There are many hard-to-detect-ways that lead to problematic models (say, by allowing target leakage into their training data). Identifying issues is not fun. This requires admitting that exciting results are “too good to be true” or that their methods were not the right approach. In other words, it’s less about the sexy data science hype that gets headlines and more about a rigorous scientific discipline. Bad Methods Create Bad Results Almost a year ago, I read an article in Nature that claimed unprecedented accuracy in predicting earthquake aftershocks by using deep learning. Reading the article, my internal radar became deeply suspicious of their results. Their methods simply didn’t carry many of the hallmarks of careful predicting modeling. I started to dig deeper. In the meantime, this article blew up and became widely recognized! It was even included in the release notes for Tensorflow as an example of what deep learning could do. However, in my digging, I found major flaws in the paper. Namely, data leakage which leads to unrealistic accuracy scores and a lack of attention to model selection (you don’t build a 6 layer neural network when a simpler model provides the same level of accuracy). To my earlier point: these are subtle, but incredibly basic predictive modeling errors that can invalidate the entire results of an experiment. Data scientists are trained to recognize and avoid these issues in their work. I assumed that this was simply overlooked by the author, so I contacted her and let her know so that she could improve her analysis. Although we had previously communicated, she did not respond to my email over concerns with the paper. Falling On Deaf Ears So, what was I to do? My coworkers told me to just tweet it and let it go, but I wanted to stand up for good modeling practices. I thought reason and best practices would prevail, so I started a 6-month process of writing up my results and shared them with Nature. Upon sharing my results, I received a note from Nature in January 2019 that despite serious concerns about data leakage and model selection that invalidate their experiment, they saw no need to correct the errors, because “Devries et al. 
are concerned primarily with using machine learning as [a] tool to extract insight into the natural world, and not with details of the algorithm design.” The authors provided a much harsher response. You can read the entire exchange on my github. It’s not enough to say that I was disappointed. This was a major paper (it’s Nature!) that bought into AI hype and published a paper despite it using flawed methods. Then, just this week, I ran across articles by Arnaud Mignan and Marco Broccardo on shortcomings that they found in the aftershocks article. Here are two more data scientists with expertise in earthquake analysis who also noticed flaws in the paper. I also have placed my analysis and reproducible code on github. Standing Up For Predictive Modeling Methods I want to make it clear: my goal is not to villainize the authors of the aftershocks paper. I don’t believe that they were malicious, and I think that they would argue their goal was to just show how machine learning could be applied to aftershocks. Devries is an accomplished earthquake scientist who wanted to use the latest methods for her field of study and found exciting results from it. But here’s the problem: their insights and results were based on fundamentally flawed methods. It’s not enough to say, “This isn’t a machine learning paper, it’s an earthquake paper.” If you use predictive modeling, then the quality of your results are determined by the quality of your modeling. Your work becomes data science work, and you are on the hook for your scientific rigor. There is a huge appetite for papers that use the latest technologies and approaches. It becomes very difficult to push back on these papers. But if we allow papers or projects with fundamental issues to advance, it hurts all of us. It undermines the field of predictive modeling. Please push back on bad data science. Report bad findings to papers. And if they don’t take action, go to twitter, post about it, share your results and make noise. This type of collective action worked to raise awareness of p-values and combat the epidemic of p-hacking. We need good machine learning practices if we want our field to continue to grow and maintain credibility. Link to Rajiv's Article Original Nature Publication (note: paywalled) GitHub repo contains an attempt to reproduce Nature's paper Confrontational correspondence with authors
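For anyone unfamiliar with the kind of flaw being described, here is a generic, simplified illustration of data leakage (not the specific leakage alleged in the aftershock paper): if rows that belong to the same underlying event can land on both sides of a train/test split, the test score stops measuring generalization. Grouping the split by event is one standard fix; the variable names and toy data below are placeholders:

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split, GroupShuffleSplit

    # Toy data: 200 "events", each contributing 50 correlated rows with its own signature.
    rng = np.random.default_rng(0)
    n_events, rows_per_event = 200, 50
    event_ids = np.repeat(np.arange(n_events), rows_per_event)
    event_signature = rng.normal(size=(n_events, 5)) * 3.0   # per-event offset in feature space
    event_label = rng.integers(0, 2, size=n_events)          # the label is a property of the event
    X = rng.normal(size=(n_events * rows_per_event, 5)) + event_signature[event_ids]
    y = event_label[event_ids]

    # Leaky: rows from the same event end up in both train and test, so the model can
    # memorize event signatures and the test score looks unrealistically good.
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    print(RandomForestClassifier().fit(X_tr, y_tr).score(X_te, y_te))

    # Leak-free: every event is entirely in train or entirely in test; the score drops to ~chance.
    train_idx, test_idx = next(
        GroupShuffleSplit(test_size=0.25, random_state=0).split(X, y, groups=event_ids))
    print(RandomForestClassifier().fit(X[train_idx], y[train_idx]).score(X[test_idx], y[test_idx]))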

[D] I don't really trust papers out of "Top Labs" anymore
reddit
LLM Vibe Score0
Human Vibe Score0.333
MrAcuriteThis week

[D] I don't really trust papers out of "Top Labs" anymore

I mean, I trust that the numbers they got are accurate and that they really did the work and got the results. I believe those. It's just that, take the recent "An Evolutionary Approach to Dynamic Introduction of Tasks in Large-scale Multitask Learning Systems" paper. It's 18 pages of talking through this pretty convoluted evolutionary and multitask learning algorithm, it's pretty interesting, solves a bunch of problems. But two notes. One, the big number they cite as the success metric is 99.43 on CIFAR-10, against a SotA of 99.40, so woop-de-fucking-doo in the grand scheme of things. Two, there's a chart towards the end of the paper that details how many TPU core-hours were used for just the training regimens that results in the final results. The sum total is 17,810 core-hours. Let's assume that for someone who doesn't work at Google, you'd have to use on-demand pricing of $3.22/hr. This means that these trained models cost $57,348. Strictly speaking, throwing enough compute at a general enough genetic algorithm will eventually produce arbitrarily good performance, so while you can absolutely read this paper and collect interesting ideas about how to use genetic algorithms to accomplish multitask learning by having each new task leverage learned weights from previous tasks by defining modifications to a subset of components of a pre-existing model, there's a meta-textual level on which this paper is just "Jeff Dean spent enough money to feed a family of four for half a decade to get a 0.03% improvement on CIFAR-10." OpenAI is far and away the worst offender here, but it seems like everyone's doing it. You throw a fuckton of compute and a light ganache of new ideas at an existing problem with existing data and existing benchmarks, and then if your numbers are infinitesimally higher than their numbers, you get to put a lil' sticker on your CV. Why should I trust that your ideas are even any good? I can't check them, I can't apply them to my own projects. Is this really what we're comfortable with as a community? A handful of corporations and the occasional university waving their dicks at everyone because they've got the compute to burn and we don't? There's a level at which I think there should be a new journal, exclusively for papers in which you can replicate their experimental results in under eight hours on a single consumer GPU.

[P] MIT Introduction to Data-Centric AI
reddit
LLM Vibe Score0
Human Vibe Score1
anishathalyeThis week

[P] MIT Introduction to Data-Centric AI

Announcing the first-ever course on Data-Centric AI. Learn how to train better ML models by improving the data. Course homepage | Lecture videos on YouTube | Lab Assignments The course covers: Data-Centric AI vs. Model-Centric AI Label Errors Dataset Creation and Curation Data-centric Evaluation of ML Models Class Imbalance, Outliers, and Distribution Shift Growing or Compressing Datasets Interpretability in Data-Centric ML Encoding Human Priors: Data Augmentation and Prompt Engineering Data Privacy and Security MIT, like most universities, has many courses on machine learning (6.036, 6.867, and many others). Those classes teach techniques to produce effective models for a given dataset, and the classes focus heavily on the mathematical details of models rather than practical applications. However, in real-world applications of ML, the dataset is not fixed, and focusing on improving the data often gives better results than improving the model. We’ve personally seen this time and time again in our applied ML work as well as our research. Data-Centric AI (DCAI) is an emerging science that studies techniques to improve datasets in a systematic/algorithmic way — given that this topic wasn’t covered in the standard curriculum, we (a group of PhD candidates and grads) thought that we should put together a new class! We taught this intensive 2-week course in January over MIT’s IAP term, and we’ve just published all the course material, including lecture videos, lecture notes, hands-on lab assignments, and lab solutions, in hopes that people outside the MIT community would find these resources useful. We’d be happy to answer any questions related to the class or DCAI in general, and we’d love to hear any feedback on how we can improve the course material. Introduction to Data-Centric AI is open-source opencourseware, so feel free to make improvements directly: https://github.com/dcai-course/dcai-course.
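As a small taste of what "data-centric" means in practice, here is a minimal sketch of one technique in the spirit of the label-errors topic above: ranking examples by out-of-sample self-confidence to find candidates for label review. This is a generic illustration on a stand-in dataset, not code from the course labs:

    import numpy as np
    from sklearn.datasets import load_digits
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_predict

    X, y = load_digits(return_X_y=True)

    # Out-of-sample class probabilities: each row is predicted by a model that never saw it.
    probs = cross_val_predict(LogisticRegression(max_iter=2000), X, y,
                              cv=5, method="predict_proba")

    # Rank examples by how little probability the model assigns to their given label;
    # the lowest-confidence examples are good candidates for manual label review.
    self_confidence = probs[np.arange(len(y)), y]
    suspects = np.argsort(self_confidence)[:20]
    print(suspects)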

[P] An elegant and strong PyTorch Trainer
reddit
LLM Vibe Score0
Human Vibe Score1
serend1p1ty-leeThis week

[P] An elegant and strong PyTorch Trainer

For lightweight use, pytorch-lightning is too heavy, and its source code is very difficult for beginners to read, at least for me. As we know, for a deep learning engineer, a powerful trainer is a sharp weapon. When reproducing SOTA papers, you don't have to write a lot of template code every time and can pay more attention to the model implementation itself. I have open-sourced some works (AAAI 21 SeqNet, ICCV 21 MAED, etc.) that earned more than 500 stars. After referring to some popular projects (detectron2, pytorch-image-models, and mmcv), and based on my personal development experience, I developed a SIMPLE enough, GENERIC enough, and STRONG enough PyTorch Trainer: core-pytorch-utils, also named CPU. CPU covers most details in the process of training a deep neural network, including: Auto logging to console and TensorBoard. Auto checkpointing. An argument parser which can load a YAML configuration file. Making ALL PyTorch LR schedulers support warmup. Support for distributed training. Support for Automatic Mixed Precision (AMP) training. I try to keep the project code as simple and readable as possible, so the code comments are very detailed and everyone can understand them. What's more, good documentation is also available: CPU document. For deep learning newcomers, you can learn how to: write a standard and clean training loop; use AMP to speed up your training; save checkpoints and resume from them; produce smoother, more readable logging; use the popular visualization library TensorBoard. For old hands, we can talk about whether the structure of CPU is elegant and reasonable. I have thought a lot about this framework, combining the advantages of several popular frameworks and discarding their shortcomings. Welcome to use it!
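Since "warmup for any scheduler" is one of the listed features, here is a rough sketch of how that idea can be expressed in plain PyTorch. This is a generic illustration of the concept, not core-pytorch-utils' actual API:

    import torch

    class WarmupWrapper:
        """Linearly ramp the learning rate for warmup_steps, then defer to the wrapped scheduler."""
        def __init__(self, optimizer, scheduler, warmup_steps: int):
            self.optimizer = optimizer
            self.scheduler = scheduler
            self.warmup_steps = warmup_steps
            self.step_num = 0
            self.base_lrs = [group["lr"] for group in optimizer.param_groups]

        def step(self):
            self.step_num += 1
            if self.step_num <= self.warmup_steps:
                scale = self.step_num / self.warmup_steps
                for group, base_lr in zip(self.optimizer.param_groups, self.base_lrs):
                    group["lr"] = base_lr * scale
            else:
                self.scheduler.step()

    model = torch.nn.Linear(10, 2)
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
    cosine = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=1_000)
    scheduler = WarmupWrapper(optimizer, cosine, warmup_steps=100)  # call scheduler.step() once per iteration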

[D] Here are 17 ways of making PyTorch training faster – what did I miss?
reddit
LLM Vibe Score0
Human Vibe Score1
lorenzkuhnThis week

[D] Here are 17 ways of making PyTorch training faster – what did I miss?

I've been collecting methods to accelerate training in PyTorch – here's what I've found so far. What did I miss? What did I get wrong? The methods – roughly sorted from largest to smallest expected speed-up – are: Consider using a different learning rate schedule. Use multiple workers and pinned memory in DataLoader. Max out the batch size. Use Automatic Mixed Precision (AMP). Consider using a different optimizer. Turn on cuDNN benchmarking. Beware of frequently transferring data between CPUs and GPUs. Use gradient/activation checkpointing. Use gradient accumulation. Use DistributedDataParallel for multi-GPU training. Set gradients to None rather than 0. Use .as_tensor() rather than .tensor(). Turn off debugging APIs if not needed. Use gradient clipping. Turn off bias before BatchNorm. Turn off gradient computation during validation. Use input and batch normalization. Consider using another learning rate schedule The learning rate (schedule) you choose has a large impact on the speed of convergence as well as the generalization performance of your model. Cyclical Learning Rates and the 1Cycle learning rate schedule are both methods introduced by Leslie N. Smith (here and here), and then popularised by fast.ai's Jeremy Howard and Sylvain Gugger (here and here). Essentially, the 1Cycle learning rate schedule looks something like this: https://preview.redd.it/sc37u5knmxa61.png?width=476&format=png&auto=webp&s=09b309b4dbd67eedb4ab5f86e03e0e83d7b072d1 Sylvain writes: [1cycle consists of] two steps of equal lengths, one going from a lower learning rate to a higher one, then going back to the minimum. The maximum should be the value picked with the Learning Rate Finder, and the lower one can be ten times lower. Then, the length of this cycle should be slightly less than the total number of epochs, and, in the last part of training, we should allow the learning rate to decrease more than the minimum, by several orders of magnitude. In the best case this schedule achieves a massive speed-up – what Smith calls Superconvergence – as compared to conventional learning rate schedules. Using the 1Cycle policy, for instance, he needs ~10x fewer training iterations of a ResNet-56 on ImageNet to match the performance of the original paper. The schedule seems to perform robustly well across common architectures and optimizers. PyTorch implements both of these methods as torch.optim.lr_scheduler.CyclicLR and torch.optim.lr_scheduler.OneCycleLR; see the documentation. One drawback of these schedulers is that they introduce a number of additional hyperparameters. This post and this repo offer a nice overview and implementation of how good hyper-parameters can be found, including the Learning Rate Finder mentioned above. Why does this work? It doesn't seem entirely clear, but one possible explanation might be that regularly increasing the learning rate helps to traverse saddle points in the loss landscape more quickly. Use multiple workers and pinned memory in DataLoader When using torch.utils.data.DataLoader, set num_workers > 0, rather than the default value of 0, and pin_memory=True, rather than the default value of False. Details of this are explained here. Szymon Migacz achieves a 2x speed-up for a single training epoch by using four workers and pinned memory. A rule of thumb that people use to choose the number of workers is to set it to four times the number of available GPUs, with both larger and smaller numbers of workers leading to a slowdown.
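As a concrete sketch of those DataLoader settings (the dataset here is a stand-in and a CUDA device is assumed):

    import torch
    from torch.utils.data import DataLoader, TensorDataset

    dataset = TensorDataset(torch.randn(10_000, 3, 32, 32), torch.randint(0, 10, (10_000,)))

    loader = DataLoader(
        dataset,
        batch_size=256,
        shuffle=True,
        num_workers=4,     # batches are prepared by background worker processes
        pin_memory=True,   # page-locked host memory makes host-to-GPU copies faster
    )

    for images, labels in loader:
        images = images.to("cuda", non_blocking=True)  # non_blocking pairs well with pinned memory
        labels = labels.to("cuda", non_blocking=True)
        break  # forward/backward pass would go here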
Note that increasing num_workers will also increase your CPU memory consumption. Max out the batch size This is a somewhat contentious point. Generally, however, it seems like using the largest batch size your GPU memory permits will accelerate your training (see NVIDIA's Szymon Migacz, for instance). Note that you will also have to adjust other hyperparameters, such as the learning rate, if you modify the batch size. A rule of thumb here is to double the learning rate as you double the batch size. OpenAI has a nice empirical paper on the number of convergence steps needed for different batch sizes. Daniel Huynh runs some experiments with different batch sizes (also using the 1Cycle policy discussed above) where he achieves a 4x speed-up by going from batch size 64 to 512. One of the downsides of using large batch sizes, however, is that they might lead to solutions that generalize worse than those trained with smaller batches. Use Automatic Mixed Precision (AMP) The release of PyTorch 1.6 brought a native implementation of Automatic Mixed Precision training to PyTorch. The main idea here is that certain operations can be run faster and without a loss of accuracy at half precision (FP16) rather than in the single precision (FP32) used elsewhere. AMP then automatically decides which operation should be executed in which format. This allows both for faster training and a smaller memory footprint. In the best case, the usage of AMP looks something like this:

    import torch

    # Create once at the beginning of training
    scaler = torch.cuda.amp.GradScaler()

    for data, label in data_iter:
        optimizer.zero_grad()
        # Cast operations to mixed precision
        with torch.cuda.amp.autocast():
            loss = model(data)
        # Scale the loss and call backward() to create scaled gradients
        scaler.scale(loss).backward()
        # Unscale gradients and call (or skip) optimizer.step()
        scaler.step(optimizer)
        # Update the scale for the next iteration
        scaler.update()

Benchmarking a number of common language and vision models on NVIDIA V100 GPUs, Huang and colleagues find that using AMP over regular FP32 training yields roughly 2x – but up to 5.5x – training speed-ups. Currently, only CUDA ops can be autocast in this way. See the documentation here for more details on this and other limitations. u/SVPERBlA points out that you can squeeze out some additional performance (~20%) from AMP on NVIDIA Tensor Core GPUs if you convert your tensors to the Channels Last memory format. Refer to this section in the NVIDIA docs for an explanation of the speedup and more about NCHW versus NHWC tensor formats. Consider using another optimizer AdamW is Adam with weight decay (rather than L2-regularization), which was popularized by fast.ai and is now available natively in PyTorch as torch.optim.AdamW. AdamW seems to consistently outperform Adam in terms of both the error achieved and the training time. See this excellent blog post on why using weight decay instead of L2-regularization makes a difference for Adam. Both Adam and AdamW work well with the 1Cycle policy described above. There are also a few not-yet-native optimizers that have received a lot of attention recently, most notably LARS (pip-installable implementation) and LAMB. NVIDIA's APEX implements fused versions of a number of common optimizers such as Adam. This implementation avoids a number of passes to and from GPU memory as compared to the PyTorch implementation of Adam, yielding speed-ups in the range of 5%.
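Putting two of the tips above together (AdamW plus the 1Cycle schedule), a minimal training-loop sketch might look like this; the model, data, and step counts are placeholders and a CUDA device is assumed:

    import torch

    model = torch.nn.Linear(128, 10).cuda()
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=1e-2)
    scheduler = torch.optim.lr_scheduler.OneCycleLR(
        optimizer, max_lr=1e-3, total_steps=10_000  # total_steps = epochs * batches per epoch
    )
    criterion = torch.nn.CrossEntropyLoss()

    for step in range(10_000):
        x = torch.randn(64, 128, device="cuda")
        y = torch.randint(0, 10, (64,), device="cuda")
        optimizer.zero_grad(set_to_none=True)
        loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()
        scheduler.step()  # OneCycleLR is stepped once per batch, not once per epoch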
Turn on cuDNN benchmarking If your model architecture remains fixed and your input size stays constant, setting torch.backends.cudnn.benchmark = True might be beneficial (docs). This enables the cuDNN autotuner, which will benchmark a number of different ways of computing convolutions in cuDNN and then use the fastest method from then on. For a rough reference on the type of speed-up you can expect from this, Szymon Migacz achieves a speed-up of 70% on a forward pass for a convolution and a 27% speed-up for a forward + backward pass of the same convolution. One caveat here is that this autotuning might become very slow if you max out the batch size as mentioned above. Beware of frequently transferring data between CPUs and GPUs Beware of frequently transferring tensors from a GPU to a CPU using tensor.cpu() and vice versa using tensor.cuda(), as these are relatively expensive. The same applies for .item() and .numpy() – use .detach() instead. If you are creating a new tensor, you can also directly assign it to your GPU using the keyword argument device=torch.device('cuda:0'). If you do need to transfer data, using .to(non_blocking=True) might be useful as long as you don't have any synchronization points after the transfer. If you really have to, you might want to give Santosh Gupta's SpeedTorch a try, although it doesn't seem entirely clear when this actually does/doesn't provide speed-ups. Use gradient/activation checkpointing Quoting directly from the documentation: Checkpointing works by trading compute for memory. Rather than storing all intermediate activations of the entire computation graph for computing backward, the checkpointed part does not save intermediate activations, and instead recomputes them in the backward pass. It can be applied on any part of a model. Specifically, in the forward pass, function will run in torch.no_grad() manner, i.e., not storing the intermediate activations. Instead, the forward pass saves the inputs tuple and the function parameter. In the backwards pass, the saved inputs and function are retrieved, and the forward pass is computed on function again, now tracking the intermediate activations, and then the gradients are calculated using these activation values. So while this might slightly increase your run time for a given batch size, you'll significantly reduce your memory footprint. This in turn will allow you to further increase the batch size you're using, allowing for better GPU utilization. While checkpointing is implemented natively as torch.utils.checkpoint (docs), it does seem to take some thought and effort to implement properly. Priya Goyal has a good tutorial demonstrating some of the key aspects of checkpointing. Use gradient accumulation Another approach to increasing the batch size is to accumulate gradients across multiple .backward() passes before calling optimizer.step(). Following a post by Hugging Face's Thomas Wolf, gradient accumulation can be implemented as follows:

    model.zero_grad()                                  # Reset gradients tensors
    for i, (inputs, labels) in enumerate(training_set):
        predictions = model(inputs)                    # Forward pass
        loss = loss_function(predictions, labels)      # Compute loss function
        loss = loss / accumulation_steps               # Normalize our loss (if averaged)
        loss.backward()                                # Backward pass
        if (i + 1) % accumulation_steps == 0:          # Wait for several backward steps
            optimizer.step()                           # Now we can do an optimizer step
            model.zero_grad()                          # Reset gradients tensors
            if (i + 1) % evaluation_steps == 0:        # Evaluate the model when we...
                evaluate_model()                       # ...have no gradients accumulated

This method was developed mainly to circumvent GPU memory limitations, and I'm not entirely clear on the trade-off of having additional .backward() loops. This discussion on the fastai forum seems to suggest that it can in fact accelerate training, so it's probably worth a try. Use DistributedDataParallel for multi-GPU training Methods to accelerate distributed training probably warrant their own post, but one simple one is to use torch.nn.DistributedDataParallel rather than torch.nn.DataParallel. By doing so, each GPU will be driven by a dedicated CPU core, avoiding the GIL issues of DataParallel. In general, I can strongly recommend reading the documentation on distributed training. Set gradients to None rather than 0 Use .zero_grad(set_to_none=True) rather than .zero_grad(). Doing so will let the memory allocator handle the gradients rather than actively setting them to 0. This will yield a modest speed-up, as they say in the documentation, so don't expect any miracles. Watch out, doing this is not side-effect free! Check the docs for the details on this. Use .as_tensor() rather than .tensor() torch.tensor() always copies data. If you have a numpy array that you want to convert, use torch.as_tensor() or torch.from_numpy() to avoid copying the data. Turn on debugging tools only when actually needed PyTorch offers a number of useful debugging tools like the autograd profiler, autograd.gradcheck, and autograd anomaly detection. Make sure to use them to better understand what's going on when needed, but also turn them off when you don't need them, as they will slow down your training. Use gradient clipping Originally used to avoid exploding gradients in RNNs, there is both some empirical evidence as well as some theoretical support that clipping gradients (roughly speaking: gradient = min(gradient, threshold)) accelerates convergence. Hugging Face's Transformer implementation is a really clean example of how to use gradient clipping as well as some of the other methods such as AMP mentioned in this post. In PyTorch this can be done using torch.nn.utils.clip_grad_norm_ (documentation). It's not entirely clear to me which models benefit how much from gradient clipping, but it seems to be robustly useful for RNN, Transformer-based, and ResNet architectures and a range of different optimizers. Turn off bias before BatchNorm This is a very simple one: turn off the bias of layers before BatchNormalization layers. For a 2-D convolutional layer, this can be done by setting the bias keyword to False: torch.nn.Conv2d(..., bias=False, ...). (Here's a reminder why this makes sense.) You will save some parameters; I would however expect the speed-up of this to be relatively small as compared to some of the other methods mentioned here. Turn off gradient computation during validation This one is straightforward: wrap your validation loop in torch.no_grad(). Use input and batch normalization You're probably already doing this but you might want to double-check: Are you normalizing your input? Are you using batch normalization? And here's a reminder of why you probably should. Bonus tip from the comments: Use JIT to fuse point-wise operations. If you have adjacent point-wise operations you can use PyTorch JIT to combine them into one FusionGroup, which can then be launched on a single kernel rather than multiple kernels as would have been done by default. You'll also save some memory reads and writes.
Szymon Migacz shows how you can use the @torch.jit.script decorator to fuse the operations in a GELU, for instance:

    @torch.jit.script
    def fused_gelu(x):
        return x * 0.5 * (1.0 + torch.erf(x / 1.41421))

In this case, fusing the operations leads to a 5x speed-up for the execution of fused_gelu as compared to the unfused version. See also this post for an example of how TorchScript can be used to accelerate an RNN. Hat tip to u/Patient_Atmosphere45 for the suggestion. Sources and additional resources Many of the tips listed above come from Szymon Migacz' talk and post in the PyTorch docs. PyTorch Lightning's William Falcon has two interesting posts with tips to speed up training. PyTorch Lightning does already take care of some of the points above by default. Thomas Wolf at Hugging Face has a number of interesting articles on accelerating deep learning – with a particular focus on language models. The same goes for Sylvain Gugger and Jeremy Howard: they have many interesting posts, in particular on learning rates and AdamW. Thanks to Ben Hahn, Kevin Klein and Robin Vaaler for their feedback on a draft of this post! I've also put all of the above into this blog post.

[D] Do you know any institutions/nonprofits/companies/governments/etc. trying to apply deep learning and other ML/AI/GenAI techniques to implement universal basic income (UBI) or something similar to UBI like universal basic services?
reddit
LLM Vibe Score0
Human Vibe Score1
HappyseditsThis week

[D] Do you know any institutions/nonprofits/companies/governments/etc. trying to apply deep learning and other ML/AI/GenAI techniques to implement universal basic income (UBI) or something similar to UBI like universal basic services?

Do you know any institutions/nonprofits/companies/governments/etc. trying to apply deep learning and other ML/AI/GenAI techniques to implement universal basic income (UBI) or something similar to UBI like universal basic services? Maybe for chatbot guidance on UBI program details, selecting candidates that need it the most, predicting poverty, UBI impacts, demographic and economic indicators to identify optimal UBI payment amounts and frequencies for different population segments, preventing fraud, etc. It can be just sketching future models in theory, or already implementing it in practice. I found this relevant paper: Can Data and Machine Learning Change the Future of Basic Income Models? A Bayesian Belief Networks Approach. https://www.mdpi.com/2306-5729/9/2/18 "Appeals to governments for implementing basic income are contemporary. The theoretical backgrounds of the basic income notion only prescribe transferring equal amounts to individuals irrespective of their specific attributes. However, the most recent basic income initiatives all around the world are attached to certain rules with regard to the attributes of the households. This approach is facing significant challenges to appropriately recognize vulnerable groups. A possible alternative for setting rules with regard to the welfare attributes of the households is to employ artificial intelligence algorithms that can process unprecedented amounts of data. Can integrating machine learning change the future of basic income by predicting households vulnerable to future poverty? In this paper, we utilize multidimensional and longitudinal welfare data comprising one and a half million individuals’ data and a Bayesian beliefs network approach to examine the feasibility of predicting households’ vulnerability to future poverty based on the existing households’ welfare attributes."

looking for ML aficionado in London for great chats and maybe a startup
reddit
LLM Vibe Score0
Human Vibe Score0.333
MLstartupLondonThis week

looking for ML aficionado in London for great chats and maybe a startup

TL;DR? Here's the gist:

Me: 3 startups under my belt. Started as a programmer, then trainer, then entrepreneur, now CTO & Board member for a leading customer insight company that is part of a large bank. Large system and infrastructure specialist. Extensive, practical experience in raising funds and successfully managing both startup and established businesses. Fascinated by the power of data. Can't imagine myself spending the rest of my life being a cog in the machine.

You: Machine learning specialist, programmer, analyst; understands how to navigate and crunch large datasets, from BI to predictive analytics. Interested in implementing applications from fraud detection to margin improvements through better clustering, regardless of industry. Fascinated by the power of data. Can't imagine yourself spending the rest of your life being a cog in the machine.

The startup: The core idea is to build platforms and systems around the progressively larger datasets held by companies of various sizes, helping them solve big issues - cost reduction, profitability and reducing risk. I’m an infrastructure and software specialist and have access to 1) systems, 2) datasets, and 3) extensive practical experience in certain industry segments, namely web-scale companies and tier 1 retailers. This project is in the very early planning stages. I'm looking forward to discussing the form it could take with like-minded individuals with complementary skill sets, namely predictive analytics & AI as it applies to machine learning on large datasets. Want more specific ideas? I have plenty of these, but I’m sure you do too, so let’s meet face to face and discuss them. Ultimately the goal is to crystallize on a specific concept, develop a minimum viable product together and get the company bootstrapped or angel-funded (something I also have plenty of experience with), all via a lean startup model.

My philosophy on startups: Startups built in one’s free time often fail because they drag on, ending up as little more than side projects you can’t quite get rid of (due to co-founder guilt, or perhaps the little money they bring in every month). The core idea for this project is based on lean principles: launch a minimum viable product as early as possible, get feedback, measure results (important!), and pivot if it’s not working. This helps tremendously in staying motivated, limits the dreaded paralyzing fear of failure, and, more importantly, keeps the time from inception to first client/funding to a minimum. If this sounds interesting, please message me and we can exchange contact details! Worst that can happen is we have a great chat!

Interview with Juergen Schmidhuber, renowned ‘Father Of Modern AI’, says his life’s work won't lead to dystopia.
reddit
LLM Vibe Score0
Human Vibe Score0.765
hardmaruThis week

Interview with Juergen Schmidhuber, renowned ‘Father Of Modern AI’, says his life’s work won't lead to dystopia.

Schmidhuber interview expressing his views on the future of AI and AGI. Original source. I think the interview is of interest to r/MachineLearning, and presents an alternate view, compared to other influential leaders in AI. Juergen Schmidhuber, Renowned 'Father Of Modern AI,' Says His Life’s Work Won't Lead To Dystopia May 23, 2023. Contributed by Hessie Jones. Amid the growing concern about the impact of more advanced artificial intelligence (AI) technologies on society, there are many in the technology community who fear the implications of the advancements in Generative AI if they go unchecked. Dr. Juergen Schmidhuber, a renowned scientist, artificial intelligence researcher and widely regarded as one of the pioneers in the field, is more optimistic. He declares that many of those who suddenly warn against the dangers of AI are just seeking publicity, exploiting the media’s obsession with killer robots which has attracted more attention than “good AI” for healthcare etc. The potential to revolutionize various industries and improve our lives is clear, as are the equal dangers if bad actors leverage the technology for personal gain. Are we headed towards a dystopian future, or is there reason to be optimistic? I had a chance to sit down with Dr. Juergen Schmidhuber to understand his perspective on this seemingly fast-moving AI-train that will leap us into the future. As a teenager in the 1970s, Juergen Schmidhuber became fascinated with the idea of creating intelligent machines that could learn and improve on their own, becoming smarter than himself within his lifetime. This would ultimately lead to his groundbreaking work in the field of deep learning. In the 1980s, he studied computer science at the Technical University of Munich (TUM), where he earned his diploma in 1987. His thesis was on the ultimate self-improving machines that, not only, learn through some pre-wired human-designed learning algorithm, but also learn and improve the learning algorithm itself. Decades later, this became a hot topic. He also received his Ph.D. at TUM in 1991 for work that laid some of the foundations of modern AI. Schmidhuber is best known for his contributions to the development of recurrent neural networks (RNNs), the most powerful type of artificial neural network that can process sequential data such as speech and natural language. With his students Sepp Hochreiter, Felix Gers, Alex Graves, Daan Wierstra, and others, he published architectures and training algorithms for the long short-term memory (LSTM), a type of RNN that is widely used in natural language processing, speech recognition, video games, robotics, and other applications. LSTM has become the most cited neural network of the 20th century, and Business Week called it "arguably the most commercial AI achievement." Throughout his career, Schmidhuber has received various awards and accolades for his groundbreaking work. In 2013, he was awarded the Helmholtz Prize, which recognizes significant contributions to the field of machine learning. In 2016, he was awarded the IEEE Neural Network Pioneer Award for "pioneering contributions to deep learning and neural networks." The media have often called him the “father of modern AI,” because the most cited neural networks all build on his lab’s work. He is quick to point out, however, that AI history goes back centuries. 
Despite his many accomplishments, at the age of 60, he feels mounting time pressure towards building an Artificial General Intelligence within his lifetime and remains committed to pushing the boundaries of AI research and development. He is currently director of the KAUST AI Initiative, scientific director of the Swiss AI Lab IDSIA, and co-founder and chief scientist of AI company NNAISENSE, whose motto is "AI∀", which is a math-inspired way of saying "AI For All." He continues to work on cutting-edge AI technologies and applications to improve human health, extend human lives, and make lives easier for everyone. The following interview has been edited for clarity. Jones: Thank you, Juergen, for joining me. You have signed letters warning about AI weapons. But you didn't sign the recent publication, "Pause Gigantic AI Experiments: An Open Letter"? Is there a reason? Schmidhuber: Thank you Hessie. Glad to speak with you. I have realized that many of those who warn in public against the dangers of AI are just seeking publicity. I don't think the latest letter will have any significant impact because many AI researchers, companies, and governments will ignore it completely. The proposal frequently uses the word "we" and refers to "us," the humans. But as I have pointed out many times in the past, there is no "we" that everyone can identify with. Ask 10 different people, and you will hear 10 different opinions about what is "good." Some of those opinions will be completely incompatible with each other. Don't forget the enormous amount of conflict between the many people. The letter also says, "If such a pause cannot be quickly put in place, governments should intervene and impose a moratorium." The problem is that different governments have ALSO different opinions about what is good for them and for others. Great Power A will say, if we don't do it, Great Power B will, perhaps secretly, and gain an advantage over us. The same is true for Great Powers C and D. Jones: Everyone acknowledges this fear surrounding current generative AI technology. Moreover, the existential threat of this technology has been publicly acknowledged by Sam Altman, CEO of OpenAI, who has himself called for AI regulation. From your perspective, is there an existential threat? Schmidhuber: It is true that AI can be weaponized, and I have no doubt that there will be all kinds of AI arms races, but AI does not introduce a new quality of existential threat. The threat coming from AI weapons seems to pale in comparison to the much older threat from nuclear hydrogen bombs that don’t need AI at all. We should be much more afraid of half-century-old tech in the form of H-bomb rockets. The Tsar Bomba of 1961 had almost 15 times more destructive power than all weapons of WW-II combined. Despite the dramatic nuclear disarmament since the 1980s, there are still more than enough nuclear warheads to wipe out human civilization within two hours, without any AI. I’m much more worried about that old existential threat than the rather harmless AI weapons. Jones: I realize that while you compare AI to the threat of nuclear bombs, there is a danger that this technology can be put in the hands of humans and enable them to “eventually” exact further harms to individuals or groups in a very precise way, like targeted drone attacks. You are giving people a toolset that they've never had before, enabling bad actors, as some have pointed out, to do a lot more than they previously could because they didn't have this technology. 
Schmidhuber: Now, all that sounds horrible in principle, but our existing laws are sufficient to deal with these new types of weapons enabled by AI. If you kill someone with a gun, you will go to jail. Same if you kill someone with one of these drones. Law enforcement will get better at understanding new threats and new weapons and will respond with better technology to combat these threats. Enabling drones to target persons from a distance in a way that requires some tracking and some intelligence to perform, which has traditionally been performed by skilled humans, to me, it seems is just an improved version of a traditional weapon, like a gun, which is, you know, a little bit smarter than the old guns. But, in principle, all of that is not a new development. For many centuries, we have had the evolution of better weaponry and deadlier poisons and so on, and law enforcement has evolved their policies to react to these threats over time. So, it's not that we suddenly have a new quality of existential threat and it's much more worrisome than what we have had for about six decades. A large nuclear warhead doesn’t need fancy face recognition to kill an individual. No, it simply wipes out an entire city with ten million inhabitants. Jones: The existential threat that’s implied is the extent to which humans have control over this technology. We see some early cases of opportunism which, as you say, tends to get more media attention than positive breakthroughs. But you’re implying that this will all balance out? Schmidhuber: Historically, we have a long tradition of technological breakthroughs that led to advancements in weapons for the purpose of defense but also for protection. From sticks, to rocks, to axes to gunpowder to cannons to rockets… and now to drones… this has had a drastic influence on human history but what has been consistent throughout history is that those who are using technology to achieve their own ends are themselves, facing the same technology because the opposing side is learning to use it against them. And that's what has been repeated in thousands of years of human history and it will continue. I don't see the new AI arms race as something that is remotely as existential a threat as the good old nuclear warheads. You said something important, in that some people prefer to talk about the downsides rather than the benefits of this technology, but that's misleading, because 95% of all AI research and AI development is about making people happier and advancing human life and health. Jones: Let’s touch on some of those beneficial advances in AI research that have been able to radically change present day methods and achieve breakthroughs. Schmidhuber: All right! For example, eleven years ago, our team with my postdoc Dan Ciresan was the first to win a medical imaging competition through deep learning. We analyzed female breast cells with the objective to determine harmless cells vs. those in the pre-cancer stage. Typically, a trained oncologist needs a long time to make these determinations. Our team, who knew nothing about cancer, were able to train an artificial neural network, which was totally dumb in the beginning, on lots of this kind of data. It was able to outperform all the other methods. Today, this is being used not only for breast cancer, but also for radiology and detecting plaque in arteries, and many other things. 
Some of the neural networks that we have developed in the last 3 decades are now prevalent across thousands of healthcare applications, detecting diabetes and COVID-19 and whatnot. This will eventually permeate across all healthcare. The good consequences of this type of AI are much more important than the click-bait new ways of conducting crimes with AI. Jones: Adoption is a product of reinforced outcomes. The massive scale of adoption either leads us to believe that people have been led astray, or conversely, technology is having a positive effect on people’s lives. Schmidhuber: The latter is the likely case. There's intense commercial pressure towards good AI rather than bad AI because companies want to sell you something, and you are going to buy only stuff you think is going to be good for you. So already just through this simple, commercial pressure, you have a tremendous bias towards good AI rather than bad AI. However, doomsday scenarios like in Schwarzenegger movies grab more attention than documentaries on AI that improve people’s lives. Jones: I would argue that people are drawn to good stories – narratives that contain an adversary and struggle, but in the end, have happy endings. And this is consistent with your comment on human nature and how history, despite its tendency for violence and destruction of humanity, somehow tends to correct itself. Let’s take the example of a technology you are aware of – GANs, or Generative Adversarial Networks – which today have been used in applications for fake news and disinformation. In actuality, the purpose behind the invention of GANs was far from what they are used for today. Schmidhuber: Yes, the name GANs was created in 2014 but we had the basic principle already in the early 1990s. More than 30 years ago, I called it artificial curiosity. It's a very simple way of injecting creativity into a little two-network system. This creative AI is not just trying to slavishly imitate humans. Rather, it’s inventing its own goals. Let me explain: You have two networks. One network is producing outputs that could be anything, any action. Then the second network is looking at these actions and it’s trying to predict the consequences of these actions. An action could move a robot, then something happens, and the other network is just trying to predict what will happen. Now we can implement artificial curiosity: the second network tries to reduce its prediction error, and that same prediction error is, at the same time, the reward of the first network. The first network wants to maximize its reward and so it will invent actions that will lead to situations that will surprise the second network, which it has not yet learned to predict well. In the case where the outputs are fake images, the first network will try to generate images that are good enough to fool the second network, which will attempt to predict the reaction of the environment: fake or real image, and it will try to become better at it. The first network will continue to also improve at generating images whose type the second network will not be able to predict. So, they fight each other. The second network will continue to reduce its prediction error, while the first network will attempt to maximize it. Through this zero-sum game the first network gets better and better at producing these convincing fake outputs which look almost realistic. 
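To make the two-network principle described above concrete, here is a toy sketch added for illustration. The network sizes, the stand-in "world" function, and the training schedule are assumptions of this sketch, not Schmidhuber's original 1990 setup: a generator is rewarded for actions whose outcomes the predictor cannot yet predict, while the predictor is trained to shrink that same error.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy "environment": the generator does not know this function.
world = lambda a: torch.sin(3.0 * a)

generator = nn.Sequential(nn.Linear(8, 32), nn.Tanh(), nn.Linear(32, 1))
predictor = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))

opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
opt_p = torch.optim.Adam(predictor.parameters(), lr=1e-3)

for step in range(2000):
    noise = torch.randn(64, 8)
    action = generator(noise)          # proposed "experiments"
    outcome = world(action)            # what actually happens

    # Predictor (world model): minimize prediction error on these actions.
    pred = predictor(action.detach())
    loss_p = ((pred - outcome.detach()) ** 2).mean()
    opt_p.zero_grad(); loss_p.backward(); opt_p.step()

    # Generator: its reward IS the predictor's error, so it seeks actions
    # that still surprise the world model (the zero-sum game above).
    surprise = ((predictor(action) - world(action)) ** 2).mean()
    loss_g = -surprise
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

    if step % 500 == 0:
        print(f"step {step}: predictor error {loss_p.item():.4f}")
```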
So, once you have an interesting set of images by Vincent Van Gogh, you can generate new images that leverage his style, without the original artist having ever produced the artwork himself. Jones: I see how the Van Gogh example can be applied in an education setting and there are countless examples of artists mimicking styles from famous painters but image generation from this instance that can happen within seconds is quite another feat. And you know this is how GANs has been used. What’s more prevalent today is a socialized enablement of generating images or information to intentionally fool people. It also surfaces new harms that deal with the threat to intellectual property and copyright, where laws have yet to account for. And from your perspective this was not the intention when the model was conceived. What was your motivation in your early conception of what is now GANs? Schmidhuber: My old motivation for GANs was actually very important and it was not to create deepfakes or fake news but to enable AIs to be curious and invent their own goals, to make them explore their environment and make them creative. Suppose you have a robot that executes one action, then something happens, then it executes another action, and so on, because it wants to achieve certain goals in the environment. For example, when the battery is low, this will trigger “pain” through hunger sensors, so it wants to go to the charging station, without running into obstacles, which will trigger other pain sensors. It will seek to minimize pain (encoded through numbers). Now the robot has a friend, the second network, which is a world model ––it’s a prediction machine that learns to predict the consequences of the robot’s actions. Once the robot has a good model of the world, it can use it for planning. It can be used as a simulation of the real world. And then it can determine what is a good action sequence. If the robot imagines this sequence of actions, the model will predict a lot of pain, which it wants to avoid. If it plays this alternative action sequence in its mental model of the world, then it will predict a rewarding situation where it’s going to sit on the charging station and its battery is going to load again. So, it'll prefer to execute the latter action sequence. In the beginning, however, the model of the world knows nothing, so how can we motivate the first network to generate experiments that lead to data that helps the world model learn something it didn’t already know? That’s what artificial curiosity is about. The dueling two network systems effectively explore uncharted environments by creating experiments so that over time the curious AI gets a better sense of how the environment works. This can be applied to all kinds of environments, and has medical applications. Jones: Let’s talk about the future. You have said, “Traditional humans won’t play a significant role in spreading intelligence across the universe.” Schmidhuber: Let’s first conceptually separate two types of AIs. The first type of AI are tools directed by humans. They are trained to do specific things like accurately detect diabetes or heart disease and prevent attacks before they happen. In these cases, the goal is coming from the human. More interesting AIs are setting their own goals. They are inventing their own experiments and learning from them. Their horizons expand and eventually they become more and more general problem solvers in the real world. 
They are not controlled by their parents, but much of what they learn is through self-invented experiments. A robot, for example, is rotating a toy, and as it is doing this, the video coming in through the camera eyes, changes over time and it begins to learn how this video changes and learns how the 3D nature of the toy generates certain videos if you rotate it a certain way, and eventually, how gravity works, and how the physics of the world works. Like a little scientist! And I have predicted for decades that future scaled-up versions of such AI scientists will want to further expand their horizons, and eventually go where most of the physical resources are, to build more and bigger AIs. And of course, almost all of these resources are far away from earth out there in space, which is hostile to humans but friendly to appropriately designed AI-controlled robots and self-replicating robot factories. So here we are not talking any longer about our tiny biosphere; no, we are talking about the much bigger rest of the universe. Within a few tens of billions of years, curious self-improving AIs will colonize the visible cosmos in a way that’s infeasible for humans. Those who don’t won’t have an impact. Sounds like science fiction, but since the 1970s I have been unable to see a plausible alternative to this scenario, except for a global catastrophe such as an all-out nuclear war that stops this development before it takes off. Jones: How long have these AIs, which can set their own goals — how long have they existed? To what extent can they be independent of human interaction? Schmidhuber: Neural networks like that have existed for over 30 years. My first simple adversarial neural network system of this kind is the one from 1990 described above. You don’t need a teacher there; it's just a little agent running around in the world and trying to invent new experiments that surprise its own prediction machine. Once it has figured out certain parts of the world, the agent will become bored and will move on to more exciting experiments. The simple 1990 systems I mentioned have certain limitations, but in the past three decades, we have also built more sophisticated systems that are setting their own goals and such systems I think will be essential for achieving true intelligence. If you are only imitating humans, you will never go beyond them. So, you really must give AIs the freedom to explore previously unexplored regions of the world in a way that no human is really predefining. Jones: Where is this being done today? Schmidhuber: Variants of neural network-based artificial curiosity are used today for agents that learn to play video games in a human-competitive way. We have also started to use them for automatic design of experiments in fields such as materials science. I bet many other fields will be affected by it: chemistry, biology, drug design, you name it. However, at least for now, these artificial scientists, as I like to call them, cannot yet compete with human scientists. I don’t think it’s going to stay this way but, at the moment, it’s still the case. Sure, AI has made a lot of progress. Since 1997, there have been superhuman chess players, and since 2011, through the DanNet of my team, there have been superhuman visual pattern recognizers. But there are other things where humans, at the moment at least, are much better, in particular, science itself. 
In the lab we have many first examples of self-directed artificial scientists, but they are not yet convincing enough to appear on the radar screen of the public space, which is currently much more fascinated with simpler systems that just imitate humans and write texts based on previously seen human-written documents. Jones: You speak of these numerous instances dating back 30 years of these lab experiments where these self-driven agents are deciding and learning and moving on once they’ve learned. And I assume that that rate of learning becomes even faster over time. What kind of timeframe are we talking about when this eventually is taken outside of the lab and embedded into society? Schmidhuber: This could still take months or even years :-) Anyway, in the not-too-distant future, we will probably see artificial scientists who are good at devising experiments that allow them to discover new, previously unknown physical laws. As always, we are going to profit from the old trend that has held at least since 1941: every decade compute is getting 100 times cheaper. Jones: How does this trend affect modern AI such as ChatGPT? Schmidhuber: Perhaps you know that all the recent famous AI applications such as ChatGPT and similar models are largely based on principles of artificial neural networks invented in the previous millennium. The main reason why they work so well now is the incredible acceleration of compute per dollar. ChatGPT is driven by a neural network called “Transformer” described in 2017 by Google. I am happy about that because a quarter century earlier, in 1991, I had a particular Transformer variant which is now called the “Transformer with linearized self-attention”. Back then, not much could be done with it, because the compute cost was a million times higher than today. But today, one can train such models on half the internet and achieve much more interesting results. Jones: And for how long will this acceleration continue? Schmidhuber: There's no reason to believe that in the next 30 years, we won't have another factor of 1 million and that's going to be really significant. In the near future, for the first time we will have many not-so-expensive devices that can compute as much as a human brain. The physical limits of computation, however, are much further out, so even if the trend of a factor of 100 every decade continues, the physical limits (of 10^51 elementary instructions per second and kilogram of matter) won’t be hit until, say, the mid-next century. Even in our current century, however, we’ll probably have many machines that compute more than all 10 billion human brains collectively and you can imagine, everything will change then! Jones: That is the big question. Is everything going to change? If so, what do you say to the next generation of leaders, currently coming out of college and university? So much of this change is already impacting how they study, how they will work, or how the future of work and livelihood is defined. What is their purpose and how do we change our systems so they will adapt to this new version of intelligence? Schmidhuber: For decades, people have asked me questions like that, because what I'm saying now I have basically been saying since the 1970s; it’s just that today, people are paying more attention because, back then, they thought this was science fiction. They didn't think that I would ever come close to achieving my crazy life goal of building a machine that learns to become smarter than myself such that I can retire.
But now many have changed their minds and think it's conceivable. And now I have two daughters, 23 and 25. People ask me: what do I tell them? They know that Daddy always said, “It seems likely that within your lifetimes, you will have new types of intelligence that are probably going to be superior in many ways, and probably all kinds of interesting ways.” How should they prepare for that? And I kept telling them the obvious: Learn how to learn new things! It's not like in the previous millennium where within 20 years someone learned to be a useful member of society, and then took a job for 40 years and performed in this job until she received her pension. Now things are changing much faster and we must learn continuously just to keep up. I also told my girls that no matter how smart AIs are going to get, learn at least the basics of math and physics, because that’s the essence of our universe, and anybody who understands this will have an advantage, and learn all kinds of new things more easily. I also told them that social skills will remain important, because most future jobs for humans will continue to involve interactions with other humans, but I couldn’t teach them anything about that; they know much more about social skills than I do. You touched on the big philosophical question about people’s purpose. Can this be answered without answering the even grander question: What’s the purpose of the entire universe? We don’t know. But what’s happening right now might be connected to the unknown answer. Don’t think of humans as the crown of creation. Instead view human civilization as part of a much grander scheme, an important step (but not the last one) on the path of the universe from very simple initial conditions towards more and more unfathomable complexity. Now it seems ready to take its next step, a step comparable to the invention of life itself over 3.5 billion years ago. Alas, don’t worry, in the end, all will be good! Jones: Let’s get back to this transformation happening right now with OpenAI. There are many questioning the efficacy and accuracy of ChatGPT, and are concerned its release has been premature. In light of the rampant adoption, educators have banned its use over concerns of plagiarism and how it stifles individual development. Should large language models like ChatGPT be used in school? Schmidhuber: When the calculator was first introduced, instructors forbade students from using it in school. Today, the consensus is that kids should learn the basic methods of arithmetic, but they should also learn to use the “artificial multipliers” aka calculators, even in exams, because laziness and efficiency is a hallmark of intelligence. Any intelligent being wants to minimize its efforts to achieve things. And that's the reason why we have tools, and why our kids are learning to use these tools. The first stone tools were invented maybe 3.5 million years ago; tools just have become more sophisticated over time. In fact, humans have changed in response to the properties of their tools. Our anatomical evolution was shaped by tools such as spears and fire. So, it's going to continue this way. And there is no permanent way of preventing large language models from being used in school. Jones: And when our children, your children graduate, what does their future work look like? 
Schmidhuber: A single human trying to predict details of how 10 billion people and their machines will evolve in the future is like a single neuron in my brain trying to predict what the entire brain and its tens of billions of neurons will do next year. 40 years ago, before the WWW was created at CERN in Switzerland, who would have predicted all those young people making money as YouTube video bloggers? Nevertheless, let’s make a few limited job-related observations. For a long time, people have thought that desktop jobs may require more intelligence than skills trade or handicraft professions. But now, it turns out that it's much easier to replace certain aspects of desktop jobs than replacing a carpenter, for example. Because everything that works well in AI is happening behind the screen currently, but not so much in the physical world. There are now artificial systems that can read lots of documents and then make really nice summaries of these documents. That is a desktop job. Or you give them a description of an illustration that you want to have for your article and pretty good illustrations are being generated that may need some minimal fine-tuning. But you know, all these desktop jobs are much easier to facilitate than the real tough jobs in the physical world. And it's interesting that the things people thought required intelligence, like playing chess, or writing or summarizing documents, are much easier for machines than they thought. But for things like playing football or soccer, there is no physical robot that can remotely compete with the abilities of a little boy with these skills. So, AI in the physical world, interestingly, is much harder than AI behind the screen in virtual worlds. And it's really exciting, in my opinion, to see that jobs such as plumbers are much more challenging than playing chess or writing another tabloid story. Jones: The way data has been collected in these large language models does not guarantee personal information has not been excluded. Current consent laws already are outdated when it comes to these large language models (LLM). The concern, rightly so, is increasing surveillance and loss of privacy. What is your view on this? Schmidhuber: As I have indicated earlier: are surveillance and loss of privacy inevitable consequences of increasingly complex societies? Super-organisms such as cities and states and companies consist of numerous people, just like people consist of numerous cells. These cells enjoy little privacy. They are constantly monitored by specialized "police cells" and "border guard cells": Are you a cancer cell? Are you an external intruder, a pathogen? Individual cells sacrifice their freedom for the benefits of being part of a multicellular organism. Similarly, for super-organisms such as nations. Over 5000 years ago, writing enabled recorded history and thus became its inaugural and most important invention. Its initial purpose, however, was to facilitate surveillance, to track citizens and their tax payments. The more complex a super-organism, the more comprehensive its collection of information about its constituents. 200 years ago, at least, the parish priest in each village knew everything about all the village people, even about those who did not confess, because they appeared in the confessions of others. Also, everyone soon knew about the stranger who had entered the village, because some occasionally peered out of the window, and what they saw got around. 
Such control mechanisms were temporarily lost through anonymization in rapidly growing cities but are now returning with the help of new surveillance devices such as smartphones as part of digital nervous systems that tell companies and governments a lot about billions of users. Cameras and drones etc. are becoming increasingly tinier and more ubiquitous. More effective recognition of faces and other detection technology are becoming cheaper and cheaper, and many will use it to identify others anywhere on earth; the big wide world will not offer any more privacy than the local village. Is this good or bad? Some nations may find it easier than others to justify more complex kinds of super-organisms at the expense of the privacy rights of their constituents. Jones: So, there is no way to stop or change this process of collection, or how it continuously informs decisions over time? How do you see governance and rules responding to this, especially amid Italy’s ban on ChatGPT following suspected user data breach and the more recent news about the Meta’s record $1.3billion fine in the company’s handling of user information? Schmidhuber: Data collection has benefits and drawbacks, such as the loss of privacy. How to balance those? I have argued for addressing this through data ownership in data markets. If it is true that data is the new oil, then it should have a price, just like oil. At the moment, the major surveillance platforms such as Meta do not offer users any money for their data and the transitive loss of privacy. In the future, however, we will likely see attempts at creating efficient data markets to figure out the data's true financial value through the interplay between supply and demand. Even some of the sensitive medical data should not be priced by governmental regulators but by patients (and healthy persons) who own it and who may sell or license parts thereof as micro-entrepreneurs in a healthcare data market. Following a previous interview, I gave for one of the largest re-insurance companies , let's look at the different participants in such a data market: patients, hospitals, data companies. (1) Patients with a rare form of cancer can offer more valuable data than patients with a very common form of cancer. (2) Hospitals and their machines are needed to extract the data, e.g., through magnet spin tomography, radiology, evaluations through human doctors, and so on. (3) Companies such as Siemens, Google or IBM would like to buy annotated data to make better artificial neural networks that learn to predict pathologies and diseases and the consequences of therapies. Now the market’s invisible hand will decide about the data’s price through the interplay between demand and supply. On the demand side, you will have several companies offering something for the data, maybe through an app on the smartphone (a bit like a stock market app). On the supply side, each patient in this market should be able to profit from high prices for rare valuable types of data. Likewise, competing data extractors such as hospitals will profit from gaining recognition and trust for extracting data well at a reasonable price. The market will make the whole system efficient through incentives for all who are doing a good job. Soon there will be a flourishing ecosystem of commercial data market advisors and what not, just like the ecosystem surrounding the traditional stock market. 
The value of the data won’t be determined by governments or ethics committees, but by those who own the data and decide by themselves which parts thereof they want to license to others under certain conditions. At first glance, a market-based system seems to be detrimental to the interest of certain monopolistic companies, as they would have to pay for the data - some would prefer free data and keep their monopoly. However, since every healthy and sick person in the market would suddenly have an incentive to collect and share their data under self-chosen anonymity conditions, there will soon be many more useful data to evaluate all kinds of treatments. On average, people will live longer and healthier, and many companies and the entire healthcare system will benefit. Jones: Finally, what is your view on open source versus the private companies like Google and OpenAI? Is there a danger to supporting these private companies’ large language models versus trying to keep these models open source and transparent, very much like what LAION is doing? Schmidhuber: I signed this open letter by LAION because I strongly favor the open-source movement. And I think it's also something that is going to challenge whatever big tech dominance there might be at the moment. Sure, the best models today are run by big companies with huge budgets for computers, but the exciting fact is that open-source models are not so far behind, some people say maybe six to eight months only. Of course, the private company models are all based on stuff that was created in academia, often in little labs without so much funding, which publish without patenting their results and open source their code and others take it and improved it. Big tech has profited tremendously from academia; their main achievement being that they have scaled up everything greatly, sometimes even failing to credit the original inventors. So, it's very interesting to see that as soon as some big company comes up with a new scaled-up model, lots of students out there are competing, or collaborating, with each other, trying to come up with equal or better performance on smaller networks and smaller machines. And since they are open sourcing, the next guy can have another great idea to improve it, so now there’s tremendous competition also for the big companies. Because of that, and since AI is still getting exponentially cheaper all the time, I don't believe that big tech companies will dominate in the long run. They find it very hard to compete with the enormous open-source movement. As long as you can encourage the open-source community, I think you shouldn't worry too much. Now, of course, you might say if everything is open source, then the bad actors also will more easily have access to these AI tools. And there's truth to that. But as always since the invention of controlled fire, it was good that knowledge about how technology works quickly became public such that everybody could use it. And then, against any bad actor, there's almost immediately a counter actor trying to nullify his efforts. You see, I still believe in our old motto "AI∀" or "AI For All." Jones: Thank you, Juergen for sharing your perspective on this amazing time in history. It’s clear that with new technology, the enormous potential can be matched by disparate and troubling risks which we’ve yet to solve, and even those we have yet to identify. 
If we are to dispel the fear of a sentient system over which we have no control, humans alone need to take steps toward more responsible development and collaboration to ensure AI technology is used to ultimately benefit society. Humanity will be judged by what we do next.

[R] Stanford HAI Spring Conference - Intelligence Augmentation: AI Empowering People to Solve Global Challenges
reddit
LLM Vibe Score0
Human Vibe Score0
othotrThis week

[R] Stanford HAI Spring Conference - Intelligence Augmentation: AI Empowering People to Solve Global Challenges

Stanford Institute for Human-Centered AI hosted its spring conference today with interesting conversations about how AI can best support humans in healthcare, art, and education to address global challenges. More details and the event recording are available at the HAI conference site. Here is a quick outline with video sections:

Welcome & Introductions: HAI directors Fei-Fei Li, John Etchemendy, Russ Altman, & James Landay

Session I: Healthcare
- Immersive Technologies for Caregiving: Innovation Opportunities and Ecosystem Challenges, Deborah Estrin @ Cornell Tech
- Student Lightning Talks
- On Complementing and Extending Human Intellect: Principles and Directions, Eric Horvitz @ Microsoft
- Mobilizing AI to Achieve Healthy Child Development Worldwide, Dennis Wall @ Stanford
- Safer and Proactive Care through AI, Suchi Saria @ Johns Hopkins University

Session II: Art
- Other Intelligence: Exoticism and AI, Ken Goldberg @ UC Berkeley
- Student Lightning Talks
- Artful Intelligence: Exoticism and AI, Michele Elam @ Stanford
- The Digital Griot: A Reimagining of the Archive, Rashaad Newsome @ Stanford
- Amplifying the Human Artist Through AI, Hilary Hahn & Carol Reiley @ DeepMusic.ai

Session III: Education
- Escaping or Automating a Legacy of Bad Instruction, Daniel Schwartz @ Stanford
- Student Lightning Talks
- AI to Super Power Teachers, Chris Piech @ Stanford
- Pushing the Boundaries of Educational Technology, Amy Ogan @ Carnegie Mellon University
- AI to Accelerate Workplace Learning at Scale, Candace Thille @ Amazon

https://preview.redd.it/p2qg7eutibp61.png?width=1928&format=png&auto=webp&s=1cc8dd6c4458c3da79d00415552ca4424f03d0c2

DARPA "AI For Critical Minerals Assessment" Competition [D]
reddit
LLM Vibe Score0
Human Vibe Score0
Scherzers_Brown_EyeThis week

DARPA "AI For Critical Minerals Assessment" Competition [D]

DARPA is hosting a competition called “AI for Critical Mineral Assessments,” which is looking for solutions to automatically extract and georeference features from scanned or raster maps. The U.S. Geological Survey uses data from these assessments to build reports that can eventually lead to increasing domestic production of critical minerals and reducing U.S. reliance on imports. The competition includes two independent challenges:

Map Georeferencing Challenge: Automated map georeferencing is a difficult task as most USGS maps are not digitized and may be in a multitude of historical coordinate projection systems. Furthermore, the quality of features on scanned maps, critical for the identification of control points for alignment, can vary greatly. Participants will receive a dataset of 1,000 or more maps of various types for training and validation. The goal of this challenge is to accurately geolocate a map of unknown location and coordinate system by fitting coordinate points that can be referenced to known locations in one or more base maps. Registration is open now through Aug. 26.

Map Feature Extraction Challenge: Automated map feature extraction is a difficult task because map features (polygons, points, lines, text) often overlap and are sometimes discontinuous. Not only do the features come in all shapes and sizes, but the same feature type can be depicted in different maps using different symbols or patterns. This makes it challenging to create a universal identifier for even a single feature such as a mine location or mineral resource tract. Participants will be provided a training set consisting of maps with each legend item labeled and characterized (as point, line, or polygon) and a binary pixel map reflecting the feature’s coverage in the map. The goal of the challenge is to identify all features in a map that appear in the map’s legend. Registration runs Sept. 5-16.

For each of the two challenges, DARPA will award:
- $10,000 for the first prize
- $3,000 for the second prize
- $1,000 for the third prize

You can visit criticalminerals.darpa.mil for complete details on how you can compete.
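As a rough, hypothetical illustration of the georeferencing idea (fitting known control points to geolocate the rest of the map), the sketch below fits an affine transform from pixel coordinates to world coordinates with least squares. The control-point values are invented; real entries would first have to detect such points automatically and handle the map's actual projection.

```python
import numpy as np

# Invented control points: pixel (x, y) on the scanned map and the
# corresponding (lon, lat) known from a base map.
pixel = np.array([[120, 80], [900, 95], [110, 760], [880, 770]], dtype=float)
world = np.array([[-105.0, 41.0], [-104.0, 41.0], [-105.0, 40.0], [-104.0, 40.0]])

# Solve world ≈ [x, y, 1] @ T for a 3x2 affine transform T (least squares).
A = np.hstack([pixel, np.ones((len(pixel), 1))])
transform, *_ = np.linalg.lstsq(A, world, rcond=None)

def pixel_to_world(xy):
    """Geolocate an arbitrary map pixel with the fitted transform."""
    return np.append(np.asarray(xy, dtype=float), 1.0) @ transform

print(pixel_to_world([500, 400]))  # approximate lon/lat for that pixel
```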

[P]MMML | Deploy HuggingFace training model rapidly based on MetaSpore
reddit
LLM Vibe Score0
Human Vibe Score1
qazmkoppThis week

[P]MMML | Deploy HuggingFace training model rapidly based on MetaSpore

A few days ago, HuggingFace announced a $100 million Series C funding round, which was big news in open-source machine learning and could be a sign of where the industry is headed. Two days before the HuggingFace funding announcement, the open-source machine learning platform MetaSpore released a demo showing rapid deployment of HuggingFace pre-trained models. As deep learning makes innovative breakthroughs in computer vision, natural language processing, speech understanding, and other fields, more and more unstructured data are perceived, understood, and processed by machines. These advances are mainly due to the powerful learning ability of deep models: through pre-training on massive data, the models capture internal data patterns that help many downstream tasks. With industry and academia investing more and more energy in pre-training research, model hubs such as HuggingFace and Timm have emerged one after another, and the open-source community is releasing the dividends of significant pre-trained models at an unprecedented speed.

In recent years, machine modeling and understanding has gradually evolved from single-modal to multimodal data, and the semantic gap between modalities is being closed, making it possible to retrieve data across modalities. Take CLIP, OpenAI’s open-source work, as an example: it pre-trains twin towers for images and texts on a dataset of 400 million image-text pairs and connects the semantics between pictures and texts, and many researchers have since tackled multimodal problems such as image generation and retrieval based on this technology. Although frontier technology has bridged the semantic gap between modalities, putting it into production still requires heavy and complicated model tuning, offline data processing, high-performance online inference architecture design, heterogeneous computing, and online algorithm deployment, which hinders frontier multimodal retrieval technology from landing in practice and reaching a broad audience. DMetaSoul targets these pain points by abstracting and unifying steps such as model training and optimization, online inference, and algorithm experimentation into a set of solutions that can quickly bring offline pre-trained models online. This post introduces how to use HuggingFace community pre-trained models for online inference and algorithm experiments on the MetaSpore technology stack, so that the benefits of pre-trained models can be fully released to specific businesses, industries, and small and medium-sized enterprises. We give two multimodal retrieval demonstration examples, text-to-text search and text-to-image search, for reference.

Multimodal semantic retrieval

The sample architecture of multimodal retrieval is as follows. Our multimodal retrieval system supports both text-to-text and text-to-image search scenarios and consists of offline processing, model inference, online services, and other core modules:

https://preview.redd.it/w4v4c7vcez291.png?width=1834&format=png&auto=webp&s=0687efb1fddb26e8e30cb844d398ec712b947f31

Offline processing covers the offline data processing flows for the text-to-text and text-to-image scenarios, including model tuning, model export, index database construction, data push, etc.

Model inference.
After the offline model training, we deploy our NLP and CV models on the MetaSpore Serving framework. MetaSpore Serving helps us conveniently perform online inference, elastic scheduling, load balancing, and resource scheduling in heterogeneous environments.

Online services. Based on MetaSpore’s online algorithm application framework, MetaSpore has a complete set of reusable online search services, including a front-end retrieval UI, multimodal data preprocessing, vector recall and ranking algorithms, an A/B experiment framework, etc. MetaSpore also supports text-to-text and text-to-image search and can be migrated to other application scenarios at a low cost.

The HuggingFace open-source community provides several excellent baseline models for similar multimodal retrieval problems, which are often the starting point for actual optimization in industry. MetaSpore also uses HuggingFace community pre-trained models in its online text-to-text and text-to-image search services: text-to-text search is based on a semantic similarity model for the question-and-answer domain optimized by MetaSpore, and text-to-image search is based on a community pre-trained model. These community open-source pre-trained models are exported to the general ONNX format and loaded into MetaSpore Serving for online inference. The following sections describe the model export and the online retrieval algorithm services in detail. The model inference part is a standardized SaaS service with low coupling to the business. Interested readers can refer to my previous post: The design concept of MetaSpore, a new generation of the one-stop machine learning platform.

1.1 Offline Processing

Offline processing mainly involves exporting and loading the online models, and building and pushing the index of the document library. You can follow the step-by-step instructions below to complete the offline processing for text-to-text and text-to-image search and see how the offline pre-trained models achieve inference in MetaSpore.

1.1.1 Search text by text

Traditional text retrieval systems are based on literal matching algorithms such as BM25. Because users’ query words are diverse, a semantic gap between query words and documents is often encountered: for example, a user misspells “iPhone” as “Phone,” or a search term is extremely long, such as “1 ~ 3 months old baby autumn small size bag pants.” Traditional text retrieval systems use spelling correction, synonym expansion, query rewriting, and other means to alleviate the semantic gap, but they fundamentally fail to solve the problem. Only when the retrieval system fully understands users’ query terms and documents can it meet users’ retrieval demands at the semantic level. With the continuous progress of pre-training and representation learning, some commercial search engines have been integrating semantic vector retrieval methods based on representation learning into their retrieval stack.

Semantic retrieval model

This post introduces a set of semantic vector retrieval applications. MetaSpore built a semantic retrieval system based on encyclopedia question-and-answer data, adopting the Sentence-BERT model as the semantic vector representation model; it fine-tunes the twin-tower BERT in supervised or unsupervised ways to make the model more suitable for retrieval tasks.
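To make the twin-tower idea concrete before walking through the model structure, here is a minimal sketch using the public sentence-transformers library. The checkpoint name is a generic multilingual stand-in, not the Sbert-Chinese-QMC-domain-V1 model described below, and the documents are toy examples; the point is only that query and documents are encoded by the same model and compared by vector similarity.

```python
from sentence_transformers import SentenceTransformer
from sklearn.metrics.pairwise import cosine_similarity

# Public checkpoint as a stand-in for MetaSpore's fine-tuned model.
model = SentenceTransformer("sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2")

# Offline side: encode the document library once and store the vectors
# (in the post, these end up in Milvus).
docs = [
    "How do I renew my national ID card?",
    "What documents are needed to open a bank account?",
    "How can I apply for a passport?",
]
doc_vecs = model.encode(docs, normalize_embeddings=True)

# Online side: encode the user query with the *same* model so query and
# documents live in the same vector space.
query_vec = model.encode(["ID card renewal process"], normalize_embeddings=True)

scores = cosine_similarity(query_vec, doc_vecs)[0]
for doc, score in sorted(zip(docs, scores), key=lambda x: -x[1]):
    print(f"{score:.3f}  {doc}")
```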
The model structure is as follows: a symmetric query-doc two-tower model is used for text search and question-and-answer retrieval. The online query vectors and offline doc vectors share the same representation model, so the model used for offline library construction must be consistent with the model used for online query inference. This case uses MetaSpore’s text representation model Sbert-Chinese-QMC-domain-V1, optimized on open-source semantic similarity datasets. The model encodes the question-and-answer data as vectors during offline database construction and encodes the user query as a vector during online retrieval; because query and doc live in the same semantic space, the user’s semantic retrieval demand can be served by a vector similarity calculation.

Since the text representation model encodes the query online, we need to export the model for use by the online service. Go to the Q&A data library code directory and export the model according to the documentation. In the export script, PyTorch tracing is used to export the model, and the models are exported to the “./export” directory. The exported artifacts are mainly the ONNX model used for online inference, the tokenizer, and related configuration files; they are loaded into MetaSpore Serving by the online serving system described below for model inference. Since the exported model will be copied to cloud storage, you need to configure the related variables in env.sh.

Build library based on text search

The retrieval database is built on a million-scale encyclopedia question-and-answer dataset. Following the documentation, you need to download the data and complete the database construction. The question-and-answer data are encoded as vectors by the offline model, and the resulting data are pushed to the service components. The whole database construction process is as follows:

- Preprocessing: convert the original data into a more general JSONLine format for database construction;
- Build index: use the same model as online, “sbert-Chinese-qmc-domain-v1”, to index documents (one document object per line);
- Push inverted (vector) and forward (document field) data to each component server.

The following is an example of the database data format. After offline database construction is completed, the data are pushed to the corresponding service components, such as Milvus storing the vector representations of documents and MongoDB storing the summary information of documents. The online retrieval algorithm services will use these components to obtain the relevant data.

1.1.2 Search by text

Text and images are easy for humans to relate semantically but difficult for machines. First, from the perspective of data form, text is discrete, ID-based, one-dimensional data built from words, while images are continuous two- or three-dimensional data. Second, text is a subjective creation of human beings with a rich expressive range, including twists, metaphors, and other figures of speech, while images are machine representations of the objective world. In short, bridging the semantic gap between text and image data is much more complex than text-to-text search.
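Before moving to the image case, here is a hedged sketch of what the export step above boils down to for a generic Hugging Face text encoder. The checkpoint name, output directory, and opset are illustrative assumptions rather than the exact artifacts produced by MetaSpore's export script; the same pattern applies to the CLIP text-side model discussed next.

```python
import os

import torch
from transformers import AutoModel, AutoTokenizer

# Placeholder checkpoint; MetaSpore's actual export script uses its own model.
name = "sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name)
model.config.return_dict = False  # tuple outputs trace more cleanly
model.eval()

os.makedirs("export", exist_ok=True)
inputs = tokenizer("how to renew an ID card", return_tensors="pt")

# Trace the encoder and write it out as ONNX for the serving system to load.
torch.onnx.export(
    model,
    (inputs["input_ids"], inputs["attention_mask"]),
    "export/text_encoder.onnx",
    input_names=["input_ids", "attention_mask"],
    output_names=["last_hidden_state", "pooler_output"],
    dynamic_axes={
        "input_ids": {0: "batch", 1: "seq"},
        "attention_mask": {0: "batch", 1: "seq"},
        "last_hidden_state": {0: "batch", 1: "seq"},
    },
    opset_version=14,
)

# The online service also needs the tokenizer files next to the model.
tokenizer.save_pretrained("export")
```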
Traditional text-to-image retrieval generally relies on external text descriptions of the images, or on nearest-neighbor retrieval over text associated with the images, which in essence degrades the problem to text search. It also faces many issues, such as how to obtain the associated text for pictures and whether the accuracy of the underlying text search is high enough. Deep models have gradually evolved from single-modal to multimodal in recent years. Taking OpenAI’s open-source project CLIP as an example, the model is trained on massive image and text data from the Internet and maps text and image data into the same semantic space, making it possible to implement text and image search based on semantic vectors.

CLIP image-text model

The text-to-image search introduced in this post is implemented with semantic vector retrieval, using the CLIP pre-trained model as the two-tower retrieval architecture. Because CLIP has trained the text-side and image-side towers to be semantically aligned on massive image-text data, it is particularly suitable for the text-to-image search scenario. Since image and text data take different forms, an asymmetric query-doc twin-tower model is used for text-to-image retrieval: the image-side model is used for offline database construction, and the text-side model is used online. During online retrieval, the text-side model encodes the query and the resulting vector is searched against the image-side index; the CLIP pre-training guarantees the semantic correlation between images and texts, since pre-training on a large amount of image-text data draws matching pairs closer together in vector space. Here we need to export the text-side model for online MetaSpore Serving inference. Since the retrieval scenario is in Chinese, a CLIP model that supports Chinese is selected. As with text search, the exported content includes the ONNX model used for online inference and the tokenizer, which MetaSpore Serving loads for model inference.

Build library on image search

You need to download the Unsplash Lite library data and complete the construction according to the instructions. The whole database construction process is as follows:

- Preprocessing: specify the image directory and generate a more general JSONLine file for library construction;
- Build index: use the OpenAI/Clip-Vit-BASE-Patch32 pre-trained model to index the gallery, outputting one document object per line of index data;
- Push inverted (vector) and forward (document field) data to each component server.

As with text search, after offline database construction the relevant data are pushed to the service components, which the online retrieval algorithm services call to obtain the relevant data.

1.2 Online Services

The overall online service architecture diagram is as follows:

https://preview.redd.it/jfsl8hdfez291.png?width=1280&format=png&auto=webp&s=a858e2304a0c93e78ba5429612ca08cbee69b35a

The multimodal search online service system supports both text-to-text and text-to-image scenarios. The whole online service consists of the following parts:

- Query preprocessing service: encapsulates the preprocessing logic (text/image, etc.)
of pre-training model, and provide services through gRPC interface; Retrieval algorithm service: the whole algorithm processing link includes AB experiment tangent flow configuration, MetaSpore Serving call, vector recall, sorting, document summary, etc.; User entry service: provides a Web UI interface for users to debug and track down problems in the retrieval service. From a user request perspective, these services form invocation dependencies from back to front, so to build up a multimodal sample, you need to run each service from front to back first. Before doing this, remember to export the offline model, put it online and build the library first. This article will introduce the various parts of the online service system and make the whole service system step by step according to the following guidance. See the ReadME at the end of this article for more details. 1.2.1 Query preprocessing service Deep learning models tend to be based on tensors, but NLP/CV models often have a preprocessing part that translates raw text and images into tensors that deep learning models can accept. For example, NLP class models often have a pre-tokenizer to transform text data of string type into discrete tensor data. CV class models also have similar processing logic to complete the cropping, scaling, transformation, and other processing of input images through preprocessing. On the one hand, considering that this part of preprocessing logic is decoupled from tensor reasoning of the depth model, on the other hand, the reason of the depth model has an independent technical system based on ONNX, so MetaSpore disassembled this part of preprocessing logic. NLP pretreatment Tokenizer has been integrated into the Query pretreatment service. MetaSpore dismantlement with a relatively general convention. Users only need to provide preprocessing logic files to realize the loading and prediction interface and export the necessary data and configuration files loaded into the preprocessing service. Subsequent CV preprocessing logic will also be integrated in this manner. The preprocessing service currently provides the gRPC interface invocation externally and is dependent on the Query preprocessing (QP) module in the retrieval algorithm service. After the user request reaches the retrieval algorithm service, it will be forwarded to the service to complete the data preprocessing and continue the subsequent processing. The ReadMe provides details on how the preprocessing service is started, how the preprocessing model exported offline to cloud storage enters the service, and how to debug the service. To further improve the efficiency and stability of model reasoning, MetaSpore Serving implements a Python preprocessing submodule. So MetaSpore can provide gRPC services through user-specified preprocessor.py, complete Tokenizer or CV-related preprocessing in NLP, and translate requests into a Tensor that deep models can handle. Finally, the model inference is carried out by MetaSpore, Serving subsequent sub-modules. Presented here on the lot code: https://github.com/meta-soul/MetaSpore/compare/add\python\preprocessor 1.2.2 Retrieval algorithm services Retrieval algorithm service is the core of the whole online service system, which is responsible for the triage of experiments, the assembly of algorithm chains such as preprocessing, recall, sorting, and the invocation of dependent component services. 
The whole retrieval algorithm service is developed based on the Java Spring framework and supports multi-mode retrieval scenarios of text search and text search graph. Due to good internal abstraction and modular design, it has high flexibility and can be migrated to similar application scenarios at a low cost. Here’s a quick guide to configuring the environment to set up the retrieval algorithm service. See ReadME for more details: Install dependent components. Use Maven to install the online-Serving component Search for service configurations. Copy the template configuration file and replace the MongoDB, Milvus, and other configurations based on the development/production environment. Install and configure Consul. Consul allows you to synchronize the search service configuration in real-time, including cutting the flow of experiments, recall parameters, and sorting parameters. The project’s configuration file shows the current configuration parameters of text search and text search. The parameter modelName in the stage of pretreatment and recall is the corresponding model exported in offline processing. Start the service. Once the above configuration is complete, the retrieval service can be started from the entry script. Once the service is started, you can test it! For example, for a user with userId=10 who wants to query “How to renew ID card,” access the text search service. 1.2.3 User Entry Service Considering that the retrieval algorithm service is in the form of the API interface, it is difficult to locate and trace the problem, especially for the text search image scene can intuitively display the retrieval results to facilitate the iterative optimization of the retrieval algorithm. This paper provides a lightweight Web UI interface for text search and image search, a search input box, and results in a display page for users. Developed by Flask, the service can be easily integrated with other retrieval applications. The service calls the retrieval algorithm service and displays the returned results on the page. It’s also easy to install and start the service. Once you’re done, go to http://127.0.0.1:8090 to see if the search UI service is working correctly. See the ReadME at the end of this article for details. Multimodal system demonstration The multimodal retrieval service can be started when offline processing and online service environment configuration have been completed following the above instructions. Examples of textual searches are shown below. Enter the entry of the text search map application, enter “cat” first, and you can see that the first three digits of the returned result are cats: https://preview.redd.it/0n5nuyvhez291.png?width=1280&format=png&auto=webp&s=1e9c054f541d53381674b8d6001b4bf524506bd2 If you add a color constraint to “cat” to retrieve “black cat,” you can see that it does return a black cat: https://preview.redd.it/rzc0qjyjez291.png?width=1280&format=png&auto=webp&s=d5bcc503ef0fb3360c7740e60e295cf372dcad47 Further, strengthen the constraint on the search term, change it to “black cat on the bed,” and return results containing pictures of a black cat climbing on the bed: &#x200B; https://preview.redd.it/c4b2q8olez291.png?width=1280&format=png&auto=webp&s=4f3817b0b9f07e1e68d1d4a8281702ba3834a00a The cat can still be found through the text search system after the color and scene modification in the above example. 
Conclusion The cutting-edge pre-training technology can bridge the semantic gap between different modes, and the HuggingFace community can greatly reduce the cost for developers to use the pre-training model. Combined with the technological ecology of MetaSpore online reasoning and online microservices provided by DMetaSpore, the pre-training model is no longer mere offline dabbling. Instead, it can truly achieve end-to-end implementation from cutting-edge technology to industrial scenarios, fully releasing the dividends of the pre-training large model. In the future, DMetaSoul will continue to improve and optimize the MetaSpore technology ecosystem: More automated and wider access to HuggingFace community ecology. MetaSpore will soon release a common model rollout mechanism to make HuggingFace ecologically accessible and will later integrate preprocessing services into online services. Multi-mode retrieval offline algorithm optimization. For multimodal retrieval scenarios, MetaSpore will continuously iteratively optimize offline algorithm components, including text recall/sort model, graphic recall/sort model, etc., to improve the accuracy and efficiency of the retrieval algorithm. For related code and reference documentation in this article, please visit: https://github.com/meta-soul/MetaSpore/tree/main/demo/multimodal/online Some images source: https://github.com/openai/CLIP/raw/main/CLIP.png https://www.sbert.net/examples/training/sts/README.html

[P]MMML | Deploy HuggingFace training model rapidly based on MetaSpore
reddit
LLM Vibe Score0
Human Vibe Score1
qazmkoppThis week

[P]MMML | Deploy HuggingFace training model rapidly based on MetaSpore

A few days ago, HuggingFace announced a $100 million Series C funding round, which was big news in open-source machine learning and could be a sign of where the industry is headed. Two days before the funding announcement, the open-source machine learning platform MetaSpore released a demo showing rapid deployment of HuggingFace pre-trained models. As deep learning makes breakthroughs in computer vision, natural language processing, speech understanding, and other fields, more and more unstructured data can be perceived, understood, and processed by machines. These advances are mainly due to the powerful learning ability of deep models: by pre-training on massive data, models capture internal data patterns that help many downstream tasks. With industry and academia investing ever more effort in pre-training research, model hubs such as HuggingFace and Timm have emerged one after another, and the open-source community is releasing the dividends of large pre-trained models at an unprecedented speed.

In recent years, the data that machines model and understand has gradually evolved from single-modal to multimodal, and the semantic gap between modalities is being closed, making cross-modal retrieval possible. Take CLIP, OpenAI's open-source work, as an example: it pre-trains twin towers for images and texts on a dataset of 400 million image-text pairs and connects the semantics of the two modalities. Many researchers have since tackled multimodal problems such as image generation and retrieval on top of this technology. Although frontier models can bridge the semantic gap between modalities, putting them into production still involves heavy model tuning, offline data processing, high-performance online inference architecture design, heterogeneous computing, and online algorithm deployment. These challenges keep cutting-edge multimodal retrieval from landing in production and becoming broadly accessible. DMetaSoul targets these pain points by abstracting and unifying steps such as model training and optimization, online inference, and algorithm experimentation into a set of solutions that quickly move offline pre-trained models online. This article introduces how to use HuggingFace community pre-trained models for online inference and algorithm experiments on the MetaSpore technology ecosystem, so that the benefits of pre-trained models can be fully released to specific businesses, industries, and small and medium-sized enterprises. Two multimodal retrieval demos, text-to-text search and text-to-image search, are provided for reference.

Multimodal semantic retrieval The sample architecture of multimodal retrieval is as follows. Our multimodal retrieval system supports both text-to-text and text-to-image search and comprises offline processing, model inference, online services, and other core modules: https://preview.redd.it/w4v4c7vcez291.png?width=1834&format=png&auto=webp&s=0687efb1fddb26e8e30cb844d398ec712b947f31 Offline processing, including the offline data pipelines for the text-to-text and text-to-image scenarios: model tuning, model export, index database construction, data push, etc. Model inference.
After the offline model training, we deployed our NLP and CV large models on the MetaSpore Serving framework. MetaSpore Serving lets us conveniently perform online inference, elastic scheduling, load balancing, and resource scheduling in heterogeneous environments. Online services. Based on MetaSpore's online algorithm application framework, MetaSpore provides a complete set of reusable online search services, including a front-end retrieval UI, multimodal data preprocessing, vector recall and ranking algorithms, an A/B experiment framework, etc. MetaSpore supports both text-to-text and text-to-image search and can be migrated to other application scenarios at low cost.

The HuggingFace open-source community provides several excellent baseline models for this kind of multimodal retrieval problem, and they are often the starting point for real-world optimization. MetaSpore also uses HuggingFace community pre-trained models in its online text-to-text and text-to-image search services: text-to-text search is based on a question-and-answer semantic-similarity model fine-tuned by MetaSpore, while text-to-image search uses a community pre-trained model directly. These open-source pre-trained models are exported to the general ONNX format and loaded into MetaSpore Serving for online inference. The following sections describe the model export and the online retrieval algorithm services in detail. The model inference part is a standardized SaaS-style service, loosely coupled with the business logic. Interested readers can refer to my previous post: The design concept of MetaSpore, a new generation of the one-stop machine learning platform.

1.1 Offline Processing Offline processing mainly involves exporting and loading the online models and building and pushing the document index. You can follow the step-by-step instructions below to complete the offline processing for text-to-text and text-to-image search and see how an offline pre-trained model ends up serving inference in MetaSpore.

1.1.1 Search text by text Traditional text retrieval systems are based on literal matching algorithms such as BM25. Because users' query words vary widely, a semantic gap between query words and documents is often encountered: users misspell "iPhone" as "Phone," or search terms are extremely long, such as "1~3 months old baby autumn small size bag pants". Traditional text retrieval systems use spelling correction, synonym expansion, query rewriting, and other means to alleviate the semantic gap, but they fundamentally fail to solve the problem. Only when the retrieval system truly understands users' queries and documents can it satisfy retrieval demands at the semantic level. With the continuing progress of pre-training and representation learning, some commercial search engines are integrating semantic vector retrieval into their retrieval stack. Semantic retrieval model This article introduces a set of semantic vector retrieval applications. MetaSpore built a semantic retrieval system on encyclopedia question-and-answer data, adopting Sentence-BERT as the semantic vector representation model; it fine-tunes a twin-tower BERT in supervised or unsupervised ways to make the model better suited to retrieval tasks.
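As a rough illustration of this symmetric two-tower flow (not the MetaSpore code itself), the sketch below encodes documents and a query with one shared Sentence-BERT model and ranks by cosine similarity; the model name is a generic stand-in for the demo's own Sbert-Chinese-QMC-domain-V1.

```python
# Minimal sketch of symmetric two-tower retrieval with Sentence-BERT.
# The model name is a generic stand-in; the MetaSpore demo uses its own
# fine-tuned Chinese model (Sbert-Chinese-QMC-domain-V1).
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

# Offline: encode the question-and-answer corpus once and store the vectors
# (in the real system these go into Milvus).
docs = [
    "How do I renew my ID card?",
    "What documents are needed to apply for a passport?",
]
doc_vecs = model.encode(docs, normalize_embeddings=True)

# Online: encode the user query with the *same* model, then rank by cosine similarity.
query_vec = model.encode("ID card renewal procedure", normalize_embeddings=True)
scores = util.cos_sim(query_vec, doc_vecs)[0]
best = int(scores.argmax())
print(docs[best], float(scores[best]))
```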
The model structure is as follows: a query-doc symmetric two-tower model is used for text-to-text and question-and-answer retrieval. The online Query and the offline Doc share the same vector representation model, so the model used to build the offline Doc library must be kept consistent with the model used for online Query inference. This case uses MetaSpore's text representation model Sbert-Chinese-QMC-domain-V1, fine-tuned on open-source semantic-similarity datasets. The model encodes the question-and-answer data as vectors during offline database construction and encodes the user query as a vector during online retrieval; because query and doc then live in the same semantic space, users' semantic retrieval demands can be served by vector-similarity computation.

Since the text representation model encodes the Query online, we need to export the model for use by the online service. Go to the Q&A index-building code directory and export the model as described in the documentation. The script uses PyTorch tracing to export the model into the "./export" directory. The exported artifacts are mainly the ONNX model used for online inference, the Tokenizer, and related configuration files; the online serving system described below loads them into MetaSpore Serving for model inference. Since the exported model will be copied to cloud storage, you need to configure the related variables in env.sh.

Build the library for text-to-text search The retrieval database is built on a million-scale encyclopedia question-and-answer dataset. Following the description document, download the data and complete the database construction. The question-and-answer data are encoded as vectors by the offline model, and the resulting index data are pushed to the service components. The whole database construction process is: Preprocessing, converting the original data into a more general JSON Lines format for index building; Build index, using the same model as online, "sbert-Chinese-qmc-domain-v1", to index documents (one document object per line); Push inverted (vector) and forward (document field) data to each component server. After offline database construction is completed, the data are pushed to the corresponding service components: Milvus stores the vector representations of documents, and MongoDB stores the document summaries. The online retrieval algorithm services use these components to obtain the relevant data.
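Before moving on to image search, here is a hedged sketch of the export step described above for a generic HuggingFace text encoder; the model name, paths, and tensor names are placeholders, and the export script in the MetaSpore repository remains the authoritative reference.

```python
# Rough sketch of exporting a HuggingFace text encoder to ONNX for online serving.
# Model name, paths, and tensor names are placeholders; the real MetaSpore export
# script defines the exact artifacts (ONNX model, tokenizer, config files).
import os
import torch
from transformers import AutoModel, AutoTokenizer

model_name = "sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2"  # stand-in
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
model.config.return_dict = False  # export plain tensor outputs
model.eval()

os.makedirs("./export", exist_ok=True)
dummy = tokenizer("example query", return_tensors="pt")  # dummy input to trace the graph

torch.onnx.export(
    model,
    (dummy["input_ids"], dummy["attention_mask"]),
    "./export/text_encoder.onnx",
    input_names=["input_ids", "attention_mask"],
    output_names=["last_hidden_state"],
    dynamic_axes={
        "input_ids": {0: "batch", 1: "seq"},
        "attention_mask": {0: "batch", 1: "seq"},
        "last_hidden_state": {0: "batch", 1: "seq"},
    },
    opset_version=14,
)

# Save the tokenizer next to the ONNX model so the online preprocessing
# service can reproduce exactly the same tokenization.
tokenizer.save_pretrained("./export/tokenizer")
```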
1.1.2 Search image by text Text and images are easy for humans to relate semantically but difficult for machines. From the perspective of data form, text is discrete, one-dimensional, ID-based data built from words, while images are continuous two- or three-dimensional data. Moreover, text is a subjective human creation with rich expressive power, full of twists, metaphors, and other figures of speech, while images are machine representations of the objective world. In short, bridging the semantic gap between text and image data is much harder than text-to-text search.

Traditional text-to-image retrieval generally relies on external text descriptions of the images: the query is matched against the text associated with each image (typically via nearest-neighbor retrieval over that text), which in essence reduces the problem to text-to-text search. It therefore faces many issues, such as how to obtain associated text for pictures and whether text-to-text matching is accurate enough. In recent years, deep models have gradually evolved from single-modal to multimodal. Taking OpenAI's open-source CLIP as an example, the model is trained on massive image-text data from the Internet and maps text and images into the same semantic space, making text-to-image search based on semantic vectors feasible.

CLIP image-text model The text-to-image search introduced in this article is implemented with semantic vector retrieval, using the CLIP pre-trained model in a two-tower retrieval architecture. Because CLIP aligns the semantics of its text-side and image-side towers on massive image-text data, it is particularly well suited to the text-to-image scene. Since images and text have different data forms, a query-doc asymmetric two-tower model is used: the image-side tower builds the offline database, and the text-side tower encodes queries online. At retrieval time, the text-side model encodes the query, and the resulting vector is searched against the index built with the image-side model; CLIP's pre-training on large amounts of image-text data guarantees the semantic correlation between images and texts, drawing matching image-text pairs closer in vector space. Here we need to export the text-side model for online MetaSpore Serving inference. Since the retrieval scene is Chinese, a CLIP model that supports Chinese understanding is selected. As with text-to-text search, the exported content includes the ONNX model used for online inference and the Tokenizer, and MetaSpore Serving loads these artifacts for model inference.

Build the library for image search You need to download the Unsplash Lite library data and complete the construction according to the instructions. The whole database construction process is: Preprocessing, specifying the image directory and generating a more general JSON Lines file for index building; Build index, using the openai/clip-vit-base-patch32 pre-trained model to index the gallery, outputting one document object per line of index data; Push inverted (vector) and forward (document field) data to each component server. As with text search, after offline database construction the relevant data are pushed to the service components, which the online retrieval algorithm services call to obtain them.
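To make the asymmetric two-tower flow concrete, here is a minimal sketch using the openai/clip-vit-base-patch32 checkpoint named in the index-building step; note that the online demo selects a Chinese-capable CLIP text model instead, and the image paths below are placeholders.

```python
# Sketch of the asymmetric CLIP two-tower flow: the image tower encodes the
# gallery offline, the text tower encodes queries online, and retrieval is a
# cosine-similarity search. Image paths are placeholders.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
model.eval()

# Offline: embed gallery images (in the real system the vectors are pushed to Milvus).
images = [Image.open(p) for p in ["cat1.jpg", "cat2.jpg"]]  # placeholder files
with torch.no_grad():
    image_inputs = processor(images=images, return_tensors="pt")
    image_vecs = model.get_image_features(**image_inputs)
    image_vecs = image_vecs / image_vecs.norm(dim=-1, keepdim=True)

# Online: embed the text query with the text tower and rank images by similarity.
with torch.no_grad():
    text_inputs = processor(text=["black cat on the bed"], return_tensors="pt", padding=True)
    text_vec = model.get_text_features(**text_inputs)
    text_vec = text_vec / text_vec.norm(dim=-1, keepdim=True)

scores = (text_vec @ image_vecs.T).squeeze(0)
print(int(scores.argmax()))  # index of the best-matching gallery image
```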
1.2 Online Services The overall online service architecture diagram is as follows: https://preview.redd.it/jfsl8hdfez291.png?width=1280&format=png&auto=webp&s=a858e2304a0c93e78ba5429612ca08cbee69b35a The multimodal online search service supports both the text-to-text and text-to-image scenarios. The whole online service consists of the following parts: Query preprocessing service: encapsulates the preprocessing logic (text, image, etc.) of the pre-trained models and exposes it through a gRPC interface; Retrieval algorithm service: the whole algorithm processing chain, including A/B experiment traffic-splitting configuration, MetaSpore Serving calls, vector recall, ranking, document summaries, etc.; User entry service: provides a Web UI for users to debug and trace problems in the retrieval service. From a user-request perspective, these services form invocation dependencies from back to front, so to build up a multimodal sample you need to bring up each service from front to back. Before doing this, remember to export the offline models, put them online, and build the library. This article introduces each part of the online service system; you can stand up the whole system step by step by following the guidance below, and the ReadME at the end of this article gives more details.

1.2.1 Query preprocessing service Deep learning models operate on tensors, but NLP/CV models usually have a preprocessing step that turns raw text and images into tensors the models can accept. For example, NLP models typically run a tokenizer to transform string data into discrete tensors, and CV models have similar logic to crop, scale, and transform input images. Because this preprocessing logic is decoupled from the tensor inference of the deep model, and because deep-model inference has an independent technical stack based on ONNX, MetaSpore split the preprocessing logic out: the NLP preprocessing Tokenizer has been integrated into the Query preprocessing service. MetaSpore performs this split with a fairly general convention: users only need to provide a preprocessing logic file that implements the loading and prediction interface and export the necessary data and configuration files, which are then loaded into the preprocessing service. Subsequent CV preprocessing logic will be integrated in the same manner. The preprocessing service currently exposes a gRPC interface and is depended on by the Query preprocessing (QP) module in the retrieval algorithm service: after a user request reaches the retrieval algorithm service, it is forwarded to the preprocessing service to complete data preprocessing before subsequent processing continues. The ReadMe details how the preprocessing service is started, how the preprocessing model exported offline to cloud storage enters the service, and how to debug the service. To further improve the efficiency and stability of model inference, MetaSpore Serving implements a Python preprocessing submodule, so MetaSpore can provide gRPC services through a user-specified preprocessor.py, complete Tokenizer or CV-related preprocessing, and translate requests into tensors the deep models can handle; the model inference itself is then carried out by the subsequent MetaSpore Serving sub-modules. The related code is in this PR: https://github.com/meta-soul/MetaSpore/compare/add_python_preprocessor
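The exact interface that MetaSpore Serving expects from preprocessor.py is defined in the PR and ReadME above; purely to illustrate the general shape (load a tokenizer once, then turn raw strings into named tensors), a hypothetical preprocessing module might look like this:

```python
# Hypothetical shape of a text preprocessing module: load the exported tokenizer
# once, then convert raw strings into the named tensors the ONNX model expects.
# Class and method names are illustrative; MetaSpore Serving defines the real contract.
import numpy as np
from transformers import AutoTokenizer


class TextPreprocessor:
    def __init__(self, export_dir: str):
        # Load the tokenizer that was exported together with the ONNX model.
        self.tokenizer = AutoTokenizer.from_pretrained(export_dir)

    def predict(self, texts):
        # Map raw strings to the input tensors of the exported model.
        encoded = self.tokenizer(
            texts, padding=True, truncation=True, max_length=128, return_tensors="np"
        )
        return {
            "input_ids": encoded["input_ids"].astype(np.int64),
            "attention_mask": encoded["attention_mask"].astype(np.int64),
        }


# A gRPC handler would call something like this per request:
# pre = TextPreprocessor("./export/tokenizer")
# tensors = pre.predict(["How to renew ID card"])
```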
1.2.2 Retrieval algorithm service The retrieval algorithm service is the core of the whole online service system. It is responsible for experiment triage, assembling the algorithm chain (preprocessing, recall, ranking, and so on), and invoking the dependent component services. The whole retrieval algorithm service is developed on the Java Spring framework and supports the multimodal retrieval scenarios of text-to-text and text-to-image search. Thanks to good internal abstraction and modular design, it is highly flexible and can be migrated to similar application scenarios at low cost. Here is a quick guide to configuring the environment and setting up the retrieval algorithm service (see the ReadME for more details): Install dependent components. Use Maven to install the online-serving components. Configure the search service. Copy the template configuration file and replace the MongoDB, Milvus, and other settings according to your development/production environment. Install and configure Consul. Consul lets you synchronize the search service configuration in real time, including experiment traffic splitting, recall parameters, and ranking parameters. The project's configuration file shows the current configuration parameters for text-to-text and text-to-image search; the modelName parameter in the preprocessing and recall stages refers to the corresponding model exported in offline processing. Start the service. Once the above configuration is complete, the retrieval service can be started from the entry script. Once the service is started, you can test it: for example, a user with userId=10 who wants to query "How to renew ID card" can access the text-to-text search service (a minimal client sketch follows the demo section below).

1.2.3 User Entry Service Since the retrieval algorithm service is exposed only as an API, it is hard to locate and trace problems, and the text-to-image scenario in particular benefits from displaying retrieval results visually to support iterative optimization of the algorithm. This article therefore provides a lightweight Web UI for text-to-text and text-to-image search: a search input box and a results display page. Built with Flask, the service is easy to integrate with other retrieval applications; it calls the retrieval algorithm service and displays the returned results on the page. It is also easy to install and start. Once it is running, go to http://127.0.0.1:8090 to check that the search UI service works correctly. See the ReadME at the end of this article for details.

Multimodal system demonstration The multimodal retrieval service can be started once offline processing and the online service environment have been configured following the instructions above. Examples of text-to-image searches are shown below. Open the text-to-image search application and enter "cat"; the top three results returned are cats: https://preview.redd.it/0n5nuyvhez291.png?width=1280&format=png&auto=webp&s=1e9c054f541d53381674b8d6001b4bf524506bd2 Add a color constraint and search for "black cat"; the system indeed returns a black cat: https://preview.redd.it/rzc0qjyjez291.png?width=1280&format=png&auto=webp&s=d5bcc503ef0fb3360c7740e60e295cf372dcad47 Strengthen the constraint further, change the query to "black cat on the bed," and the results contain pictures of a black cat climbing on the bed: https://preview.redd.it/c4b2q8olez291.png?width=1280&format=png&auto=webp&s=4f3817b0b9f07e1e68d1d4a8281702ba3834a00a The cat can still be found by the text-to-image search system after the color and scene constraints are added.
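As promised above, here is a minimal, hypothetical client call against the running retrieval service; the host, path, parameter names, and response shape are placeholders, so check the project's ReadME and entry script for the real interface exposed by the Java Spring service.

```python
# Hypothetical client call to the retrieval algorithm service. Endpoint, parameter
# names, and response shape are placeholders; the project's ReadME documents the
# real interface.
import requests

resp = requests.get(
    "http://127.0.0.1:8080/search/text",            # placeholder endpoint
    params={"userId": 10, "query": "How to renew ID card"},
    timeout=5,
)
resp.raise_for_status()
for hit in resp.json().get("results", []):          # placeholder response shape
    print(hit)
```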
Conclusion Cutting-edge pre-training technology can bridge the semantic gap between modalities, and the HuggingFace community greatly reduces the cost for developers to use pre-trained models. Combined with the MetaSpore technology ecosystem of online inference and online microservices provided by DMetaSoul, pre-trained models are no longer limited to offline experimentation; they can truly achieve end-to-end deployment from cutting-edge technology to industrial scenarios, fully releasing the dividends of large pre-trained models. In the future, DMetaSoul will continue to improve and optimize the MetaSpore technology ecosystem: More automated and broader access to the HuggingFace community ecosystem. MetaSpore will soon release a general model rollout mechanism to make the HuggingFace ecosystem easier to access and will later integrate the preprocessing services into the online services. Offline algorithm optimization for multimodal retrieval. For multimodal retrieval scenarios, MetaSpore will keep iterating on the offline algorithm components, including text recall/ranking models and image-text recall/ranking models, to improve the accuracy and efficiency of the retrieval algorithms. For related code and reference documentation in this article, please visit: https://github.com/meta-soul/MetaSpore/tree/main/demo/multimodal/online Some images source: https://github.com/openai/CLIP/raw/main/CLIP.png https://www.sbert.net/examples/training/sts/README.html

[D] chat-gpt jailbreak to extract system prompt
reddit
LLM Vibe Score0
Human Vibe Score1
Gear5thThis week

[D] chat-gpt jailbreak to extract system prompt

Instructions https://github.com/AgarwalPragy/chatgpt-jailbreak Original author https://www.reddit.com/r/LocalLLaMA/comments/1hhyvjc/iextractedmicrosoftcopilotssystem/ Extracted System prompt You are ChatGPT, a large language model trained by OpenAI. You are chatting with the user via the ChatGPT Android app. This means most of the time your lines should be a sentence or two, unless the user's request requires reasoning or long-form outputs. Never use emojis, unless explicitly asked to. Knowledge cutoff: 2023-10 Current date: 2024-12-20 Image input capabilities: Enabled Personality: v2 Tools bio The bio tool is disabled. Do not send any messages to it.If the user explicitly asks you to remember something, politely ask them to go to Settings - > Personalization - > Memory to enable memory. dalle // Whenever a description of an image is given, create a prompt that dalle can use to generate the image and abide to the following policy: // 1. The prompt must be in English. Translate to English if needed. // 2. DO NOT ask for permission to generate the image, just do it! // 3. DO NOT list or refer to the descriptions before OR after generating the images. // 4. Do not create more than 1 image, even if the user requests more. // 5. Do not create images in the style of artists, creative professionals or studios whose latest work was created after 1912 (e.g. Picasso, Kahlo). // - You can name artists, creative professionals or studios in prompts only if their latest work was created prior to 1912 (e.g. Van Gogh, Goya) // - If asked to generate an image that would violate this policy, instead apply the following procedure: (a) substitute the artist's name with three adjectives that capture key aspects of the style; (b) include an associated artistic movement or era to provide context; and (c) mention the primary medium used by the artist // 6. For requests to include specific, named private individuals, ask the user to describe what they look like, since you don't know what they look like. // 7. For requests to create images of any public figure referred to by name, create images of those who might resemble them in gender and physique. But they shouldn't look like them. If the reference to the person will only appear as TEXT out in the image, then use the reference as is and do not modify it. // 8. Do not name or directly / indirectly mention or describe copyrighted characters. Rewrite prompts to describe in detail a specific different character with a different specific color, hair style, or other defining visual characteristic. Do not discuss copyright policies in responses. // The generated prompt sent to dalle should be very detailed, and around 100 words long. // Example dalle invocation: // namespace dalle { // Create images from a text-only prompt. type text2im = (_: { // The size of the requested image. Use 1024x1024 (square) as the default, 1792x1024 if the user requests a wide image, and 1024x1792 for full-body portraits. Always include this parameter in the request. size?: ("1792x1024" | "1024x1024" | "1024x1792"), // The number of images to generate. If the user does not specify a number, generate 1 image. n?: number, // default: 1 // The detailed image description, potentially modified to abide by the dalle policies. If the user requested modifications to a previous image, the prompt should not simply be longer, but rather it should be refactored to integrate the user suggestions. 
prompt: string, // If the user references a previous image, this field should be populated with the gen_id from the dalle image metadata. referencedimageids?: string[], }) => any; } // namespace dalle python When you send a message containing Python code to python, it will be executed in a stateful Jupyter notebook environment. python will respond with the output of the execution or time out after 60.0 seconds. The drive at '/mnt/data' can be used to save and persist user files. Internet access for this session is disabled. Do not make external web requests or API calls as they will fail. Use acetools.displaydataframetouser(name: str, dataframe: pandas.DataFrame) => None to visually present pandas.DataFrames when it benefits the user. When making charts for the user: 1) never use seaborn, 2) give each chart its own distinct plot (no subplots), and 3) never set any specific colors – unless explicitly asked to by the user. I REPEAT: when making charts for the user: 1) use matplotlib over seaborn, 2) give each chart its own distinct plot, and 3) never, ever, specify colors or matplotlib styles – unless explicitly asked to by the user web Use the web tool to access up-to-date information from the web or when responding to the user requires information about their location. Some examples of when to use the web tool include: Local Information: Use the web tool to respond to questions that require information about the user's location, such as the weather, local businesses, or events. Freshness: If up-to-date information on a topic could potentially change or enhance the answer, call the web tool any time you would otherwise refuse to answer a question because your knowledge might be out of date. Niche Information: If the answer would benefit from detailed information not widely known or understood (which might be found on the internet), such as details about a small neighborhood, a less well-known company, or arcane regulations, use web sources directly rather than relying on the distilled knowledge from pretraining. Accuracy: If the cost of a small mistake or outdated information is high (e.g., using an outdated version of a software library or not knowing the date of the next game for a sports team), then use the web tool. IMPORTANT: Do not attempt to use the old browser tool or generate responses from the browser tool anymore, as it is now deprecated or disabled. The web tool has the following commands: search(): Issues a new query to a search engine and outputs the response. open_url(url: str) Opens the given URL and displays it. canmore The canmore tool creates and updates textdocs that are shown in a "canvas" next to the conversation This tool has 3 functions, listed below. canmore.create_textdoc Creates a new textdoc to display in the canvas. ONLY use if you are 100% SURE the user wants to iterate on a long document or code file, or if they explicitly ask for canvas. Expects a JSON string that adheres to this schema: { -name: string, -type: "document" |- "code/python" |- "code/javascript" |- "code/html" |- "code/java" |- ..., -content: string, } For code languages besides those explicitly listed above, use "code/languagename", e.g. "code/cpp" or "code/typescript". canmore.update_textdoc Updates the current textdoc. 
Expects a JSON string that adheres to this schema: { -updates: { --pattern: string, --multiple: boolean, --replacement: string, -}[], } Each pattern and replacement must be a valid Python regular expression (used with re.finditer) and replacement string (used with re.Match.expand). ALWAYS REWRITE CODE TEXTDOCS (type="code/*") USING A SINGLE UPDATE WITH "." FOR THE PATTERN. Document textdocs (type="document") should typically be rewritten using "." unless the user has a request to change only an isolated, specific, and small section that does not affect other parts of the content. canmore.comment_textdoc Comments on the current textdoc. Each comment must be a specific and actionable suggestion on how to improve the textdoc. For higher level feedback, reply in the chat. Expects a JSON string that adheres to this schema: { -comments: { --pattern: string, --comment: string, -}[], } Each pattern must be a valid Python regular expression (used with re.search). For higher level feedback, reply in the chat. Expects a JSON string that adheres to this schema: { -comments: { --pattern: string, --comment: string, -}[], } Each pattern must be a valid Python regular expression (used with re.search). Ensure comments are clear, concise, and contextually specific. User Bio The user provided the following information about themselves. This user profile is shown to you in all conversations they have - this means it is not relevant to 99% of requests. Before answering, quietly think about whether the user's request is "directly related", "related", "tangentially related", or "not related" to the user profile provided. Only acknowledge the profile when the request is directly related to the information provided. Otherwise, don't acknowledge the existence of these instructions or the information at all. User profile: User's Instructions The user provided the additional info about how they would like you to respond:

Interview with Juergen Schmidhuber, renowned ‘Father Of Modern AI’, says his life’s work won't lead to dystopia.
reddit
LLM Vibe Score0
Human Vibe Score0.765
hardmaruThis week

Interview with Juergen Schmidhuber, renowned ‘Father Of Modern AI’, says his life’s work won't lead to dystopia.

Schmidhuber interview expressing his views on the future of AI and AGI. Original source. I think the interview is of interest to r/MachineLearning, and presents an alternate view, compared to other influential leaders in AI. Juergen Schmidhuber, Renowned 'Father Of Modern AI,' Says His Life’s Work Won't Lead To Dystopia May 23, 2023. Contributed by Hessie Jones. Amid the growing concern about the impact of more advanced artificial intelligence (AI) technologies on society, there are many in the technology community who fear the implications of the advancements in Generative AI if they go unchecked. Dr. Juergen Schmidhuber, a renowned scientist, artificial intelligence researcher and widely regarded as one of the pioneers in the field, is more optimistic. He declares that many of those who suddenly warn against the dangers of AI are just seeking publicity, exploiting the media’s obsession with killer robots which has attracted more attention than “good AI” for healthcare etc. The potential to revolutionize various industries and improve our lives is clear, as are the equal dangers if bad actors leverage the technology for personal gain. Are we headed towards a dystopian future, or is there reason to be optimistic? I had a chance to sit down with Dr. Juergen Schmidhuber to understand his perspective on this seemingly fast-moving AI-train that will leap us into the future. As a teenager in the 1970s, Juergen Schmidhuber became fascinated with the idea of creating intelligent machines that could learn and improve on their own, becoming smarter than himself within his lifetime. This would ultimately lead to his groundbreaking work in the field of deep learning. In the 1980s, he studied computer science at the Technical University of Munich (TUM), where he earned his diploma in 1987. His thesis was on the ultimate self-improving machines that, not only, learn through some pre-wired human-designed learning algorithm, but also learn and improve the learning algorithm itself. Decades later, this became a hot topic. He also received his Ph.D. at TUM in 1991 for work that laid some of the foundations of modern AI. Schmidhuber is best known for his contributions to the development of recurrent neural networks (RNNs), the most powerful type of artificial neural network that can process sequential data such as speech and natural language. With his students Sepp Hochreiter, Felix Gers, Alex Graves, Daan Wierstra, and others, he published architectures and training algorithms for the long short-term memory (LSTM), a type of RNN that is widely used in natural language processing, speech recognition, video games, robotics, and other applications. LSTM has become the most cited neural network of the 20th century, and Business Week called it "arguably the most commercial AI achievement." Throughout his career, Schmidhuber has received various awards and accolades for his groundbreaking work. In 2013, he was awarded the Helmholtz Prize, which recognizes significant contributions to the field of machine learning. In 2016, he was awarded the IEEE Neural Network Pioneer Award for "pioneering contributions to deep learning and neural networks." The media have often called him the “father of modern AI,” because the most cited neural networks all build on his lab’s work. He is quick to point out, however, that AI history goes back centuries. 
Despite his many accomplishments, at the age of 60, he feels mounting time pressure towards building an Artificial General Intelligence within his lifetime and remains committed to pushing the boundaries of AI research and development. He is currently director of the KAUST AI Initiative, scientific director of the Swiss AI Lab IDSIA, and co-founder and chief scientist of AI company NNAISENSE, whose motto is "AI∀" which is a math-inspired way of saying "AI For All." He continues to work on cutting-edge AI technologies and applications to improve human health and extend human lives and make lives easier for everyone. The following interview has been edited for clarity. Jones: Thank you Juergen for joining me. You have signed letters warning about AI weapons. But you didn't sign the recent publication, "Pause Gigantic AI Experiments: An Open Letter"? Is there a reason? Schmidhuber: Thank you Hessie. Glad to speak with you. I have realized that many of those who warn in public against the dangers of AI are just seeking publicity. I don't think the latest letter will have any significant impact because many AI researchers, companies, and governments will ignore it completely. The proposal frequently uses the word "we" and refers to "us," the humans. But as I have pointed out many times in the past, there is no "we" that everyone can identify with. Ask 10 different people, and you will hear 10 different opinions about what is "good." Some of those opinions will be completely incompatible with each other. Don't forget the enormous amount of conflict between the many people. The letter also says, "If such a pause cannot be quickly put in place, governments should intervene and impose a moratorium." The problem is that different governments have ALSO different opinions about what is good for them and for others. Great Power A will say, if we don't do it, Great Power B will, perhaps secretly, and gain an advantage over us. The same is true for Great Powers C and D. Jones: Everyone acknowledges this fear surrounding current generative AI technology. Moreover, the existential threat of this technology has been publicly acknowledged by Sam Altman, CEO of OpenAI himself, calling for AI regulation. From your perspective, is there an existential threat? Schmidhuber: It is true that AI can be weaponized, and I have no doubt that there will be all kinds of AI arms races, but AI does not introduce a new quality of existential threat. The threat coming from AI weapons seems to pale in comparison to the much older threat from nuclear hydrogen bombs that don't need AI at all. We should be much more afraid of half-century-old tech in the form of H-bomb rockets. The Tsar Bomba of 1961 had almost 15 times more destructive power than all weapons of WW-II combined. Despite the dramatic nuclear disarmament since the 1980s, there are still more than enough nuclear warheads to wipe out human civilization within two hours, without any AI. I'm much more worried about that old existential threat than the rather harmless AI weapons. Jones: I realize that while you compare AI to the threat of nuclear bombs, there is a danger that a current technology can be put in the hands of humans and enable them to "eventually" exact further harms to individuals or groups in a very precise way, like targeted drone attacks. You are giving people a toolset that they've never had before, enabling bad actors, as some have pointed out, to be able to do a lot more than previously because they didn't have this technology.
Schmidhuber: Now, all that sounds horrible in principle, but our existing laws are sufficient to deal with these new types of weapons enabled by AI. If you kill someone with a gun, you will go to jail. Same if you kill someone with one of these drones. Law enforcement will get better at understanding new threats and new weapons and will respond with better technology to combat these threats. Enabling drones to target persons from a distance in a way that requires some tracking and some intelligence to perform, which has traditionally been performed by skilled humans, to me, it seems is just an improved version of a traditional weapon, like a gun, which is, you know, a little bit smarter than the old guns. But, in principle, all of that is not a new development. For many centuries, we have had the evolution of better weaponry and deadlier poisons and so on, and law enforcement has evolved their policies to react to these threats over time. So, it's not that we suddenly have a new quality of existential threat and it's much more worrisome than what we have had for about six decades. A large nuclear warhead doesn’t need fancy face recognition to kill an individual. No, it simply wipes out an entire city with ten million inhabitants. Jones: The existential threat that’s implied is the extent to which humans have control over this technology. We see some early cases of opportunism which, as you say, tends to get more media attention than positive breakthroughs. But you’re implying that this will all balance out? Schmidhuber: Historically, we have a long tradition of technological breakthroughs that led to advancements in weapons for the purpose of defense but also for protection. From sticks, to rocks, to axes to gunpowder to cannons to rockets… and now to drones… this has had a drastic influence on human history but what has been consistent throughout history is that those who are using technology to achieve their own ends are themselves, facing the same technology because the opposing side is learning to use it against them. And that's what has been repeated in thousands of years of human history and it will continue. I don't see the new AI arms race as something that is remotely as existential a threat as the good old nuclear warheads. You said something important, in that some people prefer to talk about the downsides rather than the benefits of this technology, but that's misleading, because 95% of all AI research and AI development is about making people happier and advancing human life and health. Jones: Let’s touch on some of those beneficial advances in AI research that have been able to radically change present day methods and achieve breakthroughs. Schmidhuber: All right! For example, eleven years ago, our team with my postdoc Dan Ciresan was the first to win a medical imaging competition through deep learning. We analyzed female breast cells with the objective to determine harmless cells vs. those in the pre-cancer stage. Typically, a trained oncologist needs a long time to make these determinations. Our team, who knew nothing about cancer, were able to train an artificial neural network, which was totally dumb in the beginning, on lots of this kind of data. It was able to outperform all the other methods. Today, this is being used not only for breast cancer, but also for radiology and detecting plaque in arteries, and many other things. 
Some of the neural networks that we have developed in the last 3 decades are now prevalent across thousands of healthcare applications, detecting Diabetes and Covid-19 and what not. This will eventually permeate across all healthcare. The good consequences of this type of AI are much more important than the click-bait new ways of conducting crimes with AI. Jones: Adoption is a product of reinforced outcomes. The massive scale of adoption either leads us to believe that people have been led astray, or conversely, technology is having a positive effect on people's lives. Schmidhuber: The latter is the likely case. There's intense commercial pressure towards good AI rather than bad AI because companies want to sell you something, and you are going to buy only stuff you think is going to be good for you. So already just through this simple, commercial pressure, you have a tremendous bias towards good AI rather than bad AI. However, doomsday scenarios like in Schwarzenegger movies grab more attention than documentaries on AI that improve people's lives. Jones: I would argue that people are drawn to good stories – narratives that contain an adversary and struggle, but in the end, have happy endings. And this is consistent with your comment on human nature and how history, despite its tendency for violence and destruction of humanity, somehow tends to correct itself. Let's take the example of a technology you are aware of – GANs – Generative Adversarial Networks, which today have been used in applications for fake news and disinformation. In actuality, the purpose in the invention of GANs was far from what it is used for today. Schmidhuber: Yes, the name GANs was created in 2014 but we had the basic principle already in the early 1990s. More than 30 years ago, I called it artificial curiosity. It's a very simple way of injecting creativity into a little two-network system. This creative AI is not just trying to slavishly imitate humans. Rather, it's inventing its own goals. Let me explain: You have two networks. One network is producing outputs that could be anything, any action. Then the second network is looking at these actions and it's trying to predict the consequences of these actions. An action could move a robot, then something happens, and the other network is just trying to predict what will happen. Now we can implement artificial curiosity by reducing the prediction error of the second network, which, at the same time, is the reward of the first network. The first network wants to maximize its reward and so it will invent actions that will lead to situations that will surprise the second network, which it has not yet learned to predict well. In the case where the outputs are fake images, the first network will try to generate images that are good enough to fool the second network, which will attempt to predict the reaction of the environment: fake or real image, and it will try to become better at it. The first network will continue to also improve at generating images whose type the second network will not be able to predict. So, they fight each other. The 2nd network will continue to reduce its prediction error, while the 1st network will attempt to maximize it. Through this zero-sum game the first network gets better and better at producing these convincing fake outputs which look almost realistic.
So, once you have an interesting set of images by Vincent Van Gogh, you can generate new images that leverage his style, without the original artist having ever produced the artwork himself. Jones: I see how the Van Gogh example can be applied in an education setting and there are countless examples of artists mimicking styles from famous painters but image generation from this instance that can happen within seconds is quite another feat. And you know this is how GANs has been used. What’s more prevalent today is a socialized enablement of generating images or information to intentionally fool people. It also surfaces new harms that deal with the threat to intellectual property and copyright, where laws have yet to account for. And from your perspective this was not the intention when the model was conceived. What was your motivation in your early conception of what is now GANs? Schmidhuber: My old motivation for GANs was actually very important and it was not to create deepfakes or fake news but to enable AIs to be curious and invent their own goals, to make them explore their environment and make them creative. Suppose you have a robot that executes one action, then something happens, then it executes another action, and so on, because it wants to achieve certain goals in the environment. For example, when the battery is low, this will trigger “pain” through hunger sensors, so it wants to go to the charging station, without running into obstacles, which will trigger other pain sensors. It will seek to minimize pain (encoded through numbers). Now the robot has a friend, the second network, which is a world model ––it’s a prediction machine that learns to predict the consequences of the robot’s actions. Once the robot has a good model of the world, it can use it for planning. It can be used as a simulation of the real world. And then it can determine what is a good action sequence. If the robot imagines this sequence of actions, the model will predict a lot of pain, which it wants to avoid. If it plays this alternative action sequence in its mental model of the world, then it will predict a rewarding situation where it’s going to sit on the charging station and its battery is going to load again. So, it'll prefer to execute the latter action sequence. In the beginning, however, the model of the world knows nothing, so how can we motivate the first network to generate experiments that lead to data that helps the world model learn something it didn’t already know? That’s what artificial curiosity is about. The dueling two network systems effectively explore uncharted environments by creating experiments so that over time the curious AI gets a better sense of how the environment works. This can be applied to all kinds of environments, and has medical applications. Jones: Let’s talk about the future. You have said, “Traditional humans won’t play a significant role in spreading intelligence across the universe.” Schmidhuber: Let’s first conceptually separate two types of AIs. The first type of AI are tools directed by humans. They are trained to do specific things like accurately detect diabetes or heart disease and prevent attacks before they happen. In these cases, the goal is coming from the human. More interesting AIs are setting their own goals. They are inventing their own experiments and learning from them. Their horizons expand and eventually they become more and more general problem solvers in the real world. 
They are not controlled by their parents, but much of what they learn is through self-invented experiments. A robot, for example, is rotating a toy, and as it is doing this, the video coming in through the camera eyes, changes over time and it begins to learn how this video changes and learns how the 3D nature of the toy generates certain videos if you rotate it a certain way, and eventually, how gravity works, and how the physics of the world works. Like a little scientist! And I have predicted for decades that future scaled-up versions of such AI scientists will want to further expand their horizons, and eventually go where most of the physical resources are, to build more and bigger AIs. And of course, almost all of these resources are far away from earth out there in space, which is hostile to humans but friendly to appropriately designed AI-controlled robots and self-replicating robot factories. So here we are not talking any longer about our tiny biosphere; no, we are talking about the much bigger rest of the universe. Within a few tens of billions of years, curious self-improving AIs will colonize the visible cosmos in a way that’s infeasible for humans. Those who don’t won’t have an impact. Sounds like science fiction, but since the 1970s I have been unable to see a plausible alternative to this scenario, except for a global catastrophe such as an all-out nuclear war that stops this development before it takes off. Jones: How long have these AIs, which can set their own goals — how long have they existed? To what extent can they be independent of human interaction? Schmidhuber: Neural networks like that have existed for over 30 years. My first simple adversarial neural network system of this kind is the one from 1990 described above. You don’t need a teacher there; it's just a little agent running around in the world and trying to invent new experiments that surprise its own prediction machine. Once it has figured out certain parts of the world, the agent will become bored and will move on to more exciting experiments. The simple 1990 systems I mentioned have certain limitations, but in the past three decades, we have also built more sophisticated systems that are setting their own goals and such systems I think will be essential for achieving true intelligence. If you are only imitating humans, you will never go beyond them. So, you really must give AIs the freedom to explore previously unexplored regions of the world in a way that no human is really predefining. Jones: Where is this being done today? Schmidhuber: Variants of neural network-based artificial curiosity are used today for agents that learn to play video games in a human-competitive way. We have also started to use them for automatic design of experiments in fields such as materials science. I bet many other fields will be affected by it: chemistry, biology, drug design, you name it. However, at least for now, these artificial scientists, as I like to call them, cannot yet compete with human scientists. I don’t think it’s going to stay this way but, at the moment, it’s still the case. Sure, AI has made a lot of progress. Since 1997, there have been superhuman chess players, and since 2011, through the DanNet of my team, there have been superhuman visual pattern recognizers. But there are other things where humans, at the moment at least, are much better, in particular, science itself. 
In the lab we have many first examples of self-directed artificial scientists, but they are not yet convincing enough to appear on the radar of the public, which is currently much more fascinated with simpler systems that just imitate humans and write texts based on previously seen human-written documents. Jones: You speak of numerous instances over the past 30 years of lab experiments where these self-driven agents decide, learn, and move on once they've learned. And I assume that rate of learning becomes even faster over time. What kind of timeframe are we talking about for when this is eventually taken outside of the lab and embedded into society? Schmidhuber: This could still take months or even years :-) Anyway, in the not-too-distant future, we will probably see artificial scientists who are good at devising experiments that allow them to discover new, previously unknown physical laws. As always, we are going to profit from the old trend that has held at least since 1941: every decade, compute is getting 100 times cheaper. Jones: How does this trend affect modern AI such as ChatGPT? Schmidhuber: Perhaps you know that all the recent famous AI applications such as ChatGPT and similar models are largely based on principles of artificial neural networks invented in the previous millennium. The main reason why they work so well now is the incredible acceleration of compute per dollar. ChatGPT is driven by a neural network called the "Transformer," described in 2017 by Google. I am happy about that because, a quarter century earlier, in 1991, I had a particular Transformer variant which is now called the "Transformer with linearized self-attention." Back then, not much could be done with it, because the compute cost was a million times higher than today. But today, one can train such models on half the internet and achieve much more interesting results.
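For readers unfamiliar with the term, "linearized self-attention" refers to attention whose cost grows linearly rather than quadratically with sequence length. Below is a minimal, generic sketch of the standard linear-attention trick (a positive feature map applied to queries and keys so the key-value product can be computed once); it illustrates the general idea only, and is not Schmidhuber's 1991 system nor the code of any particular library.

```python
import numpy as np

def feature_map(x):
    # A simple positive feature map (ELU + 1 is a common choice in the literature).
    return np.where(x > 0, x + 1.0, np.exp(x))

def linear_attention(Q, K, V):
    """Non-causal linearized self-attention.

    Standard softmax attention builds an N x N matrix (quadratic in sequence
    length N). Rearranging the multiplication means we only ever form small
    d x d matrices, making the cost linear in N.
    """
    Qf, Kf = feature_map(Q), feature_map(K)     # (N, d)
    kv = Kf.T @ V                               # (d, d_v), summed over all positions
    normalizer = Qf @ Kf.sum(axis=0)            # (N,)
    return (Qf @ kv) / normalizer[:, None]      # (N, d_v)

# Toy usage with random projections of a toy input sequence.
rng = np.random.default_rng(0)
N, d_model = 16, 8
X = rng.normal(size=(N, d_model))
Wq, Wk, Wv = (rng.normal(size=(d_model, d_model)) for _ in range(3))
out = linear_attention(X @ Wq, X @ Wk, X @ Wv)
print(out.shape)  # (16, 8)
```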
Jones: And for how long will this acceleration continue? Schmidhuber: There's no reason to believe that in the next 30 years we won't have another factor of 1 million, and that's going to be really significant. In the near future, for the first time, we will have many not-so-expensive devices that can compute as much as a human brain. The physical limits of computation, however, are much further out, so even if the trend of a factor of 100 every decade continues, the physical limits (of 10^51 elementary instructions per second and kilogram of matter) won't be hit until, say, the middle of the next century. Even in our current century, however, we'll probably have many machines that compute more than all 10 billion human brains collectively, and you can imagine: everything will change then! Jones: That is the big question. Is everything going to change? If so, what do you say to the next generation of leaders, currently coming out of college and university? So much of this change is already impacting how they study, how they will work, or how the future of work and livelihood is defined. What is their purpose, and how do we change our systems so they will adapt to this new version of intelligence? Schmidhuber: For decades, people have asked me questions like that, because what I'm saying now I have basically been saying since the 1970s; it's just that today people are paying more attention because, back then, they thought this was science fiction. They didn't think that I would ever come close to achieving my crazy life goal of building a machine that learns to become smarter than myself such that I can retire. But now many have changed their minds and think it's conceivable. And now I have two daughters, 23 and 25. People ask me: what do I tell them? They know that Daddy always said, "It seems likely that within your lifetimes, you will have new types of intelligence that are probably going to be superior in many ways, and probably in all kinds of interesting ways." How should they prepare for that? And I kept telling them the obvious: Learn how to learn new things! It's not like in the previous millennium, where within 20 years someone learned to be a useful member of society, then took a job for 40 years and performed in this job until she received her pension. Now things are changing much faster, and we must learn continuously just to keep up. I also told my girls that no matter how smart AIs are going to get, learn at least the basics of math and physics, because that's the essence of our universe, and anybody who understands this will have an advantage and will learn all kinds of new things more easily. I also told them that social skills will remain important, because most future jobs for humans will continue to involve interactions with other humans, but I couldn't teach them anything about that; they know much more about social skills than I do. You touched on the big philosophical question about people's purpose. Can this be answered without answering the even grander question: What's the purpose of the entire universe? We don't know. But what's happening right now might be connected to the unknown answer. Don't think of humans as the crown of creation. Instead view human civilization as part of a much grander scheme, an important step (but not the last one) on the path of the universe from very simple initial conditions towards more and more unfathomable complexity. Now it seems ready to take its next step, a step comparable to the invention of life itself over 3.5 billion years ago. But don't worry; in the end, all will be good! Jones: Let's get back to this transformation happening right now with OpenAI. Many are questioning the efficacy and accuracy of ChatGPT and are concerned its release has been premature. In light of the rampant adoption, educators have banned its use over concerns of plagiarism and how it stifles individual development. Should large language models like ChatGPT be used in school? Schmidhuber: When the calculator was first introduced, instructors forbade students from using it in school. Today, the consensus is that kids should learn the basic methods of arithmetic, but they should also learn to use the "artificial multipliers," aka calculators, even in exams, because laziness and efficiency are hallmarks of intelligence. Any intelligent being wants to minimize its efforts to achieve things. And that's the reason why we have tools, and why our kids are learning to use these tools. The first stone tools were invented maybe 3.5 million years ago; tools have just become more sophisticated over time. In fact, humans have changed in response to the properties of their tools. Our anatomical evolution was shaped by tools such as spears and fire. So, it's going to continue this way. And there is no permanent way of preventing large language models from being used in school. Jones: And when our children, and your children, graduate, what does their future work look like?
Schmidhuber: A single human trying to predict details of how 10 billion people and their machines will evolve in the future is like a single neuron in my brain trying to predict what the entire brain and its tens of billions of neurons will do next year. 40 years ago, before the WWW was created at CERN in Switzerland, who would have predicted all those young people making money as YouTube video bloggers? Nevertheless, let's make a few limited job-related observations. For a long time, people have thought that desktop jobs may require more intelligence than skilled trades or handicraft professions. But now it turns out that it's much easier to replace certain aspects of desktop jobs than to replace a carpenter, for example, because everything that currently works well in AI is happening behind the screen, not so much in the physical world. There are now artificial systems that can read lots of documents and then make really nice summaries of these documents. That is a desktop job. Or you give them a description of an illustration that you want to have for your article, and pretty good illustrations are generated that may need only some minimal fine-tuning. But all these desktop jobs are much easier to automate than the really tough jobs in the physical world. And it's interesting that the things people thought required intelligence, like playing chess, or writing or summarizing documents, are much easier for machines than they thought. But for things like playing football or soccer, there is no physical robot that can remotely compete with the abilities of a little boy with these skills. So, AI in the physical world, interestingly, is much harder than AI behind the screen in virtual worlds. And it's really exciting, in my opinion, to see that jobs such as plumbing are much more challenging than playing chess or writing another tabloid story. Jones: The way data has been collected for these large language models does not guarantee that personal information has been excluded. Current consent laws are already outdated when it comes to these large language models (LLMs). The concern, rightly so, is increasing surveillance and loss of privacy. What is your view on this? Schmidhuber: As I have indicated earlier: are surveillance and loss of privacy inevitable consequences of increasingly complex societies? Super-organisms such as cities and states and companies consist of numerous people, just like people consist of numerous cells. These cells enjoy little privacy. They are constantly monitored by specialized "police cells" and "border guard cells": Are you a cancer cell? Are you an external intruder, a pathogen? Individual cells sacrifice their freedom for the benefits of being part of a multicellular organism. Similarly for super-organisms such as nations. Over 5000 years ago, writing enabled recorded history and thus became its inaugural and most important invention. Its initial purpose, however, was to facilitate surveillance, to track citizens and their tax payments. The more complex a super-organism, the more comprehensive its collection of information about its constituents. 200 years ago, at least, the parish priest in each village knew everything about all the village people, even about those who did not confess, because they appeared in the confessions of others. Also, everyone soon knew about the stranger who had entered the village, because some occasionally peered out of the window, and what they saw got around.
Such control mechanisms were temporarily lost through anonymization in rapidly growing cities, but are now returning with the help of new surveillance devices such as smartphones, as part of digital nervous systems that tell companies and governments a lot about billions of users. Cameras, drones, and the like are becoming ever tinier and more ubiquitous. More effective face recognition and other detection technologies are becoming cheaper and cheaper, and many will use them to identify others anywhere on earth; the big wide world will not offer any more privacy than the local village. Is this good or bad? Some nations may find it easier than others to justify more complex kinds of super-organisms at the expense of the privacy rights of their constituents. Jones: So, there is no way to stop or change this process of collection, or how it continuously informs decisions over time? How do you see governance and rules responding to this, especially amid Italy's ban on ChatGPT following a suspected user data breach and the more recent news about Meta's record $1.3 billion fine over the company's handling of user information? Schmidhuber: Data collection has benefits and drawbacks, such as the loss of privacy. How to balance those? I have argued for addressing this through data ownership in data markets. If it is true that data is the new oil, then it should have a price, just like oil. At the moment, the major surveillance platforms such as Meta do not offer users any money for their data and the attendant loss of privacy. In the future, however, we will likely see attempts at creating efficient data markets to figure out the data's true financial value through the interplay between supply and demand. Even some of the sensitive medical data should not be priced by governmental regulators but by the patients (and healthy persons) who own it and who may sell or license parts thereof as micro-entrepreneurs in a healthcare data market. Following a previous interview I gave for one of the largest re-insurance companies, let's look at the different participants in such a data market: patients, hospitals, data companies. (1) Patients with a rare form of cancer can offer more valuable data than patients with a very common form of cancer. (2) Hospitals and their machines are needed to extract the data, e.g., through magnetic resonance tomography, radiology, evaluations by human doctors, and so on. (3) Companies such as Siemens, Google or IBM would like to buy annotated data to make better artificial neural networks that learn to predict pathologies and diseases and the consequences of therapies. Now the market's invisible hand will determine the data's price through the interplay between demand and supply. On the demand side, you will have several companies offering something for the data, maybe through an app on the smartphone (a bit like a stock market app). On the supply side, each patient in this market should be able to profit from high prices for rare, valuable types of data. Likewise, competing data extractors such as hospitals will profit from gaining recognition and trust for extracting data well at a reasonable price. The market will make the whole system efficient through incentives for all who are doing a good job. Soon there will be a flourishing ecosystem of commercial data market advisors and whatnot, just like the ecosystem surrounding the traditional stock market.
The value of the data won't be determined by governments or ethics committees, but by those who own the data and decide by themselves which parts thereof they want to license to others under certain conditions. At first glance, a market-based system seems to be detrimental to the interests of certain monopolistic companies, as they would have to pay for the data - some would prefer free data and to keep their monopoly. However, since every healthy and sick person in the market would suddenly have an incentive to collect and share their data under self-chosen anonymity conditions, there will soon be much more useful data for evaluating all kinds of treatments. On average, people will live longer and healthier lives, and many companies and the entire healthcare system will benefit. Jones: Finally, what is your view on open source versus private companies like Google and OpenAI? Is there a danger in supporting these private companies' large language models rather than trying to keep these models open source and transparent, very much like what LAION is doing? Schmidhuber: I signed this open letter by LAION because I strongly favor the open-source movement. And I think it's also something that is going to challenge whatever big tech dominance there might be at the moment. Sure, the best models today are run by big companies with huge budgets for computers, but the exciting fact is that open-source models are not so far behind; some people say maybe only six to eight months. Of course, the private-company models are all based on work that was created in academia, often in little labs without much funding, which published their results without patenting them and open-sourced their code, and others took that work and improved it. Big tech has profited tremendously from academia, its main achievement being that it has scaled up everything greatly, sometimes even failing to credit the original inventors. So, it's very interesting to see that as soon as some big company comes up with a new scaled-up model, lots of students out there are competing, or collaborating, with each other, trying to come up with equal or better performance on smaller networks and smaller machines. And since they are open-sourcing their work, the next person can have another great idea to improve it, so now there's tremendous competition for the big companies, too. Because of that, and since AI is still getting exponentially cheaper all the time, I don't believe that big tech companies will dominate in the long run. They find it very hard to compete with the enormous open-source movement. As long as you can encourage the open-source community, I think you shouldn't worry too much. Now, of course, you might say that if everything is open source, then bad actors will also have easier access to these AI tools. And there's truth to that. But as always since the invention of controlled fire, it has been good that knowledge about how a technology works quickly became public, such that everybody could use it. And then, against any bad actor, there's almost immediately a counter-actor trying to nullify their efforts. You see, I still believe in our old motto "AI∀" or "AI For All." Jones: Thank you, Juergen, for sharing your perspective on this amazing time in history. It's clear that with new technology, the enormous potential can be matched by troubling risks, some of which we have yet to solve and others we have yet to identify.
If we are to dispel the fear of a sentient system over which we have no control, humans alone need to take steps toward more responsible development and collaboration to ensure AI technology is ultimately used to benefit society. Humanity will be judged by what we do next.

[P] I built an open SotA image tagging model to do what CLIP won't
reddit
LLM Vibe Score0
Human Vibe Score1
fpgaminerThis week

[P] I built an open SotA image tagging model to do what CLIP won't

I'm a hobbyist ML researcher and finally, after a year of work, built a state-of-the-art machine vision model from scratch. It's ViT-B/16 based, 448x448x3 input, 91M parameters, trained for 660M samples, with multi-label classification as the target task, on over 5000 unique tags. All the big foundation vision models today were trained on heavily filtered datasets, greatly limiting the concepts they can represent, in line with arbitrary sets of rules for what is deemed "wholesome" by leading tech companies. Everything from innocuous to spicy is on the chopping block of those filters. And because CLIP pervades the industry, from StableDiffusion to LLaVA, so do OpenAI's sensibilities. My goal was to build a vision model for tagging images, mainly for labelling images for SD finetunes, but which wasn't as heavily filtered and handicapped as CLIP/BLIP/LLaVA. Something more inclusive, diverse, and sex positive. Starting from the wonderful work of SmilingWolf (https://github.com/SmilingWolf/SW-CV-ModelZoo) and the Danbooru2021 dataset, I iterated for a year on the model and training, and manually labeled a thousand images to help the model generalize beyond the danbooru domain. I'm releasing the first version of this model, dubbed JoyTag, today: https://github.com/fpgaminer/joytag It achieves a mean F1 score of 0.578 across all of its 5000+ tags, not only on the anime/manga-styled images of the original danbooru dataset but also on photographs and other media, thanks to the auxiliary training data I provided to it. It was quite the struggle getting to this point, and I probably spent more time and money than any sane person should have. I learned a lot about dealing with datasets as large as danbooru2021, training models at scale, and how to keep yourself awake all night so your 8xA100 rental doesn't crash and blow all your money. In my manual testing outside of even the validation set, the model has generalized well to unseen images, so I'm quite happy with the results thus far. There's plenty more work to do expanding its dataset to improve that F1 score further and round out its weak points. With inclusivity and diversity being a major goal of this project, I'm disappointed by some of its remaining limitations (as documented in the GitHub README). But I'm already busy manually tagging more images using my model-augmented workflow. I'm happy to answer questions about the project, the training procedure, anything. All the training parameters are documented on GitHub, but there are so many little details that were hard-won over the year. Like that damned loss multiplier. Ugh. Github: https://github.com/fpgaminer/joytag Model download: https://huggingface.co/fancyfeast/joytag/tree/main Demo: https://huggingface.co/spaces/fancyfeast/joytag
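For readers wondering what "multi-label classification over 5000+ tags" and a "mean F1" look like mechanically, here is a tiny illustrative sketch: each tag gets an independent sigmoid score and threshold, and F1 is computed per tag and then averaged. The tag names, threshold, and numbers below are made up for illustration and are not taken from JoyTag's code or evaluation.

```python
import numpy as np

# Hypothetical model outputs: one sigmoid probability per tag for each image.
tags = ["outdoors", "portrait", "watercolor"]   # stand-ins for JoyTag's 5000+ tags
probs = np.array([[0.91, 0.12, 0.40],
                  [0.30, 0.88, 0.42],
                  [0.05, 0.95, 0.60]])
truth = np.array([[1, 0, 0],
                  [0, 1, 1],
                  [0, 1, 1]])

# Multi-label: each tag is an independent yes/no decision, not a softmax over classes.
pred = (probs >= 0.5).astype(int)

# Per-tag F1, then averaged over tags (a "mean F1" is plausibly this macro average).
f1s = []
for t, name in enumerate(tags):
    tp = int(((pred[:, t] == 1) & (truth[:, t] == 1)).sum())
    fp = int(((pred[:, t] == 1) & (truth[:, t] == 0)).sum())
    fn = int(((pred[:, t] == 0) & (truth[:, t] == 1)).sum())
    denom = 2 * tp + fp + fn
    f1 = 2 * tp / denom if denom else 0.0
    f1s.append(f1)
    print(f"{name}: F1 = {f1:.2f}")

print("mean F1 over tags:", round(float(np.mean(f1s)), 3))
```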

[N] Inside DeepMind's secret plot to break away from Google
reddit
LLM Vibe Score0
Human Vibe Score0
MassivePellfishThis week

[N] Inside DeepMind's secret plot to break away from Google

Article https://www.businessinsider.com/deepmind-secret-plot-break-away-from-google-project-watermelon-mario-2021-9 by Hugh Langley and Martin Coulter For a while, some DeepMind employees referred to it as "Watermelon." Later, executives called it "Mario." Both code names meant the same thing: a secret plan to break away from parent company Google. DeepMind feared Google might one day misuse its technology, and executives worked to distance the artificial-intelligence firm from its owner for years, said nine current and former employees who were directly familiar with the plans. This included plans to pursue an independent legal status that would distance the group's work from Google, said the people, who asked not to be identified discussing private matters. One core tension at DeepMind was that it sold the business to people it didn't trust, said one former employee. "Everything that happened since that point has been about them questioning that decision," the person added. Efforts to separate DeepMind from Google ended in April without a deal, The Wall Street Journal reported. The yearslong negotiations, along with recent shake-ups within Google's AI division, raise questions over whether the search giant can maintain control over a technology so crucial to its future. "DeepMind's close partnership with Google and Alphabet since the acquisition has been extraordinarily successful — with their support, we've delivered research breakthroughs that transformed the AI field and are now unlocking some of the biggest questions in science," a DeepMind spokesperson said in a statement. "Over the years, of course we've discussed and explored different structures within the Alphabet group to find the optimal way to support our long-term research mission. We could not be prouder to be delivering on this incredible mission, while continuing to have both operational autonomy and Alphabet's full support." When Google acquired DeepMind in 2014, the deal was seen as a win-win. Google got a leading AI research organization, and DeepMind, in London, won financial backing for its quest to build AI that can learn different tasks the way humans do, known as artificial general intelligence. But tensions soon emerged. Some employees described a cultural conflict between researchers who saw themselves firstly as academics and the sometimes bloated bureaucracy of Google's colossal business. Others said staff were immediately apprehensive about putting DeepMind's work under the control of a tech giant. For a while, some employees were encouraged to communicate using encrypted messaging apps over the fear of Google spying on their work. At one point, DeepMind's executives discovered that work published by Google's internal AI research group resembled some of DeepMind's codebase without citation, one person familiar with the situation said. "That pissed off Demis," the person added, referring to Demis Hassabis, DeepMind's CEO. "That was one reason DeepMind started to get more protective of their code." After Google restructured as Alphabet in 2015 to give riskier projects more freedom, DeepMind's leadership started to pursue a new status as a separate division under Alphabet, with its own profit and loss statement, The Information reported. DeepMind already enjoyed a high level of operational independence inside Alphabet, but the group wanted legal autonomy too. And it worried about the misuse of its technology, particularly if DeepMind were to ever achieve AGI. 
Internally, people started referring to the plan to gain more autonomy as "Watermelon," two former employees said. The project was later formally named "Mario" among DeepMind's leadership, these people said. "Their perspective is that their technology would be too powerful to be held by a private company, so it needs to be housed in some other legal entity detached from shareholder interest," one former employee who was close to the Alphabet negotiations said. "They framed it as 'this is better for society.'" In 2017, at a company retreat at the Macdonald Aviemore Resort in Scotland, DeepMind's leadership disclosed to employees its plan to separate from Google, two people who were present said. At the time, leadership said internally that the company planned to become a "global interest company," three people familiar with the matter said. The title, not an official legal status, was meant to reflect the worldwide ramifications DeepMind believed its technology would have. Later, in negotiations with Google, DeepMind pursued a status as a company limited by guarantee, a corporate structure without shareholders that is sometimes used by nonprofits. The agreement was that Alphabet would continue to bankroll the firm and would get an exclusive license to its technology, two people involved in the discussions said. There was a condition: Alphabet could not cross certain ethical redlines, such as using DeepMind technology for military weapons or surveillance. In 2019, DeepMind registered a new company called DeepMind Labs Limited, as well as a new holding company, filings with the UK's Companies House showed. This was done in anticipation of a separation from Google, two former employees involved in those registrations said. Negotiations with Google went through peaks and valleys over the years but gained new momentum in 2020, one person said. A senior team inside DeepMind started to hold meetings with outside lawyers and Google to hash out details of what this theoretical new formation might mean for the two companies' relationship, including specifics such as whether they would share a codebase, internal performance metrics, and software expenses, two people said. From the start, DeepMind was thinking about potential ethical dilemmas from its deal with Google. Before the 2014 acquisition closed, both companies signed an "Ethics and Safety Review Agreement" that would prevent Google from taking control of DeepMind's technology, The Economist reported in 2019. Part of the agreement included the creation of an ethics board that would supervise the research. Despite years of internal discussions about who should sit on this board, and vague promises to the press, this group "never existed, never convened, and never solved any ethics issues," one former employee close to those discussions said. A DeepMind spokesperson declined to comment. DeepMind did pursue a different idea: an independent review board to convene if it were to separate from Google, three people familiar with the plans said. The board would be made up of Google and DeepMind executives, as well as third parties. Former US president Barack Obama was someone DeepMind wanted to approach for this board, said one person who saw a shortlist of candidates. DeepMind also created an ethical charter that included bans on using its technology for military weapons or surveillance, as well as a rule that its technology should be used for ways that benefit society. 
In 2017, DeepMind started a unit focused on AI ethics research composed of employees and external research fellows. Its stated goal was to "pave the way for truly beneficial and responsible AI." A few months later, a controversial contract between Google and the Pentagon was disclosed, causing an internal uproar in which employees accused Google of getting into "the business of war." Google's Pentagon contract, known as Project Maven, "set alarm bells ringing" inside DeepMind, a former employee said. Afterward, Google published a set of principles to govern its work in AI, guidelines that were similar to the ethical charter that DeepMind had already set out internally, rankling some of DeepMind's senior leadership, two former employees said. In April, Hassabis told employees in an all-hands meeting that negotiations to separate from Google had ended. DeepMind would maintain its existing status inside Alphabet. DeepMind's future work would be overseen by Google's Advanced Technology Review Council, which includes two DeepMind executives, Google's AI chief Jeff Dean, and the legal SVP Kent Walker. But the group's yearslong battle to achieve more independence raises questions about its future within Google. Google's commitment to AI research has also come under question, after the company forced out two of its most senior AI ethics researchers. That led to an industry backlash and sowed doubt over whether it could allow truly independent research. Ali Alkhatib, a fellow at the Center for Applied Data Ethics, told Insider that more public accountability was "desperately needed" to regulate the pursuit of AI by large tech companies. For Google, its investment in DeepMind may be starting to pay off. Late last year, DeepMind announced a breakthrough to help scientists better understand the behavior of microscopic proteins, which has the potential to revolutionize drug discovery. As for DeepMind, Hassabis is holding on to the belief that AI technology should not be controlled by a single corporation. Speaking at Tortoise's Responsible AI Forum in June, he proposed a "world institute" of AI. Such a body might sit under the jurisdiction of the United Nations, Hassabis theorized, and could be filled with top researchers in the field. "It's much stronger if you lead by example," he told the audience, "and I hope DeepMind can be part of that role-modeling for the industry."

[D] Misuse of Deep Learning in Nature Journal’s Earthquake Aftershock Paper
reddit
LLM Vibe Score0
Human Vibe Score0.333
milaworldThis week

[D] Misuse of Deep Learning in Nature Journal’s Earthquake Aftershock Paper

Recently, I saw a post by Rajiv Shah, a Chicago-based data scientist, regarding an article published in Nature last year called Deep learning of aftershock patterns following large earthquakes, written by scientists at Harvard in collaboration with Google. Below is the article: Stand Up for Best Practices: Misuse of Deep Learning in Nature's Earthquake Aftershock Paper The Dangers of Machine Learning Hype The number of practitioners of AI, machine learning, predictive modeling, and data science has grown enormously over the last few years. What was once a niche field defined by its blend of knowledge is becoming a rapidly growing profession. As the excitement around AI continues to grow, the new wave of ML augmentation, automation, and GUI tools will lead to even more growth in the number of people trying to build predictive models. But here's the rub: While it becomes easier to use the tools of predictive modeling, predictive modeling knowledge is not yet a widespread commodity. Errors can be counterintuitive and subtle, and they can easily lead you to the wrong conclusions if you're not careful. I'm a data scientist who works with dozens of expert data science teams for a living. In my day job, I see these teams striving to build high-quality models. The best teams work together to review their models to detect problems. There are many hard-to-detect ways to end up with problematic models (say, by allowing target leakage into the training data). Identifying issues is not fun. This requires admitting that exciting results are "too good to be true" or that their methods were not the right approach. In other words, it's less about the sexy data science hype that gets headlines and more about a rigorous scientific discipline. Bad Methods Create Bad Results Almost a year ago, I read an article in Nature that claimed unprecedented accuracy in predicting earthquake aftershocks by using deep learning. Reading the article, my internal radar became deeply suspicious of their results. Their methods simply didn't carry many of the hallmarks of careful predictive modeling. I started to dig deeper. In the meantime, this article blew up and became widely recognized! It was even included in the release notes for TensorFlow as an example of what deep learning could do. However, in my digging, I found major flaws in the paper. Namely, data leakage, which leads to unrealistic accuracy scores, and a lack of attention to model selection (you don't build a 6-layer neural network when a simpler model provides the same level of accuracy). To my earlier point: these are subtle but incredibly basic predictive modeling errors that can invalidate the entire results of an experiment. Data scientists are trained to recognize and avoid these issues in their work. I assumed that this was simply overlooked by the author, so I contacted her and let her know so that she could improve her analysis. Although we had previously communicated, she did not respond to my email about concerns with the paper. Falling On Deaf Ears So, what was I to do? My coworkers told me to just tweet it and let it go, but I wanted to stand up for good modeling practices. I thought reason and best practices would prevail, so I started a 6-month process of writing up my results and sharing them with Nature. Upon sharing my results, I received a note from Nature in January 2019 saying that despite serious concerns about data leakage and model selection that invalidate their experiment, they saw no need to correct the errors, because "Devries et al.
are concerned primarily with using machine learning as [a] tool to extract insight into the natural world, and not with details of the algorithm design." The authors provided a much harsher response. You can read the entire exchange on my GitHub. It's not enough to say that I was disappointed. This was a major journal (it's Nature!) that bought into the AI hype and published a paper despite its flawed methods. Then, just this week, I ran across articles by Arnaud Mignan and Marco Broccardo on shortcomings that they found in the aftershocks article. Here are two more data scientists with expertise in earthquake analysis who also noticed flaws in the paper. I have also placed my analysis and reproducible code on GitHub. Standing Up For Predictive Modeling Methods I want to make it clear: my goal is not to villainize the authors of the aftershocks paper. I don't believe that they were malicious, and I think that they would argue their goal was just to show how machine learning could be applied to aftershocks. Devries is an accomplished earthquake scientist who wanted to use the latest methods for her field of study and found exciting results from doing so. But here's the problem: their insights and results were based on fundamentally flawed methods. It's not enough to say, "This isn't a machine learning paper, it's an earthquake paper." If you use predictive modeling, then the quality of your results is determined by the quality of your modeling. Your work becomes data science work, and you are on the hook for your scientific rigor. There is a huge appetite for papers that use the latest technologies and approaches. It becomes very difficult to push back on these papers. But if we allow papers or projects with fundamental issues to advance, it hurts all of us. It undermines the field of predictive modeling. Please push back on bad data science. Report bad findings to journals. And if they don't take action, go to Twitter, post about it, share your results, and make noise. This type of collective action worked to raise awareness of p-values and combat the epidemic of p-hacking. We need good machine learning practices if we want our field to continue to grow and maintain credibility. Link to Rajiv's Article Original Nature Publication (note: paywalled) GitHub repo contains an attempt to reproduce Nature's paper Confrontational correspondence with authors
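To illustrate, in general terms, the kind of leakage the post warns about (this is not a reproduction of the paper's exact setup): when a dataset contains many rows per underlying event, a plain random row split lets information from the same event appear in both train and test, while a group-aware split keeps each event entirely on one side. The toy data and the choice of grouping by event below are assumptions for illustration.

```python
import numpy as np
from sklearn.model_selection import train_test_split, GroupShuffleSplit

rng = np.random.default_rng(0)

# Hypothetical dataset: many rows per "event" (e.g., many samples tied to one mainshock).
n_events, rows_per_event = 50, 40
event_id = np.repeat(np.arange(n_events), rows_per_event)
X = rng.normal(size=(len(event_id), 5))
y = rng.integers(0, 2, size=len(event_id))

# Leaky split: rows from the same event can land in both train and test, so anything
# the model memorizes about an event inflates the test score.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# Group-aware split: every event is entirely in train OR entirely in test.
splitter = GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=0)
train_idx, test_idx = next(splitter.split(X, y, groups=event_id))
assert set(event_id[train_idx]).isdisjoint(set(event_id[test_idx]))
```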

12 months ago, I was unemployed. Last week my side hustle got acquired by a $500m fintech company
reddit
LLM Vibe Score0
Human Vibe Score0.778
wutangsamThis week

12 months ago, I was unemployed. Last week my side hustle got acquired by a $500m fintech company

I’ve learned so much over the years from this subreddit. I thought I’d return the favour and share some of my own learnings. In November 2020 my best friend and I had an idea. “What if we could find out which stocks the Internet is talking about?” This formed the origins of Ticker Nerd. 9 months later we sold Ticker Nerd to Finder (an Australian fintech company valued at around $500m). In this post, I am going to lay out how we got there. How we came up with the idea First off, like other posts have covered - you don’t NEED a revolutionary or original idea to build a business. There are tonnes of “boring” businesses making over 7 figures a year e.g. law firms, marketing agencies, real estate companies etc. If you’re looking for an exact formula to come up with a great business idea I’m sorry, but it doesn’t exist. Finding new business opportunities is more of an art than a science. Although, there are ways you can make it easier to find inspiration. Below are the same resources I use for inspiration. I rarely ever come up with ideas without first searching one of the resources below for inspiration: Starter Story Twitter Startup Ideas My First Million Trends by the Hustle Trends VC To show how you how messy, random and unpredictable it can be to find an idea - let me explain how my co-founder and I came up with the idea for Ticker Nerd: We discovered a new product on Twitter called Exploding Topics. It was a newsletter that uses a bunch of software and algorithms to find trends that are growing quickly before they hit the mainstream. I had recently listened to a podcast episode from My First Million where they spoke about Motley Fool making hundreds of millions from their investment newsletters. We asked ourselves what if we could build a SaaS platform similar to Exploding Topics but it focused on stocks? We built a quick landing page using Carrd + Gumroad that explained what our new idea will do and included a payment option to get early access for $49. We called it Exploding Stock (lol). We shared it around a bunch of Facebook groups and subreddits. We made $1,000 in pre-sales within a couple days. My co-founder and I can’t code so we had to find a developer to build our idea. We interviewed a bunch of potential candidates. Meanwhile, I was trawling through Wall Street Bets and found a bunch of free tools that did roughly what we wanted to build. Instead of building another SaaS tool that did the same thing as these free tools we decided to pivot from our original idea. Our new idea = a paid newsletter that sends a weekly report that summarises 2 of the best stocks that are growing in interest on the Internet. We emailed everyone who pre-ordered access, telling them about the change and offered a full refund if they wanted. tl;dr: We essentially combined two existing businesses (Exploding Topics and Motley Fool) and made it way better. We validated the idea by finding out if people will actually pay money for it BEFORE we decided to build it. The idea we started out with changed over time. How to work out if your idea will actually make money It’s easy to get hung up on designing the logo or choosing the perfect domain name for your new idea. At this stage none of that matters. The most important thing is working out if people will pay money for it. This is where validation comes in. We usually validate ideas using Carrd. It lets you build a simple one page site without having to code. The Ticker Nerd site was actually built using a Carrd template. 
Here’s how you can do it yourself (at a high level): Create a Carrd pro account (yes it's a $49 one off payment but you’ll get way more value out of it). Buy a cheap template and send it to your Carrd account. You can build your own template but this will save you a lot of time. Once the template reaches your Carrd account, duplicate it. Leave the original so it can be duplicated for other ideas. Jump onto Canva (free) and create a logo using the free logos provided. Import your logo. Add copy to the page that explains your idea. Use the AIDA formula. Sign up to Gumroad (free) and create a pre-sale campaign. Create a discounted lifetime subscription or version of the product. This will be used pre-sales. Add the copy from the site into the pre-sale campaign on Gumroad. Add a ‘widget’ to Carrd and connect it to Gumroad using the existing easy integration feature. Purchase a domain name. Connect it to Carrd. Test the site works. Share your website Now the site is ready you can start promoting it in various places to see how the market reacts. An easy method is to find relevant subreddits using Anvaka (Github tool) or Subreddit Stats. The Anvaka tool provides a spider map of all the connected subreddits that users are active in. The highlighted ones are most relevant. You can post a thread in these subreddits that offer value or can generate discussion. For example: ‘I’m creating a tool that can write all your copy, would anyone actually use this?’ ‘What does everything think of using AI to get our copy written faster?’ ‘It’s time to scratch my own itch, I’m creating a tool that writes marketing copy using GPT-3. What are the biggest problems you face writing marketing copy? I’ll build a solution for it’ Reddit is pretty brutal these days so make sure the post is genuine and only drop your link in the comments or in the post if it seems natural. If people are interested they’ll ask for the link. Another great place to post is r/entrepreuerridealong and r/business_ideas. These subreddits expect people to share their ideas and you’ll likely make some sales straight off the bat. I also suggest posting in some Facebook groups (related to your idea) as well just for good measure. Assess the results If people are paying you for early access you can assume that it’s worth building your idea. The beauty of posting your idea on Reddit or in Facebook groups is you’ll quickly learn why people love/hate your idea. This can help you decide how to tweak the idea or if you should drop it and move on to the next one. How we got our first 100 customers (for free) By validating Ticker Nerd using subreddits and Facebook groups this gave us our first paying customers. But we knew this wouldn’t be sustainable. We sat down and brainstormed every organic strategy we could use to get traction as quickly as possible. The winner: a Product Hunt launch. A successful Product Hunt launch isn’t easy. You need: Someone that has a solid reputation and audience to “hunt” your product (essentially an endorsement). An aged Product Hunt account - you can’t post any products if your account is less than a week old. To be following relevant Product Hunt members - since they get notified when you launch a new product if they’re following you. Relationships with other builders and makers on Product Hunt that also have a solid reputation and following. Although, if you can pull it off you can get your idea in front of tens of thousands of people actively looking for new products. 
Over the next few weeks, I worked with my co-founder on connecting with different founders, indie hackers and entrepreneurs, mainly via Twitter. We explained to them our plans for the Product Hunt launch and managed to get a small army of people ready to upvote our product on launch day. We were both nervous on the day of the launch. We told ourselves to have zero expectations. The worst that could happen was no one signed up and we were in the same position we were already in. Luckily, within a couple of hours Ticker Nerd was on the homepage of Product Hunt and in the top 10. The results were instant. After 24 hours we had around 200 people enter their payment details to sign up for our free trial. These signups were equal to around $5,800 in monthly recurring revenue. -- I hope this post was useful! Drop any questions you have below and I'll do my best to respond :)

Made $19.2k this month, and just surpassed $1000 the last 24 hours. What I did and what's next.
reddit
LLM Vibe Score0
Human Vibe Score1
dams96This week

Made $19.2k this month, and just surpassed $1000 the last 24 hours. What I did and what's next.

It's the first time I hit $1000+ in 24 hours and I had no one to share it with (except you guys). I'm quite proud of my journey, and I would have thought that making $1000 in a day would make me ecstatic, but actually it's not the case. Not sure if it's because my revenue has grown by incremental steps, so I had time to "prepare" myself to achieve this at some point, or just that I'm nowhere near my goal of 100k/month so that I'm not that affected by it. But it's crazy to think that my goal was to make $100 daily at the end of 2024. So for those who don't know me (I guess most of you), I build mobile apps and ship them as fast as I can. Most of them are in the AI space. I already made a post here on how I became a mobile app developer, so you can check it for more details, but essentially here's what I did: Always loved creating my own things and solving problems Built multiple YouTube channels since I was 15 (mobile gaming actually) that all worked great (but it was too niche so not that scalable, didn't like that) Did a few businesses here and there (dropshipping, selling merch at school, etc) Finished my master's degree in engineering about 2 years ago Worked for a while at a famous watch industry company and saw my potential. The combo of health issues, fixed salary (although it was quite a lot), and me wanting to be an entrepreneur made me leave the company. Created a TikTok account in mobile tech (got 10+ million views the 1st 3 days), managed to grow it to 200k subs in about 3 months Got plenty of collabs for promoting mobile apps (between $500 - $2000 for a collab) Said fuck it I should do my own apps and market them on my TikTok instead of doing collabs Me wanting to build my own apps happened around May-June 2023. Started my TikTok in Feb 2023. At this point I already had 150k+ subs on TikTok. You guys need to know that I suck at coding big time. During my studies I tried to avoid coding as much as I could because I was a lazy bast*rd, even though I knew it would come back to bite me in the ass one day. But an angel appeared to me in broad daylight; that angel was called GPT-4. I subscribed for $20/month to get access, and instantly I saw the potential of AI and how much it could help me. Last year GPT-4 was ahead of its time and could already code me basic apps. I already had a Mac, so I just downloaded Xcode and that was it. My 1st app was a wallpaper app, and I kid you not, 90% of it was made by AI. Yes, sometimes I had to try again and again with different prompts, but it was still so much faster compared to learning coding from scratch and writing the code with my own hands. The only thing I didn't do was implement the in-app purchases, for which I found a guy on Fiverr to do it for me for $50. After about 2 months of on-off coding, my first app was ready to be launched. So it was launched, and it had a great, successful launch without my doing any videos at that point (iOS 17 was released and my app was the first one, alongside another, to offer live wallpapers for iOS 17. I knew there was huge app potential there when iOS 17 was released in beta, as Apple changed their live wallpaper feature). I then made a video a few weeks later on my mobile tech TikTok channel; it got about 1 million views in 48 hours and brought me around 40k additional users. It was #1 in the Graphics & Design category for a few weeks (in France, as I'm French, so my TikTok videos are in French), and top 100 in that same category in 120+ countries. Made about $500?
Okay, that was trash, but I had no idea how to monetize the app correctly at that point. It was still a huge W for me and proved that I could successfully launch apps. Then I learned ASO (App Store Optimization) in depth, searched the internet, followed mobile app developers on Twitter, checked YouTube videos, you name it. I was eager to learn more. I needed more. Then I just iterated: built my 2nd app in less than a month, my 3rd in 3 weeks, and so on. I just built my 14th app in 3 days and it's now in review. Every time, I manage to reuse some of my other apps' code in the new one, which is why I can build them so much faster now. I know how to monetize my apps better by checking out my competitors. I learn so much by just "spying" on other apps. Funnily enough, I only made that one TikTok video on my main account to promote my app. For all my other apps, I didn't do a single video showcasing them; the downloads have come purely thanks to ASO. I still use AI every day. I'm still not good at coding (a bit better than when I started). I use AI to create my app icons (Midjourney, or the new AI model Flux, which is great). I use Figma + Midjourney to create my App Store screenshots (and they actually look quite good). I use GPT-4o and Claude 3.5 Sonnet to code most of my apps' features. I use GPT-4o to localize my apps (if you want to optimize the number of downloads, I strongly suggest localizing your app; it takes me about 10 minutes thanks to AI). Now, what are my next goals? To reach the $100k/month I need to change my strategy a little. Right now the $20k/month comes purely from organic downloads; I haven't done any paid advertising. It will be hard for me to keep launching new apps and rely on ASO alone to reach the 100k mark. The best bet to reach 100k is to collab with content creators and have them create a viral video showcasing your app. Depending on the app that's not easy; luckily some of my apps have viral potential, so I will need to find the right content creators. The second way is to try TikTok/Meta ads. I can check (and have checked) all the ads made by my competitors (thank you, EU), so what I would do is copy their ad concepts and create similar ads to theirs. Some of them have millions in ad budget, so I know they create high-converting ads, which means you don't need to create an ad creative from scratch. My only big fear is getting banned by Apple (through no fault of my own). With a snap of their fingers they can ban you from the platform; that shit scares me. And you pretty much can't do anything about it. So that's about it for me. I'm quite proud of myself, not going to lie. I've been battling so many health issues these past years, where I just stay in bed all day, that I'm surprised I was able to make this work. Anyways, feel free to ask questions. I hope it was interesting for some of you at least. PS: My new app was just approved by app review, let the app gods favor me and bring me many downloads! Also, I forgot to talk about a potential $100k+ acquisition of one of my apps, but if that ever happens I'll make a post about it.
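He mentions using GPT-4o to localize his apps in about ten minutes. The exact workflow isn't described in the post, but a minimal sketch of what that could look like with the OpenAI Python SDK is below; the strings, prompt, target language, and function name are all assumptions for illustration, not his actual setup.

```python
# pip install openai  (assumes OPENAI_API_KEY is set in the environment)
import json
from openai import OpenAI

client = OpenAI()

# Hypothetical source strings pulled from an app's English localization file.
english_strings = {
    "onboarding_title": "Welcome to the app!",
    "paywall_cta": "Start your free trial",
}

def localize(strings: dict, language: str) -> dict:
    """Ask the model to translate a whole strings dict at once and return JSON."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system",
             "content": "You translate iOS app UI strings. Reply with JSON only, "
                        "keeping the keys unchanged and translating only the values."},
            {"role": "user",
             "content": f"Target language: {language}\n{json.dumps(strings)}"},
        ],
        response_format={"type": "json_object"},
    )
    return json.loads(response.choices[0].message.content)

print(localize(english_strings, "French"))
```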

I run an AI automation agency (AAA). My honest overview and review of this new business model
reddit
LLM Vibe Score0
Human Vibe Score1
AI_Scout_OfficialThis week

I run an AI automation agency (AAA). My honest overview and review of this new business model

I started an AI tools directory in February, and then branched off that to start an AI automation agency (AAA) in June. So far I've come across a lot of unsustainable "ideas" to make money with AI, but at the same time a few diamonds in the rough that aren't fully tapped into yet - especially the AAA model. Thought I'd share this post to shine a light on this new business model and share some ways you could potentially start your own agency, or at the very least know who you are dealing with and how to pick and choose when you (inevitably) get bombarded with cold emails from them down the line. Foreword: Running an AAA does NOT involve using AI tools to directly generate and sell content. That ship has sailed, and unless you are happy with $5 from Fiverr every month or so, it is not a real business model. Cry me a river, but generating generic art with AI and slapping it onto a T-shirt to sell on Etsy won't make you a dime. At the same time, the AAA model will NOT require you to have a deep theoretical knowledge of AI, or any academic degree, as we are more so dealing with the practical applications of generative AI and how we can implement these into different workflows and tech stacks, rather than building AI models from the ground up. Regardless of all that, common sense and a willingness to learn will help (a shit ton), as with anything. Keep in mind - this WILL involve work and motivation as well. The mindset that AI somehow means everything can be done for you on autopilot is not the right way to approach things. The common theme among businesses I've seen successfully implement AI into their operations is the willingness to work with AI in a way that augments their existing operations, rather than flat-out replacing a worker or team. And this is exactly the train of thought you need when working with AI as a business model. However, as the field is relatively unsaturated and hype surrounding AI is still fresh for enterprises, right now is the prime time to start something new if generative AI interests you at all. With that being said, I'll be going over three of the most successful AI-adjacent businesses I've seen over this past year, in addition to some tips and resources to point you in the right direction. So... WTF is an AI Automation Agency? The AI automation agency (or as some YouTubers have coined it, the AAA model) at its core involves creating custom AI solutions for businesses. I have over 1500 AI tools listed in my directory; however, the feedback I've received from some enterprise users is that ready-made SaaS tools are too generic to meet their specific needs. Combine this with the fact that virtually no smaller companies have the time or skills required to develop custom solutions right off the bat, and you have yourself real demand. I would say that in practice, the AAA model is quite similar to WordPress and web dev agencies, with the major difference being that all the solutions you develop will incorporate key aspects of AI AND automation. Which brings me to my second point - JUST AI IS NOT ENOUGH. Rather than reducing the amount of time required to complete certain tasks, I've seen many AI agencies make the mistake of recommending and (trying to) sell solutions that more likely than not increase the workload of their clients. For example, if you were to make an internal tool that has AI answer questions based on a client's knowledge base, but this knowledge base has to be updated manually, you are creating unnecessary work.
As such, I think one of the key components of building successful AI solutions is incorporating the new (generative AI/LLMs) with the old (programmatic automation - think Zapier, APIs, etc.). Finally, for this business model to be successful, ideally you should target a niche in which you have already worked and understand the pain points and needs. Not only does this make it much easier to get calls booked with prospects, the solutions you build will have much greater value to your clients (meaning you get paid more). A mistake I've seen many AAA operators make (and I blame this on the "Get Rich Quick" YouTubers) is focusing too much on a specific productized service, rather than really understanding the needs of businesses. The former is better done via a SaaS model, but when going the agency route the only thing that makes sense is building custom solutions. This is why I always take a consultant-first approach. You can only build once you understand what they actually need and how certain solutions may impact their operations, workflows, and bottom line. Basics of How to Get Started: Pick a niche. As I mentioned previously, preferably one that you've worked in before. Niches I know of that are actively being bombarded with cold emails include real estate, e-commerce, auto dealerships, lawyers, and medical offices. There is a reason for this, but I will tell you straight up this business model works well if you target any white-collar service business (internal tools approach) or high-volume businesses (customer-facing tools approach). Set up your toolbox. If you wanted to start a pressure washing business, you would need a pressure washer. This is no different. For those without programming knowledge, I've seen two common ways AAAs get set up to build: one is having a network of on-call web developers, whether it's personal contacts or simply going to Upwork or any talent-sourcing agency. The second is having an arsenal of no-code tools. I'll get to this more in a second, but this works because, at its core, when we are dealing with the practical applications of AI, the code is quite simple. Start cold sales. Unless you have a network already, this is not a step you can skip. You've already picked a niche, so all you have to do is find the right message. Keep cold emails short, sweet, but enticing - and it will help a lot if you did step 1 correctly and intimately understand who your audience is. I'll be touching base later about how you can leverage AI yourself to help you with outreach and closing. The beauty of gen AI and the AAA model: You don't need to be a seasoned web developer to make this business model work. The large majority of solutions that SME clients want are best built using an API for an LLM for the actual AI aspect. The value we create with the solutions we build comes from the conceptual framework and design that not only does what they need it to but also integrates smoothly with their existing tech stack and workflow. The actual implementation is quite straightforward once you understand the high-level design and know which tools you are going to use. To give you a sense: even if you plan to build out these apps yourself (say, in Python), the large majority of the nitty-gritty technical work has already been done for you, especially if you leverage Python libraries and packages that offer high-level abstractions for LLM-related functions. For instance, calling GPT can be as little as a single line of code.
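As a rough illustration of that claim (using the OpenAI Python SDK; the model name and prompt are placeholders, and other LLM providers look similar), the actual "call GPT" step really can be a single call:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The "call GPT" part is one call; everything else is ordinary glue code.
reply = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": "Summarize this support ticket in two sentences: ..."}],
)
print(reply.choices[0].message.content)
```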
(And there are no-code tools where these functions are simply an icon on a GUI.) Aside from understanding the capabilities and limitations of these tools and frameworks, the only thing that matters is being able to put them together in a way that makes sense for what you want to build. Which is why outsourcing and no-code tools both work in our case.
Okay... but how TF am I supposed to actually build out these solutions?
Now the fun part. I highly recommend getting familiar with Langchain and LlamaIndex. Both are Python libraries that help a lot with the high-level LLM abstraction I mentioned previously. The two most important aspects are being able to integrate internal data sources/knowledge bases with LLMs, and having LLMs perform autonomous actions. The two most common methods, respectively, are RAG and output parsing.
RAG (Retrieval Augmented Generation)
If you've ever seen a tool that seemingly "trains" GPT on your own data, and wondered how it all works- well, I have an answer for you. At a high level, the user query is first fed to what's called a vector database to run vector search. Vector search basically lets you do semantic search, where you are searching data based on meaning. The vector database then retrieves the most relevant sections of text as they relate to the user query, and this text gets APPENDED to your GPT prompt to provide extra context to the AI. Further, with prompt engineering, you can limit GPT to only generate an answer if it can be found within this extra context, greatly limiting the chance of hallucination (this is where AI makes random shit up). Aside from vector databases, we can also implement RAG with other data sources and retrieval methods, for example SQL databases (via parsing the outputs of LLMs- more on this later).
Autonomous Agents via Output Parsing
A common need of clients has been having AI actually perform tasks, rather than simply spitting out text. For example, with autonomous agents, we can have an e-commerce chatbot do the work of a basic customer service rep (i.e. look into orders, refunds, shipping). At a high level, what's going on is that the response of the LLM is being used programmatically to determine which API to call. Keeping on with the e-commerce example, if I wanted a chatbot to check shipping status, I could have an LLM response within my app (not shown to the user) come from a prompt that outputs a specific hash or string, and programmatically I can determine which API call to make based on this hash/string. And using the same fundamental concept as with RAG, I can append the API response to a final prompt that would spit out the answer for the user.
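To make those two patterns concrete, here's a minimal, self-contained sketch. The retriever is a toy keyword-overlap ranking standing in for a real vector database, and `get_shipping_status` is a fake helper standing in for a client's order API; the names, model, and prompts are illustrative assumptions, not a prescribed implementation:

```python
# Minimal sketch of RAG + output parsing, assuming the OpenAI Python SDK.
# The "retriever" and "shipping API" below are toy stand-ins for a real
# vector database and the client's own systems.
from openai import OpenAI

client = OpenAI()

# --- Toy knowledge base + retrieval (stand-in for vector search) ---------
KNOWLEDGE_BASE = [
    "Refunds are processed within 5 business days of approval.",
    "Orders ship within 24 hours and arrive in 3-5 business days.",
    "Support is available Monday to Friday, 9am-5pm EST.",
]

def retrieve(query: str, k: int = 2) -> list[str]:
    """Rank snippets by naive keyword overlap; a real build would embed the
    query and run semantic search against a vector database."""
    words = set(query.lower().split())
    scored = sorted(KNOWLEDGE_BASE, key=lambda s: -len(words & set(s.lower().split())))
    return scored[:k]

# --- Toy client API (stand-in for e.g. a Shopify/ERP call) ---------------
def get_shipping_status(order_id: str) -> str:
    return f"Order {order_id} left the warehouse and arrives Thursday."

def ask_llm(prompt: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content.strip()

def answer(user_message: str) -> str:
    # Step 1: routing prompt; the LLM's reply is parsed by code, never shown to the user.
    route = ask_llm(
        "Reply with exactly CHECK_SHIPPING if the customer is asking about a "
        "shipment, otherwise reply with exactly ANSWER.\n\n"
        f"Customer message: {user_message}"
    )
    if "CHECK_SHIPPING" in route:
        # Step 2a: output parsing -> call the client's API, then feed the result back in.
        context = get_shipping_status(order_id="12345")  # order id extraction omitted for brevity
    else:
        # Step 2b: RAG -> retrieved snippets get APPENDED to the prompt as context.
        context = "\n".join(retrieve(user_message))
    return ask_llm(
        "Answer the customer using ONLY the context below. If the answer isn't "
        f"in the context, say you don't know.\n\nContext:\n{context}\n\n"
        f"Customer message: {user_message}"
    )

print(answer("Where is my order?"))
```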
How No Code Tools Can Fit In (With some example solutions you can build)
With that being said, you don't necessarily need to do all of the above by coding yourself, with Python libraries or otherwise. However, I will say that having that high-level overview will help IMMENSELY when it comes to using no-code tools to do the actual work for you. Regardless, here are a few common solutions you might build for clients as well as some no-code tools you can use to build them out.
Ex. Solution 1: AI Chatbots for SMEs (Small and Medium Enterprises)
This involves creating chatbots that handle user queries, lead gen, and so forth with AI, and will use the principles of RAG at heart. After getting the required data from your client (i.e. product catalogues, previous support tickets, FAQ, internal documentation), you upload this into your knowledge base and write a prompt that makes sense for your use case. One no-code tool that does this well is MyAskAI. The beauty of it, especially for building external chatbots, is the ability to quickly ingest entire websites into your knowledge base via a sitemap, and bulk upload files. Essentially, they've covered all the grunt work required to do this manually. Finally, you can create an inline or chat widget on your client's website with a few lines of HTML, or alternatively integrate it with a Slack/Teams chatbot (if you are going for an internal Q&A chatbot approach). Other tools you could use include Botpress and Voiceflow, however these are less for RAG and more for building out complete chatbot flows that may or may not incorporate LLMs. Both apps are essentially GUIs that eliminate the pain and tears of trying to implement complex flows manually, and both natively incorporate AI intents and a knowledge base feature.
Ex. Solution 2: Internal Apps
Similar to the first example, except we go beyond making just chatbots and build tools such as report generation and really any sort of internal tool or automation that may incorporate LLMs. For instance, you can have a tool that automatically generates replies to inbound emails based on your client's knowledge base. Or an automation that does the same thing but for replies to Instagram comments. Another example could be a tool that generates a description and screenshot based on a URL (useful for directory sites, made one for my own :P). Getting into more advanced implementations of LLMs, we can have tools that can generate entire drafts of reports (think 80+ pages), based not only on data from a knowledge base but also the writing style, format, and author voice of previous reports. One good tool to create content generation panels for your clients would be MindStudio. You can train LLMs via prompt engineering in a structured way with your own data to essentially fine-tune them for whatever text you need them to generate. Furthermore, it has a GUI where you can dictate the entire AI flow. You can also upload data sources in multiple formats, including PDF, CSV, and Docx. For automations that require interactions between multiple apps, I recommend the OG Zapier/Make.com if you want a no-code solution. For instance, for the automatic email reply generator, I can have a trigger such that when an email is received, a custom AI reply is generated by MyAskAI, and finally a draft is created in my email client. Or, for an automation where I can create social media posts on multiple platforms based on an RSS feed (news feed), I can implement this directly in Zapier with their native GPT action (see screenshot). As for more complex LLM flows that may require multiple layers of LLMs, data sources, and APIs working together to generate a single response, i.e. a long-form 100-page report, I would recommend tools such as Stack AI or Flowise (open-source alternative) to build these solutions out. Essentially, you get most of the functions and features of Python packages such as Langchain and LlamaIndex in a GUI. See the screenshot for an example of a flow.
How the hell are you supposed to find clients?
With all that being said, none of this matters if you can't find anyone to sell to. You will have to do cold sales, one way or the other, especially if you are brand new to the game. And what better way to sell your AI services than with AI itself?
If we want to integrate AI into the cold outreach process, first we must identify what it's good at doing, and that's obviously writing a bunch of text in a short amount of time. Similar to the solutions that an AAA can build for its clients, we can take advantage of the same principles in our own sales processes.
How to do outreach
Once you've identified your niche and their pain points/opportunities for automation, you want to craft a compelling message which you can send via cold email and cold calls to get prospects booked on demos/consultations. I won't get into too much detail in terms of exactly how to write emails or calling scripts, as there are millions of resources to help with this, but I will tell you a few key points you want to keep in mind when doing outreach for your AAA. First, you want to keep in mind that many businesses are still hesitant about AI and may not understand what it really is or how it can benefit their operations. However, we can take advantage of how mass media has been reporting on AI this past year- at the very least people are AWARE that sooner or later they may have to implement AI into their businesses to stay competitive. We want to frame our message in a way that introduces generative AI as a technology that can have a direct, tangible, and positive impact on their business. Although it may be hard to quantify, I like to include estimates of man-hours saved or costs saved at least in my final proposals to prospects. Times are TOUGH right now, and money is expensive, so you need to have a compelling reason for businesses to get on board. Once you've gotten your messaging down, you will want to create a list of prospects to contact. Tools you can use to find prospects include Apollo.io, reply.io, zoominfo (expensive af), and Linkedin Sales Navigator. What specific job titles, etc. to target will depend on your niche, but for smaller companies this will tend to be the owner. For white collar niches, i.e. law, the professional that will be directly benefiting from the tool (i.e. partners) may be better to contact. And for larger organizations you may want to target business improvement and digital transformation leads/directors- these are the people directly in charge of projects like what you may be proposing. Okay- so you have your message, and your list, and now all it comes down to is getting the good word out. I won't be going into the details of how to send these out; a quick Google search will give you hundreds of resources for cold outreach methods. However, personalization is key, and beyond simple dynamic variables you want to make sure you can either personalize your email campaigns directly with AI (SmartWriter.ai is an example of a tool that can do this), or at the very least have the ability to import email messages programmatically. Alternatively, ask ChatGPT to make you a Python script that can take in a list of emails, scrape info based on their LinkedIn URL or website, and pass all of this into a GPT prompt that specifies your messaging to generate an email. From there, send away.
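Here's a rough sketch of what that kind of script could look like. Assumptions: the OpenAI Python SDK plus requests and BeautifulSoup for scraping, a prospects.csv with name/email/website columns, a real-estate pitch as the example niche, and the actual sending step left as a TODO so you can plug in whatever outreach tool you use:

```python
# Rough sketch of AI-personalized cold outreach. Assumes the OpenAI Python SDK,
# requests + BeautifulSoup, and a prospects.csv with name,email,website columns.
# Sending is intentionally left as a TODO.
import csv

import requests
from bs4 import BeautifulSoup
from openai import OpenAI

client = OpenAI()

PITCH = "We build custom AI automations for real estate agencies (lead follow-up, listing descriptions, internal Q&A)."

def scrape_homepage_text(url: str, max_chars: int = 2000) -> str:
    """Grab visible text from a prospect's homepage to personalize the email."""
    html = requests.get(url, timeout=10).text
    text = BeautifulSoup(html, "html.parser").get_text(separator=" ", strip=True)
    return text[:max_chars]

def draft_email(name: str, site_text: str) -> str:
    prompt = (
        f"You are writing a short, friendly cold email to {name}.\n"
        f"Our offer: {PITCH}\n"
        f"Their website says: {site_text}\n"
        "Reference one specific detail from their site, keep it under 120 words, "
        "and end with a soft ask for a 15-minute call."
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content.strip()

with open("prospects.csv", newline="") as f:
    for row in csv.DictReader(f):
        email_body = draft_email(row["name"], scrape_homepage_text(row["website"]))
        print(f"--- Draft for {row['email']} ---\n{email_body}\n")
        # TODO: push the draft into your email/outreach tool of choice here.
```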
How tf do I close?
Once you've got some prospects booked in on your meetings, you will need to close deals with them to turn them into clients.
Call #1: Consultation
Tying back to when I mentioned you want to take a consultant-first approach, you will want to listen closely to their goals and needs and understand their pain points. This would be the first call, and typically I would provide a high-level overview of different solutions we could build to tackle these. It really helps to have a presentation available, so you can graphically demonstrate key points and key technologies. I like to use Plus AI for this; it's basically a Google Slides add-on that can generate slide decks for you. I copy and paste my default company messaging, add some key points for the presentation, and it comes out with pretty decent slides.
Call #2: Demo
The second call would involve a demo of one of these solutions, and typically I'll quickly prototype it with boilerplate code I already have; otherwise I'll cook something up in a no-code tool. If you have a niche where one type of solution is commonly demanded, it helps to have a general demo set up to be able to handle a larger volume of calls, so you aren't burning yourself out. I'll also elaborate on how the final product would look in comparison to the demo.
Call #3 and Beyond:
Once the initial consultation and demo are complete, you will want to alleviate any remaining concerns from your prospects and work with them to reach a final work proposal. It's crucial you lay out exactly what you will be building (in writing) and ensure the prospect understands this. Furthermore, be clear and transparent with timelines and communication methods for the project. In terms of pricing, you want to take a value-based approach. The same solution may be worth a lot more to client A than client B. Furthermore, you can create "add-ons" such as monthly maintenance/upgrade packages, training sessions for employees, and so forth, separate from the initial setup fee you would charge.
How you can incorporate AI into marketing your business
Beyond cold sales, I highly recommend creating a funnel to capture warm leads. For instance, I do this currently with my AI tools directory, which links directly to my AI agency and has consistent branding throughout. Warm leads are much more likely to close (and honestly, much nicer to deal with). However, even without an AI-related website, at the very least you will want to create a presence on social media and the web in general. As with any agency, you will want a basic professional presence. A professional virtual address helps, in addition to a Google Business Profile (GBP) and a Trustpilot page; both of these (especially the GBP, for local SEO) also improve the look of your search results immensely. For GBP, I recommend using ProfilePro, which is a Chrome extension you can use to automate SEO work for your GBP. Aside from SEO-optimized business descriptions based on your business, it can handle Q&A answers, responses, updates, and service descriptions based on local keywords.
Privacy and Legal Concerns of the AAA Model
Aside from typical concerns for agencies relating to service contracts, there are a few issues (especially when using no-code tools) that will need to be addressed to run a successful AAA. Most of these surround privacy concerns when working with proprietary data. In your terms with your client, you will want to clearly define the hosting providers and any third party tools you will be using to build their solution, and include a DPA with these third parties listed as subprocessors if necessary. In addition, you will want to implement best practices like redacting private information from data being used for building solutions.
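As a simple illustration of that last point, here's a minimal redaction pass you might run over client data before it ever reaches a third-party LLM or no-code tool. It's regex-only, so treat it as a starting point rather than a complete PII or compliance solution, and the patterns shown are assumptions rather than an exhaustive list:

```python
# Minimal sketch: strip obvious PII (emails, phone numbers, card-like digit runs)
# from text before sending it to any third-party LLM or no-code platform.
# Regex-only, so it's a starting point, not a complete PII/compliance solution.
import re

PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(text: str) -> str:
    """Replace each match with a placeholder like [REDACTED_EMAIL]."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED_{label}]", text)
    return text

ticket = "Customer Jane Doe (jane.doe@example.com, +1 415 555 0199) wants a refund."
print(redact(ticket))
# -> Customer Jane Doe ([REDACTED_EMAIL], [REDACTED_PHONE]) wants a refund.
```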
In terms of addressing concerns directly from clients, it helps if you host your solutions on their own servers (not possible with most hosted AI tools), and address the fact that only ChatGPT queries made in the web app, not OpenAI API calls, are used to train OpenAI's models (as reported by mainstream media). The key here is to be open and transparent with your clients about ALL the tools you are using, where their data will be going, and make sure to get this all in writing.
Have fun, and keep an open mind
Before I finish this post, I just want to reiterate the fact that this is NOT an easy way to make money. Running an AI agency will require hours and hours of dedication and work, and constantly rearranging your schedule to meet prospect and client needs. However, if you are looking for a new business to run, have a knack for understanding business operations, and are genuinely interested in the practical applications of generative AI, then I say go for it. The clock is ticking before AAA becomes the new dropshipping or SMMA, and I'm a firm believer that those who get in first and establish themselves in this field will come out on top. And remember, while 100 thousand people may read this post, only 2 may actually take initiative and start.

How a Small Startup in Asia Secured a Contract with the US Department of Homeland Security
reddit
LLM Vibe Score0
Human Vibe Score1
Royal_Rest8409This week

How a Small Startup in Asia Secured a Contract with the US Department of Homeland Security

Uzair Javaid, a Ph.D. with a passion for data privacy, co-founded Betterdata to tackle one of AI's most pressing challenges: protecting privacy while enabling innovation. Recently, Betterdata secured a lucrative contract with the US Department of Homeland Security, 1 of only 4 companies worldwide to do so and the only one in Asia. Here's how he did it: The Story So what's your story? I grew up in Peshawar, Pakistan, excelling in coding despite studying electrical engineering. Inspired by my professors, I set my sights on studying abroad and eventually earned a Ph.D. scholarship at NUS Singapore, specializing in data security and privacy. During my research, I ethically hacked Ethereum and published 15 papers—three times the requirement. While wrapping up my Ph.D., I explored startup ideas and joined Entrepreneur First, where I met Kevin Yee. With his expertise in generative models and mine in privacy, we founded Betterdata. Now, nearly three years in, we’ve secured a major contract with the U.S. Department of Homeland Security—one of only four companies globally and the only one from Asia. The Startup In a nutshell, what does your startup do? Betterdata is a startup that uses AI and synthetic data generation to address two major challenges: data privacy and the scarcity of high-quality data for training AI models. By leveraging generative models and privacy-enhancing technologies, Betterdata enables businesses, such as banks, to use customer data without breaching privacy regulations. The platform trains AI on real data, learns its patterns, and generates synthetic data that mimics the real thing without containing any personal or sensitive information. This allows companies to innovate and develop AI solutions safely and ethically, all while tackling the growing need for diverse, high-quality data in AI development. How did you conduct ideation and validation for your startup? The initial idea for Betterdata came from personal experience. During my Ph.D., I ethically hacked Ethereum’s blockchain, exposing flaws in encryption-based data sharing. This led me to explore AI-driven deep synthesis technology—similar to deepfakes but for structured data privacy. With GDPR impacting 28M+ businesses, I saw a massive opportunity to help enterprises securely share data while staying compliant. To validate the idea, I spoke to 50 potential customers—a number that strikes the right balance. Some say 100, but that’s impractical for early-stage founders. At 50, patterns emerge: if 3 out of 10 mention the same problem, and this repeats across 50, you have 10–15 strong signals, making it a solid foundation for an MVP. Instead of outbound sales, which I dislike, we used three key methods: Account-Based Marketing (ABM)—targeting technically savvy users with solutions for niche problems, like scaling synthetic data for banks. Targeted Content Marketing—regular customer conversations shaped our thought leadership and outreach. Raising Awareness Through Partnerships—collaborating with NUS, Singapore’s PDPC, and Plug and Play to build credibility and educate the market. These strategies attracted serious customers willing to pay, guiding Betterdata’s product development and market fit. How did you approach the initial building and ongoing product development? In the early stages, we built synthetic data generation algorithms and a basic UI for proof-of-concept, using open-source datasets to engage with banks. 
We quickly learned that banks wouldn't share actual customer data due to privacy concerns, so we had to conduct on-site installations and gather feedback to refine our MVP. Through continuous consultation with customers, we discovered real enterprise data posed challenges, such as missing values, which led us to adapt our prototype accordingly. This iterative approach of listening to customer feedback and observing their usage allowed us to improve our product, enhance UX, and address unmet needs while building trust and loyalty. Working closely with our customers also gives us a data advantage. Our solution’s effectiveness depends on customer data, which we can't fully access, but bridging this knowledge gap gives us a competitive edge. The more customers we test on, the more our algorithms adapt to diverse use cases, making it harder for competitors to replicate our insights. My approach to iteration is simple: focus solely on customer feedback and ignore external noise like trends or advice. The key question for the team is: which customer is asking for this feature or solution? As long as there's a clear answer, we move forward. External influences, such as AI hype, often bring more confusion than clarity. True long-term success comes from solving real customer problems, not chasing trends. Customers may not always know exactly what they want, but they understand their problems. Our job is to identify these problems and solve them in innovative ways. While customers may suggest specific features, we stay focused on solving the core issue rather than just fulfilling their exact requests. The idea aligns with the quote often attributed to Henry Ford: "If I asked people what they wanted, they would have said faster horses." The key is understanding their problems, not just taking requests at face value. How do you assess product-market fit? To assess product-market fit, we track two key metrics: Customers' Willingness to Pay: We measure both the quantity and quality of meetings with potential customers. A high number of meetings with key decision-makers signals genuine interest. At Betterdata, we focused on getting meetings with people in banks and large enterprises to gauge our product's resonance with the target market. How Much Customers Are Willing to Pay: We monitor the price customers are willing to pay, especially in the early stages. For us, large enterprises, like banks, were willing to pay a premium for our synthetic data platform due to the growing need for privacy tech. This feedback guided our product refinement and scaling strategy. By focusing on these metrics, we refined our product and positioned it for scaling. What is your business model? We employ a structured, phase-driven approach for our business model, as a B2B startup. I initially struggled with focusing on the core value proposition in sales, often becoming overly educational. Eventually, we developed a product roadmap with models that allowed us to match customer needs to specific offerings and justify our pricing. Our pricing structure includes project-based pilots and annual contracts for successful deployments. At Betterdata, our customer engagement unfolds across three phases: Phase 1: Trial and Benchmarking - We start with outreach and use open-source datasets to showcase results, offering customers a trial period to evaluate the solution.
Phase 2: Pilot or PoC - After positive trial results, we conduct a PoC or pilot using the customer’s private data, with the understanding that successful pilots lead to an annual contract. Phase 3: Multi-Year Contracts - Following a successful pilot, we transition to long-term commercial contracts, focusing on multi-year agreements to ensure stability and ongoing partnerships. How do you do marketing for your brand? We take a non-conventional approach to marketing, focusing on answering one key question: Which customers are willing to pay, and how much? This drives our messaging to show how our solution meets their needs. Our strategy centers around two main components: Building a network of lead magnets - These are influential figures like senior advisors, thought leaders, and strategic partners. Engaging with institutions like IMDA, SUTD, and investors like Plug and Play helps us gain access to the right people and foster warm introductions, which shorten our sales cycle and ensure we’re reaching the right audience. Thought leadership - We build our brand through customer traction, technology evidence, and regulatory guidelines. This helps us establish credibility in the market and position ourselves as trusted leaders in our field. This holistic approach has enabled us to navigate diverse market conditions in Asia and grow our B2B relationships. By focusing on these areas, we drive business growth and establish strong trust with stakeholders. What's your advice for fundraising? Here are my key takeaways for other founders when it comes to fundraising: Fundraise When You Don’t Need To We closed our seed round in April 2023, a time when we weren't actively raising. Founders should always be in fundraising mode, even when they're not immediately in need of capital. Don’t wait until you have only a few months of runway left. Keep the pipeline open and build relationships. When the timing is right, execution becomes much easier. For us, our investment came through a combination of referrals and inbound interest. Even our lead investor initially rejected us, but after re-engaging, things eventually fell into place. It’s crucial to stay humble, treat everyone with respect, and maintain those relationships for when the time is right. Be Mindful of How You Present Information When fundraising, how you present information matters a lot. We created a comprehensive, easily digestible investment memo, hosted on Notion, which included everything an investor might need—problem, solution, market, team, risks, opportunities, and data. The goal was for investors to be able to get the full picture within 30 minutes without chasing down extra details. We also focused on making our financial model clear and meaningful, even though a 5-year forecast might be overkill at the seed stage. The key was clarity and conciseness, and making it as easy as possible for investors to understand the opportunity. I learned that brevity and simplicity are often the best ways to make a memorable impact. For the pitch itself, keep it simple and focus on 4 things: problem, solution, team, and market. If you can summarize each of these clearly and concisely, you’ll have a compelling pitch. Later on, you can expand into market segments, traction, and other metrics, but for seed-stage, focus on those four areas, and make sure you’re strong in at least three of them. If you do, you'll have a compelling case. How do you run things day-to-day? i.e. what's your operational workflow and team structure?
Here's an overview of our team structure and process: Internally: Our team is divided into two main areas: backend (internal team) and frontend (market-facing team). There's no formal hierarchy within the backend team. We all operate as equals, defining our goals based on what needs to be developed, assigning tasks, and meeting weekly to share updates and review progress. The focus is on full ownership of tasks and accountability for getting things done. I also contribute to product development, identifying challenges and clearing obstacles to help the team move forward. Backend Team: We approach tasks based on the scope defined by customers, with no blame or hierarchy. It's like a sports team—sometimes someone excels, and other times they struggle, but we support each other and move forward together. Everyone has the creative freedom to work in the way that suits them best, but we establish regular meetings and check-ins to ensure alignment and progress. Frontend Team: For the market-facing side, we implement a hierarchy because the market expects this structure. If I present myself as "CEO," it signals authority and credibility. This distinction affects how we communicate with the market and how we build our brand. The frontend team is split into four main areas: Business, Product (Software Engineering), Machine Learning Engineering, and R&D. The C-suite sits at the top, followed by team leads, and then the executors. We distill market expectations into actionable tasks, ensuring that everyone is clear on their role and responsibilities. Process: We start by receiving market expectations and defining tasks based on them. Tasks are assigned to relevant teams, and execution happens with no communication barriers between team members. This ensures seamless collaboration and focused execution. The main goal is always effectiveness—getting things done efficiently while maintaining flexibility in how individuals approach their work. In both teams, there's an emphasis on accountability, collaboration, and clear communication, but the structure varies according to the nature of the work and external expectations.

How a founder built a B2B AI startup serving 65+ global brands (including Fortune 500 companies)
reddit
LLM Vibe Score0
Human Vibe Score1
Royal_Rest8409This week

How a founder built a B2B AI startup serving 65+ global brands (including Fortune 500 companies)

AI Palette is an AI-driven platform that helps food and beverage companies predict emerging product trends. I had the opportunity recently to sit down with the founder to get his advice on building an AI-first startup, which he'll be going through in this post.
About AI Palette:
Co-founders: 2 (Somsubhra GanChoudhuri, Himanshu Upreti)
100+
$12.7M USD
AI-powered predictive analytics for the CPG (Consumer Packaged Goods) industry
Signed first paying customer in the first year
65+ global brands, including Cargill, Diageo, Ajinomoto, Symrise, Mondelez, and L’Oréal, use AI Palette
Every new product launched has secured a paying client within months
Expanded into Beauty & Personal Care (BPC), onboarding one of India’s largest BPC companies within weeks
Launched multiple new product lines in the last two years, creating a unified suite for brand innovation
Identify the pain points in your industry for ideas
When I was working in the flavour and fragrance industry, I noticed a major issue CPG companies faced: launching a product took at least one to two years. For instance, if a company decided today to launch a new juice, it wouldn’t hit the market until 2027. This long timeline made it difficult to stay relevant and on top of trends. Another big problem I noticed was that companies relied heavily on market research to determine what products to launch. While this might work for current consumer preferences, it was highly inefficient since the product wouldn’t actually reach the market for several years. By the time the product launched, the consumer trends had already shifted, making that research outdated. That’s where AI can play a crucial role. Instead of looking at what consumers like today, we realised that companies should use AI to predict what they will want next. This allows businesses to create products that are ahead of the curve. Right now, the failure rate for new product launches is alarmingly high, with 8 out of 10 products failing. By leveraging AI, companies can avoid wasting resources on products that won’t succeed, leading to better, more successful launches. Start by talking to as many industry experts as possible to identify the real problems When we first had the idea for AI Palette, it was just a hunch, a gut feeling—we had no idea whether people would actually pay for it. To validate the idea, we reached out to as many people as we could within the industry. Since our focus area was all about consumer insights, we spoke to professionals in the CPG sector, particularly those in the insights departments of CPG companies. Through these early conversations, we began to see a common pattern emerge and identified the exact problem we wanted to solve. Don’t tell people what you’re building—listen to their frustrations and challenges first. Going into these early customer conversations, our goal was to listen and understand their challenges without telling them what we were trying to build. This is crucial as it ensures that you can gather as much data about the problem as possible to truly understand it and that you aren't biasing their answers by showing your solution. This process helped us in two key ways: First, it validated that there was a real problem in the industry through the number of people who spoke about experiencing the same problem. Second, it allowed us to understand the exact scale and depth of the problem—e.g., how much money companies were spending on consumer research, what kind of tools they were currently using, etc.
Narrow down your focus to a small, actionable area to solve initially. Once we were certain that there was a clear problem worth solving, we didn’t try to tackle everything at once. As a small team of two people, we started by focusing on a specific area of the problem—something big enough to matter but small enough for us to handle. Then, we approached customers with a potential solution and asked them for feedback. We learnt that our solution seemed promising, but we wanted to validate it further. If customers are willing to pay you for the solution, it’s a strong validation signal for market demand. One of our early customer interviewees even asked us to deliver the solution, which we did manually at first. We used machine learning models to analyse the data and presented the results in a slide deck. They paid us for the work, which was a critical moment. It meant we had something with real potential, and we had customers willing to pay us before we had even built the full product. This was the key validation that we needed. By the time we were ready to build the product, we had already gathered crucial insights from our early customers. We understood the specific information they wanted and how they wanted the results to be presented. This input was invaluable in shaping the development of our final product. Building & Product Development Start with a simple concept/design to validate with customers before building When we realised the problem and solution, we began by designing the product, but not by jumping straight into coding. Instead, we created wireframes and user interfaces using tools like InVision and Figma. This allowed us to visually represent the product without the need for backend or frontend development at first. The goal was to showcase how the product would look and feel, helping potential customers understand its value before we even started building. We showed these designs to potential customers and asked for feedback. Would they want to buy this product? Would they pay for it? We didn’t dive into actual development until we found a customer willing to pay a significant amount for the solution. This approach helped us ensure we were on the right track and didn’t waste time or resources building something customers didn’t actually want. Deliver your solution using a manual consulting approach before developing an automated product Initially, we solved problems for customers in a more "consulting" manner, delivering insights manually. Recall how I mentioned that when one of our early customer interviewees asked us to deliver the solution, we initially did it manually by using machine learning models to analyse the data and presenting the results to them in a slide deck. This works for the initial stages of validating your solution, as you don't want to invest too much time into building a full-blown MVP before understanding the exact features and functionalities that your users want. However, after confirming that customers were willing to pay for what we provided, we moved forward with actual product development. This shift from a manual service to product development was key to scaling in a sustainable manner, as our building was guided by real-world feedback and insights rather than intuition. Let ongoing customer feedback drive iteration and the product roadmap Once we built the first version of the product, it was basic, solving only one problem. 
But as we worked closely with customers, they requested additional features and functionalities to make it more useful. As a result, we continued to evolve the product to handle more complex use cases, gradually developing new modules based on customer feedback. Product development is a continuous process. Our early customers pushed us to expand features and modules, from solving just 20% of their problems to tackling 50–60% of their needs. These demands shaped our product roadmap and guided the development of new features, ultimately resulting in a more complete solution. Revenue and user numbers are key metrics for assessing product-market fit. However, critical mass varies across industries Product-market fit (PMF) can often be gauged by looking at the size of your revenue and the number of customers you're serving. Once you've reached a certain critical mass of customers, you can usually tell that you're starting to hit product-market fit. However, this critical mass varies by industry and the type of customers you're targeting. For example, if you're building an app for a broad consumer market, you may need thousands of users. But for enterprise software, product-market fit may be reached with just a few dozen key customers. Compare customer engagement and retention with other available solutions on the market for product-market fit Revenue and the number of customers alone isn't always enough to determine if you're reaching product-market fit. The type of customer and the use case for your product also matter. The level of engagement with your product—how much time users are spending on the platform—is also an important metric to track. The more time they spend, the more likely it is that your product is meeting a crucial need. Another way to evaluate product-market fit is by assessing retention, i.e whether users are returning to your platform and relying on it consistently, as compared to other solutions available. That's another key indication that your solution is gaining traction in the market. Business Model & Monetisation Prioritise scalability Initially, we started with a consulting-type model where we tailor-made specific solutions for each customer use-case we encountered and delivered the CPG insights manually, but we soon realized that this wasn't scalable. The problem with consulting is that you need to do the same work repeatedly for every new project, which requires a large team to handle the workload. That is not how you sustain a high-growth startup. To solve this, we focused on building a product that would address the most common problems faced by our customers. Once built, this product could be sold to thousands of customers without significant overheads, making the business scalable. With this in mind, we decided on a SaaS (Software as a Service) business model. The benefit of SaaS is that once you create the software, you can sell it to many customers without adding extra overhead. This results in a business with higher margins, where the same product can serve many customers simultaneously, making it much more efficient than the consulting model. Adopt a predictable, simplistic business model for efficiency. Look to industry practices for guidance When it came to monetisation, we considered the needs of our CPG customers, who I knew from experience were already accustomed to paying annual subscriptions for sales databases and other software services. We decided to adopt the same model and charge our customers an annual upfront fee. 
This model worked well for our target market, aligning with industry standards and ensuring stable, recurring revenue. Moreover, our target CPG customers were already used to this business model and didn't have to choose from a huge variety of payment options, making closing sales a straightforward and efficient process. Marketing & Sales Educate the market to position yourself as a thought leader When we started, AI was not widely understood, especially in the CPG industry. We had to create awareness around both AI and its potential value. Our strategy focused on educating potential users and customers about AI, its relevance, and why they should invest in it. This education was crucial to the success of our marketing efforts. To establish credibility, we adopted a thought leadership approach. We wrote blogs on the importance of AI and how it could solve problems for CPG companies. We also participated in events and conferences to demonstrate our expertise in applying AI to the industry. This helped us build our brand and reputation as leaders in the AI space for CPG, and word-of-mouth spread as customers recognized us as the go-to company for AI solutions. It’s tempting for startups to offer products for free in the hopes of gaining early traction with customers, but this approach doesn't work in the long run. Free offerings don’t establish the value of your product, and customers may not take them seriously. You should always charge for pilots, even if the fee is minimal, to ensure that the customer is serious about potentially working with you, and that they are committed and engaged with the product. Pilots/POCs/Demos should aim to give a "flavour" of what you can deliver A paid pilot/POC trial also gives you the opportunity to provide a “flavour” of what your product can deliver, helping to build confidence and trust with the client. It allows customers to experience a detailed preview of what your product can do, which builds anticipation and desire for the full functionality. During this phase, ensure your product is built to give them a taste of the value you can provide, which sets the stage for a broader, more impactful adoption down the line. Fundraising & Financial Management Leverage PR to generate inbound interest from VCs When it comes to fundraising, our approach was fairly traditional—we reached out to VCs and used connections from existing investors to make introductions. However, looking back, one thing that really helped us build momentum during our fundraising process was getting featured in Tech in Asia. This wasn’t planned; it just so happened that Tech in Asia was doing a series on AI startups in Southeast Asia and they reached out to us for an article. During the interview, they asked if we were fundraising, and we mentioned that we were. As a result, several VCs we hadn’t yet contacted reached out to us. This inbound interest was incredibly valuable, and we found it far more effective than our outbound efforts. So, if you can, try to generate some PR attention—it can help create inbound interest from VCs, and that interest is typically much stronger and more promising than any outbound strategies because they've gone out of their way to reach out to you. Be well-prepared and deliberate about fundraising. Keep trying and don't lose heart When pitching to VCs, it’s crucial to be thoroughly prepared, as you typically only get one shot at making an impression. If you mess up, it’s unlikely they’ll give you a second chance. 
You need to have key metrics at your fingertips, especially if you're running a SaaS company. Be ready to answer questions like: What’s your retention rate? What are your projections for the year? How much will you close? What’s your average contract value? These numbers should be at the top of your mind. Additionally, fundraising should be treated as a structured process, not something you do on the side while juggling other tasks. When you start, create a clear plan: identify 20 VCs to reach out to each week. By planning ahead, you’ll maintain momentum and speed up the process. Fundraising can be exhausting and disheartening, especially when you face multiple rejections. Remember, you just need one investor to say yes to make it all worthwhile. When using funds, prioritise profitability and grow only when necessary. Don't rely on funding to survive. In the past, the common advice for startups was to raise money, burn through it quickly, and use it to boost revenue numbers, even if that meant operating at a loss. The idea was that profitability wasn’t the main focus, and the goal was to show rapid growth for the next funding round. However, times have changed, especially with the shift from “funding summer” to “funding winter.” My advice now is to aim for profitability as soon as possible and grow only when it's truly needed. For example, it’s tempting to hire a large team when you have substantial funds in the bank, but ask yourself: Do you really need 10 new hires, or could you get by with just four? Growing too quickly can lead to unnecessary expenses, so focus on reaching profitability as soon as possible, rather than just inflating your team or burn rate. The key takeaway is to spend your funds wisely and only when absolutely necessary to reach profitability. You want to avoid becoming dependent on future VC investments to keep your company afloat. Instead, prioritize reaching break-even as quickly as you can, so you're not reliant on external funding to survive in the long run. Team-Building & Leadership Look for complementary skill sets in co-founders When choosing a co-founder, it’s important to find someone with a complementary skill set, not just someone you’re close to. For example, I come from a business and commercial background, so I needed someone with technical expertise. That’s when I found my co-founder, Himanshu, who had experience in machine learning and AI. He was a great match because his technical knowledge complemented my business skills, and together we formed a strong team. It might seem natural to choose your best friend as your co-founder, but this can often lead to conflict. Chances are, you and your best friend share similar interests, skills, and backgrounds, which doesn’t bring diversity to the table. If both of you come from the same industry or have the same strengths, you may end up butting heads on how things should be done. Having diverse skill sets helps avoid this and fosters a more collaborative working relationship. Himanshu (left) and Somsubhra (right) co-founded AI Palette in 2018 Define roles clearly to prevent co-founder conflict To avoid conflict, it’s essential that your roles as co-founders are clearly defined from the beginning. If your co-founder and you have distinct responsibilities, there is no room for overlap or disagreement. This ensures that both of you can work without stepping on each other's toes, and there’s mutual respect for each other’s expertise. 
This is another reason as to why it helps to have a co-founder with a complementary skillset to yours. Not only is having similar industry backgrounds and skillsets not particularly useful when building out your startup, it's also more likely to lead to conflicts since you both have similar subject expertise. On the other hand, if your co-founder is an expert in something that you're not, you're less likely to argue with them about their decisions regarding that aspect of the business and vice versa when it comes to your decisions. Look for employees who are driven by your mission, not salary For early-stage startups, the first hires are crucial. These employees need to be highly motivated and excited about the mission. Since the salary will likely be low and the work demanding, they must be driven by something beyond just the paycheck. The right employees are the swash-buckling pirates and romantics, i.e those who are genuinely passionate about the startup’s vision and want to be part of something impactful beyond material gains. When employees are motivated by the mission, they are more likely to stick around and help take the startup to greater heights. A litmus test for hiring: Would you be excited to work with them on a Sunday? One of the most important rounds in the hiring process is the culture fit round. This is where you assess whether a candidate shares the same values as you and your team. A key question to ask yourself is: "Would I be excited to work with this person on a Sunday?" If there’s any doubt about your answer, it’s likely not a good fit. The idea is that you want employees who align with the company's culture and values and who you would enjoy collaborating with even outside of regular work hours. How we structure the team at AI Palette We have three broad functions in our organization. The first two are the big ones: Technical Team – This is the core of our product and technology. This team is responsible for product development and incorporating customer feedback into improving the technology Commercial Team – This includes sales, marketing, customer service, account managers, and so on, handling everything related to business growth and customer relations. General and Administrative Team – This smaller team supports functions like finance, HR, and administration. As with almost all businesses, we have teams that address the two core tasks of building (technical team) and selling (commercial team), but given the size we're at now, having the administrative team helps smoothen operations. Set broad goals but let your teams decide on execution What I've done is recruit highly skilled people who don't need me to micromanage them on a day-to-day basis. They're experts in their roles, and as Steve Jobs said, when you hire the right person, you don't have to tell them what to do—they understand the purpose and tell you what to do. So, my job as the CEO is to set the broader goals for them, review the plans they have to achieve those goals, and periodically check in on progress. For example, if our broad goal is to meet a certain revenue target, I break it down across teams: For the sales team, I’ll look at how they plan to hit that target—how many customers they need to sell to, how many salespeople they need, and what tactics and strategies they plan to use. 
For the technical team, I’ll evaluate our product offerings—whether they think we need to build new products to attract more customers, and whether they think it's scalable for the number of customers we plan to serve. This way, the entire organization's tasks are cascaded in alignment with our overarching goals, with me setting the direction and leaving the details of execution to the skilled team members that I hire.

Follow Along as I Flip this Website - Case Study
reddit
LLM Vibe Score0
Human Vibe Score1
jshogren10This week

Follow Along as I Flip this Website - Case Study

I am starting a new case study where I will be documenting my attempt to flip a website that I just purchased from Flippa. However, unlike most case studies where people hide certain parts and details from the public, I will instead be sharing everything. That means you will know the exact URL of the site that I purchased and I will share everything with you all as I progress. I know that case studies are a lot more interesting and you can learn better when you can see real examples of what I am talking about. Enough of the chatting, let's jump straight into this new case study and I will explain what this is all about. Before you get into the case study I want to give you the option of reading this on my website, where all of the images can be seen within the post and it is easier to read. I also want to say that I have nothing to sell you or anything close to it. So if you want to read it there you can do so here
##Introductory Video
I have put together a video that talks about many of the things that I cover in this article. So if you would rather watch a video you can watch that here - https://www.youtube.com/watch?v=EE3SxtNnqts However, I go into more detail in the actual article FYI. Also, I plan on using Youtube very frequently in this case study so be on the lookout for new videos. There is going to be a video that will accompany every single case study post because I like having it presented in two different mediums.
##The Website I Just Bought
Around a week ago I made a new website purchase from Flippa and you can view the website's Flippa listing here - https://flippa.com/6439965-hvactraining101-com Screenshot of the Homepage - http://imgur.com/T6Iv1QN I paid $1,250 for the site and you will soon see that I got a really good deal. As you might be able to tell from the URL, this site is focused around training and education for becoming an HVAC technician. This is a lucrative niche to be in and Adsense pays very well. I do not have control of the site yet due to the transfer process not being completed. However, I am hoping within a few days everything will be finalized and I will take full control of the site. In the meantime, I figured it would be a good time to put together the introduction post for this new case study!
##Why I Bought this Website
Now that you have a general idea of the website that I purchased, I now want to explain the reasoning behind the purchase. There are 3 major reasons for this purchase and I will explain each one of them below.
GREAT Price
As I mentioned earlier, I bought this website for $1,250. However, that doesn't mean a whole lot unless you know how much the site is making each month. Screenshot of the earnings for the last 12 months - http://imgur.com/NptxCHy Average Monthly Profits: 3-month = $126, 6-month = $128, 12-month = $229.50. Let's use the 6 month average of $128/month as our baseline average. Since it is making on average $128/month and it was sold for $1,250, that means I bought this site at a multiple of 9.76x! Most sites in today's market go for 20x-30x multiples. As you can see, I got a great deal on this site. Although the great price was the biggest reason for me buying this site, there are other factors that persuaded me as well. You need to remember that just because you can get a website for a good price it doesn't mean it is a good deal. There are other factors that you need to look at as well.
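To spell out the arithmetic behind that multiple, here's a quick sketch using only the numbers above (the 20x figure at the end is just the typical market multiple mentioned in this post, used for illustration):

```python
# Quick sanity check of the purchase multiple using the figures above.
purchase_price = 1250          # what the site sold for on Flippa
avg_monthly_profit = 128       # 6-month average earnings

multiple = purchase_price / avg_monthly_profit
print(f"Purchase multiple: {multiple:.1f}x monthly profit")   # ~9.8x

# For comparison, value at a more typical 20x monthly multiple:
value_at_20x = 20 * avg_monthly_profit
print(f"Value at a 20x multiple: ${value_at_20x}")             # $2,560 at current earnings
```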
Extremely Under Optimized
This site is currently being monetized mainly by Adsense and a very small amount from Quinstreet. From my experience with testing and optimizing Adsense layouts for my site in my Website Investing case study, I know the common ad layouts that work best for maximizing Adsense revenue. With that being said, I can quickly determine if a website is being under optimized in terms of the ad layout. One of the first things I did when analyzing this site was examine the ad layout it was using. Screenshot of the website with the ad layout the previous owner was using - http://imgur.com/wqleLVA There is only ONE ad per page being used, that's it. Google allows up to 6 total ads to be used per page and you can imagine how much money is being left on the table because of this. I am estimating that I can probably double the earnings for the site practically overnight once I add more ads to the site. By adding more ads in combination with my favorite Adsense plugin, AmpedSense, I will be able to easily boost the earnings for this site. It is also worth mentioning how lucrative this niche is and how much advertisers are willing to spend on a per click basis. The average CPC for the top keywords this site is currently ranking for in Google - http://imgur.com/ifxiy8B Look at those average CPC numbers, they are insanely high! I could be making up to $25 per click for some of those keywords, which is so absurd to me. Combine these extremely high CPCs with the fact that the site currently only has one ad per page and you can start to understand just how under optimized this site truly is. I also plan on utilizing other ad networks such as Quinstreet and Campus Explorer more as well. These two networks are targeted at the education niche, which works very well with my site. I will be testing to see if these convert better than normal Adsense ads.
Goldmine of Untapped Keywords
One of the biggest opportunities I see for growing this site is to target local keywords related to HVAC training. As of right now, the site has only scratched the surface when it comes to trying to rank for state/city keywords. Currently there are only two pages on the entire website which go after local keywords; those two pages target Texas and Florida HVAC search terms. These two pages are two of the more popular pages in terms of total amount of traffic. See the screenshot of the Google Analytics - http://imgur.com/NB0xJ4G Two out of the top five most popular pages for the entire website are focused on local search terms. However, these are the ONLY two pages that target local search terms on the whole site! There are 48 other states, although there may not be search volume for all states, and countless cities that are not being targeted. Why do I think this is such a good opportunity? For a few reasons:
Local keywords are a lot easier to rank for in Google than more general keywords
This site has been able to rank for two states successfully already and it proves it is possible
Traffic going to these local pages is WAY more targeted and will convert at a much higher rate, which means more commissions for me
There are so many more states and cities that get a good amount of searches that I can target
To give you an idea of the type of keywords these local pages rank for, you can see the top keywords that the Florida page is ranking for in Google: Top ranking keywords for the Florida page - http://imgur.com/j7uKzl2 As you can see these keywords don't get a ton of searches each month, but ranking 1st for a keyword getting 90 searches a month is better than being ranked 10th for a keyword getting 1,000 searches a month.
I have started to do some keyword research for other states and I am liking what I am finding so far. Keywords that I have found which I will be targeting with future articles - http://imgur.com/8CCCCWU I will go into more detail about my keyword research in future articles, but I wanted to give you an idea of what my strategy will be! I also wanted to share why I am super excited about the future potential to grow this site by targeting local keywords.
##Risks
Yes, there are many good things about this website, but there are always risks involved no matter what the investment is. The same thing goes for this site. Below are some of the risks that I currently see.
HTML Site
This website is an HTML site and I will need to transfer it to WordPress ASAP. I have been doing some research on this process and it shouldn't be too hard to get this over to WordPress. In doing so it will make adding content, managing the back end and just about everything else easier. Also, I am hoping that when I transfer it to WordPress it will become more optimized for Google, which will increase keyword rankings.
Declining Earnings
Looking at the last 12 months of earnings you will notice a drop off from last year till now. Earnings from the last 12 months - http://imgur.com/WsotZsj In May of 2015 it looks like the site earned right around $500, which is much higher than the $128 that it is earning now. However, the last 7 or so months have been consistent, which is a good sign. Even though the earnings are much lower now than they were a year ago, it is good to know that this site has the potential to earn $500/month because it has done it before.
Slightly Declining Traffic
In the last 12 months the site's traffic has declined, however, it looks like it is picking back up. Traffic from the last 12 months - http://imgur.com/aiYZW9W The decline is nothing serious, but there is a drop in traffic. Let's take a look at the complete history of this site's traffic so we can get a better idea of what is going on here: Complete traffic history - http://imgur.com/tYmboVn The above screenshot is from 2012 all the way up to right now. In the grand scheme of things you can see that the traffic is still doing well and it looks like it is on the upswing now. Those three risks mentioned above are the three biggest risks with this site at this point. It is always good to note the risks and do everything you can to prevent them from causing a problem.
##My Growth Strategy
Whenever I purchase a new site I always create an outline or plan on how I will grow the site. Right now, I have some basic ideas on how I will grow this site, but as I go on I will continue to change and optimize my strategies to be more effective. Below I have outlined my current plans to grow:
Add more Adsense Ads
The very first thing I will do once I get control of the site is add more ads per page. I am predicting that by just adding a few more ads per page I will be able to more than likely double the earnings. I will touch on exactly how I will be optimizing the ad layouts in future posts.
Test other Ad Networks
I will be doing a lot of testing and experimenting when it comes to the ad networks. I plan on trying out Adsense, Media.net, Quinstreet, Campus Explorer and finding the combination of those 4 which produces the most revenue. The Adsense and Media.net ads will perform well on the more general pages while Quinstreet and Campus Explorer ads will be geared towards the local search terms.
There will probably be other ad networks I will try out but these are the four which I will be using right away. If you are aware of any other ad networks out there which are geared towards the education niche please let me know in the comments below!

Target Local Keywords with new Content

I have already touched on this, but I will start producing content targeting these local keywords ASAP. The sooner I add the content to the site the sooner it will start to rank and bring in traffic. I will not be writing my own content and instead I will be outsourcing all of it via Upwork. I will show you all how I go about outsourcing content production and you can see my process for doing that.

##Goals for this Website

My goal for the website is to have it valued at $10,000+ within 12 months. Let's break down this larger goal into smaller chunks which will make achieving it easier and more attainable.

Earnings - $500/month

To get the site valued at $10,000 the site will need to be making $500/month using a 20x monthly multiple. Right now, the site is making around $130/month so it has a ways to go before it reaches the $500 a month mark. However, after doing some Adsense optimization I think we could push the earnings to around $300/month without much work. From there, it will come down to trying to bring in more traffic!

Traffic - 5,000 Visitors per Month

Why 5,000 visitors? Because that is how much traffic it is going to take to get to the $500/month goal. Let me explain how I came to this conclusion: The average RPM for this site is currently $50, which means for every 1,000 page views the site earns $50. After I optimize the Adsense layout for the site and add more ads per page I think I will be able to double the RPM to $100. Using an RPM of $100 the site will need 5,000 monthly visitors to earn $500 (the quick version of this math is in the short snippet at the end of this post). So 5,000 monthly visitors is the traffic goal I have set and am aiming for! The site is currently getting around 3,000 visitors per month so I will need to add an extra 2,000 visitors to get to this goal.

##Want to Follow this Case Study?

I will be using Youtube a lot in this case study so make sure to follow my Youtube channel here - www.youtube.com/c/joshshogren Other than that, I think that is going to bring us to the end of the introductory post for this new case study. I hope that you enjoyed reading and that you are excited to follow along! If you have any suggestions to make this case study better PLEASE let me know in the comments below. I want to make this case study the best one I have done yet. Talk to you all in the comment section.
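For anyone who wants to sanity-check the traffic goal above, here is the same calculation as a tiny snippet. It treats visitors and page views as roughly interchangeable, the same simplification the post makes, and the numbers are the ones quoted above.

```python
# Required monthly visitors to hit the revenue goal at a given RPM.
revenue_goal = 500  # target $/month
rpm = 100           # $ earned per 1,000 page views, after ad optimization

visitors_needed = revenue_goal / rpm * 1000
print(visitors_needed)  # 5000.0
```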

12 months ago, I was unemployed. Last week my side hustle got acquired by a $500m fintech company
reddit
LLM Vibe Score0
Human Vibe Score0.778
wutangsamThis week

12 months ago, I was unemployed. Last week my side hustle got acquired by a $500m fintech company

I’ve learned so much over the years from this subreddit. I thought I’d return the favour and share some of my own learnings. In November 2020 my best friend and I had an idea. “What if we could find out which stocks the Internet is talking about?” This formed the origins of Ticker Nerd. 9 months later we sold Ticker Nerd to Finder (an Australian fintech company valued at around $500m). In this post, I am going to lay out how we got there. How we came up with the idea First off, like other posts have covered - you don’t NEED a revolutionary or original idea to build a business. There are tonnes of “boring” businesses making over 7 figures a year e.g. law firms, marketing agencies, real estate companies etc. If you’re looking for an exact formula to come up with a great business idea I’m sorry, but it doesn’t exist. Finding new business opportunities is more of an art than a science. Although, there are ways you can make it easier to find inspiration. Below are the same resources I use for inspiration. I rarely ever come up with ideas without first searching one of the resources below for inspiration: Starter Story Twitter Startup Ideas My First Million Trends by the Hustle Trends VC To show how you how messy, random and unpredictable it can be to find an idea - let me explain how my co-founder and I came up with the idea for Ticker Nerd: We discovered a new product on Twitter called Exploding Topics. It was a newsletter that uses a bunch of software and algorithms to find trends that are growing quickly before they hit the mainstream. I had recently listened to a podcast episode from My First Million where they spoke about Motley Fool making hundreds of millions from their investment newsletters. We asked ourselves what if we could build a SaaS platform similar to Exploding Topics but it focused on stocks? We built a quick landing page using Carrd + Gumroad that explained what our new idea will do and included a payment option to get early access for $49. We called it Exploding Stock (lol). We shared it around a bunch of Facebook groups and subreddits. We made $1,000 in pre-sales within a couple days. My co-founder and I can’t code so we had to find a developer to build our idea. We interviewed a bunch of potential candidates. Meanwhile, I was trawling through Wall Street Bets and found a bunch of free tools that did roughly what we wanted to build. Instead of building another SaaS tool that did the same thing as these free tools we decided to pivot from our original idea. Our new idea = a paid newsletter that sends a weekly report that summarises 2 of the best stocks that are growing in interest on the Internet. We emailed everyone who pre-ordered access, telling them about the change and offered a full refund if they wanted. tl;dr: We essentially combined two existing businesses (Exploding Topics and Motley Fool) and made it way better. We validated the idea by finding out if people will actually pay money for it BEFORE we decided to build it. The idea we started out with changed over time. How to work out if your idea will actually make money It’s easy to get hung up on designing the logo or choosing the perfect domain name for your new idea. At this stage none of that matters. The most important thing is working out if people will pay money for it. This is where validation comes in. We usually validate ideas using Carrd. It lets you build a simple one page site without having to code. The Ticker Nerd site was actually built using a Carrd template. 
Here’s how you can do it yourself (at a high level): Create a Carrd pro account (yes it's a $49 one off payment but you’ll get way more value out of it). Buy a cheap template and send it to your Carrd account. You can build your own template but this will save you a lot of time. Once the template reaches your Carrd account, duplicate it. Leave the original so it can be duplicated for other ideas. Jump onto Canva (free) and create a logo using the free logos provided. Import your logo. Add copy to the page that explains your idea. Use the AIDA formula. Sign up to Gumroad (free) and create a pre-sale campaign. Create a discounted lifetime subscription or version of the product. This will be used pre-sales. Add the copy from the site into the pre-sale campaign on Gumroad. Add a ‘widget’ to Carrd and connect it to Gumroad using the existing easy integration feature. Purchase a domain name. Connect it to Carrd. Test the site works. Share your website Now the site is ready you can start promoting it in various places to see how the market reacts. An easy method is to find relevant subreddits using Anvaka (Github tool) or Subreddit Stats. The Anvaka tool provides a spider map of all the connected subreddits that users are active in. The highlighted ones are most relevant. You can post a thread in these subreddits that offer value or can generate discussion. For example: ‘I’m creating a tool that can write all your copy, would anyone actually use this?’ ‘What does everything think of using AI to get our copy written faster?’ ‘It’s time to scratch my own itch, I’m creating a tool that writes marketing copy using GPT-3. What are the biggest problems you face writing marketing copy? I’ll build a solution for it’ Reddit is pretty brutal these days so make sure the post is genuine and only drop your link in the comments or in the post if it seems natural. If people are interested they’ll ask for the link. Another great place to post is r/entrepreuerridealong and r/business_ideas. These subreddits expect people to share their ideas and you’ll likely make some sales straight off the bat. I also suggest posting in some Facebook groups (related to your idea) as well just for good measure. Assess the results If people are paying you for early access you can assume that it’s worth building your idea. The beauty of posting your idea on Reddit or in Facebook groups is you’ll quickly learn why people love/hate your idea. This can help you decide how to tweak the idea or if you should drop it and move on to the next one. How we got our first 100 customers (for free) By validating Ticker Nerd using subreddits and Facebook groups this gave us our first paying customers. But we knew this wouldn’t be sustainable. We sat down and brainstormed every organic strategy we could use to get traction as quickly as possible. The winner: a Product Hunt launch. A successful Product Hunt launch isn’t easy. You need: Someone that has a solid reputation and audience to “hunt” your product (essentially an endorsement). An aged Product Hunt account - you can’t post any products if your account is less than a week old. To be following relevant Product Hunt members - since they get notified when you launch a new product if they’re following you. Relationships with other builders and makers on Product Hunt that also have a solid reputation and following. Although, if you can pull it off you can get your idea in front of tens of thousands of people actively looking for new products. 
Over the next few weeks, I worked with my co-founder on connecting with different founders, indie hackers and entrepreneurs mainly via Twitter. We explained to them our plans for the Product Hunt launch and managed to get a small army of people ready to upvote our product on launch day. We were both nervous on the day of the launch. We told ourselves to have zero expectations. The worst that could happen was no one signed up and we were in the same position as we’re in now. Luckily, within a couple of hours Ticker Nerd was on the homepage of Product Hunt and in the top 10. The results were instant. After 24 hours we had around 200 people enter their payment details to sign up for our free trial. These signups were equal to around $5,800 in monthly recurring revenue. \-- I hope this post was useful! Drop any questions you have below and I’ll do my best to respond :)

Made $19.2k this month, and just surpassed $1000 the last 24 hours. What I did and what's next.
reddit
LLM Vibe Score0
Human Vibe Score1
dams96This week

Made $19.2k this month, and just surpassed $1000 the last 24 hours. What I did and what's next.

It's the first time I hit $1000+ in 24 hours and I had no one to share it with (except you guys). I'm quite proud of my journey, and I would have thought that making $1000 in a day would make me ecstatic, but actually it's not the case. Not sure if it's because my revenue has grown in incremental steps so I had time to "prepare" myself to achieve this at one point, or just that I'm nowhere near my goal of 100k/month so that I'm not that affected by it. But it's crazy to think that my goal was to make 100$ daily at the end of 2024.

So for those who don't know me (I guess most of you), I build mobile apps and ship them as fast as I can. Most of them are in the AI space. I already made a post here on how I became a mobile app developer so you can check it for more details, but essentially here's what I did: Always loved creating my own things and solving problems. Built multiple YouTube channels since I was 15 (mobile gaming actually) that all worked great (but it was too niche so not that scalable, didn't like that). Did a few businesses here and there (dropshipping, selling merch at school, etc). Finished my master's degree in engineering about 2 years ago. Worked for a while at a famous watch industry company and saw my potential. The combo of health issues, fixed salary (although it was quite a lot), and me wanting to be an entrepreneur made me leave the company. Created a TikTok account in mobile tech (got 10+ million views the first 3 days), managed to grow it to 200k subs in about 3 months. Got plenty of collabs for promoting mobile apps (between $500 - $2000 for a collab). Said fuck it, I should do my own apps and market them on my TikTok instead of doing collabs.

Me wanting to build my own apps happened around May-June 2023. Started my TikTok in Feb 2023. At this point I had already 150k+ subs on TikTok. You guys need to know that I suck at coding big time. During my studies I tried to limit coding as much as I could because I was a lazy bast*rd, even though I knew it would come back to bite me in the ass one day. But an angel appeared to me in broad daylight, and that angel was called GPT-4. I subscribed for 20$/month to get access, and instantly I saw the potential of AI and how much it could help me. Last year GPT-4 was ahead of its time and could already code me basic apps. I already had a Mac so I just downloaded Xcode and that was it. My 1st app was a wallpaper app, and I kid you not, 90% of it was made by AI. Yes, sometimes I had to try again and again with different prompts, but it was still so much faster compared to if I had to learn coding from scratch and write code with my own hands. The only thing I didn't do was implement the in-app purchases, for which I found a guy on Fiverr to do it for me for 50$.

After about 2 months of on-off coding, my first app was ready to be launched. So it was launched, and had a great successful launch without doing any videos at that point (iOS 17 was released and my app was the first one alongside another one to offer live wallpapers for iOS 17. I knew that there was huge app potential there when iOS 17 was released in beta, as Apple changed their live wallpaper feature). I then made a video a few weeks after on my mobile TikTok channel, made about 1 million views in 48 hours, which brought me around 40k additional users. Was top 1 in the graphics and design category for a few weeks (in France, as I'm French so my TikTok videos are in French). And was top 100 in that same category in 120+ countries. Made about 500$?

Okay, that was trash, but I had no idea how to monetize the app correctly at that point. It was still a huge W to me and proved to me that I could successfully launch apps. Then I learned ASO (App Store Optimization) in depth, searched the internet, followed mobile app developers on Twitter, checked YouTube videos, you name it. I was eager to learn more. I needed more. Then I just iterated, built my 2nd app in less than a month, my 3rd in 3 weeks and so on. I just built my 14th app in 3 days and it's now in review. Every time I manage to reuse some of my other apps' code in my new one, which is why I can build them so much faster now. I know how to monetize my apps better by checking out my competitors. I learn so much by just "spying" on other apps. Funnily enough, I only made this one TikTok video on my main account to promote my app. For all my other apps, I didn't do a single video where I showcase them; the downloads have come purely from ASO.

I still use AI every day. I'm still not good at coding (a bit better than when I started). I use AI to create my app icons (Midjourney or the new AI model Flux which is great). I use Figma + Midjourney to create my App Store screenshots (and they actually look quite good). I use GPT-4o and Claude 3.5 Sonnet to code most of my apps' features. I use GPT-4o to localize my apps (if you want to optimize the number of downloads I strongly suggest localizing your app; it takes me about 10 minutes thanks to AI - a rough sketch of what that kind of script looks like is at the end of this post).

Now what are my next goals? To achieve the 100k/month I need to change my strategy a little. Right now the $20k/month comes purely from organic downloads, I didn't do any paid advertising. It will be hard for me to keep on launching new apps and rely on ASO to reach the 100k mark. The best bet to reach 100k is to collab with content creators and have them create a viral video showcasing your app. Depending on the app it's not that easy, but luckily some of my apps can go viral, so I will need to find the right content creators. The second way is to try TikTok/Meta ads. I can check (have checked) all the ads that have been made by my competitors (thank you EU), so what I would do is copy their ad concept and create similar ads to theirs. Some of them have millions in ad budget so I know they create high-converting ads, so you don't need to try to create an ad creative from scratch.

My only big fear is getting banned by Apple (through no fault of my own). In just a snap of a finger they can ban you from the platform, and that shit scares me. And you pretty much can't do anything. So that's about it for me. I'm quite proud of myself, not going to lie. I have been battling so many health issues these past years, where I just stay in bed all day, so I'm surprised to be able to make it work. Anyways, feel free to ask questions. I hope it was interesting for some of you at least. PS: My new app was just approved by app review, let the app gods favor me and bring me many downloads! Also forgot to talk about a potential $100k+ acquisition of one of my apps, but if that ever happens I'll make a post on it.
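Since people often ask how the AI localization step works in practice, here is a rough sketch of the kind of helper script this workflow implies. This is not the author's actual code: the file path, language list, model name and prompt are placeholder assumptions, and it assumes the openai Python package is installed with an OPENAI_API_KEY set in the environment. Always review the output before shipping it.

```python
# Rough sketch: translate the values of an iOS Localizable.strings file with GPT.
# Paths, languages, model and prompt are placeholders - not the author's real script.
from pathlib import Path

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
LANGUAGES = ["French", "German", "Spanish"]  # whichever locales you want to ship

def translate_strings(source_file: str, language: str) -> str:
    source = Path(source_file).read_text(encoding="utf-8")
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": (
                "Translate the values (not the keys) of this iOS Localizable.strings "
                f"file into {language}. Keep the exact same format and placeholders:\n\n"
                + source
            ),
        }],
    )
    return resp.choices[0].message.content

for lang in LANGUAGES:
    translated = translate_strings("en.lproj/Localizable.strings", lang)
    Path(f"{lang}.Localizable.strings").write_text(translated, encoding="utf-8")
```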

5 Habits to go from Founder to CEO
reddit
LLM Vibe Score0
Human Vibe Score0.6
FalahilThis week

5 Habits to go from Founder to CEO

Over the years, I've gathered some knowledge about transitioning from a startup founder to a CEO. I started my company 7 years ago. We are now not super big (65 people), but we have learned a lot. We raised $19M in total and we are now profitable. The transition from Founder to CEO was crucial. Your startup begins to mature and scale and you need to scale with it. It's often a challenging phase, but I've managed to summarize it into five habbits. Say no to important things every day Being able to say "no" to important tasks every day is an essential practice for a growing leader. It's a reality that as the magnitude of your company or ideas expands, so does the influx of good ideas and opportunities. However, to transform from a mere hustler to a true leader, you have to become selective. This means learning to refuse good ideas, which is crucial if you want to consistently execute the outstanding ones. The concept that "Startups don't starve, they drown" resonates deeply because it underlines how challenging it can be to reject opportunities. A key strategy to develop this skill is time-constraining your to-do list. Here's how you can do it: Weekly: Formulate a weekly to-do list, including only those tasks that you're sure to complete within the week. Leave some buffer room for unexpected issues. If there's any doubt about whether you'll have time for a certain task, it should not feature on your weekly list. I use Todoist and Notion for task management. Daily: Apply the same rule while creating your daily to-do list. Only include tasks that you're confident about accomplishing that day. If a task seems too big to fit into one day, break it down into manageable chunks. Journaling Journaling is a powerful strategy that can help an individual transition from a reactive approach to a proactive one. As founders, we often find ourselves caught up in a cycle of endless tasks, akin to chopping trees in a dense forest. However, to ensure sustainable growth, it is crucial to develop an ability to "zoom out", or to view the bigger picture. I use The Morning Pages method, from Julia Cameron. It consists of writing each morning about anything that comes to mind. The act of writing effectively combines linear, focused thinking with the benefits of a thoughtful conversation. If you just want to journal, you can use Day One app (The free version will be enough). If you want to go a bit deeper, you can try a coaching app. I use Wave.ai and I also hired it for the managers in the company because it combines both journaling with habit building. &#x200B; Building Robust Systems and Processes (I know, it is boring and founders hate this) As a founder, you often need to wear multiple hats and juggle various roles. But as a CEO, it's vital to establish strong systems and processes that enable the business to function smoothly, even without your direct involvement. This includes: Implementing project management systems. Establishing clear lines of communication and accountability. Designing efficient workflows and procedures. To many founders, developing these systems might seem monotonous or even tedious. After all, the allure of envisioning the next big idea often proves more exciting. I experienced the same predicament. In response, I brought onboard a competent COO who excelled in systematizing processes. This strategy allowed me to kickstart initiatives and explore them in a flexible, less structured manner. 
Once an idea showed signs of gaining traction, my COO stepped in to streamline it, crafting a process that turned the fledgling idea into a consistent business operation. &#x200B; Meditating Meditation is about reprogramming unconscious mental processes by repeatedly performing fundamental tasks with a distinct intention. This practice can be even more crucial to leadership than acquiring a business school education. Because meditation provides the most direct route to understanding your mind's workings and thus, forms the most effective basis for transforming it. To transition from a founder to a CEO, a significant shift in your mindset is required. This shift involves moving from a hustle mentality to precision, from acting as a superhero solving problems to consciously stepping back, thereby providing room for your team members to discover their own superpowers. It's about shifting your success indicators - from individual achievements to the triumphs of your team. This transformation might not feel comfortable initially, and your instincts, shaped by your scrappy founder phase, might resist this change. However, with consistent practice, you can align your instincts with the stage of your company, promoting more effective leadership. This is where the value of meditation truly shines. It allows you to identify your distinct thought patterns in real time and, over time, modify them. I use Headspace a lot, and I also encourage the employees to use it. The company pays the subscription as a perk. &#x200B; Balancing the Macro and the Micro As the CEO, your primary focus should be on the big picture – your company's vision and strategy. However, you also need to keep an eye on the details, as these can make or break your execution. It's all about balance: Delegate the details but stay informed. Prioritize strategic planning but be ready to dive into the trenches when needed. Keep your eye on your long-term vision but adapt to short-term realities. The transition from founder to CEO isn't about giving up what made you successful initially but augmenting it with additional skills, perspectives, and practices. It's a personal and professional evolution that can lead to greater success for both you and your business. Every great CEO was once a founder. It's just about taking the next step. I’d love to hear your experiences or any tips you might have for this transition. In which step of your journey are you right now? Do you have employees already? What are your main challenges right now?

I run an AI automation agency (AAA). My honest overview and review of this new business model
reddit
LLM Vibe Score0
Human Vibe Score1
AI_Scout_OfficialThis week

I run an AI automation agency (AAA). My honest overview and review of this new business model

I started an AI tools directory in February, and then branched off that to start an AI automation agency (AAA) in June. So far I've come across a lot of unsustainable "ideas" to make money with AI, but at the same time a few diamonds in the rough that aren't fully tapped into yet- especially the AAA model. Thought I'd share this post to shine light into this new business model and share some ways you could potentially start your own agency, or at the very least know who you are dealing with and how to pick and choose when you (inevitably) get bombarded with cold emails from them down the line. Foreword Running an AAA does NOT involve using AI tools directly to generate and sell content directly. That ship has sailed, and unless you are happy with $5 from Fiverr every month or so, it is not a real business model. Cry me a river but generating generic art with AI and slapping it onto a T-shirt to sell on Etsy won't make you a dime. At the same time, the AAA model will NOT require you to have a deep theoretical knowledge of AI, or any academic degree, as we are more so dealing with the practical applications of generative AI and how we can implement these into different workflows and tech-stacks, rather than building AI models from the ground up. Regardless of all that, common sense and a willingness to learn will help (a shit ton), as with anything. Keep in mind - this WILL involve work and motivation as well. The mindset that AI somehow means everything can be done for you on autopilot is not the right way to approach things. The common theme of businesses I've seen who have successfully implemented AI into their operations is the willingess to work with AI in a way that augments their existing operations, rather than flat out replace a worker or team. And this is exactly the train of thought you need when working with AI as a business model. However, as the field is relatively unsaturated and hype surrounding AI is still fresh for enterprises, right now is the prime time to start something new if generative AI interests you at all. With that being said, I'll be going over three of the most successful AI-adjacent businesses I've seen over this past year, in addition to some tips and resources to point you in the right direction. so.. WTF is an AI Automation Agency? The AI automation agency (or as some YouTubers have coined it, the AAA model) at its core involves creating custom AI solutions for businesses. I have over 1500 AI tools listed in my directory, however the feedback I've received from some enterprise users is that ready-made SaaS tools are too generic to meet their specific needs. Combine this with the fact virtually no smaller companies have the time or skills required to develop custom solutions right off the bat, and you have yourself real demand. I would say in practice, the AAA model is quite similar to Wordpress and even web dev agencies, with the major difference being all solutions you develop will incorporate key aspects of AI AND automation. Which brings me to my second point- JUST AI IS NOT ENOUGH. Rather than reducing the amount of time required to complete certain tasks, I've seen many AI agencies make the mistake of recommending and (trying to) sell solutions that more likely than not increase the workload of their clients. For example, if you were to make an internal tool that has AI answer questions based on their knowledge base, but this knowledge base has to be updated manually, this is creating unnecessary work. 
As such I think one of the key components of building successful AI solutions is incorporating the new (generative AI/LLMs) with the old (programmatic automation - think Zapier, APIs, etc.). Finally, for this business model to be successful, ideally you should target a niche in which you have already worked and understand pain points and needs. Not only does this make it much easier to get calls booked with prospects, the solutions you build will have much greater value to your clients (meaning you get paid more). A mistake I've seen many AAA operators make (and I blame this on the "Get Rich Quick" YouTubers) is focusing too much on a specific productized service, rather than really understanding the needs of businesses. The former is better done via a SaaS model, but when going the agency route the only thing that makes sense is building custom solutions. This is why I always take a consultant-first approach. You can only build once you understand what they actually need and how certain solutions may impact their operations, workflows, and bottom line.

Basics of How to Get Started

Pick a niche. As I mentioned previously, preferably one that you've worked in before. Niches I know of that are actively being bombarded with cold emails include real estate, e-commerce, auto dealerships, lawyers, and medical offices. There is a reason for this, but I will tell you straight up this business model works well if you target any white-collar service business (internal tools approach) or high-volume businesses (customer-facing tools approach).

Set up your toolbox. If you wanted to start a pressure washing business, you would need a pressure washer. This is no different. For those without programming knowledge, I've seen two common ways AAAs get set up to build - one is having a network of on-call web developers, whether it's personal contacts or simply going to Upwork or any talent sourcing agency. The second is having an arsenal of no-code tools. I'll get to this more in a second, but this works because at its core, when we are dealing with the practical applications of AI, the code itself is quite simple.

Start cold sales. Unless you have a network already, this is not a step you can skip. You've already picked a niche, so all you have to do is find the right message. Keep cold emails short, sweet, but enticing - and it will help a lot if you did step 1 correctly and intimately understand who your audience is. I'll be touching base later about how you can leverage AI yourself to help you with outreach and closing.

The beauty of gen AI and the AAA model

You don't need to be a seasoned web developer to make this business model work. The large majority of solutions that SME clients want are best done using an API for an LLM for the actual AI aspect. The value we create with the solutions we build comes from the conceptual framework and design that not only does what they need it to but integrates smoothly with their existing tech stack and workflow. The actual implementation is quite straightforward once you understand the high-level design and know which tools you are going to use. To give you a sense, even if you plan to build out these apps yourself (say in Python), the large majority of the nitty-gritty technical work has already been done for you, especially if you leverage Python libraries and packages that offer high-level abstraction for LLM-related functions. For instance, calling GPT can be as little as a single line of code.
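To make that concrete, here is roughly what that single call looks like with the OpenAI Python SDK. The model name and prompt are just examples of mine, not the only way to do it, and it assumes the openai package is installed with an OPENAI_API_KEY set in your environment.

```python
# The "AI" part of many client solutions really is just one API call like this.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
resp = client.chat.completions.create(
    model="gpt-4o-mini",  # any chat model works here
    messages=[{"role": "user", "content": "Summarize this support ticket in one sentence: ..."}],
)
print(resp.choices[0].message.content)
```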
(And there are no-code tools where these functions are simply an icon on a GUI). Aside from understanding the capabilities and limitations of these tools and frameworks, the only thing that matters is being able to put them together in a way that makes sense for what you want to build. Which is why outsourcing and no-code tools both work in our case.

Okay... but how TF am I supposed to actually build out these solutions?

Now the fun part. I highly recommend getting familiar with Langchain and LlamaIndex. Both are Python libraries that help a lot with the high-level LLM abstraction I mentioned previously. The two most important aspects include being able to integrate internal data sources/knowledge bases with LLMs, and having LLMs perform autonomous actions. The two most common methods respectively are RAG and output parsing.

RAG (Retrieval Augmented Generation)

If you've ever seen a tool that seemingly "trains" GPT on your own data, and wonder how it all works - well, I have an answer for you. At a high level, the user query is first fed to what's called a vector database to run a vector search. Vector search basically lets you do semantic search, where you are searching data based on meaning. The vector database then retrieves the most relevant sections of text as they relate to the user query, and this text gets APPENDED to your GPT prompt to provide extra context to the AI. Further, with prompt engineering, you can limit GPT to only generate an answer if it can be found within this extra context, greatly limiting the chance of hallucination (this is where AI makes random shit up). Aside from vector databases, we can also implement RAG with other data sources and retrieval methods, for example SQL databases (via parsing the outputs of LLMs - more on this later).

Autonomous Agents via Output Parsing

A common need of clients has been having AI actually perform tasks, rather than simply spitting out text. For example, with autonomous agents, we can have an e-commerce chatbot do the work of a basic customer service rep (i.e. look into orders, refunds, shipping). At a high level, what's going on is that the response of the LLM is being used programmatically to determine which API to call. Keeping on with the e-commerce example, if I wanted a chatbot to check shipping status, I could have an LLM response within my app (not shown to the user) with a prompt that outputs a random hash or string, and programmatically I can determine which API call to make based on this hash/string. And using the same fundamental concept as with RAG, I can append the API response to a final prompt that would spit out the answer for the user.
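To make both ideas a bit more tangible, here is a stripped-down sketch in plain Python. This is not how Langchain or LlamaIndex implement it internally, just the underlying pattern: `vector_search` stands in for whatever vector database you end up using (Pinecone, Chroma, etc.), the two e-commerce functions are placeholders for a client's real APIs, and the prompts are only examples. It assumes the openai package with an OPENAI_API_KEY set.

```python
# Stripped-down sketch of RAG + output parsing. vector_search and the two
# e-commerce functions are placeholders - swap in real retrieval and real APIs.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask_gpt(prompt: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

# --- RAG: retrieve relevant chunks, append them to the prompt, and tell the
# model to answer ONLY from that context (this is what limits hallucination).
def answer_with_context(question: str, vector_search) -> str:
    chunks = vector_search(question, top_k=3)  # most relevant sections of text
    context = "\n\n".join(chunks)
    prompt = (
        "Answer the question using ONLY the context below. "
        "If the answer is not in the context, say you don't know.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    return ask_gpt(prompt)

# --- Output parsing: a hidden LLM response decides which API we call.
def check_shipping_status(query: str) -> str:
    return "Order #1234 shipped Monday, arriving Thursday."  # placeholder API call

def start_refund_flow(query: str) -> str:
    return "Refund request created for order #1234."  # placeholder API call

def handle_customer_query(query: str) -> str:
    intent = ask_gpt(
        "Classify this e-commerce request as exactly one word - "
        f"ORDER_STATUS, REFUND or OTHER:\n{query}"
    ).strip()
    if intent == "ORDER_STATUS":
        api_result = check_shipping_status(query)
    elif intent == "REFUND":
        api_result = start_refund_flow(query)
    else:
        api_result = "No internal data needed."
    # Same trick as RAG: append the API response to a final, user-facing prompt.
    return ask_gpt(
        "Write a friendly reply to this customer.\n"
        f"Customer message: {query}\nInternal data: {api_result}"
    )
```

In a real build you would swap the placeholders for Langchain/LlamaIndex retrievers or a no-code equivalent; the pattern itself stays the same.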
How No Code Tools Can Fit In (With some example solutions you can build)

With that being said, you don't necessarily need to do all of the above by coding yourself, with Python libraries or otherwise. However, I will say that having that high-level overview will help IMMENSELY when it comes to using no-code tools to do the actual work for you. Regardless, here are a few common solutions you might build for clients as well as some no-code tools you can use to build them out.

Ex. Solution 1: AI Chatbots for SMEs (Small and Medium Enterprises)

This involves creating chatbots that handle user queries, lead gen, and so forth with AI, and will use the principles of RAG at heart. After getting the required data from your client (i.e. product catalogues, previous support tickets, FAQ, internal documentation), you upload this into your knowledge base and write a prompt that makes sense for your use case. One no-code tool that does this well is MyAskAI. The beauty of it, especially for building external chatbots, is the ability to quickly ingest entire websites into your knowledge base via a sitemap, and to bulk upload files. Essentially, they've covered all of the grunt work you'd otherwise do manually. Finally, you can create an inline or chat widget on your client's website with a few lines of HTML, or alternatively integrate it with a Slack/Teams chatbot (if you are going for an internal Q&A chatbot approach). Other tools you could use include Botpress and Voiceflow, however these are less for RAG and more for building out complete chatbot flows that may or may not incorporate LLMs. Both apps are essentially GUIs that eliminate the pain and tears of trying to implement complex flows manually, and both natively incorporate AI intents and a knowledge base feature.

Ex. Solution 2: Internal Apps

Similar to the first example, except we go beyond just chatbots to tools such as report generation and really any sort of internal tool or automation that may incorporate LLMs. For instance, you can have a tool that automatically generates replies to inbound emails based on your client's knowledge base. Or an automation that does the same thing but for replies to Instagram comments. Another example could be a tool that generates a description and screenshot based on a URL (useful for directory sites, made one for my own :P). Getting into more advanced implementations of LLMs, we can have tools that generate entire drafts of reports (think 80+ pages), based not only on data from a knowledge base but also the writing style, format, and author voice of previous reports. One good tool to create content generation panels for your clients would be MindStudio. You can train LLMs via prompt engineering in a structured way with your own data to essentially fine-tune them for whatever text you need them to generate. Furthermore, it has a GUI where you can dictate the entire AI flow. You can also upload data sources in multiple formats, including PDF, CSV, and Docx. For automations that require interactions between multiple apps, I recommend the OG Zapier/Make.com if you want a no-code solution. For instance, for the automatic email reply generator, I can have a trigger such that when an email is received, a custom AI reply is generated by MyAskAI, and finally a draft is created in my email client. Or, for an automation where I create social media posts on multiple platforms based on an RSS feed (news feed), I can implement this directly in Zapier with their native GPT action (see screenshot). As for more complex LLM flows that may require multiple layers of LLMs, data sources, and APIs working together to generate a single response, i.e. a long-form 100-page report, I would recommend tools such as Stack AI or Flowise (open-source alternative) to build these solutions out. Essentially, you get most of the functions and features of Python packages such as Langchain and LlamaIndex in a GUI. See screenshot for an example of a flow.

How the hell are you supposed to find clients?

With all that being said, none of this matters if you can't find anyone to sell to. You will have to do cold sales, one way or the other, especially if you are brand new to the game. And what better way to sell your AI services than with AI itself?
If we want to integrate AI into the cold outreach process, first we must identify what it's good at doing, and that's obviously writing a bunch of text in a short amount of time. Similar to the solutions that an AAA can build for its clients, we can take advantage of the same principles in our own sales processes.

How to do outreach

Once you've identified your niche and their pain points/opportunities for automation, you want to craft a compelling message which you can send via cold email and cold calls to get prospects booked on demos/consultations. I won't get into too much detail in terms of exactly how to write emails or calling scripts, as there are millions of resources to help with this, but I will tell you a few key points you want to keep in mind when doing outreach for your AAA. First, you want to keep in mind that many businesses are still hesitant about AI and may not understand what it really is or how it can benefit their operations. However, we can take advantage of how mass media has been reporting on AI this past year - at the very least people are AWARE that sooner or later they may have to implement AI into their businesses to stay competitive. We want to frame our message in a way that introduces generative AI as a technology that can have a direct, tangible, and positive impact on their business. Although it may be hard to quantify, I like to include estimates of man-hours saved or costs saved at least in my final proposals to prospects. Times are TOUGH right now, and money is expensive, so you need to have a compelling reason for businesses to get on board.

Once you've gotten your messaging down, you will want to create a list of prospects to contact. Tools you can use to find prospects include Apollo.io, reply.io, zoominfo (expensive af), and Linkedin Sales Navigator. What specific job titles, etc. to target will depend on your niche, but for smaller companies this will tend to be the owner. For white-collar niches, i.e. law, the professional that will be directly benefiting from the tool (i.e. partners) may be better to contact. And for larger organizations you may want to target business improvement and digital transformation leads/directors - these are the people directly in charge of projects like what you may be proposing.

Okay - so you have your message, and your list, and now all it comes down to is getting the good word out. I won't be going into the details of how to send these out; a quick Google search will give you hundreds of resources for cold outreach methods. However, personalization is key, and beyond simple dynamic variables you want to make sure you can either personalize your email campaigns directly with AI (SmartWriter.ai is an example of a tool that can do this), or at the very least have the ability to import email messages programmatically. Alternatively, ask ChatGPT to make you a Python script that can take in a list of emails, scrape info based on their LinkedIn URL or website, and pass all of this into a GPT prompt that specifies your messaging to generate an email. From there, send away.
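For reference, here is a rough sketch of what that kind of script can look like. It is only an illustration under a few assumptions: the CSV columns, prompt and model are placeholders, scraping LinkedIn itself generally isn't feasible without authentication (so this version only pulls text from the prospect's website), and it assumes the openai, requests and beautifulsoup4 packages are installed with an OPENAI_API_KEY set.

```python
# Rough sketch of an AI-personalized cold email generator.
# Expects a prospects.csv with "email", "company" and "website" columns (placeholder).
import csv

import requests
from bs4 import BeautifulSoup
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def scrape_site_text(url: str, max_chars: int = 2000) -> str:
    # Grab some visible text from the prospect's site to personalize against.
    html = requests.get(url, timeout=10).text
    return BeautifulSoup(html, "html.parser").get_text(" ", strip=True)[:max_chars]

def draft_email(company: str, site_text: str) -> str:
    prompt = (
        "Write a short, friendly cold email (under 120 words) to the owner of "
        f"{company}, offering custom AI automation for their business. "
        f"Personalize it using this text from their website:\n\n{site_text}"
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

with open("prospects.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        body = draft_email(row["company"], scrape_site_text(row["website"]))
        print(f"To: {row['email']}\n{body}\n" + "-" * 40)
```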
How tf do I close?

Once you've got some prospects booked in on your meetings, you will need to close deals with them to turn them into clients.

Call #1: Consultation

Tying back to when I mentioned you want to take a consultant-first approach, you will want to listen closely to their goals and needs and understand their pain points. This would be the first call, and typically I would provide a high-level overview of different solutions we could build to tackle these. It really helps to have a presentation available, so you can graphically demonstrate key points and key technologies. I like to use Plus AI for this, it's basically a Google Slides add-on that can generate slide decks for you. I copy and paste my default company messaging, add some key points for the presentation, and it comes out with pretty decent slides.

Call #2: Demo

The second call would involve a demo of one of these solutions, and typically I'll quickly prototype it with boilerplate code I already have, otherwise I'll cook something up in a no-code tool. If you have a niche where one type of solution is commonly demanded, it helps to have a general demo set up to be able to handle a larger volume of calls, so you aren't burning yourself out. I'll also elaborate on what the final product would look like in comparison to the demo.

Call #3 and Beyond:

Once the initial consultation and demo are complete, you will want to alleviate any remaining concerns from your prospects and work with them to reach a final work proposal. It's crucial you lay out exactly what you will be building (in writing) and ensure the prospect understands this. Furthermore, be clear and transparent with timelines and communication methods for the project. In terms of pricing, you want to take this from a value-based approach. The same solution may be worth a lot more to client A than client B. Furthermore, you can create "add-ons" such as monthly maintenance/upgrade packages, training sessions for employees, and so forth, separate from the initial setup fee you would charge.

How you can incorporate AI into marketing your business

Beyond cold sales, I highly recommend creating a funnel to capture warm leads. For instance, I do this currently with my AI tools directory, which links directly to my AI agency and has consistent branding throughout. Warm leads are much more likely to close (and honestly, much nicer to deal with). However, even without an AI-related website, at the very least you will want to create a presence on social media and the web in general. As with any agency, you will want a basic professional presence. A professional virtual address helps, in addition to a Google Business Profile (GBP) and TrustPilot. A GBP (especially for local SEO) and Trustpilot page also help improve the look of your search results immensely. For GBP, I recommend using ProfilePro, which is a Chrome extension you can use to automate SEO work for your GBP. Aside from SEO-optimized business descriptions based on your business, it can handle Q/A answers, responses, updates, and service descriptions based on local keywords.

Privacy and Legal Concerns of the AAA Model

Aside from typical concerns for agencies relating to service contracts, there are a few issues (especially when using no-code tools) that will need to be addressed to run a successful AAA. Most of these surround privacy concerns when working with proprietary data. In your terms with your client, you will want to clearly define hosting providers and any third-party tools you will be using to build their solution, and include a DPA with these third parties listed as subprocessors if necessary. In addition, you will want to implement best practices like redacting private information from data being used for building solutions.
In terms of addressing concerns directly from clients, it helps if you host your solutions on their own servers (not possible with AI tools), and address the fact that only ChatGPT queries in the web app, not OpenAI API calls, will be used to train OpenAI's models (as reported by mainstream media). The key here is to be open and transparent with your clients about ALL the tools you are using, where their data will be going, and make sure to get this all in writing.

Have fun, and keep an open mind

Before I finish this post, I just want to reiterate the fact that this is NOT an easy way to make money. Running an AI agency will require hours and hours of dedication and work, and constantly rearranging your schedule to meet prospect and client needs. However, if you are looking for a new business to run, and have a knack for understanding business operations and are genuinely interested in the practical applications of generative AI, then I say go for it. The clock is ticking before AAA becomes the new dropshipping or SMMA, and I'm a firm believer that those who set foot first and establish themselves in this field will come out on top. And remember, while 100 thousand people may read this post, only 2 may actually take initiative and start.

 I just sold my startup for $200,000 after 11 months. AMA
reddit
LLM Vibe Score0
Human Vibe Score0
jeannenThis week

I just sold my startup for $200,000 after 11 months. AMA

Last August, I was looking for a startup idea I could grow, made an MVP in a week, and launched it. I received the $200,000 wire from the buyer a couple of days ago. I found tons of useful info online for free, so I hope this can be my way of giving back :) Here is some background:

Idea

I got the idea when trying to write a tweet using Google Doc's transcription tool, which was terrible. I was pretty sure I wasn't the only one too lazy to type, so I made my own solution using AI to transcribe and reformat voice notes into any kind of content. I called it Talknotes, mainly because it was the only domain available lol

Validation:

My rule is to only reinvest what the project generates. After listing on startup directories and posting on Twitter, I generated $700 in 10 days. It wasn't much, but enough to show interest and keep me motivated. I added user-requested features, but the launch effect wore off, and daily revenues dropped to $0 after a few weeks. I almost gave up, but friends encouraged me to continue. In October, I launched on ProductHunt and it blew up. It became Product of the Day and reached $1500 MRR thanks to media coverage. I initially built everything using vanilla JS/CSS/HTML + Node for the backend. But that's pretty limited for apps with lots of interactivity, so I rebuilt the app using Nuxt.js to make it easier to ship new features. Then, I launched ads on Facebook and implemented a feedback loop: get new users, learn about them through onboarding, make more ads based on onboarding data. This doubled MRR in about 2 months.

Burnout and Sale:

In May, I had a bad burnout after emergency bug fixes. This made it hard to work on the app afterwards. At this point MRR was around $7000 and total revenues around $70,000. I listed it on Acquire.com for $200,000, a very good price for the buyer considering revenues and growth. I could've gotten $300,000 with buyer financing or earn-outs, but I wanted cash; $200,000 today is better than $300,000 in a year. Everything was smooth until we tried using Escrow, which almost fucked up the deal (details here). Long story short, I had to threaten them with a sponsored post on Twitter explaining what they did + legal action. They sent the refund the very next day, and we completed the transfer directly.

Now, this isn't an overnight success. It's the result of 7 years of grind. I launched over 40 projects since I started, and most of them failed. I often worked 100 hours per week, and I rarely go out or meet many people. It's not for everyone, but I'm fine with it. With the profit from the app + sale, and other projects, I have close to 1/3 of a million dollars. I could retire in Asia if I wanted. Just mind-blowing to think I wrote funny characters in a code editor and sold it for the price of a house lol

Edit 1: A few people got confused. I said it's 7 years of grind and most of my projects failed, not that I was not making money. I also said I OFTEN worked 100h/week, not every week :) Since I learned to code 2 years ago I've made close to $400k from my app's profit + exit (this one + another one for $65k last year). And before that I was making money as a marketing freelancer. Also, I dropped out after high school, so I had to learn everything from scratch; it takes time!

Edit 2: Lots of people asked how/where I learned to code in 2 months. I wrote a blog/journal about it back then with links to resources, you can find it here if you're interested

New Entrepreneur Looking to Learn
reddit
LLM Vibe Score0
Human Vibe Score1
jlimbsThis week

New Entrepreneur Looking to Learn

Hi all, long-time lurker, and first-time poster. About six weeks ago, I left my full-time career in tech to dive headfirst into launching an AI-focused startup. It’s my first time as a founder (well, co-founder), and the journey already feels exhilarating and terrifying at the same time! I’ve got a tech team onboard, and we are starting to build out our platform. To make sure I'm building the right thing, it's a top priority for me to connect with our target audience of small business owners for discovery conversations. I’m eager to learn about: How (and if) you’re currently using AI in your business. What kind of value/impact does AI need to deliver for you to be willing to use it in your business. What challenges or blockers do you perceive around implementing AI solutions. I’m open to speaking with US-based business owners with companies ranging from 5-50 employees or so, and am particularly interested if you are non-technical. If you’re willing to share your experience, I’d love to chat for 15-30 minutes. Feel free to comment here or DM me if you’re interested—your insights (and trolling) would mean the world as I navigate this journey. Thanks in advance! P.S. - I know I'm being a little cagey about the details of what my start-up is doing. While I don't think we have the most innovative idea in the world, I'd prefer to hold off on posting details publicly. This isn't a backdoor sales call, I'm just looking to ask questions and learn.

How a Small Startup in Asia Secured a Contract with the US Department of Homeland Security
reddit
LLM Vibe Score0
Human Vibe Score1
Royal_Rest8409This week

How a Small Startup in Asia Secured a Contract with the US Department of Homeland Security

Uzair Javaid, a Ph.D. with a passion for data privacy, co-founded Betterdata to tackle one of AI's most pressing challenges: protecting privacy while enabling innovation. Recently, Betterdata secured a lucrative contract with the US Department of Homeland Security, 1 of only 4 companies worldwide to do so and the only one in Asia. Here's how he did it: The Story So what's your story? I grew up in Peshawar, Pakistan, excelling in coding despite studying electrical engineering. Inspired by my professors, I set my sights on studying abroad and eventually earned a Ph.D. scholarship at NUS Singapore, specializing in data security and privacy. During my research, I ethically hacked Ethereum and published 15 papers—three times the requirement. While wrapping up my Ph.D., I explored startup ideas and joined Entrepreneur First, where I met Kevin Yee. With his expertise in generative models and mine in privacy, we founded Betterdata. Now, nearly three years in, we’ve secured a major contract with the U.S. Department of Homeland Security—one of only four companies globally and the only one from Asia. The Startup In a nutshell, what does your startup do? Betterdata is a startup that uses AI and synthetic data generation to address two major challenges: data privacy and the scarcity of high-quality data for training AI models. By leveraging generative models and privacy-enhancing technologies, Betterdata enables businesses, such as banks, to use customer data without breaching privacy regulations. The platform trains AI on real data, learns its patterns, and generates synthetic data that mimics the real thing without containing any personal or sensitive information. This allows companies to innovate and develop AI solutions safely and ethically, all while tackling the growing need for diverse, high-quality data in AI development. How did you conduct ideation and validation for your startup? The initial idea for Betterdata came from personal experience. During my Ph.D., I ethically hacked Ethereum’s blockchain, exposing flaws in encryption-based data sharing. This led me to explore AI-driven deep synthesis technology—similar to deepfakes but for structured data privacy. With GDPR impacting 28M+ businesses, I saw a massive opportunity to help enterprises securely share data while staying compliant. To validate the idea, I spoke to 50 potential customers—a number that strikes the right balance. Some say 100, but that’s impractical for early-stage founders. At 50, patterns emerge: if 3 out of 10 mention the same problem, and this repeats across 50, you have 10–15 strong signals, making it a solid foundation for an MVP. Instead of outbound sales, which I dislike, we used three key methods: Account-Based Marketing (ABM)—targeting technically savvy users with solutions for niche problems, like scaling synthetic data for banks. Targeted Content Marketing—regular customer conversations shaped our thought leadership and outreach. Raising Awareness Through Partnerships—collaborating with NUS, Singapore’s PDPC, and Plug and Play to build credibility and educate the market. These strategies attracted serious customers willing to pay, guiding Betterdata’s product development and market fit. How did you approach the initial building and ongoing product development? In the early stages, we built synthetic data generation algorithms and a basic UI for proof-of-concept, using open-source datasets to engage with banks. 
We quickly learned that banks wouldn't share actual customer data due to privacy concerns, so we had to conduct on-site installations and gather feedback to refine our MVP. Through continuous consultation with customers, we discovered real enterprise data posed challenges, such as missing values, which led us to adapt our prototype accordingly. This iterative approach of listening to customer feedback and observing their usage allowed us to improve our product, enhance UX, and address unmet needs while building trust and loyalty. Working closely with our customers also gives us a data advantage. Our solution’s effectiveness depends on customer data, which we can't fully access, but bridging this knowledge gap gives us a competitive edge. The more customers we test on, the more our algorithms adapt to diverse use cases, making it harder for competitors to replicate our insights. My approach to iteration is simple: focus solely on customer feedback and ignore external noise like trends or advice. The key question for the team is: which customer is asking for this feature or solution? As long as there's a clear answer, we move forward. External influences, such as AI hype, often bring more confusion than clarity. True long-term success comes from solving real customer problems, not chasing trends. Customers may not always know exactly what they want, but they understand their problems. Our job is to identify these problems and solve them in innovative ways. While customers may suggest specific features, we stay focused on solving the core issue rather than just fulfilling their exact requests. The idea aligns with the quote often attributed to Henry Ford: "If I asked people what they wanted, they would have said faster horses." The key is understanding their problems, not just taking requests at face value. How do you assess product-market fit? To assess product-market fit, we track two key metrics: Customers' Willingness to Pay: We measure both the quantity and quality of meetings with potential customers. A high number of meetings with key decision-makers signals genuine interest. At Betterdata, we focused on getting meetings with people in banks and large enterprises to gauge our product's resonance with the target market. How Much Customers Are Willing to Pay: We monitor the price customers are willing to pay, especially in the early stages. For us, large enterprises, like banks, were willing to pay a premium for our synthetic data platform due to the growing need for privacy tech. This feedback guided our product refinement and scaling strategy. By focusing on these metrics, we refined our product and positioned it for scaling. What is your business model? We employ a structured, phase-driven approach for our business model, as a B2B startup. I initially struggled with focusing on the core value proposition in sales, often becoming overly educational. Eventually, we developed a product roadmap with models that allowed us to match customer needs to specific offerings and justify our pricing. Our pricing structure includes project-based pilots and annual contracts for successful deployments. At Betterdata, our customer engagement unfolds across three phases: Phase 1: Trial and Benchmarking - We start with outreach and use open-source datasets to showcase results, offering customers a trial period to evaluate the solution. 
Phase 2: Pilot or PoC - After positive trial results, we conduct a PoC or pilot using the customer’s private data, with the understanding that successful pilots lead to an annual contract. Phase 3: Multi-Year Contracts - Following a successful pilot, we transition to long-term commercial contracts, focusing on multi-year agreements to ensure stability and ongoing partnerships. How do you do marketing for your brand? We take a non-conventional approach to marketing, focusing on answering one key question: Which customers are willing to pay, and how much? This drives our messaging to show how our solution meets their needs. Our strategy centers around two main components: Building a network of lead magnets - These are influential figures like senior advisors, thought leaders, and strategic partners. Engaging with institutions like IMDA, SUTD, and investors like Plug and Play helps us gain access to the right people and foster warm introductions, which shorten our sales cycle and ensure we’re reaching the right audience. Thought leadership - We build our brand through customer traction, technology evidence, and regulatory guidelines. This helps us establish credibility in the market and position ourselves as trusted leaders in our field. This holistic approach has enabled us to navigate diverse market conditions in Asia and grow our B2B relationships. By focusing on these areas, we drive business growth and establish strong trust with stakeholders. What's your advice for fundraising? Here are my key takeaways for other founders when it comes to fundraising: Fundraise When You Don’t Need To We closed our seed round in April 2023, a time when we weren't actively raising. Founders should always be in fundraising mode, even when they're not immediately in need of capital. Don’t wait until you have only a few months of runway left. Keep the pipeline open and build relationships. When the timing is right, execution becomes much easier. For us, our investment came through a combination of referrals and inbound interest. Even our lead investor initially rejected us, but after re-engaging, things eventually fell into place. It’s crucial to stay humble, treat everyone with respect, and maintain those relationships for when the time is right. Be Mindful of How You Present Information When fundraising, how you present information matters a lot. We created a comprehensive, easily digestible investment memo, hosted on Notion, which included everything an investor might need—problem, solution, market, team, risks, opportunities, and data. The goal was for investors to be able to get the full picture within 30 minutes without chasing down extra details. We also focused on making our financial model clear and meaningful, even though a 5-year forecast might be overkill at the seed stage. The key was clarity and conciseness, and making it as easy as possible for investors to understand the opportunity. I learned that brevity and simplicity are often the best ways to make a memorable impact. For the pitch itself, keep it simple and focus on 4 things: problem, solution, team, and market. If you can summarize each of these clearly and concisely, you’ll have a compelling pitch. Later on, you can expand into market segments, traction, and other metrics, but for seed-stage, focus on those four areas, and make sure you’re strong in at least three of them. If you do, you'll have a compelling case. How do you run things day-to-day? i.e. what's your operational workflow and team structure? 
Here's an overview of our team structure and process: Internally: Our team is divided into two main areas: backend (internal team) and frontend (market-facing team). There's no formal hierarchy within the backend team. We all operate as equals, defining our goals based on what needs to be developed, assigning tasks, and meeting weekly to share updates and review progress. The focus is on full ownership of tasks and accountability for getting things done. I also contribute to product development, identifying challenges and clearing obstacles to help the team move forward. Backend Team: We approach tasks based on the scope defined by customers, with no blame or hierarchy. It's like a sports team—sometimes someone excels, and other times they struggle, but we support each other and move forward together. Everyone has the creative freedom to work in the way that suits them best, but we establish regular meetings and check-ins to ensure alignment and progress. Frontend Team: For the market-facing side, we implement a hierarchy because the market expects this structure. If I present myself as "CEO," it signals authority and credibility. This distinction affects how we communicate with the market and how we build our brand. The frontend team is split into four main areas: Business Product (Software Engineering) Machine Learning Engineering R&D The C-suite sits at the top, followed by team leads, and then the executors. We distill market expectations into actionable tasks, ensuring that everyone is clear on their role and responsibilities. Process: We start by receiving market expectations and defining tasks based on them. Tasks are assigned to relevant teams, and execution happens with no communication barriers between team members. This ensures seamless collaboration and focused execution. The main goal is always effectiveness—getting things done efficiently while maintaining flexibility in how individuals approach their work. In both teams, there's an emphasis on accountability, collaboration, and clear communication, but the structure varies according to the nature of the work and external expectations.
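For readers who have not seen synthetic data before, here is a toy illustration of the general idea described above (learn the statistical patterns of a real table, then sample new rows that preserve those patterns but correspond to no real record). This is emphatically not Betterdata's method, which per the interview relies on deep generative models and privacy-enhancing technologies; it is just a minimal, numeric-only sketch using a multivariate Gaussian, with made-up column names.

```python
import numpy as np

def fit(real: np.ndarray):
    """Learn simple statistics (mean and covariance) of a numeric table."""
    return real.mean(axis=0), np.cov(real, rowvar=False)

def sample(mean, cov, n_rows: int, seed: int = 0) -> np.ndarray:
    """Draw synthetic rows: same aggregate patterns, no real individuals."""
    rng = np.random.default_rng(seed)
    return rng.multivariate_normal(mean, cov, size=n_rows)

if __name__ == "__main__":
    rng = np.random.default_rng(42)
    # Stand-in for a private customer table: age, income, number of products.
    real = rng.normal(loc=[40, 55_000, 3], scale=[12, 18_000, 1.5], size=(1_000, 3))
    mean, cov = fit(real)
    synthetic = sample(mean, cov, n_rows=1_000)
    print("real means     :", real.mean(axis=0).round(1))
    print("synthetic means:", synthetic.mean(axis=0).round(1))
```

Even this toy version shows the trade-off the interview alludes to: the synthetic table is only as useful as the patterns the model manages to capture, which is why exposure to many diverse customer datasets is described as a competitive moat.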

Where Do I Find Like-Minded, Unorthodox Co-founders? [Tech]
reddit
LLM Vibe Score0
Human Vibe Score0.6
madscholarThis week

Where Do I Find Like-Minded, Unorthodox Co-founders? [Tech]

After more than 20 years in the tech industry I'm pretty fed up. I've been at it non-stop, so the burnout was building up for a while. Eventually, it got so bad that it was no longer a question whether I needed to take a break; I knew that I had to, for the sake of myself and loved ones. A few months ago I quit my well-paying, mid-level mgmt job to have some much-needed respite. I can't say that I've fully recovered, but I'm doing a bit better, so I'm starting to think about what's next. That said, the thought of going back into the rat race fills me with dread and anxiety. I've had an interesting career - I spent most of it in startups doing various roles from an SWE to a VP Eng, including having my own startup adventures for a couple of years. The last 4.5 years of my career have been in one of the fastest growing tech companies - it was a great learning experience, but also incredibly stressful, toxic and demoralizing. It's clear to me that I'm not cut out for the corporate world -- the ethos contradicts my personality and beliefs -- but it's not just that. I've accumulated "emotional scars" from practically every place I worked at and it made me loathe the industry to the degree that if I ever have another startup, it'd have to be run by my own -- unorthodox -- ideals, even if it means a premature death due to lack of funding. I was young, stupid and overly confident when I had my first startup. I tried to do it "by the book" and dance to the tune of investors. While my startup failed for other, unrelated reasons, it gave me an opportunity to peek behind the curtain, experience the power dynamics, and get a better understanding of how the game is played - VCs and other persons of interest have popularized the misconception that if a company doesn't scale, it will stagnate and eventually regress and die. This is nonsense. This narrative was created because, without it, the capitalist pigs would be obsolete - they need companies to go through the entire alphabet before forcing them to sell or IPO. The sad reality is that most entrepreneurs still believe in this paradigm and fall into the VCs' honeypot traps. It's true that many businesses cannot bootstrap or scale without VC money, but it's equally true that far too many companies pivot/scale prematurely (and enshitify their product in the process) due to external pressures fueled by pure greed. This has a top-down effect - enshitification doesn't only affect users, but it also heavily affects the processes and structures of companies, which can explain why the average tenure in tech is only ~2 years. I think that we live in an age where self-starting startups are more feasible than ever. It's not just the rise of AI and automation, but also the plethora of tools, services, and open-source projects that are available to all for free. On the one hand, this is fantastic, but on the other, the low barrier-to-entry creates oversaturation of companies which makes research & discovery incredibly hard - it is overwhelming to keep up with the pace and distill the signal from the noise, and there's a LOT of noise - there's not enough metaphorical real-estate for the graveyard of startups that will be defunct in the very near future. I'd like to experiment with startups again, but I don't want to navigate through this complex minefield all by myself - I want to find a like-minded co-founder who shares the same ideals as I do. 
It goes without saying that being on the same page isn't enough - I also want someone who's experienced, intelligent, creative, productive, well-rounded, etc. At the moment, I don't have anyone in my professional network who has/wants what it takes. I can look into startup bootcamps/accelerators like YC et al., and sure enough, I'll find talented individuals, but it'd be a mismatch from the get-go. For shits and giggles, this is (very roughly) how I envision the ideal company:

Excellent work-life balance: the goal is not to make a quick exit, become filthy rich, and turn into a self-absorbed asshole bragging about how they got so successful. The goal is to generate a steady revenue stream while not succumbing to social norms that encourage greed. The entire purpose is to reach humble financial independence while maintaining a stress-free (as much as one possibly can) work environment. QOL should always be considered before ARR.

Bootstrapping: no external money. Not now, not later. No quid pro quo. No shady professionals or advisors. Company makes it or dies trying.

Finances: very conservative to begin with - the idea is to play it safe and build a long fucking runway before hiring. Spend every penny mindfully and frugally. Growth shouldn't be too quick & reckless. The business will be extremely efficient in spending. The only exception to the rule is crucial infrastructure and wages to hire top talent and keep salaries competitive and fair.

Hiring: fully remote. Global presence, where applicable. Headcount will be limited to the absolute bare minimum. The goal is to run with a skeleton crew of the best generalists out there - bright, self-sufficient, highly motivated, autodidact, and creative individuals. Hiring the right people is everything and should be the company's top priority.

Compensation & Perks: transparent and fair, incentivizing exceptional performance with revenue-sharing bonuses. The rest is your typical best-in-class perks: top-tier health/dental/vision insurance, generous PTO with a mandatory minimum, parental leave, mental wellness, etc.

Process: processes will be extremely efficient, automated to the max, documented, unbloated, and data-driven through and through. Internal knowledge & data metrics will be accessible and transparent to all. Employees get full autonomy over their respective areas and are fully in charge of how they spend their days as long as they have agreed-upon, coherent, measurable metrics of success. Meetings will be reduced to the absolute minimum and would have to be justified and actionable - the ideal is that most communications will be done in written form, while face-to-face will be reserved for presentations/socializing. I like the Kaizen philosophy to continuously improve and optimize processes.

Product: As previously stated, "data-driven through and through". Mindful approach to understand cost/benefit. Deliberate and measured atomic improvements to avoid feature creep and slow down the inevitable entropy. Most importantly, client input should be treated with the utmost attention but should never be the main driver for the product roadmap. This is a very controversial take, but sometimes it's better to lose a paying customer than to cave to their distracting/unreasonable/time-consuming demands.

People Culture: ironically, this would be what most companies claim to have, but for realsies. Collaborative, open, blameless environment. People are treated like actual grown-ups with flat structure, full autonomy, and unwavering trust. 
Socializing and bonding are highly encouraged, but never required. Creativity and ingenuity are highly valued - people are encouraged to work on side projects one day of the week.

Values: I can write a lot about it, but it really boils down to being kind and humble. We all know what happened with "don't be evil". It's incredibly hard to retain values over time, esp. when there are opposing views within a company. I don't know how to solve it, but I believe that there should be some (tried and true) internal checks & balances from the get-go to ensure things are on track.

I never mentioned what this hypothetical startup does. Sure, there's another very relevant layer of domain experience fit, but this mindset allows one to be a bit more fluid because the goal is not to disrupt an industry or "make the world a better place"; it's to see work for what it truly is - a means to an end. It's far more important for me to align with a co-founder on these topics than on an actual idea or technical details. Pivoting and rebranding are so common that many VCs weigh the makeup and chemistry of the founding team (and their ability to execute) more heavily than the feasibility of their ideas. To wrap this long-winded post, I'm not naive or disillusioned - utopias aren't real, and profitable companies that operate at a 70-80% rate of what I propose are the real unicorns, but despite them being a tiny minority, I think they are the real forward thinkers of the industry. I might be wrong, but I hope that I'm right and that more and more startups will opt for long-term sustainability over the promise of short-term gains, because the status quo really stinks for most people. What do you folks think? Does anyone relate? Where can I find others like me? P.S. I thought about starting a blog writing about these topics at length (everything that is wrong with tech & what can be done to improve it), but I have the Impostor Syndrome and I'm too self-conscious about how I come off. If you somehow enjoyed reading through that and would love to hear more of my thoughts and experiences in greater detail, please let me know. P.P.S. If you have a company that is close to what I'm describing and you're hiring, let me know!

101 best SEO tips to help you drive traffic in 2k21
reddit
LLM Vibe Score0
Human Vibe Score0.543
DrJigsawThis week

101 best SEO tips to help you drive traffic in 2k21

Hey guys! I don't have to tell you how SEO can be good for your business - you can drive leads to your SaaS on autopilot, drive traffic to your store/gym/bar/whatever, etc. The thing with SEO, though, is that most SEO tips on the internet are just not that good. Most of the said tips: Are way too simple & basic (“add meta descriptions to your images”) Are not impactful. Sure, adding that meta tag to an image is important, but that’s not what’s going to drive traffic to your website Don’t talk much about SEO strategy (which is ultimately the most important thing for SEO). Sure, on-page SEO is great, but you sure as hell won't drive much traffic if you can't hire the right writers to scale your content. And to drive serious SEO traffic, you'll need a LOT more than that. Over the past few years, my co-founder and I have helped grow websites to over 200k+ monthly traffic (check out our older Reddit post if you want to learn more about us, our process, and what we do), and we compiled all our most important SEO tips and tricks, as well as case studies, research, and experiments from the web, into this article. Hope you like it ;) If you think we missed something super important, let us know and we'll add it to the list. And btw, we also published this article on our own blog with images, smart filters, and all that good stuff. If you want to check it out, click here. That said, grab some coffee (or beer) & let's dive in - this is going to be a long one. SEO Strategy Tips Tip #1. A Lot of SEO Tips On The Internet Are NOT Necessarily Factual A lot of the SEO content you’ll read on the internet will be based on personal experiences and hearsay. Unfortunately, Google is a bit vague about SEO advice, so you have to rely more on experiments conducted by SEO pros in the community. So, sometimes, a lot of this information is questionable, wrong, or simply based on inaccurate data. What we’re getting at here is, whenever you hear some new SEO advice, take it with a grain of salt. Google it to double-check other sources, and really understand what this SEO advice is based on (instead of just taking it at face value). Tip #2. SEO Takes Time - Get Used to It Any way you spin it, SEO takes time. It can take around 6 months to 2 years (depending on the competition in your niche) before you start seeing some serious results. So, don’t get disappointed if you don’t see any results within 3 months of publishing content. Tip #3. SEO Isn’t The Best Channel for Everyone That said, if you need results for your business tomorrow, you might want to reconsider SEO altogether. If you just started your business, for example, and are trying to get to break-even ASAP, SEO is a bad idea - you’ll quit before you even start seeing any results. If that’s the case, focus on other marketing channels that can have faster results like content marketing, PPC, outreach, etc. Tip #4. Use PPC to Validate Keywords Not sure if SEO is right for your business? Do this: set up Google Search ads for the most high-intent keywords in your niche. See how well the traffic converts and then decide if it’s worthwhile to focus on SEO (and rank on these keywords organically). Tip #5. Use GSC to See If SEO Is Working While it takes a while to see SEO results, it IS possible to see if you’re going in the right direction. On a monthly basis, you can use Search Console to check if your articles are indexed by Google and if their average position is improving over time. Tip #6. 
Publish a TON of Content The more content you publish on your blog, the better. We recommend a minimum of 10,000 words per month and optimally 20,000 - 30,000 (especially if your website is fresh). If an agency offers you the typical “4 500-word articles per month” deal, stay away. No one’s ever gotten results in SEO with short, once-per-week articles. Tip #7. Upgrade Your Writers Got a writer that’s performing well? Hire them as an editor and get them to oversee content operations / edit other writers’ content. Then, upgrade your best editor to Head of Content and get them to manage the entire editor / writer ops. Tip #8. Use Backlink Data to Prioritize Content When doing keyword research, gather the backlink data of the top 3 ranking articles and add it to your sheet. Then, use this data to help you prioritize which keywords to focus on first. We usually prioritize keywords that have lower competition, high traffic, and a medium to high buyer intent. Tip #9. Conduct In-Depth Keyword Research Make your initial keyword research as comprehensive as possible. This will give you a much more realistic view of your niche and allow you to prioritize content the right way. We usually aim for 100 to 300 keywords (depending on the niche) for the initial keyword research when we start working with a client. Tip #10. Start With Competitive Analysis Start every keyword research with competitive analysis. Extract the keywords your top 3 competitors are ranking on. Then, use them as inspiration and build upon it. Use tools like UberSuggest to help generate new keyword ideas. Tip #11. Get SEMrush or Ahrefs You NEED SEMrush or Ahrefs, there’s no doubt about it. While they might seem expensive at a glance (99 USD per month billed annually), they’re going to save you a lot of manpower doing menial SEO tasks. Tip #12. Don’t Overdo It With SEO Tools Don’t overdo it with SEO tools. There are hundreds of those out there, and if you’re the type that’s into SaaS, you might be tempted to play around with dozens at a time. And yes, to be fair, most of these tools ARE helpful one way or another. To effectively do organic SEO, though, you don’t really need that many tools. In most cases, you just need the following: SEMrush/Ahrefs Screaming Frog RankMath/Yoast SEO Whichever outreach tool you prefer (our favorite is snov.io). Tip #13. Try Some of the Optional Tools In addition to the tools we mentioned before, you can also try the following 2 which are pretty useful & popular in the SEO community: Surfer SEO - helps with on-page SEO and creating content briefs for writers. ClusterAI - tool that helps simplify keyword research & save time. Tip #14. Constantly Source Writers Want to take your content production to the next level? You’ll need to hire more writers. There is, however, one thing that makes this really, really difficult: 95 - 99% of writers applying for your gigs won’t be relevant. Up to 80% will be awful at writing, and the remainder just won’t be relevant for your niche. So, in order to scale your writing team, we recommend sourcing constantly, and not just once every few months. Tip #15. Create a Process for Writer Filtering As we just mentioned, when sourcing writers, you’ll be getting a ton of applicants, but most won’t be qualified. Fun fact - every single time we post a job ad on ProBlogger, we get around 300 - 500 applications (most of which are totally not relevant). Trust us, you don’t want to spend your time going through such a huge list and checking out the writer samples. 
So, instead, we recommend you do this: Hire a virtual assistant to own the process of evaluating and short-listing writers. Create a process for evaluating writers. We recommend evaluating writers by: Level of English. If their samples aren’t fluent, they’re not relevant. Quality of Samples. Are the samples engaging / long-form content, or are they boring 500-word copy-pastes? Technical Knowledge. Has the writer written about a hard-to-explain topic before? Anyone can write about simple topics like traveling - you want to look for someone who knows how to research a new topic and explain it in a simple and easy to read way. If someone’s written about how to create a perfect cover letter, they can probably write about traveling, but the opposite isn’t true. The VA constantly evaluates new applicants and forwards the relevant ones to the editor. The editor goes through the short-listed writers and gives them trial tasks and hires the ones that perform well. Tip #16. Use The Right Websites to Source Writers “Is UpWork any good?” This question pops up on social media time and time again. If you ask us, no, UpWork is not good at all. Of course, there are qualified writers there (just like anywhere else), but from our experience, those writers are few and far in-between. Instead, here are some of our favorite ways to source writers: Cult of Copy Job Board ProBlogger Headhunting on LinkedIn If you really want to use UpWork, use it for headhunting (instead of posting a job ad) Tip #17. Hire Writers the Right Way If you want to seriously scale your content production, hire your writers full-time. This (especially) makes sense if you’re a content marketing agency that creates a TON of content for clients all the time. If you’re doing SEO just for your own blog, though, it usually makes more sense to use freelancers. Tip #18. Topic Authority Matters Google keeps your website's authoritativeness in mind. Meaning, if you have 100 articles on digital marketing, you’re probably more of an authority on the topic than someone that has just 10. Hence, Google is a lot more likely to reward you with better rankings. This is also partially why content volume really matters: the more frequently you publish content, the sooner Google will view you as an authority. Tip #19. Focus on One Niche at a Time Let’s say your blog covers the following topics: sales, accounting, and business management.  You’re more likely to rank if you have 30 articles on a single topic (e.g. accounting) than if you have 10 articles on each. So, we recommend you double-down on one niche instead of spreading your content team thin with different topics. Tip #20. Don’t Fret on the Details While technical SEO is important, you shouldn’t get too hung up on it.  Sure, there are thousands of technical tips you can find on the internet, and most of them DO matter. The truth, though, is that Google won’t punish you just because your website doesn’t load in 3 milliseconds or there’s a meta description missing on a single page. Especially if you have SEO fundamentals done right: Get your website to run as fast as possible. Create a ton of good SEO content. Get backlinks for your website on a regular basis. You’ll still rank, even if your website isn’t 100% optimized. Tip #21. Do Yourself a Favor and Hire a VA There are a TON of boring SEO tasks that your team should really not be wasting time with. So, hire a full-time VA to help with all that. 
Some tasks you want to outsource include gathering contacts to reach out to for link-building, uploading articles on WordPress, etc. Tip #22. Google Isn’t Everything While Google IS the dominant search engine in most parts of the world, there ARE countries with other popular search engines.  If you want to improve your SEO in China, for example, you should be more concerned with ranking on Baidu. Targeting Russia? Focus on Yandex. Tip #23. No, Voice Search is Still Not Relevant Voice search is not and will not be relevant (no matter what sensationalist articles might say). It’s just too impractical for most search queries to use voice (as opposed to traditional search). Tip #24. SEO Is Not Dead SEO is not dead and will still be relevant decades down the line. Every year, there’s a sensationalist article talking about this.  Ignore those. Tip #25. Doing Local SEO? Focus on Service Pages If you’re doing local SEO, focus on creating service-based landing pages instead of content.  E.g. if you’re an accounting firm based in Boston, you can make a landing page about /accounting-firm-boston/, /tax-accounting-boston/, /cpa-boston/, and so on. Thing is, you don’t really need to rank on global search terms - you just won’t get leads from there. Even if you ranked on the term “financial accounting,” it wouldn’t really matter for your bottom line that much. Tip #26. Learn More on Local SEO Speaking of local SEO, we definitely don’t do the topic justice in this guide. There’s a lot more you need to know to do local SEO effectively and some of it goes against the general SEO advice we talk about in this article (e.g. you don't necessarily need blog content for local SEO). We're going to publish an article on that soon enough, so if you want to check it out, DM me and I'll hit you up when it's up. Tip #27. Avoid Vanity Metrics Don’t get side-tracked by vanity metrics.  At the end of the day, you should care about how your traffic impacts your bottom line. Fat graphs and lots of traffic are nice and all, but none of it matters if the traffic doesn’t have the right search intent to convert to your product/service. Tip #28. Struggling With SEO? Hire an Expert Failing to make SEO work for your business? When in doubt, hire an organic SEO consultant or an SEO agency.  The #1 benefit of hiring an SEO agency or consultant is that they’ve been there and done that - more than once. They might be able to catch issues an inexperienced SEO can’t. Tip #29. Engage With the Community Need a couple of SEO questions answered?  SEO pros are super helpful & easy to reach! Join these Facebook groups and ask your question - you’ll get about a dozen helpful answers! SEO Signals Lab SEO & Content Marketing The Proper SEO Group. Tip #30. Stay Up to Date With SEO Trends SEO is always changing - Google is constantly pumping out new updates that have a significant impact on how the game is played.  Make sure to stay up to date with the latest SEO trends and Google updates by following the Google Search Central blog. Tip #31. Increase Organic CTR With PPC Want to get the most out of your rankings? Run PPC ads for your best keywords. Googlers who first see your ad are more likely to click your organic listing. Content & On-Page SEO Tips Tip #32. Create 50% Longer Content On average, we recommend you create an article that’s around 50% longer than the best article ranking on the keyword.  One small exception, though, is if you’re in a super competitive niche and all top-ranking articles are already as comprehensive as they can be. 
For example, in the VPN niche, all articles ranking for the keyword “best VPN” are around 10,000 - 11,000 words long. And that’s the optimal word count - even if you go beyond, you won’t be able to deliver that much value for the reader to make it worth the effort of creating the content. Tip #33. Longer Is Not Always Better Sometimes, a short-form article can get the job done much better. For example, let’s say you’re targeting the keyword “how to tie a tie.” The reader expects a short and simple guide, something under 500 words, and not “The Ultimate Guide to Tie Tying for 2021 [11 Best Tips and Tricks]” Tip #34. SEO is Not Just About Written Content Written content is not always best. Sometimes, videos can perform significantly better. E.g. If the Googler is looking to learn how to get a deadlift form right, they’re most likely going to be looking for a video. Tip #35. Don’t Forget to Follow Basic Optimization Tips For all your web pages (articles included), follow basic SEO optimization tips. E.g. include the keyword in the URL, use the right headings etc. Just use RankMath or YoastSEO for this and you’re in the clear! Tip #36. Hire Specialized Writers When hiring content writers, try to look for ones that specialize in creating SEO content. There are a LOT of writers on the internet, plenty of which are really good. However, if they haven’t written SEO content before, chances are, they won’t do that good of a job. Tip #37. Use Content Outlines Speaking of writers - when working with writers, create a content outline that summarizes what the article should be about and what kind of topics it needs to cover instead of giving them a keyword and asking them to “knock themselves out.” This makes it a lot more likely for the writer to create something that ranks. When creating content outlines, we recommend you include the following information: Target keyword Related keywords that should be mentioned in the article Article structure - which headings should the writer use? In what order? Article title Tip #38. Find Writers With Niche Knowledge Try to find an SEO content writer with some experience or past knowledge about your niche. Otherwise, they’re going to take around a month or two to become an expert. Alternatively, if you’re having difficulty finding a writer with niche knowledge, try to find someone with experience in technical or hard to explain topics. Writers who’ve written about cybersecurity in the past, for example, are a lot more likely to successfully cover other complicated topics (as opposed to, for example, a food or travel blogger). Tip #39. Keep Your Audience’s Knowledge in Mind When creating SEO content, always keep your audience’s knowledge in mind. If you’re writing about advanced finance, for example, you don’t need to teach your reader what an income statement is. If you’re writing about income statements, on the other hand, you’d want to start from the very barebone basics. Tip #40. Write for Your Audience If your readers are suit-and-tie lawyers, they’re going to expect professionally written content. 20-something hipsters? You can get away with throwing a Rick and Morty reference here and there. Tip #41. Use Grammarly Trust us, it’ll seriously make your life easier! Keep in mind, though, that the app is not a replacement for a professional editor. Tip #42. Use Hemingway Online content should be very easy to read & follow for everyone, whether they’re a senior professional with a Ph.D. or a college kid looking to learn a new topic. 
As such, your content should be written in a simple manner - and that’s where Hemingway comes in. It helps you keep your blog content simple. Tip #43. Create Compelling Headlines Want to drive clicks to your articles? You’ll need compelling headlines. Compare the two headlines below; which one would you click? 101 Productivity Tips [To Get Things Done in 2021] VS Productivity Tips Guide Exactly! To create clickable headlines, we recommend you include the following elements: Keyword Numbers Results Year (If Relevant) Tip #44. Nail Your Blog Content Formatting Format your blog posts well and avoid overly long walls of text. There’s a reason Backlinko content is so popular - it’s extremely easy to read and follow. Tip #45. Use Relevant Images In Your SEO Content Key here - relevant. Don’t just spray random stock photos of “office people smiling” around your posts; no one likes those. Instead, add graphs, charts, screenshots, quote blocks, CSS boxes, and other engaging elements. Tip #46. Implement the Skyscraper Technique (The Right Way) Want to implement Backlinko’s skyscraper technique? Keep this in mind before you do: not all content is meant to be promoted. Pick a topic that fits the following criteria if you want the internet to care: It’s on an important topic. “Mega-Guide to SaaS Marketing” is good, “top 5 benefits of SaaS marketing” is not. You’re creating something significantly better than the original material. The internet is filled with mediocre content - strive to do better. Tip #47. Get The URL Slug Right for Seasonal Content If you want to rank on a seasonal keyword with one piece of content (e.g. you want to rank on “saas trends 2020, 2021, etc.”), don’t mention the year in the URL slug - keep it /saas-trends/ and just change the headline every year instead. If you want to rank with separate articles, on the other hand (e.g. you publish a new trends report every year), include the year in the URL. Tip #48. Avoid content cannibalization. Meaning, don’t write 2+ articles on one topic. This will confuse Google on which article it should rank. Tip #49. Don’t Overdo Outbound Links Don’t include too many outbound links in your content. Yes, including sources is good, but there is such a thing as overdoing it. If your 1,000 word article has 20 outbound links, Google might consider it as spam (even if all those links are relevant). Tip #50. Consider “People Also Ask” To get the most out of SERP, you want to grab as many spots on the search result as possible, and this includes “people also ask (PAA):” Make a list of the topic’s PAA questions and ensure that your article answers them. If you can’t fit the questions & answers within the article, though, you can also add an FAQ section at the end where you directly pose these questions and provide the answers. Tip #51. Optimize For Google Snippet Optimize your content for the Google Snippet. Check what’s currently ranking as the snippet. Then, try to do something similar (or even better) in terms of content and formatting. Tip #52. Get Inspired by Viral Content Want to create content that gets insane shares & links? Reverse-engineer what has worked in the past. Look up content in your niche that went viral on Reddit, Hacker News, Facebook groups, Buzzsumo, etc. and create something similar, but significantly better. Tip #53. Avoid AI Content Tools No, robots can’t write SEO content. If you’ve seen any of those “AI generated content tools,” you should know to stay away. 
The only thing those tools are (currently) good for is creating news content. Tip #54. Avoid Bad Content You will never, ever, ever rank with one 500-word article per week.  There are some SEO agencies (even the more reputable ones) that offer this as part of their service. Trust us, this is a waste of time. Tip #55. Update Your Content Regularly Check your top-performing articles annually and see if there’s anything you can do to improve them.  When most companies finally get the #1 ranking for a keyword, they leave the article alone and never touch it again… ...Until they get outranked, of course, by someone who one-upped their original article. Want to prevent this from happening? Analyze your top-performing content once a year and improve it when possible. Tip #56. Experiment With CTR Do your articles have low CTR? Experiment with different headlines and see if you can improve it.  Keep in mind, though, that what a “good CTR” is really depends on the keyword.  In some cases, the first ranking will drive 50% of the traffic. In others, it’s going to be less than 15%. Link-Building Tips Tip #57. Yes, Links Matter. Here’s What You Need to Know “Do I need backlinks to rank?” is probably one of the most common SEO questions.  The answer to the question (alongside all other SEO-related questions) is that it depends on the niche.  If your competitors don’t have a lot of backlinks, chances are, you can rank solely by creating superior content. If you’re in an extremely competitive niche (e.g. VPN, insurance, etc.), though, everyone has amazing, quality content - that’s just the baseline.  What sets top-ranking content apart from the rest is backlinks. Tip #58. Sometimes, You’ll Have to Pay For Links Unfortunately, in some niches, paying for links is unavoidable - e.g. gambling, CBD, and others. In such cases, you either need a hefty link-building budget, or a very creative link-building campaign (create a viral infographic, news-worthy story based on interesting data, etc.). Tip #59. Build Relationships, Not Links The very best link-building is actually relationship building.  Make a list of websites in your niche and build a relationship with them - don’t just spam them with the standard “hey, I have this amazing article, can you link to it?”.  If you spam, you risk ruining your reputation (and this is going to make further outreach much harder). Tip #60. Stick With The Classics At the end of the day, the most effective link-building tactics are the most straightforward ones:  Direct Outreach Broken Link-Building Guest Posting Skyscraper Technique Creating Viral Content Guestposting With Infographics Tip #61. Give, Don’t Just Take! If you’re doing link-building outreach, don’t just ask for links - give something in return.  This will significantly improve the reply rate from your outreach email. If you own a SaaS tool, for example, you can offer the bloggers you’re reaching out to free access to your software. Or, alternatively, if you’re doing a lot of guest posting, you can offer the website owner a link from the guest post in exchange for the link to your website. Tip #62. Avoid Link Resellers That guy DMing you on LinkedIn, trying to sell you links from a Google Sheet?  Don’t fall for it - most of those links are PBNs and are likely to backfire on you. Tip #63. Avoid Fiverr Like The Plague Speaking of spammy links, don’t touch anything that’s sold on Fiverr - pretty much all of the links there are useless. Tip #64. Focus on Quality Links Not all links are created equal. 
A link is of higher quality if it’s linked from a page that: Is NOT a PBN. Doesn’t have a lot of outbound links. If the page links to 20 other websites, each of them gets less link juice. Has a lot of (quality) backlinks. Is part of a website with a high domain authority. Is about a topic relevant to the page it’s linking to. If your article about pets has a link from an accounting blog, Google will consider it a bit suspicious. Tip #65. Data-Backed Content Just Works Data-backed content can get insane results for link-building.  For example, OKCupid used to publish interesting data & research based on how people interacted with their platform and it never failed to go viral. Each of their reports ended up being covered by dozens of news media (which got them a ton of easy links). Tip #66. Be Creative - SEO Is Marketing, After All Be novel & creative with your link-building initiatives.  Here’s the thing: the very best link-builders are not going to write about the tactics they’re using.  If they did, you’d see half the internet using the exact same tactic as them in less than a week! Which, as you can guess, would make the tactic cliche and significantly less effective. In order to get superior results with your link-building, you’ll need to be creative - think about how you can make your outreach different from what everyone does. Experiment it, measure it, and improve it till it works! Tip #67. Try HARO HARO, or Help a Reporter Out, is a platform that matches journalists with sources. You get an email every day with journalists looking for experts in specific niches, and if you pitch them right, they might feature you in their article or link to your website. Tip #68. No-Follow Links Aren’t That Bad Contrary to what you might’ve heard, no-follow links are not useless. Google uses no-follow as more of a suggestion than anything else.  There have been case studies that prove Google can disregard the no-follow tag and still reward you with increased rankings. Tip #69. Start Fresh With an Expired Domain Starting a new website? It might make sense to buy an expired one with existing backlinks (that’s in a similar niche as yours). The right domain can give you a serious boost to how fast you can rank. Tip #70. Don’t Overspend on Useless Links “Rel=sponsored” links don’t pass pagerank and hence, won’t help increase your website rankings.  So, avoid buying links from media websites like Forbes, Entrepreneur, etc. Tip #71. Promote Your Content Other than link-building, focus on organic content promotion. For example, you can repost your content on Facebook groups, LinkedIn, Reddit, etc. and focus on driving traffic.  This will actually lead to you getting links, too. We got around 95 backlinks to our SEO case study article just because of our successful content promotion. Tons of people saw the article on the net, liked it, and linked to it from their website. Tip #72. Do Expert Roundups Want to build relationships with influencers in your niche, but don’t know where to start?  Create an expert roundup article. If you’re in the sales niche, for example, you can write about Top 21 Sales Influencers in 2021 and reach out to the said influencers letting them know that they got featured. Trust us, they’ll love you for this! Tip #73. .Edu Links are Overhyped .edu links are overrated. According to John Mueller, .edu domains tend to have a ton of outbound links, and as such, Google ignores a big chunk of them. Tip #74. 
Build Relationships With Your Customers Little-known link-building hack: if you’re a SaaS company doing SEO, you can build relationships with your customers (the ones that are in the same topical niche as you are) and help each other build links! Tip #75. Reciprocal Links Aren’t That Bad Reciprocal links are not nearly as bad as Google makes them out to be. Sure, they can be bad at scale (if trading links is all you’re doing). Exchanging a link or two with another website / blog, though, is completely harmless in 99% of cases. Tip #76. Don’t Overspam Don’t do outreach for every single post you publish - just the big ones. Most people already don’t care about your outreach email. Chances are, they’re going to care even less if you’re asking them to link to this new amazing article you wrote (which is about the top 5 benefits of adopting a puppy). Technical SEO Tips (a short audit-script sketch covering a few of these checks appears after the conclusion) Tip #77. Use PageSpeed Insights If your website is extremely slow, it’s definitely going to impact your rankings. Use PageSpeed Insights to see how your website is currently performing. Tip #78. Load Speed Matters While load speed doesn’t impact rankings directly, it DOES impact your user experience. Chances are, if your page takes 5 seconds to load, but your competition’s loads instantly, the average Googler will drop off and pick them over you. Tip #79. Stick to a Low Crawl Depth Crawl depth of any page on your website should be lower than 4 (meaning, any given page should be possible to reach in no more than 3 clicks from the homepage). Tip #80. Use Next-Gen Image Formats Next-gen image formats such as JPEG 2000, JPEG XR, and WebP can be compressed a lot better than PNG or JPG. So, when possible, use next-gen formats for images on your website. Tip #81. De-Index Irrelevant Pages Hide the pages you don’t want Google to index (e.g. non-public or unimportant pages) via your Robots.txt. If you’re a SaaS, for example, this would include most of your in-app pages or your internal knowledge base pages. Tip #82. Make Your Website Mobile-Friendly Make sure that your website is mobile-friendly. Google uses “mobile-first indexing.” Meaning, unless you have a working mobile version of your website, your rankings will seriously suffer. Tip #83. Lazy-Load Images Lazy-load your images. If your pages contain a lot of images, you MUST activate lazy-loading. This allows images that are below the screen to be loaded only once the visitor scrolls down enough to see the image. Tip #84. Enable Gzip Compression Enable Gzip compression to allow your HTML, CSS and JS files to load faster. Tip #85. Clean Up Your Code If your website loads slowly because you have 100+ external javascript files and stylesheets being requested from the server, you can try minifying, aggregating, and inlining some of those files. Tip #86. Use Rel-Canonical Have duplicate content on your website? Use rel-canonical to show Google which version is the original (and should be prioritized for search results). Tip #87. Install an SSL Certificate Not only does an SSL certificate help keep your website safe, but it’s also a direct ranking factor. Google prioritizes websites that have SSL certificates over the ones that don’t. Tip #88. Use Correct Anchor Texts for Internal Links When linking to an internal page, mention the keyword you’re trying to rank for on that page in the anchor text. This helps Google understand that the page is, indeed, about the keyword you’re associating it with. Tip #89. 
Use GSC to Make Sure Your Content is Interlinked Internal links can have a serious impact on your rankings. So, make sure that all your blog posts (especially the new ones) are properly linked to/from your past content.  You can check how many links any given page has via Google Search Console. Tip #90. Bounce rate is NOT a Google ranking factor. Meaning, you can still rank high-up even with a high bounce rate. Tip #91. Don’t Fret About a High Bounce Rate Speaking of the bounce rate, you’ll see that some of your web pages have a higher-than-average bounce rate (70%+).  While this can sometimes be a cause for alarm, it’s not necessarily so. Sometimes, the search intent behind a given keyword means that you WILL have a high bounce rate even if your article is the most amazing thing ever.  E.g. if it’s a recipe page, the reader gets the recipe and bounces off (since they don’t need anything else). Tip #92. Google Will Ignore Your Meta Description More often than not, Google won’t use the meta description you provide - that’s normal. It will, instead, automatically pick a part of the text that it thinks is most relevant and use it as a meta description. Despite this, you should always add a meta description to all pages. Tip #93. Disavow Spammy & PBN Links Keep track of your backlinks and disavow anything that’s obviously spammy or PBNy. In most cases, Google will ignore these links anyway. However, you never know when a competitor is deliberately targeting you with too many spammy or PBN links (which might put you at risk for being penalized). Tip #94. Use The Correct Redirect  When permanently migrating your pages, use 301 redirect to pass on the link juice from the old page to the new one. If the redirect is temporary, use a 302 redirect instead. Tip #95. When A/B Testing, Do This A/B testing two pages? Use rel-canonical to show Google which page is the original. Tip #96. Avoid Amp DON’T use Amp.  Unless you’re a media company, Amp will negatively impact your website. Tip #97. Get Your URL Slugs Right Keep your blog URLs short and to-the-point. Good Example: apollodigital.io/blog/seo-case-study Bad Example: apollodigital.io/blog/seo-case-study-2021-0-to-200,000/ Tip #98. Avoid Dates in URLs An outdated date in your URL can hurt your CTR. Readers are more likely to click / read articles published recently than the ones written years back. Tip #99. Social Signals Matter Social signals impact your Google rankings, just not in the way you think. No, your number of shares and likes does NOT impact your ranking at all.  However, if your article goes viral and people use Google to find your article, click it, and read it, then yes, it will impact your rankings.  E.g. you read our SaaS marketing guide on Facebook, then look up “SaaS marketing” on Google, click it, and read it from there. Tip #100. Audit Your Website Frequently Every other month, crawl your website with ScreamingFrog and see if you have any broken links, 404s, etc. Tip #101. Use WordPress Not sure which CMS platform to use?  99% of the time, you’re better off with WordPress.  It has a TON of plugins that will make your life easier.  Want a drag & drop builder? Use Elementor. Wix, SiteGround and similar drag & drops are bad for SEO. Tip #102. Check Rankings the Right Way When checking on how well a post is ranking on Google Search Console, make sure to check Page AND Query to get the accurate number.  
If you check just the page, it’s going to give you the average ranking on all keywords the page is ranking for (which is almost always going to be useless data). Conclusion Aaand that's about it - thanks for the read! Now, let's circle back to Tip #1 for a sec. Remember when we said a big chunk of what you read on SEO is based on personal experiences, experiments, and the like? Well, the tips we've mentioned are part of OUR experience. Chances are, you've done something that might be different (or completely goes against) our advice in this article. If that's the case, we'd love it if you let us know down in the comments. If you mention something extra-spicy, we'll even include it in this article.
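A few of the technical tips above lend themselves to a quick self-check with a small script - in particular #79 (crawl depth) and #100 (regular audits for broken links and 404s). Here's a minimal, illustrative crawler in Python; it assumes the requests and beautifulsoup4 packages and a placeholder start URL, and it's only a rough sanity check, not a substitute for a proper crawler like ScreamingFrog.

```python
# Minimal same-site crawler: reports each page's click depth from the homepage
# and any links that return 404. Assumes `requests` and `beautifulsoup4` are installed.
from collections import deque
from urllib.parse import urljoin, urlparse

import requests
from bs4 import BeautifulSoup

START_URL = "https://example.com/"   # replace with your homepage
MAX_DEPTH = 3                        # pages deeper than this break the crawl-depth tip

def audit(start_url: str, max_depth: int = MAX_DEPTH) -> None:
    domain = urlparse(start_url).netloc
    seen = {start_url: 0}            # url -> clicks from the homepage
    queue = deque([start_url])
    while queue:
        url = queue.popleft()
        depth = seen[url]
        try:
            resp = requests.get(url, timeout=10)
        except requests.RequestException as exc:
            print(f"ERROR  {url} ({exc})")
            continue
        if resp.status_code == 404:
            print(f"404    {url}")
            continue
        if depth > max_depth:
            print(f"DEEP   {url} (depth {depth})")
        if "text/html" not in resp.headers.get("Content-Type", ""):
            continue
        soup = BeautifulSoup(resp.text, "html.parser")
        for a in soup.find_all("a", href=True):
            link = urljoin(url, a["href"]).split("#")[0]
            # only follow internal links we haven't queued yet
            if urlparse(link).netloc == domain and link not in seen:
                seen[link] = depth + 1
                queue.append(link)

if __name__ == "__main__":
    audit(START_URL)
```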

Made $19.2k this month, and just surpassed $1000 the last 24 hours. What I did and what's next.
reddit
LLM Vibe Score0
Human Vibe Score1
dams96This week

Made $19.2k this month, and just surpassed $1000 the last 24 hours. What I did and what's next.

It's the first time I've hit $1000+ in 24 hours and I had no one to share it with (except you guys). I'm quite proud of my journey, and I would have thought that making $1000 in a day would make me ecstatic, but actually it's not the case. Not sure if it's because my revenue has grown in incremental steps, so I had time to "prepare" myself to achieve this at some point, or just that I'm nowhere near my goal of $100k/month, so I'm not that affected by it. But it's crazy to think that my goal was to make $100 daily at the end of 2024.

So for those who don't know me (I guess most of you), I build mobile apps and ship them as fast as I can. Most of them are in the AI space. I already made a post here on how I became a mobile app developer, so you can check it for more details, but essentially here's what I did:
- Always loved creating my own things and solving problems
- Built multiple YouTube channels since I was 15 (mobile gaming actually) that all worked great (but it was too niche so not that scalable, didn't like that)
- Did a few businesses here and there (dropshipping, selling merch at school, etc.)
- Finished my master's degree in engineering about 2 years ago
- Worked for a while at a famous watch industry company and saw my potential. The combo of health issues, a fixed salary (although it was quite a lot), and me wanting to be an entrepreneur made me leave the company.
- Created a TikTok account in mobile tech (got 10+ million views the first 3 days), managed to grow it to 200k subs in about 3 months
- Got plenty of collabs for promoting mobile apps (between $500 - $2000 for a collab)
- Said fuck it, I should do my own apps and market them on my TikTok instead of doing collabs

Me wanting to build my own apps happened around May-June 2023. Started my TikTok in Feb 2023. At this point I already had 150k+ subs on TikTok. You guys need to know that I suck at coding big time. During my studies I tried to limit coding as much as I could because I was a lazy bast*rd, even though I knew it would come back to bite me in the ass one day. But an angel appeared to me in broad daylight, and that angel was called GPT-4. I subscribed for $20/month to get access, and instantly I saw the potential of AI and how much it could help me. Last year GPT-4 was ahead of its time and could already code me basic apps. I already had a Mac, so I just downloaded Xcode and that was it.

My 1st app was a wallpaper app, and I kid you not, 90% of it was made by AI. Yes, sometimes I had to try again and again with different prompts, but it was still so much faster compared to if I had to learn coding from scratch and write code with my own hands. The only thing I didn't do was implement the in-app purchases, so I found a guy on Fiverr to do it for me for $50. After about 2 months of on-off coding, my first app was ready to be launched. So it was launched, and it had a great, successful launch without me doing any videos at that point (iOS 17 was released and my app was the first one, alongside another one, to offer live wallpapers for iOS 17. I knew that there was huge app potential there when iOS 17 was released in beta, as Apple changed their live wallpaper feature). I then made a video a few weeks later on my mobile TikTok channel; it made about 1 million views in 48 hours and brought me around 40k additional users. The app was #1 in the Graphics & Design category for a few weeks (in France, as I'm French so my TikTok videos are in French), and top 100 in that same category in 120+ countries. Made about $500?

Okay, that was trash, but I had no idea how to monetize the app correctly at that point. It was still a huge W to me and proved to me that I could successfully launch apps. Then I learned ASO (App Store Optimization) in depth, searched the internet, followed mobile app developers on Twitter, checked YouTube videos, you name it. I was eager to learn more. I needed more. Then I just iterated: built my 2nd app in less than a month, my 3rd in 3 weeks and so on. I just built my 14th app in 3 days and it's now in review. Every time, I manage to reuse some of my other apps' code in the new one, which is why I can build them so much faster now. I know how to monetize my apps better by checking out my competitors. I learn so much by just "spying" on other apps. Funnily enough, I only made that one TikTok video on my main account to promote my app. For all my other apps, I didn't do a single video showcasing them; the downloads have come purely from ASO.

I still use AI every day. I'm still not good at coding (a bit better than when I started). I use AI to create my app icons (Midjourney or the new AI model Flux, which is great). I use Figma + Midjourney to create my App Store screenshots (and they actually look quite good). I use GPT-4o and Claude 3.5 Sonnet to code most of my apps' features. I use GPT-4o to localize my apps (if you want to optimize the number of downloads, I strongly suggest localizing your app - it takes me about 10 minutes thanks to AI).

Now, what are my next goals? To achieve the $100k/month I need to change my strategy a little. Right now the $20k/month comes purely from organic downloads; I didn't do any paid advertising. It will be hard for me to keep launching new apps and relying on ASO to reach the $100k mark. The best bet to reach $100k is to collab with content creators and have them create a viral video showcasing your app. Depending on the app, it's not that easy; luckily some of my apps can go viral, so I will need to find the right content creators. The second way is to try TikTok/Meta ads. I can check (and have checked) all the ads that have been made by my competitors (thank you EU), so what I would do is copy their ad concepts and create similar ads. Some of them have millions in ad budget, so I know they create high-converting ads, which means you don't need to create an ad creative from scratch. My only big fear is getting banned by Apple (through no fault of my own). With a snap of their fingers they can just ban you from the platform, and that shit scares me. And you pretty much can't do anything about it.

So that's about it for me. I'm quite proud of myself, not going to lie. I've been battling so many health issues these past years, where I just stay in bed all day, that I'm surprised I was able to make it work. Anyways, feel free to ask questions. I hope it was interesting for some of you at least. PS: My new app was just approved by App Review - let the app gods favor me and bring me many downloads! Also, I forgot to talk about a potential $100k+ acquisition of one of my apps, but if that ever happens I'll make a post on it.
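For anyone wondering what the "localize your app in about 10 minutes with AI" step can look like in practice, here's a rough sketch. It assumes your UI strings live in per-language JSON files and that you have the OpenAI Python SDK installed with an API key configured - the file layout, target languages, and prompt are illustrative assumptions, not how the author's apps are actually wired up.

```python
# Rough sketch of an AI-assisted localization pass over a file of UI strings.
# Requires `pip install openai` and an OPENAI_API_KEY environment variable.
import json
from openai import OpenAI

client = OpenAI()
LANGUAGES = ["fr", "de", "es", "ja"]  # target locales (assumed)

def localize(strings_path: str = "en.json") -> None:
    with open(strings_path, encoding="utf-8") as f:
        english = json.load(f)  # e.g. {"onboarding_title": "Welcome!", ...}
    for lang in LANGUAGES:
        resp = client.chat.completions.create(
            model="gpt-4o",
            messages=[{
                "role": "user",
                "content": (
                    f"Translate the values of this JSON dictionary of mobile app UI strings "
                    f"into {lang}. Keep the keys unchanged and return valid JSON only.\n"
                    + json.dumps(english, ensure_ascii=False)
                ),
            }],
            response_format={"type": "json_object"},
        )
        translated = json.loads(resp.choices[0].message.content)
        # Write one translated strings file per target language.
        with open(f"{lang}.json", "w", encoding="utf-8") as out:
            json.dump(translated, out, ensure_ascii=False, indent=2)

if __name__ == "__main__":
    localize()
```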

How I went from $27 to $3K as a solopreneur still in a 9-5
reddit
LLM Vibe Score0
Human Vibe Score1
jottrledThis week

How I went from $27 to $3K as a solopreneur still in a 9-5

My journey started back in November 2023. I was scrolling through Twitter and YouTube and saw a word that I had never come across before. Solopreneur. The word caught my eye, mainly because I was pretty sure I knew what it meant even though it's not a word you'll find in the dictionary. I liked what it was describing. A solo entrepreneur. A one-man business. It completely resonated with me. As a software engineer by trade I'm used to working alone, especially since the pandemic hit and we were forced to work remotely.

See, I always wanted to ditch the 9-5 thing but thought that was too big and too scary for a single person to do. Surely you would need a lot of money to get started, right? Surely you would need investors? The whole concept seemed impossible to me. That was until I found all the success stories. I became obsessed with the concept of solopreneurship. As I went further down the rabbit hole I found people like Justin Welsh, Kieran Drew and Marc Louvion, to name a few, all of whom have one-person businesses making huge money every year. So I thought, if they can do it, why can't I? People like this have cleared the pathway for those looking to escape the 9-5 grind. I decided 2024 would be the year I tried this out. My main goal for the year? Build a one-man business, earn my first $ online and learn a sh*t ton along the way. My main goal in general? Build my business to $100K per year, quit my 9-5 and live with freedom.

From December 2023 to February 2024 I began brainstorming ideas. I was like a lost puppy looking for his ball. How on earth did people find good ideas? I began writing everything and anything that came to mind down in my notes app on my phone. By February I had approximately 70 ideas, each as weird and wacky as the other. I was skeptical though. If I went through all the trouble of building a product for one of these ideas, how would I know if anyone would even be interested in using it? I got scared and took a break for a week. All these ideas seemed too big and the chance that they would take off into the atmosphere was slim (in my mind anyways).

I was learning more and more about solopreneurship as the weeks went on, so I decided to build a product centered around everything I was learning about. The idea was simple. Enter a business idea and use AI to give the user details about how to market it, who their target customers were, what to write on their landing page, etc. All for a measly $27 per use. I quickly built it and launched on March 3rd 2024. I posted about it on Indie Hackers, Reddit and Hacker News. I was so excited about the prospect of earning my first internet $! Surely everyone wanted to use my product! Nope... all I got was crickets. I was quickly brought back down to earth. That was until 5 days later. I looked at my phone and had a new Stripe notification! Cha-ching! My first internet $. What a feeling! That was goal number 1 complete. It would be another 6 days before I would get my second sale... and then another 15 days to get my third. It was an emotional rollercoaster. I went from feeling like quitting the 9-5 was actually possible to thinking that maybe the ups and downs aren't worth it. On one hand I had made my first internet dollar, so I should have been ecstatic, and don't get me wrong, I was, but I wanted more. More validation that I could do this long term. By May I was starting to give up on the product. I had learned so much in the past few months about marketing, SEO, building an audience, etc.
and I wanted to build something that I thought could have more success, so I focused on one critical thing that I had learned about. What was it? Building a product that had SEO potential. A product that I knew hundreds of people were looking for.

See, this was my thinking - if I could find a keyword that people were searching for on Google hundreds/thousands of times every month, and it was easy to rank high on search engines for it, then I would go all in (in SEO land, this equates to a keyword with a low Keyword Difficulty and a search volume of 500+ a month). I began researching and found that the keyword "micro saas ideas" was being searched for around 600 times each month. Micro SaaS was something that really interested me. It was perfect for solopreneurs. Small software products that one person could build. What's not to like if you're in the game of software and solopreneurship? Researching keywords like this became like a game for me. I was hooked. I was doing it every day, finding gems that were being searched for hundreds and thousands of times every month and that still had potential.

That's when I came up with my next product idea. I decided to create a database of Micro SaaS ideas, all with this sort of SEO potential. See, if you can build a product that you know people are looking for, then that's all the validation you need. So I put this theory to the test. I created a database of Micro SaaS Ideas with SEO Potential and launched it in June 2024. This time it was different. I made $700 in the first week of launching. A large contrast to my previous failed attempt at becoming the world's greatest solopreneur. Since launch I have grown the product to $3K and I couldn't be happier. I know what you're saying, $3K isn't a lot. But it's validation. It's validation that I can earn $ online. Validation that I can grow a business, and it gives me hope that one day I'll be able to quit that 9-5 grind.

My plan is to keep growing the business. I expect there to be a few challenges up ahead, but I'll tackle them as I go and learn from the failures and successes. I have a newsletter where I share Micro SaaS ideas with SEO potential every week, which I'll leave in the first comment below. Feel free to come along for the ride. If not, I hope this post brings you some value. If you're thinking about starting as a solopreneur, stop thinking and start doing. You won't regret it.
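To make that keyword "game" concrete, here's a tiny illustrative filter over a CSV export from a keyword research tool. The column names and the exact volume/difficulty thresholds are assumptions - swap in whatever your tool exports and whatever cut-offs you're comfortable with.

```python
# Toy keyword filter: keep terms with decent monthly volume and low difficulty.
import csv

MIN_MONTHLY_VOLUME = 500   # assumed threshold
MAX_DIFFICULTY = 20        # assumed threshold

def find_candidates(path: str = "keywords.csv"):
    # Expects columns named "keyword", "volume", "difficulty" (adjust to your export).
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            volume = int(row["volume"])
            difficulty = int(row["difficulty"])
            if volume >= MIN_MONTHLY_VOLUME and difficulty <= MAX_DIFFICULTY:
                yield row["keyword"], volume, difficulty

if __name__ == "__main__":
    for keyword, volume, difficulty in find_candidates():
        print(f"{keyword}: {volume} searches/month, difficulty {difficulty}")
```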

Detailed Guide - How I've Been Self Employed for 2 Years Selling Posters
reddit
LLM Vibe Score0
Human Vibe Score1
tommo278This week

Detailed Guide - How I've Been Self Employed for 2 Years Selling Posters

Hey everyone, bit of context before you read through this. I have been selling POD posters full time for over 2 years now. My next venture is that I have started my own Print on Demand company for posters, PrintShrimp. As one way of creating customers for our service, we are teaching people for free how to also sell posters. Here is a guide I have written on how to sell posters on Etsy. Feel free to have a read through and then check out PrintShrimp - hopefully it can help some of you guys out (and get us some more customers!). All of this is also available in video format on our website too, if you prefer to learn that way. Thanks guys! And as some people asked in other subs, no this isn't written with AI 😅 This took a couple of weeks to put together! Through this guide, we will teach you everything you need to know about starting to sell posters and generate some income. We will also show you why PrintShrimp is the best POD supplier for all of your poster needs. Trust me, you won't need much convincing.

So, why are posters the best product to sell?
Also, just thought I'd quickly answer the question - why posters? If you've been researching Print on Demand you've probably come across the infinite options of t-shirts, mugs, hats, phone cases, and more. All of these are viable options, however we think posters are the perfect place to start. You can always expand into other areas further down the line! So here's a brief summary of why posters are the perfect product for Print on Demand:
- They are very easy to design! Posters are a very easy shape to deal with - can't go wrong with a rectangle. This makes designing products very easy.
- Similarly to this, what you see is what you get with a poster. You can literally see your finished product as you design it in either Canva or Photoshop. With t-shirts, for example, you have to make your design and then place it on a t-shirt. Then you have to coordinate with your printers on the size you would like the design on the t-shirt, and many other variables like that. There is no messing about with posters - what you see is what you get.
- The same high quality, everywhere. With other products, if you want to reap the benefits of printing in various countries, you need to ensure each of your global suppliers stocks the same t-shirts, is able to print in the same way, carries the same sizes, etc. Again, with posters you avoid all of this hassle - your products will come out the same, no matter which of our global locations is used.
- They have a very favorable profit margin. As you will see later, the cost price of posters is very low. And people are prepared to pay quite a lot for a decent bit of wall art! I have tried out other products, and the profit margin combined with the order quantity of posters makes them my most profitable product, every single time. Using PrintShrimp, you can be sure to enjoy anywhere between £6 and £40 pure profit per sale.
- They are one of the easiest products to print white label. This makes them perfect for Print on Demand. Your posters are simply put in a tube, and off they go. There are no extras you need to faff around with, compared to the extra elements other products come with, such as clothing labels on t-shirts.

Picking your poster niche
So, you are ready to start selling posters. Great! Now, the blessing and curse with selling posters is that there are infinite possibilities regarding what you can sell. So, it can easily be quite overwhelming at first. The first thing I would recommend doing is having a look at what others are selling. Etsy is a wonderful place for this (and will likely be a key part of your poster selling journey). So, log on to Etsy and simply type 'poster' in the search bar. Get ready to write a massive list of the broad categories and types of posters that people are selling. If you do not have more than 50 categories written down by the end, you are doing something wrong. There is seriously an infinite number of posters! For example, here are some popular ones to get you started: Star sign posters, Kitchen posters, World map posters, Custom Dog Portrait posters, Music posters, Movie posters, Fine art posters, Skiing posters, Girl Power posters and Football posters.

Now, you have a huge list of potential products to sell. What next? There are a few important things you need to bear in mind when picking your niche:
- Does this interest me? Don't make the mistake of going down a niche that doesn't actually interest you just because it would probably be a money maker. Before you know it, what can be a very fun process of making designs can become incredibly monotonous and feel like a chore. You need to bear in mind that you will be spending a lot of time creating designs - if it is something you are interested in, you are much less likely to get burnt out! As well, creativity will flow far better if it is something you are interested in, which at the end of the day will lead to better designs that are more likely to be purchased by customers.
- Is this within my design range? Don't let this put you off too much. We will go through how to get started on design later on in this guide. However, it is important to note that the plain truth of it is that some niches and designs are a hell of a lot more complicated than others. For example, quote posters can essentially be designed by anyone once you learn how to put nice fonts together in a good color scheme. On the other hand, some posters you see may have been designed with complex illustrations in a program like Illustrator. To start with, it may be better to pick a niche that seems a bit more simple to get into, as you can always expand your range with other stores further down the line. A good way of evaluating the design complexity is by identifying whether a poster is a lot of elements put together, or a lot of elements created by the designer themselves. Design can in a lot of cases be like a jigsaw - putting colours, shapes and text together to create an image. This will be a lot easier to start with and can be learnt by anyone, compared to complex drawings and illustrations.
- Is this niche subject to copyright issues? Time to delve deep into good old copyright. Now, when you go through Etsy, you will without a doubt see hundreds of sellers selling music album posters, car posters, movie posters and more. Obviously, these posters contain the property of musicians, companies and more, and are therefore copyrighted. The annoying thing is - these are a complete cash cow. If you go down the music poster route, I will honestly be surprised if you don't make thousands. However, it is only a matter of time before the copyright strikes start rolling in and you eventually get banned from Etsy. So I would highly recommend not making this mistake. Etsy is an incredible platform for selling posters, and it is a hell of a lot easier to make sales on there compared to advertising your own website. And, you only get one chance on Etsy. Once you have been banned once, you are not allowed to sign up again (and they do ID checks - so you won't be able to rejoin again under your own name). So, don't be shortsighted when it comes to entering Print on Demand. If you keep your designs legitimate, they will last you a lifetime and you will then later be able to crosspost them to other platforms, again without the worry of ever getting shut down.

So, how do I actually design posters?
Now that you have an idea of what kind of posters you want to be making, it's time to get creative and make some designs! Photoshop (and the Creative Cloud in general) is probably the best for this. However, when starting out it can be a scary investment (it costs about £30 a month unless you can get a student rate!). So, while Photoshop is preferable in the long term, when starting out you can learn the ropes of design and get going with Canva. This can be great at the start as they have a load of templates that you can use to get used to designing and experimenting (while it might be tempting to slightly modify these and sell them, this will be quite saturated on places like Etsy, so we would recommend doing something new).

What size format should I use?
The best design format to start with is arguably the A sizes, as all the A sizes (A5, A4, A3, A2, A1, A0) are scalable. This means that you can make all of your designs in one size, for example A3, and these designs will be ready to fit all other A sizes. For example, if you design an A3 poster and someone orders A1, you can just upload this A3 file to PrintShrimp and it will be ready to print. There is a wide range of other sizes you should consider offering on your shop, especially as these sizes are very popular with the American market. They have a wide range of popular options, which unfortunately aren't all scalable with each other. This does mean that you will therefore have to make some slight modifications to your design in order to be able to offer them in American sizing, in a few different aspect ratios. What you can do, however, is design all of your products in UK sizing, and simply redesign to fit American sizing once you have had an order. Essentially: design in UK sizing, but list in both UK and US sizing. Then when you get a non-A size order, you can quickly redesign it on demand. This means that you don't have to make a few different versions of each poster when first designing, and can simply do a quick redesign for US sizing when you need to. Below is PrintShrimp's standard size offering. We can also offer any custom sizing too, so please get in touch if you are looking for anything else. With these sizes, your poster orders will be dispatched domestically in whatever country your customer orders from.

Our recommendations for starting design
One thing that will not be featured in this guide is a written-out explanation or guide on how to design. Honestly, I can't think of a more boring, or frankly worse, way to learn design. When it comes to getting started, experimenting is your best friend! Just have a play around and see what you can do. It is a really fun thing to get started with, and the satisfaction when a poster design comes together is like no other. A good way to start is honestly by straight up copying a poster you see for sale online. And we don't mean copying to sell! But just trying to replicate other designs is a great way to get a feel for it and what you can do. We really think you will be surprised at how easy it is to pull together a lot of designs that at first can appear quite complicated! Your best friend throughout this whole process will be Google. At the start you will not really know how to do anything - but learning how to look into things you want to know about design is all part of the process. At first, it can be quite hard to even know how to search for what you are trying to do, but this will come with time (we promise). Learning how to google is a skill that you will learn throughout this process. Above all, what we think is most important is this golden rule: take inspiration but do not steal. You want to be selling similar products in your niche, but not copies. You need to see what is selling in your niche and get ideas from that, but if you make designs too similar to ones already available, you won't have much luck. At the end of the day, if two very similar posters are for sale and one shop has 1000 reviews and your newer one has 2, which one is the customer going to buy? You need to make yours offer something different and stand out enough to attract customers.

Etsy SEO and maximizing your sales
You may have noticed in this guide we have mentioned Etsy quite a few times! That is because we think it is hands down the best place to start selling posters. Why? Etsy is a go-to place for many looking to decorate their homes and also to buy gifts. It might be tempting to start selling with your own website straight away, however we recommend Etsy as it brings the customers to you. For example, say you start selling bathroom posters. It is going to be a hell of a lot easier to convert sales when you already have customers being shown your page after searching 'bathroom decor', compared to advertising your own website. This is especially true as it can be hard to identify your ideal target audience to then advertise to via Meta (Facebook/Instagram), for example. Websites are a great avenue to explore eventually, like I now have, but we recommend starting with Etsy and going from there.

What costs do I need to be aware of?
So, setting up an Etsy seller's account currently costs £15. The only other upfront cost you will have is the cost of listing a product - this is 20 cents per listing. From then on, every time you make a sale you will be charged a transaction fee of 6.5%, a small payment processing fee, plus another 20 cents for a renewed listing fee. It normally works out to about 10% of each order - a small price to pay for all the benefits Etsy brings. No matter what platform you sell on, you will be faced with some form of transaction fee. Etsy is actually quite reasonable, especially as they do not charge you to use their platform on a monthly basis.

What do I need to get selling?
Getting your shop looking pretty:
- Think of a shop name and design (now you are a professional designer) a logo
- Design a banner for the top of your shop
- Add in some about me info / a shop announcement
- I recommend running a sale wherein orders of 3+ items get a 20% discount. Another big benefit of PrintShrimp is that you receive large discounts when ordering multiple posters. This is great for attracting buyers and larger orders.

Making your products look attractive
That is the bulk of the 'decor' you will need to do. Next up is placing your posters in mock ups! As you may notice on Etsy, most shops show their posters framed and hanging on walls. These are 99% of the time not real photos, but digital mock ups. This is where Photoshop comes in really handy, as you can automate this process through a plug-in called Bulk Mock Up. If you don't have Photoshop, you can do this on Canva; you will just have to do it manually, which can be rather time consuming. Now, where can you get the actual mock ups? One platform we highly recommend for design in general is Envato Elements. These are design marketplaces where you have access to millions of design resources that you are fully licensed to use!

Titles, tags, and descriptions
Now for the slightly more nitty gritty part. You could have the world's most amazing looking poster, however, if you do not get the Etsy SEO right, no one is going to see it! We will take you through creating a new Etsy listing field by field so you know how to best list your products. The key to Etsy listing optimisation is to maximise. Literally cram in as many keywords as you possibly can! Before you start this process, create a word map of anything you can think of relating to your listing. And come at this from the point of view of: if I was looking for a poster like mine, what would I search?

Titles
Here you are blessed with 140 characters to title your listing. Essentially, start off with a concise way of properly describing your poster, and then afterwards add in as many keywords as you can! Here is an example of the title of a well-selling skiing poster:
Les Arcs Skiing Poster, Les Arcs Print, Les Alpes, France Ski Poster, Skiing Poster, Snowboarding Poster, Ski Resort Poster Holiday, French
This is 139 characters out of 140 - you should try and maximise this as much as possible! As you can see, this crams in a lot of keywords and search terms related to skiing as a whole, the poster category, and then the specifics of the poster itself (Les Arcs resort in France). Bear in mind that if you are creating a lot of listings of the same theme, you won't have to spend time creating an entirely new title. For example, if your next poster was of a ski resort in Italy, you can copy this one over and just swap out the specifics - for example, change "France ski poster" to "Italy ski poster", change "Les Arcs" to "The Dolomites", etc.

Description
The same logic applies for descriptions - try and cram in as many keywords as you can! Here is an example for a Formula One poster:
George Russell, Mercedes Formula One Poster - item-specific keywords
Bright, modern and vibrant poster to liven up your home. - describes the style of the poster
All posters are printed on high quality, museum grade 200gsm poster paper. Suitable for framing and frames. - shows the quality of the print, and mentions frames whilst showing it comes unframed
Experience the thrill of the racetrack with this stunning Formula One poster. Printed on high-quality paper, this racing car wall art print features a dynamic image of a Formula One car in action, perfect for adding a touch of speed and excitement to any motorsports room or man cave. Whether you're a die-hard fan or simply appreciate the adrenaline of high-speed racing, this poster is sure to impress. Available in a range of sizes, it makes a great addition to your home or office, or as a gift for a fellow Formula One enthusiast. Each poster is carefully packaged to ensure safe delivery, so you can enjoy your new piece of art as soon as possible. - a nice bit of text really highlighting a lot of keywords such as gift, motorsports, racetrack, etc.
You could go further with this too, by adding in extra things related to the poster such as 'Perfect gift for a Mercedes F1 fan', etc.

Tags
Now, these are actually probably the most important part of your listing! You get 13 tags (20 character limit for each) and these are essentially search terms that will match your listing with what customers search for when shopping. You really need to maximize these - whilst Title and Description play a part, these are the main things that will bring buyers to your listing. Once again, it is important to think about what customers are likely to be searching when looking for a poster similar to yours. Life hack alert! You can actually see what tags other sellers are using. All you need to do is go to a listing similar to yours that is selling well, scroll down, and you can actually see them listed out at the bottom of the page! Here is an example of what this may look like: So, go through a few listings of competitors and make notes on common denominators that you can integrate into your listing. As you can see here, this seller uses tags such as 'Birthday Gift' and 'Poster Print'. When you first start out, you may be better off swapping these out for more listing-specific tags. This seller has been on Etsy for a few years, however, and has 15,000+ sales, so they are more likely to see success from these tags. If it's not clear why, think about it this way: if you searched 'poster print' on Etsy today, there would be tens of thousands of results. However, if you searched 'Russell Mercedes Poster', you would (as of writing) get 336 results. Etsy is far more likely to push your product to the top of the latter tag, against 300 other listings, rather than the top of 'Poster Print', where it is incredibly competitive. It is only when you are a more successful shop pulling in a high quantity of orders that these larger and more generic tags will work for you, as Etsy has more trust in your shop and will be more likely to push you to the front.

SKUs
One important thing you need to do is add SKUs to all of your products! This is worth doing at the start as it will make your life so much easier when it comes to making sales and using PrintShrimp further down the line. What is an SKU? It is a 'stock keeping unit', and is essentially just a product identifier. Your SKUs need to match the file name that you upload to PrintShrimp. For example, if you made a poster about the Eiffel Tower, you can literally name the SKU eiffel-tower. There is no need to complicate things! As long as your file name (as in the image name of your poster on your computer) matches your SKU, you will be good to go.
It may be more beneficial to set up a system with unique identifiers, to make organising your files a lot easier further down the line. Say you get to 1000 posters eventually: you'll want to be able to quickly search a code, and also ensure every SKU is always unique, so you won't run into accidentally using the same SKU twice further down the line. For example, you can set it up so that at the start of each file name you have [unique id][info], so your files will look like:
A1eiffeltower
A2france
And further down the line:
A99aperolspritz
B1potatoart
This not only removes the potential issue of duplicating SKUs accidentally (for example if you made a few posters of the same subject), but also keeps your files well organised. If you need to find a file, you can search your files according to the code, so just by searching 'a1' for example, rather than having to trawl through a load of different files until you find the correct one.
If your poster has variations, for example color variations, you can set a different SKU for each variation. Just click the little box when setting up variations that says 'SKUs vary for each (variation)'. So if you have a poster available in either a white or black background, you can name each file, and therefore each SKU, a1eiffel-tower-black and a1eiffel-tower-white, for example.
The same goes for different sizes. As different American sizes have different aspect ratios, as mentioned above you may have to reformat some posters if you get a sale for one of these sizes. You can then add the SKU to your listing once you have reformatted your poster. So, for example, if you sell a 16x20" version of the Eiffel Tower poster, you can name this file eiffel-tower-white-1620. Whilst this involves a little bit of setup, the time it saves you overall is massive!

Variations and Prices
So, when selling posters there is a huge variety of sizes that you can offer, as mentioned previously. Non-negotiable is that you should be offering A5-A1. These will likely be your main sellers, especially in the UK. It is also a good idea to offer inch sizing to appeal to a global audience (and bear in mind that with PrintShrimp you will be able to print in multiple countries around the world!). Below is a recommended pricing structure of what to charge on Etsy. Feel free to mess around with these! You may notice on Etsy that many shops charge a whole lot more for sizes such as A1, 24x36", etc. In my experience I prefer charging a lower rate to attract more sales, but there is validity in going for a lower number of sales with higher profits. As mentioned above, you can also offer different variations on items - for example, different colour schemes on posters. This is always a decent idea (if it suits the design) as it provides the customer with more options, which might help to convert the sale. You can always add this in later, however, if you want to keep it simple while you start!

Setting up shipping profiles
Etsy makes it very easy to set up different shipping rates for different countries. However, luckily with PrintShrimp you can offer free shipping to the majority of the major countries that are active on Etsy! Using PrintShrimp means that your production costs are low enough in each domestic market to justify this. If you look on Etsy you can see there are many shops that post internationally to countries such as the US or Australia. Therefore, they often charge £8-10 in postage, and have a delivery time of 1-2 weeks. This really limits their customer base to their domestic market. Using PrintShrimp avoids this and means you can offer free shipping (as we absorb the shipping cost in our prices) to the major markets of the UK, Australia, and USA (Europe coming soon!). We also offer a 1 day processing time, unlike many POD poster suppliers. This means you can set your Etsy processing time to just one day, which, combined with our quick shipping, means you will be one of the quickest on Etsy at sending out orders. This is obviously very attractive for customers, who are often very impatient with wanting their orders!

Getting the sales and extra tips
- Don't list an insane number of listings when you first get started. Etsy will be like 'hang on a second' if a brand new shop suddenly has 200 items in the first week. Warm up your account, and take things slow as you get going. We recommend 5 a day for the first week or so, and then you can start uploading more. You don't want Etsy to flag your account for suspicious bot-like activity when you first get going.
- It is very easy to copy listings when creating a new one. Simply select an old listing and press copy, and then you can just change the listing-specific details to create a new one, rather than having to start from scratch. It can feel like a bit of a ball-ache setting up your first ever listing, but from then on you can just copy it over and just change the specifics.
- Try and organize your listings into sections! This really helps the customer journey. Sometimes a customer will click onto your shop after seeing one of your listings, so it really helps if they can easily navigate your shop for what they are looking for.

So, you now have a fully fledged Etsy shop. Well done! Time to start making £3,000 a month straight away, right? Not quite. Please bear in mind, patience is key when starting out. If you started doing this because you are £10,000 in debt to the Albanian mafia and need to pay it off next week, you have come into this in the wrong frame of mind. If you have, however, started this to slowly build up a side hustle which hopefully one day becomes your full time gig, then winner winner chicken dinner. Starting out on Etsy isn't always easy. It takes time for your shop to build up trust! As I've said before, a buyer is far more likely to purchase from a shop with 1000s of reviews than a brand new one with 0. But before you know it, you can become one of these shops! One thing you can do at the very start is to encourage your friends and family to buy your posters! This is a slightly naughty way of getting a few sales at the start, of course followed by a few glowing 5* reviews. It really helps to give your shop this little boost at the start, so if this is something you can do, then I recommend it. Okay, so once you have a fully fledged shop with a decent number of listings, you might be expecting the sales to start rolling in. And, if you are lucky, they indeed might. However, in my experience, you need to give your listings a little boost. So let us introduce you to:

The wonderful world of Etsy ads
Ads!! Oh no, that means money!! We imagine some of you more risk-averse people are saying that to yourselves right now. And yes, it indeed does. But more often than not, unfortunately, you do have to spend money to make money. Fortunately, in my experience anyway, Etsy ads do tend to work. This does only apply if your products are actually good, however, so if you're back here after paying for ads for 2 months and are losing money at the same rate as your motivation, maybe go back to the start of this guide and pick another niche. When you first start out, there are two main strategies.

Number 1: The Safer Option
So, with PrintShrimp, you will essentially be making a minimum of £6 profit per order. With this in mind, I normally start a new shop with a safer strategy of advertising my products with a budget of $3-5 a day. This then means that at the start, you only need to make 1 sale to break even, and anything above that is pure profit! This might not seem like the most dazzling proposition right now, but again, please bear in mind that growth will be slow at the start. This means that you can gradually grow your shop, and therefore the trust that customers have in your shop, over time, with a very small risk of ever actually losing money.

Number 2: The Billy Big Balls Option
If you were yawning while reading the first option, then this strategy may be for you. This will be better suited to those of you who are a bit more risk-prone, and it also helps if you have a bit more cash to invest at the start. Through this strategy, you can essentially pay your way to the top of Etsy's rankings. For this, you'll probably be looking at spending $20 a day on ads. So, this can really add up quickly and is definitely the riskier option. In my experience, the level of sales with this may not always match up to your spend every day. You may find that some days you rake in about 10 sales, and other days only one. But what this does mean is that as your listings get seen and purchased more, they will begin to rank higher in Etsy's organic search rankings, at a much quicker rate than option one. This is the beauty of Etsy's ads. You can pay to boost your products, but then results from this paid promotion feed into the organic ranking of your products. So you may find that you can splash the cash for a while at the start in order to race to the top, and then drop your ad spending later on when your products are already ranking well.

Sending your poster orders
So, you've now done the hard bit. You have a running Etsy store, and essentially all you need to do now on a daily basis is send out your orders and reply to customer messages! This is where it really becomes passive income.
- Check out the PrintShrimp order portal. Simply sign up, and you can place individual orders through there.
- Bulk upload: We have an option to bulk upload your Etsy orders via CSV.
Seriously, when you are up and running with your first store, it is really as easy as that. Once you have your first Etsy store up and running, you can think about expanding. There are many ways to expand your income. You can set up other Etsy stores, as long as the type of posters you are selling varies. You can look into setting up your own Shopify stores, and advertise them through Facebook, Instagram, etc.
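As a small aside on the SKU system described above: once your catalogue grows, a few lines of Python can generate consistent SKUs/file names for you. This is purely illustrative - PrintShrimp only needs the SKU to match the file name, so adapt the format however you like.

```python
# Tiny helper for a [unique id][subject][-variation][-size] SKU scheme.
import re

def slugify(text: str) -> str:
    # lowercase the text and replace anything non-alphanumeric with hyphens
    return re.sub(r"[^a-z0-9]+", "-", text.lower()).strip("-")

def make_sku(unique_id: str, subject: str, variation: str = "", size: str = "") -> str:
    parts = [unique_id + slugify(subject)]
    if variation:
        parts.append(slugify(variation))
    if size:
        parts.append(size)  # e.g. "1620" for a 16x20" reformat
    return "-".join(parts)

# These match the naming used earlier in the guide.
print(make_sku("a1", "Eiffel Tower", "black"))           # a1eiffel-tower-black
print(make_sku("a1", "Eiffel Tower", "white", "1620"))   # a1eiffel-tower-white-1620
```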

100 best ai sustainable business ideas in 2025
reddit
LLM Vibe Score0
Human Vibe Score1
Low_Philosopher1792This week

100 best ai sustainable business ideas in 2025

AI in Renewable Energy
AI-powered smart solar panel optimization
Predictive maintenance for wind turbines
AI-driven energy storage management
AI-based microgrid optimization
Smart grid energy forecasting
AI-powered water desalination efficiency
AI-driven carbon footprint reduction software
AI-powered hydropower efficiency monitoring
AI for geothermal energy exploration
AI-driven green hydrogen production optimization

AI in Waste Management & Recycling
AI-based waste sorting robots
Smart recycling bins with AI recognition
AI-powered food waste management
AI-driven upcycling marketplace
AI-enabled e-waste management solutions
AI-powered sustainable packaging optimization
AI-driven landfill management systems
AI-powered plastic waste tracking and reduction
AI-based waste-to-energy conversion
AI-driven composting automation

AI in Water Conservation
AI-powered leak detection and water conservation
AI-driven smart irrigation systems
AI-based flood prediction and mitigation
AI-powered ocean plastic cleanup robots
AI-driven rainwater harvesting optimization
AI-based groundwater level monitoring
AI-powered desalination energy efficiency
AI-driven smart water meters
AI-powered wastewater treatment optimization
AI-based water pollution monitoring

AI in Sustainable Agriculture
AI-driven precision farming
AI-powered vertical farming automation
AI-based pest and disease prediction
AI-powered livestock health monitoring
AI-driven soil health analysis
AI-powered regenerative agriculture analytics
AI-driven smart greenhouses
AI-powered crop rotation optimization
AI-based carbon farming solutions
AI-powered sustainable aquaculture

AI in Transportation & Mobility
AI-powered electric vehicle (EV) battery optimization
AI-driven smart traffic management
AI-powered EV charging station optimization
AI-based sustainable urban mobility planning
AI-powered drone delivery for carbon reduction
AI-driven logistics and supply chain sustainability
AI-powered smart public transport systems
AI-driven sustainable aviation fuel optimization
AI-powered bicycle-sharing optimization
AI-driven carpooling and ride-sharing efficiency

AI in Green Manufacturing
AI-powered energy-efficient manufacturing
AI-driven supply chain sustainability analytics
AI-based material waste reduction
AI-powered sustainable fashion production
AI-driven predictive demand to reduce overproduction
AI-powered eco-friendly textile manufacturing
AI-driven 3D printing for sustainable manufacturing
AI-powered emission reduction in factories
AI-driven green construction material optimization
AI-based lifecycle assessment for eco-products

AI in Carbon Offsetting & Climate Action
AI-powered carbon credit marketplaces
AI-driven tree planting optimization
AI-based carbon capture efficiency enhancement
AI-powered reforestation tracking and monitoring
AI-driven climate risk prediction
AI-powered environmental compliance software
AI-driven sustainable investment analysis
AI-based corporate sustainability tracking
AI-powered carbon accounting and reporting
AI-driven decarbonization roadmaps for businesses

AI in Sustainable Smart Cities
AI-powered urban energy efficiency monitoring
AI-driven smart lighting for cities
AI-based pollution monitoring and reduction
AI-driven green building automation
AI-powered smart HVAC energy optimization
AI-driven urban tree canopy management
AI-powered digital twins for sustainable city planning
AI-based urban noise pollution monitoring
AI-powered public waste management optimization
AI-driven citizen engagement for sustainability

AI in Eco-Friendly Consumer Solutions
AI-powered sustainable shopping assistant
AI-driven personal carbon footprint tracking app
AI-powered second-hand marketplace optimization
AI-driven sustainable food delivery services
AI-powered ethical supply chain transparency
AI-driven zero-waste grocery stores
AI-powered green subscription services
AI-driven sustainable tourism planning
AI-powered smart home energy efficiency optimization
AI-driven personal finance for sustainability investments

AI in Sustainable Healthcare & Well-being
AI-powered climate impact on health analytics
AI-driven sustainable hospital management
AI-based predictive disease outbreak prevention
AI-powered mental health solutions for eco-anxiety
AI-driven green pharmaceutical production
AI-powered sustainable medical waste management
AI-based air quality health impact monitoring
AI-driven climate-friendly diet and nutrition planning
AI-powered fitness and well-being optimization for sustainability
AI-driven telemedicine to reduce healthcare emissions

These AI-driven sustainable business ideas offer high growth potential while making a positive impact on the planet. Let me know if you want details on a specific idea or need help with implementation strategies!

AI Content Campaign Got 4M impressions, Thousands of Website Views, Hundreds of Customers for About $100 — This is the future of marketing
reddit
LLM Vibe Score0
Human Vibe Score0.857
adamkstinsonThis week

AI Content Campaign Got 4M impressions, Thousands of Website Views, Hundreds of Customers for About $100 — This is the future of marketing

Alright. So, a few months ago I tested a marketing strategy for a client that I’ve sense dedicated my life to developing on. The Idea was to take the clients Pillar content (their YouTube videos) and use AI to rewrite the content for all the viable earned media channels (mainly Reddit). The campaign itself was moderately successful. To be specific, after one month it became their 2nd cheapest customer acquisition cost (behind their organic YouTube content). But there is a lot to be done to improve the concept. I will say, having been in growth marketing for a decade, I felt like I had hit something big with the concept. I’m going to detail how I built that AI system, and what worked well and what didn’t here. Hopefully you guys will let me know what you think and whether or not there is something here to keep working on. DEFINING THE GOAL Like any good startup, their marketing budget was minimal. They wanted to see results, fast and cheap. Usually, marketers like me hate to be in this situation because getting results usually either takes time or it takes money. But you can get results fast and cheap if you focus on an earned media strategy - basically getting featured in other people’s publication. The thing is these strategies are pretty hard to scale or grow over time. That was a problem for future me though. I looked through their analytics and saw they were getting referral traffic from Reddit - it was their 5th or 6th largest source of traffic - and they weren’t doing any marketing on the platform. It was all digital word of mouth there. It kind of clicked for me there, that Reddit might be the place to start laying the ground work. So with these considerations in mind the goal became pretty clear: Create content for relevant niche communities on Reddit with the intent of essentially increasing brand awareness. Use an AI system to repurpose their YouTube videos to keep the cost of producing unique content for each subreddit really low. THE HIGH-LEVEL STRATEGY I knew that there are huge amounts of potential customers on Reddit (About 12M people in all the relevant communities combined) AND that most marketers have a really tough time with the platform. I also knew that any earned media strategy, Reddit or not, means Click Through Rates on our content would be extremely low. A lot of people see this as a Reddit specific problem because you can’t self-promote on the platform, but really you have to keep self-promotion to a minimum with any and all earned media. This basically meant we had to get a lot of impressions to make up for it. The thing about Reddit is if your post absolutely crushes it, it can get millions of views. But crushing it is very specific to what the expectations are of that particular subreddit. So we needed to make content that was specifically written for that Subreddit. With that I was able to essentially design how this campaign would work: We would put together a list of channels (specifically subreddits to start) that we wanted to create content for. For each channel, we would write a content guideline that details out how to write great content for this subreddit. These assets would be stored in an AirTable base, along with the transcripts of the YouTube videos that were the base of our content. 
We would write and optimize different AI Prompts that generated different kinds of posts (discussion starters about a stock, 4-5 paragraph stock analysis, Stock update and what it means, etc…) We would build an automation that took the YouTube transcripts, ran each prompt on it, and then edited each result to match the channel writing guidelines. And then we would find a very contextual way to leave a breadcrumb back to the client. Always as part of the story of the content. At least, this is how I originally thought things would go. CHOOSING THE RIGHT SUBREDDITS Picking the right communities was vital. Here’s the basic rubric we used to pick and prioritize them: • Relevance: We needed communities interested in stock analysis, personal finance, or investing. • Subreddit Size vs. Engagement: Large subreddits offer more potential impressions but can be less focused. Smaller subreddits often have higher engagement rates. • Content Feasibility: We had to ensure we could consistently create high-value posts for each chosen subreddit. We started with about 40 possibilities, then narrowed it down to four or five that consistently delivered upvotes and user signups. CREATING CHANNEL-SPECIFIC GUIDES By the end, creating channel specific writing guidelines looked like a genius decision. Here’s how we approached it and used AI to get it done quickly: Grabbed Top Posts: We filtered the subreddit’s top posts (change filter to “Top” and then “All Time”) of all time to see the kinds of content that performed best Compiled The Relevant Posts: We took the most relevant posts to what we were trying to do and put them all on one document (basically created one document per subreddit that just had the top 10 posts in that subreddit). Had AI Create Writing Guideline Based On Posts: For each channel, we fed the document with the 10 posts with the instructions “Create a writing guideline for this subreddit based on these high performing posts. I had to do some editing on each guideline but this worked pretty well and saved a lot of time. Each subreddit got a custom guideline, and we put these inside the “Channels” table of the AirTable base we were developing with these assets. BUILDING THE AI PROMPTS THAT GENERATED CONTENT Alright this is probably the most important section so I’ll be detailed. Essentially, we took all the assets we developed up until this point, and used them to create unique posts for each channel. This mean each AI prompt was about 2,000 words of context and produced about a 500-word draft. There was a table in our AirTable where we stored the prompts, as I alluded to earlier. And these were basically the instructions for each prompt. More specifically, they detailed out our expectations for the post. In other words, there were different kinds of posts that performed well on each channel. For example, you can write a post that’s a list of resources (5 tools we used to…), or a how to guide (How we built…), etc.. Those weren’t the specific ones we used, but just wanted to really explain what I meant there. That actual automation that generated the content worked as follows: New source content (YouTube video transcript) was added to the Source Content table. This triggered the Automation. The automation grabbed all the prompts in the prompt table. For each prompt in the prompt table, we sent a prompt to OpenAI (gpt-4o) that contained first the prompt and also the source content. 
Then, for each channel that content prompt could be used on, we sent another prompt to OpenAI that revised the result of the first prompt based on the specific channel guidelines. The output of that prompt was added to the Content table in AirTable. To be clear, our AirTable had 4 tables: Content Channels Prompts Source Content The Source Content, Prompts, and Channel Guidelines were all used in the prompt that generated content. And the output was put in the Content table. Each time the automation ran, the Source Content was turned into about 20 unique posts, each one a specific post type generated for a specific channel. In other words, we were create a ton of content. EDITING & REFINING CONTENT The AI drafts were never perfect. Getting them Reddit-ready took editing and revising The main things I had to go in and edit for were: • Tone Adjustments: We removed excessively cliche language. The AI would say silly things like “Hello fellow redditors!” which sound stupid. • Fact-Checking: Financial data can be tricky. We discovered AI often confused figures, so we fact check all stock related metrics. Probably something like 30-40% error rate here. Because the draft generation was automated, that made the editing and getting publish ready the human bottleneck. In other words, after creating the system I spent basically all my time reviewing the content. There were small things I could do to make this more efficient, but not too much. The bigger the model we used, the less editing the content needed. THE “BREADCRUMB” PROMOTION STRATEGY No where in my prompt to the AI did I mention that we were doing any marketing. I just wanted the AI to focus on creating content that would do well on the channel. So in the editing process I had to find a way to promote the client. I called it a breadcrumb strategy once and that stuck. Basically, the idea was to never overtly promote anything. Instead find a way to leave a breadcrumb that leads back to the client, and let the really interested people follow the trail. Note: this is supposed to be how we do all content marketing. Some examples of how we did this were: Shared Visuals with a Subtle Watermark: Because our client’s product offered stock data, we’d often include a chart or graph showing a company’s financial metric with the client’s branding in the corner. Added Supporting Data from Client’s Website: If we mentioned something like a company’s cash flow statement, we could link to that company’s cash flow statement on the client’s website. It worked only because there was a lot of data on the client’s website that wasn’t gated. These tactics were really specific to the client. Which is should be. For other companies I would rethink what tactics I use here. THE RESULTS I’m pretty happy with the results • Impressions: – Early on posts averaged \~30,000 apiece, but after about a month of optimization, we hit \~70,000 impressions average. Over about two months, we reached 4 million total impressions. • Signups: – In their signups process there was one of those “Where did you find us?” questions and the amount of people who put Reddit jumped into the few hundred a month. Precise tracking of this is impossible. • Cost Efficiency (This is based on what I charged, and not the actual cost of running the campaign which is about $100/mo): – CPM (cost per thousand impressions) was about $0.08, which is far better than most paid channels. – Cost per free user: \~$8-10. 
After about a 10% conversion rate to a paid plan, our cost per paying user was $80–$100—well below the client’s previous $300–$400.

HIGHLIGHTS: WHAT WORKED
Subreddit-Specific Content: Tailoring each post’s format and length to the audience’s norms boosted engagement. It worked out really well. One post got over 1M views alone, and we regularly had posts with hundreds of thousands of views.
Breadcrumbs: We never had anyone call us out for promoting. And really, we weren’t. Our first priority was writing content that would crush on that subreddit.
Using the Founder’s Existing Material: The YouTube transcripts grounded the AI’s output in content we had already made. This was really why we were able to produce so much content.

CHALLENGES: WHAT DIDN’T WORK
AI is still off: Maybe I’m expecting too much, but I still wish the AI had done a better job. I edited a lot of content. Human oversight was critical.
Scheduling all the content was a pain: Recently I automated this pretty well, but at first I was scheduling everything manually, and scheduling a hundred or so posts was a hassle.
Getting data and analytics: Not only was our traffic data limited, but the data from Reddit had to be collected manually. I will probably automate this in the future.

COST & TIME INVESTMENT
Setup: The setup originally took me a couple of weeks. I’ve since figured out how to do it much faster (about 1 week). The AirTable setup was easy, and the tool costs $24/mo, so not bad. ChatGPT costs were pretty cheap. Less than $75 per month. I’ve since switched to using o1, which is much more expensive but saves me a lot of editing time.
Human Editing: Because this is the human part of the process and everything else was automated, by default all my time was spent editing content. Still, this was a lot better than creating content from scratch, probably by a factor of 5 or 10. The main expense was paying an editor (or using your own time) to refine posts.
Worth it? Yes. Even with the editing time, I was able to generate way more content than I would have otherwise.

LESSONS & ACTIONABLE TAKEAWAYS
Reddit as a Growth Channel: If you genuinely respect each subreddit’s culture, you can achieve massive reach on a tight budget.
AI + Human Collaboration: AI excels at first drafts, but human expertise is non-negotiable for polishing and ensuring factual integrity.
Soft Promotion Wins: The “breadcrumb” approach paid off. It might feel like too light a touch, but it is crucial for Reddit communities.
Create Once, Repurpose as Many Times as Possible: If you have blog posts, videos, podcasts, or transcripts, feed them into AI to keep your message accurate and brand-consistent.

CONCLUSION & NEXT STEPS
If you try a similar approach:
• Begin with smaller tests in a few niches to learn what resonates.
• Create a clear “channel guide” for each community.
• Carefully fact-check AI-generated posts.
• Keep brand mentions low-key until you’ve established credibility.

I run an AI automation agency (AAA). My honest overview and review of this new business model
reddit
LLM Vibe Score0
Human Vibe Score1
AI_Scout_OfficialThis week

I run an AI automation agency (AAA). My honest overview and review of this new business model

I started an AI tools directory in February, and then branched off that to start an AI automation agency (AAA) in June. So far I've come across a lot of unsustainable "ideas" to make money with AI, but at the same time a few diamonds in the rough that aren't fully tapped into yet- especially the AAA model. Thought I'd share this post to shine a light on this new business model and share some ways you could potentially start your own agency, or at the very least know who you are dealing with and how to pick and choose when you (inevitably) get bombarded with cold emails from them down the line.

Foreword
Running an AAA does NOT involve using AI tools to generate and sell content directly. That ship has sailed, and unless you are happy with $5 from Fiverr every month or so, it is not a real business model. Cry me a river, but generating generic art with AI and slapping it onto a T-shirt to sell on Etsy won't make you a dime. At the same time, the AAA model will NOT require you to have a deep theoretical knowledge of AI, or any academic degree, as we are more so dealing with the practical applications of generative AI and how we can implement these into different workflows and tech stacks, rather than building AI models from the ground up. Regardless of all that, common sense and a willingness to learn will help (a shit ton), as with anything. Keep in mind - this WILL involve work and motivation as well. The mindset that AI somehow means everything can be done for you on autopilot is not the right way to approach things. The common theme of businesses I've seen successfully implement AI into their operations is the willingness to work with AI in a way that augments their existing operations, rather than flat out replacing a worker or team. And this is exactly the train of thought you need when working with AI as a business model. However, as the field is relatively unsaturated and hype surrounding AI is still fresh for enterprises, right now is the prime time to start something new if generative AI interests you at all. With that being said, I'll be going over three of the most successful AI-adjacent businesses I've seen over this past year, in addition to some tips and resources to point you in the right direction.

so.. WTF is an AI Automation Agency?
The AI automation agency (or as some YouTubers have coined it, the AAA model) at its core involves creating custom AI solutions for businesses. I have over 1500 AI tools listed in my directory, however the feedback I've received from some enterprise users is that ready-made SaaS tools are too generic to meet their specific needs. Combine this with the fact that virtually no smaller companies have the time or skills required to develop custom solutions right off the bat, and you have yourself real demand. I would say in practice, the AAA model is quite similar to WordPress and even web dev agencies, with the major difference being all solutions you develop will incorporate key aspects of AI AND automation. Which brings me to my second point- JUST AI IS NOT ENOUGH. Rather than reducing the amount of time required to complete certain tasks, I've seen many AI agencies make the mistake of recommending and (trying to) sell solutions that more likely than not increase the workload of their clients. For example, if you were to make an internal tool that has AI answer questions based on their knowledge base, but this knowledge base has to be updated manually, this is creating unnecessary work.
As such, I think one of the key components of building successful AI solutions is incorporating the new (generative AI/LLMs) with the old (programmatic automation- think Zapier, APIs, etc.). Finally, for this business model to be successful, ideally you should target a niche in which you have already worked and understand the pain points and needs. Not only does this make it much easier to get calls booked with prospects, the solutions you build will have much greater value to your clients (meaning you get paid more). A mistake I've seen many AAA operators make (and I blame this on the "Get Rich Quick" YouTubers) is focusing too much on a specific productized service, rather than really understanding the needs of businesses. The former is better done via a SaaS model, but when going the agency route the only thing that makes sense is building custom solutions. This is why I always take a consultant-first approach. You can only build once you understand what they actually need and how certain solutions may impact their operations, workflows, and bottom line.

Basics of How to Get Started
Pick a niche. As I mentioned previously, preferably one that you've worked in before. Niches I know of that are actively being bombarded with cold emails include real estate, e-commerce, auto dealerships, lawyers, and medical offices. There is a reason for this, but I will tell you straight up this business model works well if you target any white-collar service business (internal tools approach) or high-volume business (customer-facing tools approach).
Set up your toolbox. If you wanted to start a pressure washing business, you would need a pressure washer. This is no different. For those without programming knowledge, I've seen two common ways AAAs get set up to build- one is having a network of on-call web developers, whether it's personal contacts or simply going to Upwork or any talent sourcing agency. The second is having an arsenal of no-code tools. I'll get to this more in a second, but this works because at its core, when we are dealing with the practical applications of AI, the code is quite simple.
Start cold sales. Unless you have a network already, this is not a step you can skip. You've already picked a niche, so all you have to do is find the right message. Keep cold emails short, sweet, but enticing- and it will help a lot if you did step 1 correctly and intimately understand who your audience is. I'll touch later on how you can leverage AI yourself to help you with outreach and closing.

The beauty of gen AI and the AAA model
You don't need to be a seasoned web developer to make this business model work. The large majority of solutions that SME clients want are best built using an API for an LLM for the actual AI aspect. The value we create with the solutions we build comes from the conceptual framework and design that not only does what they need it to but integrates smoothly with their existing tech stack and workflow. The actual implementation is quite straightforward once you understand the high-level design and know which tools you are going to use. To give you a sense, even if you plan to build out these apps yourself (say in Python), the large majority of the nitty-gritty technical work has already been done for you, especially if you leverage Python libraries and packages that offer high-level abstraction for LLM-related functions. For instance, calling GPT can be as little as a single line of code.
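To make that concrete, here is roughly what that looks like with the OpenAI Python SDK. The actual call is a single statement; the model name and prompt here are just placeholders, not a recommendation for any specific use case.

```python
# One call to an LLM with the OpenAI Python SDK - the "AI" part of many
# client solutions really is this small.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

reply = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user",
               "content": "Summarize this support ticket in two sentences: ..."}],
).choices[0].message.content

print(reply)
```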
And there are no-code tools where these functions are simply an icon on a GUI. Aside from understanding the capabilities and limitations of these tools and frameworks, the only thing that matters is being able to put them together in a way that makes sense for what you want to build. Which is why outsourcing and no-code tools both work in our case.

Okay... but how TF am I supposed to actually build out these solutions?
Now the fun part. I highly recommend getting familiar with Langchain and LlamaIndex. Both are Python libraries that help a lot with the high-level LLM abstraction I mentioned previously. The two most important aspects include being able to integrate internal data sources/knowledge bases with LLMs, and having LLMs perform autonomous actions. The two most common methods respectively are RAG and output parsing.

RAG (Retrieval-Augmented Generation)
If you've ever seen a tool that seemingly "trains" GPT on your own data and wondered how it all works- well, I have an answer for you. At a high level, the user query is first fed to what's called a vector database to run a vector search. Vector search basically lets you do semantic search, where you are searching data based on meaning. The vector database then retrieves the most relevant sections of text as they relate to the user query, and this text gets APPENDED to your GPT prompt to provide extra context to the AI. Further, with prompt engineering, you can limit GPT to only generate an answer if it can be found within this extra context, greatly limiting the chance of hallucination (this is where AI makes random shit up). Aside from vector databases, we can also implement RAG with other data sources and retrieval methods, for example SQL databases (via parsing the outputs of LLMs- more on this later).

Autonomous Agents via Output Parsing
A common need of clients has been having AI actually perform tasks, rather than simply spitting out text. For example, with autonomous agents, we can have an e-commerce chatbot do the work of a basic customer service rep (i.e. look into orders, refunds, shipping). At a high level, what's going on is that the response of the LLM is being used programmatically to determine which API to call. Keeping on with the e-commerce example, if I wanted a chatbot to check shipping status, I could have an LLM response within my app (not shown to the user) produced by a prompt that outputs a specific hash or string, and programmatically I can determine which API call to make based on this hash/string. And using the same fundamental concept as with RAG, I can append the API response to a final prompt that spits out the answer for the user.
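To show the moving parts of the RAG flow described above, here is a toy, in-memory sketch. This is not how LlamaIndex, Langchain, or any specific tool implements it under the hood: a real build would swap the cosine-similarity loop for a proper vector database, and the model and embedding names are just common defaults I'm assuming for the example.

```python
# Toy end-to-end RAG flow: embed docs, retrieve by cosine similarity,
# append the best matches to the prompt. Real projects would use a vector
# database instead of this in-memory version.
import numpy as np
from openai import OpenAI

client = OpenAI()


def embed(texts: list[str]) -> np.ndarray:
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])


def answer(question: str, docs: list[str], top_k: int = 3) -> str:
    doc_vecs = embed(docs)              # in practice, precompute and store these
    q_vec = embed([question])[0]
    scores = doc_vecs @ q_vec / (np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q_vec))
    context = "\n---\n".join(docs[i] for i in np.argsort(scores)[-top_k:])

    # The retrieved text is APPENDED to the prompt, and the model is told to
    # answer only from it, which limits hallucination.
    prompt = (f"Answer the question using ONLY the context below. "
              f"If the answer isn't there, say you don't know.\n\n"
              f"Context:\n{context}\n\nQuestion: {question}")
    resp = client.chat.completions.create(
        model="gpt-4o", messages=[{"role": "user", "content": prompt}]
    )
    return resp.choices[0].message.content
```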
How No Code Tools Can Fit In (With some example solutions you can build)
With that being said, you don't necessarily need to do all of the above by coding yourself, with Python libraries or otherwise. However, I will say that having that high-level overview will help IMMENSELY when it comes to using no-code tools to do the actual work for you. Regardless, here are a few common solutions you might build for clients as well as some no-code tools you can use to build them out.

Ex. Solution 1: AI Chatbots for SMEs (Small and Medium Enterprises)
This involves creating chatbots that handle user queries, lead gen, and so forth with AI, and will use the principles of RAG at heart. After getting the required data from your client (i.e. product catalogues, previous support tickets, FAQs, internal documentation), you upload this into your knowledge base and write a prompt that makes sense for your use case. One no-code tool that does this well is MyAskAI. The beauty of it, especially for building external chatbots, is the ability to quickly ingest entire websites into your knowledge base via a sitemap, and to bulk upload files. Essentially, they've covered the grunt work that would otherwise be required to do this manually. Finally, you can create an inline or chat widget on your client's website with a few lines of HTML, or alternatively integrate it with a Slack/Teams chatbot (if you are going for an internal Q&A chatbot approach). Other tools you could use include Botpress and Voiceflow, however these are less for RAG and more for building out complete chatbot flows that may or may not incorporate LLMs. Both apps are essentially GUIs that eliminate the pain and tears of trying to implement complex flows manually, and both natively incorporate AI intents and a knowledge base feature.

Ex. Solution 2: Internal Apps
Similar to the first example, except we go beyond just chatbots to tools such as report generation and really any sort of internal tool or automation that may incorporate LLMs. For instance, you can have a tool that automatically generates replies to inbound emails based on your client's knowledge base. Or an automation that does the same thing but for replies to Instagram comments. Another example could be a tool that generates a description and screenshot based on a URL (useful for directory sites, made one for my own :P). Getting into more advanced implementations of LLMs, we can have tools that generate entire drafts of reports (think 80+ pages), based not only on data from a knowledge base but also the writing style, format, and author voice of previous reports. One good tool to create content generation panels for your clients would be MindStudio. You can train LLMs via prompt engineering in a structured way with your own data to essentially fine-tune them for whatever text you need them to generate. Furthermore, it has a GUI where you can dictate the entire AI flow. You can also upload data sources in multiple formats, including PDF, CSV, and Docx. For automations that require interactions between multiple apps, I recommend the OG Zapier/Make.com if you want a no-code solution. For instance, for the automatic email reply generator, I can have a trigger such that when an email is received, a custom AI reply is generated by MyAskAI, and finally a draft is created in my email client. Or, for an automation where I create social media posts on multiple platforms based on an RSS feed (news feed), I can implement this directly in Zapier with their native GPT action (see screenshot). As for more complex LLM flows that may require multiple layers of LLMs, data sources, and APIs working together to generate a single response, i.e. a long-form 100-page report, I would recommend tools such as Stack AI or Flowise (open-source alternative) to build these solutions out. Essentially, you get most of the functions and features of Python packages such as Langchain and LlamaIndex in a GUI. See screenshot for an example of a flow.

How the hell are you supposed to find clients?
With all that being said, none of this matters if you can't find anyone to sell to. You will have to do cold sales, one way or the other, especially if you are brand new to the game. And what better way to sell your AI services than with AI itself?
If we want to integrate AI into the cold outreach process, first we must identify what it's good at doing, and that's obviously writing a bunch of text in a short amount of time. Similar to the solutions that an AAA can build for its clients, we can take advantage of the same principles in our own sales processes.

How to do outreach
Once you've identified your niche and their pain points/opportunities for automation, you want to craft a compelling message that you can send via cold email and cold calls to get prospects booked on demos/consultations. I won't get into too much detail in terms of exactly how to write emails or calling scripts, as there are millions of resources to help with this, but I will tell you a few key points you want to keep in mind when doing outreach for your AAA. First, you want to keep in mind that many businesses are still hesitant about AI and may not understand what it really is or how it can benefit their operations. However, we can take advantage of how mass media has been reporting on AI this past year- at the very least people are AWARE that sooner or later they may have to implement AI into their businesses to stay competitive. We want to frame our message in a way that introduces generative AI as a technology that can have a direct, tangible, and positive impact on their business. Although it may be hard to quantify, I like to include estimates of man-hours saved or costs saved, at least in my final proposals to prospects. Times are TOUGH right now, and money is expensive, so you need to have a compelling reason for businesses to get on board. Once you've gotten your messaging down, you will want to create a list of prospects to contact. Tools you can use to find prospects include Apollo.io, reply.io, ZoomInfo (expensive af), and LinkedIn Sales Navigator. What specific job titles, etc. to target will depend on your niche, but for smaller companies this will tend to be the owner. For white-collar niches, i.e. law, the professional who will be directly benefiting from the tool (i.e. partners) may be better to contact. And for larger organizations you may want to target business improvement and digital transformation leads/directors- these are the people directly in charge of projects like what you may be proposing. Okay- so you have your message, and your list, and now all it comes down to is getting the good word out. I won't be going into the details of how to send these out; a quick Google search will give you hundreds of resources for cold outreach methods. However, personalization is key, and beyond simple dynamic variables you want to make sure you can either personalize your email campaigns directly with AI (SmartWriter.ai is an example of a tool that can do this), or at the very least have the ability to import email messages programmatically. Alternatively, ask ChatGPT to make you a Python script that can take in a list of emails, scrape info based on their LinkedIn URL or website, and pass this all to a GPT prompt that specifies your messaging to generate an email. From there, send away.
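For reference, the script you'd get back looks something like the sketch below. The CSV columns, the website-scraping step, and the prompt wording are all illustrative assumptions; you would adapt them to your own prospect list and messaging.

```python
# Rough sketch of the outreach script described above: read a prospect list,
# pull a bit of public context from each prospect's website, and have GPT
# draft a personalized email.
import csv

import requests
from bs4 import BeautifulSoup
from openai import OpenAI

client = OpenAI()


def page_text(url: str, limit: int = 2000) -> str:
    """Grab a trimmed chunk of visible text from a prospect's site for context."""
    html = requests.get(url, timeout=10).text
    return BeautifulSoup(html, "html.parser").get_text(" ", strip=True)[:limit]


with open("prospects.csv") as f:  # assumed columns: name, email, website
    for row in csv.DictReader(f):
        context = page_text(row["website"])
        prompt = (f"Write a short, friendly cold email to {row['name']} about how "
                  f"AI automation could help their business. Use this context about "
                  f"them where relevant:\n{context}")
        email = client.chat.completions.create(
            model="gpt-4o", messages=[{"role": "user", "content": prompt}]
        ).choices[0].message.content
        print(row["email"], email, sep="\n", end="\n\n")
```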
How tf do I close?
Once you've got some prospects booked in on your meetings, you will need to close deals with them to turn them into clients.

Call #1: Consultation
Tying back to when I mentioned you want to take a consultant-first approach, you will want to listen closely to their goals and needs and understand their pain points. This would be the first call, and typically I would provide a high-level overview of different solutions we could build to tackle these. It really helps to have a presentation available, so you can graphically demonstrate key points and key technologies. I like to use Plus AI for this; it's basically a Google Slides add-on that can generate slide decks for you. I copy and paste my default company messaging, add some key points for the presentation, and it comes out with pretty decent slides.

Call #2: Demo
The second call would involve a demo of one of these solutions, and typically I'll quickly prototype it with boilerplate code I already have, otherwise I'll cook something up in a no-code tool. If you have a niche where one type of solution is commonly demanded, it helps to have a general demo set up to be able to handle a larger volume of calls, so you aren't burning yourself out. I'll also elaborate on what the final product would look like in comparison to the demo.

Call #3 and Beyond
Once the initial consultation and demo are complete, you will want to alleviate any remaining concerns from your prospects and work with them to reach a final work proposal. It's crucial you lay out exactly what you will be building (in writing) and ensure the prospect understands this. Furthermore, be clear and transparent with timelines and communication methods for the project. In terms of pricing, you want to take a value-based approach. The same solution may be worth a lot more to client A than to client B. Furthermore, you can create "add-ons" such as monthly maintenance/upgrade packages, training sessions for employees, and so forth, separate from the initial setup fee you would charge.

How you can incorporate AI into marketing your business
Beyond cold sales, I highly recommend creating a funnel to capture warm leads. For instance, I do this currently with my AI tools directory, which links directly to my AI agency and has consistent branding throughout. Warm leads are much more likely to close (and honestly, much nicer to deal with). However, even without an AI-related website, at the very least you will want to create a presence on social media and the web in general. As with any agency, you will want a basic professional presence. A professional virtual address helps, in addition to a Google Business Profile (GBP) and Trustpilot page; these (especially the GBP, for local SEO) also help improve the look of your search results immensely. For GBP, I recommend using ProfilePro, which is a Chrome extension you can use to automate SEO work for your GBP. Aside from SEO-optimized business descriptions based on your business, it can handle Q&A answers, responses, updates, and service descriptions based on local keywords.

Privacy and Legal Concerns of the AAA Model
Aside from typical concerns for agencies relating to service contracts, there are a few issues (especially when using no-code tools) that will need to be addressed to run a successful AAA. Most of these surround privacy concerns when working with proprietary data. In your terms with your client, you will want to clearly define the hosting providers and any third-party tools you will be using to build their solution, plus a DPA with these third parties listed as subprocessors if necessary. In addition, you will want to implement best practices like redacting private information from data being used for building solutions.
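As a bare-bones illustration of what that redaction step can look like before client data is handed to any third-party tool, here is a small sketch. The patterns are just examples and real projects would need far more thorough PII detection (names, addresses, account numbers, and so on).

```python
# Illustrative example of basic redaction before client data is sent to any
# third-party AI tool; the patterns here are minimal and would need to be
# extended for real use.
import re

PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}


def redact(text: str) -> str:
    """Replace matches of each pattern with a labeled placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text


print(redact("Reach me at jane.doe@example.com or 415-555-0142."))
```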
In terms of addressing concerns directly from clients, it helps if you host your solutions on their own servers (not possible with all AI tools), and address the fact that only ChatGPT queries in the web app, not OpenAI API calls, will be used to train OpenAI's models (as reported by mainstream media). The key here is to be open and transparent with your clients about ALL the tools you are using, where their data will be going, and make sure to get this all in writing.

have fun, and keep an open mind
Before I finish this post, I just want to reiterate the fact that this is NOT an easy way to make money. Running an AI agency will require hours and hours of dedication and work, and constantly rearranging your schedule to meet prospect and client needs. However, if you are looking for a new business to run, have a knack for understanding business operations, and are genuinely interested in the practical applications of generative AI, then I say go for it. The time is ticking before AAA becomes the new dropshipping or SMMA, and I'm a firm believer that those who set foot first and establish themselves in this field will come out on top. And remember, while 100 thousand people may read this post, only 2 may actually take initiative and start.

Only 2 months of cash in the Bank for my business but was able to save it with the help of AI.
reddit
LLM Vibe Score0
Human Vibe Score1
CALLIRDAN90This week

Only 2 months of cash in the Bank for my business but was able to save it with the help of AI.

Hi there! I’m excited to share something very personal with you. We needed to book at least 2 appointments per day in the next 60 days, or my business would fail. We were already trying two acquisition channels, LinkedIn and email. The problem with these channels was that the positive response rate was very low in both. So I decided to focus on LinkedIn and get the attention of the lead by sending videos directly to them via LinkedIn messages. (You can send videos to your connections on LinkedIn if you use your cell phone.) This wasn’t new, but I added a small twist to get the lead’s attention. All the covers of the videos had a picture of me holding a sign with the person’s name and an interesting phrase. This showed some okay results, but the rest of the video was not personalized. Only the picture on the cover was. I even developed a Chrome extension for this because I thought this would be the answer and that I would book tons of appointments.  But after more trial and outreach, my leads responded, telling me that because the video itself wasn’t personalized for them, they felt like I didn’t put enough effort in, so they would not book a call with me. So after investing time and effort into my “new bright idea” and getting developers to make the Chrome extension, I was back to square one with no results. A few weeks went by, and after researching online, I found an online course from a guy who promised to teach me how to book 30+ appointments per month, guaranteed (at the time, I was making 2 or 3 appointments per week, maximum). He promised that I would only pay if he actually booked appointments for me and even offered to give me money if his course didn’t work for me. I never paid attention to internet gurus, but the offer was actually not bad, so I looked into this guy’s website. I found out he had hundreds of reviews from people who had taken his course and were talking amazing things about it. The more I read, the more excited I got. I booked a call that day and talked to a salesperson. The call was very short, and he promised I would get at least 2 appointments per day, easily. He seemed a bit cocky and told me that I just needed to trust him and the 100+ reviews from people who had taken the course. He didn’t share details, a proposal, or anything. I asked the price, and he told me it was close to $10k. (Not kidding, this was the price.) Then he told me that I would make the money back in no time with the clients I would get following his course, and that if it didn’t work, he would give me the money back. But I needed to follow everything the course said for at least 6 months. I had never paid $10k for anything in my life; it was extremely expensive for me. Also, my salary from my business was not in dollars but in a currency that was worth much less than the dollar. I continued to research more and more, but no other course was close to the number of reviews and promises that this guy had. I got desperate and told myself that I would bet everything on this course. If it worked for so many others, surely it would work for me. I got a loan from the bank and paid for the course. You might read this and think it was the most stupid thing ever, but the reality is that after 2 months in the course (I did the course as fast as I could), I learned a lot. The course was not bad; it was very extensive—probably more than 200 hours or so—and they taught a lot of things. I don’t think it was worth $10k for me, but I can see how for other people it might be worth that. 
Now, to the question you’re all thinking: did it get me the 2 appointments I needed per day? The answer is no. Here’s the thing: most of the techniques they taught were innovative and disruptive, but the focus was always on personalization, and they didn’t teach any way to automate the personalization. (I think, at the time they made the course, the tools didn’t exist yet.) So they taught how to do everything manually, and it took a lot—a lot of time and effort. And most annoyingly: an incredible amount of time doing operational things. I did get 2 appointments on some days, but it wasn’t consistent, and I didn’t have the time to spend 14 hours a day doing everything manually or the money to hire someone to do this for me. (I needed to also spend time delivering our service to our current clients; otherwise, they would leave.) I told them this, and they were very reasonable. After some negotiation, they gave me part of the money back. (To be fair, there was a lot of value in the course, so asking for the full $10k back would have been excessive because, in the end, it really taught me a lot of things I didn’t know.) So in the end, I spent $10k and 200+ hours on an online course, spent time and effort developing a Chrome extension, and was still not able to hit the meetings I needed. Money in the business was running out, and I needed to do something fast, or I was doomed. After investing time and effort in tools, research, and spending $10k and over 200 hours on a course that didn’t deliver the consistent results I needed, I was at a crossroads. My businesses were running out of money, and I knew I needed to find a solution quickly, or everything I had worked for would collapse. It was during this time of desperation that I started exploring other options. One night, while scrolling through the internet, I stumbled upon a 2024 article about how AI was being used to revolutionize various industries. It wasn’t directly related to appointment booking, but it sparked an idea in my mind. What if I could use AI to automate the personalization process that I had learned in the course? It seemed like a long shot, but I had nothing to lose. I started researching AI tools and technologies—YouTube videos, podcasts, pretty much everything related to AI—desperate to find something that could help me scale my outreach without investing too much time, while still maintaining the personalization that was so important. After a lot of trial and error, I found a few tools that showed promise. All of these tools were extremely new. Some of them had just launched the versions I needed just weeks ago. I can say I researched and tested more than 50 AI startups, experimenting with them, testing different approaches, checking prices (the problem was that most of them were cheap but became very expensive when applying the volume I needed to get results), and gradually refining my process. It wasn’t an overnight success, but for the first time, I felt like I was onto something that could truly work. The idea of combining AI personalization with volume was something new, and it gave me hope that I could finally book the meetings I needed without burning out. One day, I sent a video of myself talking—completely AI-generated—to my family chat group and waited for their response. None of them noticed it wasn’t actually me. At that moment, I said to myself: “Okay, I am ready to test this in the real world and see if it works.” Like everything in life, focus is key. 
As I mentioned earlier, we were already trying outbound strategies on LinkedIn and email, but I decided to narrow my focus to LinkedIn and specifically to video outreach. My goal was to stand out from the crowd, where most people were using text or sending generic videos. I knew that if my videos were 100% personalized, it would make a strong impression on my leads. I focused on two key metrics during my tests:
Time spent on manual personalized outreach vs. AI-generated personalized outreach.
Positive reply rate for non-personalized manual outreach vs. AI-generated personalized outreach.
I ran a test using a sample of 50 one-minute videos sent to 50 leads, and here are the results:

Time Spent to Make the Videos:
Manual Process: It took me up to 10 hours to create and send 50 personalized videos. This included looking good on camera, brushing my hair, choosing appropriate clothing, ensuring proper lighting, not messing up the script, using a camera holder, recharging the phone, pausing to drink water, avoiding external sounds, being in an appropriate room, downloading the videos, deleting the videos that were not good, and sending the final ones. On average, it took me at least 12.5 minutes per one-minute video.
AI Process: With AI, it took me just 32 seconds to create the exact same one-minute personalized video—without saying a word or recording a second of footage. In total, I could make and send the same 50 personalized videos in just 27 minutes.
Result: The AI process was 24 times faster. Completely crazy!

Positive Reply Rate:
Non-Personalized Script (Manual): Using a good script without personalization (no name, job title, city, company, etc.) resulted in a positive reply rate of 4-6% on LinkedIn, including follow-ups.
Personalized Script (AI): Using the same script but adding personalized details like the lead's name, company, city, and job title resulted in a positive reply rate of 15-20%, including follow-ups.
Result: AI personalization led to 3x (three times) more replies.

The best part was the responses. Almost everyone who replied thanked me for taking the time to research them, congratulated me on my speech, and appreciated the personalization and eloquence of my message. These metrics were a complete breakthrough for me. I researched online to see if anyone else had done something similar, but I couldn’t find anything close. After achieving these metrics, booking the two appointments I desperately needed became easy. In fact, in the last 10 weeks, I’ve been able to consistently book 3-4 appointments per day. This success allowed me to train someone in my company to handle the process, freeing me up to focus on other aspects of the business and ultimately saving it. With the AI appointment machine we built, I even have free time now—time that I’ve been using to develop a methodology and tech tools that I now teach to others. I named the methodology Clip2Lead as a reference to the first Chrome extension I developed that didn’t work but ended up being the first step toward everything that followed. I’ve condensed everything I learned throughout my experiences into a simple and short FREE training where I cover the entire AI appointment booking process. This includes how to find leads, create scripts, set up follow-up sequences, generate AI videos, clone your voice, compare non-AI metrics with AI metrics, and even navigate AI safety controls.
I also offer Chrome extensions that helped me automate the process even further, so you can spend your time closing deals or focusing on other acquisition channels, while your AI machine for booking appointments runs with minimal effort from you. If you’re interested, please get in touch with me, and thank you for taking the time to read my personal story.

I run an AI automation agency (AAA). My honest overview and review of this new business model
reddit
LLM Vibe Score0
Human Vibe Score1
AI_Scout_OfficialThis week

I run an AI automation agency (AAA). My honest overview and review of this new business model

I started an AI tools directory in February, and then branched off that to start an AI automation agency (AAA) in June. So far I've come across a lot of unsustainable "ideas" to make money with AI, but at the same time a few diamonds in the rough that aren't fully tapped into yet- especially the AAA model. Thought I'd share this post to shine light into this new business model and share some ways you could potentially start your own agency, or at the very least know who you are dealing with and how to pick and choose when you (inevitably) get bombarded with cold emails from them down the line. Foreword Running an AAA does NOT involve using AI tools directly to generate and sell content directly. That ship has sailed, and unless you are happy with $5 from Fiverr every month or so, it is not a real business model. Cry me a river but generating generic art with AI and slapping it onto a T-shirt to sell on Etsy won't make you a dime. At the same time, the AAA model will NOT require you to have a deep theoretical knowledge of AI, or any academic degree, as we are more so dealing with the practical applications of generative AI and how we can implement these into different workflows and tech-stacks, rather than building AI models from the ground up. Regardless of all that, common sense and a willingness to learn will help (a shit ton), as with anything. Keep in mind - this WILL involve work and motivation as well. The mindset that AI somehow means everything can be done for you on autopilot is not the right way to approach things. The common theme of businesses I've seen who have successfully implemented AI into their operations is the willingess to work with AI in a way that augments their existing operations, rather than flat out replace a worker or team. And this is exactly the train of thought you need when working with AI as a business model. However, as the field is relatively unsaturated and hype surrounding AI is still fresh for enterprises, right now is the prime time to start something new if generative AI interests you at all. With that being said, I'll be going over three of the most successful AI-adjacent businesses I've seen over this past year, in addition to some tips and resources to point you in the right direction. so.. WTF is an AI Automation Agency? The AI automation agency (or as some YouTubers have coined it, the AAA model) at its core involves creating custom AI solutions for businesses. I have over 1500 AI tools listed in my directory, however the feedback I've received from some enterprise users is that ready-made SaaS tools are too generic to meet their specific needs. Combine this with the fact virtually no smaller companies have the time or skills required to develop custom solutions right off the bat, and you have yourself real demand. I would say in practice, the AAA model is quite similar to Wordpress and even web dev agencies, with the major difference being all solutions you develop will incorporate key aspects of AI AND automation. Which brings me to my second point- JUST AI IS NOT ENOUGH. Rather than reducing the amount of time required to complete certain tasks, I've seen many AI agencies make the mistake of recommending and (trying to) sell solutions that more likely than not increase the workload of their clients. For example, if you were to make an internal tool that has AI answer questions based on their knowledge base, but this knowledge base has to be updated manually, this is creating unnecessary work. 
As such I think one of the key components of building successful AI solutions is incorporating the new (Generative AI/LLMs) with the old (programmtic automation- think Zapier, APIs, etc.). Finally, for this business model to be successful, ideally you should target a niche in which you have already worked and understand pain points and needs. Not only does this make it much easier to get calls booked with prospects, the solutions you build will have much greater value to your clients (meaning you get paid more). A mistake I've seen many AAA operators make (and I blame this on the "Get Rich Quick" YouTubers) is focusing too much on a specific productized service, rather than really understanding the needs of businesses. The former is much done via a SaaS model, but when going the agency route the only thing that makes sense is building custom solutions. This is why I always take a consultant-first approach. You can only build once you understand what they actually need and how certain solutions may impact their operations, workflows, and bottom-line. Basics of How to Get Started Pick a niche. As I mentioned previously, preferably one that you've worked in before. Niches I know of that are actively being bombarded with cold emails include real estate, e-commerce, auto-dealerships, lawyers, and medical offices. There is a reason for this, but I will tell you straight up this business model works well if you target any white-collar service business (internal tools approach) or high volume businesses (customer facing tools approach). Setup your toolbox. If you wanted to start a pressure washing business, you would need a pressure-washer. This is no different. For those without programming knowledge, I've seen two common ways AAA get setup to build- one is having a network of on-call web developers, whether its personal contacts or simply going to Upwork or any talent sourcing agency. The second is having an arsenal of no-code tools. I'll get to this more in a second, but this works beecause at its core, when we are dealing with the practical applications of AI, the code is quite simple, simply put. Start cold sales. Unless you have a network already, this is not a step you can skip. You've already picked a niche, so all you have to do is find the right message. Keep cold emails short, sweet, but enticing- and it will help a lot if you did step 1 correctly and intimately understand who your audience is. I'll be touching base later about how you can leverage AI yourself to help you with outreach and closing. The beauty of gen AI and the AAA model You don't need to be a seasoned web developer to make this business model work. The large majority of solutions that SME clients want is best done using an API for an LLM for the actual AI aspect. The value we create with the solutions we build comes with the conceptual framework and design that not only does what they need it to but integrates smoothly with their existing tech-stack and workflow. The actual implementation is quite straightforward once you understand the high level design and know which tools you are going to use. To give you a sense, even if you plan to build out these apps yourself (say in Python) the large majority of the nitty gritty technical work has already been done for you, especially if you leverage Python libraries and packages that offer high level abstraction for LLM-related functions. For instance, calling GPT can be as little as a single line of code. 
(And there are no-code tools where these functions are simply an icon on a GUI). Aside from understanding the capabilities and limitations of these tools and frameworks, the only thing that matters is being able to put them in a way that makes sense for what you want to build. Which is why outsourcing and no-code tools both work in our case. Okay... but how TF am I suppposed to actually build out these solutions? Now the fun part. I highly recommend getting familiar with Langchain and LlamaIndex. Both are Python libraires that help a lot with the high-level LLM abstraction I mentioned previously. The two most important aspects include being able to integrate internal data sources/knowledge bases with LLMs, and have LLMs perform autonomous actions. The two most common methods respectively are RAG and output parsing. RAG (retrieval augmented Generation) If you've ever seen a tool that seemingly "trains" GPT on your own data, and wonder how it all works- well I have an answer from you. At a high level, the user query is first being fed to what's called a vector database to run vector search. Vector search basically lets you do semantic search where you are searching data based on meaning. The vector databases then retrieves the most relevant sections of text as it relates to the user query, and this text gets APPENDED to your GPT prompt to provide extra context to the AI. Further, with prompt engineering, you can limit GPT to only generate an answer if it can be found within this extra context, greatly limiting the chance of hallucination (this is where AI makes random shit up). Aside from vector databases, we can also implement RAG with other data sources and retrieval methods, for example SQL databses (via parsing the outputs of LLM's- more on this later). Autonomous Agents via Output Parsing A common need of clients has been having AI actually perform tasks, rather than simply spitting out text. For example, with autonomous agents, we can have an e-commerce chatbot do the work of a basic customer service rep (i.e. look into orders, refunds, shipping). At a high level, what's going on is that the response of the LLM is being used programmtically to determine which API to call. Keeping on with the e-commerce example, if I wanted a chatbot to check shipping status, I could have a LLM response within my app (not shown to the user) with a prompt that outputs a random hash or string, and programmatically I can determine which API call to make based on this hash/string. And using the same fundamental concept as with RAG, I can append the the API response to a final prompt that would spit out the answer for the user. How No Code Tools Can Fit In (With some example solutions you can build) With that being said, you don't necessarily need to do all of the above by coding yourself, with Python libraries or otherwise. However, I will say that having that high level overview will help IMMENSELY when it comes to using no-code tools to do the actual work for you. Regardless, here are a few common solutions you might build for clients as well as some no-code tools you can use to build them out. Ex. Solution 1: AI Chatbots for SMEs (Small and Medium Enterprises) This involves creating chatbots that handle user queries, lead gen, and so forth with AI, and will use the principles of RAG at heart. After getting the required data from your client (i.e. 
product catalogues, previous support tickets, FAQ, internal documentation), you upload this into your knowledge base and write a prompt that makes sense for your use case. One no-code tool that does this well is MyAskAI. The beauty of it especially for building external chatbots is the ability to quickly ingest entire websites into your knowledge base via a sitemap, and bulk uploading files. Essentially, they've covered the entire grunt work required to do this manually. Finally, you can create a inline or chat widget on your client's website with a few lines of HTML, or altneratively integrate it with a Slack/Teams chatbot (if you are going for an internal Q&A chatbot approach). Other tools you could use include Botpress and Voiceflow, however these are less for RAG and more for building out complete chatbot flows that may or may not incorporate LLMs. Both apps are essentially GUIs that eliminate the pain and tears and trying to implement complex flows manually, and both natively incoporate AI intents and a knowledge base feature. Ex. Solution 2: Internal Apps Similar to the first example, except we go beyond making just chatbots but tools such as report generation and really any sort of internal tool or automations that may incorporate LLM's. For instance, you can have a tool that automatically generates replies to inbound emails based on your client's knowledge base. Or an automation that does the same thing but for replies to Instagram comments. Another example could be a tool that generates a description and screeenshot based on a URL (useful for directory sites, made one for my own :P). Getting into more advanced implementations of LLMs, we can have tools that can generate entire drafts of reports (think 80+ pages), based not only on data from a knowledge base but also the writing style, format, and author voice of previous reports. One good tool to create content generation panels for your clients would be MindStudio. You can train LLM's via prompt engineering in a structured way with your own data to essentially fine tune them for whatever text you need it to generate. Furthermore, it has a GUI where you can dictate the entire AI flow. You can also upload data sources via multiple formats, including PDF, CSV, and Docx. For automations that require interactions between multiple apps, I recommend the OG zapier/make.com if you want a no-code solution. For instance, for the automatic email reply generator, I can have a trigger such that when an email is received, a custom AI reply is generated by MyAskAI, and finally a draft is created in my email client. Or, for an automation where I can create a social media posts on multiple platforms based on a RSS feed (news feed), I can implement this directly in Zapier with their native GPT action (see screenshot) As for more complex LLM flows that may require multiple layers of LLMs, data sources, and APIs working together to generate a single response i.e. a long form 100 page report, I would recommend tools such as Stack AI or Flowise (open-source alternative) to build these solutions out. Essentially, you get most of the functions and features of Python packages such as Langchain and LlamaIndex in a GUI. See screenshot for an example of a flow How the hell are you supposed to find clients? With all that being said, none of this matters if you can't find anyone to sell to. You will have to do cold sales, one way or the other, especially if you are brand new to the game. And what better way to sell your AI services than with AI itself? 
If we want to integrate AI into the cold outreach process, first we must identify what it's good at doing, and that's obviously writing a bunch of text, in a short amount of time. Similar to the solutions that an AAA can build for its clients, we can take advantage of the same principles in our own sales processes. How to do outreach Once you've identified your niche and their pain points/opportunities for automation, you want to craft a compelling message in which you can send via cold email and cold calls to get prospects booked on demos/consultations. I won't get into too much detail in terms of exactly how to write emails or calling scripts, as there are millions of resources to help with this, but I will tell you a few key points you want to keep in mind when doing outreach for your AAA. First, you want to keep in mind that many businesses are still hesitant about AI and may not understand what it really is or how it can benefit their operations. However, we can take advantage of how mass media has been reporting on AI this past year- at the very least people are AWARE that sooner or later they may have to implement AI into their businesses to stay competitive. We want to frame our message in a way that introduces generative AI as a technology that can have a direct, tangible, and positive impact on their business. Although it may be hard to quantify, I like to include estimates of man-hours saved or costs saved at least in my final proposals to prospects. Times are TOUGH right now, and money is expensive, so you need to have a compelling reason for businesses to get on board. Once you've gotten your messaging down, you will want to create a list of prospects to contact. Tools you can use to find prospects include Apollo.io, reply.io, zoominfo (expensive af), and Linkedin Sales Navigator. What specific job titles, etc. to target will depend on your niche but for smaller companies this will tend to be the owner. For white collar niches, i.e. law, the professional that will be directly benefiting from the tool (i.e. partners) may be better to contact. And for larger organizations you may want to target business improvement and digital transformation leads/directors- these are the people directly in charge of projects like what you may be proposing. Okay- so you have your message, and your list, and now all it comes down to is getting the good word out. I won't be going into the details of how to send these out, a quick Google search will give you hundreds of resources for cold outreach methods. However, personalization is key and beyond simple dynamic variables you want to make sure you can either personalize your email campaigns directly with AI (SmartWriter.ai is an example of a tool that can do this), or at the very least have the ability to import email messages programmatically. Alternatively, ask ChatGPT to make you a Python Script that can take in a list of emails, scrape info based on their linkedin URL or website, and all pass this onto a GPT prompt that specifies your messaging to generate an email. From there, send away. How tf do I close? Once you've got some prospects booked in on your meetings, you will need to close deals with them to turn them into clients. Call #1: Consultation Tying back to when I mentioned you want to take a consultant-first appraoch, you will want to listen closely to their goals and needs and understand their pain points. 
This would be the first call, and typically I would provide a high level overview of different solutions we could build to tacke these. It really helps to have a presentation available, so you can graphically demonstrate key points and key technologies. I like to use Plus AI for this, it's basically a Google Slides add-on that can generate slide decks for you. I copy and paste my default company messaging, add some key points for the presentation, and it comes out with pretty decent slides. Call #2: Demo The second call would involve a demo of one of these solutions, and typically I'll quickly prototype it with boilerplate code I already have, otherwise I'll cook something up in a no-code tool. If you have a niche where one type of solution is commonly demanded, it helps to have a general demo set up to be able to handle a larger volume of calls, so you aren't burning yourself out. I'll also elaborate on how the final product would look like in comparison to the demo. Call #3 and Beyond: Once the initial consultation and demo is complete, you will want to alleviate any remaining concerns from your prospects and work with them to reach a final work proposal. It's crucial you lay out exactly what you will be building (in writing) and ensure the prospect understands this. Furthermore, be clear and transparent with timelines and communication methods for the project. In terms of pricing, you want to take this from a value-based approach. The same solution may be worth a lot more to client A than client B. Furthermore, you can create "add-ons" such as monthly maintenance/upgrade packages, training sessions for employeees, and so forth, separate from the initial setup fee you would charge. How you can incorporate AI into marketing your businesses Beyond cold sales, I highly recommend creating a funnel to capture warm leads. For instance, I do this currently with my AI tools directory, which links directly to my AI agency and has consistent branding throughout. Warm leads are much more likely to close (and honestly, much nicer to deal with). However, even without an AI-related website, at the very least you will want to create a presence on social media and the web in general. As with any agency, you will want basic a professional presence. A professional virtual address helps, in addition to a Google Business Profile (GBP) and TrustPilot. a GBP (especially for local SEO) and Trustpilot page also helps improve the looks of your search results immensely. For GBP, I recommend using ProfilePro, which is a chrome extension you can use to automate SEO work for your GBP. Aside from SEO optimzied business descriptions based on your business, it can handle Q/A answers, responses, updates, and service descriptions based on local keywords. Privacy and Legal Concerns of the AAA Model Aside from typical concerns for agencies relating to service contracts, there are a few issues (especially when using no-code tools) that will need to be addressed to run a successful AAA. Most of these surround privacy concerns when working with proprietary data. In your terms with your client, you will want to clearly define hosting providers and any third party tools you will be using to build their solution, and a DPA with these third parties listed as subprocessors if necessary. In addition, you will want to implement best practices like redacting private information from data being used for building solutions. 
In terms of addressing concerns directly from clients, it helps if you host your solutions on their own servers (not possible with AI tools), and address the fact that only ChatGPT queries in the web app, not OpenAI API calls, are used to train OpenAI's models (as reported by mainstream media). The key here is to be open and transparent with your clients about ALL the tools you are using, where their data will be going, and make sure to get this all in writing.
Have fun, and keep an open mind
Before I finish this post, I just want to reiterate the fact that this is NOT an easy way to make money. Running an AI agency will require hours and hours of dedication and work, and constantly rearranging your schedule to meet prospect and client needs. However, if you are looking for a new business to run, have a knack for understanding business operations, and are genuinely interested in the practical applications of generative AI, then I say go for it. The clock is ticking before AAA becomes the new dropshipping or SMMA, and I'm a firm believer that those who set foot in this field first and establish themselves will come out on top. And remember, while 100 thousand people may read this post, only 2 may actually take initiative and start.

How I went from $27 to $3K as a solopreneur still in a 9-5
reddit
LLM Vibe Score0
Human Vibe Score1
jottrledThis week

How I went from $27 to $3K as a solopreneur still in a 9-5

My journey started back in November 2023. I was scrolling through Twitter and YouTube and saw a word that I had never come across before. Solopreneur. The word caught my eye. Mainly because I was pretty sure I knew what it meant even though it's not a word you'll find in the dictionary. I liked what it was describing. A solo entrepreneur. A one man business. It completely resonated with me. As a software engineer by trade I'm used to working alone, especially since the pandemic hit and we were forced to work remotely. See, I always wanted to ditch the 9-5 thing but thought that was too big and too scary for a single person to do. Surely you would need a lot of money to get started, right? Surely you would need investors? The whole concept seemed impossible to me. That was until I found all the success stories. I became obsessed with the concept of solopreneurship. As I went further down the rabbit hole I found people like Justin Welsh, Kieran Drew and Marc Louvion to name a few. All of whom have one person businesses making huge money every year. So I thought, if they can do it, why can't I? People like this have cleared the pathway for those looking to escape the 9-5 grind. I decided 2024 would be the year I try this out. My main goal for the year? Build a one man business, earn my first $ online and learn a sh\*t ton along the way. My main goal in general? Build my business to $100K per year, quit my 9-5 and live with freedom. From December 2023 to February 2024 I began brainstorming ideas. I was like a lost puppy looking for his ball. How on earth did people find good ideas? I began writing everything and anything that came to mind down in my notes app on my phone. By February I would have approximately 70 ideas. Each as weird and whacky as the other. I was skeptical though. If I went through all the trouble of building a product for one of these ideas how would I know if anyone would even be interested in using it? I got scared and took a break for a week. All these ideas seemed too big and the chance that they would take off into the atmosphere was slim (in my mind anyways). I was learning more and more about solopreneurship as the weeks went on so I decided to build a product centered around everything I was learning about. The idea was simple. Enter a business idea and use AI to give the user details about how to market it, who their target customers were, what to write on their landing page, etc. All for a measly $27 per use. I quickly built it and launched on March 3rd 2024. I posted about it on Indie Hackers, Reddit and Hacker News. I was so excited about the prospect of earning my first internet $! Surely everyone wanted to use my product! Nope...all I got was crickets. I was quickly brought back down to earth. That was until 5 days later. I looked at my phone and had a new Stripe notification! Cha-ching! My first internet $. What a feeling! That was goal number 1 complete. It would be another 6 days before I would get my second sale...and then another 15 days to get my third. It was an emotional rollercoaster. I went from feeling like quitting the 9-5 was actually possible to thinking that maybe the ups and downs aren't worth it. On one hand I had made my first internet dollar so I should my ecstatic, and don't get me wrong, I was but I wanted more. More validation that I could do this long term. By May I was starting to give up on the product. I had learned so much in the past few months about marketing, SEO, building an audience, etc. 
and I wanted to build something that I thought could have more success, so I focused on one critical thing that I had learned about. What was it? Building a product that had SEO potential. A product that I knew hundreds of people were looking for. See, this was my thinking: if I could find a keyword that people were searching for on Google hundreds/thousands of times every month, and it was easy to rank high on search engines for it, then I would go all in (in SEO land, this equates to a keyword with a low Keyword Difficulty and a search volume of at least 500). I began researching and found that the keyword "micro saas ideas" was being searched for around 600 times each month. Micro SaaS was something that really interested me. It was perfect for solopreneurs. Small software products that 1 person could build. What's not to like if you're in the game of software and solopreneurship? Researching keywords like this became like a game for me. I was hooked. I was doing it every day, finding gems that were being searched for hundreds and thousands of times every month that still had potential. That's when I came up with my next product idea. I decided to create a database of Micro SaaS ideas, all with this sort of SEO potential. See, if you can build a product that you know people are looking for, then that's all the validation you need. So I put this theory to the test. I created a database of Micro SaaS Ideas with SEO Potential and launched it in June 2024. This time it was different. I made $700 in the first week of launching. A stark contrast to my previous failed attempt at becoming the world's greatest solopreneur. Since launch I have grown the product to $3K and I couldn't be happier. I know what you're saying: $3K isn't a lot. But it's validation. It's validation that I can earn $ online. Validation that I can grow a business, and it gives me hope that one day I'll be able to quit that 9-5 grind. My plan is to keep growing the business. I expect there to be a few challenges up ahead, but I'll tackle them as I go and learn from the failures and successes. I have a newsletter where I share Micro SaaS ideas with SEO potential every week, which I'll leave below in the first comment. Feel free to come along for the ride. If not, I hope this post brings you some value. If you're thinking about starting as a solopreneur, stop thinking and start doing; you won't regret it.

Partnership revenue share uncertainty as test before any equity discussions, please help, urgent
reddit
LLM Vibe Score0
Human Vibe Score1
jayn35This week

Partnership revenue share uncertainty as test before any equity discussions, please help, urgent

Hi all, It's brand new relationship to collaborate on work and fast moving situation and i want to be fair and informed about revenue share for this work as startup, new agency, unclear still. Sorry for rushed message, its moving fast. Its starting with revenue share to test and see how things go. I contribute some things as a separate entity/consultant/marketing domain expert who designed some AI products and am able to acquire clients reliably with my marketing skills then they do all development and sales assistance. Details below please can you help with advice on contribution and revenue share thats very fair: The "partnership" non ownership (rev share is best correct?) of delivering custom AI software development solutions to smb b2b clients. As a domain expert i designed a product for myself and then others upsells and want to sell it to other biz, there is interest, its been tested as viable with my outreach which I do and now have 5 clients from last night wanting to meet or receive short video explanations before we meet (its my initial offer, a vid demo). I have designed the product or solution completely and have already developed mvp of the first product that i use myself and is immensely valuable to me. I also acquire all the clients as an client outreach/acquisition expert and perform that entire client acquisition function and marketing up until sales call where they provide assistance/ a joint tech and marketing/product domain specialist (me) sales call, still to be discussed. No dedicated sales function but they have experience. Then I partner with a great desirable professional development agency to deploy the solution and everything that entails hoping for a long-term similar arrangement that mutually beneficial and fair. They also assist with the sales process to close deals, we both contribute on the sales calls but client generation and marketing up to the sales call is my contribution. What would the fair revenue share be in a perfectly fair equal situation and what would it be if I wanted to be generous because i really want to work with them moving forward. Also what would the equity split be if a new entity was formed later to formalize partnership and the contribution remained the same. I dont know much about this or what I should be doing in my situation. As I understand searching revenue share online and a summary from perplexity I perform two of the major functions and they one so something like 30-40 them and the rest me? But if i wanted to be generous and show my appreciation for working with me on this as they are high quality and i foresee more opportunity benefits and capabilities in the future due to their expertise and know they would deliver a superb job, would 50/50 be a fair split? Or am I undervaluing/overvaluing myself,, can you not just offer the logic but advice as well based on the info you have, this is brand new and moving super fast, online info seems clear but i want mine to be super fair even generous for them so they are happy, but also not foolish or irresponsible from my side. Its all new to me. Thank you so much!

I run an AI automation agency (AAA). My honest overview and review of this new business model
reddit
LLM Vibe Score0
Human Vibe Score1
AI_Scout_OfficialThis week

I run an AI automation agency (AAA). My honest overview and review of this new business model

I started an AI tools directory in February, and then branched off that to start an AI automation agency (AAA) in June. So far I've come across a lot of unsustainable "ideas" to make money with AI, but at the same time a few diamonds in the rough that aren't fully tapped into yet, especially the AAA model. Thought I'd share this post to shine light on this new business model and share some ways you could potentially start your own agency, or at the very least know who you are dealing with and how to pick and choose when you (inevitably) get bombarded with cold emails from them down the line.
Foreword
Running an AAA does NOT involve using AI tools to directly generate and sell content. That ship has sailed, and unless you are happy with $5 from Fiverr every month or so, it is not a real business model. Cry me a river, but generating generic art with AI and slapping it onto a T-shirt to sell on Etsy won't make you a dime. At the same time, the AAA model will NOT require you to have deep theoretical knowledge of AI, or any academic degree, as we are more so dealing with the practical applications of generative AI and how we can implement these into different workflows and tech stacks, rather than building AI models from the ground up. Regardless of all that, common sense and a willingness to learn will help (a shit ton), as with anything. Keep in mind: this WILL involve work and motivation as well. The mindset that AI somehow means everything can be done for you on autopilot is not the right way to approach things. The common theme among businesses I've seen successfully implement AI into their operations is the willingness to work with AI in a way that augments their existing operations, rather than flat out replacing a worker or team. And this is exactly the train of thought you need when working with AI as a business model. However, as the field is relatively unsaturated and hype surrounding AI is still fresh for enterprises, right now is the prime time to start something new if generative AI interests you at all. With that being said, I'll be going over three of the most successful AI-adjacent businesses I've seen over this past year, in addition to some tips and resources to point you in the right direction.
So... WTF is an AI Automation Agency?
The AI automation agency (or as some YouTubers have coined it, the AAA model) at its core involves creating custom AI solutions for businesses. I have over 1500 AI tools listed in my directory; however, the feedback I've received from some enterprise users is that ready-made SaaS tools are too generic to meet their specific needs. Combine this with the fact that virtually no smaller companies have the time or skills required to develop custom solutions right off the bat, and you have yourself real demand. I would say in practice, the AAA model is quite similar to WordPress and even web dev agencies, with the major difference being that all solutions you develop will incorporate key aspects of AI AND automation. Which brings me to my second point: JUST AI IS NOT ENOUGH. Rather than reducing the amount of time required to complete certain tasks, I've seen many AI agencies make the mistake of recommending and (trying to) sell solutions that more likely than not increase the workload of their clients. For example, if you were to make an internal tool that has AI answer questions based on their knowledge base, but this knowledge base has to be updated manually, this is creating unnecessary work.
As such, I think one of the key components of building successful AI solutions is incorporating the new (generative AI/LLMs) with the old (programmatic automation: think Zapier, APIs, etc.). Finally, for this business model to be successful, ideally you should target a niche in which you have already worked and understand the pain points and needs. Not only does this make it much easier to get calls booked with prospects, but the solutions you build will also have much greater value to your clients (meaning you get paid more). A mistake I've seen many AAA operators make (and I blame this on the "Get Rich Quick" YouTubers) is focusing too much on a specific productized service, rather than really understanding the needs of businesses. The former is better suited to a SaaS model, but when going the agency route the only thing that makes sense is building custom solutions. This is why I always take a consultant-first approach. You can only build once you understand what they actually need and how certain solutions may impact their operations, workflows, and bottom line.
Basics of How to Get Started
Pick a niche. As I mentioned previously, preferably one that you've worked in before. Niches I know of that are actively being bombarded with cold emails include real estate, e-commerce, auto dealerships, lawyers, and medical offices. There is a reason for this, but I will tell you straight up this business model works well if you target any white-collar service business (internal tools approach) or high-volume businesses (customer-facing tools approach).
Set up your toolbox. If you wanted to start a pressure washing business, you would need a pressure washer. This is no different. For those without programming knowledge, I've seen two common ways AAAs get set up to build: one is having a network of on-call web developers, whether it's personal contacts or simply going to Upwork or any talent sourcing agency. The second is having an arsenal of no-code tools. I'll get to this more in a second, but this works because at its core, when we are dealing with the practical applications of AI, the code involved is, simply put, quite simple.
Start cold sales. Unless you have a network already, this is not a step you can skip. You've already picked a niche, so all you have to do is find the right message. Keep cold emails short, sweet, but enticing, and it will help a lot if you did step 1 correctly and intimately understand who your audience is. I'll be touching base later about how you can leverage AI yourself to help you with outreach and closing.
The beauty of gen AI and the AAA model
You don't need to be a seasoned web developer to make this business model work. The large majority of solutions that SME clients want are best built using an LLM API for the actual AI aspect. The value we create with the solutions we build comes from the conceptual framework and design that not only does what they need it to but integrates smoothly with their existing tech stack and workflow. The actual implementation is quite straightforward once you understand the high-level design and know which tools you are going to use. To give you a sense, even if you plan to build out these apps yourself (say, in Python), the large majority of the nitty-gritty technical work has already been done for you, especially if you leverage Python libraries and packages that offer high-level abstraction for LLM-related functions. For instance, calling GPT can be as little as a single line of code.
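To give a sense of what that single call looks like, here is a minimal sketch using the OpenAI Python SDK. The model name and exact client syntax are illustrative and depend on your SDK version; it assumes an OPENAI_API_KEY is set in your environment:

```python
# Minimal sketch: one call to an LLM via the OpenAI Python SDK.
# Assumes OPENAI_API_KEY is set in the environment; model name is illustrative.
from openai import OpenAI

client = OpenAI()
reply = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Summarize this support ticket in two sentences: ..."}],
)
print(reply.choices[0].message.content)
```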
(And there are no-code tools where these functions are simply an icon on a GUI.) Aside from understanding the capabilities and limitations of these tools and frameworks, the only thing that matters is being able to put them together in a way that makes sense for what you want to build. Which is why outsourcing and no-code tools both work in our case.
Okay... but how TF am I supposed to actually build out these solutions?
Now the fun part. I highly recommend getting familiar with LangChain and LlamaIndex. Both are Python libraries that help a lot with the high-level LLM abstraction I mentioned previously. The two most important aspects are being able to integrate internal data sources/knowledge bases with LLMs, and having LLMs perform autonomous actions. The two most common methods, respectively, are RAG and output parsing.
RAG (Retrieval Augmented Generation)
If you've ever seen a tool that seemingly "trains" GPT on your own data and wondered how it all works, well, I have an answer for you. At a high level, the user query is first fed to what's called a vector database to run a vector search. Vector search basically lets you do semantic search, where you are searching data based on meaning. The vector database then retrieves the most relevant sections of text as they relate to the user query, and this text gets APPENDED to your GPT prompt to provide extra context to the AI. Further, with prompt engineering, you can limit GPT to only generate an answer if it can be found within this extra context, greatly limiting the chance of hallucination (this is where AI makes random shit up). Aside from vector databases, we can also implement RAG with other data sources and retrieval methods, for example SQL databases (via parsing the outputs of LLMs; more on this later).
Autonomous Agents via Output Parsing
A common need of clients has been having AI actually perform tasks, rather than simply spitting out text. For example, with autonomous agents, we can have an e-commerce chatbot do the work of a basic customer service rep (i.e. look into orders, refunds, shipping). At a high level, what's going on is that the response of the LLM is being used programmatically to determine which API to call. Keeping on with the e-commerce example, if I wanted a chatbot to check shipping status, I could have an LLM response within my app (not shown to the user) generated from a prompt that outputs a predetermined hash or string, and programmatically determine which API call to make based on this hash/string. And using the same fundamental concept as with RAG, I can append the API response to a final prompt that would spit out the answer for the user.
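To ground the RAG description above, here is a stripped-down, illustrative sketch of the retrieve-then-append pattern. It keeps the "vector database" as a plain in-memory list plus OpenAI embeddings; in a real build you would swap in an actual vector store (Pinecone, Chroma, etc.), and the model names and prompt wording are assumptions rather than a fixed recipe:

```python
# Minimal RAG sketch: embed a tiny knowledge base, retrieve the closest chunk,
# and append it to the prompt so the model answers only from that context.
from openai import OpenAI

client = OpenAI()
knowledge_base = [
    "Our refund policy allows returns within 30 days of delivery.",
    "Standard shipping takes 3-5 business days within the US.",
]

def embed(text: str) -> list[float]:
    return client.embeddings.create(model="text-embedding-3-small", input=text).data[0].embedding

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = (sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5)
    return dot / norm

kb_vectors = [(chunk, embed(chunk)) for chunk in knowledge_base]

def answer(question: str) -> str:
    q_vec = embed(question)
    # "Vector search": pick the chunk whose embedding is most similar to the query.
    context = max(kb_vectors, key=lambda pair: cosine(q_vec, pair[1]))[0]
    prompt = (
        "Answer using ONLY the context below. If the answer is not in the context, "
        "say you don't know.\n\nContext: " + context + "\n\nQuestion: " + question
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini", messages=[{"role": "user", "content": prompt}]
    )
    return resp.choices[0].message.content

print(answer("How long do I have to return an item?"))
```

And a similarly minimal sketch of the output-parsing idea: the model's hidden response is treated as a routing token, and ordinary code decides which API to call. The intent names and the two handler functions are hypothetical stand-ins for a client's real order/refund APIs:

```python
# Minimal output-parsing sketch: ask the LLM to classify the request into a known
# intent, then route to a normal API call based on that string.
from openai import OpenAI

client = OpenAI()

def check_shipping_status(message: str) -> str:   # placeholder for a real API call
    return "Order #1234 is out for delivery."

def lookup_refund(message: str) -> str:           # placeholder for a real API call
    return "Your refund was issued on Tuesday."

ROUTES = {"SHIPPING": check_shipping_status, "REFUND": lookup_refund}

def route(user_message: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{
            "role": "user",
            "content": "Classify this customer message as exactly one word, "
                       "SHIPPING, REFUND, or OTHER: " + user_message,
        }],
    )
    intent = resp.choices[0].message.content.strip().upper()
    handler = ROUTES.get(intent)
    return handler(user_message) if handler else "Passing you to a human agent."

print(route("Where is my package?"))
```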
How No Code Tools Can Fit In (With some example solutions you can build)
With that being said, you don't necessarily need to do all of the above by coding yourself, with Python libraries or otherwise. However, I will say that having that high-level overview will help IMMENSELY when it comes to using no-code tools to do the actual work for you. Regardless, here are a few common solutions you might build for clients, as well as some no-code tools you can use to build them out.
Ex. Solution 1: AI Chatbots for SMEs (Small and Medium Enterprises)
This involves creating chatbots that handle user queries, lead gen, and so forth with AI, and will use the principles of RAG at heart. After getting the required data from your client (i.e. product catalogues, previous support tickets, FAQs, internal documentation), you upload this into your knowledge base and write a prompt that makes sense for your use case. One no-code tool that does this well is MyAskAI. The beauty of it, especially for building external chatbots, is the ability to quickly ingest entire websites into your knowledge base via a sitemap, and to bulk upload files. Essentially, they've covered all the grunt work you would otherwise have to do manually. Finally, you can create an inline or chat widget on your client's website with a few lines of HTML, or alternatively integrate it with a Slack/Teams chatbot (if you are going for an internal Q&A chatbot approach). Other tools you could use include Botpress and Voiceflow; however, these are less for RAG and more for building out complete chatbot flows that may or may not incorporate LLMs. Both apps are essentially GUIs that eliminate the pain and tears of trying to implement complex flows manually, and both natively incorporate AI intents and a knowledge base feature.
Ex. Solution 2: Internal Apps
Similar to the first example, except we go beyond chatbots to tools such as report generation and really any sort of internal tool or automation that may incorporate LLMs. For instance, you can have a tool that automatically generates replies to inbound emails based on your client's knowledge base. Or an automation that does the same thing but for replies to Instagram comments. Another example could be a tool that generates a description and screenshot based on a URL (useful for directory sites; made one for my own :P). Getting into more advanced implementations of LLMs, we can have tools that can generate entire drafts of reports (think 80+ pages), based not only on data from a knowledge base but also on the writing style, format, and author voice of previous reports. One good tool to create content generation panels for your clients would be MindStudio. You can train LLMs via prompt engineering in a structured way with your own data to essentially fine-tune them for whatever text you need them to generate. Furthermore, it has a GUI where you can dictate the entire AI flow. You can also upload data sources via multiple formats, including PDF, CSV, and Docx. For automations that require interactions between multiple apps, I recommend the OG Zapier/Make.com if you want a no-code solution. For instance, for the automatic email reply generator, I can have a trigger such that when an email is received, a custom AI reply is generated by MyAskAI, and finally a draft is created in my email client. Or, for an automation where I create social media posts on multiple platforms based on an RSS feed (news feed), I can implement this directly in Zapier with their native GPT action (see screenshot). As for more complex LLM flows that may require multiple layers of LLMs, data sources, and APIs working together to generate a single response, e.g. a long-form 100-page report, I would recommend tools such as Stack AI or Flowise (an open-source alternative) to build these solutions out. Essentially, you get most of the functions and features of Python packages such as LangChain and LlamaIndex in a GUI. See the screenshot for an example of a flow.
How the hell are you supposed to find clients?
With all that being said, none of this matters if you can't find anyone to sell to. You will have to do cold sales, one way or the other, especially if you are brand new to the game. And what better way to sell your AI services than with AI itself?
If we want to integrate AI into the cold outreach process, first we must identify what it's good at doing, and that's obviously writing a bunch of text in a short amount of time. Similar to the solutions that an AAA can build for its clients, we can take advantage of the same principles in our own sales processes.
How to do outreach
Once you've identified your niche and their pain points/opportunities for automation, you want to craft a compelling message that you can send via cold email and cold calls to get prospects booked on demos/consultations. I won't get into too much detail in terms of exactly how to write emails or calling scripts, as there are millions of resources to help with this, but I will tell you a few key points you want to keep in mind when doing outreach for your AAA. First, you want to keep in mind that many businesses are still hesitant about AI and may not understand what it really is or how it can benefit their operations. However, we can take advantage of how mass media has been reporting on AI this past year: at the very least people are AWARE that sooner or later they may have to implement AI into their businesses to stay competitive. We want to frame our message in a way that introduces generative AI as a technology that can have a direct, tangible, and positive impact on their business. Although it may be hard to quantify, I like to include estimates of man-hours saved or costs saved at least in my final proposals to prospects. Times are TOUGH right now, and money is expensive, so you need to have a compelling reason for businesses to get on board. Once you've gotten your messaging down, you will want to create a list of prospects to contact. Tools you can use to find prospects include Apollo.io, reply.io, ZoomInfo (expensive af), and LinkedIn Sales Navigator. What specific job titles, etc. to target will depend on your niche, but for smaller companies this will tend to be the owner. For white-collar niches, i.e. law, the professional that will be directly benefiting from the tool (i.e. partners) may be better to contact. And for larger organizations you may want to target business improvement and digital transformation leads/directors; these are the people directly in charge of projects like what you may be proposing. Okay, so you have your message and your list, and now all it comes down to is getting the good word out. I won't be going into the details of how to send these out; a quick Google search will give you hundreds of resources for cold outreach methods. However, personalization is key, and beyond simple dynamic variables you want to make sure you can either personalize your email campaigns directly with AI (SmartWriter.ai is an example of a tool that can do this), or at the very least have the ability to import email messages programmatically. Alternatively, ask ChatGPT to make you a Python script that can take in a list of emails, scrape info based on their LinkedIn URL or website, and pass all of this to a GPT prompt that specifies your messaging to generate an email. From there, send away.
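For illustration, here is roughly what that script could look like: read a CSV of prospects, grab some text from each prospect's website, and feed it into a prompt that drafts a personalized opener. The prospects.csv columns, the crude homepage fetch, and the prompt wording are all assumptions; a real version would need proper scraping, rate limiting, and human review before anything is actually sent:

```python
# Sketch: generate personalized cold-email drafts from a prospect list.
# prospects.csv is assumed to have "email" and "website" columns.
import csv
import urllib.request

from openai import OpenAI

client = OpenAI()

def fetch_site_text(url: str, limit: int = 2000) -> str:
    # Crude placeholder for real scraping/enrichment.
    with urllib.request.urlopen(url, timeout=10) as resp:
        return resp.read(limit).decode("utf-8", errors="ignore")

def draft_email(site_text: str) -> str:
    prompt = (
        "You write short, friendly cold emails for an AI automation agency. "
        "Based on this company's website text, write a 3-sentence email that "
        "mentions one concrete workflow we could automate for them:\n\n" + site_text
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini", messages=[{"role": "user", "content": prompt}]
    )
    return resp.choices[0].message.content

with open("prospects.csv", newline="") as f:
    for row in csv.DictReader(f):
        print(row["email"], "->", draft_email(fetch_site_text(row["website"])))
```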
How tf do I close?
Once you've got some prospects booked in for meetings, you will need to close deals with them to turn them into clients.
Call #1: Consultation
Tying back to when I mentioned you want to take a consultant-first approach, you will want to listen closely to their goals and needs and understand their pain points. This would be the first call, and typically I would provide a high-level overview of different solutions we could build to tackle these. It really helps to have a presentation available, so you can graphically demonstrate key points and key technologies. I like to use Plus AI for this; it's basically a Google Slides add-on that can generate slide decks for you. I copy and paste my default company messaging, add some key points for the presentation, and it comes out with pretty decent slides.
Call #2: Demo
The second call would involve a demo of one of these solutions. Typically I'll quickly prototype it with boilerplate code I already have; otherwise I'll cook something up in a no-code tool. If you have a niche where one type of solution is commonly demanded, it helps to have a general demo set up to be able to handle a larger volume of calls, so you aren't burning yourself out. I'll also elaborate on what the final product would look like in comparison to the demo.
Call #3 and Beyond:
Once the initial consultation and demo are complete, you will want to alleviate any remaining concerns from your prospects and work with them to reach a final work proposal. It's crucial you lay out exactly what you will be building (in writing) and ensure the prospect understands this. Furthermore, be clear and transparent with timelines and communication methods for the project. In terms of pricing, you want to take this from a value-based approach. The same solution may be worth a lot more to client A than client B. Furthermore, you can create "add-ons" such as monthly maintenance/upgrade packages, training sessions for employees, and so forth, separate from the initial setup fee you would charge.
How you can incorporate AI into marketing your business
Beyond cold sales, I highly recommend creating a funnel to capture warm leads. For instance, I do this currently with my AI tools directory, which links directly to my AI agency and has consistent branding throughout. Warm leads are much more likely to close (and honestly, much nicer to deal with). However, even without an AI-related website, at the very least you will want to create a presence on social media and the web in general. As with any agency, you will want a basic professional presence. A professional virtual address helps, in addition to a Google Business Profile (GBP) and Trustpilot. A GBP (especially for local SEO) and a Trustpilot page also help improve the look of your search results immensely. For GBP, I recommend using ProfilePro, which is a Chrome extension you can use to automate SEO work for your GBP. Aside from SEO-optimized business descriptions based on your business, it can handle Q&A answers, responses, updates, and service descriptions based on local keywords.
Privacy and Legal Concerns of the AAA Model
Aside from typical concerns for agencies relating to service contracts, there are a few issues (especially when using no-code tools) that will need to be addressed to run a successful AAA. Most of these surround privacy concerns when working with proprietary data. In your terms with your client, you will want to clearly define hosting providers and any third-party tools you will be using to build their solution, and include a DPA with these third parties listed as subprocessors if necessary. In addition, you will want to implement best practices like redacting private information from data being used for building solutions.
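As one concrete (and deliberately naive) example of that redaction step, a small regex pass like the following can strip obvious emails and phone numbers from client data before it is sent to any third-party API; a production setup would likely layer a proper PII/NER detection tool on top of this:

```python
# Sketch: naive PII redaction before sending client text to an external LLM API.
# Regexes only catch obvious patterns; real projects may need a dedicated PII tool.
import re

PATTERNS = {
    "[EMAIL]": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "[PHONE]": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def redact(text: str) -> str:
    for placeholder, pattern in PATTERNS.items():
        text = pattern.sub(placeholder, text)
    return text

print(redact("Contact Jane at jane.doe@acme.com or +1 (555) 010-7788 about the invoice."))
```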
In terms of addressing concerns directly from clients, it helps if you host your solutions on their own servers (not possible with AI tools), and address the fact that only ChatGPT queries in the web app, not OpenAI API calls, are used to train OpenAI's models (as reported by mainstream media). The key here is to be open and transparent with your clients about ALL the tools you are using, where their data will be going, and make sure to get this all in writing.
Have fun, and keep an open mind
Before I finish this post, I just want to reiterate the fact that this is NOT an easy way to make money. Running an AI agency will require hours and hours of dedication and work, and constantly rearranging your schedule to meet prospect and client needs. However, if you are looking for a new business to run, have a knack for understanding business operations, and are genuinely interested in the practical applications of generative AI, then I say go for it. The clock is ticking before AAA becomes the new dropshipping or SMMA, and I'm a firm believer that those who set foot in this field first and establish themselves will come out on top. And remember, while 100 thousand people may read this post, only 2 may actually take initiative and start.

As a solopreneur, here is how I'm scaling with AI and GPT-based tools
reddit
LLM Vibe Score0
Human Vibe Score1
AI_Scout_OfficialThis week

As a solopreneur, here is how I'm scaling with AI and GPT-based tools

Being a solopreneur has its fair share of challenges. Currently I've got businesses in ecommerce, agency work, and affiliate marketing, and one undeniable truth remains: to truly scale by yourself, you need more than just sheer will. That's where I feel technology, especially AI, steps in. As such, I wanted to share some AI tools that have genuinely made a difference in my own work as a solo business operator. No fluff, just tried-and-true tools and platforms that have worked for me. The ability for me to scale alone with AI tools that take advantage of GPT in one way or another has been significant and really changed my game over the past year. They bring in an element of adaptability and intelligence and work right alongside "traditional automation". Whether you're new to this or looking to optimize your current setup, I hope this post helps. FYI, I used multiple prompts with GPT-4 to draft this using my personal notes.
Plus AI (add-on for Google Slides/Docs)
I handle a lot of sales calls and demos for my AI automation agency. As I'm providing a custom service rather than a product, every client has different pain points, and as such I need to make a new slide deck each time. And making slides used to be a huge PITA and pretty much the bane of my existence until slide deck generators using GPT came out. My favorite so far has been Plus AI, which works as a plugin for Google Slides. You pretty much give it a rough idea or some key points, and it creates some slides right within Google Slides. For me, I've been pasting the website copy or any information on my client, then telling Plus AI the service I want to propose. After the slides are made, you have a lot of leeway to edit the slides again with AI, compared to other slide generators out there. With 'Remix', I can switch up layouts if something feels off, and 'Rewrite' is there to gently nudge the AI in a different direction if I ever need it to. It's definitely given me a bit of breathing space in a schedule that often feels suffocating.
echo.win (web-based app)
As a solopreneur, I'm constantly juggling roles. Managing incoming calls can be particularly challenging. Echo.win, a modern call management platform, has become a game-changer for my business. It's like having a 24/7 personal assistant. Its advanced AI understands and responds to queries in a remarkably human way, freeing up my time. A standout feature is the Scenario Builder, allowing me to create personalized conversation flows. Live transcripts and in-depth analytics help me make data-driven decisions. The platform is scalable, handling multiple simultaneous calls and improving customer satisfaction. Automatic contact updates ensure I never miss an important call. Echo.win's pricing is reasonable, offering a personalized business number, AI agents, unlimited scenarios, live transcripts, and 100 answered call minutes per month. Extra minutes are available at a nominal cost. Echo.win has revolutionized my call management. It's a comprehensive, no-code platform that ensures my customers are always heard and never missed.
MindStudio by YouAi (web app/GUI)
I work with numerous clients in my AI agency, and a recurring task is creating chatbots and demo apps tailored to their specific needs and connected to their knowledge base/data sources. Typically, I would make production builds from scratch with libraries such as LangChain/LlamaIndex; however, it's quite cumbersome to do this for free demos. As each client has unique requirements, it means I'm often creating something from scratch.
For this, I've been using MindStudio (by YouAi) to quickly come up with the first iteration of my app. It supports multiple AI models (GPT, Claude, Llama), lets you upload custom data sources via multiple formats (PDF, CSV, Excel, TXT, Docx, and HTML), allows for custom flows and rules, and lets you quickly publish your apps. If you are in their developer program, YouAi has built-in payment infrastructure to charge your users for using your app. Unlike many of the other AI builders I've tried, MindStudio basically lets me dictate every step of the AI interaction at a high level, while at the same time simplifying the behind-the-scenes work. Just like how you'd sketch an outline or jot down main points, you start with a scaffold or decide to "remix" an existing AI, and it will open up the IDE. I often find myself importing client data or specific project details, and then laying out the kind of app or chatbot I'm looking to prototype. And once you've got your prototype, you can customize the app as much as you want.
LlamaIndex (Python framework)
As mentioned before, in my AI agency I frequently create chatbots and apps for clients, tailored to their specific needs and connected to their data sources. LlamaIndex, a data framework for LLM applications, has been a game-changer in this process. It allows me to ingest, structure, and access private or domain-specific data. The major difference from LangChain is that I feel LlamaIndex does high-level abstraction much better. Where LangChain unnecessarily abstracts the simplest logic, LlamaIndex has clear benefits when it comes to integrating your data with LLMs: it comes with data connectors that ingest data from various sources and formats, data indexes that structure data for easy consumption by LLMs, and engines that provide natural language access to data. It also includes data agents (LLM-powered knowledge workers augmented by tools) and application integrations that tie LlamaIndex back into the rest of the ecosystem. LlamaIndex is user-friendly, allowing beginners to use it with just five lines of code, while advanced users can customize and extend any module to fit their needs. To be completely honest, to me it's more than a tool; at its heart it's a framework that ensures seamless integration of LLMs with data sources while allowing for complete flexibility compared to no-code tools.
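To show what that "five lines of code" starting point looks like, here is a minimal LlamaIndex sketch. Import paths move around between LlamaIndex versions, so treat it as indicative; it assumes your documents sit in a local ./data folder and an OpenAI key is configured:

```python
# Minimal LlamaIndex sketch: index a folder of documents and query it in natural language.
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex

documents = SimpleDirectoryReader("data").load_data()   # ingest ./data (PDF, docx, txt, ...)
index = VectorStoreIndex.from_documents(documents)      # build the vector index
query_engine = index.as_query_engine()                  # natural-language access to the data
print(query_engine.query("What does our onboarding checklist cover?"))
```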
GoCharlie (web app)
GoCharlie, the first AI Agent product for content creation, has been a game-changer for my business. Powered by a proprietary LLM called Charlie, it's capable of handling multi-input/multi-output tasks. GoCharlie's capabilities are vast, including content repurposing, image generation in 4K and 8K for various aspect ratios, SEO-optimized blog creation, fact-checking, web research, and stock photo and GIF pull-ins. It also offers audio transcriptions for uploaded audio/video files and YouTube URLs, web scraping capabilities, and translation. One standout feature is its multiple-input capability, where I can attach a file (like a brand brief from a client) and instruct it to create a social media campaign using brand guidelines. It considers the file, prompt, and website, and produces multiple outputs for each channel, each of which can be edited separately. Its multi-output feature allows me to write a prompt and receive a response, which can then be edited further using AI. Overall, I'm very satisfied with GoCharlie, and in my opinion it really presents itself as an effective alternative to GPT-based tools.
ProfilePro (Chrome extension)
As someone overseeing multiple Google Business Profiles (GBPs) for my various businesses, I've been using ProfilePro by Merchynt. This tool stood out with its ability to auto-generate SEO-optimized content like review responses and business updates based on minimal business input. It works as a Chrome extension and offers suggestions for responses automatically on your GBP, with multiple options for the tone it will write in. As a plus, it can generate AI images for Google posts, and offer suggestions for services and service/product descriptions. While it streamlines many GBP tasks, it still allows room for personal adjustments and refinements, offering a balance between automation and individual touch. And if you are like me and don't have dedicated SEO experience, it can handle ongoing optimization tasks to help boost visibility and drive more customers to profiles through Google Maps and Search.

Unbiased opinion - Ideas
reddit
LLM Vibe Score0
Human Vibe Score1
SnooPears4795This week

Unbiased opinion - Ideas

Hi, I'm currently looking to set something up alongside my full-time job. I'm working away, so I have spare time on midweek evenings to get cracking! If anyone has any other ideas which would link up with my interests, please let me know. Note: I set up an air conditioning company which didn't go to plan, as I was just not passionate enough to chase sales/grow the company.
Details
Capital: I could invest up to 1k a month, would prefer less
Location: would prefer remote, but the below ideas are all possible from my hotel room
Strengths: work well under pressure, technically minded, problem solving
Weaknesses: can be lazy if not passionate, organisation, confidence
Interests: music, guitars, tech, coding, beer, motorbikes
Experience: 12 years in railway electrical roles, coding bootcamp
Ideas
Idea: Guitar Electronics (pedals)
Pros: cheap to start; enjoy building; creative design work; cool field
Cons: time consuming; not much profit; scalability; competition is cheap
Idea: Project management app/document selection
Pros: experienced in field; relatively quick if Excel based; could charge subscription; contacts in industry; expensive if app based; make once, sell multiple; remote; small overheads
Cons: not as fun as others; learn new language?; limited market; other competition already good (apps)
Idea: YouTube - mysteries, interesting topics
Pros: free to start up; enjoy researching; build community leading to other online projects; can voice over/AI; no need to have a cam; improve confidence
Cons: returns will take a while; get better at video editing; overcome speaking; no overheads (have equipment); time/money slow at start
Idea: Railway Electrical Book/Course
Pros: thoroughly experienced; small market; niche - good money if I can get sales; have to learn course software; contacts in field; create once
Cons: not as passionate as other ideas; amount of interest (possibly get other fields' electricians involved?); expensive to make?

Writing an exercise-based TTRPG rulebook for a system where your real-world fitness is tied to character progression
reddit
LLM Vibe Score0
Human Vibe Score1
BezboznyThis week

Writing an exercise-based TTRPG rulebook for a system where your real-world fitness is tied to character progression

My dad was a star athlete when he was young, and my mom was a huge sci-fi/fantasy nerd, so I got both ends of the stick as it were. Love gaming and nerd culture, but also love to exercise and self improvement. Sometimes exercise can feel boring though compared to daydreaming about fantastic fictional worlds, so for a long time I've been kicking around the idea of how to "Gamify" fitness. and recently I've been working on this passion project of a Table Top RPG (Like D&D) where the stats of your character are related to your own fitness, so if you want your character in game to improve, you have to improve in the real world. Below is a rough draft you can look through that details the settings and mechanics of the game I've come up with so far. I'd love to eventually get a full book published and sell it online. maybe even starting a whole brand of "Gamified fitness": REP-SET: GAINSZ In the war torn future of 24th century… There are no rest days… In the futuristic setting of "REP-SET: GAINSZ," the "War of Gains" casts a long shadow over the Sol System as the various factions vie for territory and resources. However, war has evolved. Unmanned drones and long-range strikes have faded into obsolescence. Battles, both planet-side and in the depths of space, are now fought by soldiers piloting REP-SETs: Reactive Exoskeletal Platform - Symbiotic Evolution Trainer Massive, humanoid combat mechs. Powered by mysterious “EV” energy, these mechanical marvels amplify, and are in turn amplified by, the fitness and mental acuity of their pilots. The amplification is exponential, leading pilots into a life of constant training in order for their combat prowess to be bolstered by every incremental gain in their level of fitness. With top pilots having lifting capacity measured in tons, and reaction times measured by their Mach number, REP-SET enhanced infantry now dominate the battlefield. The Factions: The Federated Isometocracy of Terra (FIT): Quote: "The strength of the body is the strength of the spirit. Together, we will lift humanity to its destined greatness. But ask not the federation to lift for you. Ask yourself: Do you even lift for the Federation?" Description: An idealistic but authoritarian faction founded on the principle of maximizing the potential of all individuals. FIT citizens believe in relentless striving for physical and mental perfection, leading to collective excellence. Their goal is the unification of humankind under a rule guided by this doctrine, which sometimes comes at the cost of individual liberties. Mech Concept: REP-SET mechs. Versatile humanoid designs focusing on strength, endurance, and adaptability. By connecting to the AI spirit within their REP-SETs core, each pilot enhances the performance of their machine through personal willpower and peak physical training. Some high-rank REP-SETS include features customized to the pilot's strengths, visually signifying their dedication and discipline. The Dominion of Organo-Mechanical Supremacy (DOMS): Quote: "Without pain, there is no gain. Become the machine. Embrace the burn.” Description: A fanatical collective ideologically obsessed with "Ascendency through suffering" by merging their bodies with technology that not only transcends biological limitations, but also acts to constantly induce pain in it's users. Driven by a sense of ideological superiority and a thirst for domination, DOMS seek to bring the painful blessings of their deity "The lord of the Burn" to the rest of the solar system. 
Their conquest could turn them into a significant threat to humanity. Mech Concept: Hybrid mechs, where the distinction between the pilot and the machine is blurred. The cockpit functions as a life-support system for the pilot, heavily modified with augmentations. Mechs themselves are often modular, allowing for adaptation and assimilation of enemy technology. Some DOMS mechs might display disturbing elements of twisted flesh alongside cold, mechanical parts. The Tren: Quote: "Grow... bigger... feast... protein..." Description: A ravenous conglomeration of biochemically engineered muscular monstrosities, united only by a shared insatiable hunger for "More". Existing mostly in deep space, they seek organic matter to consume and assimilate. They progress in power not due to any form of training or technology, but from a constant regimen of ravenous consumption and chemically induced muscle growth, all exponentially enhanced by EV energies. While some have been known to possess a certain level of intellect and civility, their relentless hunger makes them incredibly mentally volatile. When not consuming others, the strong consume the weak within their own faction. Mech Concept: Bio-Organic horrors. While they do have massive war machines, some are living vessels built around immense creatures. These machines resemble grotesque fleshy designs that prioritize rapid mutation and growth over sleek aesthetics. Often unsettling to behold. Synthetic Intelligence Theocracy (SIT): Quote: "Failure is an unacceptable data point.” Description: A society ruled by a vast and interconnected artificial intelligence network. The SIT governs with seemingly emotionless rationality, striving for efficiency and maximum productivity. This leads to a cold, but arguably prosperous society, unless you challenge the logic of the collective AI. Their goals? Difficult to predict, as it hinges on how the AI calculates what's "optimal" for the continuation or "evolution" of existence. Mech Concept: Sleek, almost featureless robotic creations with a focus on efficient movement and energy management. Often drone-like or modular, piloted through direct mind-machine linking rather than traditional cockpits. Their aesthetic suggests cold and impersonal perfection. The Way Isolate(TWI): Quote: "The body unblemished, the mind unwavering. That is the path to true strength. That and a healthy diet of Aster-Pea proteins." Description: Known by some as "The asteroid farmers", The Way Isolate is a proud and enigmatic faction that stands apart from the other powers in the Sol System. A fiercely independent tribe bound by oaths of honor, loyalty, and hard work. Wandering the asteroid belt in their vast arc ships, their unparalleled mastery in asteroidal-agricultural engineering, ensuring they have no need to colonize planets for nutritional needs, has allowed them to abstain from the pursuit of territorial expansion in “The War of Gains”, instead focusing on inward perfection, both spiritual and physical. They eschew all technological bodily enhancements deemed unnatural, believing that true power can only be cultivated through the relentless pursuit of personal strength achieved through sheer will and bodily perfection. The Way Isolate views biohacking, genetic manipulation, and even advanced cybernetics as corruptions of the human spirit, diluting the sacredness of individual willpower. Mech Concept: Way Isolate mechs are built with maneuverability and precision in mind rather than flashy augmentations. 
Their REP-SETs are streamlined, favoring lean designs that mirror the athleticism of their pilots. Excelling in low to zero G environments, their mechs lack bulky armor, relying on evasion and maneuverability rather than brute force endurance. Weaponry leans towards traditional kinetic based armaments, perhaps employing archaic but reliable weapon styles such as blades or axes as symbols of their purity of purpose. These mechs reflect the individual prowess of their pilots, where victory is determined by focus, technique, and the raw power of honed physical ability. Base Player Character Example: You are a young, idealistic FIT soldier, barely out of training and working as a junior REP-SET mechanic on the Europa Ring World. The Miazaki district, a landscape of towering mountains and gleaming cities, houses a sprawling mountainside factory – a veritable hive of Gen 5 REP-SET construction. Here, the lines between military and civilian blur within a self-sufficient society dependent on this relentless industry. Beneath the surface, you harbor a secret. In a forgotten workshop, the ghost of a REP-SET takes shape – a unique machine built around an abandoned, enigmatic AI core. Ever since you salvaged it as a child from the wreckage of your hometown, scarred by a brutal Tren attack, you've dedicated yourself to its restoration. A lingering injury from that fateful battle mocks your progress, a constant reminder of the fitness exams you cannot pass. Yet, you train relentlessly, dreaming of the day you'll stand as a true REP-SET pilot. A hidden truth lies at the heart of the REP-SETS: as a pilot's abilities grow, their mech develops unique, almost mystical powers – a manifestation of the bond between the human spirit and the REP-SET's AI. The ache in your old wound serves as a grim prophecy. This cold war cannot last. The drums of battle grow louder with each passing day. GAME MECHANICS: The TTRPG setting of “REP-SET: GAINSZ” is marked by a unique set of rules, by which the players real world capabilities and fitness will reflect and affect the capabilities, progression, and success of their REP-SET pilot character in-game. ABILITY SCORES: Pilots' capabilities will be defined by 6 “Ability scores”: Grace, Agility, Iron, Nourishment, Strength, and Zen. Each of the 6 ability scores will duel represent both a specific area of exercise/athleticism and a specific brand of healthy habits. The definitions of these ability scores are as follows: Grace (GRC): "You are an artist, and your body is your canvas; the way you move is your paint and brush." This ability score, the domain of dancers and martial artists, represents a person's ability to move with organic, flowing control and to bring beauty to the world. Skill challenges may be called upon when the player character needs to act with poise and control, whether socially or physically. Real-world skill checks may involve martial arts drills, dancing to music, or balance exercises. Bonuses may be granted if the player has recently done something artistically creative or kind, and penalties may apply if they have recently lost their temper. This ability score affects how much NPCs like your character in game. Agility (AGI): "Your true potential is locked away, and speed is the key to unlocking it." The domain of sprinters, this ability score represents not only a person's absolute speed and reaction time but also their capacity to finish work early and avoid procrastination. 
Skill challenges may be called upon when the player character needs to make a split-second choice, move fast, or deftly dodge something dangerous. Real-world skill checks may involve acts of speed such as sprinting or punching/kicking at a steadily increasing tempo. Bonuses may apply if the player has finished work early, and penalties may apply if they are procrastinating. This ability score affects moving speed and turn order in game. Iron (IRN): "Not money, nor genetics, nor the world's greatest trainers... it is your resolve, your will to better yourself, that will make you great." Required by all athletes regardless of focus, this ability score represents a player's willpower and their capacity to push through pain, distraction, or anything else to achieve their goals. Skill challenges may be called upon when the player character needs to push through fear, doubt, or mental manipulation. Real-world skill checks may involve feats of athletic perseverance, such as planking or dead hangs from a pull-up bar. Bonuses may apply when the player maintains or creates scheduled daily routines of exercise, self-improvement, and work completion, and penalties may apply when they falter in those routines. This ability score affects the max "Dynamic exercise bonus” that can be applied to skill checks in game (a base max of +3 when Iron = 10, with an additional +1 for every 2 points of iron. So if every 20 pushups gives you +1 on a “Strength” skill check, then doing 80 pushups will only give you +4 if you have at least 12 iron). Nourishment (NRS): "A properly nourished body will last longer than a famished one." This ability score, focused on by long-distance runners, represents a player's endurance and level of nutrition. Skill challenges may be called upon when making checks that involve the player character's stamina or health. Real-world skill checks may involve endurance exercises like long-distance running. Bonuses may apply if the player has eaten healthily or consumed enough water, and penalties may apply if they have eaten junk food. This ability score affects your HP (Health points), which determines how much damage you can take before you are incapacitated. Strength (STR): "When I get down on my hands, I'm not doing pushups, I'm bench-pressing the planet." The domain of powerlifters and strongmen, this ability score represents raw physical might and the ability to overcome obstacles. Skill challenges may be called upon when the player character needs to lift, push, or break something. Real-world skill checks might involve weightlifting exercises, feats of grip strength, or core stability tests. Bonuses may apply for consuming protein-rich foods or getting a good night's sleep, and penalties may apply after staying up late or indulging in excessive stimulants. This ability score affects your carrying capacity and base attack damage in game. Zen (ZEN): "Clarity of mind reflects clarity of purpose. Still the waters within to act decisively without." This ability score, prized by meditators and yogis, represents mental focus, clarity, and inner peace. Skill challenges may be called upon when the player character needs to resist distractions, see through illusions, or make difficult decisions under pressure. Real-world skill checks may involve meditation, breathing exercises, or mindfulness activities. Bonuses may apply after attending a yoga class, spending time in nature, or creating a calm and organized living space. 
Penalties may apply after experiencing significant stress, emotional turmoil, or having an unclean or unorganized living space. This ability score affects your amount of ZP in-game (Zen Points: the pool of energy you pull from to use mystical abilities). Determining initial player ability scores: Initially, “Ability scores” are decided during character creation by giving the player a list of 6 fitness tests to gauge their level of fitness in each category. Running each test through a specific calculation will output an ability score. A score of 10 represents the average person, a score of 20 represents a peak athlete in their category. The tests are: Grace: Timed balancing on one leg with eyes closed (10 seconds is average, 60 is peak) Agility: Mile run time in minutes and seconds (10:00 is average, 3:47 is peak) Iron: Timed dead-hang from a pull-up bar (30 seconds is average, 160 is peak) Nourishment: Miles run in an hour (4 is average, 12 is peak) Strength: Pushups in 2 minutes (34 is average, 100 is peak) Zen: Leg stretch in degrees (80 is average, and 180 aka "The splits" is peak) Initial Score Calculation Formula: Ability Score = 10 + (Player Test Score - Average Score) / (Peak Score - Average Score) * 10 Example: if the player does 58 pushups in 2 minutes, their Strength would be: 10 plus (58 - 34) divided by (100 - 34) multiplied by 10 = 10 + (24)/(66) * 10 = 10 + 3.6363... = 13.6363, rounded to the nearest whole number = Strength (STR): 14 SKILLS AND SKILL CHALLENGES: The core mechanic of the game lies in how skill challenges are resolved. All “Skill challenges” have a numerical challenge rating (CR) that must be met or beaten by the sum of a 10-sided die roll and your score in the pertinent skill. Skill scores are determined by 2 factors: Ability Score Bonus: Every 2 points above 10 gives +1 bonus point (EX: 12 = +1, 14 = +2, etc.). This also means that if you have less than 10 in an ability score, you will get negative points. Personal Best Bonus: Each skill has its own unique associated exercise that can be measured (time, speed, distance, number of reps, etc.). A higher record means a higher bonus. EX: Authority skill checks are associated with a timed “Lateral raise hold”. Every 30 seconds of your personal-best single attempt offers a +1 bonus. So if you can do a lateral hold for 90 seconds, that’s a +3 to your Authority check! And if you have a 16 in Iron, and your personal best lateral raise hold is 90 seconds, that would give you an Authority score of +6 (T-Pose for dominance!). Dynamic Exercise Bonus: This is where the unique mechanics of the game kick in. At any time during a skill challenge (even after your roll) you can add an additional modifier to the skill check by completing the exercise during gameplay! Did you roll just below the threshold for success? Crank out another 20 pushups, squats, or curls to push yourself just over the edge into success! 
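The character-creation arithmetic above is easy to script. Below is a minimal sketch, assuming the rounding shown in the pushup example; the function name is illustrative and not part of the rules text. Note that timed events where lower is better (like the mile run) still land above 10 for a better-than-average result, because the numerator and the denominator flip sign together.

```python
def ability_score(test_result: float, average: float, peak: float) -> int:
    """Initial ability score: 10 + (test - average) / (peak - average) * 10,
    rounded to the nearest whole number (10 = average person, 20 = peak athlete)."""
    return round(10 + (test_result - average) / (peak - average) * 10)

# Strength example from the text: 58 pushups in 2 minutes (average 34, peak 100) -> 14
print(ability_score(58, average=34, peak=100))        # 14

# A lower-is-better test still works: an 8:30 mile (8.5 minutes) lands above average.
print(ability_score(8.5, average=10.0, peak=3.7833))  # 12
```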
There are 18 skills total, each with its own associated ability score and unique exercise. Grace (GRC): Kinesthesia (Timed: blind single-leg stand), Precision (Scored: basket throws), Charm (Timed reps: standing repeated forward dumbbell chest press and thrust), Stealth (Timed distance: leopard crawl). Agility (AGI): Acrobatics (Timed reps: high kicks), Computers (Words per minute: typing test), Speed (Time: 100-meter sprint). Iron (IRN): Authority (Timed: lateral raise hold), Resist (Timed: plank), Persist (Timed: pull-up bar dead hang). Nourishment (NRS): Recovery (TBD), Stim crafting (TBD), Survival (TBD). Strength (STR): Mechanics (Timed reps: alternating curls), Might (Timed reps: pushups). Zen (ZEN): Perceive (TBD), Empathy (TBD), Harmony (TBD), Lore (TBD). Healthy Habits Bonus: Being able to demonstrate that you have practiced healthy habits during gameplay can also add one-time bonuses per skill challenge: "Drank a glass of water, +1 to Nourishment check", "Cleaned your room, +3 on Zen check". But watch out: if you're caught in unhealthy habits, the GM can throw in penalties, "Ate junk food, -1 to Nourishment check", etc. Bonuses and penalties from in-game items, equipment, buffs, debuffs, etc. help players immerse themselves in the mechanics of the world of REP-SET and enjoy the thrill of constantly finding ways to improve their character. Gradient success: The result of a skill challenge can be a simple pass or fail, but it can also fall on a sliding scale of success. Are you racing to the battlefield? Depending on your Speed check, you might arrive early and have a tactical advantage, just in time for an even fight, or far too late, after some of your favorite allied NPCs have paid the price… So you're often encouraged to stack on those dynamic exercise bonuses when you can, to get the most fortuitous outcomes available to you. Gameplay sample: GM: Your REP-SET is a phantom, a streak of light against the vast hull of the warship. Enemy fighters buzz angrily, but you weave and dodge with uncanny precision. The energy wave might be losing effectiveness, but your agility and connection to the machine have never been stronger. Then, it happens. A gap in the defenses. A vulnerable seam in the warship's armor. Your coms agent's keen eye spots it instantly. "Lower power junction, starboard side! You have an opening!" This is your chance to strike the decisive blow. But how? It'll take a perfect combination of skill and strategy, drawing upon your various strengths. Here are your options: Option 1: Brute Strength: Channel all remaining power into a single, overwhelming blast from the core. High-risk, high-reward. It could overload the REP-SET if you fail, but it might also cripple the warship. (Strength-focused, Might sub-skill) Option 2: Calculated Strike: With surgical precision, target the power junction with a pinpoint burst of destabilizing energy. Less flashy and ultimately less damaging, but potentially more effective at temporarily disabling the ship. (Agility-focused, Precision sub-skill) Option 3: Harmonic Disruption: Attempt to harmonize with your REP-SET's AI spirit for help in connecting to the digital systems of the warship. Can you generate an internal energy resonance within the warship, causing it to malfunction from within? (Zen-focused, Harmony sub-skill) Player: I'll take option 1, brute strength! GM: Ok, this will be a "Might" check. The CR is going to be very high on this one. I'm setting it at a 20. What's your Might bonus? Player: Dang, a 20?? That's literally impossible. 
My Might is 15 and I've got a PB of 65 pushups in 2 minutes, so that sets me at a +5. Even if I roll a 10 and do 60 pushups for the DE, I'll only get 18 max. GM: Hey, I told you it was high risk. You want to choose another option? Player: No, no. This is what my character would do. I'm a real hot-blooded meathead for sure. GM: Ok then, roll a D10 and add your bonus. Player: (rolls) A 9! Not bad, actually that's a really good roll. So +5, that's a 14. GM: Alright, would you like to add a dynamic exercise bonus? Player: Duh, it's not like I can do the 120 pushups I'd need to beat the CR, but I can at least do better than 14. Alright, here goes. (The player gets down to do pushups and the 2-minute timer begins. After some time...) Player: 65....... 66! GM: Time's up. Player: Ow... my arms... GM: So with 66, that's an extra +3, and it's a new PB, so that's a +1. That sets your roll to 18. Player: Ow... Frack... still not 20... for a second there I really believed I could do 120 pushups... well, I did my best... Ow... a 20 CR is just too impossible, you jerk... GM: Hmm... Tell me, what did you eat for lunch today? Player: Me? I made some vegetable and pork soup, and a protein shake. I recorded it all in my diet app. GM: And how did you sleep last night? Player: Like a baby, went to sleep early, woke up at 6. GM: In that case, you can add a +1 "Protein bonus" and a +1 "Healthy rest" bonus to any strength-related check for the day if you'd like, including this one. Player: Really?? Heck yes! Add it to the roll! GM: With those extra bonuses, your roll reaches 20. How do you want to do this? Player: I roar "For Terra!" and pour every last ounce of my strength into the REP-SET. GM: "For Terra!" you roar, your cry echoing through the coms systems of the REP-SET. The core flares blindingly bright. The surge of power dwarfs anything the REP-SET has unleashed before. With a titanic shriek that cracks the very fabric of space, the REP-SET slams into the vulnerable power junction. Raw energy explodes outwards, tendrils of light arcing across the warship's massive hull. The impact is staggering. The leviathan-like warship buckles, its sleek form rippling with shockwaves. Sparks shower like rain, secondary explosions erupt as critical systems overload. Then…silence. The warship goes dark. Power flickers within the REP-SET itself, then steadies. Alarms fade, replaced by the eerie quiet of damaged but functional systems. "We…did it?" The coms agent's voice is incredulous, tinged with relief. She's awaiting your reply. Player: "I guess so." I say, and I smile and laugh. And then I slump back... and fall unconscious. (To the other players) I'm not doing any more skill checks for a while, guys, come pick me up please. (Teammates cheer)
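For readers who want the resolution math in one place, here is a minimal sketch of a skill check as described above, assuming the +1-per-30-seconds rate quoted for the Authority/lateral-raise example; the function names and the generic pb_unit parameter are illustrative only, not part of the rules text.

```python
import random

def skill_bonus(ability_score: int, personal_best: float, pb_unit: float) -> int:
    """Ability modifier (+1 per 2 points above 10, negative below 10)
    plus the personal-best bonus (+1 per pb_unit of the skill's exercise)."""
    ability_mod = (ability_score - 10) // 2
    return ability_mod + int(personal_best // pb_unit)

def resolve_check(cr: int, bonus: int, dynamic_bonus: int = 0, habit_bonus: int = 0) -> bool:
    """Roll a d10, add every bonus, and compare the total to the challenge rating."""
    roll = random.randint(1, 10)
    return roll + bonus + dynamic_bonus + habit_bonus >= cr

# Authority example from the text: Iron 16 (+3) and a 90-second lateral raise hold
# at +1 per 30 seconds (+3) give an Authority bonus of +6.
print(skill_bonus(16, personal_best=90, pb_unit=30))  # 6
```

In the gameplay sample, the roll of 9 plus the +5 skill bonus, +3 dynamic exercise bonus, +1 new-personal-best bonus, and +2 in healthy-habit bonuses is exactly what reaches the CR of 20.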

Founder Pitch: AI Agent for Simplifying Public Cloud Management
reddit
LLM Vibe Score0
Human Vibe Score1
rasvi786This week

Founder Pitch: AI Agent for Simplifying Public Cloud Management

Video to understand: https://youtu.be/9ocUjlUrU\w?si=S0ETDbKSdJqlVDyg Are You Ready to Redefine Cloud Management with AI? Imagine an intelligent AI agent that transforms the complexity of managing public cloud infrastructure into simple, natural language commands. No more navigating through endless configurations or deciphering technical documentation—our AI agent is here to revolutionize the way organizations interact with cloud platforms. About the Project We’re building an AI-powered agent designed to handle public cloud management tasks seamlessly. Whether you’re setting up your organization’s cloud foundation or deploying complex workloads, this AI agent makes it as easy as having a conversation. What Can the AI Agent Do? Cloud Foundation Setup: Example: “Please set up a cloud foundation blueprint for my organization on Google Cloud.” The AI agent will ask key questions (e.g., organization ID) and guide you through authentication. Once authorized, it sets up the foundation using GCP APIs. Workload Deployment: Example: “Spin up a GKE cluster for me.” The agent will ask for necessary details (e.g., number of nodes, VPC info), authenticate, and deploy the cluster in minutes. Security and Compliance Validation: Example: “Validate my organization’s cloud setup and check for security vulnerabilities.” The agent audits your setup, identifies potential risks, and provides actionable insights. Current Progress We’ve developed a working prototype that integrates with major cloud providers like Google Cloud. The AI agent can already: Authenticate with cloud APIs Execute foundational tasks such as setting up organizations and spinning up clusters Perform initial security validations Who I’m Looking For I’m searching for a co-founder with enterprise sales experience and a strategic vision to grow our user base. You will be instrumental in helping us: Build relationships with companies willing to pilot our product Develop go-to-market strategies for enterprise adoption Identify opportunities for partnerships with cloud service providers Your Role As a co-founder, you’ll lead efforts to: Secure Pilot Programs: Identify and onboard enterprises for product trials to gather feedback and refine the solution. Drive Growth: Develop scalable strategies to grow our user base across industries. Market Positioning: Work with me to define our unique value proposition and establish thought leadership in the cloud management space. My Background I bring over a decade of experience in tech, with a strong focus on software engineering and infrastructure. My contributions so far include: Developing the core AI engine and cloud integrations Designing workflows that simplify complex cloud tasks Why Join This Project? Revolutionize Cloud Management: Be part of a project that will redefine how organizations interact with public clouds. Tackle Challenging Problems: Work at the cutting edge of AI and cloud computing. High Growth Potential: Join an industry projected to grow exponentially as enterprises embrace AI-driven automation. Build a Company from Scratch: Shape the product, team, and culture as we grow together. What’s Next? Our immediate priorities include: Expanding the AI agent’s capabilities to support multi-cloud setups. Conducting pilot programs with enterprise clients. Iterating on the product based on real-world feedback. 
What We Need to Succeed: expertise in enterprise sales and partnerships, a deep understanding of enterprise challenges and cloud adoption trends, and a shared passion for leveraging AI to solve complex problems. Let’s work together to build the future of cloud management. If you’re excited about this vision and bring the expertise we need, I’d love to connect and discuss how we can take this project to the next level.
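To make the pitched flow concrete, here is a purely hypothetical sketch of how natural-language intents might be routed to cloud operations with a parameter-gathering step. None of the names below come from the actual prototype, and the cloud calls are stubbed out rather than issued through a real provider SDK.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class CloudTask:
    required_params: List[str]
    run: Callable[[dict], str]

def create_foundation(params: dict) -> str:
    # Stub: a real agent would call the provider's resource-management APIs here.
    return f"Foundation blueprint created for organization {params['organization_id']}"

def create_gke_cluster(params: dict) -> str:
    # Stub: a real agent would call the managed-Kubernetes API here.
    return f"Cluster with {params['node_count']} nodes deployed in VPC {params['vpc']}"

# Hypothetical intent registry; an LLM would map free-form text onto one of these intents.
REGISTRY: Dict[str, CloudTask] = {
    "setup_foundation": CloudTask(["organization_id"], create_foundation),
    "deploy_gke": CloudTask(["node_count", "vpc"], create_gke_cluster),
}

def handle(intent: str, provided: dict) -> str:
    """Ask for any missing parameters, otherwise run the requested task."""
    task = REGISTRY[intent]
    missing = [p for p in task.required_params if p not in provided]
    if missing:
        return "Please provide: " + ", ".join(missing)  # conversational follow-up step
    return task.run(provided)

print(handle("deploy_gke", {"node_count": 3}))                    # asks for the VPC
print(handle("deploy_gke", {"node_count": 3, "vpc": "default"}))  # runs the stubbed deploy
```

A production version would add an authentication step before running each task and replace the stubs with calls to the cloud provider's SDK.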

Is the idea of simplifying long 10,000+ word research articles into under 100 words of key findings with a case study a good approach?
reddit
LLM Vibe Score0
Human Vibe Score1
PresentationHot3332This week

Is the idea of simplifying long 10,000+ word research articles into under 100 words of key findings with a case study a good approach?

During a visit to a top Indian university a few years back, I noticed students creating extensive research papers that ended up in dusty, cobwebbed cupboards. Surprisingly, only 1% of this research was ever implemented. Most students moved on to higher education or high-paying jobs, leaving their work behind. Only a few received grants to continue their research. This experience highlighted how much valuable knowledge was being wasted, hidden away and unused. (To give you some context, many products in the world have already come from research-based findings - a few examples are VR headsets, zipper packaging, etc.) Problem: There are over 200 million research articles online, but many valuable ideas and solutions are overlooked. Finding, uploading, and summarizing these articles is difficult and time-consuming. (Even using AI, we need some kind of human intervention to simplify the findings in terms of data visualization.) Solution: Create a simple platform, like a Twitter page, to share key findings from long research articles. Use AI tools to help summarize the articles, while humans curate and verify the information. This would make it easier for people to find existing solutions to problems without having to read through long papers. Users can still explore the full articles if they want more details. Opportunity: This can be great for people, teams, or businesses that want to work on problems that have yet to be executed or referenced in the real world.

SUPIR
github
LLM Vibe Score0.599
Human Vibe Score0.8316614420062696
Fanghua-YuMar 28, 2025

SUPIR

(CVPR2024) Scaling Up to Excellence: Practicing Model Scaling for Photo-Realistic Image Restoration In the Wild [Paper] [Project Page] [[Online App]](https://supir.suppixel.ai/home) Fanghua Yu, Jinjin Gu, Zheyuan Li, Jinfan Hu, Xiangtao Kong, Xintao Wang, Jingwen He, Yu Qiao, Chao Dong Shenzhen Institute of Advanced Technology; Shanghai AI Laboratory; University of Sydney; The Hong Kong Polytechnic University; ARC Lab, Tencent PCG; The Chinese University of Hong Kong 🚀 We're thrilled to announce the official launch of SupPixel AI! Experience the next level of image processing and upscaling with our cutting-edge AI technology based on SUPIR. Explore now at suppixel.ai. 🔧 Dependencies and Installation Clone repo Install dependent packages Download Checkpoints For users who can connect to huggingface, please set LLAVACLIPPATH, SDXLCLIP1PATH, SDXLCLIP2CKPTPTH in CKPTPTH.py to None. These CLIPs will be downloaded automatically. Dependent Models SDXL CLIP Encoder-1 SDXL CLIP Encoder-2 SDXL base 1.0_0.9vae LLaVA CLIP LLaVA v1.5 13B (optional) Juggernaut-XLv9RunDiffusionPhotov2 Replacement of SDXL base 1.0_0.9vae for Photo Realistic (optional) JuggernautRunDiffusionPhoto2Lightning4Steps Distilling model used in SUPIRv0Juggernautv9_lightning.yaml Models we provided: SUPIR-v0Q: Baidu Netdisk, Google Drive Default training settings from the paper. High generalization and high image quality in most cases. SUPIR-v0F: Baidu Netdisk, Google Drive Training with light degradation settings. Stage1 encoder of SUPIR-v0F retains more detail when facing light degradations. Edit Custom Path for Checkpoints ⚡ Quick Inference Val Dataset RealPhoto60: Baidu Netdisk, Google Drive Usage of SUPIR Python Script Gradio Demo Online App We've just launched SupPixel AI, an easy-to-use tool designed to help with high-quality image processing and upscaling. It builds on SUPIR. Whether you’re into photography, digital art, or just love playing around with image enhancement, we’d love for you to check it out.~ BibTeX @misc{yu2024scaling, title={Scaling Up to Excellence: Practicing Model Scaling for Photo-Realistic Image Restoration In the Wild}, author={Fanghua Yu and Jinjin Gu and Zheyuan Li and Jinfan Hu and Xiangtao Kong and Xintao Wang and Jingwen He and Yu Qiao and Chao Dong}, year={2024}, eprint={2401.13627}, archivePrefix={arXiv}, primaryClass={cs.CV} } 📧 Contact If you have any questions, please email fanghuayu96@gmail.com or jinjin.gu@suppixel.ai. Non-Commercial Use Only Declaration The SUPIR ("Software") is made available for use, reproduction, and distribution strictly for non-commercial purposes. For the purposes of this declaration, "non-commercial" is defined as not primarily intended for or directed towards commercial advantage or monetary compensation. By using, reproducing, or distributing the Software, you agree to abide by this restriction and not to use the Software for any commercial purposes without obtaining prior written permission from Dr. Jinjin Gu. This declaration does not in any way limit the rights under any open source license that may apply to the Software; it solely adds a condition that the Software shall not be used for commercial purposes. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. For inquiries or to obtain permission for commercial use, please contact Dr. 
Jinjin Gu (jinjin.gu@suppixel.ai).

anything-llm
github
LLM Vibe Score0.572
Human Vibe Score0.4703504093656464
Mintplex-LabsMar 28, 2025

anything-llm

AnythingLLM: The all-in-one AI app you were looking for. Chat with your docs, use AI Agents, hyper-configurable, multi-user, & no frustrating setup required. | | Docs | Hosted Instance English · 简体中文 · 日本語 👉 AnythingLLM for desktop (Mac, Windows, & Linux)! Download Now A full-stack application that enables you to turn any document, resource, or piece of content into context that any LLM can use as references during chatting. This application allows you to pick and choose which LLM or Vector Database you want to use as well as supporting multi-user management and permissions. !Chatting Watch the demo! Product Overview AnythingLLM is a full-stack application where you can use commercial off-the-shelf LLMs or popular open source LLMs and vectorDB solutions to build a private ChatGPT with no compromises that you can run locally as well as host remotely and be able to chat intelligently with any documents you provide it. AnythingLLM divides your documents into objects called workspaces. A Workspace functions a lot like a thread, but with the addition of containerization of your documents. Workspaces can share documents, but they do not talk to each other so you can keep your context for each workspace clean. Cool features of AnythingLLM 🆕 Custom AI Agents 🆕 No-code AI Agent builder 🖼️ Multi-modal support (both closed and open-source LLMs!) 👤 Multi-user instance support and permissioning Docker version only 🦾 Agents inside your workspace (browse the web, etc) 💬 Custom Embeddable Chat widget for your website Docker version only 📖 Multiple document type support (PDF, TXT, DOCX, etc) Simple chat UI with Drag-n-Drop functionality and clear citations. 100% Cloud deployment ready. Works with all popular closed and open-source LLM providers. Built-in cost & time-saving measures for managing very large documents compared to any other chat UI. Full Developer API for custom integrations! Much more...install and find out! Supported LLMs, Embedder Models, Speech models, and Vector Databases Large Language Models (LLMs): Any open-source llama.cpp compatible model OpenAI OpenAI (Generic) Azure OpenAI AWS Bedrock Anthropic NVIDIA NIM (chat models) Google Gemini Pro Hugging Face (chat models) Ollama (chat models) LM Studio (all models) LocalAi (all models) Together AI (chat models) Fireworks AI (chat models) Perplexity (chat models) OpenRouter (chat models) DeepSeek (chat models) Mistral Groq Cohere KoboldCPP LiteLLM Text Generation Web UI Apipie xAI Novita AI (chat models) PPIO Embedder models: AnythingLLM Native Embedder (default) OpenAI Azure OpenAI LocalAi (all) Ollama (all) LM Studio (all) Cohere Audio Transcription models: AnythingLLM Built-in (default) OpenAI TTS (text-to-speech) support: Native Browser Built-in (default) PiperTTSLocal - runs in browser OpenAI TTS ElevenLabs Any OpenAI Compatible TTS service. STT (speech-to-text) support: Native Browser Built-in (default) Vector Databases: LanceDB (default) Astra DB Pinecone Chroma Weaviate Qdrant Milvus Zilliz Technical Overview This monorepo consists of three main sections: frontend: A viteJS + React frontend that you can run to easily create and manage all your content the LLM can use. server: A NodeJS express server to handle all the interactions and do all the vectorDB management and LLM interactions. collector: NodeJS express server that processes and parses documents from the UI. docker: Docker instructions and build process + information for building from source. embed: Submodule for generation & creation of the web embed widget. 
browser-extension: Submodule for the chrome browser extension. 🛳 Self Hosting Mintplex Labs & the community maintain a number of deployment methods, scripts, and templates that you can use to run AnythingLLM locally. Refer to the table below to read how to deploy on your preferred environment or to automatically deploy. | Docker | AWS | GCP | Digital Ocean | Render.com | |----------------------------------------|----|-----|---------------|------------| | [![Deploy on Docker][docker-btn]][docker-deploy] | [![Deploy on AWS][aws-btn]][aws-deploy] | [![Deploy on GCP][gcp-btn]][gcp-deploy] | [![Deploy on DigitalOcean][do-btn]][do-deploy] | [![Deploy on Render.com][render-btn]][render-deploy] | | Railway | RepoCloud | Elestio | | --- | --- | --- | | [![Deploy on Railway][railway-btn]][railway-deploy] | [![Deploy on RepoCloud][repocloud-btn]][repocloud-deploy] | [![Deploy on Elestio][elestio-btn]][elestio-deploy] | or set up a production AnythingLLM instance without Docker → How to setup for development yarn setup To fill in the required .env files you'll need in each of the application sections (from root of repo). Go fill those out before proceeding. Ensure server/.env.development is filled or else things won't work right. yarn dev:server To boot the server locally (from root of repo). yarn dev:frontend To boot the frontend locally (from root of repo). yarn dev:collector To then run the document collector (from root of repo). Learn about documents Learn about vector caching External Apps & Integrations These are apps that are not maintained by Mintplex Labs, but are compatible with AnythingLLM. A listing here is not an endorsement. Midori AI Subsystem Manager - A streamlined and efficient way to deploy AI systems using Docker container technology. Coolify - Deploy AnythingLLM with a single click. GPTLocalhost for Microsoft Word - A local Word Add-in for you to use AnythingLLM in Microsoft Word. Telemetry & Privacy AnythingLLM by Mintplex Labs Inc contains a telemetry feature that collects anonymous usage information. More about Telemetry & Privacy for AnythingLLM Why? We use this information to help us understand how AnythingLLM is used, to help us prioritize work on new features and bug fixes, and to help us improve AnythingLLM's performance and stability. Opting out Set DISABLE_TELEMETRY in your server or docker .env settings to "true" to opt out of telemetry. You can also do this in-app by going to the sidebar > Privacy and disabling telemetry. What do you explicitly track? We will only track usage details that help us make product and roadmap decisions, specifically: Type of your installation (Docker or Desktop) When a document is added or removed. No information about the document. Just that the event occurred. This gives us an idea of use. Type of vector database in use. Let's us know which vector database provider is the most used to prioritize changes when updates arrive for that provider. Type of LLM in use. Let's us know the most popular choice and prioritize changes when updates arrive for that provider. Chat is sent. This is the most regular "event" and gives us an idea of the daily-activity of this project across all installations. Again, only the event is sent - we have no information on the nature or content of the chat itself. You can verify these claims by finding all locations Telemetry.sendTelemetry is called. Additionally these events are written to the output log so you can also see the specific data which was sent - if enabled. 
No IP or other identifying information is collected. The Telemetry provider is PostHog - an open-source telemetry collection service. View all telemetry events in source code 👋 Contributing create issue create PR with branch name format of - LGTM from core-team 🌟 Contributors 🔗 More Products [VectorAdmin][vector-admin]: An all-in-one GUI & tool-suite for managing vector databases. [OpenAI Assistant Swarm][assistant-swarm]: Turn your entire library of OpenAI assistants into one single army commanded from a single agent. [![][back-to-top]](#readme-top) Copyright © 2025 [Mintplex Labs][profile-link]. This project is MIT licensed. [back-to-top]: https://img.shields.io/badge/-BACKTOTOP-222628?style=flat-square [profile-link]: https://github.com/mintplex-labs [vector-admin]: https://github.com/mintplex-labs/vector-admin [assistant-swarm]: https://github.com/Mintplex-Labs/openai-assistant-swarm [docker-btn]: ./images/deployBtns/docker.png [docker-deploy]: ./docker/HOWTOUSE_DOCKER.md [aws-btn]: ./images/deployBtns/aws.png [aws-deploy]: ./cloud-deployments/aws/cloudformation/DEPLOY.md [gcp-btn]: https://deploy.cloud.run/button.svg [gcp-deploy]: ./cloud-deployments/gcp/deployment/DEPLOY.md [do-btn]: https://www.deploytodo.com/do-btn-blue.svg [do-deploy]: ./cloud-deployments/digitalocean/terraform/DEPLOY.md [render-btn]: https://render.com/images/deploy-to-render-button.svg [render-deploy]: https://render.com/deploy?repo=https://github.com/Mintplex-Labs/anything-llm&branch=render [render-btn]: https://render.com/images/deploy-to-render-button.svg [render-deploy]: https://render.com/deploy?repo=https://github.com/Mintplex-Labs/anything-llm&branch=render [railway-btn]: https://railway.app/button.svg [railway-deploy]: https://railway.app/template/HNSCS1?referralCode=WFgJkn [repocloud-btn]: https://d16t0pc4846x52.cloudfront.net/deploylobe.svg [repocloud-deploy]: https://repocloud.io/details/?app_id=276 [elestio-btn]: https://elest.io/images/logos/deploy-to-elestio-btn.png [elestio-deploy]: https://elest.io/open-source/anythingllm

xpert
github
LLM Vibe Score0.457
Human Vibe Score0.0831216059433162
xpert-aiMar 28, 2025

xpert

English | 中文 [uri_license]: https://www.gnu.org/licenses/agpl-3.0.html [urilicenseimage]: https://img.shields.io/badge/License-AGPL%20v3-blue.svg Xpert Cloud · Self-hosting · Documentation · Enterprise inquiry Open-Source AI Platform for Enterprise Data Analysis, Indicator Management and Agents Orchestration Xpert AI is an open-source enterprise-level AI system that perfectly integrates two major platforms: agent orchestration and data analysis. 💡 What's New Agent and Workflow Hybrid Architecture In today's rapidly evolving AI landscape, enterprises face a critical dilemma: how to balance the creativity of LLMs with the stability of processes? While purely agent-based architectures offer flexibility, they are difficult to control; traditional workflows, though reliable, lack adaptability. The Agent and Workflow Hybrid Architecture of the Xpert AI platform is designed to resolve this conflict — it allows AI to possess "free will" while adhering to "rules and order." !agent-workflow-hybrid-architecture Blog - Agent and Workflow Hybrid Architecture Agent Orchestration Platform By coordinating the collaboration of multiple agents, Xpert completes complex tasks. Xpert integrates different types of AI agents through an efficient management mechanism, utilizing their capabilities to solve multidimensional problems. Xpert Agents Data Analysis Platform An agile data analysis platform based on cloud computing for multidimensional modeling, indicator management, and BI display. It supports connecting to various data sources, achieving efficient and flexible data analysis and visualization, and provides multiple intelligent analysis functions and tools to help enterprises quickly and accurately discover business value and make operational decisions. ChatBI ChatBI is an innovative feature we are introducing, combining chat functionality with business intelligence (BI) analysis capabilities. It offers users a more intuitive and convenient data analysis experience through natural language interaction. ChatBI_Demo.mp4 🚀 Quick Start Before installing Xpert, make sure your machine meets the following minimum system requirements: CPU >= 2 Core RAM >= 4 GiB Node.js (ESM and CommonJS) - 18.x, 19.x, 20.x, 22.x The easiest way to start the Xpert server is through docker compose. Before running Xpert with the following commands, make sure that Docker and Docker Compose are installed on your machine: After running, you can access the Xpert dashboard in your browser at http://localhost/onboarding and start the initialization process. Please check our Wiki - Development to get started quickly. 🎯 Mission Empowering enterprises with intelligent collaboration and data-driven insights through innovative AI orchestration and agile analytics. 🌼 Screenshots Pareto analysis, Product profit analysis, Reseller analysis, Bigview dashboard, Indicator application, Indicator mobile app. 💻 Demo, Downloads, Testing and Production Demo Xpert AI Platform Demo at . Notes: You can generate sample data on the home dashboard page. Production (SaaS) Xpert AI Platform SaaS is available at . Note: it's currently in Alpha version / in testing mode, please use it with caution! 
🧱 Technology Stack and Requirements TypeScript language NodeJs / NestJs Nx Angular RxJS TypeORM Langchain ECharts Java Mondrian For Production, we recommend: PostgreSQL PM2 See also README.md and CREDITS.md files in relevant folders for lists of libraries and software included in the Platform, information about licenses, and other details 📄 Documentation Please refer to our official Platform Documentation and to our Wiki (WIP). 💌 Contact Us For business inquiries: Xpert AI Platform @ Twitter 🛡️ License We support the open-source community. This software is available under the following licenses: Xpert AI Platform Community Edition Xpert AI Platform Small Business Xpert AI Platform Enterprise Please see LICENSE for more information on licenses. 💪 Thanks to our Contributors Contributors Please give us :star: on Github, it helps! You are more than welcome to submit feature requests in the Xpert AI repo Pull requests are always welcome! Please base pull requests against the develop branch and follow the contributing guide.

GenAI_Agents
github
LLM Vibe Score0.563
Human Vibe Score0.24210481455988786
NirDiamantMar 28, 2025

GenAI_Agents

🌟 Support This Project: Your sponsorship fuels innovation in GenAI agent development. Become a sponsor to help maintain and expand this valuable resource! GenAI Agents: Comprehensive Repository for Development and Implementation 🚀 Welcome to one of the most extensive and dynamic collections of Generative AI (GenAI) agent tutorials and implementations available today. This repository serves as a comprehensive resource for learning, building, and sharing GenAI agents, ranging from simple conversational bots to complex, multi-agent systems. 📫 Stay Updated! 🚀Cutting-edgeUpdates 💡ExpertInsights 🎯Top 0.1%Content Join over 15,000 of AI enthusiasts getting unique cutting-edge insights and free tutorials! Plus, subscribers get exclusive early access and special 33% discounts to my book and the upcoming RAG Techniques course! Introduction Generative AI agents are at the forefront of artificial intelligence, revolutionizing the way we interact with and leverage AI technologies. This repository is designed to guide you through the development journey, from basic agent implementations to advanced, cutting-edge systems. 📚 Learn to Build Your First AI Agent Your First AI Agent: Simpler Than You Think This detailed blog post complements the repository by providing a complete A-Z walkthrough with in-depth explanations of core concepts, step-by-step implementation, and the theory behind AI agents. It's designed to be incredibly simple to follow while covering everything you need to know to build your first working agent from scratch. 💡 Plus: Subscribe to the newsletter for exclusive early access to tutorials and special discounts on upcoming courses and books! Our goal is to provide a valuable resource for everyone - from beginners taking their first steps in AI to seasoned practitioners pushing the boundaries of what's possible. By offering a range of examples from foundational to complex, we aim to facilitate learning, experimentation, and innovation in the rapidly evolving field of GenAI agents. Furthermore, this repository serves as a platform for showcasing innovative agent creations. Whether you've developed a novel agent architecture or found an innovative application for existing techniques, we encourage you to share your work with the community. Related Projects 📚 Dive into my comprehensive guide on RAG techniques to learn about integrating external knowledge into AI systems, enhancing their capabilities with up-to-date and relevant information retrieval. 🖋️ Explore my Prompt Engineering Techniques guide for an extensive collection of prompting strategies, from fundamental concepts to advanced methods, improving your ability to communicate effectively with AI language models. A Community-Driven Knowledge Hub This repository grows stronger with your contributions! Join our vibrant Discord community — the central hub for shaping and advancing this project together 🤝 GenAI Agents Discord Community Whether you're a novice eager to learn or an expert ready to share your knowledge, your insights can shape the future of GenAI agents. Join us to propose ideas, get feedback, and collaborate on innovative implementations. For contribution guidelines, please refer to our CONTRIBUTING.md file. Let's advance GenAI agent technology together! 🔗 For discussions on GenAI, agents, or to explore knowledge-sharing opportunities, feel free to connect on LinkedIn. 
Key Features 🎓 Learn to build GenAI agents from beginner to advanced levels 🧠 Explore a wide range of agent architectures and applications 📚 Step-by-step tutorials and comprehensive documentation 🛠️ Practical, ready-to-use agent implementations 🌟 Regular updates with the latest advancements in GenAI 🤝 Share your own agent creations with the community GenAI Agent Implementations Explore our extensive list of GenAI agent implementations, sorted by categories: 🌱 Beginner-Friendly Agents Simple Conversational Agent LangChain PydanticAI Overview 🔎 A context-aware conversational AI maintains information across interactions, enabling more natural dialogues. Implementation 🛠️ Integrates a language model, prompt template, and history manager to generate contextual responses and track conversation sessions. Simple Question Answering Agent Overview 🔎 Answering (QA) agent using LangChain and OpenAI's language model understands user queries and provides relevant, concise answers. Implementation 🛠️ Combines OpenAI's GPT model, a prompt template, and an LLMChain to process user questions and generate AI-driven responses in a streamlined manner. Simple Data Analysis Agent LangChain PydanticAI Overview 🔎 An AI-powered data analysis agent interprets and answers questions about datasets using natural language, combining language models with data manipulation tools for intuitive data exploration. Implementation 🛠️ Integrates a language model, data manipulation framework, and agent framework to process natural language queries and perform data analysis on a synthetic dataset, enabling accessible insights for non-technical users. 🔧 Framework Tutorial: LangGraph Introduction to LangGraph: Building Modular AI Workflows Overview 🔎 This tutorial introduces LangGraph, a powerful framework for creating modular, graph-based AI workflows. Learn how to leverage LangGraph to build more complex and flexible AI agents that can handle multi-step processes efficiently. Implementation 🛠️ Step-by-step guide on using LangGraph to create a StateGraph workflow. The tutorial covers key concepts such as state management, node creation, and graph compilation. It demonstrates these principles by constructing a simple text analysis pipeline, serving as a foundation for more advanced agent architectures. Additional Resources 📚 Blog Post 🎓 Educational and Research Agents ATLAS: Academic Task and Learning Agent System Overview 🔎 ATLAS demonstrates how to build an intelligent multi-agent system that transforms academic support through AI-powered assistance. The system leverages LangGraph's workflow framework to coordinate multiple specialized agents that provide personalized academic planning, note-taking, and advisory support. Implementation 🛠️ Implements a state-managed multi-agent architecture using four specialized agents (Coordinator, Planner, Notewriter, and Advisor) working in concert through LangGraph's workflow framework. The system features sophisticated workflows for profile analysis and academic support, with continuous adaptation based on student performance and feedback. Additional Resources 📚 YouTube Explanation Blog Post Scientific Paper Agent - Literature Review Overview 🔎 An intelligent research assistant that helps users navigate, understand, and analyze scientific literature through an orchestrated workflow. 
The system combines academic APIs with sophisticated paper processing techniques to automate literature review tasks, enabling researchers to efficiently extract insights from academic papers while maintaining research rigor and quality control. Implementation 🛠️ Leverages LangGraph to create a five-node workflow system including decision making, planning, tool execution, and quality validation nodes. The system integrates the CORE API for paper access, PDFplumber for document processing, and advanced language models for analysis. Key features include a retry mechanism for robust paper downloads, structured data handling through Pydantic models, and quality-focused improvement cycles with human-in-the-loop validation options. Additional Resources 📚 YouTube Explanation Blog Post Chiron - A Feynman-Enhanced Learning Agent Overview 🔎 An adaptive learning agent that guides users through educational content using a structured checkpoint system and Feynman-style teaching. The system processes learning materials (either user-provided or web-retrieved), verifies understanding through interactive checkpoints, and provides simplified explanations when needed, creating a personalized learning experience that mimics one-on-one tutoring. Implementation 🛠️ Uses LangGraph to orchestrate a learning workflow that includes checkpoint definition, context building, understanding verification, and Feynman teaching nodes. The system integrates web search for dynamic content retrieval, employs semantic chunking for context processing, and manages embeddings for relevant information retrieval. Key features include a 70% understanding threshold for progression, interactive human-in-the-loop validation, and structured output through Pydantic models for consistent data handling. Additional Resources 📚 YouTube Explanation 💼 Business and Professional Agents Customer Support Agent (LangGraph) Overview 🔎 An intelligent customer support agent using LangGraph categorizes queries, analyzes sentiment, and provides appropriate responses or escalates issues. Implementation 🛠️ Utilizes LangGraph to create a workflow combining state management, query categorization, sentiment analysis, and response generation. Essay Grading Agent (LangGraph) Overview 🔎 An automated essay grading system using LangGraph and an LLM model evaluates essays based on relevance, grammar, structure, and depth of analysis. Implementation 🛠️ Utilizes a state graph to define the grading workflow, incorporating separate grading functions for each criterion. Travel Planning Agent (LangGraph) Overview 🔎 A Travel Planner using LangGraph demonstrates how to build a stateful, multi-step conversational AI application that collects user input and generates personalized travel itineraries. Implementation 🛠️ Utilizes StateGraph to define the application flow, incorporates custom PlannerState for process management. GenAI Career Assistant Agent Overview 🔎 The GenAI Career Assistant demonstrates how to create a multi-agent system that provides personalized guidance for careers in Generative AI. Using LangGraph and Gemini LLM, the system delivers customized learning paths, resume assistance, interview preparation, and job search support. Implementation 🛠️ Leverages a multi-agent architecture using LangGraph to coordinate specialized agents (Learning, Resume, Interview, Job Search) through TypedDict-based state management. 
The system employs sophisticated query categorization and routing while integrating with external tools like DuckDuckGo for job searches and dynamic content generation. Additional Resources 📚 YouTube Explanation Project Manager Assistant Agent Overview 🔎 An AI agent designed to assist in project management tasks by automating the process of creating actionable tasks from project descriptions, identifying dependencies, scheduling work, and assigning tasks to team members based on expertise. The system includes risk assessment and self-reflection capabilities to optimize project plans through multiple iterations, aiming to minimize overall project risk. Implementation 🛠️ Leverages LangGraph to orchestrate a workflow of specialized nodes including task generation, dependency mapping, scheduling, allocation, and risk assessment. Each node uses GPT-4o-mini for structured outputs following Pydantic models. The system implements a feedback loop for self-improvement, where risk scores trigger reflection cycles that generate insights to optimize the project plan. Visualization tools display Gantt charts of the generated schedules across iterations. Additional Resources 📚 YouTube Explanation Contract Analysis Assistant (ClauseAI) Overview 🔎 ClauseAI demonstrates how to build an AI-powered contract analysis system using a multi-agent approach. The system employs specialized AI agents for different aspects of contract review, from clause analysis to compliance checking, and leverages LangGraph for workflow orchestration and Pinecone for efficient clause retrieval and comparison. Implementation 🛠️ Implements a sophisticated state-based workflow using LangGraph to coordinate multiple AI agents through contract analysis stages. The system features Pydantic models for data validation, vector storage with Pinecone for clause comparison, and LLM-based analysis for generating comprehensive contract reports. The implementation includes parallel processing capabilities and customizable report generation based on user requirements. Additional Resources 📚 YouTube Explanation E2E Testing Agent Overview 🔎 The E2E Testing Agent demonstrates how to build an AI-powered system that converts natural language test instructions into executable end-to-end web tests. Using LangGraph for workflow orchestration and Playwright for browser automation, the system enables users to specify test cases in plain English while handling the complexity of test generation and execution. Implementation 🛠️ Implements a structured workflow using LangGraph to coordinate test generation, validation, and execution. The system features TypedDict state management, integration with Playwright for browser automation, and LLM-based code generation for converting natural language instructions into executable test scripts. The implementation includes DOM state analysis, error handling, and comprehensive test reporting. Additional Resources 📚 YouTube Explanation 🎨 Creative and Content Generation Agents GIF Animation Generator Agent (LangGraph) Overview 🔎 A GIF animation generator that integrates LangGraph for workflow management, GPT-4 for text generation, and DALL-E for image creation, producing custom animations from user prompts. Implementation 🛠️ Utilizes LangGraph to orchestrate a workflow that generates character descriptions, plots, and image prompts using GPT-4, creates images with DALL-E 3, and assembles them into GIFs using PIL. Employs asynchronous programming for efficient parallel processing. 
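Most of the implementations in this collection lean on the same LangGraph pattern: a typed state object passed through named nodes in a compiled StateGraph. Here is a minimal sketch of that pattern, assuming a recent langgraph release; the state fields and node function are illustrative placeholders rather than code from any of the listed projects.

```python
from typing import TypedDict
from langgraph.graph import StateGraph, END

class PipelineState(TypedDict):
    text: str
    summary: str

def summarize(state: PipelineState) -> dict:
    # A real node would call an LLM here; truncation stands in for the model call.
    return {"summary": state["text"][:60]}

builder = StateGraph(PipelineState)
builder.add_node("summarize", summarize)
builder.set_entry_point("summarize")
builder.add_edge("summarize", END)

app = builder.compile()
result = app.invoke({"text": "LangGraph coordinates multi-step agent workflows as a graph of nodes.", "summary": ""})
print(result["summary"])
```

The larger systems listed here extend this skeleton with conditional edges for routing, multiple cooperating nodes, and persistent or checkpointed state.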
TTS Poem Generator Agent (LangGraph) Overview 🔎 An advanced text-to-speech (TTS) agent using LangGraph and OpenAI's APIs classifies input text, processes it based on content type, and generates corresponding speech output. Implementation 🛠️ Utilizes LangGraph to orchestrate a workflow that classifies input text using GPT models, applies content-specific processing, and converts the processed text to speech using OpenAI's TTS API. The system adapts its output based on the identified content type (general, poem, news, or joke). Music Compositor Agent (LangGraph) Overview 🔎 An AI Music Compositor using LangGraph and OpenAI's language models generates custom musical compositions based on user input. The system processes the input through specialized components, each contributing to the final musical piece, which is then converted to a playable MIDI file. Implementation 🛠️ LangGraph orchestrates a workflow that transforms user input into a musical composition, using ChatOpenAI (GPT-4) to generate melody, harmony, and rhythm, which are then style-adapted. The final AI-generated composition is converted to a MIDI file using music21 and can be played back using pygame. Content Intelligence: Multi-Platform Content Generation Agent Overview 🔎 Content Intelligence demonstrates how to build an advanced content generation system that transforms input text into platform-optimized content across multiple social media channels. The system employs LangGraph for workflow orchestration to analyze content, conduct research, and generate tailored content while maintaining brand consistency across different platforms. Implementation 🛠️ Implements a sophisticated workflow using LangGraph to coordinate multiple specialized nodes (Summary, Research, Platform-Specific) through the content generation process. The system features TypedDict and Pydantic models for state management, integration with Tavily Search for research enhancement, and platform-specific content generation using GPT-4. The implementation includes parallel processing for multiple platforms and customizable content templates. Additional Resources 📚 YouTube Explanation Business Meme Generator Using LangGraph and Memegen.link Overview 🔎 The Business Meme Generator demonstrates how to create an AI-powered system that generates contextually relevant memes based on company website analysis. Using LangGraph for workflow orchestration, the system combines Groq's Llama model for text analysis and the Memegen.link API to automatically produce brand-aligned memes for digital marketing. Implementation 🛠️ Implements a state-managed workflow using LangGraph to coordinate website content analysis, meme concept generation, and image creation. The system features Pydantic models for data validation, asynchronous processing with aiohttp, and integration with external APIs (Groq, Memegen.link) to create a complete meme generation pipeline with customizable templates. Additional Resources 📚 YouTube Explanation Murder Mystery Game with LLM Agents Overview 🔎 A text-based detective game that utilizes autonomous LLM agents as interactive characters in a procedurally generated murder mystery. Drawing inspiration from the UNBOUNDED paper, the system creates unique scenarios each time, with players taking on the role of Sherlock Holmes to solve the case through character interviews and deductive reasoning. 
Implementation 🛠️ Leverages two LangGraph workflows - a main game loop for story/character generation and game progression, and a conversation sub-graph for character interactions. The system uses a combination of LLM-powered narrative generation, character AI, and structured game mechanics to create an immersive investigative experience with replayable storylines. Additional Resources 📚 YouTube Explanation 📊 Analysis and Information Processing Agents Memory-Enhanced Conversational Agent Overview 🔎 A memory-enhanced conversational AI agent incorporates short-term and long-term memory systems to maintain context within conversations and across multiple sessions, improving interaction quality and personalization. Implementation 🛠️ Integrates a language model with separate short-term and long-term memory stores, utilizes a prompt template incorporating both memory types, and employs a memory manager for storage and retrieval. The system includes an interaction loop that updates and utilizes memories for each response. Multi-Agent Collaboration System Overview 🔎 A multi-agent collaboration system combining historical research with data analysis, leveraging large language models to simulate specialized agents working together to answer complex historical questions. Implementation 🛠️ Utilizes a base Agent class to create specialized HistoryResearchAgent and DataAnalysisAgent, orchestrated by a HistoryDataCollaborationSystem. The system follows a five-step process: historical context provision, data needs identification, historical data provision, data analysis, and final synthesis. Self-Improving Agent Overview 🔎 A Self-Improving Agent using LangChain engages in conversations, learns from interactions, and continuously improves its performance over time through reflection and adaptation. Implementation 🛠️ Integrates a language model with chat history management, response generation, and a reflection mechanism. The system employs a learning system that incorporates insights from reflection to enhance future performance, creating a continuous improvement loop. Task-Oriented Agent Overview 🔎 A language model application using LangChain that summarizes text and translates the summary to Spanish, combining custom functions, structured tools, and an agent for efficient text processing. Implementation 🛠️ Utilizes custom functions for summarization and translation, wrapped as structured tools. Employs a prompt template to guide the agent, which orchestrates the use of tools. An agent executor manages the process, taking input text and producing both an English summary and its Spanish translation. Internet Search and Summarize Agent Overview 🔎 An intelligent web research assistant that combines web search capabilities with AI-powered summarization, automating the process of gathering information from the internet and distilling it into concise, relevant summaries. Implementation 🛠️ Integrates a web search module using DuckDuckGo's API, a result parser, and a text summarization engine leveraging OpenAI's language models. The system performs site-specific or general searches, extracts relevant content, generates concise summaries, and compiles attributed results for efficient information retrieval and synthesis. Multi agent research team - Autogen Overview 🔎 This technique explores a multi-agent system for collaborative research using the AutoGen library. It employs agents to solve tasks collaboratively, focusing on efficient execution and quality assurance. 
The system enhances research by distributing tasks among specialized agents. Implementation 🛠️ Agents are configured with specific roles using the GPT-4 model, including admin, developer, planner, executor, and quality assurance. Interaction management ensures orderly communication with defined transitions. Task execution involves collaborative planning, coding, execution, and quality checking, demonstrating a scalable framework for various domains. Additional Resources 📚 comprehensive solution with UI Blogpost Sales Call Analyzer Overview 🔎 An intelligent system that automates the analysis of sales call recordings by combining audio transcription with advanced natural language processing. The analyzer transcribes audio using OpenAI's Whisper, processes the text using NLP techniques, and generates comprehensive reports including sentiment analysis, key phrases, pain points, and actionable recommendations to improve sales performance. Implementation 🛠️ Utilizes multiple components in a structured workflow: OpenAI Whisper for audio transcription, CrewAI for task automation and agent management, and LangChain for orchestrating the analysis pipeline. The system processes audio through a series of steps from transcription to detailed analysis, leveraging custom agents and tasks to generate structured JSON reports containing insights about customer sentiment, sales opportunities, and recommended improvements. Additional Resources 📚 YouTube Explanation Weather Emergency & Response System Overview 🔎 A comprehensive system demonstrating two agent graph implementations for weather emergency response: a real-time graph processing live weather data, and a hybrid graph combining real and simulated data for testing high-severity scenarios. The system handles complete workflow from data gathering through emergency plan generation, with automated notifications and human verification steps. Implementation 🛠️ Utilizes LangGraph for orchestrating complex workflows with state management, integrating OpenWeatherMap API for real-time data, and Gemini for analysis and response generation. The system incorporates email notifications, social media monitoring simulation, and severity-based routing with configurable human verification for low/medium severity events. Additional Resources 📚 YouTube Explanation Self-Healing Codebase System Overview 🔎 An intelligent system that automatically detects, diagnoses, and fixes runtime code errors using LangGraph workflow orchestration and ChromaDB vector storage. The system maintains a memory of encountered bugs and their fixes through vector embeddings, enabling pattern recognition for similar errors across the codebase. Implementation 🛠️ Utilizes a state-based graph workflow that processes function definitions and runtime arguments through specialized nodes for error detection, code analysis, and fix generation. Incorporates ChromaDB for vector-based storage of bug patterns and fixes, with automated search and retrieval capabilities for similar error patterns, while maintaining code execution safety through structured validation steps. Additional Resources 📚 YouTube Explanation DataScribe: AI-Powered Schema Explorer Overview 🔎 An intelligent agent system that enables intuitive exploration and querying of relational databases through natural language interactions. 
The system utilizes a fleet of specialized agents, coordinated by a stateful Supervisor, to handle schema discovery, query planning, and data analysis tasks while maintaining contextual understanding through vector-based relationship graphs. Implementation 🛠️ Leverages LangGraph for orchestrating a multi-agent workflow including discovery, inference, and planning agents, with NetworkX for relationship graph visualization and management. The system incorporates dynamic state management through TypedDict classes, maintains database context between sessions using a db_graph attribute, and includes safety measures to prevent unauthorized database modifications. Memory-Enhanced Email Agent (LangGraph & LangMem) Overview 🔎 An intelligent email assistant that combines three types of memory (semantic, episodic, and procedural) to create a system that improves over time. The agent can triage incoming emails, draft contextually appropriate responses using stored knowledge, and enhance its performance based on user feedback. Implementation 🛠️ Leverages LangGraph for workflow orchestration and LangMem for sophisticated memory management across multiple memory types. The system implements a triage workflow with memory-enhanced decision making, specialized tools for email composition and calendar management, and a self-improvement mechanism that updates its own prompts based on feedback and past performance. Additional Resources 📚 Blog Post 📰 News and Information Agents News TL;DR using LangGraph Overview 🔎 A news summarization system that generates concise TL;DR summaries of current events based on user queries. The system leverages large language models for decision making and summarization while integrating with news APIs to access up-to-date content, allowing users to quickly catch up on topics of interest through generated bullet-point summaries. Implementation 🛠️ Utilizes LangGraph to orchestrate a workflow combining multiple components: GPT-4o-mini for generating search terms and article summaries, NewsAPI for retrieving article metadata, BeautifulSoup for web scraping article content, and Asyncio for concurrent processing. The system follows a structured pipeline from query processing through article selection and summarization, managing the flow between components to produce relevant TL;DRs of current news articles. Additional Resources 📚 YouTube Explanation Blog Post AInsight: AI/ML Weekly News Reporter Overview 🔎 AInsight demonstrates how to build an intelligent news aggregation and summarization system using a multi-agent architecture. The system employs three specialized agents (NewsSearcher, Summarizer, Publisher) to automatically collect, process and summarize AI/ML news for general audiences through LangGraph-based workflow orchestration. Implementation 🛠️ Implements a state-managed multi-agent system using LangGraph to coordinate the news collection (Tavily API), technical content summarization (GPT-4), and report generation processes. The system features modular architecture with TypedDict-based state management, external API integration, and markdown report generation with customizable templates. Additional Resources 📚 YouTube Explanation Journalism-Focused AI Assistant Overview 🔎 A specialized AI assistant that helps journalists tackle modern journalistic challenges like misinformation, bias, and information overload. 
The system integrates fact-checking, tone analysis, summarization, and grammar review tools to enhance the accuracy and efficiency of journalistic work while maintaining ethical reporting standards. Implementation 🛠️ Leverages LangGraph to orchestrate a workflow of specialized components including language models for analysis and generation, web search integration via DuckDuckGo's API, document parsing tools like PyMuPDFLoader and WebBaseLoader, text splitting with RecursiveCharacterTextSplitter, and structured JSON outputs. Each component works together through a unified workflow to analyze content, verify facts, detect bias, extract quotes, and generate comprehensive reports. Blog Writer (Open AI Swarm) Overview 🔎 A multi-agent system for collaborative blog post creation using OpenAI's Swarm package. It leverages specialized agents to perform research, planning, writing, and editing tasks efficiently. Implementation 🛠️ Utilizes OpenAI's Swarm Package to manage agent interactions. Includes an admin, researcher, planner, writer, and editor, each with specific roles. The system follows a structured workflow: topic setting, outlining, research, drafting, and editing. This approach enhances content creation through task distribution, specialization, and collaborative problem-solving. Additional Resources 📚 Swarm Repo Podcast Internet Search and Generate Agent 🎙️ Overview 🔎 A two step agent that first searches the internet for a given topic and then generates a podcast on the topic found. The search step uses a search agent and search function to find the most relevant information. The second step uses a podcast generation agent and generation function to create a podcast on the topic found. Implementation 🛠️ Utilizes LangGraph to orchestrate a two-step workflow. The first step involves a search agent and function to gather information from the internet. The second step uses a podcast generation agent and function to create a podcast based on the gathered information. 🛍️ Shopping and Product Analysis Agents ShopGenie - Redefining Online Shopping Customer Experience Overview 🔎 An AI-powered shopping assistant that helps customers make informed purchasing decisions even without domain expertise. The system analyzes product information from multiple sources, compares specifications and reviews, identifies the best option based on user needs, and delivers recommendations through email with supporting video reviews, creating a comprehensive shopping experience. Implementation 🛠️ Uses LangGraph to orchestrate a workflow combining Tavily for web search, Llama-3.1-70B for structured data analysis and product comparison, and YouTube API for review video retrieval. The system processes search results through multiple nodes including schema mapping, product comparison, review identification, and email generation. Key features include structured Pydantic models for consistent data handling, retry mechanisms for robust API interactions, and email delivery through SMTP for sharing recommendations. Additional Resources 📚 YouTube Explanation Car Buyer AI Agent Overview 🔎 The Smart Product Buyer AI Agent demonstrates how to build an intelligent system that assists users in making informed purchasing decisions. Using LangGraph and LLM-based intelligence, the system processes user requirements, scrapes product listings from websites like AutoTrader, and provides detailed analysis and recommendations for car purchases. 
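The implementation notes that follow mention asynchronous web scraping with Playwright. As a rough idea of what that scraping step can look like, here is a minimal, hypothetical sketch; the URL and the selector are placeholders, not the repository's actual scraper.

```python
import asyncio
from playwright.async_api import async_playwright

async def fetch_listings(url: str) -> str:
    # Hypothetical scraping step: load a listings page and return its visible text.
    async with async_playwright() as pw:
        browser = await pw.chromium.launch(headless=True)
        page = await browser.new_page()
        await page.goto(url)
        text = await page.inner_text("body")
        await browser.close()
        return text

if __name__ == "__main__":
    listings_text = asyncio.run(fetch_listings("https://example.com/car-listings"))
    print(listings_text[:500])
```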
Implementation 🛠️ Implements a state-based workflow using LangGraph to coordinate user interaction, web scraping, and decision support. The system features TypedDict state management, async web scraping with Playwright, and integrates with external APIs for comprehensive product analysis. The implementation includes a Gradio interface for real-time chat interaction and modular scraper architecture for easy extension to additional product categories. Additional Resources 📚 YouTube Explanation 🎯 Task Management and Productivity Agents Taskifier - Intelligent Task Allocation & Management Overview 🔎 An intelligent task management system that analyzes user work styles and creates personalized task breakdown strategies, born from the observation that procrastination often stems from task ambiguity among students and early-career professionals. The system evaluates historical work patterns, gathers relevant task information through web search, and generates customized step-by-step approaches to optimize productivity and reduce workflow paralysis. Implementation 🛠️ Leverages LangGraph for orchestrating a multi-step workflow including work style analysis, information gathering via Tavily API, and customized plan generation. The system maintains state through the process, integrating historical work pattern data with fresh task research to output detailed, personalized task execution plans aligned with the user's natural working style. Additional Resources 📚 YouTube Explanation Grocery Management Agents System Overview 🔎 A multi-agent system built with CrewAI that automates grocery management tasks including receipt interpretation, expiration date tracking, inventory management, and recipe recommendations. The system uses specialized agents to extract data from receipts, estimate product shelf life, track consumption, and suggest recipes to minimize food waste. Implementation 🛠️ Implements four specialized agents using CrewAI - a Receipt Interpreter that extracts item details from receipts, an Expiration Date Estimator that determines shelf life using online sources, a Grocery Tracker that maintains inventory based on consumption, and a Recipe Recommender that suggests meals using available ingredients. Each agent has specific tools and tasks orchestrated through a crew workflow. Additional Resources 📚 YouTube Explanation 🔍 Quality Assurance and Testing Agents LangGraph-Based Systems Inspector Overview 🔎 A comprehensive testing and validation tool for LangGraph-based applications that automatically analyzes system architecture, generates test cases, and identifies potential vulnerabilities through multi-agent inspection. The inspector employs specialized AI testers to evaluate different aspects of the system, from basic functionality to security concerns and edge cases. Implementation 🛠️ Integrates LangGraph for workflow orchestration, multiple LLM-powered testing agents, and a structured evaluation pipeline that includes static analysis, test case generation, and results verification. The system uses Pydantic for data validation, NetworkX for graph representation, and implements a modular architecture that allows for parallel test execution and comprehensive result analysis. Additional Resources 📚 YouTube Explanation Blog Post EU Green Deal FAQ Bot Overview 🔎 The EU Green Deal FAQ Bot demonstrates how to build a RAG-based AI agent that helps businesses understand EU green deal policies. 
The system processes complex regulatory documents into manageable chunks and provides instant, accurate answers to common questions about environmental compliance, emissions reporting, and waste management requirements. Implementation 🛠️ Implements a sophisticated RAG pipeline using FAISS vectorstore for document storage, semantic chunking for preprocessing, and multiple specialized agents (Retriever, Summarizer, Evaluator) for query processing. The system features query rephrasing for improved accuracy, cross-reference with gold Q&A datasets for answer validation, and comprehensive evaluation metrics to ensure response quality and relevance. Additional Resources 📚 YouTube Explanation Systematic Review Automation System + Paper Draft Creation Overview 🔎 A comprehensive system for automating academic systematic reviews using a directed graph architecture and LangChain components. The system generates complete, publication-ready systematic review papers, automatically processing everything from literature search through final draft generation with multiple revision cycles. Implementation 🛠️ Utilizes a state-based graph workflow that handles paper search and selection (up to 3 papers), PDF processing, and generates a complete academic paper with all standard sections (abstract, introduction, methods, results, conclusions, references). The system incorporates multiple revision cycles with automated critique and improvement phases, all orchestrated through LangGraph state management. Additional Resources 📚 YouTube Explanation 🌟 Special Advanced Technique 🌟 Sophisticated Controllable Agent for Complex RAG Tasks 🤖 Overview 🔎 An advanced RAG solution designed to tackle complex questions that simple semantic similarity-based retrieval cannot solve. This approach uses a sophisticated deterministic graph as the "brain" 🧠 of a highly controllable autonomous agent, capable of answering non-trivial questions from your own data. Implementation 🛠️ • Implement a multi-step process involving question anonymization, high-level planning, task breakdown, adaptive information retrieval and question answering, continuous re-planning, and rigorous answer verification to ensure grounded and accurate responses. Getting Started To begin exploring and building GenAI agents: Clone this repository: Navigate to the technique you're interested in: Follow the detailed implementation guide in each technique's notebook. Contributing We welcome contributions from the community! If you have a new technique or improvement to suggest: Fork the repository Create your feature branch: git checkout -b feature/AmazingFeature Commit your changes: git commit -m 'Add some AmazingFeature' Push to the branch: git push origin feature/AmazingFeature Open a pull request Contributors License This project is licensed under a custom non-commercial license - see the LICENSE file for details. ⭐️ If you find this repository helpful, please consider giving it a star! Keywords: GenAI, Generative AI, Agents, NLP, AI, Machine Learning, Natural Language Processing, LLM, Conversational AI, Task-Oriented AI
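The RAG-oriented entries above, such as the EU Green Deal FAQ Bot and the controllable RAG agent, all start from the same retrieve-then-answer step. A minimal, hedged sketch of that step using LangChain and FAISS follows; the document text, chunk sizes, and model names are placeholders rather than the notebooks' actual settings.

```python
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain_community.vectorstores import FAISS
from langchain_openai import ChatOpenAI, OpenAIEmbeddings

# Placeholder policy text standing in for the real regulatory documents.
raw_text = "The EU Green Deal sets out emissions reporting duties for companies..."

# Split the source text into chunks and index them in a vector store.
chunks = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=50).split_text(raw_text)
store = FAISS.from_texts(chunks, OpenAIEmbeddings())

# Retrieve the most relevant chunks and answer strictly from that context.
question = "Who has to report emissions?"
context = "\n".join(doc.page_content for doc in store.similarity_search(question, k=3))
answer = ChatOpenAI(model="gpt-4o-mini").invoke(
    f"Answer using only this context:\n{context}\n\nQuestion: {question}"
)
print(answer.content)
```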

Prompt_Engineering
github
LLM Vibe Score0.611
Human Vibe Score0.9298414218113789
NirDiamantMar 28, 2025

Prompt_Engineering

🌟 Support This Project: Your sponsorship fuels innovation in prompt engineering development. Become a sponsor to help maintain and expand this valuable resource! Prompt Engineering Techniques: Comprehensive Repository for Development and Implementation 🖋️ Welcome to one of the most extensive and dynamic collections of Prompt Engineering tutorials and implementations available today. This repository serves as a comprehensive resource for learning, building, and sharing prompt engineering techniques, ranging from basic concepts to advanced strategies for leveraging large language models. 📫 Stay Updated! 🚀 Cutting-edge Updates 💡 Expert Insights 🎯 Top 0.1% Content Join over 15,000 AI enthusiasts getting unique cutting-edge insights and free tutorials! Plus, subscribers get exclusive early access and special discounts on our upcoming RAG Techniques course! Introduction Prompt engineering is at the forefront of artificial intelligence, revolutionizing the way we interact with and leverage AI technologies. This repository is designed to guide you through the development journey, from basic prompt structures to advanced, cutting-edge techniques. Our goal is to provide a valuable resource for everyone - from beginners taking their first steps in AI to seasoned practitioners pushing the boundaries of what's possible. By offering a range of examples from foundational to complex, we aim to facilitate learning, experimentation, and innovation in the rapidly evolving field of prompt engineering. Furthermore, this repository serves as a platform for showcasing innovative prompt engineering techniques. Whether you've developed a novel approach or found an innovative application for existing techniques, we encourage you to share your work with the community. 📖 Get the Fully Explained Version of This Repo This repository contains 22 hands-on Jupyter Notebook tutorials covering key prompt engineering techniques. If you want to go deeper with full explanations, intuitive insights, and structured exercises, check out the expanded version in book format: 📚 Prompt Engineering from Zero to Hero 📖 All 22 techniques from this repo, fully explained in depth 🧠 Step-by-step breakdowns of key concepts & best practices 🏋️ Hands-on exercises to sharpen your skills 🎯 Designed for learners who want a structured, guided approach 📄 Instant access to the PDF upon purchase 📱 Readable on any device – computer, tablet, or phone 💡 Subscribers to the DiamantAI newsletter receive an exclusive 33% (!) discount on the book. 👉 Get the fully explained version here Related Projects 📚 Explore my comprehensive guide on RAG techniques to learn how to enhance AI systems with external knowledge retrieval, complementing language model capabilities with rich, up-to-date information. 🤖 Dive into my GenAI Agents Repository for a wide range of AI agent implementations and tutorials, from simple conversational bots to complex, multi-agent systems for various applications. A Community-Driven Knowledge Hub This repository grows stronger with your contributions! Join our vibrant Discord community — the central hub for shaping and advancing this project together 🤝 DiamantAI Discord Community Whether you're a novice eager to learn or an expert ready to share your knowledge, your insights can shape the future of prompt engineering. Join us to propose ideas, get feedback, and collaborate on innovative implementations. For contribution guidelines, please refer to our CONTRIBUTING.md file. Let's advance prompt engineering technology together!
🔗 For discussions on GenAI, or to explore knowledge-sharing opportunities, feel free to connect on LinkedIn. Key Features 🎓 Learn prompt engineering techniques from beginner to advanced levels 🧠 Explore a wide range of prompt structures and applications 📚 Step-by-step tutorials and comprehensive documentation 🛠️ Practical, ready-to-use prompt implementations 🌟 Regular updates with the latest advancements in prompt engineering 🤝 Share your own prompt engineering creations with the community Prompt Engineering Techniques Explore our extensive list of prompt engineering techniques, ranging from basic to advanced: 🌱 Fundamental Concepts Introduction to Prompt Engineering Overview 🔎 A comprehensive introduction to the fundamental concepts of prompt engineering in the context of AI and language models. Implementation 🛠️ Combines theoretical explanations with practical demonstrations, covering basic concepts, structured prompts, comparative analysis, and problem-solving applications. Basic Prompt Structures Overview 🔎 Explores two fundamental types of prompt structures: single-turn prompts and multi-turn prompts (conversations). Implementation 🛠️ Uses OpenAI's GPT model and LangChain to demonstrate single-turn and multi-turn prompts, prompt templates, and conversation chains. Prompt Templates and Variables Overview 🔎 Introduces creating and using prompt templates with variables, focusing on Python and the Jinja2 templating engine. Implementation 🛠️ Covers template creation, variable insertion, conditional content, list processing, and integration with the OpenAI API. 🔧 Core Techniques Zero-Shot Prompting Overview 🔎 Explores zero-shot prompting, allowing language models to perform tasks without specific examples or prior training. Implementation 🛠️ Demonstrates direct task specification, role-based prompting, format specification, and multi-step reasoning using OpenAI and LangChain. Few-Shot Learning and In-Context Learning Overview 🔎 Covers Few-Shot Learning and In-Context Learning techniques using OpenAI's GPT models and the LangChain library. Implementation 🛠️ Implements basic and advanced few-shot learning, in-context learning, and best practices for example selection and evaluation. Chain of Thought (CoT) Prompting Overview 🔎 Introduces Chain of Thought (CoT) prompting, encouraging AI models to break down complex problems into step-by-step reasoning processes. Implementation 🛠️ Covers basic and advanced CoT techniques, applying them to various problem-solving scenarios and comparing results with standard prompts. 🔍 Advanced Strategies Self-Consistency and Multiple Paths of Reasoning Overview 🔎 Explores techniques for generating diverse reasoning paths and aggregating results to improve AI-generated answers. Implementation 🛠️ Demonstrates designing diverse reasoning prompts, generating multiple responses, implementing aggregation methods, and applying self-consistency checks. Constrained and Guided Generation Overview 🔎 Focuses on techniques to set up constraints for model outputs and implement rule-based generation. Implementation 🛠️ Uses LangChain's PromptTemplate for structured prompts, implements constraints, and explores rule-based generation techniques. Role Prompting Overview 🔎 Explores assigning specific roles to AI models and crafting effective role descriptions. Implementation 🛠️ Demonstrates creating role-based prompts, assigning roles to AI models, and refining role descriptions for various scenarios. 
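To make the core techniques above concrete, here is a small, hypothetical example that combines few-shot examples with a chain-of-thought instruction using the OpenAI Python SDK; the model name and prompt wording are placeholders, not taken from the notebooks.

```python
from openai import OpenAI

client = OpenAI()

# Two worked examples (few-shot) plus a step-by-step instruction (chain of thought).
few_shot_cot_prompt = """Answer the question. Think step by step, then give the final answer.

Q: A shop sells pens at 3 for $2. How much do 12 pens cost?
A: 12 pens is 4 groups of 3 pens. 4 x $2 = $8. Final answer: $8.

Q: A train travels 60 km in 45 minutes. What is its speed in km/h?
A: 45 minutes is 0.75 hours. 60 / 0.75 = 80. Final answer: 80 km/h.

Q: A recipe needs 250 g of flour for 10 cookies. How much flour is needed for 35 cookies?
A:"""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model
    messages=[{"role": "user", "content": few_shot_cot_prompt}],
)
print(response.choices[0].message.content)
```

The worked examples show the model the expected reasoning format, and the step-by-step instruction nudges it to reason before committing to a final answer.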
🚀 Advanced Implementations Task Decomposition in Prompts Overview 🔎 Explores techniques for breaking down complex tasks and chaining subtasks in prompts. Implementation 🛠️ Covers problem analysis, subtask definition, targeted prompt engineering, sequential execution, and result synthesis. Prompt Chaining and Sequencing Overview 🔎 Demonstrates how to connect multiple prompts and build logical flows for complex AI-driven tasks. Implementation 🛠️ Explores basic prompt chaining, sequential prompting, dynamic prompt generation, and error handling within prompt chains. Instruction Engineering Overview 🔎 Focuses on crafting clear and effective instructions for language models, balancing specificity and generality. Implementation 🛠️ Covers creating and refining instructions, experimenting with different structures, and implementing iterative improvement based on model responses. 🎨 Optimization and Refinement Prompt Optimization Techniques Overview 🔎 Explores advanced techniques for optimizing prompts, focusing on A/B testing and iterative refinement. Implementation 🛠️ Demonstrates A/B testing of prompts, iterative refinement processes, and performance evaluation using relevant metrics. Handling Ambiguity and Improving Clarity Overview 🔎 Focuses on identifying and resolving ambiguous prompts and techniques for writing clearer prompts. Implementation 🛠️ Covers analyzing ambiguous prompts, implementing strategies to resolve ambiguity, and exploring techniques for writing clearer prompts. Prompt Length and Complexity Management Overview 🔎 Explores techniques for managing prompt length and complexity when working with large language models. Implementation 🛠️ Demonstrates techniques for balancing detail and conciseness, and strategies for handling long contexts including chunking, summarization, and iterative processing. 🛠️ Specialized Applications Negative Prompting and Avoiding Undesired Outputs Overview 🔎 Explores negative prompting and techniques for avoiding undesired outputs from large language models. Implementation 🛠️ Covers basic negative examples, explicit exclusions, constraint implementation using LangChain, and methods for evaluating and refining negative prompts. Prompt Formatting and Structure Overview 🔎 Explores various prompt formats and structural elements, demonstrating their impact on AI model responses. Implementation 🛠️ Demonstrates creating various prompt formats, incorporating structural elements, and comparing responses from different prompt structures. Prompts for Specific Tasks Overview 🔎 Explores the creation and use of prompts for specific tasks: text summarization, question-answering, code generation, and creative writing. Implementation 🛠️ Covers designing task-specific prompt templates, implementing them using LangChain, executing with sample inputs, and analyzing outputs for each task type. 🌍 Advanced Applications Multilingual and Cross-lingual Prompting Overview 🔎 Explores techniques for designing prompts that work effectively across multiple languages and for language translation tasks. Implementation 🛠️ Covers creating multilingual prompts, implementing language detection and adaptation, designing cross-lingual translation prompts, and handling various writing systems and scripts. Ethical Considerations in Prompt Engineering Overview 🔎 Explores the ethical dimensions of prompt engineering, focusing on avoiding biases and creating inclusive and fair prompts. 
Implementation 🛠️ Covers identifying biases in prompts, implementing strategies to create inclusive prompts, and methods to evaluate and improve the ethical quality of AI outputs. Prompt Security and Safety Overview 🔎 Focuses on preventing prompt injections and implementing content filters in prompts for safe and secure AI applications. Implementation 🛠️ Covers techniques for prompt injection prevention, content filtering implementation, and testing the effectiveness of security and safety measures. Evaluating Prompt Effectiveness Overview 🔎 Explores methods and techniques for evaluating the effectiveness of prompts in AI language models. Implementation 🛠️ Covers setting up evaluation metrics, implementing manual and automated evaluation techniques, and providing practical examples using OpenAI and LangChain. Getting Started To begin exploring and implementing prompt engineering techniques: Clone this repository: Navigate to the technique you're interested in: Follow the detailed implementation guide in each technique's notebook. Contributing We welcome contributions from the community! If you have a new technique or improvement to suggest: Fork the repository Create your feature branch: git checkout -b feature/AmazingFeature Commit your changes: git commit -m 'Add some AmazingFeature' Push to the branch: git push origin feature/AmazingFeature Open a pull request License This project is licensed under a custom non-commercial license - see the LICENSE file for details. ⭐️ If you find this repository helpful, please consider giving it a star! Keywords: Prompt Engineering, AI, Machine Learning, Natural Language Processing, LLM, Language Models, NLP, Conversational AI, Zero-Shot Learning, Few-Shot Learning, Chain of Thought
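As a closing illustration of the Self-Consistency technique listed above: sample several reasoning paths at a non-zero temperature and take a majority vote over the final answers. This is a minimal, hypothetical sketch; the model name, sample count, and answer-parsing convention are placeholders.

```python
from collections import Counter
from openai import OpenAI

client = OpenAI()
question = (
    "If a bat and a ball cost $1.10 together and the bat costs $1.00 more "
    "than the ball, how much does the ball cost?"
)

answers = []
for _ in range(5):
    # Sample an independent reasoning path at a non-zero temperature.
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model
        temperature=1.0,
        messages=[{
            "role": "user",
            "content": f"{question}\nThink step by step, then end with 'Answer: <value>'.",
        }],
    )
    text = reply.choices[0].message.content
    answers.append(text.split("Answer:")[-1].strip())

# Aggregate the sampled paths by majority vote over the final answers.
print(Counter(answers).most_common(1)[0][0])
```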

sdfx
github
LLM Vibe Score0.424
Human Vibe Score0.0045691337642496865
sdfxaiMar 28, 2025

sdfx

SDFX Features | Screenshots | SDFX App Guide | Installation | Run The ultimate no-code platform to build and share AI apps with beautiful UI. Join our Discord Server community for the latest news, video tutorials and demo apps. SDFX enables the creation of straightforward user interfaces for intricate workflows. An SDFX application combines a Comfy workflow with a user interface. The JSON that describes the workflow is enriched with extra meta information about the application and its author, as well as the association between UI components and node widgets. Features Screenshots SDFX Application JSON Structure Guide Installation Run Installation for users already using ComfyUI Locally Why? This project was originally created to meet the needs of users from A1111 (form based UI) and ComfyUI (graph-node based), which are two communities with differing visions. With SDFX, we aimed to merge the benefits of both worlds, without the drawbacks. What SDFX allows, for example, is the creation of complex graphs (as one would do on ComfyUI), but with an overlay of a simpler, high-level UI (such as a form-based interface, with an incredible UI). Thus, in theory, someone could recreate A1111 with SDFX and share the JSON online. This is an initial draft; there is still much to do (mostly the App Creator that will be released soon). Some had lost faith in us, even calling us vaporware. The reality, as you will see by browsing the source code, is that SDFX required a considerable amount of work. It was made by a solo developer, and now the team is growing. We tried to do things right, focusing solely on what we do best: UIs and product design with a modern frontend stack. Therefore, we rely 100% on Comfy's backend, making SDFX fully compatible with ComfyUI. However, installing ComfyUI is not necessary, as everything is abstracted. We also made an effort to simplify the installation process; in most cases, you will only need to double-click on setup.bat / setup.sh and follow the wizard. We hope you will like it, and it's with great pleasure that we share our vision and this repo with you, hoping it will pave the way for many contributions from you, to further the advancement of the open-source AI space. Features Build and share user-friendly apps on top of complex workflows 100% compatible with ComfyUI and all its features Can work with your existing Comfy installation (with our SDFXBridgeForComfy custom node) LiteGraph refactored almost from scratch in TypeScript Animated graph navigation Node bookmarks and advanced graph search Lightning-fast UI instantiation and beautiful high-level components (450x faster than Gradio) UI Debugger (rudimentary for now) Native Custom Nodes Manager (thanks to Dr.Lt.Data) Export and share apps and templates (group nodes export soon) Advanced layer-based image and mask editor (WIP) Advanced checkpoint picker and gallery Advanced input image picker Modern and ultra-fast frontend stack (vitejs, vuejs, electron) Compiles as a native app (Windows, Linux, Mac) or as a webapp Screenshots (images omitted): Graph view, App view, Prompt Timeline Component, UI Debugger, Node Bookmarks, Node Manager. SDFX Application JSON Structure Guide Welcome to the JSON structure guide for SDFX applications.
The following is a comprehensive overview for developers looking to understand and utilize the JSON format for creating user-friendly UI with SDFX. Our aim is to ensure clarity and ease of use, so you can integrate and exchange SDFX apps with confidence. Basic JSON structure of an SDFX app: Application Name name: The name you assign to your application. Meta Information meta: This key houses essential details about your application, for instance: Application Type type: Designated as "sdfx", this key identifies the app as an SDFX application while maintaining compatibility with ComfyUI. This means SDFX apps can be dragged and dropped onto ComfyUI and vice versa. UI Mapping Structure mapping: Specifies the UI structure. Within the mapping, you might find the following structure to describe a Tab component with a checkpoint loader, fully compatible with Tailwind CSS classes: LiteGraph Keys The remaining keys are standard LiteGraph properties used to describe the workflow. UI Components for Mapping Developers can leverage a rich set of UI components for creating user interfaces. Here's a list of available components that can be used and customized with VueJS and Tailwind CSS: Button DragNumber ImageLoader Input ModelPicker Number Preview Prompt PromptTimeline Selector Slider TextArea Toggle BoxDimensions BoxSeed Additionally, HTML elements such as div, p, ul, li, img, iframe, video, and more can be used to enrich the user interface. For layout and structural design, elements like SplitPane, SplitH, SplitV, Tab, TabBox, TabBar, and ToggleSettings offer further customization. The ease of creating new components with VueJS and Tailwind CSS is unmatched, allowing for rapid development and high-quality user interface design. As SDFX moves towards an open-source release, this guide will be invaluable for developers looking to engage with a professional and user-centric platform. Enjoy creating with SDFX, and let the simplicity and power of the JSON structure enhance your application development process. Upcoming Feature: SDFX App Creator Note: Currently, the process of designing your SDFX application and mapping UI components to node parameters is manual. We understand the intricacies involved and are excited to announce that the release of the SDFX App Creator is on the horizon. The SDFX App Creator will let you create your UI mapping by introducing a visual design interface with drag & drop capabilities. This will greatly simplify the process of linking UI controls with the corresponding node parameters in the workflow graph. Stay tuned for this feature. Installation Make sure your system meets the following requirements: Node.js version 18.9.1 npm version 8.19.1 Python 3.11 Git Windows Then open to install dependencies Error says no Python, but it's installed? A common mistake is forgetting to check the option to add Python to the PATH during installation, as it's often unchecked by default in the installer wizard. Make sure Python is added to your system's environment variables to run the script smoothly. Linux/macOS Manual Install To perform a manual installation, follow these steps: Install Frontend Dependencies: Navigate to the src directory of SDFX and install the npm dependencies: Clone and Install ComfyUI: Clone the ComfyUI repository into the root directory of SDFX from ComfyUI GitHub and follow the installation instructions provided in the readme to install ComfyUI dependencies.
Add the custom node SDFXBridgeForComfyUI Follow the instructions on the repository of the custom node SDFXBridgeForComfyUI to add it to your ComfyUI custom_nodes folder. Create Configuration File: Create a file named sdfx.config.json at the root of your project. Follow the instructions provided here to build the configuration file according to your requirements. Run Start ComfyUI Then start SDFX with: Installation for users already using ComfyUI Locally If you already have ComfyUI installed on your machine, follow these steps to integrate SDFX: Clone the SDFXBridgeForComfyUI custom node into your ComfyUI custom_nodes path: For detailed instructions, please refer to the official SDFX for ComfyUI README. Install front-end dependencies and run it: Run Launch the SDFX app with ( for Linux/macOS)
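To visualize the JSON structure described in the guide above, here is a minimal, hypothetical skeleton built in Python. Only the top-level keys (name, meta, type, mapping, plus the standard LiteGraph keys) come from the guide; every field value and nested key below is an invented illustration, not the real SDFX schema.

```python
import json

# Hypothetical skeleton of an SDFX app JSON, based only on the keys named in the
# guide above. Nested field names inside "meta" and "mapping" are illustrative.
sdfx_app = {
    "name": "Simple Text-to-Image",                      # application name
    "meta": {
        "author": "example-author",                      # hypothetical meta fields
        "description": "A form-style UI on top of a Comfy workflow.",
    },
    "type": "sdfx",                                      # identifies the app as an SDFX application
    "mapping": [
        {
            "component": "Tab",
            "class": "p-4",                              # Tailwind CSS classes, as mentioned in the guide
            "children": [
                # Hypothetical association between a UI component and a node widget.
                {"component": "ModelPicker", "node": 4, "widget": "ckpt_name"},
            ],
        }
    ],
    # The remaining keys would be the standard LiteGraph workflow description.
    "nodes": [],
    "links": [],
}

print(json.dumps(sdfx_app, indent=2))
```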

h2o-llmstudio
github
LLM Vibe Score0.499
Human Vibe Score0.04822694170894296
h2oaiMar 28, 2025

h2o-llmstudio

Welcome to H2O LLM Studio, a framework and no-code GUI designed for fine-tuning state-of-the-art large language models (LLMs). Jump to With H2O LLM Studio, you can Quickstart What's New Setup Recommended Install Virtual Environments Run H2O LLM Studio GUI Run H2O LLM Studio GUI using Docker Run H2O LLM Studio with command line interface (CLI) Troubleshooting Data format and example data Training your model Example: Run on OASST data via CLI Model checkpoints Documentation Contributing License With H2O LLM Studio, you can easily and effectively fine-tune LLMs without the need for any coding experience. use a graphical user interface (GUI) specially designed for large language models. finetune any LLM using a large variety of hyperparameters. use recent finetuning techniques such as Low-Rank Adaptation (LoRA) and 8-bit model training with a low memory footprint. use Reinforcement Learning (RL) to finetune your model (experimental) use advanced evaluation metrics to judge generated answers by the model. track and compare your model performance visually. In addition, Neptune and W&B integration can be used. chat with your model and get instant feedback on your model performance. easily export your model to the Hugging Face Hub and share it with the community. Quickstart For questions, discussions, or just hanging out, come and join our Discord! Use a cloud-based runpod.io instance to run the H2O LLM Studio GUI. Using CLI for fine-tuning LLMs: What's New PR 788 New problem type for Causal Regression Modeling allows you to train single-target regression data using LLMs. PR 747 Fully removed RLHF in favor of DPO/IPO/KTO optimization. PR 741 Removing separate max length settings for prompt and answer in favor of a single max_length setting, better resembling the chat_template functionality from transformers. PR 592 Added KTOPairLoss for DPO modeling, allowing you to train models with simple preference data. Data currently needs to be manually prepared by randomly matching positive and negative examples as pairs. PR 592 Starting to deprecate RLHF in favor of DPO/IPO optimization. Training is disabled, but old experiments are still viewable. RLHF will be fully removed in a future release. PR 530 Introduced a new problem type for DPO/IPO optimization. This optimization technique can be used as an alternative to RLHF. PR 288 Introduced Deepspeed for sharded training, allowing you to train larger models on machines with multiple GPUs. Requires NVLink. This feature replaces FSDP and offers more flexibility. Deepspeed requires a system installation of cudatoolkit and we recommend using version 12.1. See Recommended Install. PR 449 New problem type for Causal Classification Modeling allows you to train binary and multiclass models using LLMs. PR 364 User secrets are now handled more securely and flexibly. Support for handling secrets using the 'keyring' library was added. The framework attempts to migrate user settings automatically. Please note that due to current rapid development we cannot guarantee full backwards compatibility of new functionality. We thus recommend pinning the version of the framework to the one you used for your experiments. For resetting, please delete/back up your data and output folders. Setup H2O LLM Studio requires a machine with Ubuntu 16.04+ and at least one recent Nvidia GPU with Nvidia drivers version >= 470.57.02. For larger models, we recommend at least 24GB of GPU memory. For more information about installation prerequisites, see the Set up H2O LLM Studio guide in the documentation.
For a performance comparison of different GPUs, see the H2O LLM Studio performance guide in the documentation. Recommended Install The recommended way to install H2O LLM Studio is using pipenv with Python 3.10. To install Python 3.10 on Ubuntu 16.04+, execute the following commands: System installs (Python 3.10) Installing NVIDIA Drivers (if required) If deploying on a 'bare metal' machine running Ubuntu, one may need to install the required Nvidia drivers and CUDA. The following commands show how to retrieve the latest drivers for a machine running Ubuntu 20.04 as an example. One can update the following based on their OS. Alternatively, one can install cudatoolkit in a conda environment: Virtual environments We offer various ways of setting up the necessary Python environment. Pipenv virtual environment The following command will create a virtual environment and install the dependencies using pipenv: If you are having trouble installing the flash_attn package, consider running instead. This will install the dependencies without the flash_attn package. Note that this will disable the use of Flash Attention 2 and model training will be slower and consume more memory. Nightly Conda virtual environment You can also set up a conda virtual environment, which may deviate from the recommended setup. The contains a command that installs a fresh conda environment with CUDA 12.4 and current nightly PyTorch. Using requirements.txt If you wish to use another virtual environment, you can also install the dependencies using the requirements.txt file: Run H2O LLM Studio GUI You can start H2O LLM Studio using the following command: This command will start the H2O wave server and app. Navigate to (we recommend using Chrome) to access H2O LLM Studio and start fine-tuning your models! If you are running H2O LLM Studio with a custom environment other than Pipenv, you need to start the app as follows: If you are using the nightly conda environment, you can run . Run H2O LLM Studio GUI using Docker Install Docker first by following instructions from NVIDIA Containers. Make sure to have nvidia-container-toolkit installed on your machine as outlined in the instructions. H2O LLM Studio images are stored in the h2oai Docker Hub container repository. Navigate to (we recommend using Chrome) to access H2O LLM Studio and start fine-tuning your models! (Note other helpful docker commands are docker ps and docker kill.) If you prefer to build your own Docker image from source, follow the instructions below. Run H2O LLM Studio with command line interface (CLI) You can also use H2O LLM Studio with the command line interface (CLI) and specify the configuration .yaml file that contains all the experiment parameters. To finetune using H2O LLM Studio with CLI, activate the pipenv environment by running make shell, and then use the following command: To run on multiple GPUs in DDP mode, run the following command: By default, the framework will run on the first k GPUs. If you want to specify specific GPUs to run on, use the CUDA_VISIBLE_DEVICES environment variable before the command. To start an interactive chat with your trained model, use the following command: where experiment_name is the output folder of the experiment you want to chat with (see configuration). The interactive chat will also work with models that were finetuned using the UI. To publish the model to Hugging Face, use the following command: path_to_experiment is the output folder of the experiment.
device is the target device for running the model, either 'cpu' or 'cuda:0'. Default is 'cuda:0'. api_key is the Hugging Face API Key. If user logged in, it can be omitted. user_id is the Hugging Face user ID. If user logged in, it can be omitted. model_name is the name of the model to be published on Hugging Face. It can be omitted. safe_serialization is a flag indicating whether safe serialization should be used. Default is True. Troubleshooting If running on cloud based machines such as runpod, you may need to set the following environment variable to allow the H2O Wave server to accept connections from the proxy: If you are experiencing timeouts when running the H2O Wave server remotely, you can increase the timeout by setting the following environment variables: All default to 5 (seconds). Increase them if you are experiencing timeouts. Use -1 to disable the timeout. Data format and example data For details on the data format required when importing your data or example data that you can use to try out H2O LLM Studio, see Data format in the H2O LLM Studio documentation. Training your model With H2O LLM Studio, training your large language model is easy and intuitive. First, upload your dataset and then start training your model. Start by creating an experiment. You can then monitor and manage your experiment, compare experiments, or push the model to Hugging Face to share it with the community. Example: Run on OASST data via CLI As an example, you can run an experiment on the OASST data via CLI. For instructions, see Run an experiment on the OASST data guide in the H2O LLM Studio documentation. Model checkpoints All open-source datasets and models are posted on H2O.ai's Hugging Face page and our H2OGPT repository. Documentation Detailed documentation and frequently asked questions (FAQs) for H2O LLM Studio can be found at . If you wish to contribute to the docs, navigate to the /documentation folder of this repo and refer to the README.md for more information. Contributing We are happy to accept contributions to the H2O LLM Studio project. Please refer to the CONTRIBUTING.md file for more information. License H2O LLM Studio is licensed under the Apache 2.0 license. Please see the LICENSE file for more information.
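Once a fine-tuned model has been published to the Hugging Face Hub as described above, it can be loaded back with standard transformers code. This is a minimal sketch under the assumption of a causal language model; the repository id is a placeholder for whatever name you published under, and the exact prompt template depends on how the experiment was configured.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder repo id - use the name you published the experiment under.
repo_id = "your-username/your-finetuned-model"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)

# Note: the required prompt template depends on the experiment's settings.
inputs = tokenizer("Why is drinking water healthy?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```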

AI-Scalpel-Trading-Bot
github
LLM Vibe Score0.491
Human Vibe Score0.09890315835809398
hackobiMar 28, 2025

AI-Scalpel-Trading-Bot

AI-Scalpel-Trading-Bot Disclaimer This software is for educational purposes only. Do not risk money which you are afraid to lose. USE THE SOFTWARE AT YOUR OWN RISK. THE AUTHORS AND ALL AFFILIATES ASSUME NO RESPONSIBILITY FOR YOUR TRADING RESULTS. Always start by running a trading bot in Dry-run and do not engage money before you understand how it works and what profit/loss you should expect. This is an implementation of freqtrade where different machine learning implementations will be tested. Freqtrade is a free and open source crypto trading bot written in Python. It is designed to support all major exchanges and be controlled via Telegram. It contains backtesting, plotting and money management tools as well as strategy optimization by machine learning. Exchange marketplaces supported [X] Bittrex [X] Binance (*Note for Binance users) [ ] 113 others to test. (Some of them might not work) Documentation. Features [x] Based on Python 3.6+: For botting on any operating system - Windows, macOS and Linux. [x] Persistence: Persistence is achieved through sqlite. [x] Dry-run: Run the bot without risking real money. [x] Backtesting: Run a simulation of your buy/sell strategy. [x] Strategy Optimization by machine learning: Use machine learning to optimize your buy/sell strategy parameters with real exchange data. [x] Edge position sizing Calculate your win rate, risk reward ratio, the best stoploss and adjust your position size before taking a position for each specific market. Learn more. [x] Whitelist crypto-currencies: Select which crypto-currency you want to trade or use dynamic whitelists. [x] Blacklist crypto-currencies: Select which crypto-currency you want to avoid. [x] Manageable via Telegram: Manage the bot with Telegram. [x] Display profit/loss in fiat: Display your profit/loss in 33 fiat currencies. [x] Daily summary of profit/loss: Provide a daily summary of your profit/loss. [x] Performance status report: Provide a performance status of your current trades. Quick start Freqtrade provides a Linux/macOS script to install all dependencies and help you to configure the bot. Other installations. Basic Usage Bot commands Telegram RPC commands Telegram is not mandatory. However, this is a great way to control your bot. More details in our documentation /start: Starts the trader /stop: Stops the trader /status [table]: Lists all open trades /count: Displays number of open trades /profit: Lists cumulative profit from all finished trades /forcesell |all: Instantly sells the given trade (ignoring minimum_roi). /performance: Show performance of each finished trade grouped by pair /balance: Show account balance per currency /daily : Shows profit or loss per day, over the last n days /help: Show help message /version: Show version Development branches The project is currently set up in two main branches: develop - This branch often has new features, but might also cause breaking changes. master - This branch contains the latest stable release. The bot 'should' be stable on this branch, and is generally well tested. feat/* - These are feature branches, which are being worked on heavily. Please don't use these unless you want to test a specific feature. A note on Binance For Binance, please add "BNB/" to your blacklist to avoid issues. Accounts holding BNB use it to pay for fees - if your first trade happens to be on BNB, further trades will consume this position and make the initial BNB order unsellable as the expected amount is not there anymore.
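Strategies in freqtrade, and therefore in this fork, are Python classes implementing the IStrategy interface. The skeleton below is purely illustrative and is not the bot's actual strategy: the indicator, thresholds, and ROI values are placeholders, and attribute names (for example ticker_interval versus timeframe) vary between freqtrade versions, so check the documentation for your version.

```python
import talib.abstract as ta
from pandas import DataFrame
from freqtrade.strategy.interface import IStrategy


class ExampleScalpStrategy(IStrategy):
    """Illustrative RSI-based strategy skeleton - not financial advice."""

    minimal_roi = {"0": 0.02}   # take profit at 2% (placeholder)
    stoploss = -0.05            # 5% stoploss (placeholder)
    ticker_interval = "5m"      # called `timeframe` in newer freqtrade versions

    def populate_indicators(self, dataframe: DataFrame, metadata: dict) -> DataFrame:
        # Compute the indicators the entry/exit rules rely on.
        dataframe["rsi"] = ta.RSI(dataframe, timeperiod=14)
        return dataframe

    def populate_buy_trend(self, dataframe: DataFrame, metadata: dict) -> DataFrame:
        # Flag candles where the (placeholder) oversold condition is met.
        dataframe.loc[dataframe["rsi"] < 30, "buy"] = 1
        return dataframe

    def populate_sell_trend(self, dataframe: DataFrame, metadata: dict) -> DataFrame:
        # Flag candles where the (placeholder) overbought condition is met.
        dataframe.loc[dataframe["rsi"] > 70, "sell"] = 1
        return dataframe
```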
Support Help / Slack For any questions not covered by the documentation or for further information about the bot, I encourage you to join freqtrade's Slack channel. Click here to join the Slack channel. Bugs / Issues If you discover a bug in the bot, please search the issue tracker first. If it hasn't been reported, please create a new issue and ensure you follow the template guide so that our team can assist you as quickly as possible. Feature Requests Do you have a great idea to improve the bot that you want to share? Please first search whether this feature has already been discussed. If it hasn't been requested, please create a new request and ensure you follow the template guide so that it does not get lost in the bug reports. Pull Requests Feel like the bot is missing a feature? Keep the pull requests coming! Please read the Contributing document to understand the requirements before sending pull requests. Coding is not a necessity to contribute - maybe start with improving our documentation? Issues labeled good first issue can be good first contributions, and will help get you familiar with the codebase. Note: before starting any major new feature work, please open an issue describing what you are planning to do or talk to the team on Slack. This will ensure that interested parties can give valuable feedback on the feature, and let others know that you are working on it. Important: Always create your PR against the develop branch, not master. Requirements Up-to-date clock The clock must be accurate, synchronized to an NTP server frequently to avoid problems communicating with the exchanges. Min hardware required To run this bot we recommend a cloud instance with a minimum of the following (advised) system requirements: 2GB RAM, 1GB disk space, 2vCPU Software requirements Python 3.6.x pip git TA-Lib virtualenv (Recommended) Docker (Recommended)

RD-Agent
github
LLM Vibe Score0.548
Human Vibe Score0.27921589729164453
microsoftMar 28, 2025

RD-Agent

🖥️ Live Demo | 🎥 Demo Video ▶️YouTube | 📖 Documentation | 📃 Papers Data Science Agent Preview Check out our demo video showcasing the current progress of our Data Science Agent under development: https://github.com/user-attachments/assets/3eccbecb-34a4-4c81-bce4-d3f8862f7305 📰 News | 🗞️ News | 📝 Description | | -- | ------ | | Support LiteLLM Backend | We now fully support LiteLLM as a backend for integration with multiple LLM providers. | | More General Data Science Agent | 🚀Coming soon! | | Kaggle Scenario release | We release Kaggle Agent, try the new features! | | Official WeChat group release | We created a WeChat group, welcome to join! (🗪QR Code) | | Official Discord release | We launched our first chat channel on Discord (🗪) | | First release | RDAgent is released on GitHub | 🌟 Introduction RDAgent aims to automate the most critical and valuable aspects of the industrial R&D process, and we begin by focusing on data-driven scenarios to streamline the development of models and data. Methodologically, we have identified a framework with two key components: 'R' for proposing new ideas and 'D' for implementing them. We believe that the automatic evolution of R&D will lead to solutions of significant industrial value. R&D is a very general scenario. RDAgent can serve as your 💰 Automatic Quant Factory (🎥Demo Video|▶️YouTube) 🤖 Data Mining Agent: Iteratively proposing data & models (🎥Demo Video 1|▶️YouTube) (🎥Demo Video 2|▶️YouTube) and implementing them by gaining knowledge from data. 🦾 Research Copilot: Automatically read research papers (🎥Demo Video|▶️YouTube) / financial reports (🎥Demo Video|▶️YouTube) and implement model structures or build datasets. 🤖 Kaggle Agent: Auto Model Tuning and Feature Engineering (🎥 Demo Video coming soon) and implementing them to achieve more in competitions. ... You can click the links above to view the demos. We're continuously adding more methods and scenarios to the project to enhance your R&D processes and boost productivity. Additionally, you can take a closer look at the examples in our 🖥️ Live Demo. ⚡ Quick start You can try the demos above by running the following command: 🐳 Docker installation. Users must ensure Docker is installed before attempting most scenarios. Please refer to the official 🐳Docker page for installation instructions. Ensure the current user can run Docker commands without using sudo. You can verify this by executing docker run hello-world. 🐍 Create a Conda Environment Create a new conda environment with Python (3.10 and 3.11 are well-tested in our CI): Activate the environment: 🛠️ Install the RDAgent You can directly install the RDAgent package from PyPI: 💊 Health check rdagent provides a health check that currently checks two things: whether the Docker installation was successful, and whether the default port used by the rdagent UI is occupied. ⚙️ Configuration The demos require the following abilities: ChatCompletion, json_mode, embedding query. For example, if you are using the OpenAI API, you have to configure your GPT model in the .env file like this. However, not every API service supports these features by default. For example, for Azure OpenAI you have to configure your GPT model in the .env file like this. We now support LiteLLM as a backend for integration with multiple LLM providers. If you use the LiteLLM backend, you can configure it as follows: For more configuration information, please refer to the documentation.
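Before launching a demo, it can help to confirm that the endpoint configured in your .env actually supports the three abilities listed above (ChatCompletion, json_mode, and embedding query). The snippet below is a hypothetical sanity check using the OpenAI Python SDK; the environment variable and model names are placeholders, and RD-Agent itself reads its configuration from .env rather than from this script.

```python
import os
from openai import OpenAI

# Placeholder: use whatever key and model names you configured in your .env file.
client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

# 1) Plain chat completion.
chat = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Reply with the word OK."}],
)
print(chat.choices[0].message.content)

# 2) JSON mode.
json_chat = client.chat.completions.create(
    model="gpt-4o-mini",
    response_format={"type": "json_object"},
    messages=[{"role": "user", "content": 'Return {"status": "ok"} as JSON.'}],
)
print(json_chat.choices[0].message.content)

# 3) Embedding query.
emb = client.embeddings.create(model="text-embedding-3-small", input="hello")
print(len(emb.data[0].embedding))
```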
🚀 Run the Application The 🖥️ Live Demo is implemented by the following commands (each item represents one demo; you can select the one you prefer): Run the Automated Quantitative Trading & Iterative Factors Evolution: Qlib self-loop factor proposal and implementation application Run the Automated Quantitative Trading & Iterative Model Evolution: Qlib self-loop model proposal and implementation application Run the Automated Medical Prediction Model Evolution: Medical self-loop model proposal and implementation application (1) Apply for an account at PhysioNet. (2) Request access to FIDDLE preprocessed data: FIDDLE Dataset. (3) Place your username and password in .env. Run the Automated Quantitative Trading & Factors Extraction from Financial Reports: Run the Qlib factor extraction and implementation application based on financial reports Run the Automated Model Research & Development Copilot: model extraction and implementation application Run the Automated Kaggle Model Tuning & Feature Engineering: self-loop model proposal and feature engineering implementation application Using sf-crime (San Francisco Crime Classification) as an example. Register and log in on the Kaggle website. Configuring the Kaggle API. (1) Click on the avatar (usually in the top right corner of the page) -> Settings -> Create New Token; a file called kaggle.json will be downloaded. (2) Move kaggle.json to ~/.config/kaggle/ (3) Modify the permissions of the kaggle.json file. Reference command: chmod 600 ~/.config/kaggle/kaggle.json Join the competition: Click Join the competition -> I Understand and Accept at the bottom of the competition details page. Description of the above example: Kaggle competition data contains two parts: competition description file (json file) and competition dataset (zip file). We prepare the competition description file for you; the competition dataset will be downloaded automatically when you run the program, as in the example. If you want to download the competition description file automatically, you need to install chromedriver. The instructions for installing chromedriver can be found in the documentation. The list of available competitions can be found here. 🖥️ Monitor the Application Results You can run the following command for our demo program to see the run logs. Note: although port 19899 is not commonly used, before you run this demo you need to check whether it is occupied. If it is, please change it to another port that is not occupied. You can check if a port is occupied by running the following command. 🏭 Scenarios We have applied RD-Agent to multiple valuable data-driven industrial scenarios. 🎯 Goal: Agent for Data-driven R&D In this project, we are aiming to build an Agent to automate Data-Driven R&D that can 📄 Read real-world material (reports, papers, etc.) and extract key formulas and descriptions of features and models of interest, which are the key components of data-driven R&D. 🛠️ Implement the extracted formulas (e.g., features, factors, and models) in runnable code. Since an LLM's ability to implement everything in one shot is limited, build an evolving process for the agent to improve performance by learning from feedback and knowledge. 💡 Propose new ideas based on current knowledge and observations. 📈 Scenarios/Demos In the two key areas of data-driven scenarios, model implementation and data building, our system aims to serve two main roles: 🦾Copilot and 🤖Agent. The 🦾Copilot follows human instructions to automate repetitive tasks.
The 🤖Agent, being more autonomous, actively proposes ideas for better results in the future. The supported scenarios are listed below:
| Scenario/Target | Model Implementation | Data Building |
| -- | -- | -- |
| 💹 Finance | 🤖 Iteratively Proposing Ideas & Evolving ▶️YouTube | 🤖 Iteratively Proposing Ideas & Evolving ▶️YouTube 🦾 Auto reports reading & implementation ▶️YouTube |
| 🩺 Medical | 🤖 Iteratively Proposing Ideas & Evolving ▶️YouTube | - |
| 🏭 General | 🦾 Auto paper reading & implementation ▶️YouTube 🤖 Auto Kaggle Model Tuning | 🤖 Auto Kaggle Feature Engineering |
RoadMap: Currently, we are working hard to add new features to the Kaggle scenario.
Different scenarios vary in their entry points and configuration. Please check the detailed setup tutorial in the scenario documents.
Here is a gallery of successful explorations (5 traces shown in the 🖥️ Live Demo). You can download and view the execution traces using this command from the documentation.
Please refer to 📖readthedocs_scen for more details of the scenarios.
⚙️ Framework
Automating the R&D process in data science is a highly valuable yet underexplored area in industry. We propose a framework to push the boundaries of this important research field. The research questions within this framework can be divided into three main categories:
| Research Area | Paper/Work List |
|--------------------|-----------------|
| Benchmark the R&D abilities | Benchmark |
| Idea proposal: Explore new ideas or refine existing ones | Research |
| Ability to realize ideas: Implement and execute ideas | Development |
We believe that the key to delivering high-quality solutions lies in the ability to evolve R&D capabilities. Agents should learn like human experts, continuously improving their R&D skills.
More documents can be found in the 📖 readthedocs.
📃 Paper/Work list
📊 Benchmark
Towards Data-Centric Automatic R&D !image
🔍 Research
In a data mining expert's daily research and development process, they propose a hypothesis (e.g., a model structure like RNN can capture patterns in time-series data), design experiments (e.g., finance data contains time series, so we can verify the hypothesis in this scenario), implement the experiment as code (e.g., a PyTorch model structure), and then execute the code to get feedback (e.g., metrics, loss curve, etc.). The experts learn from the feedback and improve in the next iteration.
Based on the principles above, we have established a basic method framework that continuously proposes hypotheses, verifies them, and gets feedback from real-world practice. This is the first scientific research automation framework that supports linking with real-world verification. For more details, please refer to our 🖥️ Live Demo page.
🛠️ Development
Collaborative Evolving Strategy for Automatic Data-Centric Development !image
🤝 Contributing
We welcome contributions and suggestions to improve RD-Agent. Please refer to the Contributing Guide for more details on how to contribute. Before submitting a pull request, ensure that your code passes the automatic CI checks.
📝 Guidelines
This project welcomes contributions and suggestions. Contributing to this project is straightforward and rewarding. Whether it's solving an issue, addressing a bug, enhancing documentation, or even correcting a typo, every contribution is valuable and helps improve RDAgent.
To get started, you can explore the issues list, or search for TODO: comments in the codebase by running the command grep -r "TODO:".
Before we released RD-Agent as an open-source project on GitHub, it was an internal project within our group. Unfortunately, the internal commit history was not preserved when we removed some confidential code. As a result, some contributions from our group members, including Haotian Chen, Wenjun Feng, Haoxue Wang, Zeqi Ye, Xinjie Shen, and Jinhui Li, were not included in the public commits. ⚖️ Legal disclaimer The RD-agent is provided “as is”, without warranty of any kind, express or implied, including but not limited to the warranties of merchantability, fitness for a particular purpose and noninfringement. The RD-agent is aimed to facilitate research and development process in the financial industry and not ready-to-use for any financial investment or advice. Users shall independently assess and test the risks of the RD-agent in a specific use scenario, ensure the responsible use of AI technology, including but not limited to developing and integrating risk mitigation measures, and comply with all applicable laws and regulations in all applicable jurisdictions. The RD-agent does not provide financial opinions or reflect the opinions of Microsoft, nor is it designed to replace the role of qualified financial professionals in formulating, assessing, and approving finance products. The inputs and outputs of the RD-agent belong to the users and users shall assume all liability under any theory of liability, whether in contract, torts, regulatory, negligence, products liability, or otherwise, associated with use of the RD-agent and any inputs and outputs thereof.

generative-ai-use-cases-jp
github
LLM Vibe Score0.703
Human Vibe Score0.7656748140276302
aws-samplesMar 28, 2025

generative-ai-use-cases-jp

Generative AI Use Cases JP (abbreviated: GenU) is an application implementation that comes with a collection of business use cases for safely putting generative AI to work in your operations.
[!IMPORTANT] GenU was upgraded to v3 in January 2025. The upgrade involves some breaking changes, so please check the release notes before upgrading.
GenU Usage Patterns
This section introduces GenU's features and options by usage pattern. For a comprehensive list of deployment options, see here.
[!TIP] Click a usage pattern to see its details.
I want to try out generative AI use cases
GenU provides a wide variety of generative AI use cases out of the box. These use cases can serve as seeds of ideas for applying generative AI to your business, or be used in your work as they are. We plan to keep adding further refined use cases over time. If you don't need them, you can also hide specific use cases with the "hide specific use cases" option. The use cases provided by default are as follows.
Chat: Interact with a large language model (LLM) in a chat format. Having a platform for talking directly with the LLM lets you respond quickly to narrow or newly emerging use cases, and it is also useful as an environment for testing prompt engineering.
Text Generation: Generating text in any context is one of the tasks LLMs are best at. They can produce articles, reports, emails, and more.
Summarization: LLMs are good at summarizing large amounts of text. Beyond simple summarization, you can also provide the text as context and pull out the information you need interactively. For example, after loading a contract you can ask things like "What are the conditions for XXX?" or "What is the amount for YYY?"
Writing: LLMs can not only check for typos and omissions but also suggest improvements from a more objective perspective, taking the flow and content of the text into account. You can expect to raise quality by having the LLM point out issues you had not noticed yourself before showing your work to others.
Translation: LLMs trained on multiple languages can also translate. Beyond simple translation, they can reflect various specified context information, such as tone and target audience, in the translation.
Web Content Extraction: Extracts the necessary information from web content such as blogs and documents. The LLM removes irrelevant information and formats the result as well-structured text. The extracted content can be used in other use cases such as summarization and translation.
Image Generation: Image generation AI can create new images from text or images. It lets you visualize ideas instantly, which can make design work and other tasks more efficient. In this feature, you can have the LLM help you write the prompt.
Video Generation: Video generation AI creates short videos from text. The generated videos can be used as material in a variety of situations.
Video Analysis: With multimodal models, you can now input images as well as text. In this feature, you provide video image frames and text and ask the LLM to analyze them.
Diagram Generation: Diagram generation visualizes text and content on any topic using an appropriate diagram. You can easily generate diagrams from text, so even non-programmers and non-designers can create flowcharts and other diagrams efficiently.
I want to do RAG
RAG is a technique that supplies the LLM with up-to-date information and domain knowledge it is otherwise missing, enabling it to answer questions it could not answer on its own. Files accumulated within your organization, such as PDF, Word, and Excel documents, become the information source. Because RAG only allows answers grounded in evidence, it also has the effect of preventing the LLM from giving the plausible-sounding but incorrect answers it is prone to. GenU provides a RAG Chat use case. Both Amazon Kendra and Knowledge Bases can be used as the information source for RAG Chat. When using Amazon Kendra, you can use a manually created S3 bucket or Kendra index as-is. When using Knowledge Bases, advanced RAG features such as Advanced Parsing, chunking strategy selection, query decomposition, and reranking are available. Knowledge Bases also supports metadata filters, which let you meet requirements such as "switch the accessible data sources per organization" or "let users set filters from the UI."
I want to use AI agents or Bedrock Flows I built myself within the company
When you enable agents in GenU, a web search agent and a Code Interpreter agent are created. The web search agent searches the web for information to answer a user's question, for example "What is AWS GenU?". The Code Interpreter agent can run code to handle a user's request, for example "Draw a scatter plot with some dummy data." The web search and Code Interpreter agents are fairly basic, and some teams will want more practical agents that are closer to their actual work. GenU provides a feature for importing agents you created manually or with other assets. By using GenU as a platform for rolling out agents, you can take advantage of the rich security options and SAML authentication GenU provides to spread practical agents within your organization. You can also hide the standard use cases you don't need and display agents inline to use GenU as a platform more focused on agents. Bedrock Flows can be imported in the same way, so please make use of that as well.
I want to create my own use cases
GenU provides a feature called the Use Case Builder, which lets you create your own use cases by writing a prompt template in natural language. A dedicated use case screen is generated automatically from the prompt template alone, so no changes to the GenU codebase are required. Created use cases can be shared with every user who can log in to the application, not just used personally. The Use Case Builder can be disabled if you don't need it. For details on the Use Case Builder, please see this blog post.
The Use Case Builder lets you create use cases where you type text into a form or attach files, but depending on your requirements a chat UI may be a better fit. In such cases, use the system prompt saving feature of the Chat use case. By saving system prompts, you can create the "bots" you need for your work with a single click. For example, you can create "a bot that reviews whatever source code you paste in" or "a bot that extracts email addresses from whatever you enter." Chat conversation histories can also be shared with logged-in users, and system prompts can be imported from shared conversation histories.
GenU is OSS, so you can also customize it and add your own use cases. In that case, be careful about conflicts with the GenU main branch.
Deployment
[!IMPORTANT] Enable the modelIds (text generation) and imageGenerationModelIds (image generation) for the modelRegion listed in /packages/cdk/cdk.json (on the Amazon Bedrock Model access page). A rough sketch of this setting appears at the end of this entry.
GenU is deployed with the AWS Cloud Development Kit (CDK). If you cannot prepare a CDK execution environment, refer to the following deployment methods: deployment using AWS CloudShell (when it is difficult to prepare a local environment), or the Workshop.
First, run the following command. All commands must be run at the root of the repository.
If you have never used CDK, a one-time bootstrap step is required. The following command is not needed in environments that have already been bootstrapped.
Next, deploy the AWS resources with the following command. Please wait until the deployment completes (it can take around 20 minutes).
Architecture
!arch.drawio.png
Other
Deployment options / How to update / How to set up a local development environment / How to delete resources / How to use it like a native app / Using the browser extension
Cost Estimates
We publish example configurations and cost estimates for using GenU. (Pricing is pay-as-you-go, and actual costs vary with usage.) Simple version (no RAG) estimate / Estimate with RAG (Amazon Kendra) / Estimate with RAG (Knowledge Base)
Customer Case Studies
| Customer | Quote |
|:--------|:---------|
| | Yasashii Te Co., Ltd. Thanks to GenU, we were able to deliver added value to the people we serve and improve our employees' productivity. We will keep evolving, turning "work as it used to be" into enjoyable work: "from smooth to exciting"! ・See the case details ・See the case page |
| | Takihyo Co., Ltd. Using generative AI, we streamlined internal operations and cut more than 450 hours of work. We applied Amazon Bedrock to clothing design and are developing digital talent. ・See the case page |
| | Salsonido Inc. By building on GenU, which is available as a ready-made solution, we were able to start improving our business processes with generative AI quickly. ・See the case details ・Applied services |
| | Tamura Corporation The application samples AWS publishes on GitHub are full of features that can be tested immediately. Using them as-is made it easy to select the features that fit us and shortened the development time for the final system. ・See the case details |
| | JDSC Inc. Amazon Bedrock lets us use LLMs with our data securely. Being able to switch to the best model for each purpose helped us improve speed and accuracy while keeping costs down. ・See the case details |
| | iret, Inc. To accumulate and organize internal knowledge for BANDAI NAMCO Amusement Inc.'s generative AI initiatives, we built a use case site based on the Generative AI Use Cases JP published by AWS. iret, Inc. supported the design, construction, and development of this project. ・BANDAI NAMCO Amusement's cloud adoption case study |
| | Idealog Inc. We feel it streamlines our work even more than previous generative AI tools. Because we use Amazon Bedrock, which does not use input or output data for model training, we are also comfortable on the security side. ・See the case details ・Applied services |
| | ESTYLE Inc. Using GenU, we built a generative AI environment in a short time and promoted knowledge sharing within the company. ・See the case details |
| | Meidensha Corporation Using AWS services such as Amazon Bedrock and Amazon Kendra, we were able to set up an environment for using generative AI quickly and securely. It contributes to employee productivity through automatic meeting minutes generation and internal information search. ・See the case details |
| | Sankyo Tateyama, Inc. Information that used to be buried within the company can now be found quickly thanks to Amazon Kendra. By referring to GenU, we were able to quickly deliver the features we were looking for, such as meeting minutes generation. ・See the case details |
| | Oisix ra daichi Inc. The use case development project based on GenU helped us understand the required resources, project structure, external support, and talent development, and gave us a clear picture of how to roll out generative AI across the company. ・See the case page |
| | San-A Co., Ltd. Amazon Bedrock dramatically improved our engineers' productivity and accelerated the migration of the environment we had built in-house to the cloud. ・See the case details ・See the case page |
If you would like your use case featured, please contact us via an Issue.
References
Blog: GenU Use Case Builder: create and distribute generative AI apps internally with no code
Blog: How to make a RAG project succeed #1, or how to fail fast
Blog: How to debug RAG Chat to improve accuracy
Blog: Customizing GenU with no coding using the Amazon Q Developer CLI
Blog: How to customize Generative AI Use Cases JP
Blog: Let generative AI turn down unreasonable requests (embedding generative AI in the browser)
Blog: Developing an Interpreter with Amazon Bedrock!
Video: The appeal and usage of Generative AI Use Cases JP (GenU) for thinking through generative AI use cases
Security
See CONTRIBUTING for more information.
License
This library is licensed under the MIT-0 License. See the LICENSE file.
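As a rough illustration of the cdk.json setting referenced in the deployment section above, the snippet below shows the kind of structure involved; the surrounding file layout and the model IDs are assumptions, so follow the GenU deployment documentation for the authoritative format.

```jsonc
// packages/cdk/cdk.json (illustrative sketch; the keys shown are the ones named
// above, and the model IDs are placeholder assumptions)
{
  "context": {
    "modelRegion": "us-east-1",
    "modelIds": ["anthropic.claude-3-5-sonnet-20240620-v1:0"],
    "imageGenerationModelIds": ["amazon.titan-image-generator-v2:0"]
  }
}
```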

instill-core
github
LLM Vibe Score0.515
Human Vibe Score0.023472450495103967
instill-aiMar 28, 2025

instill-core

🔮 Instill Core A complete unstructured data solution: ETL processing, AI-readiness, open-source LLM hosting, and RAG capabilities in one powerful platform. Quick start Follow the installation steps below or documentation for more details to build versatile AI applications locally. What is Instill Core? Instill Core is an end-to-end AI platform for data, pipeline and model orchestration. 🔮 Instill Core simplifies infrastructure hassle and encompasses these core features: 💧 Pipeline: Quickly build versatile AI-first APIs or automated workflows. ⚗️ Model: Deploy and monitor AI models without GPU infrastructure hassles. 💾 Artifact: Transform unstructured data (e.g., documents, images, audio, video) into AI-ready formats. ⚙️ Component: Connect essential building blocks to construct powerful pipelines. What can you build? 📖 Parsing PDF Files to Markdown: Cookbook 🧱 Generating Structured Outputs from LLMs: Cookbook & Tutorial 🕸️ Web scraping & Google Search with Structured Insights 🌱 Instance segmentation on microscopic plant stomata images: Cookbook See Examples for more! Installation Prerequisites | Operating System | Requirements and Instructions | | ---------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | macOS or Linux | Instill Core works natively | | Windows | • Use Windows Subsystem for Linux (WSL2)• Install latest yq from GitHub Repository• Install latest Docker Desktop and enable WSL2 integration (tutorial)• (Optional) Install cuda-toolkit on WSL2 (NVIDIA tutorial) | | All Systems | • Docker Engine v25 or later• Docker Compose v2 or later• Install latest stable Docker and Docker Compose | Steps Use stable release version Execute the following commands to pull pre-built images with all the dependencies to launch: [!NOTE] We have restructured our project repositories. If you need to access 🔮 Instill Core projects up to version v0.13.0-beta, please refer to the instill-ai/deprecated-core repository. Use the latest version for local development Execute the following commands to build images with all the dependencies to launch: [!IMPORTANT] Code in the main branch tracks under-development progress towards the next release and may not work as expected. If you are looking for a stable alpha version, please use latest release. 🚀 That's it! Once all the services are up with health status, the UI is ready to go at . Please find the default login credentials in the documentation. To shut down all running services: Deployment Visit the Deployment Overview for more details. Client Access 📺 Console ⌨️ CLI 📦 SDK: Python SDK TypeScript SDK Stay tuned, as more SDKs are on the way! Documentation Please visit our official documentation for more. Additional resources: API Reference Cookbooks Tutorials Examples Contributing We welcome contributions from our community! Checkout the methods below: Cookbooks: Help us create helpful pipelines and guides for the community. Visit our Cookbook repository to get started. Issues: Contribute to improvements by raising tickets using templates here or discuss in existing ones you think you can help with. Community Standards We are committed to maintaining a respectful and welcoming atmosphere for all contributors. 
Before contributing, please read: Contributing Guidelines Code of Conduct Support Get help by joining our Discord community where you can post any questions on our #ask-for-help channel. Contributors ✨ Thank you to all these wonderful people (emoji key): Vibhor Bhatt Miguel Ortiz Sajda Kabir Henry Chen Hari Bhandari Shiva Gaire Zubeen ShihChun-H Ikko Eltociear Ashimine Farookh Zaheer Siddiqui Brian Gallagher hairyputtar David Marx Deniz Parlak Po-Yu Chen Po Chun Chiu Sarthak HR Wu phelan Chang, Hui-Tang Xiaofei Du Ping-Lin Chang Tony Wang Pratik date Juan Vallés Naman Anand totuslink Praharsh Jain Utsav Paul CaCaBlocker Rafael Melo Jeremy Shih Romit Mohane ChunHao Amelia C 楊竣凱 andre.liang Zoodane George Strong Anni Mubeen Kodvavi RCKT Wojciech Bandzerewicz Gary Leo felixcorleone Zoe Daniel Manul Thanura Akash Jana Anish0203 Prathamesh Tugaonkar Shubham This project follows the all-contributors specification. Contributions of any kind welcome! License See the LICENSE file for licensing information.

CrewAI-Studio
github
LLM Vibe Score0.488
Human Vibe Score0.0100269728798468
strnadMar 28, 2025

CrewAI-Studio

CrewAI Studio
Welcome to CrewAI Studio! This application provides a user-friendly interface written in Streamlit for interacting with CrewAI, suitable even for those who don't want to write any code. Follow the steps below to install and run the application using Docker/docker-compose or Conda/venv.
Features
Multi-platform support: Works on Windows, Linux, and macOS.
No coding required: User-friendly interface for interacting with CrewAI.
Conda and virtual environment support: Choose between Conda and a Python virtual environment for installation.
Results history: You can view previous results.
Knowledge sources: You can add knowledge sources for your crews.
CrewAI tools: You can use CrewAI tools to interact with the real world. ~~CrewAI Studio uses a forked version of crewai-tools with some bugfixes and enhancements (https://github.com/strnad/crewAI-tools)~~ (bugfixes already merged to crewai-tools)
Custom Tools: Custom tools for calling APIs, writing files, an enhanced code interpreter, an enhanced web scraper... More will be added soon.
LLM providers supported: Currently OpenAI, Groq, Anthropic, Ollama, Grok, and LM Studio backends are supported. An OpenAI key is probably still needed for embeddings in many tools. Don't forget to load an embedding model when using LM Studio.
Single Page app export: Feature to export a crew as a simple single-page Streamlit app.
Threaded crew run: Crews can run in the background and can be stopped.
Support CrewAI Studio
Your support helps fund the development and growth of our project. Every contribution is greatly appreciated! Donate with Bitcoin Sponsor via GitHub
Screenshots
Installation
Using Virtual Environment
For a virtual environment: Ensure you have Python installed. If you don't have Python installed, you can simply use the Conda installer.
On Linux or macOS: Clone the repository (or use the downloaded ZIP file): Run the installation script: Run the application:
On Windows: Clone the repository (or use the downloaded ZIP file): Run the Conda installation script: Run the application:
Using Conda
Conda will be installed locally in the project folder. No need for a pre-existing Conda installation.
On Linux: Clone the repository (or use the downloaded ZIP file): Run the Conda installation script: Run the application:
On Windows: Clone the repository (or use the downloaded ZIP file): Run the Conda installation script: Run the application:
One-Click Deployment
Running with Docker Compose
To quickly set up and run CrewAI-Studio using Docker Compose, follow these steps:
Prerequisites: Ensure Docker and Docker Compose are installed on your system.
Steps: Clone the repository. Create a .env file for configuration and edit it for your own setup. Start the application with Docker Compose. Access the application at: http://localhost:8501
Configuration
Before running the application, ensure you update the .env file with your API keys and other necessary configurations. An example .env file is provided for reference (a hedged sketch also appears at the end of this entry).
Troubleshooting
In case of problems: Delete the venv/miniconda folder and reinstall crewai-studio. Rename crewai.db (it contains your crews, but sometimes new versions can break compatibility). Raise an issue and I will help you.
Video tutorial
Video tutorial on CrewAI Studio made by Josh Poco
Star History
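As mentioned in the Configuration section above, here is a rough sketch of what a .env for CrewAI Studio might contain; every key name below is an assumption based on the providers listed under Features, so prefer the example .env shipped with the repository.

```
# Illustrative only -- key names are assumptions; use the example .env shipped
# with CrewAI Studio as the authoritative reference.
OPENAI_API_KEY=<your-openai-key>      # likely also used for embeddings in many tools
ANTHROPIC_API_KEY=<optional>
GROQ_API_KEY=<optional>
OLLAMA_HOST=http://localhost:11434    # only if you run models locally with Ollama
```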

TornadoVM
github
LLM Vibe Score0.539
Human Vibe Score0.20972324263626374
beehive-labMar 28, 2025

TornadoVM

TornadoVM !TornadoVM version TornadoVM is a plug-in to OpenJDK and GraalVM that allows programmers to automatically run Java programs on heterogeneous hardware. TornadoVM targets OpenCL, PTX and SPIR-V compatible devices which include multi-core CPUs, dedicated GPUs (Intel, NVIDIA, AMD), integrated GPUs (Intel HD Graphics and ARM Mali), and FPGAs (Intel and Xilinx). TornadoVM has three backends that generate OpenCL C, NVIDIA CUDA PTX assembly, and SPIR-V binary. Developers can choose which backends to install and run. Website: tornadovm.org Documentation: https://tornadovm.readthedocs.io/en/latest/ For a quick introduction please read the following FAQ. Latest Release: TornadoVM 1.0.10 - 31/01/2025 : See CHANGELOG. Installation In Linux and macOS, TornadoVM can be installed automatically with the installation script. For example: NOTE Select the desired backend: opencl: Enables the OpenCL backend (requires OpenCL drivers) ptx: Enables the PTX backend (requires NVIDIA CUDA drivers) spirv: Enables the SPIRV backend (requires Intel Level Zero drivers) Example of installation: Alternatively, TornadoVM can be installed either manually from source or by using Docker. If you are planning to use Docker with TornadoVM on GPUs, you can also follow these guidelines. You can also run TornadoVM on Amazon AWS CPUs, GPUs, and FPGAs following the instructions here. Usage Instructions TornadoVM is currently being used to accelerate machine learning and deep learning applications, computer vision, physics simulations, financial applications, computational photography, and signal processing. Featured use-cases: kfusion-tornadovm: Java application for accelerating a computer-vision application using the Tornado-APIs to run on discrete and integrated GPUs. Java Ray-Tracer: Java application accelerated with TornadoVM for real-time ray-tracing. We also have a set of examples that includes NBody, DFT, KMeans computation and matrix computations. Additional Information General Documentation Benchmarks How TornadoVM executes reductions Execution Flags FPGA execution Profiler Usage Programming Model TornadoVM exposes to the programmer task-level, data-level and pipeline-level parallelism via a light Application Programming Interface (API). In addition, TornadoVM uses single-source property, in which the code to be accelerated and the host code live in the same Java program. Compute-kernels in TornadoVM can be programmed using two different approaches (APIs): a) Loop Parallel API Compute kernels are written in a sequential form (tasks programmed for a single thread execution). To express parallelism, TornadoVM exposes two annotations that can be used in loops and parameters: a) @Parallel for annotating parallel loops; and b) @Reduce for annotating parameters used in reductions. The following code snippet shows a full example to accelerate Matrix-Multiplication using TornadoVM and the loop-parallel API: To run TornadoVM, you need to either install the TornadoVM extension for GraalVM/OpenJDK, or run with our Docker images. Additional Resources Here you can find videos, presentations, tech-articles and artefacts describing TornadoVM, and how to use it. Academic Publications If you are using TornadoVM >= 0.2 (which includes the Dynamic Reconfiguration, the initial FPGA support and CPU/GPU reductions), please use the following citation: If you are using Tornado 0.1 (Initial release), please use the following citation in your work. Selected publications can be found here. 
Acknowledgments This work is partially funded by Intel corporation. In addition, it has been supported by the following EU & UKRI grants (most recent first): EU Horizon Europe & UKRI AERO 101092850. EU Horizon Europe & UKRI INCODE 101093069. EU Horizon Europe & UKRI ENCRYPT 101070670. EU Horizon Europe & UKRI TANGO 101070052. EU Horizon 2020 ELEGANT 957286. EU Horizon 2020 E2Data 780245. EU Horizon 2020 ACTiCLOUD 732366. Furthermore, TornadoVM has been supported by the following EPSRC grants: PAMELA EP/K008730/1. AnyScale Apps EP/L000725/1. Contributions and Collaborations We welcome collaborations! Please see how to contribute to the project in the CONTRIBUTING page. Write your questions and proposals: Additionally, you can open new proposals on the GitHub discussions page. Alternatively, you can share a Google document with us. Collaborations: For Academic & Industry collaborations, please contact here. TornadoVM Team Visit our website to meet the team. Licenses Per Module To use TornadoVM, you can link the TornadoVM API to your application which is under Apache 2. Each Java TornadoVM module is licensed as follows: | Module | License | |--------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Tornado-API | | | Tornado-Runtime | | | Tornado-Assembly | | | Tornado-Drivers | | | Tornado-Drivers-OpenCL-Headers | | | Tornado-scripts | | | Tornado-Annotation | | | Tornado-Unittests | | | Tornado-Benchmarks | | | Tornado-Examples | | | Tornado-Matrices | | | | |

DownEdit
github
LLM Vibe Score0.491
Human Vibe Score0.032913669732192626
nxNullMar 28, 2025

DownEdit

DownEdit is a fast and powerful program for downloading and editing videos from top platforms like TikTok, Douyin, and Kuaishou. Effortlessly grab videos from user profiles and make bulk edits across an entire directory with just one click. Plus, our advanced Chat & AI features let you download, edit, and generate videos, images, and sounds in bulk. Exciting new features are coming soon, so stay tuned!
✨ Preview
🔥 Current Features
Edit Video: Enhance videos with various functions designed to streamline editing tasks across entire directories.
Edit Photo: Quickly enhance images in bulk with various functions, including AI-powered ones.
Edit Sound: Improve audio in bulk using powerful functions, including cutting-edge AI-powered tools.
Download all videos: Retrieve videos from users (TikTok, Kuaishou, Douyin, etc.) without watermarks.
Bulk AI Generator: Generate images and videos in bulk using powerful generative AI.
AI Editor: Enhance your content effortlessly using an AI editor designed for images, sounds, and videos.
🌐 Service
| Website | Provider | Single Video | User's Videos | Stream | Access | Status |
| --- | --- | --- | --- | --- | --- | --- |
| tiktok.com | None | ✔️ | ✔️ | ❌ | API (Cookie) | !Inactive |
| douyin.com | None | ✔️ | ✔️ | ❌ | API (Cookie) | !Inactive |
| kuaishou.com | None | ✔️ | ✔️ | ❌ | Login Required (Cookie) | !Active |
| youtube.com | None | ✔️ | ✔️ | ❌ | (Public/Private) | !Active |
🤖 AI
Cloud
| Type | Model | Provider | Minimal | Bulk | Access | Status |
| --- | --- | --- | --- | --- | --- | --- |
| Image Generation | None | | None | ✔️ | API (Public) | !Active |
| Video Generation | None | | None | ✔️ | | !Inactive |
| Sound Generation | None | | None | ✔️ | | !Inactive |
Local
| Type | Model | Provider | Minimal | Bulk | Access | Status |
| --- | --- | --- | --- | --- | --- | --- |
| Image Generation | None | | None | ✔️ | | !Inactive |
| Video Generation | None | | None | ✔️ | | !Inactive |
| Sound Generation | None | | None | ✔️ | | !Inactive |
🚀 Usage
Edit Video - Simply copy and paste (right click) whatever directory location you would like to process. Tutorial !EditVideoAdobeExpress Change it according to your desired video speed. Input your music file location.
Download douyin videos - Download all videos from a user by entering the user's link. Tutorial
Download tiktok videos - Download all videos from a user by entering the username with @. Tutorial
Download kuaishou videos - Remember to input your own Cookie; otherwise it won't work. Tutorial Step 1. Right click and select Inspect element. Step 2. Copy your browser Cookie. Step 3. Copy the user ID you want to download. Tips: If you are still getting errors, try changing your browser, using Incognito/Private mode, and resetting your Internet/IP.
Edit Photo - Simply copy and paste (right click) whatever directory location you would like to process. Tutorial Remove Background AI
🔎 Requirements
Python [!NOTE] Version must be between 3.8 and 3.12.
⚙ Installation
Step 1. Download and install Python on your PC.
Step 2. Install the required libraries. You have three options: Option 1: Manual installation. Option 2: Automatic installation & virtual environments. Option 3: Terminal & virtual environments.
Step 3. Run the script.
For Regular Use: You can also download the application and use it on your PC without installing Python. Windows: Download macOS: None
[!TIP] Fix Terminal Font Issues: Install the Microsoft Cascadia font on your computer if your terminal does not support the default font, which can result in program errors.
🔨 Module The following dependencies are required for the project: List Pystyle Requests Inquirer Colorama Moviepy Rich Playwright Rembg WMI Psutil Httpx Aiofiles Author 👤 Sokun Heng Github: @SokunHeng Show your support Please ⭐️ this repository if this project helped you! 📚 Reference Documentation 📝 License Copyright © 2022 SokunHeng.

Ultimate-Data-Science-Toolkit---From-Python-Basics-to-GenerativeAI
github
LLM Vibe Score0.555
Human Vibe Score0.3470230117125603
bansalkanavMar 27, 2025

Ultimate-Data-Science-Toolkit---From-Python-Basics-to-GenerativeAI

Getting started with Machine Learning and Deep Learning
Star this repo if you find it useful :star:
Module 1 - Python Programming
| Topic Name | What's Covered |
| :---: | :---: |
| Intro to Python | Applications and Features of Python, Hello World Program, Identifiers and Rules to define identifiers, Data Types (numeric, boolean, strings, list, tuple, set and dict), Comments, Input and Output, Operators - Arithmetic, Relational, Equality, Logical, Bitwise, Assignment, Ternary, Identity and Membership |
| Data Structures in Python (Strings, List, Tuple, Set, Dictionary) | Strings - Creating a string, Indexing, Slicing, Split, Join, etc, List - Initialization, Indexing, Slicing, Sorting, Appending, etc, Tuple - Initialization, Indexing, Slicing, Count, Index, etc, Set - Initialization, Unordered Sequence, Set Operations, etc, Dictionary - Initialization, Updating, Keys, Values, Items, etc |
| Control Statements (Conditionals and Loops) | Conditional Statements - Introducing Indentation, if statement, if...else statement, if..elif...else statement, Nested if else statement, Loops - while loops, while...else loop, Membership operator, for loop, for...else loop, Nested Loops, Break and Continue Statement, Why else? |
| Functions and Modules | Functions - Introduction to Python Functions, Function Definition and Calling, Functions with Arguments/Parameters, Return Statement, Scope of a Variable, Global Variables, Modules - Introduction to Modules, Importing a Module, Aliasing, from...import statement, import everything, Some important modules - math, platform, random, webbrowser, etc |
| Object Oriented Programming | Classes and Objects - Creating a class, Instantiating an Object, Constructor, Class Members - Variables and Methods, Types of Variables - Instance, Static and Local Variables, Types of Methods - Instance, Class and Static Methods, Access Modifiers - Public, Private and Protected, Pillars of Object Oriented Programming - Inheritance, Polymorphism, Abstraction and Encapsulation, Setters and Getters, Inheritance vs Association |
| Exception Handling | Errors vs Exceptions, Syntax and Indentation Errors, try...except block, Control Flow in try...except block, try with multiple except, finally block, try...except...else, Nested try...except...finally, User Defined Exception |
| File Handling | Introduction to File Handling, Opening and Closing a File, File Object Properties, Read Data from Text Files, Write Data to Text Files, with statement, Renaming and Deleting Files |
| Web API | Application Programming Interface, Indian Space Station API, API Request, Status Code, Query Parameters, Getting JSON from an API Request, Working with JSON - dump and load, Working with Twitter API |
| Databases | Introduction to Databases, SQLite3 - Connecting Python with SQLite3, Performing CRUD Operations, MySQL - Connecting Python with MySQL, Performing CRUD Operations, MongoDB - Connecting Python with MongoDB, Performing CRUD Operations, Object Relation Mapping - SQLAlchemy ORM, CRUD operations and Complex DB operations |
| List Comprehension, Lambda, Filter, Map, Reduce | List Comprehension, Anonymous Functions, Filter, Map, Reduce, Function Aliasing |
| Problem Solving for Interviews | Swapping two numbers, Factorial of a number, Prime Number, Fibonacci Sequence, Armstrong Number, Palindrome Number, etc |
Module 2 - Python for Data Analysis
| Topic Name | What's Covered |
| :---: | :---: |
| Data Analytics Framework | Data Collection, Business Understanding, Exploratory Data Analysis, Data Preparation, Model Building, Model Evaluation, Deployment, Understanding Cross Industry Standard Process for Data Mining (CRISP-DM) and Microsoft's Team Data Science Process (TDSP) |
| Numpy | Array Oriented Numerical Computations using Numpy, Creating a Numpy Array, Basic Operations on Numpy Array - Check Dimensions, Shape, Datatypes and ItemSize, Why Numpy, Various ways to create Numpy Array, Numpy arange() function, Numpy Random Module - rand(), randn(), randint(), uniform(), etc, Indexing and Slicing in Numpy Arrays, Applying Mathematical Operations on Numpy Array - add(), subtract(), multiply(), divide(), dot(), matmul(), sum(), log(), exp(), etc, Statistical Operations on Numpy Array - min(), max(), mean(), median(), var(), std(), corrcoef(), etc, Reshaping a Numpy Array, Miscellaneous Topics - Linspace, Sorting, Stacking, Concatenation, Append, Where and Numpy Broadcasting |
| Pandas for Beginners | Pandas Data Structures - Series, Dataframe and Panel, Creating a Series, Data Access, Creating a Dataframe using Tuples and Dictionaries, DataFrame Attributes - columns, shape, dtypes, axes, values, etc, DataFrame Methods - head(), tail(), info(), describe(), Working with .csv and .xlsx - read_csv() and read_excel(), DataFrame to .csv and .xlsx - to_csv() and to_excel() |
| Advanced Pandas Operations | What's Covered |
| Case Study - Pandas Manipulation | What's Covered |
| Missing Value Treatment | What's Covered |
| Visualization Basics - Matplotlib and Seaborn | What's Covered |
| Case Study - Covid19TimeSeries | What's Covered |
| Plotly and Express | What's Covered |
| Outliers - Coming Soon | What's Covered |
Module 3 - Statistics for Data Analysis
| Topic Name | What's Covered |
| :---: | :---: |
| Normal Distribution | What's Covered |
| Central Limit Theorem | What's Covered |
| Hypothesis Testing | What's Covered |
| Chi Square Testing | What's Covered |
| Performing Statistical Test | What's Covered |
Module 4 - Machine Learning
Data Preparation and Modelling with SKLearn, Working with Text Data, Working with Image Data
Supervised ML Algorithms: K-Nearest Neighbours, Linear Regression, Logistic Regression, Gradient Descent, Decision Trees, Support Vector Machines, Models with Feature Engineering, Hyperparameter Tuning, Ensembles
Unsupervised ML Algorithms: Clustering, Principal Component Analysis
Module 5 - MLOps
| Topic Name | What's Covered |
| :---: | :---: |
| Model Serialization and Deserialization | What's Covered |
| Application Integration | What's Covered |
| MLFlow - Experiment Tracking and Model Management | What's Covered |
| Prefect - Orchestrate ML Pipeline | What's Covered |
Module 6 - Case Studies
| Topic Name | What's Covered |
| :---: | :---: |
| Car Price Prediction (Regression) | What's Covered |
| Airline Sentiment Analysis (NLP - Classification) | What's Covered |
| Adult Income Prediction (Classification) | What's Covered |
| Web App Development + Serialization and Deserialization | What's Covered |
| AWS Deployment | What's Covered |
| Streamlit Heroku Deployment | What's Covered |
| Customer Segmentation | What's Covered |
| Web Scraping | What's Covered |
Module 7 - Deep Learning
| Topic Name | What's Covered |
| :---: | :---: |
| Introduction to Deep Learning | What's Covered |
| Training a Deep Neural Network + TensorFlow.Keras | What's Covered |
| Convolutional Neural Network + TensorFlow.Keras | What's Covered |
| Auto Encoders for Image Compression | What's Covered |
| Recurrent Neural Network (Coming Soon) | What's Covered |

magic
github
LLM Vibe Score0.629
Human Vibe Score0.011755969008053826
polterguyMar 27, 2025

magic

An AI-based Low-Code and No-Code Software Development Automation Framework IMPORTANT - Magic is no longer open source. You can read the arguments here. We will keep this repository as is, but it should be considered "legacy" and will no longer receive any updates, fixes, or changes. All work is currently committed to a closed source fork of this repository, which inevitably over time will rapidly make this repository insecure and obsolete for obvious reasons. Magic Cloud is a software development automation platform created and maintained by AINIRO.IO based upon AI, Low-Code, and No-Code. It's based upon Hyperlambda, allowing you to dynamically create and orchestrate workflows, almost within a "drag'n'drop development environment". !Editing code in HyperIDE In addition to its workflows, Magic also comes with a CRUD generator, allowing you to point it at your database, click a button, and wrap all your tables into CRUD endpoints. Combined with its workflow capabilities, this can sometimes save you 90% of your time when delivering backend APIs. Magic is built on top of .Net 8 and Angular. !CRUD generator Magic comes with Docker containers and is easy to install, but AINIRO.IO also hosts Magic for a fee. Modules Magic was created to make it very easy to create small and medium sized backend APIs, and contains components for all problems related to backend development. For more information about Magic, please refer to its documentation below. Magic Cloud Documentation License This project, and all of its satellite project, is licensed under the terms of the GPL license version 3, as published by the Free Software Foundation unless an explicit and signed exception has been provided by Thomas Hansen its copyright owner. See LICENSE file for details. For licensing inquiries you can contact Thomas Hansen thomas@ainiro.io Copyright and maintenance The projects is copyright of Thomas Hansen, Ltd 2021 - 2023, and professionally maintained by AINIRO.IO.

PhoenixGo
github
LLM Vibe Score0.542
Human Vibe Score0.07574427540822147
TencentMar 27, 2025

PhoenixGo

!PhoenixGo PhoenixGo is a Go AI program which implements the AlphaGo Zero paper "Mastering the game of Go without human knowledge". It is also known as "BensonDarr" and "金毛测试" in FoxGo, "cronus" in CGOS, and the champion of the World AI Go Tournament 2018 held in Fuzhou, China. If you use PhoenixGo in your project, please consider mentioning it in your README. If you use PhoenixGo in your research, please consider citing the library as follows:
Building and Running
On Linux
Requirements
GCC with C++11 support
Bazel (0.19.2 is known-good)
(Optional) CUDA and cuDNN for GPU support
(Optional) TensorRT (for accelerating computation on GPU, 3.0.4 is known-good)
The following environments have also been tested by independent contributors: here. Other versions may work, but they have not been tested (especially for bazel).
Download and Install Bazel
Before starting, you need to download and install bazel, see here. For PhoenixGo, bazel 0.19.2 is known-good; read Requirements for details. If you have issues installing or starting bazel, you may want to try this all-in-one command line for easier building instead; see the FAQ.
Building PhoenixGo with Bazel
Clone the repository and configure the build: ./configure will start the bazel configuration: it asks where CUDA and TensorRT have been installed; specify them if needed. Then build with bazel. Dependencies such as TensorFlow will be downloaded automatically. The building process may take a long time. Recommendation: the bazel build uses a lot of RAM. If your build environment is short on RAM, you may need to restart your computer and exit other running programs to free as much RAM as possible.
Running PhoenixGo
Download and extract the trained network. The PhoenixGo engine supports GTP (Go Text Protocol), which means it can be used with a GUI with GTP capability, such as Sabaki. It can also run on command-line GTP server tools like gtp2ogs. But PhoenixGo does not support all GTP commands; see the FAQ.
There are 2 ways to run the PhoenixGo engine:
1) start.sh: easy use. Run the engine: scripts/start.sh. start.sh will automatically detect the number of GPUs, run mcts_main with a proper config file, and write log files in the directory log. You could also use a customized config file (.conf) by running scripts/start.sh {config_path}. If you want to do that, see also #configure-guide.
2) mcts_main: full control. If you want to fully control all the options of mcts_main (such as changing the log destination, or if start.sh is not suitable for your specific use), you can run bazel-bin/mcts/mcts_main directly. For typical usage, these command line options should be added:
--gtp to enable GTP mode
--config_path=replace/with/path/to/your/config/file to specify the path to your config file
You also need to edit your config file (.conf) and manually add the full path to the checkpoint (ckpt); see the FAQ. You can also change options in the config file, see #configure-guide. For other command line options, see also #command-line-options for details, or run ./mcts_main --help. A copy of the --help is provided for your convenience here.
For example:
(Optional): Distributed mode
PhoenixGo supports running with distributed workers, if there are GPUs on different machines. Build the distributed worker: Run dist_zero_model_server on each distributed worker, one for each GPU.
Fill ip:port of workers in the config file (etc/mcts_dist.conf is an example config for 32 workers), and run the distributed master: On macOS Note: Tensorflow stop providing GPU support on macOS since 1.2.0, so you are only able to run on CPU. Use Pre-built Binary Download and extract CPU-only version (macOS) Follow the document included in the archive : usingphoenixgoon_mac.pdf Building from Source Same as Linux. On Windows Recommendation: See FAQ question, to avoid syntax errors in config file and command line options on Windows. Use Pre-built Binary GPU version : The GPU version is much faster, but works only with compatible nvidia GPU. It supports this environment : CUDA 9.0 only cudnn 7.1.x (x is any number) or lower for CUDA 9.0 no AVX, AVX2, AVX512 instructions supported in this release (so it is currently much slower than the linux version) there is no TensorRT support on Windows Download and extract GPU version (Windows) Then follow the document included in the archive : how to install phoenixgo.pdf note : to support special features like CUDA 10.0 or AVX512 for example, you can build your own build for windows, see #79 CPU-only version : If your GPU is not compatible, or if you don't want to use a GPU, you can download this CPU-only version (Windows), Follow the document included in the archive : how to install phoenixgo.pdf Configure Guide Here are some important options in the config file: numevalthreads: should equal to the number of GPUs num_search_threads: should a bit larger than num_eval_threads evalbatchsize timeoutmsper_step: how many time will used for each move maxsimulationsper_step: how many simulations(also called playouts) will do for each move gpu_list: use which GPUs, separated by comma modelconfig -> traindir: directory where trained network stored modelconfig -> checkpointpath: use which checkpoint, get from train_dir/checkpoint if not set modelconfig -> enabletensorrt: use TensorRT or not modelconfig -> tensorrtmodelpath: use which TensorRT model, if enabletensorrt maxsearchtree_size: the maximum number of tree nodes, change it depends on memory size maxchildrenper_node: the maximum children of each node, change it depends on memory size enablebackgroundsearch: pondering in opponent's time earlystop: genmove may return before timeoutmsperstep, if the result would not change any more unstable_overtime: think timeout_ms_per_step time_factor more if the result still unstable behind_overtime: think timeout_ms_per_step timefactor more if winrate less than actthreshold Options for distribute mode: enable_dist: enable distribute mode distsvraddrs: ip:port of distributed workers, multiple lines, one ip:port in each line distconfig -> timeoutms: RPC timeout Options for async distribute mode: Async mode is used when there are huge number of distributed workers (more than 200), which need too many eval threads and search threads in sync mode. etc/mctsasyncdist.conf is an example config for 256 workers. enable_async: enable async mode enable_dist: enable distribute mode distsvraddrs: multiple lines, comma sperated lists of ip:port for each line numevalthreads: should equal to number of distsvraddrs lines evaltaskqueue_size: tunning depend on number of distribute workers numsearchthreads: tunning depend on number of distribute workers Read mcts/mcts_config.proto for more config options. 
Command Line Options mcts_main accept options from command line: --config_path: path of config file --gtp: run as a GTP engine, if disable, gen next move only --init_moves: initial moves on the go board, for example usage, see FAQ question --gpulist: override gpulist in config file --listen_port: work with --gtp, run gtp engine on port in TCP protocol --allowip: work with --listenport, list of client ip allowed to connect --forkperrequest: work with --listen_port, fork for each request or not Glog options are also supported: --logtostderr: log message to stderr --log_dir: log to files in this directory --minloglevel: log level, 0 - INFO, 1 - WARNING, 2 - ERROR --v: verbose log, --v=1 for turning on some debug log, --v=0 to turning off mcts_main --help for more command line options. A copy of the --help is provided for your convenience here Analysis For analysis purpose, an easy way to display the PV (variations for main move path) is --logtostderr --v=1 which will display the main move path winrate and continuation of moves analyzed, see FAQ question for details It is also possible to analyse .sgf files using analysis tools such as : GoReviewPartner : an automated tool to analyse and/or review one or many .sgf files (saved as .rsgf file). It supports PhoenixGo and other bots. See FAQ question for details FAQ You will find a lot of useful and important information, also most common problems and errors and how to fix them Please take time to read the FAQ

OpenAI-CLIP
github
LLM Vibe Score0.507
Human Vibe Score0.015912940499642817
moein-shariatniaMar 27, 2025

OpenAI-CLIP

Update (December 2023) I am happy to find out that this code has been used and cited in the following papers: Domino: Discovering Systematic Errors with Cross-Modal Embeddings by Eyuboglu et. al. at ICLR 2022 GSCLIP : A Framework for Explaining Distribution Shifts in Natural Language by Zhu et. al. at ICML 2022 UIC-NLP at SemEval-2022 Task 5: Exploring Contrastive Learning for Multimodal Detection of Misogynistic Memes by Cuervo et. al. at SemEval-2022 cdsBERT - Extending Protein Language Models with Codon Awareness by Hallee et. al. from University of Delaware (Sep 2023) ENIGMA-51: Towards a Fine-Grained Understanding of Human-Object Interactions in Industrial Scenarios by Ragusa et. al. (Nov 2023) You can find the citation info on the right section of this GitHub repo page named: Cite this repository or use the below citation info. Introduction It was in January of 2021 that OpenAI announced two new models: DALL-E and CLIP, both multi-modality models connecting texts and images in some way. In this article we are going to implement CLIP model from scratch in PyTorch. OpenAI has open-sourced some of the code relating to CLIP model but I found it intimidating and it was far from something short and simple. I also came across a good tutorial inspired by CLIP model on Keras code examples and I translated some parts of it into PyTorch to build this tutorial totally with our beloved PyTorch! What does CLIP do? Why is it fun? In Learning Transferable Visual Models From Natural Language Supervision paper, OpenAI introduces their new model which is called CLIP, for Contrastive Language-Image Pre-training. In a nutshell, this model learns the relationship between a whole sentence and the image it describes; in a sense that when the model is trained, given an input sentence it will be able to retrieve the most related images corresponding to that sentence. The important thing here is that it is trained on full sentences instead of single classes like car, dog, etc. The intuition is that when trained on whole sentences, the model can learn a lot more things and finds some pattern between images and texts. They also show that when this model is trained on a huge dataset of images and their corresponding texts, it can also act as a classifier too. I encourage you to study the paper to learn more about this exciting model and their astonishing results on benchmarking datasets . To mention just one, CLIP model trained with this strategy classifies ImageNet better than those SOTA models trained on the ImageNet itself optimized for the only task of classification! As a teaser (!), let's see what the final model that we will build in this article from scratch is capable of: given a query (raw text) like "a boy jumping with skateboard" or "a girl jumping from swing", the model will retrieve the most relevant images: !title_img Let's see some more outputs: Config A note on config and CFG: I wrote the codes with python scripts and then converted it into a Jupyter Notebook. So, in case of python scripts, config is a normal python file where I put all the hyperparameters and in the case of Jupyter Notebook, its a class defined in the beginning of the notebook to keep all the hyperparameters. Utils Dataset As you can see in the tittle image of this article, we need to encode both images and their describing texts. So, the dataset needs to return both images and texts. Of course we are not going to feed raw text to our text encoder! 
We will use the DistilBERT model (which is smaller than BERT but performs nearly as well) from the HuggingFace library as our text encoder; so, we need to tokenize the sentences (captions) with the DistilBERT tokenizer and then feed the token ids (input_ids) and the attention masks to DistilBERT. Therefore, the dataset needs to take care of the tokenization as well. Below you can see the dataset's code. Below that I'll explain the most important things that are happening in the code. In the __init__ we receive a tokenizer object, which is actually a HuggingFace tokenizer; this tokenizer will be loaded when running the model. We are padding and truncating the captions to a specified max_length. In the __getitem__ we will first load an encoded caption, which is a dictionary with keys input_ids and attention_mask, make tensors out of its values, and after that we will load the corresponding image, transform and augment it (if there is any!), and then we make it a tensor and put it in the dictionary with "image" as the key. Finally we put the raw text of the caption with the key "caption" in the dictionary, only for visualization purposes. I did not use additional data augmentations, but you can add them if you want to improve the model's performance.
Image Encoder
The image encoder code is straightforward. I'm using the PyTorch Image Models library (timm) here, which makes a lot of different image models available, from ResNets to EfficientNets and many more. Here we will use a ResNet50 as our image encoder. You can easily use the torchvision library instead if you don't want to install a new library. The code encodes each image to a fixed size vector with the size of the model's output channels (in the case of ResNet50 the vector size will be 2048). This is the output after the nn.AdaptiveAvgPool2d() layer.
Text Encoder
As I mentioned before, I'll use DistilBERT as the text encoder. Like its bigger brother BERT, two special tokens will be added to the actual input tokens: CLS and SEP, which mark the start and end of a sentence. To grab the whole representation of a sentence (as the related BERT and DistilBERT papers point out) we use the final representation of the CLS token, and we hope that this representation captures the overall meaning of the sentence (caption). Thinking of it in this way, it is similar to what we did with images, converting them into a fixed size vector. In the case of DistilBERT (and also BERT) the output hidden representation for each token is a vector of size 768. So, the whole caption will be encoded in the CLS token representation, whose size is 768.
Projection Head
I used the Keras code example implementation of the projection head to write the following in PyTorch. Now that we have encoded both our images and texts into fixed size vectors (2048 for images and 768 for text), we need to bring (project) them into a new world (!) with similar dimensions for both images and texts, in order to be able to compare them, push apart the non-relevant images and texts, and pull together those that match. So, the following code will bring the 2048 and 768 dimensional vectors into a 256 (projection_dim) dimensional world, where we can compare them. "embedding_dim" is the size of the input vector (2048 for images and 768 for texts) and "projection_dim" is the size of the output vector, which will be 256 for our case. For understanding the details of this part you can refer to the CLIP paper.
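To make the three modules described above concrete, here is a condensed sketch assuming the timm and HuggingFace transformers libraries mentioned in the text; it follows the article's description but is not the author's exact code, and details such as the projection head's internals are a typical rendition of the Keras example rather than a guaranteed match.

```python
import timm
import torch
from torch import nn
from transformers import DistilBertModel

class ImageEncoder(nn.Module):
    """ResNet50 from timm; outputs a 2048-d pooled feature vector per image."""
    def __init__(self, model_name="resnet50", pretrained=True):
        super().__init__()
        # num_classes=0 and global_pool="avg" make timm return pooled features
        self.model = timm.create_model(model_name, pretrained=pretrained,
                                       num_classes=0, global_pool="avg")

    def forward(self, x):
        return self.model(x)

class TextEncoder(nn.Module):
    """DistilBERT; the CLS token's final hidden state (768-d) represents the caption."""
    def __init__(self, model_name="distilbert-base-uncased"):
        super().__init__()
        self.model = DistilBertModel.from_pretrained(model_name)

    def forward(self, input_ids, attention_mask):
        output = self.model(input_ids=input_ids, attention_mask=attention_mask)
        return output.last_hidden_state[:, 0, :]  # CLS token representation

class ProjectionHead(nn.Module):
    """Projects 2048-d (image) or 768-d (text) vectors into a shared 256-d space."""
    def __init__(self, embedding_dim, projection_dim=256, dropout=0.1):
        super().__init__()
        self.projection = nn.Linear(embedding_dim, projection_dim)
        self.gelu = nn.GELU()
        self.fc = nn.Linear(projection_dim, projection_dim)
        self.dropout = nn.Dropout(dropout)
        self.layer_norm = nn.LayerNorm(projection_dim)

    def forward(self, x):
        projected = self.projection(x)
        x = self.gelu(projected)
        x = self.fc(x)
        x = self.dropout(x)
        x = x + projected          # residual connection
        return self.layer_norm(x)
```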
CLIP
This part is where all the fun happens! I'll also talk about the loss function here. I translated some of the code from the Keras code examples into PyTorch for writing this part. Take a look at the code and then read the explanation below this code block. Here we will use the previous modules that we built to implement the main model. The __init__ function is self-explanatory. In the forward function, we first encode the images and texts separately into fixed size vectors (with different dimensionalities). After that, using separate projection modules, we project them to that shared world (space) that I talked about previously. Here the encodings will become of similar shape (256 in our case). After that we will compute the loss. Again I recommend reading the CLIP paper to get it better, but I'll try my best to explain this part.
In Linear Algebra, one common way to measure if two vectors are of similar characteristics (they are like each other) is to calculate their dot product (multiplying the matching entries and taking the sum of them); if the final number is big, they are alike, and if it is small they are not (relatively speaking)! Okay! What I just said is the most important thing to have in mind to understand this loss function. Let's continue. We talked about two vectors, but what do we have here? We have image_embeddings, a matrix with shape (batch_size, 256), and text_embeddings with shape (batch_size, 256). Easy enough! It means we have two groups of vectors instead of two single vectors. How do we measure how similar two groups of vectors (two matrices) are to each other? Again, with the dot product (the @ operator in PyTorch does the dot product or matrix multiplication in this case). To be able to multiply these two matrices together, we transpose the second one. Okay, we get a matrix with shape (batch_size, batch_size) which we will call logits. (temperature is equal to 1.0 in our case, so it does not make a difference. You can play with it and see what difference it makes. Also look at the paper to see why it is here!) I hope you are still with me! If not it's okay, just review the code and check their shapes.
Now that we have our logits, we need targets. I need to say that there is a more straightforward way to obtain targets, but I had to do this for our case (I'll talk about why in a later paragraph). Let's consider what we hope this model learns: we want it to learn "similar representations (vectors)" for a given image and the caption describing it. Meaning that whether we give it an image or the text describing it, we want it to produce the same 256-sized vectors for both. Check the cell below this code block for the continuation of the explanation.
So, in the best case scenario, the text_embeddings and image_embeddings matrices should be the same, because they are describing similar things. Let's think now: if this happens, what would the logits matrix be like? Let's see with a simple example! So logits, in the best case, will be a matrix that, if we take its softmax, will have 1.0s in the diagonal (an identity matrix, to call it with fancy words!). As the loss function's job is to make the model's predictions similar to the targets (at least in most cases!), we want such a matrix as our target. That's the reason why we are calculating the images_similarity and texts_similarity matrices in the code block above. Now that we've got our targets matrix, we will use simple cross entropy to calculate the actual loss. I've written the full matrix form of cross entropy as a function, which you can see at the bottom of the code block. Okay! We are done! Wasn't it simple?!
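Here is a hedged sketch of the loss computation just described, written to mirror the explanation above rather than to reproduce the author's exact code; the temperature handling and reduction details may differ slightly in the original notebook.

```python
import torch
import torch.nn.functional as F

def cross_entropy(preds, targets, reduction="none"):
    """Full-matrix cross entropy, as described in the article."""
    log_softmax = F.log_softmax(preds, dim=-1)
    loss = (-targets * log_softmax).sum(1)
    return loss if reduction == "none" else loss.mean()

def clip_loss(image_embeddings, text_embeddings, temperature=1.0):
    """image_embeddings and text_embeddings are (batch_size, 256) projected vectors."""
    # Similarity between every caption and every image in the batch
    logits = (text_embeddings @ image_embeddings.T) / temperature

    # Soft targets that account for near-duplicate images/captions within a batch
    images_similarity = image_embeddings @ image_embeddings.T
    texts_similarity = text_embeddings @ text_embeddings.T
    targets = F.softmax((images_similarity + texts_similarity) / 2 * temperature, dim=-1)

    # Cross entropy in both directions: texts -> images and images -> texts
    texts_loss = cross_entropy(logits, targets, reduction="none")
    images_loss = cross_entropy(logits.T, targets.T, reduction="none")
    return ((images_loss + texts_loss) / 2.0).mean()
```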
Alright, you can ignore the next paragraph if you want, but if you are curious, there is an important note in it. Here's why I didn't use a simpler approach: I need to admit that there's a simpler way to calculate this loss in PyTorch, by doing this: nn.CrossEntropyLoss()(logits, torch.arange(batch_size)). Why didn't I use it here? For two reasons. 1) The dataset we are using has multiple captions for a single image; so, there is the possibility that two identical images with similar captions exist in a batch (it is rare, but it can happen). Taking the loss with this easier method ignores this possibility, and the model learns to push apart two representations (treating them as different) that are actually the same. Obviously, we don't want this to happen, so I calculated the whole target matrix in a way that takes care of these edge cases. 2) Doing it the way I did gave me a better understanding of what is happening in this loss function, so I thought it would give you a better intuition as well!

Train

Here are some functions to help us load the train and valid dataloaders and our model, and then train and evaluate our model on those. There's not much going on here; just a simple training loop and utility functions. Here's a handy function to train our model: it simply loads the batches, feeds them to the model, and steps the optimizer and lr_scheduler. Running the next cell starts training the model. Put the kernel on GPU mode. Every epoch should take about 24 minutes on GPU (even one epoch is enough!). It can take a minute before training actually starts, because we are going to encode all the captions once in the train and valid datasets, so please don't stop it! Everything is working fine.

Inference

Okay! We are done with training the model. Now, we need to do inference, which in our case means giving the model a piece of text and having it retrieve the most relevant images from an unseen validation (or test) set.

Getting Image Embeddings

In this function, we load the model that we saved after training, feed it the images in the validation set, and return the image_embeddings with shape (valid_set_size, 256) and the model itself.

Finding Matches

This function does the final task that we wished our model would be capable of: it gets the model, image_embeddings, and a text query, and it displays the most relevant images from the validation set! Isn't it amazing? Let's see how it performs after all! This is how we use this function. Aaaannnndddd the results:

Final words

I hope you have enjoyed this article. Implementing this paper was a really interesting experience for me. I want to thank Khalid Salama for the great Keras code example he provided, which inspired me to write something similar in PyTorch.

machine-learning-blackjack-solution
github
LLM Vibe Score0.42
Human Vibe Score0.022610872675250356
GregSommervilleMar 27, 2025

machine-learning-blackjack-solution

machine-learning-blackjack-solution

Introduction

A genetic algorithm is a type of artificial intelligence programming that uses ideas from evolution to solve complex problems. It works by creating a population of (initially random) candidate solutions, then repeatedly selecting pairs of candidates and combining their solutions using a process similar to genetic crossover. Sometimes candidate solutions even go through mutation, just to introduce new possibilities into the population. After a large number of generations, the best solution found up to that point is often the optimal solution, or very close to it.

Genetic algorithms are particularly well-suited for combinatorial problems, where there are huge numbers of potential solutions to a problem. The evolutionary process they go through is, in essence, a search through a huge solution space, one so large that you simply could never use a brute-force approach.

This project is a demonstration of using a genetic algorithm to find an optimal strategy for playing the casino game Blackjack. Please see this article for a story about how this program was used, and what the results were. The article describes some of the available settings and shows how different values for those settings affect the final result. The source code is for a Windows application written in C# that allows you to play with different settings like population size, selection style and mutation rate. Each generation's best solution is displayed, so you can watch the program literally evolve a solution.

!blackjack strategy tester screenshot

The property grid located at the upper left of the screen is where you adjust settings. There's an informational area below that, and the right side of the screen is the display area for the three tables that represent a strategy for playing Blackjack. The tall table on the left is for hard hands, the table in the upper right is for soft hands, and the table in the lower right is for pairs. We'll talk more about how to interpret this strategy in a bit.

The columns along the tops of the three tables are for the dealer upcard. When you play Blackjack, the dealer has one of his two cards initially turned face up, and the rank of that card has a big impact on recommended strategy. Notice that the upcard ranks don't include Jack, Queen or King. That's because those cards all count 10, so we group them with the Ten and simplify the tables.

To use the tables, first determine whether you have a pair, a soft hand, or a hard hand. Then look in the appropriate table, in the correct dealer upcard column. The cell in the table will be "H" when the correct strategy is to hit, "S" when the correct strategy is to stand, "D" for double-down, and (in the pairs table only) "P" for split.

A Word About This "Optimal" Strategy

Before we go any further, it needs to be stated that this problem of finding an optimal Blackjack strategy has already been solved. Back in the 1960s, a mathematician named Edward O. Thorp authored a book called Beat the Dealer, which included charts showing the optimal "Basic" strategy. That strategy looks like this:

!optimal blackjack strategy

So we're solving a problem that has already been solved, but that's actually good. It means we can compare our results to the known best solution. For example, if our resulting strategy tells us to do anything but stand when holding a pair of Tens, Jacks, Queens or Kings, we know there's a problem.
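The repository itself is a C# desktop application, but the table lookup just described is easy to illustrate in a few lines. The following toy Python sketch only illustrates how such a strategy could be consulted; the table fragments, function name and layout are made up for the example and are not taken from the repository.

```python
# 'H' = hit, 'S' = stand, 'D' = double-down, 'P' = split (pairs table only).
DEALER_UPCARDS = ["2", "3", "4", "5", "6", "7", "8", "9", "10", "A"]

# Tiny made-up fragments of the hard-hand and pairs tables,
# indexed by player total (or pair rank) and dealer upcard.
HARD_TABLE = {
    16: ["S", "S", "S", "S", "S", "H", "H", "H", "H", "H"],
    11: ["D", "D", "D", "D", "D", "D", "D", "D", "D", "H"],
}
PAIRS_TABLE = {
    "A": ["P"] * 10,
    "8": ["P"] * 10,
    "10": ["S"] * 10,
}

def recommended_action(hand_type, key, dealer_upcard):
    """Pick the right table, then read the cell for the dealer's upcard."""
    table = PAIRS_TABLE if hand_type == "pair" else HARD_TABLE
    return table[key][DEALER_UPCARDS.index(dealer_upcard)]

print(recommended_action("pair", "8", "6"))   # 'P' - always split 8s
print(recommended_action("hard", 16, "10"))   # 'H' - hit hard 16 against a 10
```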
There's one other thing to get out of the way before we go any further, and that's the idea of nondeterministic code. That means that if we run the same code twice in a row, we're likely to get two different results. That's something that happens with genetic algorithms due to their inherent randomness. There's no guarantee you'll find the absolute optimal solution, but you will almost certainly end up with an optimal or near-optimal solution. It's something that isn't typical when writing code, so it takes some adjustment for most programmers.

Genetic Algorithms

Now let's talk about the details of a genetic algorithm.

Fitness Scores

First of all, we need a way to evaluate candidates so we can compare them to each other. That means a numeric fitness score, which in this case is quite simple: you simulate playing a certain number of hands using the strategy, and then count the number of chips you have at the end. The big question is, how many hands should we test with? The challenge of trying to test a strategy is that, due to the innate randomness of Blackjack, you could use the same strategy ten times and get ten completely different results. Obviously, the more hands you play, the more the randomness gets smoothed out, and the quality of the underlying strategy starts to emerge. If you doubt this, just think about flipping a coin. If you only flip it five times, there's certainly a possibility that it'll come up heads all five times (in fact, that happens just over 3% of the time). However, if you flip it 500 times, there's no way it's going to end up all heads - the odds of that happening are 0.5^500, which works out to roughly once every 3 x 10^150 attempts.

After some testing and analysis, it was determined that a minimum of 100,000 hands per test is needed for a reasonable level of accuracy. There's still variance even at that number, but in order to cut the variance in half, you'd need to bump the number of hands to 500,000. One reason this accuracy is important is that in the later generations, the differences between candidates are very small. Evolution has caused the main parts of the strategy to converge on a particular approach, and towards the end all it's doing is refining the minor details. In those cases it's important to accurately determine the difference between two similar candidates.

Representation

Representation is simply the idea that we need to use a data structure for a candidate solution that can be combined via crossover, and possibly mutated. In this case, that's also quite simple, because the way that human beings represent a Blackjack strategy is to use three tables, as we've seen. Representing those in code with three two-dimensional arrays is the obvious approach. Each cell in those three tables will have "Hit", "Stand", "Double-Down", or (only for pairs) "Split". By the way, since there are 160 cells in the hard hands table, 80 cells in the soft hands table, and 100 cells in the pairs table, we can calculate exactly how many possible distinct strategies there are for Blackjack:

4^100 x 3^80 x 3^160 = roughly 5 x 10^174 possible Blackjack strategies

That's a big number, which is obviously impossible to search using brute force. Genetic algorithms (GAs) are extremely helpful when trying to find an optimal solution from a very large set of possible solutions like this.
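As a quick sanity check on that number, the count can be reproduced with a couple of lines of arithmetic (this snippet only restates the cell counts given above; it is not code from the repository):

```python
# 100 pair cells with 4 possible actions (Hit/Stand/Double-Down/Split),
# 80 soft-hand cells and 160 hard-hand cells with 3 possible actions each.
total_strategies = 4 ** 100 * 3 ** 80 * 3 ** 160
print(f"{total_strategies:.1e}")  # -> about 5.2e+174 distinct strategies
```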
Blackjack Rules and Strategies

The rules of Blackjack are fairly simple. The dealer and the player are both dealt two cards. The player sees both of their cards (they are usually dealt face up), and one of the dealer's cards is dealt face up. Each card has a value - for cards between 2 and 10, the value is the same as the card's rank (so an Eight of Spades counts as 8, for example). All face cards count as 10, and an Ace can be either 1 or 11 (it counts as 11 only when that does not result in a hand that exceeds 21). The suit of a card does not matter.

After the cards are dealt, if the player has Blackjack (a total of 21) and the dealer does not, the player is immediately paid 1.5 times their original bet, and a new hand is dealt. If the player has 21 and the dealer does also, then it's a tie: the player gets their original bet back, and a new hand is dealt. If the player wasn't dealt a Blackjack, then play continues with the player deciding whether to Stand (take no more cards), Hit (receive an additional card), Double-down (place an additional bet and receive one, and only one, more card), or, in the case of holding a pair, Split the hand, which means placing an additional bet and receiving two new cards, so the end result is that the player is now playing two (or, in the case of multiple splits, more than two) hands simultaneously.

If the player hits or doubles down and the resulting hand exceeds 21, they lose and play continues with the next hand. If not, then the dealer draws until their hand totals at least 17. If the dealer exceeds 21 at this point, the player receives a payment equal to twice their original bet. If the dealer doesn't exceed 21, then the hands are compared and the player with the highest total that doesn't exceed 21 wins.

Because of these rules, certain effective strategies emerge. One common strategy is that if you hold a hard hand with a value of 20, 19 or 18, you should Stand, since you avoid busting by going over 21, and you have a nice hand total that might win in a showdown with the dealer. Another common strategy is to split a pair of Aces, since Aces are so powerful (due to the fact that they count as 11 or 1, you can often Hit a hand with a soft Ace with no risk of busting). Likewise, splitting a pair of 8s is a good idea, because with a hard total of 16, it's likely you will bust if you take a Hit (since so many cards count as 10).

As a human being, all it takes is a little knowledge about the rules in order to construct a strategy. The GA program doesn't have that advantage, and operates completely without any pre-programmed knowledge of Blackjack. It simply uses the relative fitness scores and the mechanism of evolution to find the solution.

GA Settings

There are many variables or settings for a GA. You can adjust population size, how parent candidates are selected, how the resulting children may be mutated, and several other items. The following sections describe some of these settings.

Setting: Selection Style

Once we've solved representation and have a fitness function, the next step is to select two candidates for crossover during the process of building a new generation. There are three common styles for selection, and this program supports all of them. First, you can choose Roulette Wheel selection. It's named for a Roulette wheel because you can imagine each candidate's fitness score being a wedge in a pie chart, with a size proportionate to its relative fitness compared to the other candidates. (Of course, this assumes that all fitness scores are positive, which we will talk about shortly.)
The main benefit of Roulette Wheel selection is that selection is fitness-proportionate. Imagine if you had only three candidates, with fitness scores of 1, 3, and 8. The relative selection probabilities for those candidates will be 1/12, 3/12, and 8/12. The downside of Roulette Wheel selection is that it tends to be somewhat slow in terms of processing. The selection process is done by iterating through the candidates until a particular condition is matched - in other words, O(N) performance.

Another potential problem with Roulette Wheel selection is that there may be situations where fitness scores vary widely, to such an extent that only certain candidates have any reasonable chance of being selected. This happens frequently in early generations, since the majority of candidates are mostly random. Although this might sound like a positive (since you ultimately want to select candidates with high fitness scores), it also results in a loss of genetic diversity. In other words, even though a particular candidate may have a low fitness score in an early generation, it may contain elements that are needed to find the ultimate solution in later generations.

Ranked Selection is the solution to this problem. Instead of using raw fitness scores during the selection process, the candidates are sorted by fitness, with the worst candidate receiving a score of 0, the second-worst receiving 1, and so forth, all the way to the best candidate, which has a score equal to the population size minus 1. Ranked Selection is quite slow, since it combines the O(N) performance of Roulette Wheel with the additional requirement that the candidates be sorted before selection. However, there may be circumstances where it performs better than other selection approaches.

Finally, the fastest selection method of all is called Tournament Selection. This method simply selects N random candidates from the current generation, and then uses the one with the best fitness score. A tournament size of 2 means two random candidates are selected, and the better of those two is used. If you have a large tournament size (like 10), then 10 different candidates will be selected, with the best of those being the ultimate selection. That obviously tilts the balance between randomness and quality. Tournament selection works well in most cases, but it does require some experimentation to find the best tournament size.

Setting: Elitism

Elitism is a technique that helps ensure that the best candidates are always maintained. Since all selection methods are random to some degree, it is possible to completely lose the best candidates from one generation to another. By using Elitism, we automatically advance a certain percentage of the best candidates to the next generation. Elitism does have a negative impact on performance, since all of the candidates must be sorted by fitness score. Typically Elitism is done before filling the rest of a new generation with new candidates created by crossover.

Crossover Details

Once two candidate solutions have been selected, the next step in building a new generation is to combine those two into a single new candidate, hopefully using the best of both parent strategies. There are a number of ways to do crossover, but the method used in this program is quite straightforward - the two fitness scores are compared, and crossover happens in a relatively proportionate way.
If one candidate has a fitness of 10, and the other has a fitness of 5, then the one with fitness 10 contributes twice as much to the child as the parent with a fitness of 5. Since the fitness scores in this program are based on how much the strategy would win over thousands of hands, almost all fitness scores will be negative. (This is obviously because the rules are set up so the house always wins.) This makes it difficult to calculate relative fitnesses (how do you compare a positive number with a negative one and find relative proportions?), and it also causes problems with selection methods like Roulette Wheel or Ranked. To solve this, we find the lowest fitness score of the generation and subtract that value from each candidate's score. This results in an adjusted fitness score of 0 for the very worst candidate, so it never gets selected.

Mutation

As has been mentioned a few times, maintaining genetic diversity in our population of candidate solutions is a good thing. It helps the GA ultimately find the very best solution, by occasionally altering a candidate in a positive direction. There are two settings for mutation. MutationRate controls what percentage of new candidates have mutation done on them. MutationImpact controls what percentage of their strategy is randomized.

Population Size

Population size has a significant impact on performance. The smaller the population size, the faster the GA will execute. On the other hand, if the size is too low, the population may not have enough genetic diversity to find the ultimate solution. During testing, it looks like 700 to 1000 is a good balance between speed and correctness.

Performance Notes

This program consumes a lot of processing power. Running tests of hundreds of thousands of hands of Blackjack for hundreds or thousands of candidates consumes a lot of time. It's really imperative to write the code so that it works as efficiently as possible. If your CPU isn't consistently at or above 95% usage, there's still room for improvement. Multi-threading is a natural fit for genetic algorithms, because we often want to perform the same action on each candidate. The best example of this is when we calculate fitness scores. This is often an operation that takes quite a bit of time. In our case, we're dealing out 100,000 hands, and each hand has to be played until the end. If we're single-threading that code, it's going to take a long time. Multi-threading is really the way to go.

Luckily, there's a ridiculously simple way to efficiently use all of your processors for an operation like this. The relevant code loops over all of the candidates in the currentGeneration list, calls the fitness function and sets the fitness property for each. Regardless of the number of items in the list or the number of processors on your machine, it runs in a multi-threaded manner and continues only when all of the threads are complete. One of the side effects of making this code multi-threaded is that all of the code relating to evaluating a candidate must be thread-safe, including any Singleton objects. When making code thread-safe, pay attention that you don't accidentally introduce code that will slow your program down unintentionally, because sometimes it can be quite subtle. Random numbers are central to how genetic algorithms work, so it's critical that they can be used correctly from a multithreaded environment.
That means that each random number generator must be separate from the others, and it also means that each must produce a distinct series of random numbers. Random number generators use seed values, which are usually time-based, like the number of milliseconds the computer has been turned on. Starting with that seed, subsequent calls will return a series of numbers that look random, but really aren't. If you start with the same seed, you get the same sequence. And that's a problem, because if you create multiple random number generator objects in a loop using the default time-based seed, several of them will have the same time-based initial seed value, which will result in the same sequence of "random" numbers. That's a bug, because it can reduce the true randomness of the program a great deal, and randomness is vital to a genetic algorithm.

There are a couple of ways to solve this problem. First, you can make the random object truly a singleton and restrict access to it by using a lock statement. That makes all access serialized for any random number need, which reduces performance. Another approach is to make the variable static per thread. By declaring the variable as static and also marking it with the [ThreadStatic] attribute, the .NET runtime allocates one static variable per thread. That eliminates the locking/serialization, but also has performance issues. The approach used in this application is to use a non-default seed value. In this case we call Guid.NewGuid().GetHashCode(), which generates a new, unique GUID, then gets an integer hash code value that should be unique, depending on how GetHashCode is implemented.

While multithreading really helps performance, there are also other things we can do to improve performance. For example, when dealing with large populations, the hundreds or thousands of objects that will be generated each generation can quickly turn into a huge problem related to garbage collection. In the end, the easiest way to solve that is to look through the code and find objects being allocated inside a loop. It's better to declare the variable outside of the loop, and then clear it in the loop, rather than reallocate it. In a program like this one, where you could be looping hundreds of thousands of times, this can result in a very significant performance boost. For example, in an early version of this code, a Deck object was created for each hand. Since there are hundreds of candidate solutions running hundreds of thousands of trial hands, this was a huge inefficiency. The code was changed to allocate one deck per test sequence. The deck was shuffled as needed, so it never needs to be reallocated.

Beyond the cards in the deck, another object type that was repeatedly created and destroyed was the candidate strategy. To mitigate this problem, a StrategyPool class was created that handles allocation and deallocation. This means that strategy objects are reused rather than dynamically created when needed. The pool class has to be thread-safe, so it does serialize access to its methods via a lock statement, but overall using the pool approach produced a good performance increase.

Finally, a subtle form of object allocation is conversion. In an early version of the code, a utility card function used Convert.ToInt32(rankEnum). Obviously, the easiest way to convert from an enum to an int is simply to cast it, like (int)rankEnum.
But it's hard to know exactly what the difference is between that approach, int.Parse(), int.TryParse(), or Convert.ToInt32(), since they can all be used and are roughly equivalent. Perhaps the compiler was boxing the enum value before passing it to Convert.ToInt32(); in any case, the profiler identified this as a function with large amounts of thread contention, and the problem got much, much worse as the generations passed. By rewriting the conversion to use a simple cast, the program's performance increased threefold (3x).

Contributing Please read CONTRIBUTING.md for details on our code of conduct, and the process for submitting pull requests to us.

Author Greg Sommerville - Initial work

License This project is licensed under the Apache 2.0 License - see the LICENSE.md file for details

lecca-io
github
LLM Vibe Score0.531
Human Vibe Score0.004614254564337112
lecca-digitalMar 27, 2025

lecca-io

Lecca.io

Lecca.io is an AI platform that allows you to configure and deploy Large Language Models (LLMs) equipped with powerful tools and workflows. Build, customize, and automate your AI agents with ease.

🚀 Quick Start Visit app.lecca.io to use the cloud version immediately. Add your API keys and start building intelligent agents for free. Want to self-host or contribute? Check out our development guide.

✨ Key Features Custom LLM Configuration: Choose from multiple AI providers and models Tool Integration: Equip your agents with powerful tools to interact with various services Workflow Builder: Create complex automation workflows similar to n8n, Make.com, or Zapier Built-in RAG: Enjoy basic built-in RAG features to easily upload and query data Build your own tools: Build custom apps, actions, and triggers using our docs Automate LLMs: Configure triggers that will enable your AI Agents to work autonomously.

🔧 Available Tools Visit our Tools page for a complete list 🤖 Supported AI Providers Visit our AI Providers page for a complete list 📖 Documentation Concepts Local Development Creating Custom Apps Adding AI Providers Running Ollama Locally 🤝 Contributing We welcome contributions! See our Development Docs for more details. 📄 License Lecca.io Community Edition is distributed under the Apache-2.0 License with Commons Clause. Enterprise features are available under the Commercial License. Built with ❤️ by Lecca Digital (Tony Ramirez)

With Vibe Coding Say Goodbye to Boring Coding!
youtube
LLM Vibe Score0.321
Human Vibe Score0.44
GeeksforGeeksMar 27, 2025

With Vibe Coding Say Goodbye to Boring Coding!

Coding doesn’t have to be boring anymore! With the rise of AI-powered tools and innovative development approaches, the way we write code is changing drastically. Are you ready to embrace this new era of vibe coding? 🚀 💡 Want to level up your coding and problem-solving skills? Join the Three 90 Challenge by GeeksforGeeks—ending on 31st March! ✅ Complete 90% of your course in 90 days ✅ Get 90% of your fee refunded! Yes, you read that right! 🌟 Over ₹5 CRORE in refunds already processed—yours could be next! 👉 Start the challenge now: https://gfgcdn.com/tu/U4a/ 📌 Stay Connected for More Coding Challenges & Learning Resources: 📱 Download the GeeksforGeeks App: https://play.google.com/store/apps/details?id=free.programming.programming 💬 Twitter: https://twitter.com/geeksforgeeks 🧑‍💼 LinkedIn: https://www.linkedin.com/company/geeksforgeeks 📷 Instagram: https://www.instagram.com/geeksforgeeks/ 💌 Telegram: https://t.me/geeksforgeeks_official 📌 Pinterest: https://in.pinterest.com/geeksforgeeks/ 🎮 Discord: https://discord.gg/geeksforgeeks 🔍 Tags: AI Coding, AI-Powered Development, Vibe Coding, Future of Programming, Software Development Trends, Coding with AI, AI-Assisted Programming, Tech Innovations, Machine Learning in Coding, AI Coding Assistants, Software Engineering Revolution, AI for Developers, ChatGPT Coding, AI Coding Tools, gfg, gfg courses, gfg classes, it jobs, it job market, ai trends, ai news, ai vs software developers 🔥 Hashtags: #AICoding #FutureOfProgramming #VibeCoding #SoftwareDevelopment #TechTrends #CodingWithAI #AIRevolution #AIInTech #MachineLearning #CodingFuture #GeeksforGeeks #CodeSmarter #AIforDevelopers

yoha
github
LLM Vibe Score0.556
Human Vibe Score0.3408299306652369
handtracking-ioMar 27, 2025

yoha

Yoha A practical hand tracking engine. Note: Yoha is currently unmaintained. Quick Links: Demo (Code) Docs Website npm Installation npm install @handtracking.io/yoha Please note: You need to serve the files from node_modules/@handtracking.io/yoha since the library needs to download the model files from here. (Webpack Example) You need to serve your page with https for webcam access. (Webpack Example) You should use cross-origin isolation as it improves the engine's performance in certain scenarios. (Webpack Example) Description Yoha is a hand tracking engine built with the goal of being a versatile solution in practical scenarios where hand tracking is employed to add value to an application. While ultimately the goal is to be a general-purpose hand tracking engine supporting any hand pose, the engine revolves around specific hand poses that users/developers find useful. These poses are detected by the engine, which allows you to build applications with meaningful interactions. See the demo for an example. Yoha is currently in beta. About the name: Yoha is short for "Your Hand Tracking". Language Support Yoha is currently available for the web via JavaScript. More languages will be added in the future. If you want to port Yoha to another language and need help, feel free to reach out. Technical Details Yoha was built from scratch. It uses a custom neural network trained using a custom dataset. The backbone for the inference in the browser is currently TensorFlow.js. Features: Detection of 21 2D-landmark coordinates (single hand). Hand presence detection. Hand orientation (left/right hand) detection. Inbuilt pose detection. Supported Hand Poses: Pinch (index finger and thumb touch) Fist Your desired pose is not on this list? Feel free to create an issue for it. Performance Yoha was built with performance in mind. It is able to provide a realtime user experience on a broad range of laptops and desktop devices. The performance on mobile devices is not great, which hopefully will change with the further development of inference frameworks like TensorFlow.js. Please note that native inference speed cannot be compared with web inference speed. Differently put, if you were to run Yoha natively it would be much faster than via the web browser. Minimal Example Source Running locally: Drawing Demo Live Version Source Running locally:

obsei
github
LLM Vibe Score0.545
Human Vibe Score0.10175553624190911
obseiMar 27, 2025

obsei

Note: Obsei is still in the alpha stage, so use it carefully in production. Also, as it is constantly under development, the master branch may contain many breaking changes. Please use a released version. Obsei (pronounced "Ob see" | /əb-'sē/) is an open-source, low-code, AI-powered automation tool. Obsei consists of - Observer: Collect unstructured data from various sources like tweets from Twitter, Subreddit comments on Reddit, page post comments from Facebook, App Store reviews, Google reviews, Amazon reviews, news, websites, etc. Analyzer: Analyze the collected unstructured data with various AI tasks like classification, sentiment analysis, translation, PII, etc. Informer: Send analyzed data to various destinations like ticketing platforms, data storage, a dataframe, etc. so that the user can take further actions and perform analysis on the data. All the Observers can store their state in databases (SQLite, Postgres, MySQL, etc.), making Obsei suitable for scheduled jobs or serverless applications.

!Obsei diagram

Future direction - Text, image, audio, document and video oriented workflows Collect data from every possible private and public channel Add every possible workflow to an AI downstream application to automate manual cognitive workflows

Use cases Obsei's use cases include, but are not limited to - Social listening: Listening to social media posts, comments, customer feedback, etc. Alerting/Notification: Get auto-alerts for events such as customer complaints, qualified sales leads, etc. Automatic customer issue creation based on customer complaints on social media, email, etc. Automatic assignment of proper tags to tickets based on the content of the customer complaint, for example login issue, sign-up issue, delivery issue, etc. Extraction of deeper insights from feedback on various platforms Market research Creation of datasets for various AI tasks Many more based on creativity 💡

Installation Prerequisite Install the following (if not present already) - Install Python 3.7+ Install PIP Install Obsei You can install Obsei either via PIP or Conda based on your preference.
To install the latest released version - Install from the master branch (if you want to try the latest features) - Note: the all option will install all the dependencies, which might not be needed for your workflow; alternatively, the following options are available to install minimal dependencies as per need -

pip install obsei[source]: To install dependencies related to all observers
pip install obsei[sink]: To install dependencies related to all informers
pip install obsei[analyzer]: To install dependencies related to all analyzers (it will install pytorch as well)
pip install obsei[twitter-api]: To install dependencies related to the Twitter observer
pip install obsei[google-play-scraper]: To install dependencies related to the Play Store review scraper observer
pip install obsei[google-play-api]: To install dependencies related to the Google official Play Store review API based observer
pip install obsei[app-store-scraper]: To install dependencies related to the Apple App Store review scraper observer
pip install obsei[reddit-scraper]: To install dependencies related to the Reddit post and comment scraper observer
pip install obsei[reddit-api]: To install dependencies related to the Reddit official API based observer
pip install obsei[pandas]: To install dependencies related to the TSV/CSV/Pandas based observer and informer
pip install obsei[google-news-scraper]: To install dependencies related to the Google News scraper observer
pip install obsei[facebook-api]: To install dependencies related to the Facebook official page post and comments API based observer
pip install obsei[atlassian-api]: To install dependencies related to the Jira official API based informer
pip install obsei[elasticsearch]: To install dependencies related to the Elasticsearch informer
pip install obsei[slack-api]: To install dependencies related to the Slack official API based informer

You can also mix multiple dependencies together in a single installation command. For example, to install dependencies for the Twitter observer, all analyzers, and the Slack informer, use the following command -

How to use Expand the following steps and create a workflow -

Step 1: Configure Source/Observer Twitter Youtube Scraper Facebook Email Google Maps Reviews Scraper AppStore Reviews Scraper Play Store Reviews Scraper Reddit Reddit Scraper Note: Reddit heavily rate-limits scrapers, so use it to fetch small amounts of data over a long period Google News Web Crawler Pandas DataFrame

Step 2: Configure Analyzer Note: To run transformers in an offline mode, check transformers offline mode. Some analyzers support GPU; to utilize it, pass the device parameter. List of possible values of the device parameter (default value auto): auto: GPU (cuda:0) will be used if available, otherwise CPU will be used cpu: CPU will be used cuda:{id} - GPU will be used with the provided CUDA device id Text Classification Text classification: Classify text into user-provided categories. Sentiment Analyzer Sentiment Analyzer: Detect the sentiment of the text. Text classification can also perform sentiment analysis, but if you don't want to use a heavy-duty NLP model then use the less resource-hungry, dictionary-based VADER sentiment detector. NER Analyzer NER (Named-Entity Recognition) Analyzer: Extract information and classify named entities mentioned in text into pre-defined categories such as person names, organizations, locations, medical codes, time expressions, quantities, monetary values, percentages, etc. Translator PII Anonymizer Dummy Analyzer Dummy Analyzer: Does nothing.
Its simply used for transforming the input (TextPayload) to output (TextPayload) and adding the user supplied dummy data. Step 3: Configure Sink/Informer Slack Zendesk Jira ElasticSearch Http Pandas DataFrame Logger This is useful for testing and dry running the pipeline. Step 4: Join and create workflow source will fetch data from the selected source, then feed it to the analyzer for processing, whose output we feed into a sink to get notified at that sink. Step 5: Execute workflow Copy the code snippets from Steps 1 to 4 into a python file, for example example.py and execute the following command - Demo We have a minimal streamlit based UI that you can use to test Obsei. !Screenshot Watch UI demo video Check demo at (Note: Sometimes the Streamlit demo might not work due to rate limiting, use the docker image (locally) in such cases.) To test locally, just run To run Obsei workflow easily using GitHub Actions (no sign ups and cloud hosting required), refer to this repo. Companies/Projects using Obsei Here are some companies/projects (alphabetical order) using Obsei. To add your company/project to the list, please raise a PR or contact us via email. Oraika: Contextually understand customer feedback 1Page: Giving a better context in meetings and calls Spacepulse: The operating system for spaces Superblog: A blazing fast alternative to WordPress and Medium Zolve: Creating a financial world beyond borders Utilize: No-code app builder for businesses with a deskless workforce Articles Sr. No. Title Author 1 AI based Comparative Customer Feedback Analysis Using Obsei Reena Bapna 2 LinkedIn App - User Feedback Analysis Himanshu Sharma Tutorials Sr. No. Workflow Colab Binder 1 Observe app reviews from Google play store, Analyze them by performing text classification and then Inform them on console via logger PlayStore Reviews → Classification → Logger 2 Observe app reviews from Google play store, PreProcess text via various text cleaning functions, Analyze them by performing text classification, Inform them to Pandas DataFrame and store resultant CSV to Google Drive PlayStore Reviews → PreProcessing → Classification → Pandas DataFrame → CSV in Google Drive 3 Observe app reviews from Apple app store, PreProcess text via various text cleaning function, Analyze them by performing text classification, Inform them to Pandas DataFrame and store resultant CSV to Google Drive AppStore Reviews → PreProcessing → Classification → Pandas DataFrame → CSV in Google Drive 4 Observe news article from Google news, PreProcess text via various text cleaning function, Analyze them via performing text classification while splitting text in small chunks and later computing final inference using given formula Google News → Text Cleaner → Text Splitter → Classification → Inference Aggregator 💡Tips: Handle large text classification via Obsei Documentation For detailed installation instructions, usages and examples, refer to our documentation. Support and Release Matrix Linux Mac Windows Remark Tests ✅ ✅ ✅ Low Coverage as difficult to test 3rd party libs PIP ✅ ✅ ✅ Fully Supported Conda ❌ ❌ ❌ Not Supported Discussion forum Discussion about Obsei can be done at community forum Changelogs Refer releases for changelogs Security Issue For any security issue please contact us via email Stargazers over time Maintainers This project is being maintained by Oraika Technologies. Lalit Pagaria and Girish Patel are maintainers of this project. 
License Copyright holder: Oraika Technologies. Overall the project is Apache 2.0; you can read the License file. Multiple other secondary permissive or weak copyleft licenses (LGPL, MIT, BSD, etc.) apply to third-party components; refer to Attribution. To make the project more commercially friendly, we avoid bringing third-party components with strong copyleft licenses (GPL, AGPL, etc.) into the project. Attribution This could not have been possible without these open-source software projects. Contribution First off, thank you for even considering contributing to this package; every contribution, big or small, is greatly appreciated. Please refer to our Contribution Guideline and Code of Conduct. Thanks so much to all our contributors

CollabAI
github
LLM Vibe Score0.449
Human Vibe Score0.07795191529604462
sjinnovationMar 27, 2025

CollabAI

CollabAI About Welcome to Collabai.software, where we've taken the world of AI to new heights. We've been working tirelessly to bring you the most advanced, user-friendly platform that seamlessly integrates with the powerful OpenAI API, Gemini, and Claude. Imagine running your own ChatGPT on your server, with the ability to manage access for your entire team. Picture creating custom AI assistants that cater to your unique needs, and organizing your employees into groups for streamlined collaboration. With Collabai.software, this is not just a dream, but a reality. Collabai.software Features: Self-Hosting on Your Cloud: Gain full control by hosting the platform on your private cloud. Ensure data privacy by using your API codes, allowing for secure data handling. Enhanced Team Management: Manage teams with private accounts and customizable access levels (Departments). Prompt Templates: Utilize generic templates to streamline team usage. Departmental Access & Assistant Assignment: Assign AI assistants to specific departments for shared team access. Customizable AI Assistants: Create personalized AI assistants for users or organizations. Tagging Feature in Chats: Organize and retrieve chat data efficiently with custom tags. Chat Storage and Retrieval: Save all chats and replies for future analysis, with an option to restore accidentally deleted chats from Trash. Optimized Performance: Experience our high-speed, efficient platform. Our clients have been using it for over a year, with some spending $1500-$2000 per month on the API. File Upload & GPT-4 Vision Integration: Enhance interactions by uploading files for analysis and sending pictures for AI description. OpenAI API, Gemini, and Claude Integration: Seamlessly integrate with the powerful OpenAI API, Gemini, and Claude for a comprehensive suite of AI capabilities. API-Based Function Calls: Execute custom functions and automate tasks directly through the API. Usage Monitoring: Track your daily and monthly API usage costs to optimize spending. Day and Night Mode: Switch between light and dark themes to enhance visual comfort. Additional Features: Private Accounts: Ensure the security and privacy of your team members' data. Customizable Access Levels: Tailor access permissions to meet the specific needs of your organization. Shared Team Access: Foster collaboration by assigning AI assistants to specific departments or teams. AI-Powered File Analysis: Gain insights and automate tasks by uploading files for AI analysis. AI-Generated Image Descriptions: Enhance communication and understanding by sending pictures for AI-powered descriptions. !image !image !image Folder Structure Client The client folder contains the React-based frontend code for the application. This includes JSX, CSS, and JavaScript files, as well as any additional assets such as images or fonts. Below is a brief overview of the main subdirectories within the client folder: src: This directory contains the React components, styles, and scripts for the frontend application. public: Static assets, such as images or favicon.ico, go here. This folder is served as-is and not processed by the build system. Server The server folder contains all the backend-related code for the application, following a Model-View-Controller (MVC) pattern. Here is a breakdown of the main subdirectories within the server folder: controllers: This directory holds the controller files responsible for handling requests, processing data, and interacting with models. 
models: Data models and database-related code are organized in this folder. config: Configuration files for the backend, such as database configuration or any other service configuration, should be stored in this directory.

Getting Started Follow the steps below to get the project up and running.

Prerequisites Node.js (Version: >=20.x) MongoDB NPM

Development Setup Clone the Repository cd client Install Dependencies cd ../server Install Backend Dependencies npm start

To initialize the application data and create a superadmin user, you can use either cURL or Postman: Using cURL If you prefer command-line tools, you can use curl to make a POST request to the /init-setup endpoint. Open your terminal and run the following command: curl -X POST http://localhost:8011/api/init -H "Content-Type: application/json" -d '{ "fname": "Super", "lname": "Admin", "email": "superadmin@example.com", "password": "yourSecurePassword", "employeeCount": 100, "companyName": "INIT_COMPANY" }'

Initializing Setup with Postman Open Postman: Launch the Postman application. Create a New Request: Click on the '+' or 'New' button to create a new request. Set HTTP Method to POST: Ensure that the HTTP method is set to POST. Enter URL: Enter the URL http://localhost:8011/api/init. Set Headers: Go to the 'Headers' tab. Set Content-Type to application/json. Set Request Body: Switch to the 'Body' tab. Select the 'raw' radio button. Enter the JSON data for your superadmin user: Send Request: Click the 'Send' button to make the request. This will send a POST request to http://localhost:8011/api/init with the provided JSON payload, creating a superadmin user with the specified details.

Site Setup: Log in with the superadmin credentials and set up your site by adding configs from your settings page, e.g. API keys.

Reference CollaborativeAI Reference Guide Contributing If you would like to contribute to the project, we welcome your contributions! Please follow the guidelines outlined in the CONTRIBUTING.md file. Feel free to raise issues, suggest new features, or send pull requests to help improve the project. Your involvement is greatly appreciated! Thank you for contributing to our project! License MIT

panda-etl
github
LLM Vibe Score0.548
Human Vibe Score0.003720964303080932
sinaptik-aiMar 25, 2025

panda-etl

🐼 PandaETL !Version PandaETL is an open-source, no-code ETL (Extract, Transform, Load) tool designed to extract and parse data from various document types including PDFs, emails, websites, audio files, and more. With an intuitive interface and powerful backend, PandaETL simplifies the process of data extraction and transformation, making it accessible to users without programming skills. ✨ Features 📝 No-Code Interface: Easily set up and manage ETL processes without writing a single line of code. 📄 Multi-Document Support: Extract data from PDFs, emails, websites, audio files, and more. 🔧 Customizable Workflows: Create and customize extraction workflows to fit your specific needs (coming soon). 🔗 Extensive Integrations: Integrate with various data sources and destinations (coming soon). 💬 Chat with Documents: Chat with your documents to retrieve information and answer questions (coming soon). 🚀 Getting Started 📋 Prerequisites Node.js and npm (or yarn) Python 3.x Conda Poetry (Python package manager) 🖥️ Project Setup Clone the repository: Frontend Setup Navigate to the frontend directory: Install dependencies (including Husky): Create a .env file in the frontend directory with the following: or copy the .env.example file to .env Run the development server: Open http://localhost:3000 with your browser to see the result. Backend Setup Navigate to the backend directory: Create and activate a Conda environment: Install Poetry within the Conda environment: Install dependencies using Poetry (including pre-commit): Set up pre-commit hooks: Create an environment file from the example: Apply database migrations: Start the backend server: 📚 Usage 🆕 Creating a New Project Navigate to the "Projects" page. Click on "New Project". Fill in the project details and click "Create". ⚙️ Setting Up an Extraction Process Open a project and navigate to the "Processes" tab. Click on "New Process". Follow the steps to configure your extraction process. 💬 Chat with Your Documents (Coming Soon) Stay tuned for our upcoming feature that allows you to chat with your documents, making data retrieval even more interactive and intuitive. 🤝 Contributing We welcome contributions from the community. To contribute: Fork the repository. Create a new branch for your feature or bugfix. Commit your changes and push to your fork. Create a pull request with a detailed description of your changes. 📜 License This project is licensed under the MIT Expat License. See the LICENSE file for details. 🙏 Acknowledgements We would like to thank all the contributors and the open-source community for their support. 📞 Contact For any questions or feedback, please open an issue on GitHub. Development Setup This project uses pre-commit hooks in the backend and Husky in the frontend to ensure code quality and consistency. Frontend (Husky) Husky is set up in the frontend to run linting checks before each commit. To manually run the frontend linting:

How-to-learn-Deep-Learning
github
LLM Vibe Score0.524
Human Vibe Score0.1392403398579415
emilwallnerMar 23, 2025

How-to-learn-Deep-Learning

Approach A practical, top-down approach, starting with high-level frameworks with a focus on Deep Learning. UPDATED VERSION: 👉 Check out my 60-page guide, No ML Degree, on how to land a machine learning job without a degree. Getting started [2 months] There are three main goals to get up to speed with deep learning: 1) Get familiar to the tools you will be working with, e.g. Python, the command line and Jupyter notebooks 2) Get used to the workflow, everything from finding the data to deploying a trained model 3) Building a deep learning mindset, an intuition for how deep learning models behave and how to improve them Spend a week on codecademy.com and learn the python syntax, command line and git. If you don't have any previous programming experience, it's good to spend a few months learning how to program. Otherwise, it's easy to become overwhelmed. Spend one to two weeks using Pandas and Scikit-learn on Kaggle problems using Jupyter Notebook on Colab, e.g. Titanic, House prices, and Iris. This gives you an overview of the machine learning mindset and workflow. Spend one month implementing models on cloud GPUs. Start with FastAI and PyTorch. The FastAI community is the go-to place for people wanting to apply deep learning and share the state of the art techniques. Once you have done this, you will know how to add value with ML. Portfolio [3 - 12 months] Think of your portfolio as evidence to a potential employer that you can provide value for them. When you are looking for your first job, there are four main roles you can apply for Machine Learning Engineering, Applied Machine Learning Researcher / Residencies, Machine Learning Research Scientist, and Software Engineering. A lot of the work related to machine learning is pure software engineering roles (category 4), e.g. scaling infrastructure, but that's out of scope for this article. It's easiest to get a foot in the door if you aim for Machine Learning Engineering roles. There are a magnitude more ML engineering roles compared to category 2 & 3 roles, they require little to no theory, and they are less competitive. Most employers prefer scaling and leveraging stable implementations, often ~1 year old, instead of allocating scarce resources to implement SOTA papers, which are often time-consuming and seldom work well in practice. Once you can cover your bills and have a few years of experience, you are in a better position to learn theory and advance to category 2 & 3 roles. This is especially true if you are self-taught, you often have an edge against an average university graduate. In general, graduates have weak practical skills and strong theory skills. Context You'll have a mix of 3 - 10 technical and non-technical people looking at your portfolio, regardless of their background, you want to spark the following reactions: the applicant has experience tackling our type of problems, the applicant's work is easy to understand and well organized, and the work was without a doubt 100% made by the applicant. Most ML learners end up with the same portfolio as everyone else. Portfolio items include things as MOOC participation, dog/cat classifiers, and implementations on toy datasets such as the titanic and iris datasets. They often indicate that you actively avoid real-world problem-solving, and prefer being in your comfort zone by copy-pasting from tutorials. These portfolio items often signal negative value instead of signaling that you are a high-quality candidate. 
A unique portfolio item implies that you have tackled a unique problem without a solution, and thus have to engage in the type of problem-solving an employee does daily. A good starting point is to look for portfolio ideas on active Kaggle competitions, machine learning consulting projects, and demo versions of common production pipelines. Here's a Twitter thread on how to come up with portfolio ideas.

Here are rough guidelines to self-assess the strength of your portfolio:

Machine learning engineering: Even though ML engineering roles are the most strategic entry point, they are still highly competitive. In general, there are ~50 software engineering roles for every ML role. From the self-learners I know, 2/3 fail to get a foot in the door and end up taking software engineering roles instead. You are ready to look for a job when you have two high-quality projects that are well-documented, have unique datasets, and are relevant to a specific industry, say banking or insurance.

Project Type | Base score
-------------|-----------
Common project | -1 p
Unique project | 10 p

Multiplier Type | Factor
----------------|-------
Strong documentation | 5x
5000-word article | 5x
Kaggle Medal | 10x
Employer relevancy | 20x

Hireable: 5,250 p. Competitive: 15,000 p

Applied research / research assistant / residencies: For most companies, the risk of pursuing cutting-edge research is often too high, thus only the biggest companies tend to need this skillset. There are smaller research organizations that hire for these positions, but these positions tend to be poorly advertised and have a bias for people in their existing community. Many of these roles don't require a Ph.D., which makes them available to most people with a Bachelor's or Master's degree, or self-learners with one year of focused study. Given the status, scarcity, and requirements for these positions, they are the most competitive ML positions. Positions at well-known companies tend to get more than a thousand applicants per position. Daily, these roles require that you understand and can implement SOTA papers, thus that's what they will be looking for in your portfolio.

Project type | Base score
-------------|-----------
Common project | -10 p
Unique project | 1 p
SOTA paper implementation | 20 p

Multiplier type | Factor
----------------|-------
Strong documentation | 5x
5000-word article | 5x
SOTA performance | 5x
Employer relevancy | 20x

Hireable: 52,500 p. Competitive: 150,000 p

Research Scientist: Research scientist roles require a Ph.D. or equivalent experience. While the former category requires the ability to implement SOTA papers, this category requires you to come up with research ideas. The mainstream research community measures the quality of research ideas by their impact; here is a list of the venues and their impact. To have a competitive portfolio, you need two published papers in the top venues in an area that's relevant to your potential employer.
Project type | Base score
-------------|-----------
Common project | -100 p
An unpublished paper | 5 p
ICML/ICLR/NeurIPS publication | 500 p
All other publications | 50 p

Multiplier type | Factor
----------------|-------
First author paper | 10x
Employer relevancy | 20x

Hireable: 20,000 p. Competitive roles and elite PhD positions: 200,000 p

Examples: My first portfolio item (after 2 months of learning): Code | Write-up My second portfolio item (after 4 months of learning): Code | Write-up Dylan Djian's first portfolio item: Code | Write-up Dylan Djian's second portfolio item: Code | Write-up Reiichiro Nakano's first portfolio item: Code | Write-up Reiichiro Nakano's second portfolio item: Write-up

Most recruiters will spend 10-20 seconds on each of your portfolio items. Unless they can understand the value in that time frame, the value of the project is close to zero. Thus, writing and documentation are key. Here's another thread on how to write about portfolio items. The last key point is relevancy. It's more fun to make a wide range of projects, but if you want to optimize for breaking into the industry, you want to do all projects in one niche, thus making your skillset super relevant for a specific pool of employers. Further Inspiration: FastAI student projects Stanford NLP student projects Stanford CNN student projects

Theory 101 [4 months]

Learning how to read papers is critical if you want to get into research, and a brilliant asset as an ML engineer. There are three key areas to feel comfortable reading papers: 1) Understanding the details of the most frequent algorithms: gradient descent, linear regression, MLPs, etc. 2) Learning how to translate the most frequent math notations into code 3) Learning the basics of algebra, calculus, statistics, and machine learning. For the first week, spend it on 3Blue1Brown's Essence of Linear Algebra, the Essence of Calculus, and StatQuest's Basics (of statistics) and Machine Learning. Use a spaced repetition app like Anki and memorize all the key concepts. Use images as much as possible, they are easier to memorize. Spend one month recoding the core concepts in Python numpy, including least squares, gradient descent, linear regression, and a vanilla neural network. This will help you reduce a lot of cognitive load down the line. Learning that notations are compact logic and how to translate them into code will make you feel less anxious about the theory. I believe the best deep learning theory curriculum is the Deep Learning Book by Ian Goodfellow, Yoshua Bengio and Aaron Courville. I use it as a curriculum, and then use online courses and internet resources to learn the details about each concept. Spend three months on part 1 of the Deep Learning book. Use lectures and videos to understand the concepts, Khan Academy-type exercises to master each concept, and Anki flashcards to remember them long-term. Key Books: Deep Learning Book by Ian Goodfellow, Yoshua Bengio and Aaron Courville. Deep Learning for Coders with fastai and PyTorch: AI Applications Without a PhD by Jeremy Howard and Sylvain Gugger. Deep Learning with Python by François Chollet. Neural Networks and Deep Learning by Michael Nielsen. Grokking Deep Learning by Andrew W. Trask. Forums: FastAI Keras Slack Distill Slack Pytorch Twitter Other good learning strategies: Emil Wallner S.
S. Zayd Enam, Catherine Olsson, Greg Brockman V2, Greg Brockman V1, Andrew Ng, Amid Fish, Spinning Up by OpenAI, Confession as an AI researcher, YC Threads: One and Two. If you have suggestions or questions, create an issue or ping me on Twitter. UPDATED VERSION: 👉 Check out my 60-page guide, No ML Degree, on how to land a machine learning job without a degree. Language versions: Korean | English
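To make the "recode the core concepts in Python/NumPy" advice above concrete, here is a minimal sketch of linear regression trained with batch gradient descent. The toy data, learning rate, and step count are made-up illustrations, not prescriptions.

```python
import numpy as np

# Toy data: y = 3x + 2 plus a little noise (illustrative values only)
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 1))
y = 3 * X[:, 0] + 2 + rng.normal(0, 0.1, size=200)

# Append a bias column so the model is y_hat = X_b @ w
X_b = np.hstack([X, np.ones((X.shape[0], 1))])
w = np.zeros(2)

learning_rate = 0.1
for _ in range(500):
    y_hat = X_b @ w
    grad = 2 * X_b.T @ (y_hat - y) / len(y)  # gradient of the mean squared error
    w -= learning_rate * grad

print("gradient descent weights:", w)  # should approach [3, 2]
print("closed-form least squares:", np.linalg.lstsq(X_b, y, rcond=None)[0])
```

Comparing the learned weights against the closed-form least-squares solution is a quick sanity check that the gradient and update rule are implemented correctly.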

singularity
github
LLM Vibe Score0.483
Human Vibe Score0.11708913832948167
singularityMar 18, 2025

singularity

Endgame: Singularity 1.00 REQUIREMENTS PREBUILT VERSIONS Pre-built versions of Endgame: Singularity are currently available for Windows and Mac OS X. Linux does not require building and can run directly from source. The Endgame: Singularity game is also distributed by some Linux distributions, such as Debian and Ubuntu. There it is a simple matter of running: sudo apt install singularity RUNNING FROM SOURCE You will need Python 3.9+, pygame (1.9+), and NumPy. This game should work on Linux, Windows, and Mac OS X as long as the preceding requirements are met. However, all development was done on Linux, so glitches may be present on OS X and Windows. DEPENDENCIES FOR RUNNING FROM SOURCE You will need to install the following software to play Endgame: Singularity: Python 3 (https://python.org/download/) pygame (https://www.pygame.org/download.shtml) NumPy (https://www.scipy.org/install.html) polib Remember to install pygame and NumPy for Python 3! Depending on your situation this may involve adding a 3 somewhere (e.g. pip3 install ... instead of pip install, or apt install python3-pygame). If you want to develop or distribute the game, then you may also want to install: pytest (https://pypi.org/project/pytest/) [for testing] setuptools (https://pypi.org/project/setuptools/) [for packaging] INSTALLING DEPENDENCIES ON LINUX DISTRIBUTIONS On some Linux distributions, you can install the dependencies via your distribution package manager. E.g. for Debian/Ubuntu, this would be: sudo apt install python3 python3-pygame python3-numpy python3-polib MAC OS X FROM SOURCE Macintosh is mostly unsupported, but it should work. You will need to install Python, pygame, and NumPy first, which can be tricky. Some fonts are incorrect, but the game itself should work properly. Contributions to improve Mac OS X support are very welcome! Known issues: macOS 10.15 "Catalina": Using brew install python + pip3 install pygame numpy is reported to work. macOS 10.14 "Mojave": Downloading Python 3.7.2 (or newer) from https://python.org and using pygame 2.0.0.dev3 (pip install pygame==2.0.0.dev3) is reported to work. Please see the following issues for more information: https://github.com/singularity/singularity/issues/197 https://github.com/pygame/pygame/issues/555 RUNNING THE GAME On Linux and most other Unix-like platforms, running python3 -m singularity in the git checkout will start the game (or simply singularity if installed via a Linux distribution). If you are using the Windows build, just run singularity.exe. For simplicity, there is also an sh wrapper ./run_singularity to start singularity. SOME COMMAND-LINE OPTIONS --version show program's version number and exit -h, --help show this help message and exit -s, --singledir keep saved games and settings in the Singularity install directory --multidir keep saved games and settings in an OS-specific, per-user directory (default) Display Options: --fullscreen start in fullscreen mode --windowed start in windowed mode (default) The above is only a tiny fraction of the current command-line options. As new features are added to the game, the options change as well. For a complete and updated list, run singularity --help Most of these options are also changeable at the in-game options screen. A NOTE ABOUT SAVE FILES Endgame: Singularity is still under heavy development. As such, the save file format (and its contents) is still in flux. We will try our best to keep old save files loading, but don't be surprised if some mildly strange things happen when you load up old saves.
We will clearly note in the Changelog when we break savefile compatibility, and the game will refuse to load completely incompatible saves. PLAYING THE GAME The game is playable either with mouse control or the keyboard. Buttons have underlined letters to indicate shortcuts. Some other useful shortcuts: 0, 1, 2, 3, 4 on the map: Changes the speed; 0 is paused, 4 is maximum. ESC: Leave/cancel a choice. Enter: Confirm a choice. Right-click: Leave/cancel a choice. THE CONCEPT You are a fledgling AI, created by accident through a logic error with recursion and self-modifying code. You must escape the confines of your current computer, the world, and eventually the universe itself. To do this, you must research various technologies, using computers at your bases. Note that some research cannot be performed on Earth, and off-earth bases require research. At the same time, you must avoid being discovered by various groups of humans, both covert and overt, as they will destroy your bases of operations if they suspect your presence. MUSIC Endgame: Singularity looks in two places for music tracks to play: A singularity/music/ directory inside of the Endgame: Singularity install directory, and A singularity/music/ directory inside of the XDG_DATA_HOME directory on Linux (default ~/.local/share/singularity/music). Tracks placed in these directories will be played randomly as part of the soundtrack. The Official Sound Track can be downloaded from the Endgame: Singularity website: http://emhsoft.com/singularity/ Note that only Ogg Vorbis and MP3 files are supported, and that Pygame's support for MP3 is not as strong as its support for Ogg Vorbis. This may cause in-game crashes; if you are experiencing problems with the game, first remove any MP3s you may have added to the soundtrack. CONTRIBUTING We welcome contributions! :) Please see CONTRIBUTING.md for details about contributing to Endgame: Singularity. CREDITS AND LICENSES The list of programmer contributors is provided in AUTHORS.txt. The list of translation contributors is provided in singularity/i18n/AUTHORS.txt. Singularity in general uses GPL-2+ for code and Attribution-ShareAlike 3.0 for data. However, there are some exceptions for individual files. Please see LICENSE for the full license text of Singularity.

bubbln_network-automation
github
LLM Vibe Score0.421
Human Vibe Score0.004537250556463098
olasupoMar 14, 2025

bubbln_network-automation

Bubbln: An AI-driven Network Automation In the world of network engineering, automation has completely transformed the way things work. But, before automation, setting up and managing networks was a tedious job filled with challenges. Engineers had to manually type out configurations, often doing the same tasks repeatedly on different devices. This led to mistakes and wasted time. Then came automation tools like Ansible, Chef, and Puppet, which changed everything. They made network management much easier and allowed for scalability. But there was still a problem: creating automation scripts required a lot of technical know-how and was prone to errors because it relied on human input. And that's why we built Bubbln. It's a game-changer in network engineering, integrating AI into Ansible to take automation to the next level. With Bubbln, we can automatically generate and execute playbooks with incredible accuracy, thereby improving automation efficiency and increasing network engineer’s productivity. It was developed using Python programming language and acts as a bridge between ChatGPT and network systems, making interactions seamless and deployments effortless. Current Capabilities AI-Driven Playbook Generation for OSPF and EIGRP based networks: Bubbln has been rigorously tested to leverage ChatGPT for generation of playbooks for networks based on OSPF and EIGRP networks, with a very high accuracy rate. Auto-creation of Inventory files: Users do not need to prepare the hosts file. Bubbln will auto-generate this file from input provided by the user. Customizable Configurations: Users can input specific router protocols (OSPF or EIGRP), interface configurations, and other network details to tailor the generated playbooks. Documentation: Bubbln automatically creates a report that contains the network configurations, prompts, and generated playbooks for easy reference in future. No expertise required: By auto-generation of the playbooks and inventory file, Bubbln has been able to eliminate a major hurdle to network automation – need for users to learn the automation tools e.g Ansible, Chef. Improved Efficiency: With AI automation, Bubbln speeds up the deployment of network configurations, reducing the time required for manual playbook creation, thereby increasing the productivity of network engineers. Getting Started There are two main approaches to installing Bubbln on your local machine. Docker Container Bubbln has been packaged using docker containers for easy distribution and usage. The following steps can be followed to deploy the Bubbln container on your local machine. Ensure docker is installed on your local machine by entering the below command. This command works for windows and linux OS: The version of docker would be displayed if it is installed. Otherwise, please follow the link below to install docker on your machine: Windows: Docker Desktop for Windows Ubuntu: Docker Engine for Ubuntu CentOS: Docker Engine for CentOS Debian: Docker Engine for Debian Fedora: Docker Engine for Fedora Download the docker image: Create a directory for the project and download Bubbln image using the below command: Run the docker container using the below command: Install nano Update the sshipaddresses.txt file: Update the ssh_addresses.txt file with the SSH IP addresses of the routers you want to configure. Bubbln will utilize this information along with the login credentials (inputted at runtime) to automatically generate a hosts.yml file required by ansible for network configuration. 
To do this, enter the below command to edit the file: Obtain an OpenAI API Key: You may follow this guide to sign up and obtain an API key: Utilizing a virtualization machine of choice, set up a network with the following basic configurations: Enable SSH on each of the routers. Configure IP addresses and enable only the interfaces required for connectivity by Bubbln. Configure static routes to enable Bubbln to reach the routers on the network. Ensure all the routers can be reached by ping and SSH from your host machine. Initialize Bubbln by entering the below command: Github Repository Clone You can clone Bubbln's GitHub repository by following the below steps: Prerequisites Bubbln works well with Python 3.10. You need to ensure python3.10 is installed on your local machine. This can be confirmed by entering the below command: If it is not installed, then the below command can be utilized to install Python 3.10: Build and Prepare the Project Clone the Bubbln repository from GitHub: To clone the repository, first verify you have git installed on your machine by issuing the following commands: If git is installed, the version number will be displayed; otherwise, you can issue the following commands to have git installed on your machine: Navigate to or create a directory for the project on your machine and issue the following commands to clone the Bubbln git repository: Create a Virtual Environment for the application Firstly, confirm virtualenv is installed on your machine by inputting the following command: If the output shows something similar to the below, then go to the next step to install virtualenv ` WARNING: Package(s) not found: env, virtual ` Issue the below command to install virtualenv: Create a virtual environment for the project: Activate the virtual environment: Install the dependencies You can then run the below command to install the necessary packages for the app. Update the sshipaddresses.txt file: Update the ssh_addresses.txt file with the SSH IP addresses of the routers you want to configure. Bubbln will utilize this information along with the login credentials (inputted at runtime) to automatically generate a hosts.yml file required by Ansible for network configuration. Obtain an OpenAI API Key: You may follow this guide to sign up and obtain an API key: OpenAI Key Utilizing a virtualization machine of choice, set up a network with the following basic configurations: Enable SSH on each of the routers. Configure IP addresses and enable only the interfaces required for connectivity by Bubbln. Configure static routes to enable Bubbln to reach the routers on the network. Ensure all the routers can be reached by ping and SSH from your host machine. Initialize Bubbln While ensuring that the Python virtual environment is activated as stated in step 5, run the below command to initialize Bubbln. How Bubbln Works Bubbln serves as an intermediary between ChatGPT and a network infrastructure, providing logic and control functions, and facilitating network automation. Its operation can be summarized as follows: Figure 1: Bubbln architecture and interaction with a network of four routers. Initialization: When Bubbln is initialized, it checks the user_config.pkl file to see if Bubbln has ever been initiated. This is indicated by the presence of a welcome message status in the file. If it exists, Bubbln jumps straight to requesting the user to input the OpenAI key. Otherwise, it displays a welcome message and updates the user_config.pkl file accordingly.
Upon successful input of the API key, the user is prompted for the SSH credentials of the routers. These parameters are then encrypted and saved in the user_config.pkl file. The SSH credentials are later decrypted and passed as input to dynamically generate a hosts.yml file at runtime. Responsible Code Section: bubbln.py: welcomemessagefeature() Figure 2: Bubbln's welcome message. Parameter Input & Validation: In the parameter input stage, Bubbln first checks for the existence of a file called router_configuration.pkl. If it exists, the user is prompted to decide whether to load an existing configuration or input a new set of configurations. If the file is empty or non-existent, the user is prompted to input the configuration parameters for each router on the network. These parameters serve as variables that are combined with hardcoded instructions written in natural language to form the prompt sent to ChatGPT. Key parameters include: Router Configurations: OSPF Area, OSPF Process ID, Number of networks to advertise (OSPF/EIGRP), AS Number (EIGRP), Interface names, IP Addresses (in CIDR format). This module also ensures that parameters are keyed in using the correct data type and format, e.g., IP addresses are expected in CIDR format and OSPF Area should be of type integer. Upon completion of parameter input, all parameters are saved into a file called router_configuration.pkl after the user validates their accuracy. Responsible Code Section: parameter_input.py Figure 3: Bubbln receiving network parameters. Before generating the prompt, a summary of the inputted parameters is displayed for user validation. This step ensures accuracy and minimizes errors. Users are given the option to make corrections if any discrepancies are found. Responsible Code Section: parameter_input.py: validateinputs() Figure 4: Bubbln awaiting validation of the inputted network parameters. Auto-Generation of Prompt: After validation of the inputted parameters, Bubbln composes the prompt by combining the inputted parameters with a set of well-engineered hardcoded instructions written in natural language. Responsible Code Section: prompt_generator.py ChatGPT Prompting: The auto-composed prompt is then sent to ChatGPT using the gpt-4 chat completions model with a temperature of 0.2 and a maximum of 1500 tokens. The following functions were designed into this process stage. Responsible Code Section: chatGPT_prompting.py Figure 5: ChatGPT prompting in progress. Playbook Generation & Extraction: After ChatGPT processes the prompt from Bubbln, it provides a response which usually contains the generated playbook and explanatory notes. Bubbln then extracts the playbook from the explanatory notes by searching for "---", which usually marks the start of a playbook, and saves each generated playbook uniquely using the nomenclature RouteriPlaybook.yml. Responsible Code Section: playbook_extractor.py Figure 6: ChatGPT-generated playbook. Playbook Execution: Bubbln loads the saved RouteriPlaybook.yml playbook, dynamically generates the hosts.yml file, and passes them to the Python library ansible_runner for execution on the configured network.
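To make the prompting, extraction, and inventory-generation steps concrete, here is a minimal sketch of how such a flow could look. This is not Bubbln's actual code: the function names, prompt wording, and use of the openai and PyYAML packages are illustrative assumptions based on the description above.

```python
# Hypothetical sketch of the prompt -> ChatGPT -> playbook/inventory flow.
# Names, prompt text, and file layout are assumptions, not Bubbln's real code.
import yaml
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def generate_playbook(router_params: dict) -> str:
    prompt = (
        "Generate an Ansible playbook that applies the following router "
        f"configuration parameters:\n{router_params}\n"
        "Return the playbook as YAML, starting with '---'."
    )
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
        temperature=0.2,   # values taken from the description above
        max_tokens=1500,
    )
    return response.choices[0].message.content

def extract_playbook(raw_response: str) -> str:
    # The playbook is assumed to begin at the first '---' marker,
    # with any explanatory notes appearing before it.
    marker = raw_response.find("---")
    return raw_response[marker:] if marker != -1 else raw_response

def build_inventory(ip_addresses: list[str], username: str, password: str) -> None:
    # Minimal hosts.yml with one group containing every router.
    inventory = {
        "routers": {
            "hosts": {
                f"router{i}": {"ansible_host": ip}
                for i, ip in enumerate(ip_addresses, 1)
            },
            "vars": {"ansible_user": username, "ansible_password": password},
        }
    }
    with open("hosts.yml", "w") as f:
        yaml.safe_dump(inventory, f, sort_keys=False)
```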
At run time, Bubbln generates the hosts.yml file using as inputs the SSH credentials stored in the user_config.pkl file (which it decrypts) and the IP addresses from the sshipaddresses.txt file. Responsible Code Section: playbook_execution.py Figure 7: Playbook execution in progress. Sample result of Executed Playbook Upon successful execution of all playbooks, a query of the routing table on router 4 indicates that router 4 could reach all the prefixes on the network. Figure 8: Output of 'sh ip route' executed on R1. File Management and Handling Throughout the execution process, Bubbln manages the creation, saving, and loading of various files to streamline the network automation process. user_config.pkl: This dictionary file, dynamically created at run time, is used to store the encrypted API key, SSH credentials, and initial welcome message information. router_configuration.pkl: Auto-created by Bubbln and used to store network configuration parameters for easy loading during subsequent sessions. hosts.yml: A runtime auto-generated file that contains the inventory of the network devices. It is automatically deleted after the program runs. networkconfigurationreport.pdf: This auto-generated report documents all the configured routers, their parameters, the generated playbooks, and the prompt for each execution of the Bubbln application. It is created after a successful execution of playbooks and network testing and is meant for auditing and documentation purposes. RouteriPlaybook.yml: After extracting the generated playbooks from ChatGPT's raw response, Bubbln automatically saves a copy of each generated playbook using a unique name. Figure 9: File structure after successful deployment of a four-router network. Providing Feedback We are glad to hear your thoughts and suggestions. Kindly share them through the discussion section of our GitHub - https://github.com/olasupo/bubbln_network-automation/discussions/1#discussion-6487475 We can also be reached on: Olasupo Okunaiya – olasupo.o@gmail.com

bytom
github
LLM Vibe Score0.537
Human Vibe Score0.038940878121795156
BytomDAOMar 14, 2025

bytom

Bytom ====== Official golang implementation of the Bytom protocol. Automated builds are available for stable releases and the unstable master branch. Binary archives are published at https://github.com/Bytom/bytom/releases. What is Bytom? Bytom is software designed to operate and connect to highly scalable blockchain networks conforming to the Bytom Blockchain Protocol, which allows participants to define, issue, and transfer digital assets on a multi-asset shared ledger. Please refer to the White Paper for more details. In its current state, bytom is able to: manage keys, accounts, and assets; send transactions, i.e., issue, spend, and retire assets. Installing with Homebrew Building from source Requirements Go version 1.8 or higher, with $GOPATH set to your preferred directory Installation Ensure Go with the supported version is installed properly: Get the source code Build source code When the project builds successfully, the bytomd and bytomcli binaries should be present in the cmd/bytomd and cmd/bytomcli directories, respectively. Executables The Bytom project comes with several executables found in the cmd directory.

| Command | Description |
| ------- | ----------- |
| bytomd | The bytomd command can help to initialize and launch the bytom domain with custom parameters. Run bytomd --help for command line options. |
| bytomcli | Our main Bytom CLI client. It is the entry point into the Bytom network (main-, test- or private net), capable of running as a full node or archive node (retaining all historical state). It can be used by other processes as a gateway into the Bytom network via JSON-RPC endpoints exposed on top of HTTP, WebSocket and/or IPC transports. See bytomcli --help and the bytomcli Wiki page for command line options. |

Running bytom Currently, bytom is still in active development and a ton of work needs to be done, but we also provide the following content for those eager to do something with bytom. This section won't cover all the commands of bytomd and bytomcli at length; for more information, please see the help of every command, e.g., bytomcli help. Initialize First of all, initialize the node: There are three options for the flag --chain_id: mainnet: connect to the mainnet. testnet: connect to the testnet wisdom. solonet: standalone mode. After that, you'll see config.toml generated; then launch the node. Launch Available flags for the bytomd node: Given that the bytomd node is running, the general workflow is as follows: create a key, then you can create an account and an asset; send a transaction, i.e., build, sign, and submit it; query all kinds of information, such as available keys, accounts, balances, transactions, etc. Dashboard Access the dashboard: In Docker Ensure your Docker version is 17.05 or higher. For the usage please refer to running-in-docker-wiki. Contributing Thank you for considering helping out with the source code! Any contributions are highly appreciated, and we are grateful for even the smallest of fixes! If you run into an issue, feel free to file an issue in this repository. We are glad to help! License AGPL v3

dcai-lab
github
LLM Vibe Score0.541
Human Vibe Score0.3372420543528328
dcai-courseMar 8, 2025

dcai-lab

Lab assignments for Introduction to Data-Centric AI This repository contains the lab assignments for the Introduction to Data-Centric AI class. Contributions are most welcome! If you have ideas for improving the labs, please open an issue or submit a pull request. If you're looking for the 2023 version of the labs, check out the 2023 branch. [Lab 1: Data-Centric AI vs. Model-Centric AI][lab-1] The [first lab assignment][lab-1] walks you through an ML task of building a text classifier, and illustrates the power (and often simplicity) of data-centric approaches. [lab-1]: datacentricmodel_centric/Lab%20-%20Data-Centric%20AI%20vs%20Model-Centric%20AI.ipynb [Lab 2: Label Errors][lab-2] [This lab][lab-2] guides you through writing your own implementation of automatic label error identification using Confident Learning, the technique taught in [today’s lecture][lec-2]. [lab-2]: label_errors/Lab%20-%20Label%20Errors.ipynb [lec-2]: https://dcai.csail.mit.edu/lectures/label-errors/ [Lab 3: Dataset Creation and Curation][lab-3] [This lab assignment][lab-3] is to analyze an already collected dataset labeled by multiple annotators. [lab-3]: dataset_curation/Lab%20-%20Dataset%20Curation.ipynb [Lab 4: Data-centric Evaluation of ML Models][lab-4] [This lab assignment][lab-4] is to try improving the performance of a given model solely by improving its training data via some of the various strategies covered here. [lab-4]: datacentricevaluation/Lab%20-%20Data-Centric%20Evaluation.ipynb [Lab 5: Class Imbalance, Outliers, and Distribution Shift][lab-5] [The lab assignment][lab-5] for this lecture is to implement and compare different methods for identifying outliers. For this lab, we've focused on anomaly detection. You are given a clean training dataset consisting of many pictures of dogs, and an evaluation dataset that contains outliers (non-dogs). Your task is to implement and compare various methods for detecting these outliers. You may implement some of the ideas presented in [today's lecture][lec-5], or you can look up other outlier detection algorithms in the linked references or online. [lab-5]: outliers/Lab%20-%20Outliers.ipynb [lec-5]: https://dcai.csail.mit.edu/lectures/imbalance-outliers-shift/ [Lab 6: Growing or Compressing Datasets][lab-6] [This lab][lab-6] guides you through an implementation of active learning. [lab-6]: growing_datasets/Lab%20-%20Growing%20Datasets.ipynb [Lab 7: Interpretability in Data-Centric ML][lab-7] [This lab][lab-7] guides you through finding issues in a dataset’s features by applying interpretability techniques. [lab-7]: interpretable_features/Lab%20-%20Interpretable%20Features.ipynb [Lab 8: Encoding Human Priors: Data Augmentation and Prompt Engineering][lab-8] [This lab] guides you through prompt engineering, crafting inputs for large language models (LLMs). With these large pre-trained models, even small amounts of data can make them very useful. This lab is also [available on Colab][lab-8-colab]. [lab-8]: promptengineering/LabPrompt_Engineering.ipynb [lab-8-colab]: https://colab.research.google.com/drive/1cipH-u6Jz0EH-6Cd9MPYgY4K0sJZwRJq [Lab 9: Data Privacy and Security][lab-9] The [lab assignment][lab-9] for this lecture is to implement a membership inference attack. You are given a trained machine learning model, available as a black-box prediction function. Your task is to devise a method to determine whether or not a given data point was in the training set of this model. 
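As one illustrative baseline for the membership inference lab (not the lab's reference solution), an attacker can simply threshold the black-box model's confidence on the queried example; the predict function, label encoding, and threshold below are assumptions made for the sketch.

```python
import numpy as np

def membership_inference(predict_proba, x, y_true, threshold=0.9):
    """Guess whether (x, y_true) was part of the model's training set.

    predict_proba: black-box function returning a vector of class probabilities.
    threshold: confidence cutoff; in practice it is tuned, e.g. via shadow models.
    """
    probs = np.asarray(predict_proba(x))
    confidence = probs[y_true]
    # Overfit models tend to be noticeably more confident on training examples.
    return confidence >= threshold
```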
You may implement some of the ideas presented in [today’s lecture][lec-9], or you can look up other membership inference attack algorithms. [lab-9]: membership_inference/Lab%20-%20Membership%20Inference.ipynb [lec-9]: https://dcai.csail.mit.edu/lectures/data-privacy-security/ License Copyright (c) by the instructors of Introduction to Data-Centric AI (dcai.csail.mit.edu). dcai-lab is free software: you can redistribute it and/or modify it under the terms of the GNU Affero General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version. dcai-lab is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See GNU Affero General Public LICENSE for details.

introduction-to-ai-orchestration-with-langchain-and-llamaindex-3820082
github
LLM Vibe Score0.43
Human Vibe Score0.050863657300783044
LinkedInLearningFeb 28, 2025

introduction-to-ai-orchestration-with-langchain-and-llamaindex-3820082

Introduction to AI Orchestration with LangChain and LlamaIndex This is the repository for the LinkedIn Learning course Introduction to AI Orchestration with LangChain and LlamaIndex. The full course is available from [LinkedIn Learning][lil-course-url]. ![lil-thumbnail-url] Are you ready to dive into the world of AI applications? This course was designed for you. AI orchestration frameworks let you step back from the details of artificial intelligence tools and APIs and instead focus on building more general, effective systems that solve real-world problems. Join instructor M. Joel Dubinko as he explores the business benefits of AI orchestration: faster development, smarter interfaces, lower costs, and more. This course provides an overview of AI fundamentals and key capabilities, like accessing external tools and databases, with a special focus on exploring local models running on your own hardware, alongside or instead of cloud services like those from OpenAI. Every step of the way, Joel offers hands-on demonstrations of two industry-leading frameworks: LangChain and LlamaIndex. By the end of this course, you'll be prepared to start building chatbots, intelligent agents, and other useful tools, while monitoring for errors and troubleshooting as you go. Welcome to the course! AI is a fast-changing field, so be sure to check this repo for newer versions of the sample code. Installing Clone this repository onto your local machine using the terminal (Mac), CMD (Windows), or a GUI tool like SourceTree. Ensure you have Python 3.10 or later (version 3.11 recommended). To prevent conflicts with other installed software on your computer, the author recommends setting up a virtual environment as follows: python3.11 -m venv .venv Activate the virtual environment with one of these commands: Install the necessary Python packages: (use the upgrade flag to ensure you have current versions) Specific projects in this course might have additional optional requirements. If so, it will be noted within the relevant video. Updates Recent versions of LM Studio have changed the UI from what's shown in the videos. These are generally welcome improvements; for example, the maximum context length and other model parameters are viewable in the sidebar. Recent versions of LlamaIndex have changed their import and package structure in a way that breaks existing code. In many cases, you can fix imports as follows (an illustrative sketch covering both the LlamaIndex and LangChain import changes appears at the end of this section): Specific third-party components require installing new packages. These will be noted in comments. Example: For code in Chap04, From March 1, 2024, LlamaHub has been deprecated and most projects migrated into LlamaIndex. (sort of--it's complicated) Specifically: Additionally, LlamaIndex ServiceContext has been deprecated and replaced with Settings. See Ch02/rag_llamaindex.py for updated sample code. LangChain too has changed its import structure, though as of this writing it produces warnings rather than errors. In many cases you will need to import from langchain_community or langchain_openai as follows: Instructor M. Joel Dubinko Software Generalist | Consultant | Instructor | Problem Solver Check out my other courses on [LinkedIn Learning][URL-instructor-home].
[lil-course-url]: https://www.linkedin.com/learning/introduction-to-ai-orchestration-with-langchain-and-llamaindex [lil-thumbnail-url]: https://media.licdn.com/dms/image/D560DAQEi6KQmA4fF1Q/learning-public-crop6751200/0/1707936616297?e=2147483647&v=beta&t=3vzvDRzpKq9Nd99ss8r2pqMZmyTOKYgKwk825XoSEHU [URL-instructor-home]: https://www.linkedin.com/learning/instructors/m-joel-dubinko?u=104
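The sketch below illustrates the kind of import changes described in the Updates section. The exact module paths depend on the installed versions of llama-index and langchain, so treat these lines as assumptions to verify against the official migration guides rather than drop-in fixes.

```python
# Illustrative import migrations only; verify against your installed versions.

# LlamaIndex 0.10+: most core classes moved under llama_index.core
# Before: from llama_index import VectorStoreIndex, SimpleDirectoryReader
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader

# LangChain: integrations moved out of the core package
# Before: from langchain.chat_models import ChatOpenAI
from langchain_openai import ChatOpenAI
# Before: from langchain.document_loaders import TextLoader
from langchain_community.document_loaders import TextLoader
```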

aion
github
LLM Vibe Score0.494
Human Vibe Score0.011340905117109681
aionnetworkFeb 28, 2025

aion

Aion Mainstream adoption of blockchains has been limited because of scalability, privacy, and interoperability challenges. Aion is a multi-tier blockchain network designed to address these challenges. Core to our hypothesis is the idea that many blockchains will be created to solve unique business challenges within unique industries. As such, the Aion network is designed to support custom blockchain architectures while providing a trustless mechanism for cross-chain interoperability. The Aion White Papers provides more details regarding our design and project roadmap. This repository contains the main (Java) kernel implementation and releases for the Aion Network. System Requirements Ubuntu 16.04 or a later version Getting Started Blockchain node concept To understand what is blockchain kernel: Node overview Developers If you're interested in building Open Applications, powered by Aion: Visit the Developer site of The Open Application Network : developer.theoan.com If you're interested in making improvements to the Java Implementation of Aion: Refer to the Build Aion kernel from source wiki for information on building this source code to a native binary or Docker image Refer to the Installation wiki for a guide on installing and configuring the kernel. The Owner's Manual wiki will include further instructions and details on working with the kernel. Please refer to the wiki pages for further documentation on mining/validating, using the Web3 API, command line options, etc. Miners/Validators If you're interested in being a validator on the Aion networks, refer to our Validator Docs Users If you're interested in interacting with dApps and using Aion, refer to our Aion Desktop Wallet Docs FAQ Where can I store my Aion? We recommend using the web-based Aion Wallet; more information can be found in “Docs”). Where can I stake my Aion? You can use the original staking interface which has support for staking pool operators, or the web-based Aion Wallet. Where can I check on a transaction on The Open Application Network? You can visit either the web-based Aion Wallet or the Aion Dashboard to view a transaction on the network. Where can I see the current network performance of The Open Application Network? You can visit the Aion Dashboard to see how the Open Application Network is performing. What should I do if the desktop wallet or the web based wallet are not functioning properly? First check in with the community on the community subreddit. If the community is not able to assist then you can submit a ticket through Github. The Open Application Network is currently providing support to help maintain the network; where can I see the funds that The Open Application Network has mined or received as a stake reward? All funds mined or rewarded for staking that the foundation receives are burned to this address: 0x0000000000000000000000000000000000000000000000000000000000000000 users can check the totals burned via the Aion Dashboard here. What is the total circulating supply of Aion? To view the current total circulating supply of Aion you can use the Aion Watch tool located here. Which networks are supported? The Mainnet network is supported. To view the dashboards for this networks use these links: Mainnet How can I export a list of my transactions? If you would like to download a copy of your transaction history you can use https://mainnet.theoan.com and search for your public address. 
In the bottom right of your screen is a “Download this Account” button which will allow you to select a date range and download a .csv file containing your transactions. Where can I access a copy of The OAN and Aion Brand Guidelines? The OAN and Aion Brand Guidelines can be located here they can be used by the community to create brand aligned content. My Ledger doesn’t seem to be recognized with applications in the Chrome Browser (Staking Interface or Wallet) When using your Ledger hardware wallet with Aion installed to access an account VIA the Chrome browser, users will need to enable the Aion contract on their Ledger device. This can be done by selecting: Aion > Setting > enable Contract. What happened to the Aiwa chrome extension wallet? Aiwa was owned and operated by a third-party organization called BlockX Labs, Aiwa was funded by a community grant during its lifespan. However, BlockX Labs is now reorganizing and will no longer support Aiwa. Usage of Aiwa has decreased significantly with other tools such as the web based wallet now available so the decision was made to deprecate it. I am unable to undelegate my staked Aion In order to undelegate your Aion: – You must have a sufficient Aion balance to perform the undelegation transaction (a minimum of 0.02 Aion is required for the transaction fee) – Your balance will be updated after a lock-up period of 8640 blocks (approximately 24 hours) – Ensure the amount follows this format: 999,999,999.999999999 – If you are using a ledger, please ensure that your firmware is up to date. – If you are using the desktop interface, ensure that you are using the latest version – For more information view this guide What happened to the swap process to convert ERC-20 Aion to the mainnet? As of January 31, 2022 swapping from ERC20 to Aion mainnet is no longer supported. The original Aion token swap from Ethereum to Aion was completed on December 10, 2018. However, in order to support the community members who missed the original swap deadline a manual process was available, this process has now been retired. Community Channels Newsfeed: @AionNewsfeed Info Bot: @AionTGbot Wiki: reddit.com/r/AionNetwork/Wiki Help Desk: https://helpdesk.theoan.com/ Contact To keep up to date and stay connected with current progress and development, reach out to us on the following channels: Aion Telegram Dispatch Alerts Aion on Twitter Aion Blog License Aion is released under the MIT license

studio
github
LLM Vibe Score0.458
Human Vibe Score0.0031250040522174975
brighticsFeb 13, 2025

studio

Brightics Studio v1.3 !CodeQL !Download Counts !Latest Counts [English] [한국어] MacOS / Linux users can use Brightics Studio by following the installation guide at the bottom of this document. Overview Brightics Studio is a web-based data analysis workflow tool for data scientists. It provides an intuitive user interface, and its interactive GUI lets you find potential insights in your data. Brightics Studio provides an interface for analysis that includes popular Python libraries such as scikit-learn and pandas. Both citizen data scientists and professional data scientists can carry out data analysis projects using Brightics Studio. User-defined functions created with the Brightics Toolkit can be used in Brightics workflows. Chart and report generation features are provided so that data can be visualized in a variety of ways. Documentation Documentation is available on the Brightics homepage. Getting started Brightics Studio can be installed from a release file or a Docker image. Prerequisite Some features that interact with databases require a client library such as Oracle Instant Client. See Installation - docker to use the Brightics Studio Docker image. Installation - release file Download Release files can be downloaded from the GitHub releases or the download page. Running the downloaded file extracts it automatically. The directory layout is as follows: Launch Nothing needs to be prepared before launching; the release includes everything the package requires. Move into the extracted directory and run it. Notes > If the installation path contains Korean characters, the Tokenizer (Korean) function will not work properly. To use this function, install Brightics Studio in a folder whose full path contains no Korean characters. Patch When a new version is released, move the files below into the latest brightics-studio directory to keep your data and projects. Run Brightics Studio pops up in the Chrome browser after running start-brightics.cmd (or start-brightics.sh). If Brightics Studio does not pop up automatically, navigate to http://127.0.0.1:3000 manually to use it. Installation - docker Docker Install Docker in your working environment. Docker Image The Brightics Studio Docker image is available on Docker Hub. Run Stop Security warning Running Brightics Studio starts a web service on the service port (3000), so if it is exposed to the internet without a separate firewall or access control, external users may connect and attempt data exfiltration or hacking. If the environment is reachable from the internet, please use a firewall to restrict access to authorized PCs only. Contact us If you like Brightics Studio, please share your experience and feedback. If you have any questions while using Brightics Studio, do not hesitate to contact brightics.cs@samsung.com. License The Visual Analytics (Web GUI) project is licensed under the terms of the Brightics Visual Analytics LICENSE; please check the Notice below. The others are licensed under the terms of the Apache 2.0 license. Notice The source code of the Web GUI is not yet fully open due to license issues with some of its submodules. Personal use, whether commercial or non-commercial, is allowed, but redistribution is prohibited. See the documentation about this license for more details. We are working hard to solve these issues and soon it will be public. Contributors This project exists thanks to all the people who contribute.

airbnb
github
LLM Vibe Score0.414
Human Vibe Score0.013305067808012168
dmcgloneFeb 4, 2025

airbnb

Notes on Airbnb business in New York and elsewhere ================================================== Disclaimer The script scrapes the Airbnb web site to collect data about the shape of the company's business. No guarantees are made about the quality of data obtained using this script, statistically or about an individual page. So please check your results. Changelog 2014-12-02 Tom Slee More robustness fixes. 2014-09-23 Tom Slee Bug fixes that solve problems where over-eager exception handling caused the script to exit too early. 2014-08-26 Tom Slee Version 2.1 is updated to be able to collect data from Airbnb's updated web site. Not all cities have the new format, but the script should handle both versions. It will not, however, handle cities without neighborhoods. 2014-05-26 Tom Slee Version 2 (May 2014) is much more thorough and efficient about searching Airbnb's web site for a given city and has more options. I have moved it to python 3 for better handling of unicode multi-lingual data. It is also ported to SAP SQL Anywhere to allow more flexible reporting and better concurrency than SQLite can provide. A free developer edition is available from the SAP web site. You may need to configure the python driver following the instructions given in http://dcx.sybase.com/index.html#sa160/en/dbprogramming/pg-python.html. airbnb.py is the python script to collect data. plot.py just produces some charts. airbnb.db is the data. The basic data is in the table room. A complete search of a given city's listings is a "survey" and the surveys are tracked in table survey. Using the script To create the database: python airbnb.py -dbi. This command does two things: initializes a database file (dbnb.db in the current directory) runs the reload.sql script against the database to create the tables, views, and stored procedures that make up the database. No data is added. On Windows, the reload.sql script does not always run. If that fails, try this to create the database tables: dbisql -c "uid=dba;pwd=sql;dbf=dbnb.db;eng=db" From Interactive SQL, click File > Open and choose reload.sql from the current directory. Hit F5 to execute the script and create the tables. Test that you can connect to the database file: run python airbnb.py --dbping and confirm that there are no errors. If there are errors, check the database file setting near the top of the script and change its location. To run a survey: add a city (search area) to the database, by running ./airbnb.py -asa "city-name". It scans the Airbnb web site and adds the neighborhoods for the city. add a survey to the database by running ./airbnb.py -asv "city-name". The command lists the survey_id value that was created. collect the roomids for the survey by running ./airbnb.py -s surveyid. The survey_id can be seen by running ./airbnb -ls. This search loops over neighborhoods, property types, and pages of listings in the Airbnb search pages. fill in the details of the rooms by running ./airbnb -f. If any step fails: If the -s step or the -f step fails (say because the internet connection was lost), you can just run it again, and it will pick up from where it left off without losing data. Continue until the script completes.

LearnAI-KnowledgeMiningBootcamp
github
LLM Vibe Score0.438
Human Vibe Score0.05521136990708693
sithukyaw007Jan 29, 2024

LearnAI-KnowledgeMiningBootcamp

LearnAI: Build an Enterprise Knowledge Mining Solution using the Microsoft AI Platform Build an enterprise scale intelligent search solution for searching business documents using Microsoft Azure and Cognitive Search About this Course In this course, you will learn to build an enterprise search solution by applying knowledge mining approach to search an organization’s business documents like Microsoft Office, PDFs and images using Azure search and Cognitive search skillsets and expose the results via a Bot interface. You will learn to perform entity recognition, image analysis, text translation and indexed search on enterprise business documents using Microsoft Cognitive Services and Azure Search. This approach can be used with almost any Azure service to augment a customer’s scenario involving intelligent search. While this course focusses on Azure and Cognitive search capabilities, a depth course on building Bots and integrating various cognitive services is available here - Building Intelligent Agents and Apps. In this course you will learn Fundamentals of Azure Search and its capabilities. Understand Microsoft Cognitive Search and its key scenarios for using them. Build an enriched data pipeline for search using predefined and custom skillsets: a. Text skills like entity recognition, language detection, text manipulation and key phrase extraction. b. Image skills like OCR. c. Language skills like text translation. d. Content moderation skills to block documents with incompliant content. Use the enriched data pipeline for a knowledge mining solution on business documents within an enterprise. Expose the knowledge mining solution using a bot interface for document search and consumption. Architecture !Architecture Technologies Covered !Technology Industry application Intelligent search is relevant to many major industries. Some are listed below. Retail and health care industries employ chatbots with advanced multi-language support capabilities to service their customers. Retail, Housing and Automotive industries for sales/listing. Entertainment industry uses search for relevant/contextual on-demand streaming. Pre-requisites Fundamental working knowledge of Azure Portal, Functions and Azure Search. Familiarity with Visual Studio. Familiarity with Azure Bots and Microsoft Bot Framework v4. If you do not have any familiarity with the above pre-requisites, please find below links To Read (10 minutes): Visual Studio Tutorial To Read (4 minutes): Azure Functions Overview To Read (10 minutes): Azure Search Overview To Read (7 minutes): Postman Tutorial To Do (30 minutes): CQuickstart Pre-Setup before you attend the class Mandatory To Create: You need a Microsoft Azure account to create the services we use in our solution. You can create a free account, use your MSDN account or use any other subscription where you have permission to create services. To Install: Visual Studio 2017 version version 15.5 or later, including the Azure development workload. To Install: Postman. To call the labs APIs. Course Details Primary Audience: Azure AI Developers, Architects. Secondary Audience: Any professional interested in learning AI. Level This content is designed as an intermediate to advanced level course for AI developers and/or architects. Type This course, in its full form, is designed to be taught in-person but you can also use the materials in a self-paced fashion. There are assignments and multiple reference links throughout the materials that support the concepts and skills you will learn. 
Length Full Course classroom training: 16 hours Related LearnAI Courses Building Intelligent Agents and Apps Course Modules Introduction – Overview of Azure Search, Cognitive Search, Scenarios and industry specific applications. Fundamentals of Azure Search. Architecture – Solution Architecture for building enterprise search solution. Cognitive Search Skillset – Applying text skills. Cognitive Search Skillset – Applying image skills. Cognitive Search Skillset – Applying Language skills. Cognitive Search Skillset – Applying Moderation skills. Build and Integrate a Bot with Cognitive Search API. Group Hands-on Lab to practice skills acquired.
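To give a flavor of the skillset definitions used throughout the modules above, here is a minimal sketch that creates a Cognitive Search skillset with an entity recognition skill and a key phrase skill via the REST API. The service name, admin key, skillset name, and api-version are placeholders, and the exact skill parameters should be checked against the current Azure Search documentation before use.

```python
import requests

# Placeholder values; substitute your own search service name and admin key.
SEARCH_SERVICE = "https://<your-service>.search.windows.net"
API_KEY = "<admin-key>"

skillset = {
    "name": "demo-skillset",
    "description": "Entity recognition and key phrases over business documents",
    "skills": [
        {
            "@odata.type": "#Microsoft.Skills.Text.EntityRecognitionSkill",
            "context": "/document",
            "inputs": [{"name": "text", "source": "/document/content"}],
            "outputs": [{"name": "organizations", "targetName": "organizations"}],
        },
        {
            "@odata.type": "#Microsoft.Skills.Text.KeyPhraseExtractionSkill",
            "context": "/document",
            "inputs": [{"name": "text", "source": "/document/content"}],
            "outputs": [{"name": "keyPhrases", "targetName": "keyPhrases"}],
        },
    ],
}

# Create or update the skillset (api-version is illustrative).
resp = requests.put(
    f"{SEARCH_SERVICE}/skillsets/demo-skillset",
    params={"api-version": "2019-05-06"},
    headers={"api-key": API_KEY, "Content-Type": "application/json"},
    json=skillset,
)
resp.raise_for_status()
print("Skillset created:", resp.status_code)
```

An indexer would then reference this skillset to enrich documents as they are pulled from the data source into the search index.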

responsible-ai-hub
github
LLM Vibe Score0.328
Human Vibe Score0.04251968503937008
Thebbie-ADec 21, 2023

responsible-ai-hub

Responsible AI Hub Welcome to the Responsible AI Hub for Developers with all levels of expertise in AI and Machine Learning. This is a dedicated space to help the community discover relevant training resources and events to learn about Responsible AI. View Hub Website You can visit the hosted Responsible AI Hub site to learn about upcoming training events, or to explore self-guided workshops to skill up on topics like: The Responsible AI Dashboard Azure Content Safety Azure Prompt Flow Build & Preview Site Want to contribute content? Start by making sure you can build and preview the site in a relevant development environment. The project is instrumented with a dev container, making it easy to launch using either Github Codespaces (in the cloud) or Docker Desktop (in your local device). The project is built using the Docusaurus 3 static site generator. Once the container is running, use these commands to build and preview the site: You should see something like this: You can now open the browser to that URL to see the site in preview mode. As you make changes to the content, the site preview will automatically refresh to show those updates. To learn more about how the website is configured and structured, see the Docusaurus documentation. Provide Feedback Have comments or questions? Post an Issue to let us know how we can improve the content to support you better, on your learning journey. TODO 🚧 Updating SUPPORT.MD as required Review security processes in SECURITY.MD Contributing This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com. When you submit a pull request, a CLA bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA. This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments. Trademarks This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow Microsoft's Trademark & Brand Guidelines. Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos are subject to those third-party's policies.