Show HN: Otto-m8 – A low code AI/ML API deployment Platform
3 by farhan0167 | 0 comments on Hacker News.
Hi all, I've been working on a low-to-no-code platform that lets you spin up deep learning workloads (LLMs, Hugging Face models, etc.), interconnect them, and deploy them as APIs. The idea came up in early September while I was experimenting with combining a Hugging Face BERT model with an LLM at work, and I realized it would be great to be able to do that instantly, especially for a prototype. At the time, I was considering a platform that would help you train deep learning models without any code; my observation was that much of the code required to train or even run inference on HF models has matured significantly. But before solving that problem, I wanted to solve inference.

Initially inspired by n8n and AWS CloudFormation, I built otto-m8 (it translates to "automate"). Given a JSON payload that lists all the resources and how each model is interconnected, it launches the workflow as a one-off API the user can query. And thanks to React Flow, a UI was something I couldn't pass up building. As I built it out, I didn't want to miss the LLM and agent side either: with otto-m8 today, you can launch complex workflows by interconnecting HF models and LLMs (currently it supports OpenAI and Ollama only).

But I'd like it to be more than that. At its core, every workflow is an input-process-output model: inputs get processed, and there's an output. With the way things are set up, one can integrate almost anything and make it interconnectable.

Project link: https://ift.tt/LlYMz63 Let me know what you guys think; I really would love feedback!
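To make the input-process-output idea concrete, here is a minimal sketch of what a workflow payload and a call to the resulting API might look like. The block types, field names, and endpoint below are invented for illustration and are not otto-m8's actual schema.

    # Hypothetical workflow description: block types, fields, and the endpoint
    # are illustrative only, not otto-m8's actual JSON schema.
    import requests

    workflow = {
        "name": "sentiment-then-summarize",
        "blocks": [
            {"id": "input", "type": "text_input"},
            {"id": "classifier", "type": "huggingface",
             "model": "distilbert-base-uncased-finetuned-sst-2-english",
             "source": "input"},
            {"id": "llm", "type": "openai", "model": "gpt-4o-mini",
             "prompt": "Summarize the text and mention its sentiment.",
             "source": ["input", "classifier"]},
            {"id": "output", "type": "output", "source": "llm"},
        ],
    }

    # Assume the deployed workflow is exposed as a one-off HTTP endpoint.
    resp = requests.post(
        "http://localhost:8000/workflows/sentiment-then-summarize/run",
        json={"text": "The new release is fantastic."},
    )
    print(resp.json())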
Sunday, December 8, 2024
New top story on Hacker News: Show HN: Grow Bluesky – A curated collection of the best tools for Bluesky users
Show HN: Grow Bluesky – A curated collection of the best tools for Bluesky users
10 by skaplich | 0 comments on Hacker News.
If you're building a service for Bluesky, share it in the comments, and I'll add it to Grow Bluesky
Sunday, December 1, 2024
New top story on Hacker News: Show HN: Vicinity – Fast, Lightweight Nearest Neighbors with Flexible Back Ends
Show HN: Vicinity – Fast, Lightweight Nearest Neighbors with Flexible Back Ends
10 by Pringled | 0 comments on Hacker News.
We've just open-sourced Vicinity, a lightweight approximate nearest neighbor (ANN) search package that allows for fast experimentation with, and comparison of, a large number of well-known algorithms.

Main features:
- Lightweight: the base package only uses NumPy.
- Unified interface: use any of the supported algorithms and backends through a single interface; HNSW, Annoy, FAISS, and many more algorithms and libraries are supported.
- Easy evaluation: measure queries per second versus recall for your backend with a simple function.
- Serialization: save and load your index for persistence.

After working with a large number of ANN libraries over the years, we found it increasingly cumbersome to learn the interface, features, quirks, and limitations of every library. After writing custom evaluation code to measure speed and performance for the hundredth time to compare libraries, we decided to build Vicinity as a way to use many algorithms and libraries through one simple, unified interface that allows quick comparison and evaluation.

We are curious to hear your feedback! Are there any algorithms you use that are missing? Any extra evaluation metrics that would be useful?
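As a side note on the evaluation idea above (queries per second versus recall), here is a minimal sketch of how such a measurement can be done with NumPy brute force as ground truth. It is a generic illustration and does not use Vicinity's actual API; the approx_search callable stands in for whatever backend is being tested.

    # Generic recall/QPS measurement against brute-force ground truth (NumPy only).
    # This illustrates the evaluation idea; it does not use Vicinity's API.
    import time
    import numpy as np

    def exact_knn(index_vecs, query, k):
        dists = np.linalg.norm(index_vecs - query, axis=1)
        return np.argsort(dists)[:k]

    def evaluate(index_vecs, queries, approx_search, k=10):
        truth = [exact_knn(index_vecs, q, k) for q in queries]   # ground truth, untimed
        start = time.perf_counter()
        results = [approx_search(q, k) for q in queries]         # backend under test, timed
        elapsed = time.perf_counter() - start
        hits = sum(len(set(r) & set(t)) for r, t in zip(results, truth))
        return hits / (len(queries) * k), len(queries) / elapsed

    rng = np.random.default_rng(0)
    vecs = rng.normal(size=(10_000, 64)).astype("float32")
    queries = rng.normal(size=(100, 64)).astype("float32")
    # Stand-in "approximate" backend: exact search, so recall should be 1.0.
    recall, qps = evaluate(vecs, queries, lambda q, k: exact_knn(vecs, q, k))
    print(f"recall@10={recall:.2f}, qps={qps:.0f}")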
Saturday, November 30, 2024
New top story on Hacker News: Show HN: Jinbase – Multi-model transactional embedded database
Show HN: Jinbase – Multi-model transactional embedded database
4 by alexrustic | 0 comments on Hacker News.
Hi HN! Alex here. I'm excited to show you Jinbase ( https://ift.tt/fQNiKDu ), my multi-model transactional embedded database.

Almost a year ago, I introduced Paradict [1], my take on multi-format streaming serialization. Given its readability, the Paradict text format is de facto an interesting data format for config files. But using Paradict to manage config files would end up cluttering its programming interface and making it confusing for users who still have alternative libraries (TOML, INI file, etc.) dedicated to config files. So I used Paradict as a dependency for KvF (Key-value file format) [2], a new project of mine that focuses on config files with sections.

With its compact binary format, I thought Paradict would be an efficient dependency for a new project that would rely on I/O functions (such as Open, Read, Write, Seek, Tell and Close) to implement a minimalistic yet reliable persistence solution. But that was before I learned that "files are hard" [3]. SQLite, with its transactions, BLOB data type and incremental I/O for BLOBs, seemed like the right giant to stand on for my new project.

Jinbase started small as a key-value store and ended up as a multi-model embedded database that pushes the boundaries of what we usually do with SQLite. The first transition to a second data model (the depot) happened when I realized that the key-value store was not well suited for cases where a unique identifier is supposed to be generated automatically for each new record, saving the user the burden of providing an identifier that could accidentally collide with and overwrite an existing record. After that, I implemented a search capability that accepts UID ranges for the depot store, timespans (records are automatically timestamped) for both the depot and key-value stores, and GLOB patterns and number ranges for string and integer keys in the key-value store. The queue and stack data models emerged as solutions for use cases where records must be consumed in a specific order; a typical record would be retrieved and deleted from the database in a single transaction unit.

Since SQLite is used as the storage engine, Jinbase supports the relational model de facto. For convenience, all tables related to Jinbase internals are prefixed with "jinbase_", making Jinbase a useful tool for opening legacy SQLite files to add new data models that will safely coexist with the ad hoc relational model.

All four main data models (key-value, depot, queue, stack) support Paradict-compatible data types, such as dictionaries, strings, binary data, integers, datetimes, etc. Under the hood, when the user initiates a write operation, Jinbase serializes (except for binary data), chunks, and stores the data iteratively. A record can be accessed not only in bulk, but also with two levels of partial-access granularity: byte level and field level. While SQLite's incremental I/O for BLOBs is designed to target an individual BLOB column in a row, Jinbase extends this so that, for each record, incremental reads cover all chunks as if they were a single unified BLOB. For dictionary records only, Jinbase automatically creates and maintains a lightweight index of pointers to root fields, which makes it possible to extract the contents of a single field from an arbitrary record, deserialized automatically before being returned.

The most obvious use cases for Jinbase are storing user preferences, persisting session data before exit, order-based processing of data streams, exposing data to other processes, upgrading legacy SQLite files with new data models, and bespoke data persistence solutions. Jinbase is written in Python, is available on PyPI, and you can play with the examples in the README. Let me know what you think about this project.

[1] https://ift.tt/nCZvDqX [2] https://ift.tt/OMHbz7F [3] https://ift.tt/Q3cwUCG
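As background for the incremental BLOB I/O mentioned above: Python's own sqlite3 module (3.11+) exposes it directly. The following is a minimal sketch of that underlying mechanism using plain sqlite3, not Jinbase's API.

    # Plain sqlite3 demonstration of SQLite's incremental BLOB I/O
    # (requires Python 3.11+ for Connection.blobopen). Not Jinbase's API.
    import sqlite3

    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE chunks (id INTEGER PRIMARY KEY, data BLOB)")
    # Reserve 1 KiB of zeroed blob space, then write into it incrementally.
    con.execute("INSERT INTO chunks (id, data) VALUES (1, zeroblob(1024))")

    with con.blobopen("chunks", "data", 1) as blob:
        blob.write(b"hello, incremental world")   # write at offset 0
        blob.seek(7)
        print(blob.read(11))                      # b'incremental'
    con.close()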
Tuesday, November 12, 2024
New top story on Hacker News: Large Language Models in National Security Applications
Large Language Models in National Security Applications
29 by bindidwodtj | 3 comments on Hacker News.
Wednesday, November 6, 2024
New top story on Hacker News: Launch HN: Midship (YC S24) – Turn PDFs and Images into usable data
Launch HN: Midship (YC S24) – Turn PDFs and Images into usable data
12 by maxmaio | 12 comments on Hacker News.
Hey HN, we are Max, Kieran, and Aahel from Midship ( https://midship.ai ). Midship makes it easy to extract data from unstructured documents like PDFs and images. Here's a video showing it in action: https://ift.tt/O8dBo2N?... , and a demo playground (no signup required!) to test it out: https://ift.tt/Gpsjf8O

We started 5 months ago, initially trying to make an AI natural-language workflow builder that would be a simpler alternative to Zapier or Make.com. However, most of our users seemed much more interested in the basic (and not very good) document extraction feature we had. Seeing people spend hours a day manually extracting data from PDFs inspired us to build what has become Midship!

The problem is that despite all our progress in software, huge amounts of business data still live in PDFs and images. Sure, you can OCR them, but getting clean, structured data out is still painful. Most existing tools just give you a blob of markdown, leaving you to figure out which parts matter and how they relate. We've found that combining OCR with language models lets us do something more useful: extract the specific fields and tables that users actually care about. The LLMs help correct OCR mistakes and understand context (like knowing that "Inv#" and "Invoice Number" mean the same thing).

We have two main kinds of users today: non-technical users who extract data via our web app, and developers who use our extraction API. We initially focused on the first group, as they seemed like an underserved part of the market, but we've received a lot of interest from developers who face the same issues. For pricing, we currently charge a monthly SaaS fee per seat for the web app and volume-based pricing for the API.

We're really excited to share what we've built so far and look forward to any feedback from the community!
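As a rough illustration of the OCR-plus-LLM pattern described above (not Midship's implementation), one can ask a chat model to map messy OCR text onto a fixed schema. The model name, prompt, and field names below are assumptions for the sketch; it uses the OpenAI Python SDK and expects an OPENAI_API_KEY in the environment.

    # Illustrative OCR-text -> structured-fields pattern (not Midship's code).
    # Assumes the OpenAI Python SDK (>=1.0); model and schema fields are placeholders.
    import json
    from openai import OpenAI

    client = OpenAI()
    ocr_text = "Inv# 10234  Date 2024-11-01  Total Due: $1,250.00"

    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        response_format={"type": "json_object"},
        messages=[
            {"role": "system",
             "content": "Extract invoice_number, invoice_date, and total from the OCR text. "
                        "Reply with a JSON object using exactly those keys."},
            {"role": "user", "content": ocr_text},
        ],
    )
    print(json.loads(resp.choices[0].message.content))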
Monday, November 4, 2024
New top story on Hacker News: Facebook Building Subsea Cable That Will Encompass the World
Facebook Building Subsea Cable That Will Encompass the World
13 by giuliomagnifico | 1 comment on Hacker News.
Thursday, October 31, 2024
New top story on Hacker News: Show HN: Shimmer – ADHD-adapted body doubling
Show HN: Shimmer – ADHD-adapted body doubling
17 by christalwang | 5 comments on Hacker News.
I'm Chris, one of the co-founders of Shimmer. In 2022, following my ADHD diagnosis, I launched Shimmer ( https://shimmer.care ), a 1:1 ADHD coaching service for adults (HN launch here: https://ift.tt/sIdSyaB ).

One problem we discovered while running 1:1 coaching is that people weren't able to actually follow through (in real life) on the ideas they came up with during their weekly sessions with their coach. There is a concept called body doubling that's popular within the ADHD community: basically, getting things done in the presence of other people. The positive accountability is proven to work. However, our members told us they had tried other body doubling solutions, or attempted to organize it themselves in real life, and none of the solutions stuck.

So we reverse-engineered the productive moments our members described, paired that with the scientific backing on what motivates ADHD-ers, and designed an online body doubling experience for our coaching members that provides a safe but productive space to get things done between weekly sessions. A few of the motivators we infused into the traditional body doubling experience:
1) Newness/novelty: each session has a different guided experience in the break, like breathwork or stretching.
2) Urgency: there's a large visible pomodoro timer on the top left that counts down from 25 minutes.
3) Community: the shared space is ADHD-friendly and has mood check-in and sharing functionality built in, so you don't feel alone.
4) Accountability: there's a task list where each time you check something off it notifies the group, and you can view others' lists as well if they opt in.

Here's a video walking through the product experience: https://ift.tt/XPmcuwa Our body doubling was created and iterated alongside thousands of people with ADHD on our coaching platform over 9+ months of building and iterating with them. We're excited to unveil this experience. If you have ADHD (or executive functioning challenges), we'd love for you to check out coaching and body doubling and give us critical feedback.

Shimmer's pricing: $140/mo. for the Essentials plan (15-min weekly sessions), $230/mo. for the Standard plan (30-min weekly sessions), $345/mo. for the Immersive plan (45-min weekly sessions); all plans start with an additional 25% off the first month and are HSA/FSA-eligible. The reason the price is so high is that this is not a self-guided app or SaaS tool. You're matched with a real, credentialed coach (not AI), and since ADHD coaching is not reimbursed in the US, the price is hard for us to bring down because the largest cost component is the coach's compensation.

We know these prices are still expensive for many people with ADHD. Here are the actions we're taking: (1) we offer needs-based scholarships and aim to have 5% of members on them at any time, (2) we often run fully sponsored scholarships with our partners (over 60 full-ride scholarships and 100 group coaching spots have been disbursed alongside the Asian Mental Health Project, the Government of Canada, and more), and (3) we have aligned our coaching model with Health & Wellness Coaching, which is expected to be reimbursed in the coming years. If there are ways we can further drive down the cost, please reach out to me directly at chris@shimmer.care.
Wednesday, October 30, 2024
New top story on Hacker News: Show HN: AI OmniGen – AI Image Generator with Consistent Visuals
Show HN: AI OmniGen – AI Image Generator with Consistent Visuals
18 by lcorinst | 3 comments on Hacker News.
AI OmniGen is an advanced AI image generator, offering identity preservation for consistent subject representation and seamless image editing for refined, customized visuals.
Saturday, October 26, 2024
New top story on Hacker News: Ask HN: Escape from TCR? Family shared SMS
Ask HN: Escape from TCR? Family shared SMS
21 by tcrhelpforsms | 22 comments on Hacker News.
My wife is a bit particular, and when we moved in together she wanted there to be a household phone. So we signed up with a VoIP service and things worked. When we bought a home together, life got more complicated: there were utilities, contractors, service people, deliveries, etc. One day we realized that some of these folks would frequently send texts to our house number instead of calling, which we would never receive. So we upgraded our VoIP service/plan with an SMS option, and it was great. We could both keep up to date on all the various things, and either of us could respond as our schedules allowed, on any of our devices. We could even send images over MMS to make communication clearer ("See, this is exactly where things are going wrong", etc.).

And as with many things in life, human nature has pretty much ruined it. To combat SMS spam, as far as we can tell, all business SMS usage now needs to be approved through https://ift.tt/WmZ2NKV . And in recent years, it has been brought to our attention that the service we are paying for is really intended for businesses. We are increasingly hitting weird blocks on messaging due to being grandfathered in before TCR. It has gotten to the point where we are on the verge of registering a business just to keep our functionality. Except even having a registered business may not be enough, because TCR requires all kinds of detailed information about your business and its practices.

So we are at a bit of a loss what else to do here. We could concede and one of us gets the responsibility of the "designated" cell phone number to handle everything household related. It is further complicated by my being an Apple fanboi and her a Windows/Android zealot. If we both used Apple devices we could maybe finagle something by buying a dedicated iPhone for the house number and taking advantage of the cross-device Messages interoperability. There is Google Voice, but even ignoring my dislike for them, I don't feel they can necessarily be trusted not to drop the product or ban us for arbitrary reasons. Additionally, when it comes to actual phone calls, while Google Voice can forward calls to a number, I am not aware of any option to use it directly with regular handsets the way many VoIP providers offer.

Are there really no companies in this space serving personal users? If not, that would be an excellent business niche for the entrepreneurial folks in the audience.
Friday, October 11, 2024
New top story on Hacker News: Regular expression search with suffix arrays (2015)
Regular expression search with suffix arrays (2015)
5 by intrepidsoldier | 1 comment on Hacker News.
Wednesday, October 9, 2024
New top story on Hacker News: Show HN: FinetuneDB – AI fine-tuning platform to create custom LLMs
Show HN: FinetuneDB – AI fine-tuning platform to create custom LLMs
9 by felix089 | 3 comments on Hacker News.
Hey HN! We're building FinetuneDB ( https://finetunedb.com/ ), an LLM fine-tuning platform. It enables teams to easily create and manage high-quality datasets, and streamlines the entire workflow from fine-tuning to serving and evaluating models with domain experts. You can check out our docs here: ( https://ift.tt/idyDl9f )

FinetuneDB exists because creating and managing high-quality datasets is a real bottleneck when fine-tuning LLMs. The quality of your data directly impacts the performance of your fine-tuned models, and existing tools didn't offer an easy way for teams to build, organize, and iterate on their datasets. We've been working closely with our pilot customers, both AI startups and more traditional businesses like a large newspaper, which is fine-tuning models on its articles to automate content generation in its tone of voice.

The platform is built with an end-to-end workflow in mind: dataset building, fine-tuning, serving, and evaluating outputs. The centerpiece is a version-controlled, no-code dataset manager where you can upload existing datasets in JSONL, use production data, or collaborate with domain experts to create high-quality datasets for custom use cases. We also offer evaluation workflows that allow non-technical contributors to annotate data, review model outputs, and refine responses (LLM-as-judge is also available).

We offer:
- A free tier for developers and hobbyists who want to streamline dataset management.
- A business tier with full feature access for teams, using per-seat pricing.
- A custom tier for model hosting, custom integrations, and self-hosting.

Most users still use OpenAI models, but if you're working with open-source LLMs, we offer pay-as-you-go pricing for serverless inference for Llama and Mistral models, with up to €100 in free credits to get started.

We're in public beta right now, so any feedback, whether it's about features, usability, or anything else, would be incredibly valuable. If you've worked on fine-tuning models before or are curious about custom LLMs, we'd love to hear from you. Our goal is to make the fine-tuning process more accessible and help more companies leverage their data and domain experts to create custom LLMs. Thanks for checking it out!
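For readers unfamiliar with the dataset format mentioned above, here is a minimal sketch of writing a chat-style fine-tuning dataset as JSONL, the kind of format OpenAI-style fine-tuning expects. The example record is invented and says nothing about FinetuneDB's internals.

    # Minimal sketch: writing a chat-format fine-tuning dataset as JSONL.
    # The record is an invented example; real datasets come from your own data.
    import json

    examples = [
        {"messages": [
            {"role": "system", "content": "Write headlines in the paper's house style."},
            {"role": "user", "content": "Summarize: city council approves new bike lanes."},
            {"role": "assistant", "content": "Council Greenlights Citywide Bike-Lane Expansion"},
        ]},
    ]

    with open("train.jsonl", "w", encoding="utf-8") as f:
        for ex in examples:
            f.write(json.dumps(ex, ensure_ascii=False) + "\n")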
Wednesday, October 2, 2024
New top story on Hacker News: Show HN: Kameo – a Rust library for building fault-tolerant, async actors
Show HN: Kameo – a Rust library for building fault-tolerant, async actors
24 by tqwewe | 6 comments on Hacker News.
Hi HN, I'm excited to share Kameo, a lightweight Rust library that helps you build fault-tolerant, distributed, and asynchronous actors. If you're working on distributed systems, microservices, or real-time applications, Kameo offers a simple yet powerful API for handling concurrency, panic recovery, and remote messaging between nodes.

Key features:
- Async Rust: each actor runs as a separate Tokio task, making concurrency management simple.
- Remote messaging: seamlessly send messages to actors across different nodes.
- Supervision and fault tolerance: create self-healing systems with actor hierarchies.
- Backpressure support: supports bounded and unbounded mpsc messaging.

I built Kameo because I wanted a more intuitive, scalable solution for distributed Rust applications. I'd love feedback from the HN community and contributions from anyone interested in Rust and actor-based systems. Check out the project on GitHub: https://ift.tt/0AVb7Yz Looking forward to hearing your thoughts!
Saturday, September 28, 2024
New top story on Hacker News: Show HN: Bringing multithreading to Python's async event loop
Show HN: Bringing multithreading to Python's async event loop
16 by nbsande | 3 comments on Hacker News.
This project explores the integration of multithreading into the asyncio event loop in Python. While this was initially built with enhancing CPU utilization for FastAPI servers in mind, the approach can be used with more general async programs too. If you’re interested in diving deeper into the details, I’ve written a blog post about it here: https://ift.tt/n7mrtEf
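For contrast with the approach in the post, the standard library already lets async code offload blocking or CPU-bound work to a thread pool; the linked project goes further and integrates threads into the event loop itself. A minimal stdlib-only sketch of the baseline:

    # Stdlib baseline: offload blocking work to a thread from async code.
    # (The linked project integrates threads into the event loop itself;
    # this only shows the standard asyncio.to_thread approach for contrast.)
    import asyncio
    import hashlib

    def cpu_bound(data: bytes) -> str:
        # Simulate heavy work that would otherwise block the event loop.
        for _ in range(100_000):
            data = hashlib.sha256(data).digest()
        return data.hex()

    async def handler() -> str:
        # Runs cpu_bound in a worker thread; the loop stays free for other tasks.
        return await asyncio.to_thread(cpu_bound, b"payload")

    print(asyncio.run(handler())[:16])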
Friday, September 20, 2024
New top story on Hacker News: Show HN: EloqKV – Scalable distributed ACID key-value database with Redis API
Show HN: EloqKV – Scalable distributed ACID key-value database with Redis API
10 by hubertzhang | 17 comments on Hacker News.
We're thrilled to unveil EloqKV, a lightning-fast distributed key-value store with a Redis-compatible API. Built on a new database architecture called the Data Substrate, EloqKV brings significant innovations to database design. Here are the unique features that make it stand out:

- Flexible deployment: run it as a single-node in-memory KV cache or a larger-than-memory database, or scale to a highly available, distributed transactional database with ease.
- High performance: achieves performance comparable to top in-memory databases like Redis and DragonflyDB, while significantly outperforming durable KV stores like KVRocks.
- Full ACID transactions: ensures complete transactional integrity, even in distributed environments.
- Independent resource scaling: scale CPU, memory, storage, and logging resources independently to meet your needs.

We'd love to hear your thoughts and feedback!
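Because the API is Redis-compatible, an ordinary Redis client should work against it. The sketch below uses redis-py and assumes an EloqKV node listening on the default Redis port with these standard commands supported; both points are assumptions, not documented behavior.

    # Assumes a Redis-compatible (e.g. EloqKV) server on localhost:6379 and redis-py installed.
    # The commands shown are ordinary Redis commands; support on EloqKV is assumed.
    import redis

    r = redis.Redis(host="localhost", port=6379, decode_responses=True)
    r.set("user:42:name", "Ada")
    print(r.get("user:42:name"))

    # MULTI/EXEC-style transaction via a pipeline.
    pipe = r.pipeline(transaction=True)
    pipe.incr("user:42:logins")
    pipe.set("user:42:last_seen", "2024-09-20")
    print(pipe.execute())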
Tuesday, September 17, 2024
New top story on Hacker News: Hezbollah pager explosions kill several people in Lebanon
Hezbollah pager explosions kill several people in Lebanon
127 by logicchains | 880 comments on Hacker News.
Sunday, September 15, 2024
New top story on Hacker News: Ask HN: Former gifted children with hard lives, how did you turn out?
Ask HN: Former gifted children with hard lives, how did you turn out?
64 by askHN2024 | 51 comments on Hacker News.
For various life reasons, I developed depression, and I am autistic and have ADHD (diagnosed, treated). I didn't get treatment for my ADHD until after college. The point of this Ask HN isn't to start a pity party; I am just gathering some data on how others like me are doing. I have an ACE score of 6. Currently, I look accomplished to people, but I don't feel accomplished. My estimated net worth is maybe 300K or more with home equity. My biggest concern with my quality of life is that I don't feel safe (don't ask). So what's your ACE score, and how satisfied are you with your life? ACE quiz: https://ift.tt/sBuaRHW...
Monday, September 2, 2024
New top story on Hacker News: Ask HN: Who wants to be hired? (September 2024)
Ask HN: Who wants to be hired? (September 2024)
16 by whoishiring | 83 comments on Hacker News.
Share your information if you are looking for work. Please use this format:

Location:
Remote:
Willing to relocate:
Technologies:
Résumé/CV:
Email:

Please only post if you are personally looking for work. Agencies, recruiters, job boards, and so on, are off topic here. Readers: please only email these addresses to discuss work opportunities. There's a site for searching these posts at https://ift.tt/XD0IO8j .
Sunday, September 1, 2024
New top story on Hacker News: Show HN: Shehzadi in Peril – My first ever game
Show HN: Shehzadi in Peril – My first ever game
10 by sh4jid | 3 comments on Hacker News.
Hello HN! This is the first game I ever built. It's very simple, but I'm still kind of proud of it because all the pixel art is original. Thanks for taking a look! GitHub link: https://ift.tt/fDOzXPU
Wednesday, August 28, 2024
New top story on Hacker News: Show HN: "Claude Artifacts" but creating real web apps
Show HN: "Claude Artifacts" but creating real web apps
20 by antonoo | 8 comments on Hacker News.
Hey Hacker News! Launching gptengineer.app into beta today. It's like Claude Artifacts, but:
- you can edit the code in your favorite IDE (two-way GitHub sync)
- it installs npm packages
- it automatically picks up build and runtime errors and fixes them
- it's very fast, built with Rust

The full-stack capabilities are built on Supabase (we prefer not to have to handle auth and user data at this point, so this is owned by the user). The seed for this project was an open source experiment; I posted about that previously here: https://ift.tt/oNFa1Pt Would love feedback if you give it a try!
Monday, August 26, 2024
New top story on Hacker News: In 2024, it really is better to run a startup in SF
In 2024, it really is better to run a startup in SF
25 by iancmceachern | 23 comments on Hacker News.
Sunday, August 25, 2024
New top story on Hacker News: Why don't we have personalized search engines?
Why don't we have personalized search engines?
16 by enether | 16 comments on Hacker News.
- Search as it is today sucks
- Google is an ad engine, not a search engine
- SEO is gamed all the time

The end result is search results that aren't that valuable. Why isn't there a tool that allows me to:
- search good content I've read
- search curated content (from other people I trust)
- search books and other paid material I have bought
- search my notes (which are scattered across 5 apps)

All in one?
Tuesday, August 20, 2024
New top story on Hacker News: Show HN: Tree-sitter Integration for Swift
Show HN: Tree-sitter Integration for Swift
9 by daspoon | 1 comment on Hacker News.
I have created a Swift package ( https://ift.tt/UdpH35i ) enabling tree-sitter parsers to be written in Swift; specifically, as an array of production rules which map symbol types to pairings of syntax expression and type constructor. A member macro derives a tree-sitter grammar and embeds the generated parser in its expansion. This project is a work in progress, and I will be grateful for any feedback. Thanks, Dave
Monday, August 19, 2024
New top story on Hacker News: Ask HN: Google Ads Rejected My SaaS as Compromised Site
Ask HN: Google Ads Rejected My SaaS as Compromised Site
22 by madjam002 | 15 comments on Hacker News.
I'm a solo founder and really struggling to get Google Ads running for my website. My site always gets flagged as "Compromised Site" and "Malicious Software", even though I've done several checks that show it's clean. Even Google's own Safe Browsing shows it as clean. Their latest feedback after appealing suggests I change from a .co.uk to a .com to resolve the issue, which seems like complete nonsense.

Does anyone have any suggestions on how I can fix this? All of my competitors are running ads, and it's extremely frustrating as a solo founder that I am unable to do so. I will post my website on request, as I'm not sure if I'm allowed to post it.
Thursday, August 15, 2024
New top story on Hacker News: Show HN: Denormalized – Embeddable Stream Processing in Rust and DataFusion
Show HN: Denormalized – Embeddable Stream Processing in Rust and DataFusion
20 by ambrood | 4 comments on Hacker News.
tl;dr: we built an embeddable stream processing engine in Rust using Apache DataFusion; check us out at https://ift.tt/6gYRZ9B

Hey HN, we'd like to showcase a very early version of our embeddable stream processing engine called Denormalized. The rise of DuckDB has made it abundantly clear that even for many workloads of terabyte scale, a single-node system outshines the distributed query engines of the previous generation, such as Spark and Snowflake, in terms of both performance and cost. A lot of the workloads DuckDB is used for were considered "big data" in the previous generation, but no more.

In the context of streaming especially, this problem is more acute. A streaming system is designed to incrementally process large amounts of data over a period of time. Even at the upper end of scale, productionized use cases of stream processing rarely perform compute on more than tens of gigabytes of data at a given time. Even so, the standard stream processing solutions such as Flink involve spinning up a distributed JVM cluster to compute against even the simplest of event streams.

To that end, we're building Denormalized, designed to be embeddable in your applications and to scale up to hundreds of thousands of events per second with a Flink-like dataflow API. While we currently only support Rust, we have plans for Python and TypeScript bindings soon. We're built atop the DataFusion and Arrow ecosystems and currently support streaming joins as well as windowed aggregations on Kafka topics.

Please check out our repo at https://ift.tt/6gYRZ9B We'd love to hear your feedback.
Monday, August 12, 2024
New top story on Hacker News: Postgres.new: In-browser Postgres with an AI interface
Postgres.new: In-browser Postgres with an AI interface
21 by kiwicopple | 8 comments on Hacker News.
Wednesday, August 7, 2024
New top story on Hacker News: Ask HN: How different is AWS/GCP/Azure in everyday work
Ask HN: How different is AWS/GCP/Azure in everyday work
21 by michal_kluczek | 14 comments on Hacker News.
I've almost exclusively been working with GCP for years, with very few occasions when I've created resources in AWS (I manage infra using Terraform). When looking for a job now, it's very common that I'm rejected before TI because I haven't worked with AWS. Is it really so fundamentally different from GCP, or any other cloud provider for that matter? I have a wild feeling that 80-90% of the products all cloud providers offer are the same toys with different names and integration mechanisms. There are surely some quirks that are exclusive to a specific cloud provider, but are there really so many that they stifle your performance?
Tuesday, August 6, 2024
Monday, August 5, 2024
Sunday, August 4, 2024
Saturday, August 3, 2024
Friday, August 2, 2024
Thursday, August 1, 2024
Wednesday, July 31, 2024
New top story on Hacker News: Ask HN: Best Tools for Monorepo?
Ask HN: Best Tools for Monorepo?
7 by bradhe | 3 comments on Hacker News.
I've got a monorepo I'm working in that has a Golang backend with a couple of services and a Next.js front end. Everything lives in the monorepo together. My tooling is super weak, though! For instance, for process management in development I'm using Goreman, a Foreman alternative written in Golang. What's the state of the art for managing local dev processes in monorepos in 2024? Or other tools for managing a monorepo that I might be missing in general?
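For readers unfamiliar with the Foreman/Goreman approach, here is a hypothetical, minimal Python sketch of what a Procfile-style process manager does: start several dev processes and shut them all down together. The commands are placeholders standing in for a Go backend and a Next.js frontend.

```python
# Minimal Procfile-style runner: start each dev process, then shut them all
# down together on Ctrl+C. The commands below are placeholders.
import subprocess
import sys

PROCFILE = {
    "api": ["go", "run", "./cmd/api"],
    "web": ["npm", "run", "dev", "--prefix", "web"],
}

def main() -> None:
    procs = {name: subprocess.Popen(cmd) for name, cmd in PROCFILE.items()}
    try:
        for proc in procs.values():
            proc.wait()                 # block until a process exits or Ctrl+C
    except KeyboardInterrupt:
        pass
    finally:
        for name, proc in procs.items():
            if proc.poll() is None:     # still running
                print(f"stopping {name}", file=sys.stderr)
                proc.terminate()
        for proc in procs.values():
            proc.wait()

if __name__ == "__main__":
    main()
```

Tools like Goreman add log multiplexing, restart policies, and port management on top of this basic start-everything, stop-everything loop.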
Tuesday, July 30, 2024
Monday, July 29, 2024
New top story on Hacker News: Can the moon influence human health? New research
Can the moon influence human health? New research
25 by sabrina_ramonov | 11 comments on Hacker News.
Sunday, July 28, 2024
New top story on Hacker News: Show HN: I built an open-source tool to make on-call suck less
Show HN: I built an open-source tool to make on-call suck less
18 by aray07 | 2 comments on Hacker News.
Hey HN, I am building an open-source platform to make on-call better and less stressful for engineers. We are building a tool that can silence alerts and help with debugging and root cause analysis. We also want to automate the tedious parts of being on-call (running runbooks manually, answering questions on Slack, dealing with PagerDuty). Here is a quick video of how it works: https://youtu.be/m_K9Dq1kZDw

I hated being on-call for a couple of reasons:

* Alert volume: The number of alerts kept increasing over time, and it was hard to maintain existing alerts. This led to a lot of noisy and unactionable alerts. I have lost count of the number of times I got woken up by an alert that auto-resolved five minutes later.
* Debugging: Debugging an alert or a customer support ticket would require me to gain context on a service I might not have worked on before. These companies used many observability tools, which made debugging challenging, and there is always time pressure to resolve issues quickly.

Some more tangential issues also used to take up a lot of on-call time:

* Support: Answering questions from other teams. A lot of the time these questions were repetitive and had been answered before.
* Dealing with PagerDuty: These tools are hard to use; for example, it was hard to schedule an override in PD or set up holiday schedules.

I am building an on-call tool that is Slack-native, since Slack has become the de facto tool for on-call engineers. We heard from a lot of engineers that maintaining good alert hygiene is a challenge. To start, Opslane integrates with Datadog and can classify alerts as actionable or noisy. We analyze your alert history across several signals:

1. Alert frequency
2. How quickly the alerts have resolved in the past
3. Alert priority
4. Alert response history

Our classification is conservative, and it can be tuned as teams gain confidence in the predictions; we want to make sure you aren't accidentally missing a critical alert. Additionally, we generate a weekly report based on all your alerts to give you a picture of your overall alert hygiene. What's next?

1. Building more integrations (Prometheus, Splunk, Sentry, PagerDuty) to keep making on-call quality of life better
2. Helping make debugging and root cause analysis easier
3. Runbook automation

We're still pretty early in development, and we want to make on-call quality of life better. Any feedback would be much appreciated!
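The post does not include code, but a rough Python sketch of the kind of history-based, conservative heuristic it describes might look like the following. The field names and thresholds are invented for illustration and are not Opslane's actual model.

```python
# Toy noise classifier over alert history. Fields and thresholds are
# illustrative only, not Opslane's actual logic.
from dataclasses import dataclass
from typing import List

@dataclass
class AlertEvent:
    auto_resolved: bool          # resolved without human action
    minutes_to_resolve: float
    priority: str                # "low", "medium", "high"
    was_acknowledged: bool

def is_probably_noisy(history: List[AlertEvent]) -> bool:
    """Conservative heuristic: only flag an alert as noisy with strong evidence."""
    if len(history) < 10:
        return False  # not enough history to judge
    n = len(history)
    auto_resolved_rate = sum(e.auto_resolved for e in history) / n
    ack_rate = sum(e.was_acknowledged for e in history) / n
    mostly_low_priority = sum(e.priority == "low" for e in history) / n > 0.8
    quick_to_clear = sum(e.minutes_to_resolve < 5 for e in history) / n > 0.8
    return auto_resolved_rate > 0.9 and ack_rate < 0.1 and (mostly_low_priority or quick_to_clear)
```

The conservative bias shows up as high evidence thresholds and a hard minimum on history length, so an occasionally noisy but sometimes critical alert is never silenced.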
Saturday, July 27, 2024
New top story on Hacker News: Show HN: Semantic Grep – A Word2Vec-powered search tool
Show HN: Semantic Grep – A Word2Vec-powered search tool
13 by arunsupe | 2 comments on Hacker News.
A much-improved new version: it searches for words similar to the query. For example, "death" will find "death", "dying", "dead", "killing"... Incredibly useful for exploring large text datasets where exact matches are too restrictive.
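A minimal Python sketch of the same idea, using gensim word vectors to keep only lines that contain a word sufficiently similar to the query; the model path and similarity threshold are assumptions, and this is not the submitter's Go implementation.

```python
# Toy "semantic grep": print lines that contain a word sufficiently similar
# to the query word. Model path and threshold are assumptions.
import re
import sys
from gensim.models import KeyedVectors   # pip install gensim

def semantic_grep(path: str, query: str, model_path: str, threshold: float = 0.55) -> None:
    vectors = KeyedVectors.load_word2vec_format(model_path, binary=True)
    if query not in vectors:
        raise SystemExit(f"query word {query!r} is not in the model vocabulary")
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            words = set(re.findall(r"[a-zA-Z']+", line.lower()))
            if any(w in vectors and vectors.similarity(query, w) >= threshold
                   for w in words):
                print(line, end="")

if __name__ == "__main__":
    # usage: python sgrep.py corpus.txt death word2vec.bin
    semantic_grep(sys.argv[1], sys.argv[2], sys.argv[3])
```

Lowering the threshold widens the net (more "dying"/"killing"-style matches) at the cost of more false positives.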
Friday, July 26, 2024
Thursday, July 25, 2024
New top story on Hacker News: Show HN: A personalised AI tutor with < 1s voice responses
Show HN: A personalised AI tutor with < 1s voice responses
30 by za_mike157 | 6 comments on Hacker News.
TLDR: We created a personalised Andrej Karpathy tutor that can respond to questions about his YouTube videos with sub-1-second (voice-to-voice) responses. We do this using a voice-enabled RAG agent. See later in the post for the demo link, GitHub repo, and blog write-up.

A few weeks ago we released the world's fastest voice bot, achieving 500ms voice-to-voice response times, including a 200ms delay waiting for the user to stop speaking. After reaching the front page of HN, we thought about how we could take this a step further based on feedback we were getting from the community. Many companies were looking for a way to implement function calling and RAG with voice interfaces while retaining low enough latency. We couldn't find many resources online about how to do this that:

1. Allowed us to achieve sub-second voice-to-voice latency.
2. Were more flexible than existing solutions. Vapi, Retell, and Bland.ai ( http://Bland.ai ) are too opinionated, and since they just orchestrate APIs they incur network latency at every step (see the requirement above).
3. Made the unit economics actually work at scale.

So we decided to create an implementation of our own.

Process: As we mentioned in our previous release, if you want to achieve response times this low, you need to make everything as local as possible. Below was our setup:

- Local STT: Deepgram model
- Local embedding model: Nomic v1.5
- Local vector DB: Turso
- Local LLM: Llama 3B
- Local TTS: Deepgram model

From our previous example, the only new components were:

- Local embedding model: We chose the Nomic Embed Text v1.5 model, which gave a processing time of roughly ~200ms.
- Vector DB: Turso offers local embedded replicas combined with edge databases, which let us achieve 0.01-second read times. Pinecone also gave us good times of 0.043 seconds.

The above changes let us achieve sub-1-second voice-to-voice response times.

Application: With Andrej Karpathy's announcement of Eureka Labs ( https://eurekalabs.ai/ ), a new AI+education company, we thought we would create our very own personalised Andrej tutor. Listen to any of his YouTube lectures; as soon as you start speaking, the video will pause and he will reply. Once your question has been answered, you can tell him to continue with the lecture and the video will automatically start playing again.

Demo: https://ift.tt/l2AEaMJ
Blog: https://ift.tt/dqEGx5n...
GitHub repo: https://ift.tt/vXPlxgd...

For demo purposes:

- We used OpenAI for GPT-4-mini and embeddings (it's cheaper to run on CPUs than GPUs when running demos at scale). These changes add about ~1 second to the response time.
- We used ElevenLabs to clone his voice to make replies sound more realistic. This adds about 300ms to the response time.

Improvements we would like the community to contribute:

- Embed the video frames as well, so that when you ask certain questions it can show you the relevant lecture slide for the same chunk it used as context to answer.
- Insert timestamps into the vector DB so that if a question will be answered later in the lecture, he can let you know.

This unlocks so many use cases in education, employee training, sales, etc. that it would be great to see what the community builds!
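To illustrate the retrieval step of a voice RAG agent like this, here is a hedged Python sketch that embeds transcript chunks, picks the chunk most similar to the question by cosine similarity, and builds a grounded prompt. The hashed bag-of-words embed() is a crude stand-in for a real embedding model such as the Nomic model mentioned above, and none of this uses the authors' actual stack.

```python
# Sketch of the retrieval step in a voice RAG agent: embed transcript chunks,
# pick the chunk most similar to the question, and build a grounded prompt.
import numpy as np

def embed(text: str, dim: int = 256) -> np.ndarray:
    """Stand-in embedding: hashed bag-of-words. Swap in a real model."""
    vec = np.zeros(dim)
    for word in text.lower().split():
        vec[hash(word) % dim] += 1.0
    return vec

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(np.dot(a, b) / denom) if denom else 0.0

def retrieve(question: str, chunks: list[str]) -> str:
    q = embed(question)
    scores = [cosine(q, embed(c)) for c in chunks]  # precompute chunk vectors in practice
    return chunks[int(np.argmax(scores))]

def build_prompt(question: str, context: str) -> str:
    return ("Answer using only the lecture excerpt below.\n\n"
            f"Excerpt:\n{context}\n\nQuestion: {question}\nAnswer:")

if __name__ == "__main__":
    chunks = ["Backpropagation computes gradients layer by layer.",
              "Tokenization splits text into subword units."]
    question = "How are gradients computed?"
    print(build_prompt(question, retrieve(question, chunks)))
```

In the low-latency setup described above, the chunk vectors are precomputed and stored in the vector DB, so only the question embedding and the nearest-neighbour lookup happen on the hot path.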
Wednesday, July 24, 2024
Tuesday, July 23, 2024
New top story on Hacker News: Show HN: Zerox – document OCR with GPT-mini
Show HN: Zerox – document OCR with GPT-mini
14 by themanmaran | 5 comments on Hacker News.
This started out as a weekend hack with gpt-4-mini, using the very basic strategy of "just ask the AI to OCR the document." But this turned out to perform better than our current Unstructured/Textract implementation, at pretty much the same cost. I've tested almost every variant of document OCR over the past year, especially for things like table and chart extraction, and I've found that rules-based extraction has always been lacking. Documents are meant to be a visual representation, after all, with weird layouts, tables, charts, etc., so using a vision model just makes sense! In general, I'd categorize this solution as slow, expensive, and non-deterministic. But 6 months ago it was impossible, and 6 months from now it'll be fast, cheap, and probably more reliable!
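The core "just ask the model to OCR the page" step can be sketched in a few lines with the OpenAI Python SDK. The model name and prompt below are placeholders rather than the project's exact settings, and a PDF would first need to be rendered to page images (for example with pdf2image).

```python
# Sketch of "ask the model to OCR the page": send a page image to a vision
# model and ask for markdown back. Model name and prompt are placeholders.
import base64
from openai import OpenAI   # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ocr_page(image_path: str, model: str = "gpt-4o-mini") -> str:
    with open(image_path, "rb") as fh:
        b64 = base64.b64encode(fh.read()).decode("ascii")
    response = client.chat.completions.create(
        model=model,
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Transcribe this page to markdown. Preserve tables."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{b64}"}},
            ],
        }],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(ocr_page("page-1.png"))
```

This is where the "slow, expensive, non-deterministic" trade-off comes from: every page is a full vision-model call rather than a deterministic layout parser.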
Monday, July 22, 2024
Sunday, July 21, 2024
New top story on Hacker News: Show HN: A fake SMTP server for software integration testing
Show HN: A fake SMTP server for software integration testing
13 by aeaa3 | 0 comments on Hacker News.
This is a side project of mine. Use this as your SMTP server in a test environment to guarantee that your users don't receive test emails. Looking for feedback, especially on the security side.
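For comparison, a minimal fake SMTP server of this kind can be sketched in Python with the aiosmtpd library: it accepts every message, records it in memory, and never delivers anything. This is an illustrative stand-in, not the submitter's project.

```python
# Minimal fake SMTP server: accepts every message, stores it in memory,
# delivers nothing. Illustrative stand-in for integration tests.
from aiosmtpd.controller import Controller   # pip install aiosmtpd
import time

class CapturingHandler:
    def __init__(self) -> None:
        self.messages = []

    async def handle_DATA(self, server, session, envelope):
        self.messages.append({
            "from": envelope.mail_from,
            "to": envelope.rcpt_tos,
            "body": envelope.content.decode("utf-8", errors="replace"),
        })
        return "250 Message accepted for delivery"

if __name__ == "__main__":
    handler = CapturingHandler()
    controller = Controller(handler, hostname="127.0.0.1", port=8025)
    controller.start()
    print("Fake SMTP server listening on 127.0.0.1:8025 (Ctrl+C to stop)")
    try:
        while True:
            time.sleep(1)
    except KeyboardInterrupt:
        controller.stop()
```

Pointing an application's SMTP host at 127.0.0.1:8025 in the test environment guarantees that no real recipient ever sees a test email, and the captured messages can be asserted on directly.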
New top story on Hacker News: Intel says 13th and 14th Gen mobile CPUs are crashing
Intel says 13th and 14th Gen mobile CPUs are crashing
22 by markus_zhang | 2 comments on Hacker News.
Saturday, July 20, 2024
Friday, July 19, 2024
Thursday, July 18, 2024
Wednesday, July 17, 2024
New top story on Hacker News: Show HN: VisCircuit – A Note-Taking Website for Electronics and Circuits
Show HN: VisCircuit – A Note-Taking Website for Electronics and Circuits
8 by darrenyaoyaoyao | 0 comments on Hacker News.
Hi, everyone. I created a note-taking website for electronics and circuits where you can draw circuit diagrams and write text notes at the same time. I am a digital IC designer, and I spend a lot of time self-studying different types of analog and digital circuits. However, I ran into a problem: circuits have many different architectures and are hard to memorize because of the many experiential tips involved. I wanted to document what I learn in my note app, but I found no easy way to draw circuit and block diagrams alongside text notes. This issue has bothered me for a long time, from my master's studies to my current working life, so I decided to solve it by creating a note-taking website specifically for electronics and circuits, called VisCircuit. With VisCircuit, you can easily draw circuit diagrams and block diagrams and write text notes at the same time. I have already used it for two weeks and have noted down things I find hard to remember, such as SRAM, amplifier circuits, and the PCB components of Arduino and Raspberry Pi. I've found this tool really useful for memorizing knowledge about electronics and circuits. VisCircuit is currently open for alpha testing, and I'd like people to use it and give me feedback. Feel free to try it; I would really appreciate hearing what you think about this project, and please leave any suggestions for improvement. Thank you very much.
Tuesday, July 16, 2024
Monday, July 15, 2024