Show HN: Jinbase – Multi-model transactional embedded database
4 by alexrustic | 0 comments on Hacker News.
Hi HN! Alex here. I'm excited to show you Jinbase (https://ift.tt/fQNiKDu), my multi-model transactional embedded database. Almost a year ago, I introduced Paradict [1], my take on multi-format streaming serialization. Given its readability, the Paradict text format is a natural candidate for config files. But using Paradict to manage config files would have cluttered its programming interface and confused users who already have dedicated alternatives (TOML, INI file, etc.). So I used Paradict as a dependency for KvF (Key-value file format) [2], a new project of mine that focuses on config files with sections.

With its compact binary format, I thought Paradict would be an efficient dependency for a new project that would rely on I/O functions (such as Open, Read, Write, Seek, Tell and Close) to implement a minimalistic yet reliable persistence solution. But that was before I learned that "files are hard" [3]. SQLite, with its transactions, BLOB data type and incremental I/O for BLOBs, seemed like the right giant to stand on for my new project.

Jinbase started small as a key-value store and ended up as a multi-model embedded database that pushes the boundaries of what we usually do with SQLite. The transition to the second data model (the depot) happened when I realized that the key-value store was not well suited for cases where a unique identifier should be generated automatically for each new record, sparing the user the burden of providing an identifier that could accidentally collide with, and thus overwrite, an existing record. After that, I implemented a search capability that accepts UID ranges for the depot store, timespans (records are automatically timestamped) for both the depot and key-value stores, and GLOB patterns and number ranges for string and integer keys in the key-value store.

The queue and stack data models emerged as solutions for use cases where records must be consumed in a specific order: a typical record is retrieved and deleted from the database in a single transaction. Since SQLite is used as the storage engine, Jinbase supports the relational model de facto. For convenience, all tables related to Jinbase internals are prefixed with "jinbase_", which makes Jinbase a useful tool for opening legacy SQLite files and adding new data models that safely coexist with the ad hoc relational model.

All four main data models (key-value, depot, queue, stack) support Paradict-compatible data types such as dictionaries, strings, binary data, integers, datetimes, etc. Under the hood, when the user initiates a write operation, Jinbase serializes the data (except binary data), chunks it, and stores it iteratively. A record can be accessed not only in bulk, but also with two levels of partial access granularity: byte-level and field-level. While SQLite's incremental I/O for BLOBs targets an individual BLOB column in a row, Jinbase extends this so that, for each record, incremental reads cover all chunks as if they were a single unified BLOB. For dictionary records only, Jinbase automatically creates and maintains a lightweight index of pointers to root fields, which allows the contents of a single field to be extracted from an arbitrary record and deserialized before being returned.
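To make the chunking and byte-level access ideas concrete, here is a rough sketch of the underlying pattern in plain Python/sqlite3. It is illustrative only, not Jinbase's actual schema or code: the table, column and function names are made up, and where Jinbase uses SQLite's incremental BLOB I/O, the sketch simply selects the chunk rows that overlap the requested byte range.

    # Sketch of chunked storage with byte-level partial reads (not Jinbase's code).
    import sqlite3

    CHUNK_SIZE = 4096  # arbitrary chunk size for the example

    con = sqlite3.connect(":memory:")
    con.execute("""CREATE TABLE demo_chunks (
                       record_id INTEGER,
                       position INTEGER,
                       data BLOB,
                       PRIMARY KEY (record_id, position))""")

    def write_record(record_id, payload):
        # Store the payload iteratively, one chunk per row, in a single transaction.
        with con:
            for position, start in enumerate(range(0, len(payload), CHUNK_SIZE)):
                con.execute("INSERT INTO demo_chunks VALUES (?, ?, ?)",
                            (record_id, position, payload[start:start + CHUNK_SIZE]))

    def read_slice(record_id, offset, length):
        # Byte-level partial access: only the chunks overlapping the range are read,
        # then the requested bytes are cut out as if the chunks were one unified BLOB.
        first, last = offset // CHUNK_SIZE, (offset + length - 1) // CHUNK_SIZE
        rows = con.execute("SELECT data FROM demo_chunks WHERE record_id = ? "
                           "AND position BETWEEN ? AND ? ORDER BY position",
                           (record_id, first, last)).fetchall()
        joined = b"".join(row[0] for row in rows)
        start = offset - first * CHUNK_SIZE
        return joined[start:start + length]

    write_record(1, b"x" * 10_000)
    assert read_slice(1, 4090, 12) == b"x" * 12

The same "single unit of work" idea applies to the queue and stack models described above, where a record is retrieved and deleted atomically. A minimal sketch of that consumption pattern, again with made-up names and plain sqlite3 rather than Jinbase's API:

    # Sketch of the retrieve-and-delete-in-one-transaction consumption pattern.
    import sqlite3

    con = sqlite3.connect(":memory:", isolation_level=None)  # manual transactions
    con.execute("CREATE TABLE demo_queue (id INTEGER PRIMARY KEY AUTOINCREMENT, payload BLOB)")

    def enqueue(payload):
        con.execute("INSERT INTO demo_queue (payload) VALUES (?)", (payload,))

    def dequeue():
        # Take the write lock up front so the read + delete pair is one unit of work.
        con.execute("BEGIN IMMEDIATE")
        try:
            row = con.execute("SELECT id, payload FROM demo_queue "
                              "ORDER BY id LIMIT 1").fetchone()
            if row is not None:
                con.execute("DELETE FROM demo_queue WHERE id = ?", (row[0],))
            con.execute("COMMIT")
        except Exception:
            con.execute("ROLLBACK")
            raise
        return None if row is None else row[1]

    enqueue(b"first")
    enqueue(b"second")
    assert dequeue() == b"first"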
The most obvious use cases for Jinbase are storing user preferences, persisting session data before exit, order-based processing of data streams, exposing data for other processes, upgrading legacy SQLite files with new data models, and bespoke data persistence solutions. Jinbase is written in Python and is available on PyPI, and you can play with the examples in the README. Let me know what you think about this project.

[1] https://ift.tt/nCZvDqX
[2] https://ift.tt/OMHbz7F
[3] https://ift.tt/Q3cwUCG
Tuesday, November 12, 2024
New top story on Hacker News: Large Language Models in National Security Applications
Large Language Models in National Security Applications
29 by bindidwodtj | 3 comments on Hacker News.
Wednesday, November 6, 2024
New top story on Hacker News: Launch HN: Midship (YC S24) – Turn PDFs and Images into usable data
Launch HN: Midship (YC S24) – Turn PDFs and Images into usable data
12 by maxmaio | 12 comments on Hacker News.
Hey HN, we are Max, Kieran, and Aahel from Midship (https://midship.ai). Midship makes it easy to extract data from unstructured documents like PDFs and images. Here's a video showing it in action: https://ift.tt/O8dBo2N?... , and a demo playground (no signup required!) to test it out: https://ift.tt/Gpsjf8O

We started 5 months ago initially trying to build an AI natural-language workflow builder that would be a simpler alternative to Zapier or Make.com. However, most of our users seemed much more interested in the basic (and not very good) document extraction feature we had. Seeing people spend hours a day manually extracting data from PDFs inspired us to build what has become Midship!

The problem is that despite all our progress in software, huge amounts of business data still live in PDFs and images. Sure, you can OCR them, but getting clean, structured data out is still painful. Most existing tools just give you a blob of markdown, leaving you to figure out which parts matter and how they relate. We've found that combining OCR with language models lets us do something more useful: extract the specific fields and tables that users actually care about. The LLMs help correct OCR mistakes and understand context (like knowing that "Inv#" and "Invoice Number" mean the same thing).

We have two main kinds of users today: non-technical users who extract data via our web app, and developers who use our extraction API. We were initially focused on the first group, as they seemed like an underserved part of the market, but we've received a lot of interest from developers who face the same issues. For pricing, we currently charge a monthly SaaS fee per seat for the web app and volume-based pricing for the API.

We're really excited to share what we've built so far and look forward to any feedback from the community!
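As a generic illustration of the schema-guided extraction idea described above (not Midship's actual pipeline), here is a short sketch; call_llm is a placeholder for whatever model API is used, and the schema and field names are made up.

    # Generic sketch of schema-guided extraction from OCR text (not Midship's code).
    # call_llm is a placeholder for an actual model call (hosted API, local model, ...).
    import json

    SCHEMA = {
        "invoice_number": "string",   # should also match labels like "Inv#" or "Invoice No."
        "invoice_date": "YYYY-MM-DD",
        "total_amount": "number",
    }

    def extract_fields(ocr_text, call_llm):
        prompt = ("Extract the following fields from the document text below and "
                  "answer with JSON only. Field labels may vary (e.g. 'Inv#' means "
                  "invoice_number); use null for missing fields.\n"
                  f"Schema: {json.dumps(SCHEMA)}\n\nDocument:\n{ocr_text}")
        return json.loads(call_llm(prompt))

    # Stubbed model call, just to show the shape of the result:
    fake_llm = lambda _prompt: ('{"invoice_number": "A-1042", '
                                '"invoice_date": "2024-11-06", "total_amount": 129.5}')
    print(extract_fields("Inv# A-1042  Date: 11/06/2024  Total: $129.50", fake_llm))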
Monday, November 4, 2024
New top story on Hacker News: Facebook Building Subsea Cable That Will Encompass the World
Facebook Building Subsea Cable That Will Encompass the World
13 by giuliomagnifico | 1 comment on Hacker News.