Database Protocols Are Underwhelming
15 by PaulHoule | 1 comment on Hacker News.
Saturday, April 5, 2025
Show HN: Make SVGs interactive in React with 1 line
6 by shantingHou | 1 comment on Hacker News.
Hey HN, I built svggles (npm: interactive-illustrations), a React utility that makes it easy to add playful, interactive SVGs to your frontend. It supports mouse-tracking, scroll, hover, and other common interactions, and it's designed to be lightweight and intuitive for React devs.

The inspiration came from my time playing with p5.js: I loved how expressive and fun it was to create interactive visuals, and I wanted to bring that kind of creative freedom to everyday frontend work, in a way that fits naturally into the React ecosystem.

My goal is to help frontend developers make their UIs feel more alive: not just functional, but fun. I also know creativity thrives in community, so it's open source and I'd love to see contributions from artists, developers, or anyone interested in visual interaction.

Links:
- Website + Docs: svggles.vercel.app
- GitHub: github.com/shantinghou/interactive-illustrations
- NPM: interactive-illustrations

Let me know what you think: ideas, feedback, and contributions are all welcome.
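The post does not show the library's actual API, so as a rough illustration of the technique behind mouse-tracking SVGs (not svggles' real interface), here is a minimal sketch of the core math: given an element's center and the cursor position, compute the rotation angle that makes the element "look at" the pointer.

```typescript
// Map a cursor position to a rotation angle (in degrees) so an SVG
// element appears to track the pointer. Pure math, framework-agnostic.
function pointAt(cx: number, cy: number, mouseX: number, mouseY: number): number {
  // atan2 takes (dy, dx) and returns radians in (-PI, PI]; convert to degrees.
  return (Math.atan2(mouseY - cy, mouseX - cx) * 180) / Math.PI;
}
```

In a React component, an `onMouseMove` handler could feed this angle into a `transform={`rotate(${angle} ${cx} ${cy})`}` attribute on an SVG group; libraries like this one presumably wrap that plumbing up for you.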
Show HN: Qwen-2.5-32B is now the best open source OCR model
13 by themanmaran | 2 comments on Hacker News.
Last week was big for open source LLMs. We got:
- Qwen 2.5 VL (72b and 32b)
- Gemma-3 (27b)
- DeepSeek-v3-0324

And a couple weeks ago we got the new mistral-ocr model. We updated our OCR benchmark to include the new models, evaluating 1,000 documents for JSON extraction accuracy.

Major takeaways:
- Qwen 2.5 VL (72b and 32b) are by far the most impressive. Both landed right around 75% accuracy (equivalent to GPT-4o's performance). Qwen 72b was only 0.4% above 32b, within the margin of error.
- Both Qwen models surpassed mistral-ocr (72.2%), which is specifically trained for OCR.
- Gemma-3 (27b) only scored 42.9%, which is particularly surprising given that its architecture is based on Gemini 2.0, which still tops the accuracy chart.

The data set and benchmark runner are fully open source. You can check out the code and reproduction steps here:
- https://ift.tt/nyvREUo...
- https://ift.tt/WLwDHGS
- https://ift.tt/nm1bhrJ
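The post does not spell out how JSON extraction accuracy is scored, but one common approach (assumed here, not necessarily the benchmark's exact method) is field-level comparison: flatten both the expected and predicted JSON into dotted key paths, then count how many expected fields the model reproduced exactly.

```typescript
// Flatten a nested JSON value into a map of dotted paths -> serialized leaves,
// e.g. { a: { b: 1 } } becomes { "a.b": "1" }. Arrays are treated as leaves.
function flatten(obj: unknown, prefix = ""): Map<string, string> {
  const out = new Map<string, string>();
  if (obj !== null && typeof obj === "object" && !Array.isArray(obj)) {
    for (const [k, v] of Object.entries(obj as Record<string, unknown>)) {
      for (const [fk, fv] of flatten(v, prefix ? `${prefix}.${k}` : k)) {
        out.set(fk, fv);
      }
    }
  } else {
    out.set(prefix, JSON.stringify(obj));
  }
  return out;
}

// Fraction of expected fields whose values the prediction matches exactly.
function accuracy(expected: object, predicted: object): number {
  const exp = flatten(expected);
  const pred = flatten(predicted);
  let hits = 0;
  for (const [k, v] of exp) {
    if (pred.get(k) === v) hits++;
  }
  return exp.size ? hits / exp.size : 1;
}
```

Averaging this score over the 1,000 test documents would yield a benchmark-level accuracy figure like the ~75% reported for the Qwen models.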