forked from capjamesg/hugging-face-papers-rss
hf_posts.xml
2 lines (2 loc) · 11.8 KB
<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/"><channel><title>Hugging Face Posts</title><link>https://huggingface.co/</link><description>This is a web-scraping RSS feed for the Hugging Face trending posts.</description><generator>rfeed v1.1.1</generator><docs>https://github.com/svpino/rfeed/blob/master/README.md</docs><item><title>Qwen3.6-27B is out now! Run it locally on 18GB RAM. 💜</title><link>https://huggingface.co/posts/danielhanchen/642581794981336</link><description>Qwen3.6-27B is out now! Run it locally on 18GB RAM. 💜 Qwen3.6-27B surpasses Qwen3.5-397B-A17B on all major coding benchmarks. GGUFs to run: unsloth/Qwen3.6-27B-GGUF Guide + MLX: https://unsloth.ai/docs/models/qwen3.6</description><pubDate>Thu, 23 Apr 2026 15:16:04 GMT</pubDate><guid isPermaLink="true">https://huggingface.co/posts/danielhanchen/642581794981336</guid></item><item><title>Our lab recently released a paper where we introduce ShadowPEFT, a new Parameter-Efficient Fine-Tuning (PEFT) paradigm tailored for edge computing scenarios.</title><link>https://huggingface.co/posts/SeanLee97/529198970271294</link><description>Our lab recently released a paper where we introduce ShadowPEFT, a new Parameter-Efficient Fine-Tuning (PEFT) paradigm tailored for edge computing scenarios. Unlike traditional approaches such as LoRA and its variants, which inject trainable parameters directly into the Transformer's weights and require tight coupling with the backbone, ShadowPEFT enhances the frozen large base model by adding a lightweight, centralized, pretrainable, and detachable Shadow network. This shadow network operates in parallel with the base model, delivering learned corrections to each decoder layer. Because the shadow module is architecturally decoupled from the backbone, it can be independently trained, stored, and deployed, benefiting edge computing and edge-cloud collaborative computing. 
- HF Paper: ShadowPEFT: Shadow Network for Parameter-Efficient Fine-Tuning (2604.19254) - GitHub: https://github.com/ShadowLLM/shadow-peft - HF Collection: https://huggingface.co/collections/shadow-...</description><pubDate>Thu, 23 Apr 2026 15:16:04 GMT</pubDate><guid isPermaLink="true">https://huggingface.co/posts/SeanLee97/529198970271294</guid></item><item><title>We are hiring at Shirova AI. We need AI researchers and engineers to work in our research lab. Shirova AI is a research lab in India, so we can help our researchers move to nearby workspaces or let them work from home without ever coming to the lab. We're building our founding team, so the pay will be good. You can learn, so don't hesitate to mail us at: careers@shirova.com</title><link>https://huggingface.co/posts/Ujjwal-Tyagi/236659827352486</link><description>We are hiring at Shirova AI. We need AI researchers and engineers to work in our research lab. Shirova AI is a research lab in India, so we can help our researchers move to nearby workspaces or let them work from home without ever coming to the lab. We're building our founding team, so the pay will be good. You can learn, so don't hesitate to mail us at: careers@shirova.com</description><pubDate>Thu, 23 Apr 2026 15:16:04 GMT</pubDate><guid isPermaLink="true">https://huggingface.co/posts/Ujjwal-Tyagi/236659827352486</guid></item><item><title>tencent/Hy3-preview</title><link>https://huggingface.co/posts/imnotkitty/486900092155706</link><description>tencent/Hy3-preview is out: an open-weights MoE reasoning model. 
✅ 295B total / 21B active / 256K context ✅ Fused fast-and-slow thinking in a single model ✅ First model trained on Hunyuan's rebuilt pretraining + RL infra (Feb → Apr) Benchmarks: 👉 SWE-Bench Verified, Terminal-Bench 2.0, BrowseComp, WideSearch — competitive results, particularly strong on agentic tool use 👉 Top score on Tsinghua's 2026 Spring math PhD qualifying exam 👉 Strong context-learning and instruction-following on Tencent's CL-bench / CL-bench-Life More details can be found in my article: https://huggingface.co/blog/imnotkitty/hy3-preview</description><pubDate>Thu, 23 Apr 2026 15:16:04 GMT</pubDate><guid isPermaLink="true">https://huggingface.co/posts/imnotkitty/486900092155706</guid></item><item><title>How does LLM training with RL Environments work?</title><link>https://huggingface.co/posts/anakin87/346129929456219</link><description>How does LLM training with RL Environments work? It all starts with 𝗥𝗲𝗶𝗻𝗳𝗼𝗿𝗰𝗲𝗺𝗲𝗻𝘁 𝗟𝗲𝗮𝗿𝗻𝗶𝗻𝗴 𝘄𝗶𝘁𝗵 𝗩𝗲𝗿𝗶𝗳𝗶𝗮𝗯𝗹𝗲 𝗥𝗲𝘄𝗮𝗿𝗱𝘀 - question asked - model generates reasoning + answer - answer checked against ground truth - reward drives RL training In this setup, the environment is simple: fixed questions and answers, rollout logic, reward(s) Consider a more complex tic-tac-toe env ❌⭕ It adds: - dynamic game generation/handling - tunable opponent skill - multi-turn interactions (envs can also include tools) --- What happens at training? We use 𝗚𝗿𝗼𝘂𝗽 𝗥𝗲𝗹𝗮𝘁𝗶𝘃𝗲 𝗣𝗼𝗹𝗶𝗰𝘆 𝗢𝗽𝘁𝗶𝗺𝗶𝘇𝗮𝘁𝗶𝗼𝗻 with a tic-tac-toe env No critic model needed, the group is the baseline Simpler than PPO 1️⃣ Rollout generation: from the same board, model plays N games via sampling 2️⃣ Each game scored with deterministic rewards (win, format, ...) 
3️⃣ Mean score computed across the group 4️⃣ Each rollout's advantage = its score minus the group mean 5️⃣ Model updated to favor trajectories above baseline 🔁 Repeat For a deep dive, check out 🌱...</description><pubDate>Thu, 23 Apr 2026 15:16:04 GMT</pubDate><guid isPermaLink="true">https://huggingface.co/posts/anakin87/346129929456219</guid></item><item><title>Hey Hugging Face community 👋</title><link>https://huggingface.co/posts/dealermatt72/528956619550933</link><description>Hey Hugging Face community 👋 My name is M. I'm a solo founder and self-taught developer based in Houston, TX. I build AI-powered apps — I have an iOS app called DeFilter currently in App Store review, a security scanning platform called Sentinel, and a job marketplace called HireHuman.fyi for connecting humans with companies that prefer non-AI workers. I'm also a poker dealer by night, which means I think a lot about reading situations in real time — and that's exactly what sparked this idea. I'm not the most technical person in the room. But I have a vision, I have drive, and I believe the best projects get built when people with different skills come together around a shared idea. That's why I'm posting here. I want to build this with the community. — M ( @ dealermatt )</description><pubDate>Thu, 23 Apr 2026 15:16:04 GMT</pubDate><guid isPermaLink="true">https://huggingface.co/posts/dealermatt72/528956619550933</guid></item><item><title>Built a WeChat Mini Program in 20 minutes flat with Hy3 Preview + WorkBuddy…</title><link>https://huggingface.co/posts/Benedictat/243987262666144</link><description>Built a WeChat Mini Program in 20 minutes flat with Hy3 Preview + WorkBuddy… and I didn’t type a single line of code. Not even a semicolon. This Coding Agent is on steroids. Its comprehension in long back-and-forths is night and day better, and that 256K context window swallows the entire project structure whole. 
Tell it what you want, and it actually gets the full picture: no confused blank stares from the AI. And we’re not messing around with dinky little code snippets here. It spits out a fully functional project: app.json, every page’s wxml/wxss/js/json, even Mock data pre-packed. Import it into WeChat Dev Tools and it runs on the first try. Only one tiny visual nitpick, zero logic bugs. Point out the flaw, and it fixes it instantly: no new bugs, no passive-aggressive code breaks, no headaches. The entire vibe: Tell it your idea → Get a complete working project → Mention a tiny flaw → AI polishes it. No coding, no endless edits, no soul-crushing debugging that makes you want to throw...</description><pubDate>Thu, 23 Apr 2026 15:16:04 GMT</pubDate><guid isPermaLink="true">https://huggingface.co/posts/Benedictat/243987262666144</guid></item><item><title>Testing AI controlling AI with Hy3 Preview. I barely lifted a finger the whole time.</title><link>https://huggingface.co/posts/wangbuer999/442628443911048</link><description>Testing AI controlling AI with Hy3 Preview. I barely lifted a finger the whole time. One-click deployment of Hermes on WorkBuddy took some time with a few rounds of adjustments, and I finally got it up and running smoothly. Only minor issue was setting up Supermemory; it was a bit slow on the uptake. I had to go over simple steps several times, guiding it patiently like teaching a kid. The experience of AI orchestrating AI is absolutely incredible. I started running Agents with Hunyuan right after its release, and it actually works perfectly. 295B parameters, 21B active parameters, with direct access to TokenHub now, and a great cost-performance ratio too. Honestly, I used to get stuck on all kinds of environment configurations when deploying Agents locally. Using Hy3 to take command made the whole process way more streamlined. 
</description><pubDate>Thu, 23 Apr 2026 15:16:04 GMT</pubDate><guid isPermaLink="true">https://huggingface.co/posts/wangbuer999/442628443911048</guid></item><item><title>The rebuilt Hunyuan HY3 Preview is here!</title><link>https://huggingface.co/posts/kelsend/481584584577791</link><description>The rebuilt Hunyuan HY3 Preview is here! I tested it on all the tricky scenarios where most LLMs usually face-plant—and guess what? It didn’t flop. 295B total params, 21B active params, 256K context window. Built on MoE architecture, it delivers trillion-parameter-level performance with a much smaller footprint. Long-context capabilities get a massive upgrade. Agent abilities stand out this time: tool calling, workflow orchestration, and autonomous planning are far more stable in real business scenarios. AI PPT generation in Tencent Docs is also significantly smoother and more reliable. Real-world tests on WorkBuddy show first-token latency down 54%, success rate over 99.99%, and an Agent workflow that ran continuously for 495 steps. Its Coding Agent achieved top-tier results on both SWE-Bench Verified and Terminal-Bench 2.0. Now open-sourced on GitHub, HuggingFace, and ModelScope. Available on TokenHub at just 1.2 RMB per million tokens.</description><pubDate>Thu, 23 Apr 2026 15:16:04 GMT</pubDate><guid isPermaLink="true">https://huggingface.co/posts/kelsend/481584584577791</guid></item><item><title>The ULTIMATE Guide to AI Voice Cloning: RVC WebUI (Zero to Hero)</title><link>https://huggingface.co/posts/MonsterMMORPG/151188185199835</link><description>The ULTIMATE Guide to AI Voice Cloning: RVC WebUI (Zero to Hero) Links Tutorial link : https://youtu.be/ZRrzvD4wNys App link : https://www.patreon.com/posts/rvc-web-ui-app-installer-zip-file-149104996 Info Ultimate AI Voice Changer Tutorial: SECourses Premium RVC Web UI (Windows, RunPod & Massed Compute). This video is only for educational and responsible usage purposes. 
With V3: a multiple-voice merge feature for generating custom voices has been implemented. Welcome to the complete tutorial for the SECourses Premium RVC Web UI! In this video, I will show you how to easily transform your speaking voice or song vocals using our highly optimized AI voice conversion application. Whether you want to sound like a famous celebrity (like Donald Trump or Tupac), replace vocals in AI-generated music, or change your voice live in real-time, this tool has everything you need. The installer automatically downloads 30+ pre-trained demo voices, and you can easily add hundreds more from Hugging Face! I will...</description><pubDate>Thu, 23 Apr 2026 15:16:04 GMT</pubDate><guid isPermaLink="true">https://huggingface.co/posts/MonsterMMORPG/151188185199835</guid></item></channel></rss>