AI Power Progress

Daily Intelligence Digest

Grounded updates from real sources (RSS/arXiv/NVD/KEV). AI synthesis is optional and clearly labeled.

Digest JSON · Archive JSON · Subscribe RSS · Live Feed
2026-05-08 · Δ 11 new vs 2026-05-07 · generated 2026-05-09T06:11:10Z · 1.556s

Pipeline Freshness

FRESH
digest 2026-05-09 · generated 2026-05-09T07:45:03Z · no feed lag detected · news sources 6/6 ok · Δ 24 new
Viewing archive 2026-05-08. Pipeline freshness reflects the latest digest state (2026-05-09).

From Digest to Action

Ask AI · grounded

Turn updates into a plan

Use the site-wide Ask AI widget to summarize the latest digest, identify the highest-priority items, and turn them into practical next steps.

Services · quick order

Need implementation help?

Route a digest item directly into the real delivery flows when it should become security work, automation work, AI work, or a scoped build.

Follow-up · private

Continue existing work

If today’s intelligence affects work already in progress, jump straight into the signed-in account flow or the private request-status lookup.

Highlights

Top highlights (extractive; metadata-only; no AI synthesis):

- SECURITY · Cybersecurity: CVE-2026-42208 · BerriAI LiteLLM SQL Injection Vulnerability (https://nvd.nist.gov/vuln/detail/CVE-2026-42208) — CISA KEV
snippet: BerriAI LiteLLM contains a SQL injection vulnerability that allows an attacker to read data from the proxy's database and potentially modify it, leading to unauthorised access to the proxy and the credentials it manages. · Action: Apply…
- RESEARCH · Research: ActCam: Zero-Shot Joint Camera and 3D Motion Control for Video Generation (https://arxiv.org/abs/2605.06667v1) — arXiv
snippet: For artistic applications, video generation requires fine-grained control over both performance and cinematography, i.e., the actor's motion and the camera trajectory. We present ActCam, a zero-shot method for video generation that joint…
- RESEARCH · Research: The Kubo-Thermalization Correspondence (https://arxiv.org/abs/2605.06666v1) — arXiv
snippet: Quantum thermalization describes how interacting quantum systems relax toward thermal equilibrium, a central problem in modern physics. Yet most experimental information on many-body systems comes from short-time transition spectroscopy,…
- RESEARCH · Research: UniPool: A Globally Shared Expert Pool for Mixture-of-Experts (https://arxiv.org/abs/2605.06665v1) — arXiv
snippet: Modern Mixture-of-Experts (MoE) architectures allocate expert capacity through a rigid per-layer rule: each transformer layer owns a separate expert set. This convention couples depth scaling with linear expert-parameter growth and assum…
- RESEARCH · Research: BAMI: Training-Free Bias Mitigation in GUI Grounding (https://arxiv.org/abs/2605.06664v1) — arXiv
snippet: GUI grounding is a critical capability for enabling GUI agents to execute tasks such as clicking and dragging. However, in complex scenarios like the ScreenSpot-Pro benchmark, existing models often suffer from suboptimal performance. Uti…
- RESEARCH · Research: EMO: Pretraining Mixture of Experts for Emergent Modularity (https://arxiv.org/abs/2605.06663v1) — arXiv
snippet: Large language models are typically deployed as monolithic systems, requiring the full model even when applications need only a narrow subset of capabilities, e.g., code, math, or domain-specific knowledge. Mixture-of-Experts (MoEs) seem…
- RESEARCH · Research: Multi-Robot Coordination in V2X Environments (https://arxiv.org/abs/2605.06662v1) — arXiv
snippet: This paper presents a Vehicle-to-Everything (V2X) communication framework that enables decentralized cooperation among social robots operating in complex urban traffic environments. Building on ETSI Cooperative Awareness and Maneuver Coo…
- RESEARCH · Research: Verifier-Backed Hard Problem Generation for Mathematical Reasoning (https://arxiv.org/abs/2605.06660v1) — arXiv
snippet: Large Language Models (LLMs) demonstrate strong capabilities for solving scientific and mathematical problems, yet they struggle to produce valid, challenging, and novel problems - an essential component for advancing LLM training and en…
- RESEARCH · Research: Relit-LiVE: Relight Video by Jointly Learning Environment Video (https://arxiv.org/abs/2605.06658v1) — arXiv
snippet: Recent advances have shown that large-scale video diffusion models can be repurposed as neural renderers by first decomposing videos into intrinsic scene representations and then performing forward rendering under novel illumination. Whi…
- RESEARCH · Research: Why Global LLM Leaderboards Are Misleading: Small Portfolios for Heterogeneous Supervised ML (https://arxiv.org/abs/2605.06656v1) — arXiv
snippet: Ranking LLMs via pairwise human feedback underpins current leaderboards for open-ended tasks, such as creative writing and problem-solving. We analyze ~89K comparisons in 116 languages from 52 LLMs from Arena, and show that the best-fit…
- RESEARCH · Research: Optimizer-Model Consistency: Full Finetuning with the Same Optimizer as Pretraining Forgets Less (https://arxiv.org/abs/2605.06654v1) — arXiv
snippet: Optimizers play an important role in both pretraining and finetuning stages when training large language models (LLMs). In this paper, we present an observation that full finetuning with the same optimizer as in pretraining achieves a be…
- RESEARCH · Research: When No Benchmark Exists: Validating Comparative LLM Safety Scoring Without Ground-Truth Labels (https://arxiv.org/abs/2605.06652v1) — arXiv
snippet: Many deployments must compare candidate language models for safety before a labeled benchmark exists for the relevant language, sector, or regulatory regime. We formalize this setting as benchmarkless comparative safety scoring and speci…

Digest

Tech + AI + Science News

0 item(s)

No items in this section.
Research (arXiv)

10 item(s)

ActCam: Zero-Shot Joint Camera and 3D Motion Control for Video Generation new
arXiv · 2026-05-07
For artistic applications, video generation requires fine-grained control over both performance and cinematography, i.e., the actor's motion and the camera trajectory. We present ActCam, a zero-shot method for video generation that jointly transfers character motion from a driving video into a new scene and enables per-frame control of intrinsic and extrinsic camera parameters. ActCam builds on any pretrained image-to-video diffusion model that accepts conditioning in terms of scene depth and character pose. Given a source video with a moving character and a target camera motion, ActCam generates pose and depth conditions that remain geometrically consistent across frames. We then run a single sampling process with a two-phase conditioning schedule: early denoising steps condition on both pose and sparse depth to enforce scene structure, after which depth is dropped and pose-only guidanc
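The two-phase conditioning schedule described above can be pictured with a minimal sketch: early denoising steps take both pose and depth conditions, later steps take pose only. The denoiser call, condition names, and cutoff fraction below are hypothetical placeholders, not ActCam's actual implementation.

```python
# Illustrative sketch of a two-phase conditioning schedule for diffusion
# sampling: pose + depth early (to lock in scene structure), pose-only later.
# All names and the 40% cutoff are assumptions, not the paper's code.

def denoise_step(latent, step, conditions):
    """Stand-in for one denoising step of an image-to-video diffusion model."""
    return latent  # a real model would update the latent here

def sample_two_phase(latent, pose_seq, depth_seq, num_steps=50, depth_fraction=0.4):
    for step in range(num_steps):
        if step < int(depth_fraction * num_steps):
            conditions = {"pose": pose_seq, "depth": depth_seq}  # phase 1: enforce structure
        else:
            conditions = {"pose": pose_seq}                      # phase 2: pose-only guidance
        latent = denoise_step(latent, step, conditions)
    return latent

print(sample_two_phase(latent=0.0, pose_seq=[], depth_seq=[]))
```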
The Kubo-Thermalization Correspondence new
arXiv · 2026-05-07
Quantum thermalization describes how interacting quantum systems relax toward thermal equilibrium, a central problem in modern physics. Yet most experimental information on many-body systems comes from short-time transition spectroscopy, typically interpreted within Kubo's linear-response framework. These perspectives - long-time equilibration versus short-time response - seem fundamentally disconnected. Here we establish an exact link between them: the Kubo-Thermalization correspondence, which connects long-time thermalized magnetization under weak driving to short-time linear-response spectra for a spin coupled to a thermal bath. The correspondence holds even when the steady state differs substantially from the initial state and when each regime is individually difficult to describe theoretically. We experimentally confirm the correspondence using effective spin-1/2 impurities realized
UniPool: A Globally Shared Expert Pool for Mixture-of-Experts new
arXiv · 2026-05-07
Modern Mixture-of-Experts (MoE) architectures allocate expert capacity through a rigid per-layer rule: each transformer layer owns a separate expert set. This convention couples depth scaling with linear expert-parameter growth and assumes that every layer needs isolated expert capacity. However, recent analyses and our routing probe challenge this allocation rule: replacing a deeper layer's learned top-k router with uniform random routing drops downstream accuracy by only 1.0-1.6 points across multiple production MoE models. Motivated by this redundancy, we propose UniPool, an MoE architecture that treats expert capacity as a global architectural budget by replacing per-layer expert ownership with a single shared pool accessed by independent per-layer routers. To enable stable and balanced training under sharing, we introduce a pool-level auxiliary loss that balances expert utilization
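As a rough illustration of the shared-pool idea and the pool-level balancing term, the sketch below routes several layers through one expert pool and penalizes non-uniform utilization. The shapes, the soft utilization measure, and the squared-error loss are assumptions for illustration, not UniPool's formulation.

```python
import torch
import torch.nn.functional as F

# Per-layer routers select from ONE globally shared expert pool; a pool-level
# auxiliary loss pushes average utilization toward uniform. Illustrative only.

n_layers, n_tokens, d_model, n_experts = 4, 32, 16, 8
routers = [torch.nn.Linear(d_model, n_experts) for _ in range(n_layers)]
tokens = torch.randn(n_tokens, d_model)

usage = torch.zeros(n_experts)
for router in routers:
    probs = F.softmax(router(tokens), dim=-1)   # (tokens, experts) over the shared pool
    usage = usage + probs.mean(dim=0)           # soft per-expert utilization at this layer

usage = usage / n_layers                        # pool-level average across all layers
balance_loss = ((usage - 1.0 / n_experts) ** 2).sum()
print(balance_loss.item())
```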
BAMI: Training-Free Bias Mitigation in GUI Grounding new
arXiv · 2026-05-07
GUI grounding is a critical capability for enabling GUI agents to execute tasks such as clicking and dragging. However, in complex scenarios like the ScreenSpot-Pro benchmark, existing models often suffer from suboptimal performance. Utilizing the proposed \textbf{Masked Prediction Distribution (MPD)} attribution method, we identify that the primary sources of errors are twofold: high image resolution (leading to precision bias) and intricate interface elements (resulting in ambiguity bias). To address these challenges, we introduce \textbf{Bias-Aware Manipulation Inference (BAMI)}, which incorporates two key manipulations, coarse-to-fine focus and candidate selection, to effectively mitigate these biases. Our extensive experimental results demonstrate that BAMI significantly enhances the accuracy of various GUI grounding models in a training-free setting. For instance, applying our meth
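The coarse-to-fine focus manipulation mentioned in the abstract can be sketched as: ground once on the full screenshot, crop around the coarse prediction, ground again on the crop, and map the result back. The grounding call and crop format below are hypothetical, not BAMI's interface.

```python
# Coarse-to-fine grounding sketch for high-resolution screenshots. The
# `ground` function is a placeholder returning the image center; a real GUI
# grounding model would be called instead.

def ground(image, instruction):
    return image["width"] // 2, image["height"] // 2   # placeholder (x, y)

def crop(image, cx, cy, size):
    x0 = max(cx - size // 2, 0)
    y0 = max(cy - size // 2, 0)
    return {"width": size, "height": size, "offset": (x0, y0)}

def coarse_to_fine_ground(image, instruction, crop_size=512):
    cx, cy = ground(image, instruction)        # coarse pass on the full image
    region = crop(image, cx, cy, crop_size)    # zoom into the predicted area
    fx, fy = ground(region, instruction)       # fine pass on the crop
    ox, oy = region["offset"]
    return ox + fx, oy + fy                    # refined point in full-image coordinates

print(coarse_to_fine_ground({"width": 3840, "height": 2160}, "click the save icon"))
```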
EMO: Pretraining Mixture of Experts for Emergent Modularity new
arXiv · 2026-05-07
Large language models are typically deployed as monolithic systems, requiring the full model even when applications need only a narrow subset of capabilities, e.g., code, math, or domain-specific knowledge. Mixture-of-Experts (MoEs) seemingly offer a potential alternative by activating only a subset of experts per input, but in practice, restricting inference to a subset of experts for a given domain leads to severe performance degradation. This limits their practicality in memory-constrained settings, especially as models grow larger and sparser. We introduce EMO, an MoE designed for modularity-the independent use and composition of expert subsets-without requiring human-defined priors. Our key idea is to encourage tokens from similar domains to rely on similar experts. Since tokens within a document often share a domain, EMO restricts them to select experts from a shared pool, while al
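One way to picture the shared-pool constraint described above is to mask router logits so that all tokens from a document can only pick experts from that document's pool. The pool assignment and sizes below are illustrative assumptions, not EMO's training recipe.

```python
import torch
import torch.nn.functional as F

# Tokens from one document are restricted to a shared subset of experts by
# masking router logits outside the document's pool before top-k selection.

n_experts, d_model, n_tokens = 16, 8, 6
router = torch.nn.Linear(d_model, n_experts)
tokens = torch.randn(n_tokens, d_model)
doc_pool = torch.tensor([0, 3, 7, 12])             # experts allowed for this document

logits = router(tokens)                            # (tokens, experts)
mask = torch.full_like(logits, float("-inf"))
mask[:, doc_pool] = 0.0                            # keep only the document's expert pool
probs = F.softmax(logits + mask, dim=-1)
print(probs.topk(2, dim=-1).indices)               # selections stay inside doc_pool
```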
Multi-Robot Coordination in V2X Environments new
arXiv · 2026-05-07
This paper presents a Vehicle-to-Everything (V2X) communication framework that enables decentralized cooperation among social robots operating in complex urban traffic environments. Building on ETSI Cooperative Awareness and Maneuver Coordination services, the framework introduces two robot-centric facility-layer services: the Robot Awareness Service (RAS) and the Robot Maneuver Coordination Service (RMCS), realized through the Robot Awareness Message (RAM) and the Robot Maneuver Coordination Message (RMCM), respectively. RAS enables role-aware, task-oriented robot awareness while integrating externally detected Vulnerable Road Users (VRUs), including non-V2X pedestrians, into cooperative awareness. RMCS supports event-driven, low-latency coordination of robot maneuvers under explicitly established roles, without centralized infrastructure or prior pairing. A real-world proof of concept
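As a purely illustrative sketch of what a facility-layer awareness payload might carry, the dataclass below lists the kind of role- and task-oriented fields the abstract alludes to. All field names are hypothetical; the actual ETSI-aligned RAM/RMCM definitions in the paper will differ.

```python
from dataclasses import dataclass

# Hypothetical Robot Awareness Message payload, for illustration only.

@dataclass
class RobotAwarenessMessage:
    station_id: int              # unique robot identifier
    role: str                    # e.g. "delivery", "cleaning"
    task_state: str              # current task phase, e.g. "crossing"
    position: tuple              # (latitude, longitude)
    heading_deg: float
    speed_mps: float

print(RobotAwarenessMessage(42, "delivery", "crossing", (48.137, 11.575), 90.0, 1.2))
```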
Verifier-Backed Hard Problem Generation for Mathematical Reasoning new
arXiv · 2026-05-07
Large Language Models (LLMs) demonstrate strong capabilities for solving scientific and mathematical problems, yet they struggle to produce valid, challenging, and novel problems - an essential component for advancing LLM training and enabling autonomous scientific research. Existing problem generation approaches either depend on expensive human expert involvement or adopt naive self-play paradigms, which frequently yield invalid problems due to reward hacking. This work introduces VHG, a verifier-enhanced hard problem generation framework built upon three-party self-play. By integrating an independent verifier into the conventional setter-solver duality, our design constrains the setter's reward to be jointly determined by problem validity (evaluated by the verifier) and difficulty (assessed by the solver). We instantiate two verifier variants: a Hard symbolic verifier and a Soft LLM-ba
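The three-party structure described above (setter, solver, verifier) implies a setter reward that is gated by validity and scaled by difficulty. The sketch below shows one such shape; the exact reward function and interfaces are assumptions, not the paper's.

```python
# Setter reward gated by an independent verifier and scaled by solver
# difficulty. The linear form is an illustrative assumption.

def setter_reward(is_valid: bool, solver_success_rate: float) -> float:
    if not is_valid:
        return 0.0                        # verifier veto blocks reward hacking
    return 1.0 - solver_success_rate      # harder problems (lower success) score higher

print(setter_reward(True, 0.25))   # valid and hard -> 0.75
print(setter_reward(False, 0.0))   # invalid -> 0.0
```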
Relit-LiVE: Relight Video by Jointly Learning Environment Video new
arXiv · 2026-05-07
Recent advances have shown that large-scale video diffusion models can be repurposed as neural renderers by first decomposing videos into intrinsic scene representations and then performing forward rendering under novel illumination. While promising, this paradigm fundamentally relies on accurate intrinsic decomposition, which remains highly unreliable for real-world videos and often leads to distorted appearances, broken materials, and accumulated temporal artifacts during relighting. In this work, we present Relit-LiVE, a novel video relighting framework that produces physically consistent, temporally stable results without requiring prior knowledge of camera pose. Our key insight is to explicitly introduce raw reference images into the rendering process, enabling the model to recover critical scene cues that are inevitably lost or corrupted in intrinsic representations. Furthermore, w
Why Global LLM Leaderboards Are Misleading: Small Portfolios for Heterogeneous Supervised ML new
arXiv · 2026-05-07
Ranking LLMs via pairwise human feedback underpins current leaderboards for open-ended tasks, such as creative writing and problem-solving. We analyze ~89K comparisons in 116 languages from 52 LLMs from Arena, and show that the best-fit global Bradley-Terry (BT) ranking is misleading. Nearly 2/3 of the decisive votes cancel out, and even the top 50 models according to the global BT ranking are statistically indistinguishable (pairwise win probabilities are at most 0.53 within the top 50 models). We trace this failure to strong, structured heterogeneity of opinions across language, task, and time. Moreover, we find an important characteristic - *language* plays a key role. Grouping by language (and families) increases the agreement of votes massively, resulting in two orders of magnitude higher spread in the ELO scores (i.e., very consistent rankings). What appears as global noise is in f
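For context on the 0.53 figure, the Bradley-Terry / Elo win probability between two rated models follows a logistic curve in the rating gap; the sketch below uses the conventional 400-point Elo scale (an assumption, not a value taken from the paper) to show how small a gap a 0.53 win probability implies.

```python
# Elo-style Bradley-Terry win probability: P(A beats B) as a function of the
# rating difference, on the conventional 400-point logistic scale.

def win_probability(rating_a: float, rating_b: float) -> float:
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

# A ~21-point gap gives roughly a 0.53 win probability, the near-tie regime
# the abstract reports among top-ranked models.
print(round(win_probability(1521, 1500), 3))
```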
Optimizer-Model Consistency: Full Finetuning with the Same Optimizer as Pretraining Forgets Less new
arXiv · 2026-05-07
Optimizers play an important role in both pretraining and finetuning stages when training large language models (LLMs). In this paper, we present an observation that full finetuning with the same optimizer as in pretraining achieves a better learning-forgetting tradeoff, i.e., forgetting less while achieving the same or better performance on the new task, than other optimizers and, possibly surprisingly, LoRA, during the supervised finetuning (SFT) stage. We term this phenomenon optimizer-model consistency. To better understand it, through controlled experiments and theoretical analysis, we show that: 1) optimizers can shape the models by having regularization effects on the activations, leading to different landscapes around the pretrained checkpoints; 2) in response to this regularization effect, the weight update in SFT should follow some specific structures to lower forgetting of the
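In practice the observation amounts to reusing the pretraining optimizer family and configuration shape during SFT rather than switching optimizers; the sketch below shows that pattern with AdamW. The hyperparameter values and the learning-rate reduction are placeholders, not the paper's settings.

```python
import torch

# "Optimizer-model consistency": finetune with the same optimizer family (and
# configuration) used in pretraining, typically with a smaller learning rate.

model = torch.nn.Linear(16, 16)                      # stand-in for a pretrained LLM

pretrain_cfg = {"lr": 3e-4, "betas": (0.9, 0.95), "weight_decay": 0.1}
sft_cfg = {**pretrain_cfg, "lr": 1e-5}               # same optimizer shape, lower LR

sft_optimizer = torch.optim.AdamW(model.parameters(), **sft_cfg)
print(sft_optimizer.defaults["lr"])
```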
Security (NVD + CISA KEV)

1 item(s)

CVE-2026-42208 · BerriAI LiteLLM SQL Injection Vulnerability new
CISA KEV · 2026-05-08
BerriAI LiteLLM contains a SQL injection vulnerability that allows an attacker to read data from the proxy's database and potentially modify it, leading to unauthorised access to the proxy and the credentials it manages. · Action: Apply mitigations per vendor instructions, follow applicable BOD 22-01 guidance for cloud services, or discontinue use of the product if mitigations are unavailable. · Due: 2026-05-11
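For readers unfamiliar with the vulnerability class named in this KEV entry, the sketch below contrasts string-built SQL with a parameterized query. It is a generic illustration only, not LiteLLM's code; the actual remediation is to apply the vendor's mitigations per the KEV action above.

```python
import sqlite3

# Generic SQL injection illustration: attacker-controlled input concatenated
# into a query versus bound as a parameter. Not LiteLLM code.

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE keys (alias TEXT, secret TEXT)")
conn.execute("INSERT INTO keys VALUES ('team-a', 'sk-123')")

user_input = "x' OR '1'='1"

# Vulnerable pattern: the input rewrites the WHERE clause and dumps every row.
unsafe = conn.execute(f"SELECT secret FROM keys WHERE alias = '{user_input}'").fetchall()

# Mitigated pattern: the driver binds the value, so it cannot alter the query.
safe = conn.execute("SELECT secret FROM keys WHERE alias = ?", (user_input,)).fetchall()

print(unsafe)  # [('sk-123',)]
print(safe)    # []
```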
Projects + Resources (Discovery)

0 item(s)

No items in this section.

Sources

Grounded items come from local feed artifacts (metadata-only). This panel shows feed freshness and errors.

Feeds: generated 2026-05-09T07:44:26Z · 24.513s · ok
news ok

News (RSS/Atom)

104 item(s) · updated 2026-05-09T07:44:50Z · sources 6/6 ok · newest 2026-05-09 · configured
News sources (6/6 ok)
OpenAI News ok
18 item(s) · newest 2026-05-08
Hacker News ok
18 item(s) · newest 2026-05-09
GitHub Blog ok
10 item(s) · newest 2026-05-08
ScienceDaily — Artificial Intelligence ok
18 item(s) · newest 2026-05-06
ScienceDaily — Education & Learning ok
18 item(s) · newest 2026-03-11
ScienceDaily — Stem Cells ok
18 item(s) · newest 2026-04-08
arxiv ok

Research (arXiv)

50 item(s) · updated 2026-05-09T07:44:26Z · newest 2026-05-08
nvd ok

Security (NVD)

600 item(s) · updated 2026-05-09T07:44:31Z · newest 2026-05-09
kev ok

Security (CISA KEV)

600 item(s) · updated 2026-05-09T07:44:31Z · newest 2026-05-09
brave ok

Discovery (Brave)

145 item(s) · updated 2026-05-09T07:44:32Z · newest 2026-05-09 · configured

Archive

Archived by UTC day. Early v1 pages are best-effort and may be incomplete if feeds were offline.