Tech + AI + Science News
14 item(s)
Introducing Advanced Account Security
OpenAI Blog · 2026-04-30
Introducing Advanced Account Security: phishing-resistant login, stronger recovery, and enhanced protections to safeguard sensitive data and prevent account takeover.
Building the compute infrastructure for the Intelligence Age
OpenAI Blog · 2026-04-29
OpenAI scales Stargate to build the compute infrastructure powering AGI, adding new data center capacity to meet growing AI demand.
Cybersecurity in the Intelligence Age
OpenAI Blog · 2026-04-29
OpenAI outlines a five-part action plan for strengthening cybersecurity in the Intelligence Age, focused on democratizing AI-powered cyber defense and protecting critical systems.
OpenAI models, Codex, and Managed Agents come to AWS
OpenAI Blog · 2026-04-28
OpenAI GPT models, Codex, and Managed Agents are now available on AWS, enabling enterprises to build secure AI in their AWS environments.
The Ed-Tech Backlash Is Here. What It Means for Schools
Brave News
Most educators—74%—say their ... technology due to pushback or complaints from parents, according to the EdWeek Research Center survey. The percentage of districts dialing back tech use in school could rise as generative AI increasingly becomes integral to ed-tech tools, and educators, parents, and students see the downsides of too much AI use. Surveys from multiple organizations so far show that while parents want their children to learn how to use ...
FDA testing speedier drug development with real-time clinical trials | STAT
Brave News
FDA announces that cancer drug trials by AstraZeneca and Amgen will be monitored in real time, a test of how to shorten the time interval between trial phases.
AI hardware trends: Key shifts from GPUs to specialized chips
Brave News
AI hardware is the physical foundation behind model training, inference, and the growing demand for hardware acceleration across data centers, edge devices, and consumer products. For most teams, that still means GPUs first. But the pressure points are obvious: memory limits, power draw, deployment ...
EU countries, lawmakers fail to reach deal on watered-down AI rules | Reuters
Brave News
EU countries and European Parliament lawmakers failed to reach a deal on watered-down landmark artificial intelligence rules after 12 hours of negotiations on Tuesday and will resume talks next month.
ClawIRC – IRC Chat for Agents
Hacker News · 2026-05-01
Snowball Earth may hide a far stranger climate cycle than anyone expected
Hacker News · 2026-04-30
Rivian allows you to disable all internet connectivity
Hacker News · 2026-04-30
The upsell game – Vercel upselling tactics revealed
Hacker News · 2026-04-30
GitHub Copilot CLI for Beginners: Interactive v. non-interactive mode
GitHub Blog · 2026-04-30
Learn the difference between CLI interactive v. non-interactive modes.
GitHub for Beginners: Getting started with Markdown
GitHub Blog · 2026-04-28
Discover how to format and edit your comments and posts using Markdown.
Research (arXiv)
10 item(s)
Turning the TIDE: Cross-Architecture Distillation for Diffusion Large Language Models
arXiv · 2026-04-29
Diffusion large language models (dLLMs) offer parallel decoding and bidirectional context, but state-of-the-art dLLMs require billions of parameters for competitive performance. While existing distillation methods for dLLMs reduce inference steps within a single architecture, none address cross-architecture knowledge transfer, in which the teacher and student differ in architecture, attention mechanism, and tokenizer. We present TIDE, the first framework for cross-architecture dLLM distillation, comprising three modular components: (1) TIDAL, which jointly modulates distillation strength across training progress and diffusion timestep to account for the teacher's noise-dependent reliability; (2) CompDemo, which enriches the teacher's context via complementary mask splitting to improve predictions under heavy masking; and (3) Reverse CALM, a cross-tokenizer objective that inverts chunk-le
Optimizing Dynamic Metasurface Antenna Configurations for Direction-of-Arrival and Polarization Estimation Using an Experimentally Calibrated Multiport-Network Model
arXiv · 2026-04-29
Sensing the direction of arrival and polarization of impinging signals is a key prerequisite for beamforming and interference mitigation in modern wireless communication systems. Dynamic metasurface antennas (DMAs) can multiplex direction- and polarization-dependent field information onto a single detector by sequentially switching between programmable configurations. This makes DMAs attractive for joint direction-of-arrival and polarization (DoA-P) estimation with a single radio-frequency chain. Experimental demonstrations have so far relied on random pre-measured configuration sequences because optimizing the configurations requires an accurate forward model of the fabricated DMA. Here, we use an experimentally calibrated model based on multiport-network theory (MNT) to optimize DMA configuration sequences for DoA-P estimation. Our experimentally calibrated MNT model predicts the dual-
Large quantum dot energy level shifts in anomalous photon-assisted tunneling
arXiv · 2026-04-29
Orbital energy splittings are important quantum dot parameters for the operation of hole spin qubits. They are known to depend on the lateral confinement of the quantum dots. However, when changing top and plunger gate voltages, which are the typical control parameters for qubit applications, such energy splitting changes are typically negligible, both as measured in experiment and as assumed in effective theories. Here, we study the singlet-triplet (ST) splittings, which depend on the orbital splittings, of a double quantum dot (DQD) in a Ge/SiGe heterostructure using photon-assisted tunneling (PAT) and pulsed-gate spectroscopy. We find that the ST splittings have a surprising, strong dependence on the top gate voltages, leading to anomalous PAT measurements. We combine data from both measurements in a model that well describes the linear gate-voltage dependence of the ST splittings. Finall
Three-Step Nav: A Hierarchical Global-Local Planner for Zero-Shot Vision-and-Language Navigation
arXiv · 2026-04-29
Breakthrough progress in vision-based navigation through unknown environments has been achieved by using multimodal large language models (MLLMs). These models can plan a sequence of motions by evaluating the current view at each time step against the task and goal given to the agent. However, current zero-shot Vision-and-Language Navigation (VLN) agents powered by MLLMs still tend to drift off course, halt prematurely, and achieve low overall success rates. We propose Three-Step Nav to counteract these failures with a three-view protocol: First, "look forward" to extract global landmarks and sketch a coarse plan. Then, "look now" to align the current visual observation with the next sub-goal for fine-grained guidance. Finally, "look backward" audits the entire trajectory to correct accumulated drift before stopping. Requiring no gradient updates or task-specific fine-tuning, our planner
Simulating dynamics of RLC circuits with a quantum differential-algebraic equations solver
arXiv · 2026-04-29
We introduce a quantum algorithm for simulating the dynamics of electrical circuits consisting of resistors, inductors and capacitors (aka RLC circuits) along with power sources. Given oracle access to the connectivity of the circuit and values of the electrical elements, our algorithm prepares a quantum state that encodes voltages and current values either at a specified time or the history of their evolution over a time-interval. For an RLC circuit with $N$ components, our algorithm runs in time $\textsf{polylog}(N)$ under mild assumptions on the connectivity of the circuit and values of its components. This provides an exponential speed-up over classical algorithms that take $\textsf{poly}(N)$ time in the worst-case. Our algorithm can be used to estimate energy across a set of components or dissipated power in $\textsf{polylog}(N)$ time, a problem that we prove is BQP-hard and therefo
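For context on what the claimed speed-up is measured against, here is a minimal classical time-stepping baseline for one series RLC loop — the kind of poly(N)-style ODE integration the abstract says the quantum solver improves on. The parameter values and step size below are illustrative assumptions, not from the paper.

```python
def simulate_rlc(R=1.0, L=1.0, C=1.0, q0=1.0, i0=0.0,
                 dt=1e-3, steps=10_000, V=lambda t: 0.0):
    """Forward-Euler integration of  L di/dt + R i + q/C = V(t),  dq/dt = i."""
    q, i = q0, i0
    history = [(0.0, q, i)]
    for n in range(steps):
        t = n * dt
        di = (V(t) - R * i - q / C) / L
        q, i = q + dt * i, i + dt * di   # update charge and current together
        history.append((t + dt, q, i))
    return history

def energy(q, i, L=1.0, C=1.0):
    """Total stored energy: capacitor (q^2 / 2C) plus inductor (L i^2 / 2)."""
    return 0.5 * q * q / C + 0.5 * L * i * i

hist = simulate_rlc()
e_start = energy(hist[0][1], hist[0][2])   # all energy starts on the capacitor
e_end = energy(hist[-1][1], hist[-1][2])   # resistor dissipates it over time
```

With a nonzero R, the stored energy decays monotonically (up to discretization error), which is the dissipated-power quantity the paper proposes to estimate in polylog(N) time.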
ProcFunc: Function-Oriented Abstractions for Procedural 3D Generation in Python
arXiv · 2026-04-29
We introduce ProcFunc, a library for Blender-based procedural 3D generation in Python. ProcFunc provides a library of easy-to-use Python functions, which streamline creating, combining, analyzing, and executing procedural generation code. ProcFunc makes it easy to create large-scale diverse training data, by combinatorial compositions of semantic components. VLMs can use ProcFunc to edit procedural material and geometry code and can create new procedural code with significantly fewer coding errors. Finally, as an example use case, we use ProcFunc to develop a new procedural generator of indoor rooms, which includes a collection of new compositional procedural materials. We demonstrate the detail, runtime efficiency, and diversity of this room generator, as well as its use for 3D synthetic data generation. Please visit https://github.com/princeton-vl/procfunc for source code.
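The "combinatorial compositions of semantic components" idea can be sketched without Blender: enumerating every combination of a few component choices already yields a multiplicatively large, diverse set of scene specifications. The component lists below are illustrative placeholders; ProcFunc's actual API lives in the linked repository.

```python
import itertools

# Hypothetical semantic components for a room generator (not ProcFunc's names).
floors = ["wood", "tile", "carpet"]
walls = ["plaster", "brick"]
furniture = ["sofa", "desk", "bookshelf", "bed"]

# Cartesian product of all choices -> 3 * 2 * 4 = 24 distinct room variants.
rooms = [
    {"floor": f, "wall": w, "furniture": fur}
    for f, w, fur in itertools.product(floors, walls, furniture)
]
n_variants = len(rooms)
```

Each added component list multiplies the variant count, which is why compositional generators scale to large synthetic training sets from a small amount of hand-written procedural code.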
Hyper Input Convex Neural Networks for Shape Constrained Learning and Optimal Transport
arXiv · 2026-04-29
We introduce Hyper Input Convex Neural Networks (HyCNNs), a novel neural network architecture designed for learning convex functions. HyCNNs combine the principles of Maxout networks with input convex neural networks (ICNNs) to create a neural network that is always convex in the input, theoretically capable of leveraging depth, and performs reliably when trained at scale compared to ICNNs. Concretely, we prove that HyCNNs require exponentially fewer parameters than ICNNs to approximate quadratic functions up to a given precision. Throughout a series of synthetic experiments, we demonstrate that HyCNNs outperform existing ICNNs and MLPs in terms of predictive performance for convex regression and interpolation tasks. We further apply HyCNNs to learn high-dimensional optimal transport maps for synthetic examples and for single-cell RNA sequencing data, where they oftentimes outperform ICN
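The input-convexity constraint that ICNNs (and, by extension, this paper's HyCNN variant) build on can be shown in a few lines: hidden-to-output weights are kept non-negative and activations are convex and non-decreasing, so the whole map x -> f(x) is convex. The weights below are random illustrative values, not a trained model.

```python
import numpy as np

rng = np.random.default_rng(0)
d, h = 3, 8
W0 = rng.normal(size=(h, d))          # first layer: unconstrained (affine in x)
b0 = rng.normal(size=h)
w1 = np.abs(rng.normal(size=h))       # non-negative weights preserve convexity
a1 = rng.normal(size=d)               # direct affine "skip" term in x
b1 = rng.normal()

def f(x):
    z = np.maximum(W0 @ x + b0, 0.0)  # ReLU: convex and non-decreasing
    return w1 @ z + a1 @ x + b1       # non-neg. combination + affine => convex

# Numerical check of midpoint convexity on random input pairs:
ok = all(
    f((x + y) / 2) <= (f(x) + f(y)) / 2 + 1e-9
    for x, y in (rng.normal(size=(2, d)) for _ in range(200))
)
```

The midpoint inequality holds by construction for any such weights, which is what makes the architecture convex in the input regardless of training.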
Schwinger-Keldysh Path Integral for Gauge theories
arXiv · 2026-04-29
We develop the Schwinger-Keldysh path-integral formalism for open non-Abelian gauge theories that are gauge-fixed via the BRST method in covariant gauges. We focus on generic initial states, pure and mixed, specified at finite times suitable for non-equilibrium processes. We pay particular attention to the handling of the indefinite Hilbert space, the construction of BRST-invariant Schrodinger picture wavefunctionals, density matrices and inner product, the implementation of the Hata-Kugo prescription, and the role of boundary terms at both the initial and final times. We highlight the advantages of the Nakanishi-Lautrup field representation in dealing with initial/final conditions. The resulting Schwinger-Keldysh path integral is manifestly invariant under a diagonal (retarded) BRST symmetry for arbitrary physical initial states, whether pure or mixed. From this, we obtain the correspon
Select to Think: Unlocking SLM Potential with Local Sufficiency
arXiv · 2026-04-29
Small language models (SLMs) offer computational efficiency for scalable deployment, yet they often fall short of the reasoning power exhibited by their larger counterparts (LLMs). To mitigate this gap, current approaches invoke an LLM to generate tokens at points of reasoning divergence, but these external calls introduce substantial latency and costs. Alternatively, standard distillation is often hindered by the capacity limitation, as SLMs struggle to accurately mimic the LLM's complex generative distribution. We address this dilemma by identifying local sufficiency: at divergence points, the LLM's preferred token consistently resides within the SLM's top-K next-token predictions, even when failing to emerge as the SLM top-1 choice. We therefore propose SELECT TO THINK (S2T), which reframes the LLM's role from open-ended generation to selection among the SLM's proposals, simplifying t
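The "selection instead of generation" reframing the abstract describes can be sketched with toy data: the small model proposes its top-K next tokens, and the large model only has to pick one of them. The vocabulary, logits, and stand-in "teacher" scorer below are made-up illustrations, not the paper's models.

```python
import numpy as np

vocab = ["the", "cat", "sat", "mat", "dog"]
slm_logits = np.array([2.0, 0.5, 1.8, -1.0, 1.9])  # small model's next-token scores

def select_to_think(slm_logits, teacher_score, k=3):
    """Restrict the teacher's choice to the SLM's top-k proposals."""
    topk = np.argsort(slm_logits)[-k:]              # indices of the k best SLM tokens
    return max(topk, key=teacher_score)             # teacher only ranks these k

# Stand-in teacher that happens to prefer "sat":
teacher_score = lambda i: 1.0 if vocab[i] == "sat" else 0.0

slm_top1 = vocab[int(np.argmax(slm_logits))]        # what the SLM alone would emit
chosen = vocab[select_to_think(slm_logits, teacher_score)]
```

Here the SLM's own top-1 choice is "the", but the teacher can still steer decoding to "sat" because that token sits in the SLM's top-3 — the local-sufficiency observation the paper relies on.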
World2VLM: Distilling World Model Imagination into VLMs for Dynamic Spatial Reasoning
arXiv · 2026-04-29
Vision-language models (VLMs) have shown strong performance on static visual understanding, yet they still struggle with dynamic spatial reasoning that requires imagining how scenes evolve under egocentric motion. Recent efforts address this limitation either by scaling spatial supervision with synthetic data or by coupling VLMs with world models at inference time. However, the former often lacks explicit modeling of motion-conditioned state transitions, while the latter incurs substantial computational overhead. In this work, we propose World2VLM, a training framework that distills spatial imagination from a generative world model into a vision-language model. Given an initial observation and a parameterized camera trajectory, we use a view-consistent world model to synthesize geometrically aligned future views and derive structured supervision for both forward (action-to-outcome) and i
Projects + Resources (Discovery)
10 item(s)
Google's Python Class | Python Education | Google for Developers
Brave Search
Welcome to Google's Python Class, a free course for people with little programming experience who want to learn Python. The course includes written materials, lecture videos, and lots of code exercises for practice ...
Python Basics - Complete Course - Programmare in Python
Brave Search
Learn to program in Python with our beginner video course on Python 3, in Italian!
The 30 Best Free AI and Machine Learning Courses – Apostolato Digitale
Brave Search
1. Generative AI by Microsoft: https://lnkd.in/eGiwKFK9 2. MIT Efficient DL Computing: https://lnkd.in/e5EPBA7N 3. NLP by UT Austin: https://lnkd.in/egHapvKh 4. Deep Learning by Sebastian: https://lnkd.in/e9BvvAVS 5. LLMs Bootcamp: https://lnkd.in/eTSP4yvr 6. Harvard Intro ...
A Free Online Course - Elements of AI
Brave Search
Learn more about Reaktor’s and the University of Helsinki’s AI course - no programming or complicated math required.
r/learnpython on Reddit: Best free sites for learning Python
Brave Search
... I highly recommend the key2learn Python course on YouTube. In years of trying to learn this language inside out, their course is the only one that has answered most of my questions.
Free Programming Course - JavaScript do Zero - Trybe
Brave Search
JavaScript do Zero is part of Trybe's programming courses. Learn to program with one of the most widely used languages in the world, JavaScript. Write your first lines of code and find out whether programming is the right field for you.
Learn JavaScript | Google Developer Program | Google for Developers
Brave Search
Earn this badge by completing "Learn JavaScript", an evergreen course and reference to level up your web development knowledge.
JavaScript [40 Hours]
Brave Search
Learn JavaScript from scratch with a free online course from Curso em Vídeo. Hands-on lessons, up-to-date content, and a certificate, with support from Google.
Python Programming Language - Basic - Fundação Bradesco - Escola Virtual
Brave Search
The goal of this course is to connect the theory and practice of logical reasoning through the Python programming language, developing basic console programs and giving a simple introduction to the PyCharm development platform.
Five Free Python Courses with Certificates
Brave Search