Cyber-attacks on organisations of all sizes, from large critical-infrastructure operators to SMEs and start-ups, are constantly increasing, and the impact of such attacks can be wide-reaching and devastating. Organisations need to build the knowledge and capability to improve their preparedness for cyber threats and incidents, and to strengthen mutual assistance among organisations for their security needs. In this context, EuDoros aims to offer preparedness support services that will be deployed to selected entities from different sectors (e.g. energy, transport, banking, health, other digital infrastructure, and public administration) in order to enhance their existing security capabilities and increase their level of protection and resilience against cyber threats.
The project will customise the preparedness support services, taking sector-specific content into account, and integrate them into an innovative, open, collaborative, and tailor-made platform for deployment into the infrastructure of the selected entities. Fifty entities, selected from at least 150 organisations participating in the open call process, will benefit from the preparedness support services, enhancing their overall capability to manage threats and incidents. The selected entities from the various sectors will be trained using a cyber-range-based training service to raise awareness, and will receive consultancy support for using and adopting the EuDoros platform.
TURING is a three-year Horizon Europe project that aims to transform the way we model and simulate complex physical systems. By combining advances in machine learning, computational sciences, and physics, TURING develops physics-aware, robust, and trustworthy generative foundation models (FMs) that can complement or replace traditional, resource-intensive physics-based models.
The project pioneers innovative approaches such as meta-learning, transformer architectures for partial differential equations, hybrid AI-based solvers, low-dimensional representations, uncertainty quantification, explainability, and adversarial robustness.
TURING’s goal is to deliver both generative and multimodal FMs and task-specific models that accurately capture the physical properties of complex systems and maintain high performance under changing data, environmental conditions, and adversarial scenarios.
The value of TURING’s models will be demonstrated in three critical application domains: nuclear energy, high-energy physics, and meteorology. By providing open tools, data repositories, and a collaborative framework, the project aims to accelerate scientific discovery and deliver faster, more reliable solutions for industry and society.
The contemporary AI landscape demands a holistic framework ensuring security across the supply chain and the entire AI lifecycle. Despite existing adversarial attack techniques, a comprehensive end-to-end flow for identifying threats and vulnerabilities, together with their associated risks, is lacking. The EU, through initiatives like the AI Act, emphasizes safety and trustworthiness in AI applications but lacks a system for managing weaknesses in a networked AI supply chain. The CoEvolution project integrates its architecture components to create an end-to-end Security, Trust, and Robustness (STR) assessment solution, generating context-aware AI models characterized by their AI Model Bill of Materials (AIMBOM).
The goal is a universal hub providing a coherent STR risk assessment and security assurance flow, aligned with MLDevOps practices and EU AI regulatory frameworks. The paradigm includes novel AI model descriptions, AIMBOM management, security monitoring, and context awareness. CoEvolution introduces a new STR paradigm based on Bills of Materials, offering a unified approach to describing AI models in supply chains and ensuring STR compliance with EU directives on trust, fairness, data governance, and GDPR guidelines. Open-source trusted datasets and CoEvolution-developed AI models enhance the hub's capabilities, aiming for a robust, adaptable risk-analysis and security-assessment framework aligned with evolving AI cybersecurity threats.
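An AIMBOM can be pictured as a structured inventory of everything a model depends on: datasets, base models, and libraries, each traceable to its supplier. The minimal sketch below uses a hypothetical schema of our own (the project's actual AIMBOM format is not specified here) to show how such a bill of materials could flag supply-chain components that still lack an STR assessment.

```python
from dataclasses import dataclass, field

@dataclass
class AIMBOMComponent:
    """One entry in a hypothetical AI Model Bill of Materials:
    a dataset, pretrained base model, or library the model depends on."""
    name: str
    kind: str               # e.g. "dataset", "base-model", "library"
    origin: str             # supplier or repository the component came from
    verified: bool = False  # has the component passed an STR assessment?

@dataclass
class AIMBOM:
    """Bill of materials for one AI model in the supply chain."""
    model_name: str
    components: list = field(default_factory=list)

    def unverified(self):
        """Return names of supply-chain components lacking an STR assessment."""
        return [c.name for c in self.components if not c.verified]
```

For example, a model built on a vetted open dataset but an unvetted third-party base model would have that base model reported by `unverified()`, making the residual supply-chain risk explicit.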
cPAID envisions researching, designing, and developing a cloud-based, platform-agnostic defense framework for the holistic protection of AI applications and the overall AI operations of organizations against malicious actions and adversarial attacks. cPAID aims to tackle both poisoning and evasion adversarial attacks by combining AI-based defense methods (e.g., life-long semi-supervised reinforcement learning, transfer learning, feature reduction, adversarial training), security- and privacy-by-design, privacy-preserving techniques, explainable AI (XAI), generative AI, and context awareness, as well as risk and vulnerability assessment and threat intelligence for AI systems.
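Adversarial training, one of the defense methods listed above, hardens a model by fitting it on worst-case perturbed inputs rather than clean ones. The sketch below is illustrative only, not cPAID's implementation: it applies the well-known Fast Gradient Sign Method (FGSM) to a plain logistic-regression classifier, crafting a bounded perturbation of each input at every training step and taking the gradient step on the perturbed batch.

```python
import numpy as np

def fgsm(x, y, w, b, eps):
    """Fast Gradient Sign Method: perturb each input in the direction that
    increases the logistic loss, bounded by eps per feature."""
    p = 1.0 / (1.0 + np.exp(-(x @ w + b)))
    grad_x = (p - y)[:, None] * w        # d(loss)/dx for logistic regression
    return x + eps * np.sign(grad_x)

def adversarial_train(x, y, eps=0.1, lr=0.1, steps=200):
    """Train a logistic-regression classifier on FGSM-perturbed inputs,
    a minimal instance of adversarial training."""
    rng = np.random.default_rng(0)
    w = rng.normal(scale=0.01, size=x.shape[1])
    b = 0.0
    for _ in range(steps):
        x_adv = fgsm(x, y, w, b, eps)              # craft worst-case examples
        p = 1.0 / (1.0 + np.exp(-(x_adv @ w + b)))
        w -= lr * x_adv.T @ (p - y) / len(y)       # gradient step on the adversarial batch
        b -= lr * np.mean(p - y)
    return w, b
```

The resulting classifier must fit points that have been pushed `eps` toward the decision boundary, which is what buys robustness against small evasion perturbations at inference time.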
cPAID will identify guidelines to a) guarantee security- and privacy-by-design in the design and development of AI applications, b) thoroughly assess the robustness and resilience of ML and DL algorithms against adversarial attacks, c) ensure that EU principles for AI ethics have been considered, and d) validate the performance of AI systems in real-life use-case scenarios.
The identified guidelines aspire to promote research toward developing certification schemes that will certify the robustness, security, privacy, and ethical excellence of AI applications and systems.