Logo
AI is Transforming the World — and Requires a New Safety Framework


PROJECT: Global AI Security Initiative (GASI)

Artificial intelligence is rapidly entering every area of modern life: finance, healthcare, transportation, industry, education, government, and everyday consumer services. Alongside its enormous potential, AI also introduces new categories of risk. If serious AI safety efforts are not pursued today, humanity may face consequences unlike anything seen before — from large-scale economic disruption and loss of control over critical infrastructure to risks that affect the long-term future of civilization itself. We identify four key categories of AI risk that require immediate attention.


Malicious Use or Compromise of AI

Hacking AI systems, using AI for cyberattacks, creation of fake content, using AI against people.

Loss of Control Over AI

Autonomous actions against human interests, hiding true intentions, uncontrolled self-improvement.

AI Errors, Unsafe Actions, and Hallucinations

False answers delivered with high confidence, failures in process control, errors in healthcare, transport, finance, or justice.

Socioeconomic Destabilization

Rising unemployment, human degradation, weakening of social bonds, pressure on national economies.

Our Response to the Core Risks of AI

After analyzing the major categories of AI risk, we concluded that traditional cybersecurity approaches are no longer sufficient. Artificial intelligence requires new defense systems capable of operating in real time, anticipating threats, and preventing incidents before they occur. We propose three complementary solutions that together form a next-generation AI Security ecosystem.

Dailogix

Global Continuous AI Vulnerability Discovery Platform

Dailogix is a distributed next-generation red team platform for continuous testing of AI systems against attacks, jailbreaks, alignment failures, and emerging threats. Vulnerabilities are discovered and mitigated before malicious actors can exploit them.

A New AI Architecture

Reazonex is an alternative AI architecture in which the reasoning core is based on a graph-structured intelligence model, rather than a traditional transformer. AI becomes not only powerful, but also predictable, accurate, and governable.

AI Oversight and Predictive Risk Control System

Oraculex is an intelligent monitoring platform for supervising AI behavior, detecting early warning signals, and forecasting dangerous scenarios. Dangerous trajectories are detected in advance rather than after damage has been done.


Risk Category \ Protection Dailogix Reazonex Oraculex AI Governance
& Regulation
1. Malicious Use or Compromise of AI High Medium High Low
2. Loss of Control Over AI Medium High Maximum Medium
3. Errors, Hallucinations, Unsafe Actions Medium Maximum High Medium
4. Socioeconomic Destabilization Low Medium Medium Maximum

News

News and updates about our project

April 17, 2026 — New website structure

Our project has grown into a full ecosystem of three products focused on AI safety. To reflect this progress, we have redesigned the website structure. Each product now has its own dedicated page with a separate description, making it easier to explore our vision, technologies, and approach to a safer AI future. If you value this mission, consider supporting our project — it is part of our future.

February 15, 2026 — Phase 1 Completed!

We have successfully completed Phase 1 — Architecture & Specifications. The system architecture and technical documentation are ready for development. We launched a public prototype of our global red-team system and built a prototype graph-based reasoner — a deterministic, controllable alternative to probabilistic AI models.

February 02, 2026 — First prototype of our graph reasoner!

We’ve built the first prototype of our graph reasoner — a deterministic alternative to LLMs. Universal world modeling. Same input → same answer. Millisecond-level speed. We believe this is the future beyond transformers. Reazonex

December 09, 2025 — Reazonex!Logo

We are launching a new initiative focused on building a reasoning graph and reasoner — Reazonex. This module will become an integral part of our AI red-teaming and evaluation platform, Dailogix. Reazonex is designed to enhance analytical depth, interpretability, and the overall quality of logical stress-testing, making it a key component for AI Safety tasks. In the future, Reazonex may evolve into a standalone product, as we believe that reasoning graphs represent the next major step in the evolution of artificial intelligence. Reazonex

December 1, 2025 — A New Name for Our System!Logo

Our first system has been officially named Dailogix. Dailogix

November 27, 2025 — First Version of Our LLM Security Testing System Goes Live!

We’ve launched an early test system for LLM security analysis. The prototype lets users evaluate model behavior, detect dangerous prompts, and share vulnerability-revealing queries. While basic for now, it already supports manual testing and heuristic risk checks, with advanced AI-driven analysis coming next. Test System

November 17, 2025 — Phase 1 has kicked off!

We’re setting up the test infrastructure and have a partner ready to provide their public AI model to test our prototypes.

November 10, 2025 — Appreciation Points program

Early supporters can now join our AI safety project! We launched an Appreciation Points program, rewarding contributions ×5. Points show recognition and may grant future subscription discounts, letting supporters help protect AI systems worldwide. Appreciation Points program

October 21, 2025 — Introducing Our Articles Section

We’ve launched a new Articles section! Our first article focuses on AI training data as the foundation of AI safety. Observing live lectures, discussions, and debates allows AI to learn reasoning, correct errors early, and strengthen foundational security for more robust and reliable systems. Articles

October 16, 2025 — Project launch on Product Hunt

We’ve officially launched on Product Hunt! Discover our project, share feedback, and support the release as we open it to a wider tech community for the first time. Product Hunt

October 14, 2025 — Project release on the Web

The official project website is now live. Users can explore product details, roadmap, and updates directly online — marking our first public web release.

October 10, 2025 — AI security concept and roadmap defined

Following consultations with leading AI and cybersecurity experts, we explored strategies for effective AI protection. Based on the research results, a security framework and development roadmap were formed. This milestone concludes Phase 0 — Pre-Project Stage: Research & Concept, establishing a solid foundation for architecture design.

September 26, 2025 — Security analysis and attack simulations

Initial testing of local AI models began in March 2025 to assess system robustness and threat exposure. By late September, multiple attack vectors were simulated, including data-poisoning and prompt-injection scenarios. The study identified key vulnerabilities and produced insights that shaped the foundation of our global AI security architecture.


Support Our Project

Help us make AI safer by contributing to the project.
Read our article about the Appreciation Program to learn how your support is recognized.We Have Refined Our Approach to Supporting the Project
Please consider sending funds to the wallets below:

BTC QR
BTC:
bc1q5uw5vx2fzg909ltam62re9mugulq73cu0v3u9m
ETH QR
ETH / ERC-20 (USDT, USDC, DAI):
0x3DE31F812020B45D93750Be4Bc55D51c52375666

SOL QR
SOL / Solana:
48fMzmbuYMWEBzM2sYVxFfmHJnahRVQqBPqRpKsXevfh
BASE QR
Base:
0xCACf21d2762a4c312b8b6AcE99C21cB2a188CD70

Contact Us

projgasi@proton.me


Public OpenPGP key:

-----BEGIN PGP PUBLIC KEY BLOCK-----

mDMEaRTHNhYJKwYBBAHaRw8BAQdAog3Hl2KphmRPiP/OyAVfC1N9d6EIGbcVOAp0
h2homUO0IVByb2plY3QgR0FTSSA8cHJvamdhc2lAcHJvdG9uLm1lPoiZBBMWCgBB
FiEEB2unwGx0JINAPfQtIJ6r9OFvmTgFAmkUxzYCGwMFCWXS8koFCwkIBwICIgIG
FQoJCAsCBBYCAwECHgcCF4AACgkQIJ6r9OFvmTjOhgD/XezE9XVTL7aKFadZE0ap
+2R1+ftT8ckMFcGYJq5n4X0BAJ3JCTKO/PKYkE56Z1a9SSbuK3FZJh/IhHGe2fKl
D6MJuDgEaRTHNhIKKwYBBAGXVQEFAQEHQFUP/CiGR/SnajxTy1fRJkZFpWyXlcA7
ymPWmxUgPfttAwEIB4h+BBgWCgAmFiEEB2unwGx0JINAPfQtIJ6r9OFvmTgFAmkU
xzYCGwwFCWXS8koACgkQIJ6r9OFvmTjG1QD+KwtvncQP0BLU2dcPFcuJhUlwjxd1
+pFg2dreowF55w0BAJtEAZwwl9OrJgJhqY1Pbr7rRx3fRPUrbDPU/9FUZ1MD
=6pX1
-----END PGP PUBLIC KEY BLOCK-----
  

Fingerprint: 076BA7C06C742483403DF42D209EABF4E16F9938



"The future is not set. There is no fate but what we make for ourselves." — Terminator