BitsLab’s AI Audit Agent Discovers Multiple Vulnerabilities in Story Protocol’s Smart Contracts

By Chainwire Newsroom

Silicon Valley, United States, November 4th, 2025, Chainwire

BitsLab’s AI-powered Audit Agent recently conducted an in-depth security audit of Story Protocol’s smart contract layer and identified one medium-severity and one low-severity vulnerability.

About Story Protocol

Story Protocol is a peer-to-peer intellectual property network that creates a programmable marketplace for knowledge and creativity. Scientific and creative assets are registered on a universal ledger with customizable usage parameters.

BitsLab AI Audit Agent Architecture

BitsLab’s AI Audit Agent is a multi-language general-purpose auditing framework, built upon an advanced system architecture and agent-based technology.

The Agent adopts a three-phase pipeline — Planning, Reasoning/Scanning, and Validating/Confirmation — designed to deliver high-precision and high-efficiency vulnerability detection through several key innovations:

  • Task Database and Planning Engine: The system structures the entire audit into database-managed tasks for resumable execution. A task-driven planning engine automatically parses projects into multi-granular scan units (e.g., function-level, business-flow-level), and uses large models to intelligently extract business logic for deep contextual understanding.
  • Multi-Model Voting and Synthesis: The framework integrates multiple state-of-the-art large models (including OpenAI, Claude, and DeepSeek), routing tasks intelligently by type. During the checklist generation phase, several models generate candidate lists concurrently, and a consensus-based prompt synthesis produces a unified version — effectively reducing model hallucination and bias.
  • Deep RAG and Call-Tree Integration: Beyond building a code-level vector database (LanceDB) for semantic retrieval (RAG), the verification stage merges it with a precomputed contract call tree. When the AI requires additional context, the system automatically retrieves similar code and aggregates contextual information along the call hierarchy, forming an “explainability-enhanced context” for decision support.
  • Strong Confirmation and Early-Stopping Strategy: The verification pipeline employs multi-round confirmation logic, featuring a “strong confirmation” rule (e.g., early convergence upon multiple ‘yes’ votes in one round) and a “clear ‘no’ early stop” mechanism. This combination maintains precision while reducing audit costs and accelerating completion.
  • Highly Configurable Controls: Numerous system parameters — including model selection, concurrency level, scanning mode, and online/offline behavior — are fully configurable through environment variables, allowing flexible adaptation to diverse audit environments and cost constraints.
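The strong-confirmation and early-stopping strategy above can be sketched in a few lines. This is a minimal illustration of the voting logic only; the function names, model labels, and thresholds are hypothetical and do not reflect BitsLab’s actual implementation.

```python
# Minimal sketch of a "strong confirmation" / early-stop vote loop.
# Model names and thresholds are illustrative assumptions, not
# BitsLab's actual configuration.

def confirm_finding(ask_model, models, rounds=3, strong_yes=2):
    """Run up to `rounds` voting rounds over `models`.

    - Strong confirmation: if `strong_yes` models answer "yes" within
      a single round, converge early and report the finding confirmed.
    - Clear-"no" early stop: if every model answers "no" in a round,
      stop immediately and discard the finding, saving audit cost.
    """
    for _ in range(rounds):
        votes = [ask_model(m) for m in models]
        if votes.count("yes") >= strong_yes:
            return "confirmed"      # strong confirmation: early convergence
        if all(v == "no" for v in votes):
            return "rejected"       # clear "no": early stop
    return "inconclusive"           # escalate to a human auditor

# Usage: three models that all agree converge in the first round.
result = confirm_finding(lambda m: "yes", ["model-a", "model-b", "model-c"])
```

In practice each vote would be a full model invocation, so converging in one round instead of three directly cuts inference cost.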

Vulnerabilities Found in Story Protocol

Through deep scanning and multi-round validation by BitsLab’s AI audit framework, two logic vulnerabilities were identified in Story Protocol’s contract layer:

  • [Medium Severity] Logical Error in _exists Function

Description: In the PILicenseTemplate.sol contract, the internal function _exists(uint256 licenseTermsId) incorrectly returns true when licenseTermsId equals 0, even though 0 is an invalid license ID. The licenseTermsId serves as a critical credential for users when attaching license terms and minting license tokens.

Technical Details:

In the contract, IDs are registered starting from 1 (via ++$.licenseTermsCounter). However, the _exists function checks validity using the condition licenseTermsId <= _getPILicenseTemplateStorage().licenseTermsCounter.

Before any ID is registered, licenseTermsCounter equals 0. Therefore, calling _exists(0) satisfies the condition (0 <= 0) and incorrectly returns true.
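The flawed boundary check can be modeled in a few lines of Python. This is a sketch of the logic described above, not the actual Solidity source, and the “fixed” variant is one plausible correction rather than the project’s official patch.

```python
# Python model of the _exists boundary bug (not the actual contract code).

license_terms_counter = 0  # storage starts at 0; IDs are assigned from 1

def register_license_terms():
    """Mimics ++$.licenseTermsCounter: the first registered ID is 1."""
    global license_terms_counter
    license_terms_counter += 1
    return license_terms_counter

def exists_buggy(license_terms_id):
    # Flawed check: before any registration, 0 <= 0 holds,
    # so _exists(0) incorrectly returns True.
    return license_terms_id <= license_terms_counter

def exists_fixed(license_terms_id):
    # One possible correction: additionally reject the invalid ID 0.
    return 0 < license_terms_id <= license_terms_counter

# Before any registration, the buggy check already accepts ID 0:
assert exists_buggy(0)        # incorrect: 0 is an invalid ID
assert not exists_fixed(0)    # correct behavior
```

The mismatch only surfaces at the boundary value 0, which is exactly why checklist-driven edge-case probing catches it where casual review does not.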

Impact:

Although this vulnerability does not directly cause any immediate user fund loss, it could affect external integrations that rely on this function for validation. Moreover, this subtle boundary-condition confusion could become a latent hazard for future feature development, potentially introducing financial risk down the line.

  • [Low Severity] Hash Inconsistency Between registerLicenseTerms and getLicenseTermsId

Description:

The PILicenseTemplate contract uses inconsistent hash calculation methods when registering license terms and retrieving their IDs.

Technical Details:

When registering license terms, users can attach a uri field pointing to a webpage that contains detailed license information.

In the registerLicenseTerms function, the system escapes the uri field within the PILTerms structure using LibString.escapeJSON before computing the hash.

However, in the getLicenseTermsId function, the same structure is hashed without performing this escaping step.
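The mismatch between the two code paths can be sketched as follows. This Python model uses JSON-style escaping and SHA-256 as simplified stand-ins for LibString.escapeJSON and the contract’s actual struct hashing; the registry and function signatures are illustrative, not the real contract API.

```python
import hashlib
import json

def escape_json(s):
    # Stand-in for Solidity's LibString.escapeJSON: JSON-escape the
    # string (e.g. `"` becomes `\"`).
    return json.dumps(s)[1:-1]

def terms_hash(uri):
    # Simplified stand-in for hashing the PILTerms structure.
    return hashlib.sha256(uri.encode()).hexdigest()

registry = {}

def register_license_terms(uri):
    # Registration path: the uri is escaped BEFORE hashing.
    h = terms_hash(escape_json(uri))
    registry[h] = len(registry) + 1
    return registry[h]

def get_license_terms_id(uri):
    # Lookup path: the uri is hashed WITHOUT escaping,
    # producing a different key for any uri containing `"`.
    return registry.get(terms_hash(uri), 0)

uri = 'https://example.com/license?name="MIT"'  # contains a quote
register_license_terms(uri)
# The lookup misses because the two paths hash different byte strings:
lookup = get_license_terms_id(uri)  # 0, even though the terms exist
```

Because the two paths hash different byte strings whenever the uri contains characters that need escaping, the lookup silently returns the “not registered” sentinel 0.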

Impact:

If the uri includes characters requiring escape (such as "), calling getLicenseTermsId will fail to locate the registered ID (returning 0).

Alternatively, two different license terms with distinct uris could hash to the same ID, causing users to mistakenly re-register identical terms — resulting in data redundancy or logic confusion in applications.

Detection Approach of the AI Audit Agent

These two vulnerabilities — the medium-severity logical flaw and the low-severity hash inconsistency — are classic examples of contextual issues that traditional tools struggle to identify and human auditors often overlook.

Business-Flow-Level Analysis Beyond Single Functions:

BitsLab AI’s Planning Engine automatically extracts and analyzes entire business flows such as “registration–retrieval.”

During the Scanning phase, it compares implementations between related functions, discovering that one applies escapeJSON while the other does not — leading to a logical mismatch.

Accurate Boundary and State Validation:

The medium-severity _exists bug represents a classic boundary condition error.

During the planning stage, the AI parsed the logic of _exists, associated it with the state variable licenseTermsCounter (initialized as 0), and understood the ID generation rule ++$.licenseTermsCounter (IDs start at 1).

Through a checklist-driven scan, the AI systematically tested the edge case ID = 0, and with contextual reasoning, quickly identified (0 <= 0) as an erroneous true return.

AI Empowering Web3 Security

This audit of Story Protocol once again demonstrates BitsLab AI Audit Agent’s capability in uncovering complex business logic vulnerabilities.

By combining multi-model consensus, RAG-enhanced context, call tree analysis, and automated validation pipelines, the tool can efficiently and accurately detect deep-seated security risks that traditional methods often miss.

BitsLab remains committed to leveraging cutting-edge AI technologies to safeguard the Web3 ecosystem with precision, speed, and transparency.

About BitsLab

BitsLab is a security organization dedicated to safeguarding and building emerging Web3 ecosystems, with a vision to become a Web3 security institution respected by both the industry and users. The company operates three sub-brands: MoveBit, ScaleBit, and TonBit.

BitsLab focuses on infrastructure development and security auditing for emerging ecosystems, covering but not limited to Sui, Aptos, TON, Linea, BNB Chain, Soneium, Starknet, Movement, Monad, Internet Computer, and Solana. The company also demonstrates deep technical expertise in auditing various programming languages, including Circom, Halo2, Move, Cairo, Tact, FunC, Vyper, and Solidity. The BitsLab team brings together top-tier vulnerability researchers who have won multiple international CTF awards and discovered critical vulnerabilities in well-known projects such as TON, Aptos, Sui, Nervos, OKX, and Cosmos.



Contact
Marketing Manager
Jason Li
BitsLab
jasonlee@bitslab.xyz