Explore the blockchains, DApps, exchanges, infrastructure, data analytics, and institutions that make up the Baseline ecosystem.
Above the Line
Claims in. Evidence checked. Outputs qualified.
Qualification States
Ambiguity is a valid output.
What You Can See
The Line
Not correctness — epistemic status.
Baseline takes claims about token markets, evaluates them against on-chain evidence, and produces outputs any independent party can replay.
How It Works
Testable statement about market structure.
On-chain data with full provenance.
Structured relationships and patterns.
Qualified, immutable, replayable.
Qualification
Ambiguity is a valid output — not a failure.
Every published metric will eventually be gamed. Baseline treats that as signal.
False confidence is worse than visible uncertainty.
History is never silently rewritten.
Reproducible, not narrative-based.
Ecosystem
A Verification Oracle and Structured Indexer for Digital Asset Markets
Stay above the line.
Anchored on FirmaChain mainnet — a gateway to the multichain ecosystem.
Vision
3 Tenets
The Problem
How Baseline Works
Build on Baseline
Partners
Join the Line
On the foundation of the verified Baseline network, Baseline is more transparent, reproducible, and
Core Member
Baseline is led by a team with experience in blockchain infrastructure, exchanges, product strategy, and large-scale technical operations.
Founder / CEO
CTO
The Baseline roadmap outlines how reproducible verification expands across market structure, developer tooling, and ecosystem integrations.
Baseline is a verification oracle and structured indexer for qualifying claims about market structure on public blockchains. This document is the Protocol Overview. An engineering companion — the Baseline Technical Specification (Yellow Paper) — provides formal schemas, algorithms, and implementation details.
Baseline takes a specific, evaluable statement about a token, pool, wallet, or cohort, evaluates it against retrievable on-chain evidence using disclosed methods, and produces a structured output that any independent party can replay and verify. Its initial focus is the Korean market, where demand for trustworthy, locally anchored market intelligence is acute and existing infrastructure is insufficient. It is designed to serve as the trusted data layer for prediction markets, institutional participants, and regulated entities.
What distinguishes Baseline from generic oracles and indexers is the verification layer. Traditional oracles push price feeds or external data onto a chain without qualifying how that data was produced. Traditional indexers serve raw blockchain data without evaluating what it means. Baseline does both: it indexes and delivers data as an oracle, and it wraps every output in a reproducible, attested verification envelope. Every data point Baseline serves can be independently replayed, disputed, and audited.
Baseline defines a boundary between information that can be qualified by verification and information that cannot. We call this boundary the line.
Information is above the line when it is produced from retrievable evidence, evaluated under disclosed methods, and anchored to a specific context such that an independent party can replay the evaluation and reach the same result. Information is below the line when it depends on private context, unverifiable intent, opaque aggregation, or methods that cannot be inspected or replayed.
The line is not a statement about correctness. It is a statement about epistemic status. Baseline does not elevate an above-the-line output into universal truth. It records how the output was produced, what evidence it depends on, and where uncertainty remains.
Above-the-line statements are scoped. They specify the subject, the context window, the admissible evidence, and the method version used. If inference is involved, uncertainty is preserved rather than collapsed.
Below-the-line statements lack one or more of these properties. They may be actionable in practice, but they are structurally opaque. Baseline does not attempt to eliminate these statements from markets. It prevents them from being silently treated as equivalent to qualified outputs.
Baseline initially supports Solana and EVM-compatible chains (Ethereum, Arbitrum, Base, and other EVM L2s). These chains were selected because they host the majority of active token markets, decentralized exchanges, and liquidity pool infrastructure relevant to the protocol's initial use case.
The evidence system is designed to be chain-extensible. Adding a new chain requires defining chain-specific evidence source types, canonical form mappings, and finality parameters. The claim schema, graph model, verification engine, and attestation protocol are chain-agnostic by design. The Technical Specification (Section 3) provides the full evidence source registry and chain-specific implementation details.
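The split between a chain-agnostic core and chain-specific parameters can be sketched as a registry pattern. All names below (`ChainProfile`, the source-type strings, the finality depths) are illustrative assumptions for this sketch, not the protocol's actual interfaces or values:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class ChainProfile:
    """Chain-specific parameters the evidence system needs (illustrative fields)."""
    chain_id: str
    evidence_source_types: tuple  # e.g. state RPC, log/event indexes
    finality_depth: int           # blocks before state is treated as final


class ChainRegistry:
    """The chain-agnostic core consults this registry; adding a new chain
    is one registration, leaving claim schema and engine untouched."""

    def __init__(self):
        self._profiles = {}

    def register(self, profile: ChainProfile) -> None:
        self._profiles[profile.chain_id] = profile

    def finality_depth(self, chain_id: str) -> int:
        return self._profiles[chain_id].finality_depth


registry = ChainRegistry()
registry.register(ChainProfile("ethereum", ("rpc_state", "rpc_logs"), finality_depth=64))
registry.register(ChainProfile("solana", ("rpc_account", "geyser"), finality_depth=32))
```

The design choice mirrored here is that only the profile changes per chain; everything that consumes the registry stays identical.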
Baseline is built on the foundation of FirmaChain, a Cosmos SDK blockchain originally focused on document verification and digital trust infrastructure. The FirmaChain network's core competency, establishing verifiable, tamper-proof records, maps directly onto Baseline's mission of producing reproducible, auditable verification outputs for market structure.
The network's native token powers the Baseline verification economy and serves three primary functions within the protocol:
Validator staking. Validators stake the native token as collateral against their attestation obligations. Stake is slashable for provably false attestations, aligning validator incentives with honest execution.
Verification fees. Parties submitting claims for verification pay fees denominated in the native token. Fees are distributed among the engine operator, attesting validators, and the protocol treasury.
Protocol governance. Token holders participate in governance decisions affecting protocol parameters, engine version adoption schedules, and predicate registry updates.
The FirmaChain mainnet is not merely the host of the native token. It is the settlement and anchoring layer for Baseline's verification activity. When a Baseline verification is executed, a cryptographic commitment to the result, including the attestation hash and engine version reference, is recorded on the FirmaChain mainnet. This ensures that the verification output has a tamper-evident, publicly auditable on-chain footprint, even though the underlying computation and evidence retrieval happen off-chain.
To be explicit about the division of responsibilities: the FirmaChain mainnet handles token economics (staking, fee distribution, governance), validator identity and slashing, and on-chain attestation records.
The Baseline engine, which runs partly as a chain-native module and partly as an off-chain service, handles evidence retrieval from monitored chains (Solana, EVM chains, etc.), verification computation, and Verification Object generation.
These two layers are interdependent: the engine's outputs derive their finality and auditability from being anchored on the mainnet, and the mainnet's economic activity is driven by the engine's verification throughput. The full on-chain anchoring architecture, including batch commitment structure and storage design, is specified in the Technical Specification, Section 8.
Baseline verifies claims.
A claim is a precise, evaluable statement about a subject within a defined context. A claim is not a recommendation, a score, or a prediction. It is an assertion that can be tested against evidence using disclosed methods. Every verification result corresponds to one claim evaluation under one context.
The formal claim schema, including type definitions, validation rules, and the predicate registry, is specified in the Technical Specification, Section 2.
A subject is the entity a claim refers to. Subjects include tokens, accounts, contracts, pools, transactions, and defined cohorts derived from these primitives. Subjects are typed so that only valid predicates and evidence sources apply.
Baseline distinguishes between observed subjects and inferred subjects. Inferred subjects, such as wallet clusters or cohorts, are produced by explicit, versioned inference methods and carry confidence properties. Identity assertions are not assumed. They are claims.
Context defines where and when a claim applies. Claims are anchored to explicit temporal bounds, typically as block ranges, and include the execution environment such as chain identifiers and protocol conditions relevant to evaluation.
Scope bounds the evidence graph the engine is permitted to construct. Scope is a constraint against both ambiguity and unbounded aggregation. Two evaluations of the same predicate over different windows or scopes are treated as distinct claim evaluations.
Baseline treats an incompletely specified claim as non-verifiable by design.
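The claim anatomy described above, and the rule that an incompletely specified claim is non-verifiable, can be sketched as follows. Field names and the predicate string are hypothetical; the authoritative schema is in the Technical Specification, Section 2:

```python
from dataclasses import dataclass
from typing import Optional


@dataclass(frozen=True)
class Context:
    """Where and when a claim applies: chain plus explicit temporal bounds."""
    chain_id: str
    block_start: int
    block_end: int


@dataclass(frozen=True)
class Claim:
    subject_type: str       # "token" | "account" | "pool" | ...
    subject_id: str
    predicate: str          # drawn from a versioned predicate registry
    context: Optional[Context]
    scope: Optional[dict]   # bounds on evidence-graph expansion


def is_verifiable(claim: Claim) -> bool:
    """An incompletely specified claim is non-verifiable by design."""
    return (
        claim.context is not None
        and claim.scope is not None
        and claim.context.block_start <= claim.context.block_end
    )


complete = Claim("token", "0xabc", "supply_concentration_top10_lt",
                 Context("ethereum", 100, 200), {"max_hops": 2})
partial = Claim("token", "0xabc", "supply_concentration_top10_lt", None, None)
```

The same predicate evaluated over a different window would be a distinct `Claim` instance, matching the rule that two evaluations over different contexts are distinct claim evaluations.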
Baseline assumes disagreement by default.
In adversarial markets, disagreement is not an edge case. It is the normal state of affairs. Participants disagree about control, intent, manipulation, risk, and legitimacy, often while observing the same underlying data. Any verification system that treats disagreement as failure will either collapse into authority or fragment into incompatible interpretations.
Baseline is designed to allow dispute without breakdown.
A dispute arises when two or more parties contest the qualification of a claim. This contest may concern the sufficiency of evidence, the appropriateness of a method, the interpretation of probabilistic inference, or the relevance of context boundaries. Baseline does not attempt to eliminate these disagreements. Instead, it constrains where disagreement can occur and how it is expressed.
The first constraint is that disputes must attach to claims, not to narratives. A participant cannot dispute "this token is a scam" within Baseline because that statement is not a claim. A participant can dispute a qualified claim such as "this token exhibits coordinated distribution with high confidence under version X of the clustering method." The dispute therefore targets something concrete: the subject, the predicate, the context, the evidence, or the method.
The second constraint is that disputes must be reproducible. A disputing party must be able to point to an alternative evaluation that can be replayed. This may involve rerunning the same claim against the same historical state using a different version of the verification engine, adjusting context boundaries, or proposing a competing inference method. What is not permitted is disputing a result without reference to how it was produced.
Ambiguity is treated as a first-class output, not as a failure state.
Many claims cannot be resolved to a single definitive qualification. Wallet control may be partially coordinated. Liquidity may be sufficient under some stress assumptions and insufficient under others. Early trading may be dominated by professional actors without implying malicious intent. Baseline explicitly preserves this ambiguity rather than collapsing it into binary judgments.
Verification outputs therefore encode ambiguity rather than suppress it. Deterministic evaluations may still yield ambiguous interpretations when multiple structural properties interact. Probabilistic inference explicitly encodes uncertainty bounds and alternative hypotheses. In such cases, the system may qualify a claim as inconclusive or contested rather than forcing a resolution.
This approach is intentional. Collapsing ambiguity into certainty is one of the primary ways market participants are misled. Baseline refuses to perform this collapse at the protocol level. Products built on Baseline are constrained to respect this distinction, even if it reduces apparent simplicity.
Baseline does not resolve disputes by branching the system into incompatible states. Verification outputs are additive and versioned. A disputed claim may have multiple qualified evaluations corresponding to different engine versions or contexts. These evaluations can coexist. Participants can choose which versions they trust or which confidence thresholds they accept without fragmenting the underlying verification substrate.
This property is critical for Baseline's long-term role as a standard. Standards persist precisely because they allow disagreement while maintaining shared reference points.
Finally, Baseline distinguishes between disputable claims and unverifiable claims. Some statements cannot be evaluated with available evidence, regardless of method or context. These statements remain below the line. Their exclusion is not a judgment of importance or relevance. It is an acknowledgment of epistemic limits. By making these limits explicit, Baseline prevents unverifiable assertions from silently acquiring the same status as qualified structure.
Dispute and ambiguity are not weaknesses of verification. They are the reason verification must exist. Baseline's role is not to resolve every disagreement, but to ensure that disagreements occur over qualified outputs rather than over unexamined narratives.
The Verification Engine is the execution layer that evaluates claims and produces qualified verification outputs. It is not a data pipeline; it is not an analytics service. It is a constrained runtime whose sole responsibility is to determine how a claim relates to evidence under explicitly defined rules.
The engine exists to ensure that verification is a function of state, method, and context rather than operator discretion. Given the same claim, the same context, the same evidence anchors, and the same engine version, the output must be reproducible. This invariant is foundational. Without it, verification degenerates into interpretation.
Verification begins when a claim is submitted to the engine. The engine resolves the subject and context, anchors the evaluation to specific historical state, and determines which evidence sources and evaluators are admissible for that claim type. It then constructs an internal representation of relevant state and executes the evaluation logic associated with the claim's predicate.
The engine is deliberately bounded. Claims must declare scopes that constrain time, subject expansion, and evidence reach. This prevents unbounded aggregation and ensures that verification remains precise rather than exploratory. Boundedness also ensures that verification outputs are comparable across subjects, because they are evaluated under similar constraints.
The engine is versioned as a whole. Changes to evaluation logic, inference methods, or canonical representations result in a new engine version rather than modifying prior behavior. This allows historical verification results to remain meaningful and prevents silent reinterpretation of the past.
The engine does not decide what information is important. It decides what information is admissible and how it is evaluated. Importance is left to products and users. This separation is how Baseline preserves neutrality while still enforcing rigor.
The full execution model, including the seven-phase evaluation pipeline, execution constraints, and deterministic evaluation specifications, is defined in the Technical Specification, Section 5.
Evidence is the raw material of verification. In Baseline, evidence is not treated as a flat collection of data points, but as state with provenance.
Evidence consists of observations that can be independently retrieved from public sources and anchored to specific moments in time. On-chain balances, transfers, contract code, events, liquidity pool reserves, and transaction ordering all constitute evidence. Off-chain inputs may exist, but only if their source and limitations are explicitly declared. Evidence without provenance is not admissible.
Provenance is not metadata. It is a constraint. Every piece of evidence carries with it a source reference, a retrieval method, a temporal anchor, and an integrity guarantee. This allows a third party to reconstruct the same evidence independently and verify that it corresponds to the same underlying state.
Baseline treats provenance as part of verification, not as an implementation detail. A verification output is only as strong as its weakest evidence link. If an evidence source cannot be replayed, cannot be time-anchored, or cannot be independently retrieved, then any claim depending on it cannot cross the line.
The evidence system, including the evidence unit schema, source type taxonomy, provenance requirements, and canonicalization specification, is formally defined in the Technical Specification, Section 3.
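A minimal sketch of an evidence unit whose provenance fields act as an admissibility constraint. The field names and the SHA-256 integrity hash are illustrative stand-ins for the formal schema in the Technical Specification:

```python
from dataclasses import dataclass, asdict
import hashlib
import json


@dataclass(frozen=True)
class EvidenceUnit:
    source_ref: str        # where the observation came from
    retrieval_method: str  # how a third party can re-fetch it
    block_anchor: int      # temporal anchor
    payload: str           # canonical observation (here, a JSON string)

    def integrity_hash(self) -> str:
        # Canonical encoding then hash, so an independent replayer can
        # confirm it reconstructed byte-identical evidence.
        encoded = json.dumps(asdict(self), sort_keys=True).encode()
        return hashlib.sha256(encoded).hexdigest()


def admissible(unit: EvidenceUnit) -> bool:
    """Evidence without provenance is not admissible."""
    return bool(unit.source_ref and unit.retrieval_method and unit.block_anchor >= 0)
```

The point of the sketch is that provenance is not attached metadata: an empty source or missing anchor makes the unit unusable, not merely poorly labeled.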
Baseline does not evaluate claims over isolated data points. It evaluates claims over structured relationships.
The Evidence Graph is the internal representation that captures how pieces of evidence relate to one another through time, control, and interaction. Nodes in the graph represent canonical entities such as accounts, contracts, pools, and transactions. Edges represent relationships such as transfers, funding paths, shared deployment infrastructure, or temporal proximity.
This graph is constructed dynamically for each verification, scoped to the claim's declared boundaries. It is not a global graph of the entire chain. This scoping prevents accidental overreach and ensures that verification remains focused on the structure relevant to the claim.
The Evidence Graph allows the engine to reason about patterns that are invisible in tabular representations. Coordination, for example, is not observable in a single balance snapshot. It emerges from repeated interactions, shared funding sources, synchronized behavior, and persistence through time. These properties are graph-native.
Temporal structure is explicit in the graph. Relationships are ordered, bounded, and anchored to block heights. This prevents the engine from conflating causation and correlation or from using future information to explain past behavior.
The Evidence Graph is inspectable in principle. While products may render simplified views, the underlying structure exists so that verification can be audited and disputed at the structural level rather than at the level of summarized metrics.
By grounding evaluation in a graph rather than a table, Baseline ensures that verification operates on structure rather than appearance.
The Evidence Graph data model, construction algorithm, scope enforcement rules, and complexity analysis are formally defined in the Technical Specification, Section 4.
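The scoping and bounded-expansion behavior described above can be sketched with a toy graph. The class shape and method names are assumptions for illustration, not the data model of Section 4:

```python
from collections import defaultdict


class EvidenceGraph:
    """Claim-scoped graph: nodes are canonical entities, edges are
    block-anchored relationships such as transfers."""

    def __init__(self, block_start: int, block_end: int):
        self.block_start, self.block_end = block_start, block_end
        self.edges = defaultdict(list)

    def add_transfer(self, src: str, dst: str, block: int) -> bool:
        # Scope enforcement: relationships outside the claim's declared
        # window are rejected, not silently included.
        if not (self.block_start <= block <= self.block_end):
            return False
        self.edges[src].append((dst, block))
        return True

    def reachable(self, start: str, max_hops: int) -> set:
        """Bounded expansion: subject reach is capped by the declared scope,
        preventing unbounded aggregation."""
        seen, frontier = {start}, [start]
        for _ in range(max_hops):
            frontier = [d for n in frontier for d, _ in self.edges[n] if d not in seen]
            seen.update(frontier)
        return seen
```

Edges carry their block anchor, so temporal ordering stays inspectable and future state cannot be used to explain past behavior.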
Deterministic evaluation is the portion of verification that produces results directly derivable from public state.
A deterministic evaluation yields the same output for any verifier executing the same engine version against the same historical anchors. These evaluations form the hardest layer of above-the-line information because they do not rely on inference or interpretation.
Examples include supply concentration measures, liquidity depth relative to defined valuation references, presence or absence of mint and upgrade authorities, ownership renunciation status, token transfer restrictions, and time-indexed supply changes. These properties are not opinions. They are functions of state.
Deterministic evaluation is rule-based and explicit. Each deterministic predicate corresponds to a well-defined computation whose inputs and outputs are fully specified. Changes to these rules result in new engine versions rather than retroactive reinterpretation.
Importantly, deterministic does not mean simplistic. A deterministic evaluation may still involve complex computation over the Evidence Graph, such as identifying indirect control paths or evaluating liquidity behavior across multiple pools. What makes it deterministic is not the complexity, but the absence of probabilistic judgment.
Deterministic outputs establish the structural floor of verification. They define what can be asserted without uncertainty. Other layers of verification may build on them, but they cannot override them.
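A deterministic predicate such as supply concentration is a pure function of state: same balances in, same number out, for every verifier. A toy version with hypothetical inputs:

```python
def top_n_concentration(balances: dict, n: int) -> float:
    """Share of total supply held by the n largest holders.
    No inference, no judgment: purely a function of the balance snapshot."""
    total = sum(balances.values())
    if total == 0:
        return 0.0
    top = sorted(balances.values(), reverse=True)[:n]
    return sum(top) / total


holders = {"a": 500, "b": 300, "c": 150, "d": 50}
assert top_n_concentration(holders, 2) == 0.8  # top two hold 800 of 1000
```

A real deterministic predicate may traverse the Evidence Graph rather than a flat map, but the defining property is the same: no probabilistic judgment anywhere in the computation.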
Not all meaningful claims can be resolved deterministically.
In adversarial markets, some of the most important questions concern latent structure: control, coordination, intent, and dominance. These properties are not explicitly encoded on-chain; they must be inferred from patterns.
Baseline treats inference as unavoidable but dangerous, and therefore elevates uncertainty to a first-class property rather than attempting to hide it.
Inference is performed using statistical models, heuristics, and graph-based methods applied to the Evidence Graph. These methods produce likelihoods, not certainties. Each inferred result is accompanied by confidence bounds and, where applicable, alternative explanations that fit the evidence.
Inference methods are versioned and disclosed. A change in clustering logic or sniper detection methodology results in a new method version. Verification outputs explicitly reference the method version used so that results can be compared or disputed across time.
Baseline never upgrades inference into fact. An inferred claim does not become equivalent to a deterministic one simply because its confidence is high. Products and users may choose to treat high-confidence inference as actionable, but the protocol preserves the distinction.
Uncertainty is not a failure state. It is an honest output. A claim may be qualified as inconclusive when evidence supports multiple incompatible interpretations or when confidence remains below meaningful thresholds. Baseline prefers inconclusiveness over false certainty.
By encoding uncertainty rather than collapsing it, Baseline enables disagreement without confusion. Participants may disagree about how much confidence is sufficient, but they do not disagree about what the system actually produced.
Inference is where most verification systems lose credibility. Baseline's design ensures that inference remains visible, bounded, and disputable rather than authoritative.
Inference method specifications, including wallet clustering and sniper detection algorithms, confidence schemas, and degradation rules, are defined in the Technical Specification, Section 5.3.
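The shape of an inference output, carrying its method version, confidence bounds, and alternative hypotheses rather than a bare verdict, might look like this sketch. All field and method names are hypothetical:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class InferenceResult:
    method: str          # e.g. "wallet_clustering" (hypothetical name)
    method_version: str  # a change in logic is a new version, never an overwrite
    hypothesis: str
    confidence: float
    lower_bound: float   # plausible range under model and evidence limits
    upper_bound: float
    alternatives: tuple  # competing hypotheses that also fit the evidence


def reportable(result: InferenceResult, threshold: float = 0.7) -> bool:
    """The primary hypothesis is reported only above the method's threshold;
    below it, the claim surfaces as inconclusive, never as fact."""
    return result.confidence >= threshold
```

Note that even a reportable result stays an `InferenceResult`: high confidence never converts it into the deterministic kind of output.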
A Verification Object is the atomic output of Baseline.
It is a structured record of a single claim evaluation under a specific context and engine version. A Verification Object binds the claim definition, subject and context, evidence references, method identifiers, and the result. Products render Verification Objects. Validators attest to them. APIs return them.
Verification Objects are immutable records. If methods evolve or new evidence becomes admissible, a new object is produced under a new version. Earlier objects remain addressable. The object's identity is content-addressed: its identifier is the Keccak-256 hash of its canonically encoded contents, ensuring that any modification produces a detectably different object.
The complete Verification Object schema is specified in the Technical Specification, Section 6.1.
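Content addressing can be sketched as hashing a canonical encoding of the object's bound fields. One caveat: the protocol specifies Keccak-256, and Python's stdlib `hashlib.sha3_256` is NIST SHA-3, which is not the same function; it is used here purely as a stand-in to keep the sketch dependency-free:

```python
import hashlib
import json


def verification_object(claim, context, evidence_hashes, method_ids,
                        result, engine_version):
    """Immutable, content-addressed record of one claim evaluation.
    Any change to any bound field yields a different identifier."""
    body = {
        "claim": claim,
        "context": context,
        "evidence": sorted(evidence_hashes),  # order-independent binding
        "methods": method_ids,
        "result": result,
        "engine_version": engine_version,
    }
    canonical = json.dumps(body, sort_keys=True, separators=(",", ":")).encode()
    # Stand-in hash: the protocol uses Keccak-256, not NIST SHA-3.
    obj_id = hashlib.sha3_256(canonical).hexdigest()
    return {"id": obj_id, **body}
```

Two evaluations under different engine versions produce objects with different identifiers that coexist, matching the versioning model described later.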
Verification does not produce verdicts. It produces qualifications that describe how a claim relates to evidence. There are five qualification states:
Verified. The claim is supported entirely by deterministic evaluation. All relevant predicates were evaluated, all results are conclusive and consistent, and no inference was required. This is the strongest qualification state.
Inferred. The claim is supported by probabilistic evidence. At least one inference method was used, and the primary hypothesis exceeds the method's reporting threshold. Confidence bounds and alternative hypotheses are attached to the output.
Observed. The evidence graph was constructed and structural properties were described, but the specific predicate could not be meaningfully evaluated within this context. Structure was found and recorded, but the claim itself remains unevaluated.
Inconclusive. Evidence supports multiple incompatible interpretations and no primary hypothesis has confidence above the reporting threshold. This is an honest output, not a failure. The evidence is genuinely ambiguous.
Unqualified. Evidence is insufficient to construct a meaningful evaluation. This may occur because evidence retrieval failed, the subject was not found, or the scope was entirely empty. Unqualified is distinct from a negative result: it means the system could not evaluate the claim, not that the claim is false.
These states preserve the distinction between structure and interpretation. Products are constrained to render them explicitly. Qualification determination is mechanical: the engine applies qualification rules deterministically based on evaluation results. Given the same results, the same qualification state is always produced.
Formal entry conditions for each qualification state are specified in the Technical Specification, Section 6.2.
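The mechanical character of qualification can be sketched as a pure function over evaluation results. The entry conditions below are simplified paraphrases of the five states above, not the formal rules of Section 6.2:

```python
def qualify(evidence_found: bool, predicate_evaluable: bool,
            used_inference: bool, confidence, threshold: float = 0.7) -> str:
    """Deterministic mapping from evaluation results to a qualification state.
    Same inputs always yield the same state; no operator discretion."""
    if not evidence_found:
        return "unqualified"    # could not evaluate at all; not a negative result
    if not predicate_evaluable:
        return "observed"       # structure recorded, claim itself unevaluated
    if not used_inference:
        return "verified"       # purely deterministic path
    if confidence is not None and confidence >= threshold:
        return "inferred"       # probabilistic, above reporting threshold
    return "inconclusive"       # ambiguity is a valid output
```

Because the mapping is a pure function, any replayer that reproduces the evaluation results also reproduces the qualification state.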
When inference is used, Baseline emits confidence and bounds rather than categorical certainty. Confidence is tied to a specific inference method and version. Bounds represent the plausible range of outcomes under the model and evidence limitations.
Baseline does not define universal action thresholds. It exposes uncertainty so that products and users can define their own boundaries without the protocol collapsing inference into fact.
Reproducibility is the primary guarantee that Baseline offers. Without it, verification reduces to trust in an operator. A Verification Object can be independently reproduced by any party with access to the same public evidence and the same engine version.
This property is enforced through the mechanisms described earlier in this document: explicit evidence anchoring, deterministic canonicalization via BCE (see Evidence and Provenance), and versioned evaluation logic (see The Verification Engine). This guarantee extends to historical replay.
A Verification Object remains valid as a record of what the system concluded when evaluated against a specific historical state, even if the present state has changed.
Reproducibility also underpins dispute resolution: a disputing party can rerun the same claim under the same conditions or propose an alternative engine or method version. Disagreement is resolved by comparison, not by appeal.
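Dispute-by-replay then reduces to comparing content hashes of independently produced outputs. The toy engine below uses a placeholder computation; only the invariant matters, namely that the output is a pure function of claim, anchors, and engine version:

```python
import hashlib
import json


def run_engine(claim, anchors, engine_version):
    """Deterministic stand-in for the engine: output depends only on inputs."""
    result = {
        "claim": claim,
        "anchors": anchors,
        "engine_version": engine_version,
        "value": sum(anchors) % 97,  # placeholder deterministic computation
    }
    canonical = json.dumps(result, sort_keys=True).encode()
    result["id"] = hashlib.sha256(canonical).hexdigest()
    return result


def dispute_by_replay(published, claim, anchors, engine_version) -> bool:
    """A disputing party reruns the evaluation; disagreement is settled by
    comparison of content hashes, not by appeal to authority."""
    return run_engine(claim, anchors, engine_version)["id"] == published["id"]
```

A dispute that succeeds under a different engine version does not overwrite the original object; both remain addressable, per the versioning model.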
Baseline treats changes in interpretation as changes in version, not changes in truth.
Markets evolve. Methods improve. Definitions sharpen. Rather than overwriting the past, Baseline encodes evolution through versioning. Claim schemas, evaluation methods, and the engine itself are all versioned components.
A Verification Object is therefore always a statement about reality as understood by a specific version of Baseline at a specific point in time. This does not make it obsolete. It makes it precise.
Versioned reality allows Baseline to improve without breaking trust. Participants can compare results across versions, choose which versions they rely on, and understand exactly what changed. No result is silently reclassified.
This approach also allows multiple interpretations to coexist. Two Verification Objects evaluating the same claim under different engine versions can both be valid. The difference between them is itself information.
By formalizing versioned reality, Baseline avoids the false promise of a single, timeless truth. Instead, it provides a stable framework for reasoning as understanding evolves.
Engine versioning semantics, deprecation policy, and version selection are specified in the Technical Specification, Section 5.5.
Validators are independent actors that attest to the correctness of verification execution.
They do not generate data, select claims, or interpret results. Their role is narrowly defined: to confirm that a Verification Object was produced by executing a specific version of the Baseline Verification Engine against the declared evidence anchors and context.
This distinction is critical. Validators do not assert that a claim is true. They assert that the system was run correctly.
Validators operate by replaying verification requests using the same engine version, claim definition, context, and evidence references. If the replayed execution produces an identical Verification Object, the validator may attest to that object. If it does not, the validator withholds attestation.
This design ensures that validation is mechanical rather than discretionary. A validator cannot substitute its own interpretation or judgment without deviating from the protocol.
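The attest-or-withhold decision can be sketched as follows. The hash-based "signature" is only a stand-in for a real cryptographic signature over the binding; field names are assumptions:

```python
import hashlib


def attestation(validator_id: str, obj_hash: str, engine_version: str,
                replayed_hash: str):
    """A validator attests to execution integrity, never to semantic truth.
    Attestation is produced only when local replay matches exactly;
    otherwise it is withheld (None)."""
    if replayed_hash != obj_hash:
        return None  # withhold: replay diverged
    # Binding: validator identity + object hash + engine version, so the
    # signature cannot be reused against a different output.
    binding = f"{validator_id}|{obj_hash}|{engine_version}".encode()
    return {
        "validator": validator_id,
        "object": obj_hash,
        "engine_version": engine_version,
        "sig_stand_in": hashlib.sha256(binding).hexdigest(),
    }
```

Nothing in the function inspects what the claim says; mismatch of the replayed hash is the only ground for refusal, which is exactly the mechanical scope described above.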
Validator technical requirements, the seven-step replay protocol, and the attestation schema are formally specified in the Technical Specification, Section 7.
Baseline distinguishes between two validator participation tiers. All validators in the FirmaChain network continue to perform standard block validation and receive block rewards as before. This baseline tier is unchanged by the introduction of Baseline and requires no additional infrastructure.
Validators may optionally upgrade to the Baseline attestation tier by running the Baseline engine alongside their existing node. Attestation-tier validators replay verification requests and attest to the resulting Verification Objects, earning a share of verification fees paid in the native token by requesters. This additional incentive layer sits entirely on top of, and does not replace, the existing block reward structure. Participation in Baseline attestation is opt-in and does not affect a validator's standing in standard block consensus.
A validator attests to execution integrity, not to semantic correctness.
Specifically, a validator attests that the claim was evaluated using the declared engine and method versions, the evidence referenced was admissible and correctly anchored, the evaluation was executed within the declared scope, and the resulting Verification Object matches the output produced by the validator's own replay.
Validators do not attest that a claim is desirable, accurate in a normative sense, or predictive of future outcomes. They do not attest to confidence thresholds or to how a product should interpret uncertainty.
This narrow scope is intentional. It allows validators to provide cryptographic assurance without becoming arbiters of meaning.
Attestations are additive. Multiple validators may independently attest to the same Verification Object. Products and users may define their own requirements for how many attestations are sufficient for their purposes.
Validators are economically aligned with honest execution through the native token staking and fee model inherited from the FirmaChain network infrastructure.
When a party submits a claim for verification, they pay a fee denominated in the native token. This fee is distributed among three recipients: the engine operator that executes the primary verification (majority share), the validators that attest to the result (minority share split among attesting validators), and the protocol treasury (remainder, funding development and infrastructure).
Validators must stake the native token as collateral to participate. If a third party demonstrates that a validator attested to a Verification Object its own replay could not have produced, the validator's stake is slashed. A portion of the slashed stake is awarded to the party that proved the false attestation, creating an external audit incentive.
This structure ensures that the cost of dishonest attestation exceeds the potential benefit. Validators earn steady income from honest replay, and lose disproportionately from false attestation. The existing FirmaChain validator set provides a ready-made network of staked participants, avoiding the cold-start problem that undermines many new attestation protocols.
To be explicit: standard block validation rewards are not affected by Baseline. Validators who do not run the Baseline engine continue to receive block rewards on the same terms as before. The verification fee distribution described above represents an additional incentive layer, funded entirely by fees paid by verification requesters, that only flows to validators actively participating in Baseline attestation.
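The three-way fee split can be sketched with placeholder proportions; the actual shares are defined in the Technical Specification, Section 7.5, and the 60/30 numbers below are invented for illustration:

```python
def distribute_fee(fee: float, n_attesting: int,
                   operator_share: float = 0.60,
                   validator_share: float = 0.30):
    """Illustrative split of a verification fee: engine operator (majority),
    attesting validators (minority, split evenly), treasury (remainder).
    Proportions here are placeholders, not protocol parameters."""
    operator = fee * operator_share
    per_validator = (fee * validator_share) / n_attesting if n_attesting else 0.0
    treasury = fee - operator - per_validator * n_attesting
    return operator, per_validator, treasury


op, each, tr = distribute_fee(100.0, n_attesting=3)  # 60 / 10 each / 10
```

Because the pool for validators is fixed per verification, each additional attester dilutes the per-validator share rather than inflating total payout, which keeps fees bounded for requesters.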
The full incentive model, including proposed fee splits, staking requirements, and slashing conditions, is specified in the Technical Specification, Section 7.5.
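The incentive inequality described above can be checked with back-of-the-envelope numbers. All values here (stake, slash fraction, detection probability, potential gain) are hypothetical illustrations, not protocol parameters:

```python
# Illustrative check that dishonest attestation is unprofitable:
# the expected slashing loss must exceed what a false attestation
# could earn. All numbers are invented for illustration.

def dishonest_attestation_is_unprofitable(stake: float, slash_fraction: float,
                                          detection_prob: float,
                                          potential_gain: float) -> bool:
    expected_loss = stake * slash_fraction * detection_prob
    return expected_loss > potential_gain

# Public replayability makes detection probability high, so even a
# large bribe or fee gain is dominated by the expected slash.
result = dishonest_attestation_is_unprofitable(
    stake=10_000, slash_fraction=0.5, detection_prob=0.9, potential_gain=500)
```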
Validator attestations are expressed through cryptographic signatures.
Each attestation binds the validator's identity to a specific Verification Object hash, along with metadata indicating the engine version and verification context. This ensures that signatures cannot be reused or misapplied to different outputs.
Signed Verification Objects can be transmitted, cached, and stored without loss of integrity. Any party receiving a signed object can verify its authenticity by checking the validator signature against the object hash.
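A minimal sketch of this binding follows, using a keyed HMAC as a stand-in for the protocol's actual asymmetric signature scheme (which this document does not specify). The field names are illustrative:

```python
import hashlib
import hmac
import json

# Sketch: an attestation binds a validator key to a specific Verification
# Object hash plus engine metadata, so the signature cannot be reused
# against a different output. HMAC stands in for real signatures here.

def object_hash(verification_object: dict) -> str:
    canonical = json.dumps(verification_object, sort_keys=True,
                           separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()

def attest(validator_key: bytes, obj_hash: str, engine_version: str) -> dict:
    message = f"{obj_hash}|{engine_version}".encode()
    return {
        "object_hash": obj_hash,
        "engine_version": engine_version,
        "signature": hmac.new(validator_key, message, hashlib.sha256).hexdigest(),
    }

def verify_attestation(validator_key: bytes, attestation: dict) -> bool:
    message = f"{attestation['object_hash']}|{attestation['engine_version']}".encode()
    expected = hmac.new(validator_key, message, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, attestation["signature"])

vo = {"claim": "supply_concentration", "qualification": "VERIFIED"}
att = attest(b"validator-secret", object_hash(vo), "engine-1.2.0")
assert verify_attestation(b"validator-secret", att)
# Tampering with the object changes its hash, so the attestation
# no longer applies to the modified output.
assert object_hash({**vo, "qualification": "INFERRED"}) != att["object_hash"]
```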
Signing does not create finality in the blockchain sense. It creates authenticity. A signed object is not immutable truth. It is a verifiable record that a specific execution occurred and was independently confirmed.
Baseline deliberately separates signing from settlement. Verification Objects may be signed off-chain or on-chain depending on product requirements, but their meaning remains the same.
Verification in Baseline is not a one-time event.
Because claims are time-bound and engines are versioned, a claim may be re-evaluated under new conditions. A re-evaluation may use a different context window, a newer engine version, or additional evidence that has since become available.
Re-evaluation does not invalidate prior results. Each Verification Object remains a valid statement about what the system concluded under its original conditions. New results coexist with old ones.
Replayability is what makes this coexistence meaningful. Any party can replay an old verification to confirm that it was produced correctly. This property prevents silent reinterpretation and allows verification history to be audited.
Re-evaluation therefore extends Baseline's usefulness over time without requiring global state resets or forced upgrades.
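A deterministic evaluation and its replay can be sketched as follows. The predicate logic, field names, and hashing choice are illustrative, not the real engine:

```python
import hashlib
import json

# Sketch of replay: re-run a deterministic evaluation from the same
# evidence and confirm the content hashes match. Any divergence would
# indicate the original object was not produced as claimed.

def evaluate_concentration(balances: dict, top_n: int, threshold: float) -> dict:
    """Illustrative deterministic predicate: do the top-N holders
    control more than `threshold` of total supply?"""
    top = sorted(balances.values(), reverse=True)[:top_n]
    share = sum(top) / sum(balances.values())
    return {"predicate": "supply_concentration",
            "result": share > threshold,
            "qualification": "VERIFIED"}

def content_hash(obj: dict) -> str:
    return hashlib.sha256(json.dumps(obj, sort_keys=True).encode()).hexdigest()

evidence = {"w1": 600, "w2": 250, "w3": 100, "w4": 50}
original = evaluate_concentration(evidence, top_n=2, threshold=0.5)
replayed = evaluate_concentration(evidence, top_n=2, threshold=0.5)
assert content_hash(original) == content_hash(replayed)  # replay matches
```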
Disagreement is inevitable. Forking is optional.
Baseline is designed so that disagreement does not require fragmentation of the system. Multiple Verification Objects may exist for the same claim evaluated under different assumptions, contexts, or engine versions. These objects do not compete for dominance. They coexist.
Validators do not resolve disagreements. They confirm execution integrity. Products and users choose which Verification Objects to rely on, which engine versions to trust, and which confidence thresholds to accept.
This design avoids the failure mode where a system must choose a single authoritative interpretation to remain coherent. Instead, coherence is maintained through shared structure and explicit versioning.
Forking occurs when participants disagree on the rules of verification itself. Baseline minimizes this by making rules explicit, inspectable, and versioned. When rules change, they do so through new versions rather than through contentious reinterpretation.
Disagreement becomes part of the record rather than a source of instability. This is how Baseline maintains unity without imposing consensus.
In Baseline, products are views over verification, not sources of verification themselves.
A product does not define claims, alter qualification states, or reinterpret evidence. It selects which Verification Objects to surface, how to group them, and how to present them to a user. The underlying verification remains unchanged regardless of presentation.
This separation is structural. It ensures that the meaning of verification does not drift as products evolve, and that multiple products can coexist without fragmenting the standard. A consumer-facing portal, an institutional dashboard, and a developer API may look entirely different while relying on the same verification substrate.
Products therefore operate downstream of the line. They may guide attention, provide comparison, or emphasize certain risks, but they cannot silently upgrade below-the-line information into above-the-line verification. This constraint is enforced by design rather than by policy.
By treating products as views, Baseline preserves neutrality while still enabling usability. The protocol defines what can be said with verification. Products define how that information is experienced.
The Memecoin Intel Portal is the first product built on Baseline.
It exists to address a specific failure mode in Web3 markets: the extreme information asymmetry present in meme coin trading. In these markets, professional participants operate with aggregated structural insight while ordinary participants rely on fragmented signals and narrative inference.
The Portal does not attempt to equalize outcomes. It equalizes access to structure.
Technically, the Portal is a renderer of Verification Objects. It requests a standard set of claims for a given token and presents the qualified results in a coherent interface. The Portal does not compute verification logic itself. It delegates all verification to the Baseline engine and displays the signed outputs.
The Portal is intentionally narrow in scope. It focuses on early-stage token structure, distribution behavior, liquidity characteristics, and coordination signals. It avoids long-horizon prediction, sentiment analysis, or performance ranking. Its value lies in making structure legible, not in forecasting returns.
As the first Baseline product, the Portal serves as a proving ground. If verification can be made reproducible and interpretable in one of the most adversarial market categories, the same substrate can extend to broader domains.
The Memecoin Intel Portal surfaces a defined subset of Baseline claims tailored to meme coin markets.
These claims focus on properties that materially affect market structure without requiring subjective interpretation. They include claims about distribution concentration, early acquisition patterns, liquidity depth relative to valuation, persistence of control signals over time, and the presence of coordinated behavior among holders or traders.
Each meme coin claim is versioned and explicitly scoped. A claim evaluated over the first hours after deployment is distinct from the same claim evaluated days later. The Portal makes these distinctions explicit rather than collapsing them into a single label.
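One way to see why scope is part of claim identity: if the context window enters the claim's content address, a six-hour evaluation and a seven-day evaluation are structurally distinct claims. The field names and hashing choice below are illustrative only:

```python
import hashlib
import json

# Sketch: claim identity covers predicate, subject, context, and version,
# so evaluations over different windows can never be conflated.
# All identifiers here are hypothetical.

def claim_id(predicate: str, subject: str, context: dict, version: str) -> str:
    payload = json.dumps(
        {"predicate": predicate, "subject": subject,
         "context": context, "version": version}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()[:16]

early = claim_id("sniper_dominance", "TOKEN_ABC",
                 {"chain": "solana", "window_hours": 6}, "1.0.0")
later = claim_id("sniper_dominance", "TOKEN_ABC",
                 {"chain": "solana", "window_hours": 168}, "1.0.0")
assert early != later  # different scopes, different claims
```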
Importantly, meme coin claims do not attempt to classify intent. A claim about sniper dominance does not assert malicious behavior. A claim about wallet clustering does not assert ownership. These claims describe observable or inferable structure and preserve uncertainty where inference is required.
By constraining claims in this way, the Portal avoids becoming an accusation engine. It remains a verification surface.
Visualization is the primary way users interact with Baseline outputs, but visualization is not verification.
A visualization is an interpretation layer. It translates Verification Objects into forms that humans can reason about, such as timelines, distributions, and comparative views. The act of visualization necessarily emphasizes some aspects of the data and deemphasizes others.
Baseline treats this interpretive step explicitly rather than pretending it does not exist. The Portal's visual language is designed to reflect qualification states, uncertainty, and scope boundaries rather than hide them. Deterministic results and inferred results are rendered differently. Confidence and bounds are visible. Ambiguity is not smoothed away.
This is a deliberate design choice. Many market tools use visualization to create false clarity. Baseline uses visualization to surface structure while preserving epistemic limits.
Users remain responsible for interpretation. The Portal provides the materials for reasoning, not conclusions.
Compliant products built on Baseline operate within defined protocol boundaries. They do not issue buy or sell recommendations derived from verification results. They do not collapse qualification states into a single score that implies desirability. They do not present inferred claims as deterministic facts. They do not suppress uncertainty to simplify messaging.
These boundaries are enforced through the Verification Object schema itself: products receive structured outputs with explicit qualification states and confidence bounds, and compliant rendering preserves these distinctions. A product that strips or overrides them is no longer faithfully representing the verification substrate and cannot present its outputs as Baseline-verified.
This constraint is central to Baseline's credibility. The moment verification is used to assert what users should do rather than what can be verified, the line is erased.
The Baseline verification API is under active development. When production-ready, it will expose endpoints for claim submission, Verification Object retrieval, attestation status queries, and engine version management.
The target API design supports explicit version control, allowing integrators to pin to specific engine versions or request evaluation under the latest stable release. Method-level overrides will be available for advanced use cases where integrators need to specify particular inference method versions.
The preliminary API interface is outlined in the Technical Specification, Section 10. This specification will be finalized alongside the production API release. The Verification Object JSON schema (Technical Specification, Section 6.1), the feature engineering reference (Technical Specification, Appendix C), and the test vector suite (Technical Specification, Appendix D) serve as the current integration reference points for developers building against the protocol.
Developers building products on Baseline operate downstream of the verification layer. A Baseline product receives Verification Objects and renders them. It does not modify qualification states, suppress uncertainty, or reinterpret evidence.
The key integration patterns are: querying for Verification Objects by subject and context, rendering qualification states and confidence bounds accurately, respecting the distinction between deterministic and inferred results, and displaying attestation counts so users can assess the assurance level of each output.
Products must preserve the qualification state and confidence information from the Verification Object in their user-facing rendering. A product that collapses these states into a single score, hides uncertainty, or presents inferred results as deterministic violates the protocol's constraints and ceases to be a compliant Baseline product.
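The compliance boundary can be sketched as two renderers over the same (illustrative) Verification Object: one preserves qualification, confidence, and attestation count; the other collapses them into a score. The field names are invented for illustration:

```python
# Sketch of the rendering constraint: a compliant view passes qualification
# state and confidence through verbatim; collapsing them into one number
# is non-compliant. Field names are hypothetical, not the real schema.

def render_compliant(vo: dict) -> dict:
    return {
        "headline": vo["claim"],
        "qualification": vo["qualification"],     # preserved verbatim
        "confidence": vo.get("confidence"),       # bounds stay visible
        "attestations": vo["attestation_count"],  # assurance level shown
    }

def render_noncompliant(vo: dict) -> dict:
    # Collapses everything into a single score: this product could not
    # present its output as Baseline-verified.
    return {"score": 87}

vo = {"claim": "wallet_clustering", "qualification": "INFERRED",
      "confidence": {"point": 0.72, "low": 0.58, "high": 0.83},
      "attestation_count": 4}
view = render_compliant(vo)
assert view["qualification"] == "INFERRED"
assert view["confidence"]["low"] == 0.58
```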
Detailed developer documentation, including SDKs, code examples, and rate limit policies, will be published alongside the production API release.
Baseline assumes that it operates in adversarial environments by default.
Market participants are not neutral observers of verification systems. They adapt to them. They probe their boundaries, reverse-engineer their metrics, and attempt to shape observable structure in ways that produce favorable qualifications. Any system that assumes honest behavior or static conditions will fail once it is relied upon.
Baseline therefore makes a conservative assumption: subjects being evaluated may actively attempt to influence verification outcomes, and users of verification outputs may selectively interpret results to support predetermined narratives.
This assumption influences every design choice. Evidence is bounded. Methods are versioned. Inference is probabilistic rather than categorical. Outputs are qualified rather than declarative. Validators attest to execution integrity rather than meaning. Products are constrained in what they can say.
Adversarial behavior is not treated as an exception. It is treated as the environment.
Verification in markets is difficult because the properties that matter are rarely explicit. On-chain systems expose state, but they do not expose intent. Control is fragmented across addresses. Coordination is expressed indirectly through timing, reuse, and persistence. Liquidity can be real or engineered. Distribution can be broad or synthetically dispersed. The same surface-level metrics can correspond to radically different underlying realities.
Markets are also dynamic. Properties that appear benign at one point in time may become concerning later, and vice versa. Any verification system that does not account for time risks confusing transient structure with persistent behavior.
There is a structural tension between completeness and tractability. Capturing every possible relationship produces an intractable system. Simplifying too aggressively produces misleading outputs. Baseline addresses this by requiring explicit scopes and by treating verification as bounded evaluation rather than exhaustive analysis.
Finally, verification must remain reproducible. Many systems fail not because they lack insight, but because they cannot explain how that insight was produced. Without reproducibility, disagreement becomes irresolvable and trust collapses into authority.
Verification is therefore hard not because data is unavailable, but because structure must be extracted under constraint.
Inference is necessary, but it is also the most fragile component of verification.
One failure mode is overconfidence. Models may produce high-confidence outputs in regions where training data is sparse or where adversarial adaptation has occurred. Without explicit uncertainty bounds, these outputs appear more authoritative than they deserve.
Another failure mode is proxy collapse. An inference method may rely on features that correlate with the target property under historical conditions but cease to do so once those features become known and manipulated. For example, wallet reuse patterns may change once participants understand they are being tracked.
A third failure mode is contextual leakage. Inference performed over overly broad scopes may incorporate irrelevant or future information, leading to conclusions that cannot be replayed under strict historical constraints.
Baseline mitigates these failure modes by isolating inference from deterministic evaluation, versioning inference methods, enforcing temporal anchoring, and emitting confidence and bounds rather than categorical judgments. These mitigations do not eliminate failure, but they make failure observable rather than silent.
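A sketch of uncertainty-preserving output: the inference emits a point estimate with bounds, then widens the bounds and downgrades the qualification when evidence is sparse. The thresholds and field names are invented for illustration:

```python
# Sketch: inferred results carry confidence bounds rather than a
# categorical verdict, and degrade visibly under sparse evidence.
# The evidence threshold and margins are illustrative only.

def qualify_inference(point: float, evidence_units: int,
                      min_evidence: int = 50) -> dict:
    degraded = evidence_units < min_evidence
    margin = 0.25 if degraded else 0.10  # wider bounds under sparsity
    low, high = max(0.0, point - margin), min(1.0, point + margin)
    qualification = "INCONCLUSIVE" if degraded else "INFERRED"
    return {"qualification": qualification,
            "confidence": {"point": point, "low": low, "high": high},
            "degraded": degraded}

rich = qualify_inference(0.8, evidence_units=200)
sparse = qualify_inference(0.8, evidence_units=12)
assert rich["qualification"] == "INFERRED"
assert sparse["qualification"] == "INCONCLUSIVE"
# Sparse evidence yields visibly wider bounds, not silent confidence.
assert (sparse["confidence"]["high"] - sparse["confidence"]["low"]
        > rich["confidence"]["high"] - rich["confidence"]["low"])
```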
A comprehensive catalog of inference failure modes, evidence retrieval failures, and their mitigations is provided in the Technical Specification, Section 9.
Any published metric will eventually be gamed.
Once participants understand how a verification system qualifies claims, they will attempt to shape observable behavior to achieve favorable outcomes without changing underlying intent. Distribution may be fragmented across more addresses. Liquidity may be staged temporarily. Transactions may be routed to obscure coordination.
Baseline does not attempt to prevent metric gaming outright. Instead, it treats gaming as a signal.
Because verification operates over structure rather than isolated metrics, gaming one property often introduces anomalies elsewhere. Synthetic dispersion introduces funding correlations. Temporary liquidity introduces temporal instability. Obfuscation introduces complexity that persists over time.
Metric gaming therefore becomes part of the evidence graph rather than a blind spot. When adaptation occurs, inference confidence may drop, bounds may widen, or claims may become inconclusive. These are not failures of the system. They are accurate reflections of increased uncertainty.
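As one concrete illustration of gaming-as-signal: liquidity staged around a snapshot can pass a point-in-time depth check, but shows up as temporal instability when depth is sampled over a window. The data and threshold below are hypothetical:

```python
import statistics

# Sketch: a time-aware evaluation flags liquidity whose depth varies
# sharply across samples. The relative-stddev threshold is illustrative.

def liquidity_is_stable(samples: list[float],
                        max_rel_stddev: float = 0.25) -> bool:
    rel = statistics.stdev(samples) / statistics.mean(samples)
    return rel <= max_rel_stddev

organic = [98, 102, 100, 97, 103, 101]   # steady depth over the window
staged = [20, 22, 250, 240, 21, 19]      # spiked around the snapshot
assert liquidity_is_stable(organic)
assert not liquidity_is_stable(staged)   # gaming leaves a temporal trace
```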
Baseline also anticipates that methods will need to evolve. This is why methods are versioned and why old versions remain replayable. Adaptation does not rewrite history. It forces explicit evolution.
Many market tools suppress uncertainty to simplify interfaces and strengthen apparent authority. Baseline does the opposite.
Uncertainty is information about the limits of what can be verified. Suppressing it creates false confidence and perverse incentives: a system expected to produce definitive answers will be pressured to overfit, overclaim, or silently change methods to maintain apparent accuracy.
By making uncertainty explicit, Baseline aligns incentives toward honesty. Preserved uncertainty also enables disagreement without collapse. Two participants can accept the same Verification Object and still disagree about what to do with it, because they see the same bounds and confidence. The system remains coherent even when interpretations diverge.
Baseline does not replace the document verification and digital trust services built on FirmaChain. FirmaSign, FVAB, and Donue, along with their signing records, document verification functions, and audit trail infrastructure, carry forward into the Baseline ecosystem.
These services transition from operating on raw FirmaChain infrastructure to operating as products built on top of Baseline, meaning their outputs gain the reproducibility guarantees, attestation layer, and structured verification properties that Baseline provides.
Existing users and integrations are unaffected in function; the underlying verification standard they rely on is upgraded, not replaced.
Meme coin markets are not an anomaly. They are a compressed expression of broader market dynamics under extreme informational stress.
The same structural properties that dominate meme coins (distribution asymmetry, liquidity fragility, coordination, narrative amplification, and rapid adaptation) also exist in other token markets. They are simply less visible when volatility is lower and time horizons are longer.
Baseline's initial focus on meme coins is therefore not a specialization. It is a stress test. By demonstrating that claims about structure can be evaluated, qualified, and disputed in one of the most adversarial environments, Baseline establishes primitives that generalize.
As Baseline expands from meme coins to broader token markets, the underlying verification engine does not change. What changes is the set of admissible subjects, contexts, and claims. Longer evaluation windows become relevant. Emission schedules, unlock mechanics, governance control, and treasury behavior become first-class evidence. Liquidity evaluation shifts from launch integrity to sustainability. Distribution analysis shifts from initial dispersion to long-term control.
This expansion preserves the same boundary: Baseline continues to qualify structure, not value. Tokens are not ranked by expected return. They are evaluated by how legible their market structure is and how much uncertainty remains.
In this way, Baseline evolves from a meme coin verification tool into a comprehensive market structure oracle: indexing, qualifying, and attesting to the structural properties of any token market on supported chains.
Baseline's long-term position is as the trusted verification oracle for the Korean digital asset ecosystem. Korea's market structure (high retail participation, concentrated exchange infrastructure on Upbit and Bithumb, active prediction market development, and an evolving regulatory framework under the Virtual Asset Users Protection Act) creates acute demand for locally-anchored, verifiable market intelligence that existing global infrastructure does not adequately serve.
Prediction markets are the most immediate expansion beyond token structure verification. These markets exist to aggregate belief into price. Their value depends not only on outcomes, but on the integrity of the aggregation process itself. If liquidity is shallow, participation is coordinated, or incentives distort expression, prices cease to be meaningful signals. Baseline qualifies the structure of these markets (participant diversity, concentration of exposure, timing of capital flows, and susceptibility to manipulation) so that consumers of prediction market data can assess how much weight a signal deserves.
Institutional verification follows naturally. Korean financial institutions, asset managers, and regulated entities require verified market structure data to meet compliance obligations and inform investment decisions. Baseline provides this as an oracle service: qualified, attested, reproducible outputs that institutions can reference with confidence. Claims in this domain extend to governance participation integrity, treasury behavior, and long-term control structure, properties that matter for institutional due diligence but are currently opaque.
The verification primitives are the same across all domains. What changes is the set of admissible subjects, contexts, and claims. The protocol's value increases as it serves more participants in the Korean ecosystem: prediction markets consuming Baseline outputs, institutions referencing Baseline attestations, and regulators using Baseline's qualified structure data as a foundation for market oversight.
The following examples illustrate how Baseline claims are structured in practice. Full JSON schemas and worked examples with evaluation traces are provided in the Technical Specification, Appendices A.1 through A.3.
**Example 1 — Supply concentration (deterministic)**

- Plain language: "The top 10 holders of token XYZ control more than 50% of circulating supply within the first 24 hours of launch."
- Predicate: supply_concentration
- Subject: Token (XYZ contract address)
- Context: Solana, block range covering the first 24 hours post-deployment
- Evaluation class: Deterministic
- Expected qualification: VERIFIED (supported entirely by on-chain balance data)
**Example 2 — Wallet clustering (inferential)**

- Plain language: "Multiple wallets holding token ABC appear to be controlled by a coordinated group based on shared funding sources and synchronized trading behavior."
- Predicate: wallet_clustering
- Subject: Token (ABC contract address)
- Context: Solana, first 7 days, max 3-hop graph expansion
- Evaluation class: Inferential (graph-based community detection)
- Expected qualification: INFERRED (with confidence bounds and alternative hypotheses)
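The structural idea behind a graph-based clustering evaluation can be sketched with union-find over shared funding edges. Real community detection is considerably richer; the data here is invented:

```python
# Sketch: wallets reached through the same funding source (within the
# scoped hop limit) are grouped with union-find. This illustrates the
# structural idea only, not the actual inference method.

def cluster_by_funding(funding_edges: list[tuple[str, str]]) -> dict[str, set[str]]:
    parent: dict[str, str] = {}

    def find(x: str) -> str:
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    def union(a: str, b: str) -> None:
        parent[find(a)] = find(b)

    for funder, wallet in funding_edges:
        union(funder, wallet)

    clusters: dict[str, set[str]] = {}
    for node in parent:
        clusters.setdefault(find(node), set()).add(node)
    return clusters

edges = [("funder_1", "w1"), ("funder_1", "w2"), ("funder_1", "w3"),
         ("funder_2", "w9")]
clusters = cluster_by_funding(edges)
# w1-w3 share a funding source and land in one cluster; w9 stands apart.
groups = [c for c in clusters.values() if {"w1", "w2", "w3"} <= c]
assert len(groups) == 1
```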
**Example 3 — Liquidity depth (deterministic)**

- Plain language: "The primary liquidity pool for token DEF can absorb a $10,000 sell without moving the price by more than 5%."
- Predicate: liquidity_depth
- Subject: Pool (DEF/SOL Raydium pool address)
- Context: Solana, current block height
- Evaluation class: Deterministic (constant product AMM calculation)
- Expected qualification: VERIFIED
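The constant-product calculation behind this claim can be sketched directly. Reserves are hypothetical, and swap fees are omitted for simplicity:

```python
# Sketch of the x * y = k depth check: compute the spot-price impact of
# selling tokens worth a given USD amount, and compare it to the claim's
# 5% threshold. Reserves are invented; fees are ignored.

def price_impact_of_sell(token_reserve: float, usd_reserve: float,
                         sell_usd_value: float) -> float:
    """Fractional spot-price drop after selling tokens worth sell_usd_value."""
    spot = usd_reserve / token_reserve            # USD per token
    tokens_in = sell_usd_value / spot
    k = token_reserve * usd_reserve               # constant-product invariant
    new_token_reserve = token_reserve + tokens_in
    new_usd_reserve = k / new_token_reserve
    new_spot = new_usd_reserve / new_token_reserve
    return 1 - new_spot / spot

impact = price_impact_of_sell(token_reserve=1_000_000, usd_reserve=500_000,
                              sell_usd_value=10_000)
assert impact < 0.05   # deep pool: the depth claim qualifies as VERIFIED

shallow = price_impact_of_sell(token_reserve=100_000, usd_reserve=50_000,
                               sell_usd_value=10_000)
assert shallow > 0.05  # shallow pool: the same claim fails
```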
The Verification Object is the atomic output of the Baseline Verification Engine. Its complete schema, including nested type definitions for all result types, evidence references, qualification states, confidence structures, and metadata, is formally specified in the Technical Specification, Section 6.1.
The authoritative JSON schema is machine-readable, versioned alongside the engine, and published in the Baseline API documentation.
| Term | Definition |
|---|---|
| Claim | A precise, evaluable statement about a subject within a defined context. The atomic input to the Verification Engine. |
| Subject | The entity a claim refers to. Types: Token, Account, Contract, Pool, Transaction, Cohort. |
| Predicate | The property being evaluated. Each predicate has a defined evaluation class (deterministic or inferential). |
| Context | The chain, block range, and protocol scope under which a claim is evaluated. |
| Scope | Constraints on graph expansion: maximum hops, nodes, edges, temporal lookback. |
| Evidence Unit | A single observation with provenance: source, retrieval method, temporal anchor, and content hash. |
| Evidence Graph | A directed, typed, temporally-anchored multigraph constructed from evidence units for a specific claim evaluation. |
| Verification Object | The atomic output of Baseline. A content-addressed, immutable record of a single claim evaluation. |
| Qualification | How a claim relates to evidence: VERIFIED, INFERRED, OBSERVED, INCONCLUSIVE, or UNQUALIFIED. |
| Confidence | A structured measure of certainty for inferred results, including point estimate, bounds, and degradation flags. |
| Attestation | A cryptographic signature by a validator confirming that replay of a Verification Object produced an identical result. |
| Validator | An independent actor that attests to verification execution integrity by replaying evaluations. |
| Engine Version | A versioned release of the Baseline Verification Engine. Follows semantic versioning. |
| Canonicalization | The process of transforming raw evidence into a deterministic, provider-independent representation. |
| BCE | Baseline Canonical Encoding. The deterministic binary encoding specification used for canonicalization. |
| The Line | The boundary between information that can be qualified by verification and information that cannot. |
| Native Token | The native token of the Baseline network (rebranded from FirmaChain FCT). Used for staking, fees, and governance. |
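The canonicalization step defined in the glossary can be illustrated with sorted-key JSON standing in for BCE (which is a binary encoding; this substitution is purely for illustration): two providers returning the same evidence in different key orders produce an identical canonical hash.

```python
import hashlib
import json

# Sketch of canonicalization: a deterministic, provider-independent
# representation hashes identically regardless of source formatting.
# Sorted-key JSON stands in for BCE, which is a binary encoding.

def canonical_hash(evidence: dict) -> str:
    canonical = json.dumps(evidence, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()

provider_a = {"slot": 1234, "balance": "500", "owner": "wallet_x"}
provider_b = {"owner": "wallet_x", "balance": "500", "slot": 1234}
assert canonical_hash(provider_a) == canonical_hash(provider_b)
```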
The Baseline Technical Specification (Yellow Paper) is the engineering companion to this document. It provides the formal definitions, data structures, algorithms, and operational requirements necessary to implement, validate, and build on Baseline.
The Technical Specification covers:

- Claim schemas and the subject type system (Section 2)
- The evidence system, including provenance and canonicalization (Section 3)
- The evidence graph data model and construction algorithm (Section 4)
- The verification engine execution model and evaluation methods (Section 5)
- The Verification Object schema and qualification semantics (Section 6)
- The attestation and validator protocol, including incentive design (Section 7)
- On-chain data anchoring and storage architecture (Section 8)
- Known failure modes and mitigations (Section 9)
- The API specification (Section 10)
- Reference implementation notes (Section 11)
- Formal definitions and glossary (Appendix E)
- SDK type definitions (Appendix F)
It is intended for engineers building on, integrating with, or validating against Baseline.