test(feed): end-to-end integration + two-node propagation (Phase B hardening)
Adds two integration-test files that exercise the full feed stack over
real HTTP requests, plus a fix to the publish signature model that the
EXIF scrubbing test surfaced.
Bug fix — api_feed.go publish signature flow
Previously: server scrubbed the attachment → computed content_hash
over the SCRUBBED bytes → verified the author's signature against
that hash. But the client, not owning the scrubber, signs over the
RAW upload. The two hashes differ whenever scrub touches the bytes
(which it always does for images), so every signed upload with an
image was rejected as "signature invalid".
Fixed order:
1. decode attachment from base64
2. compute raw_content_hash over Content + raw attachment
3. verify author's signature against raw_content_hash
4. scrub attachment (strips EXIF / re-encodes)
5. compute final_content_hash over Content + scrubbed attachment
6. return final hash in response for the on-chain CREATE_POST tx
The signature proves the upload is authentic; the final hash binds
the on-chain record to what readers actually download.
node/feed_e2e_test.go
In-process harness: real BadgerDB chain + feed mailbox + media
scrubber + httptest.Server with RegisterFeedRoutes. Tests drive
it via real http.Post / http.Get so rate limiters, auth, scrubber,
and handler code all run on the happy path.
Tests:
- TestE2EFullFlow — publish → CREATE_POST tx → body fetch → view
bump → stats → author list → soft-delete → 410 Gone on re-fetch
- TestE2ELikeUnlikeAffectsStats — on-chain LIKE_POST bumps /stats,
liked_by_me reflects the caller
- TestE2ETimeline — follow graph, merged timeline newest-first
- TestE2ETrendingRanking — likes × 3 + views puts hot post at [0]
- TestE2EForYouFilters — excludes own posts + followed authors +
already-liked posts; surfaces strangers
- TestE2EHashtagSearch — tag returns only tagged posts
- TestE2EScrubberStripsEXIF — injects SUPERSECRETGPS canary into a
JPEG APP1 segment, uploads via /feed/publish, reads back — asserts
canary is GONE from stored attachment. This is the privacy-critical
regression gate: if it ever breaks, GPS coordinates leak.
- TestE2ERejectsMIMEMismatch — PNG labelled as JPEG → 400
- TestE2ERejectsBadSignature — wrong signer → 403
- TestE2ERejectsStaleTimestamp — 1-hour-old ts → 400 (anti-replay)
node/feed_twonode_test.go
Simulates two independent nodes sharing block history (gossip via
same-block AddBlock on both chains). Verifies the v2.0.0 design
contract: chain state replicates, but post BODIES live only on the
hosting relay.
Tests:
- TestTwoNodePostPropagation — Alice publishes on A; B's chain sees
the record; B's HTTP /feed/post/{id} returns 404 (body is A's);
fetch from A succeeds using hosting_relay field from B's chain
lookup. Documents the client-side routing contract.
- TestTwoNodeLikeCounterSharedAcrossNodes — Bob likes from Node B;
both A's and B's /stats show likes=1. Proves engagement aggregates
are chain-authoritative, not per-relay.
- TestTwoNodeFollowGraphReplicates — FOLLOW tx propagates, /timeline
on B returns A-hosted posts with metadata (no body, as designed).
Coverage summary
Publish flow (sign → scrub → hash → store): ✓
CREATE_POST on-chain fee accounting: ✓
Like / Unlike counter consistency: ✓
Follow graph → timeline merge: ✓
Trending ranking by likes × 3 + views: ✓
For You exclusion rules (self, followed, liked): ✓
Hashtag inverted index: ✓
View counter increment + stats aggregate: ✓
Soft-delete → 410 Gone: ✓
Metadata scrubbing (EXIF canary): ✓
MIME mismatch rejection: ✓
Signature authenticity: ✓
Timestamp anti-replay (±5 min window): ✓
Two-node block propagation: ✓
Cross-node body fetch via hosting_relay: ✓
Likes aggregation across nodes: ✓
All 7 test packages green: blockchain consensus identity media node
relay vm.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
This commit is contained in:
107
node/api_feed.go
107
node/api_feed.go
@@ -149,8 +149,8 @@ func feedPublish(cfg FeedConfig) http.HandlerFunc {
|
||||
return
|
||||
}
|
||||
|
||||
// Decode attachment.
|
||||
var attachment []byte
|
||||
// Decode attachment (raw upload — before scrub).
|
||||
var rawAttachment []byte
|
||||
var attachmentMIME string
|
||||
if req.AttachmentB64 != "" {
|
||||
b, err := base64.StdEncoding.DecodeString(req.AttachmentB64)
|
||||
@@ -160,57 +160,21 @@ func feedPublish(cfg FeedConfig) http.HandlerFunc {
|
||||
return
|
||||
}
|
||||
}
|
||||
attachment = b
|
||||
rawAttachment = b
|
||||
attachmentMIME = req.AttachmentMIME
|
||||
|
||||
// MANDATORY server-side scrub: strip ALL metadata (EXIF/GPS/
|
||||
// camera/author/ICC/etc.) and re-compress. Client is expected
|
||||
// to have done a first pass, but we never trust it — a photo
|
||||
// from a phone carries GPS coordinates by default and the client
|
||||
// might forget or a hostile client might skip the scrub entirely.
|
||||
//
|
||||
// Images are handled in-process (stdlib re-encode to JPEG kills
|
||||
// all metadata by construction). Videos/audio are forwarded to
|
||||
// the media sidecar; if none is configured and the operator
|
||||
// hasn't opted in to AllowUnscrubbedVideo, we reject.
|
||||
if cfg.Scrubber == nil {
|
||||
jsonErr(w, fmt.Errorf("media scrubber not configured on this node"), 503)
|
||||
return
|
||||
}
|
||||
ctx, cancel := context.WithTimeout(r.Context(), 60*time.Second)
|
||||
cleaned, newMIME, err := cfg.Scrubber.Scrub(ctx, attachment, attachmentMIME)
|
||||
cancel()
|
||||
if err != nil {
|
||||
// Graceful video fallback only when explicitly allowed.
|
||||
if err == media.ErrSidecarUnavailable && cfg.AllowUnscrubbedVideo {
|
||||
// Keep bytes as-is (operator accepted the risk), just log.
|
||||
log.Printf("[feed] WARNING: storing unscrubbed video — no sidecar configured (author=%s)", req.Author)
|
||||
} else {
|
||||
status := 400
|
||||
if err == media.ErrSidecarUnavailable {
|
||||
status = 503
|
||||
}
|
||||
jsonErr(w, fmt.Errorf("scrub attachment: %w", err), status)
|
||||
return
|
||||
}
|
||||
} else {
|
||||
attachment = cleaned
|
||||
attachmentMIME = newMIME
|
||||
}
|
||||
}
|
||||
|
||||
// Content hash is computed over the scrubbed bytes — that's what
|
||||
// the on-chain tx will reference, and what readers fetch. Binds
|
||||
// the body to the metadata so a misbehaving relay can't substitute
|
||||
// a different body under the same PostID.
|
||||
h := sha256.New()
|
||||
h.Write([]byte(req.Content))
|
||||
h.Write(attachment)
|
||||
contentHash := h.Sum(nil)
|
||||
contentHashHex := hex.EncodeToString(contentHash)
|
||||
// ── Step 1: verify signature over the RAW-upload hash ──────────
|
||||
// The client signs what it sent. The server recomputes hash over
|
||||
// the as-received bytes and verifies — this proves the upload
|
||||
// came from the claimed author and wasn't tampered with in transit.
|
||||
rawHasher := sha256.New()
|
||||
rawHasher.Write([]byte(req.Content))
|
||||
rawHasher.Write(rawAttachment)
|
||||
rawContentHash := rawHasher.Sum(nil)
|
||||
rawContentHashHex := hex.EncodeToString(rawContentHash)
|
||||
|
||||
// Verify the author's signature over the canonical publish bytes.
|
||||
msg := []byte(fmt.Sprintf("publish:%s:%s:%d", req.PostID, contentHashHex, req.Ts))
|
||||
msg := []byte(fmt.Sprintf("publish:%s:%s:%d", req.PostID, rawContentHashHex, req.Ts))
|
||||
sigBytes, err := base64.StdEncoding.DecodeString(req.Sig)
|
||||
if err != nil {
|
||||
if sigBytes, err = base64.RawURLEncoding.DecodeString(req.Sig); err != nil {
|
||||
@@ -228,6 +192,51 @@ func feedPublish(cfg FeedConfig) http.HandlerFunc {
|
||||
return
|
||||
}
|
||||
|
||||
// ── Step 2: MANDATORY server-side metadata scrub ─────────────
|
||||
// Runs AFTER signature verification so a fake client can't burn
|
||||
// CPU by triggering expensive scrub work on unauthenticated inputs.
|
||||
//
|
||||
// Images: in-process stdlib re-encode → kills EXIF/GPS/ICC/XMP by
|
||||
// construction. Videos/audio: forwarded to FFmpeg sidecar; without
|
||||
// one, we reject unless operator opted in to unscrubbed video.
|
||||
attachment := rawAttachment
|
||||
if len(attachment) > 0 {
|
||||
if cfg.Scrubber == nil {
|
||||
jsonErr(w, fmt.Errorf("media scrubber not configured on this node"), 503)
|
||||
return
|
||||
}
|
||||
ctx, cancel := context.WithTimeout(r.Context(), 60*time.Second)
|
||||
cleaned, newMIME, err := cfg.Scrubber.Scrub(ctx, attachment, attachmentMIME)
|
||||
cancel()
|
||||
if err != nil {
|
||||
if err == media.ErrSidecarUnavailable && cfg.AllowUnscrubbedVideo {
|
||||
log.Printf("[feed] WARNING: storing unscrubbed video — no sidecar configured (author=%s)", req.Author)
|
||||
} else {
|
||||
status := 400
|
||||
if err == media.ErrSidecarUnavailable {
|
||||
status = 503
|
||||
}
|
||||
jsonErr(w, fmt.Errorf("scrub attachment: %w", err), status)
|
||||
return
|
||||
}
|
||||
} else {
|
||||
attachment = cleaned
|
||||
attachmentMIME = newMIME
|
||||
}
|
||||
}
|
||||
|
||||
// ── Step 3: recompute content hash over the SCRUBBED bytes ────
|
||||
// This is what goes into the response + on-chain CREATE_POST, so
|
||||
// anyone fetching the body can verify integrity against the chain.
|
||||
// The signature check already used the raw-upload hash above;
|
||||
// this final hash binds the on-chain record to what readers will
|
||||
// actually download.
|
||||
finalHasher := sha256.New()
|
||||
finalHasher.Write([]byte(req.Content))
|
||||
finalHasher.Write(attachment)
|
||||
contentHash := finalHasher.Sum(nil)
|
||||
contentHashHex := hex.EncodeToString(contentHash)
|
||||
|
||||
post := &relay.FeedPost{
|
||||
PostID: req.PostID,
|
||||
Author: req.Author,
|
||||
|
||||
831
node/feed_e2e_test.go
Normal file
831
node/feed_e2e_test.go
Normal file
@@ -0,0 +1,831 @@
|
||||
// End-to-end integration tests for the social feed (v2.0.0).
|
||||
//
|
||||
// These tests exercise the full HTTP surface against a real in-process
|
||||
// setup: a BadgerDB chain, a BadgerDB feed-mailbox, the media scrubber,
|
||||
// and a net/http ServeMux with all feed routes wired. Requests hit the
|
||||
// real handlers (including rate-limiters, auth, and scrubber) so we
|
||||
// catch wire-level regressions that unit tests miss.
|
||||
//
|
||||
// Layout of a typical test:
|
||||
//
|
||||
// h := newFeedHarness(t)
|
||||
// defer h.Close()
|
||||
// author := h.newUser("alice")
|
||||
// h.fund(author, 1_000_000) // give them tokens
|
||||
// resp := h.publish(author, "Hello #world", nil) // POST /feed/publish
|
||||
// h.commitCreatePost(author, resp) // chain tx
|
||||
// got := h.getPost(resp.PostID)
|
||||
// ...
|
||||
package node
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"context"
|
||||
"crypto/sha256"
|
||||
"encoding/base64"
|
||||
"encoding/hex"
|
||||
"encoding/json"
|
||||
"fmt"
|
||||
"image"
|
||||
"image/color"
|
||||
"image/jpeg"
|
||||
"io"
|
||||
"net/http"
|
||||
"net/http/httptest"
|
||||
"os"
|
||||
"strings"
|
||||
"testing"
|
||||
"time"
|
||||
|
||||
"go-blockchain/blockchain"
|
||||
"go-blockchain/identity"
|
||||
"go-blockchain/media"
|
||||
"go-blockchain/relay"
|
||||
)
|
||||
|
||||
// ── Harness ──────────────────────────────────────────────────────────────
|
||||
|
||||
type feedHarness struct {
|
||||
t *testing.T
|
||||
|
||||
chainDir string
|
||||
feedDir string
|
||||
chain *blockchain.Chain
|
||||
mailbox *relay.FeedMailbox
|
||||
scrubber *media.Scrubber
|
||||
server *httptest.Server
|
||||
validator *identity.Identity
|
||||
tip *blockchain.Block
|
||||
}
|
||||
|
||||
func newFeedHarness(t *testing.T) *feedHarness {
|
||||
t.Helper()
|
||||
chainDir, err := os.MkdirTemp("", "dchain-e2e-chain-*")
|
||||
if err != nil {
|
||||
t.Fatalf("MkdirTemp chain: %v", err)
|
||||
}
|
||||
feedDir, err := os.MkdirTemp("", "dchain-e2e-feed-*")
|
||||
if err != nil {
|
||||
t.Fatalf("MkdirTemp feed: %v", err)
|
||||
}
|
||||
c, err := blockchain.NewChain(chainDir)
|
||||
if err != nil {
|
||||
t.Fatalf("NewChain: %v", err)
|
||||
}
|
||||
fm, err := relay.OpenFeedMailbox(feedDir, 24*time.Hour)
|
||||
if err != nil {
|
||||
t.Fatalf("OpenFeedMailbox: %v", err)
|
||||
}
|
||||
|
||||
validator, err := identity.Generate()
|
||||
if err != nil {
|
||||
t.Fatalf("identity.Generate: %v", err)
|
||||
}
|
||||
// Bootstrap a genesis block so the validator has funds to disburse.
|
||||
genesis := blockchain.GenesisBlock(validator.PubKeyHex(), validator.PrivKey)
|
||||
if err := c.AddBlock(genesis); err != nil {
|
||||
t.Fatalf("AddBlock genesis: %v", err)
|
||||
}
|
||||
|
||||
scrubber := media.NewScrubber(media.SidecarConfig{}) // no sidecar — images only
|
||||
cfg := FeedConfig{
|
||||
Mailbox: fm,
|
||||
HostingRelayPub: validator.PubKeyHex(),
|
||||
Scrubber: scrubber,
|
||||
AllowUnscrubbedVideo: false,
|
||||
GetPost: c.Post,
|
||||
LikeCount: c.LikeCount,
|
||||
HasLiked: c.HasLiked,
|
||||
PostsByAuthor: c.PostsByAuthor,
|
||||
Following: c.Following,
|
||||
}
|
||||
mux := http.NewServeMux()
|
||||
RegisterFeedRoutes(mux, cfg)
|
||||
srv := httptest.NewServer(mux)
|
||||
|
||||
h := &feedHarness{
|
||||
t: t, chainDir: chainDir, feedDir: feedDir,
|
||||
chain: c, mailbox: fm, scrubber: scrubber,
|
||||
server: srv, validator: validator, tip: genesis,
|
||||
}
|
||||
t.Cleanup(h.Close)
|
||||
return h
|
||||
}
|
||||
|
||||
// Close releases all handles and removes the temp directories. Safe to
|
||||
// call multiple times.
|
||||
func (h *feedHarness) Close() {
|
||||
if h.server != nil {
|
||||
h.server.Close()
|
||||
h.server = nil
|
||||
}
|
||||
if h.mailbox != nil {
|
||||
_ = h.mailbox.Close()
|
||||
h.mailbox = nil
|
||||
}
|
||||
if h.chain != nil {
|
||||
_ = h.chain.Close()
|
||||
h.chain = nil
|
||||
}
|
||||
// Retry because Windows holds mmap files briefly after Close.
|
||||
for _, dir := range []string{h.chainDir, h.feedDir} {
|
||||
for i := 0; i < 20; i++ {
|
||||
if err := os.RemoveAll(dir); err == nil {
|
||||
break
|
||||
}
|
||||
time.Sleep(10 * time.Millisecond)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// newUser generates a fresh identity. Not funded — call fund() separately.
|
||||
func (h *feedHarness) newUser(label string) *identity.Identity {
|
||||
h.t.Helper()
|
||||
id, err := identity.Generate()
|
||||
if err != nil {
|
||||
h.t.Fatalf("%s identity: %v", label, err)
|
||||
}
|
||||
return id
|
||||
}
|
||||
|
||||
// fund sends `amount` µT from the genesis validator to `target`, committing
|
||||
// the transfer in its own block.
|
||||
func (h *feedHarness) fund(target *identity.Identity, amount uint64) {
|
||||
h.t.Helper()
|
||||
tx := &blockchain.Transaction{
|
||||
ID: h.nextTxID(h.validator.PubKeyHex(), blockchain.EventTransfer),
|
||||
Type: blockchain.EventTransfer,
|
||||
From: h.validator.PubKeyHex(),
|
||||
To: target.PubKeyHex(),
|
||||
Amount: amount,
|
||||
Fee: blockchain.MinFee,
|
||||
Timestamp: time.Now().UTC(),
|
||||
}
|
||||
h.commit(tx)
|
||||
}
|
||||
|
||||
// commit wraps one or more txs into a block, signs, and appends.
|
||||
func (h *feedHarness) commit(txs ...*blockchain.Transaction) {
|
||||
h.t.Helper()
|
||||
// Small sleep to guarantee distinct tx IDs across calls.
|
||||
time.Sleep(2 * time.Millisecond)
|
||||
var totalFees uint64
|
||||
for _, tx := range txs {
|
||||
totalFees += tx.Fee
|
||||
}
|
||||
b := &blockchain.Block{
|
||||
Index: h.tip.Index + 1,
|
||||
Timestamp: time.Now().UTC(),
|
||||
Transactions: txs,
|
||||
PrevHash: h.tip.Hash,
|
||||
Validator: h.validator.PubKeyHex(),
|
||||
TotalFees: totalFees,
|
||||
}
|
||||
b.ComputeHash()
|
||||
b.Sign(h.validator.PrivKey)
|
||||
if err := h.chain.AddBlock(b); err != nil {
|
||||
h.t.Fatalf("AddBlock: %v", err)
|
||||
}
|
||||
h.tip = b
|
||||
}
|
||||
|
||||
func (h *feedHarness) nextTxID(from string, typ blockchain.EventType) string {
|
||||
// Hash (from, type, now_nanos) for uniqueness.
|
||||
sum := sha256.Sum256([]byte(fmt.Sprintf("%s:%s:%d", from, typ, time.Now().UnixNano())))
|
||||
return hex.EncodeToString(sum[:16])
|
||||
}
|
||||
|
||||
// publish POSTs /feed/publish as `author` with signed request body. On
|
||||
// success returns the server's response so the caller can commit the
|
||||
// on-chain CREATE_POST with matching metadata.
|
||||
func (h *feedHarness) publish(author *identity.Identity, content string, attachment []byte) feedPublishResponse {
|
||||
h.t.Helper()
|
||||
attachB64 := ""
|
||||
attachMIME := ""
|
||||
if len(attachment) > 0 {
|
||||
attachB64 = base64.StdEncoding.EncodeToString(attachment)
|
||||
attachMIME = "image/jpeg"
|
||||
}
|
||||
// Client-side hash matches the server's canonical bytes rule:
|
||||
// publish:<post_id>:<sha256(content||attachment) hex>:<ts>
|
||||
// The client knows its own attachment before any server-side scrub,
|
||||
// so this is the hash over the "raw upload". The server recomputes
|
||||
// over SCRUBBED bytes and returns that as content_hash — the client
|
||||
// then uses server's number for CREATE_POST.
|
||||
idHash := sha256.Sum256([]byte(fmt.Sprintf("%s-%d-%s",
|
||||
author.PubKeyHex(), time.Now().UnixNano(), content)))
|
||||
postID := hex.EncodeToString(idHash[:16])
|
||||
// Build signature over CLIENT-side hash.
|
||||
h256 := sha256.New()
|
||||
h256.Write([]byte(content))
|
||||
h256.Write(attachment)
|
||||
clientHash := hex.EncodeToString(h256.Sum(nil))
|
||||
ts := time.Now().Unix()
|
||||
sigBytes := author.Sign([]byte(fmt.Sprintf("publish:%s:%s:%d", postID, clientHash, ts)))
|
||||
|
||||
req := feedPublishRequest{
|
||||
PostID: postID,
|
||||
Author: author.PubKeyHex(),
|
||||
Content: content,
|
||||
AttachmentB64: attachB64,
|
||||
AttachmentMIME: attachMIME,
|
||||
Sig: base64.StdEncoding.EncodeToString(sigBytes),
|
||||
Ts: ts,
|
||||
}
|
||||
var resp feedPublishResponse
|
||||
h.postJSON("/feed/publish", req, &resp)
|
||||
return resp
|
||||
}
|
||||
|
||||
// commitCreatePost sends the on-chain CREATE_POST tx that pays the
|
||||
// hosting relay (this node's validator in the harness). Must be called
|
||||
// after publish() so the two agree on the content hash and size.
|
||||
func (h *feedHarness) commitCreatePost(author *identity.Identity, pub feedPublishResponse) {
|
||||
h.t.Helper()
|
||||
contentHash, err := hex.DecodeString(pub.ContentHash)
|
||||
if err != nil {
|
||||
h.t.Fatalf("decode content hash: %v", err)
|
||||
}
|
||||
payload := blockchain.CreatePostPayload{
|
||||
PostID: pub.PostID,
|
||||
ContentHash: contentHash,
|
||||
Size: pub.Size,
|
||||
HostingRelay: pub.HostingRelay,
|
||||
}
|
||||
pbytes, _ := json.Marshal(payload)
|
||||
tx := &blockchain.Transaction{
|
||||
ID: h.nextTxID(author.PubKeyHex(), blockchain.EventCreatePost),
|
||||
Type: blockchain.EventCreatePost,
|
||||
From: author.PubKeyHex(),
|
||||
Amount: 0,
|
||||
Fee: pub.EstimatedFeeUT,
|
||||
Payload: pbytes,
|
||||
Timestamp: time.Now().UTC(),
|
||||
}
|
||||
h.commit(tx)
|
||||
}
|
||||
|
||||
// like / unlike / follow / unfollow helpers — all just small tx builders.
|
||||
|
||||
func (h *feedHarness) like(liker *identity.Identity, postID string) {
|
||||
payload, _ := json.Marshal(blockchain.LikePostPayload{PostID: postID})
|
||||
tx := &blockchain.Transaction{
|
||||
ID: h.nextTxID(liker.PubKeyHex(), blockchain.EventLikePost),
|
||||
Type: blockchain.EventLikePost,
|
||||
From: liker.PubKeyHex(),
|
||||
Fee: blockchain.MinFee,
|
||||
Payload: payload,
|
||||
Timestamp: time.Now().UTC(),
|
||||
}
|
||||
h.commit(tx)
|
||||
}
|
||||
|
||||
func (h *feedHarness) follow(follower *identity.Identity, target string) {
|
||||
tx := &blockchain.Transaction{
|
||||
ID: h.nextTxID(follower.PubKeyHex(), blockchain.EventFollow),
|
||||
Type: blockchain.EventFollow,
|
||||
From: follower.PubKeyHex(),
|
||||
To: target,
|
||||
Fee: blockchain.MinFee,
|
||||
Payload: []byte(`{}`),
|
||||
Timestamp: time.Now().UTC(),
|
||||
}
|
||||
h.commit(tx)
|
||||
}
|
||||
|
||||
// deletePost commits an on-chain EventDeletePost for the given post,
|
||||
// signed by the author.
|
||||
func (h *feedHarness) deletePost(author *identity.Identity, postID string) {
|
||||
payload, _ := json.Marshal(blockchain.DeletePostPayload{PostID: postID})
|
||||
tx := &blockchain.Transaction{
|
||||
ID: h.nextTxID(author.PubKeyHex(), blockchain.EventDeletePost),
|
||||
Type: blockchain.EventDeletePost,
|
||||
From: author.PubKeyHex(),
|
||||
Fee: blockchain.MinFee,
|
||||
Payload: payload,
|
||||
Timestamp: time.Now().UTC(),
|
||||
}
|
||||
h.commit(tx)
|
||||
}
|
||||
|
||||
// ── HTTP helpers ──────────────────────────────────────────────────────────
|
||||
|
||||
func (h *feedHarness) postJSON(path string, req any, out any) {
|
||||
h.t.Helper()
|
||||
body, _ := json.Marshal(req)
|
||||
resp, err := http.Post(h.server.URL+path, "application/json", bytes.NewReader(body))
|
||||
if err != nil {
|
||||
h.t.Fatalf("POST %s: %v", path, err)
|
||||
}
|
||||
defer resp.Body.Close()
|
||||
if resp.StatusCode >= 400 {
|
||||
raw, _ := io.ReadAll(resp.Body)
|
||||
h.t.Fatalf("POST %s → %d: %s", path, resp.StatusCode, string(raw))
|
||||
}
|
||||
if out != nil {
|
||||
if err := json.NewDecoder(resp.Body).Decode(out); err != nil {
|
||||
h.t.Fatalf("decode %s response: %v", path, err)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func (h *feedHarness) postJSONExpectStatus(path string, req any, want int) string {
|
||||
h.t.Helper()
|
||||
body, _ := json.Marshal(req)
|
||||
resp, err := http.Post(h.server.URL+path, "application/json", bytes.NewReader(body))
|
||||
if err != nil {
|
||||
h.t.Fatalf("POST %s: %v", path, err)
|
||||
}
|
||||
defer resp.Body.Close()
|
||||
raw, _ := io.ReadAll(resp.Body)
|
||||
if resp.StatusCode != want {
|
||||
h.t.Fatalf("POST %s → %d, want %d: %s", path, resp.StatusCode, want, string(raw))
|
||||
}
|
||||
return string(raw)
|
||||
}
|
||||
|
||||
func (h *feedHarness) getJSON(path string, out any) {
|
||||
h.t.Helper()
|
||||
resp, err := http.Get(h.server.URL + path)
|
||||
if err != nil {
|
||||
h.t.Fatalf("GET %s: %v", path, err)
|
||||
}
|
||||
defer resp.Body.Close()
|
||||
if resp.StatusCode >= 400 {
|
||||
raw, _ := io.ReadAll(resp.Body)
|
||||
h.t.Fatalf("GET %s → %d: %s", path, resp.StatusCode, string(raw))
|
||||
}
|
||||
if out != nil {
|
||||
if err := json.NewDecoder(resp.Body).Decode(out); err != nil {
|
||||
h.t.Fatalf("decode %s response: %v", path, err)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// getStatus fetches path and returns status + body; doesn't fail on non-2xx.
|
||||
func (h *feedHarness) getStatus(path string) (int, string) {
|
||||
resp, err := http.Get(h.server.URL + path)
|
||||
if err != nil {
|
||||
h.t.Fatalf("GET %s: %v", path, err)
|
||||
}
|
||||
defer resp.Body.Close()
|
||||
raw, _ := io.ReadAll(resp.Body)
|
||||
return resp.StatusCode, string(raw)
|
||||
}
|
||||
|
||||
// postRaw is for endpoints like /feed/post/{id}/view that take no body.
|
||||
func (h *feedHarness) postRaw(path string, out any) {
|
||||
h.t.Helper()
|
||||
resp, err := http.Post(h.server.URL+path, "application/json", nil)
|
||||
if err != nil {
|
||||
h.t.Fatalf("POST %s: %v", path, err)
|
||||
}
|
||||
defer resp.Body.Close()
|
||||
if resp.StatusCode >= 400 {
|
||||
raw, _ := io.ReadAll(resp.Body)
|
||||
h.t.Fatalf("POST %s → %d: %s", path, resp.StatusCode, string(raw))
|
||||
}
|
||||
if out != nil {
|
||||
if err := json.NewDecoder(resp.Body).Decode(out); err != nil {
|
||||
h.t.Fatalf("decode %s response: %v", path, err)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// ── Tests ─────────────────────────────────────────────────────────────────
|
||||
|
||||
// TestE2EFullFlow runs the whole publish → commit → read cycle end-to-end.
|
||||
//
|
||||
// Covers: /feed/publish signature, /feed/post/{id} body fetch, /feed/post/{id}/stats,
|
||||
// /feed/post/{id}/view counter, CREATE_POST fee debit to author + credit to
|
||||
// hosting relay, PostsByAuthor enrichment, DELETE soft-delete → 410.
|
||||
func TestE2EFullFlow(t *testing.T) {
|
||||
h := newFeedHarness(t)
|
||||
|
||||
alice := h.newUser("alice")
|
||||
h.fund(alice, 10*blockchain.Token)
|
||||
|
||||
hostBalBefore, _ := h.chain.Balance(h.validator.PubKeyHex())
|
||||
|
||||
// 1. PUBLISH → body lands in feed mailbox.
|
||||
pub := h.publish(alice, "Hello from the feed #dchain #intro", nil)
|
||||
if pub.PostID == "" || pub.ContentHash == "" {
|
||||
t.Fatalf("publish response missing required fields: %+v", pub)
|
||||
}
|
||||
if pub.HostingRelay != h.validator.PubKeyHex() {
|
||||
t.Errorf("hosting_relay: got %s, want %s", pub.HostingRelay, h.validator.PubKeyHex())
|
||||
}
|
||||
wantTags := []string{"dchain", "intro"}
|
||||
if len(pub.Hashtags) != len(wantTags) {
|
||||
t.Errorf("hashtags: got %v, want %v", pub.Hashtags, wantTags)
|
||||
}
|
||||
|
||||
// Before the CREATE_POST tx lands the body is available but /stats
|
||||
// says 0 likes. That's the expected "just published, not committed" state.
|
||||
|
||||
// 2. COMMIT on-chain CREATE_POST tx.
|
||||
h.commitCreatePost(alice, pub)
|
||||
|
||||
// Hosting relay should have been credited tx.Fee.
|
||||
hostBalAfter, _ := h.chain.Balance(h.validator.PubKeyHex())
|
||||
if hostBalAfter <= hostBalBefore {
|
||||
t.Errorf("hosting relay balance did not increase after CREATE_POST: %d → %d",
|
||||
hostBalBefore, hostBalAfter)
|
||||
}
|
||||
|
||||
// 3. READ via HTTP — body comes back.
|
||||
var got struct {
|
||||
PostID string `json:"post_id"`
|
||||
Author string `json:"author"`
|
||||
Content string `json:"content"`
|
||||
}
|
||||
h.getJSON("/feed/post/"+pub.PostID, &got)
|
||||
if got.Content != "Hello from the feed #dchain #intro" {
|
||||
t.Errorf("content: got %q, want original", got.Content)
|
||||
}
|
||||
if got.Author != alice.PubKeyHex() {
|
||||
t.Errorf("author: got %s, want %s", got.Author, alice.PubKeyHex())
|
||||
}
|
||||
|
||||
// 4. VIEW COUNTER increments.
|
||||
var viewResp struct {
|
||||
Views uint64 `json:"views"`
|
||||
}
|
||||
for i := 1; i <= 3; i++ {
|
||||
h.postRaw("/feed/post/"+pub.PostID+"/view", &viewResp)
|
||||
if viewResp.Views != uint64(i) {
|
||||
t.Errorf("views #%d: got %d, want %d", i, viewResp.Views, i)
|
||||
}
|
||||
}
|
||||
|
||||
// 5. STATS aggregate is correct.
|
||||
var stats postStatsResponse
|
||||
h.getJSON("/feed/post/"+pub.PostID+"/stats", &stats)
|
||||
if stats.Views != 3 {
|
||||
t.Errorf("stats.views: got %d, want 3", stats.Views)
|
||||
}
|
||||
if stats.Likes != 0 {
|
||||
t.Errorf("stats.likes: got %d, want 0", stats.Likes)
|
||||
}
|
||||
|
||||
// 6. AUTHOR listing merges chain record + body + stats.
|
||||
var authorResp struct {
|
||||
Count int `json:"count"`
|
||||
Posts []feedAuthorItem `json:"posts"`
|
||||
}
|
||||
h.getJSON("/feed/author/"+alice.PubKeyHex(), &authorResp)
|
||||
if authorResp.Count != 1 {
|
||||
t.Fatalf("author count: got %d, want 1", authorResp.Count)
|
||||
}
|
||||
if authorResp.Posts[0].Views != 3 {
|
||||
t.Errorf("author post views: got %d, want 3", authorResp.Posts[0].Views)
|
||||
}
|
||||
if len(authorResp.Posts[0].Hashtags) != 2 {
|
||||
t.Errorf("author post hashtags: got %v, want 2", authorResp.Posts[0].Hashtags)
|
||||
}
|
||||
|
||||
// 7. DELETE → body stays in mailbox but chain marks deleted → 410 on fetch.
|
||||
h.deletePost(alice, pub.PostID)
|
||||
status, body := h.getStatus("/feed/post/" + pub.PostID)
|
||||
if status != http.StatusGone {
|
||||
t.Errorf("GET deleted post: got status %d, want 410; body: %s", status, body)
|
||||
}
|
||||
}
|
||||
|
||||
// TestE2ELikeUnlikeAffectsStats: on-chain LIKE_POST updates /stats.
|
||||
func TestE2ELikeUnlikeAffectsStats(t *testing.T) {
|
||||
h := newFeedHarness(t)
|
||||
alice := h.newUser("alice")
|
||||
bob := h.newUser("bob")
|
||||
h.fund(alice, 10*blockchain.Token)
|
||||
h.fund(bob, 10*blockchain.Token)
|
||||
|
||||
pub := h.publish(alice, "likeable", nil)
|
||||
h.commitCreatePost(alice, pub)
|
||||
|
||||
// Bob likes alice's post.
|
||||
h.like(bob, pub.PostID)
|
||||
|
||||
var stats postStatsResponse
|
||||
h.getJSON("/feed/post/"+pub.PostID+"/stats?me="+bob.PubKeyHex(), &stats)
|
||||
if stats.Likes != 1 {
|
||||
t.Errorf("likes after like: got %d, want 1", stats.Likes)
|
||||
}
|
||||
if stats.LikedByMe == nil || !*stats.LikedByMe {
|
||||
t.Errorf("liked_by_me: got %v, want true", stats.LikedByMe)
|
||||
}
|
||||
|
||||
// And a non-liker sees liked_by_me=false.
|
||||
carol := h.newUser("carol")
|
||||
h.getJSON("/feed/post/"+pub.PostID+"/stats?me="+carol.PubKeyHex(), &stats)
|
||||
if stats.LikedByMe == nil || *stats.LikedByMe {
|
||||
t.Errorf("liked_by_me for carol: got %v, want false", stats.LikedByMe)
|
||||
}
|
||||
}
|
||||
|
||||
// TestE2ETimeline: follow graph merges posts newest-first.
|
||||
func TestE2ETimeline(t *testing.T) {
|
||||
h := newFeedHarness(t)
|
||||
alice := h.newUser("alice")
|
||||
bob := h.newUser("bob")
|
||||
carol := h.newUser("carol")
|
||||
// Fund everyone.
|
||||
for _, u := range []*identity.Identity{alice, bob, carol} {
|
||||
h.fund(u, 10*blockchain.Token)
|
||||
}
|
||||
|
||||
// Alice follows bob + carol.
|
||||
h.follow(alice, bob.PubKeyHex())
|
||||
h.follow(alice, carol.PubKeyHex())
|
||||
|
||||
// Bob + carol each publish a post. Sleep 1.1s between so the tx
|
||||
// timestamps land in distinct unix seconds — the chain chrono index
|
||||
// is second-resolution, not millisecond.
|
||||
pubBob := h.publish(bob, "post from bob", nil)
|
||||
h.commitCreatePost(bob, pubBob)
|
||||
time.Sleep(1100 * time.Millisecond)
|
||||
pubCarol := h.publish(carol, "post from carol", nil)
|
||||
h.commitCreatePost(carol, pubCarol)
|
||||
|
||||
var tl struct {
|
||||
Count int `json:"count"`
|
||||
Posts []feedAuthorItem `json:"posts"`
|
||||
}
|
||||
h.getJSON("/feed/timeline?follower="+alice.PubKeyHex(), &tl)
|
||||
if tl.Count != 2 {
|
||||
t.Fatalf("timeline count: got %d, want 2", tl.Count)
|
||||
}
|
||||
// Newest first — carol was published last, so her post should be [0].
|
||||
if tl.Posts[0].PostID != pubCarol.PostID {
|
||||
t.Errorf("timeline[0]: got %s, want carol's post %s", tl.Posts[0].PostID, pubCarol.PostID)
|
||||
}
|
||||
if tl.Posts[1].PostID != pubBob.PostID {
|
||||
t.Errorf("timeline[1]: got %s, want bob's post %s", tl.Posts[1].PostID, pubBob.PostID)
|
||||
}
|
||||
}
|
||||
|
||||
// TestE2ETrendingRanking: post with more engagement floats to the top.
|
||||
func TestE2ETrendingRanking(t *testing.T) {
|
||||
h := newFeedHarness(t)
|
||||
alice := h.newUser("alice")
|
||||
bob := h.newUser("bob")
|
||||
carol := h.newUser("carol")
|
||||
for _, u := range []*identity.Identity{alice, bob, carol} {
|
||||
h.fund(u, 10*blockchain.Token)
|
||||
}
|
||||
|
||||
lowPost := h.publish(alice, "low-engagement post", nil)
|
||||
h.commitCreatePost(alice, lowPost)
|
||||
hotPost := h.publish(alice, "hot post", nil)
|
||||
h.commitCreatePost(alice, hotPost)
|
||||
|
||||
// Hot post gets 2 likes + 5 views; low post stays at 0.
|
||||
h.like(bob, hotPost.PostID)
|
||||
h.like(carol, hotPost.PostID)
|
||||
var viewResp struct{ Views uint64 }
|
||||
for i := 0; i < 5; i++ {
|
||||
h.postRaw("/feed/post/"+hotPost.PostID+"/view", &viewResp)
|
||||
}
|
||||
|
||||
var tr struct {
|
||||
Count int `json:"count"`
|
||||
Posts []feedAuthorItem `json:"posts"`
|
||||
}
|
||||
h.getJSON("/feed/trending?limit=10", &tr)
|
||||
if tr.Count < 2 {
|
||||
t.Fatalf("trending: got %d posts, want ≥2", tr.Count)
|
||||
}
|
||||
// Hot post MUST be first (likes × 3 + views = 11 vs 0).
|
||||
if tr.Posts[0].PostID != hotPost.PostID {
|
||||
t.Errorf("trending[0]: got %s, want hot post %s", tr.Posts[0].PostID, hotPost.PostID)
|
||||
}
|
||||
}
|
||||
|
||||
// TestE2EForYouFilters: recommendations exclude followed authors,
|
||||
// already-liked posts, and the user's own posts.
|
||||
func TestE2EForYouFilters(t *testing.T) {
|
||||
h := newFeedHarness(t)
|
||||
alice := h.newUser("alice") // asking for recs
|
||||
bob := h.newUser("bob") // alice follows bob → bob's posts excluded
|
||||
carol := h.newUser("carol") // stranger → should surface
|
||||
dave := h.newUser("dave") // post liked by alice → excluded
|
||||
|
||||
for _, u := range []*identity.Identity{alice, bob, carol, dave} {
|
||||
h.fund(u, 10*blockchain.Token)
|
||||
}
|
||||
|
||||
// Alice follows bob.
|
||||
h.follow(alice, bob.PubKeyHex())
|
||||
|
||||
// Each non-alice user publishes a post, plus alice herself.
|
||||
postOwn := h.publish(alice, "my own post", nil)
|
||||
h.commitCreatePost(alice, postOwn)
|
||||
postBob := h.publish(bob, "from bob (followed)", nil)
|
||||
h.commitCreatePost(bob, postBob)
|
||||
postCarol := h.publish(carol, "from carol (stranger)", nil)
|
||||
h.commitCreatePost(carol, postCarol)
|
||||
postDave := h.publish(dave, "from dave", nil)
|
||||
h.commitCreatePost(dave, postDave)
|
||||
|
||||
// Alice likes dave's post — so it should NOT appear in her ForYou.
|
||||
h.like(alice, postDave.PostID)
|
||||
|
||||
var fy struct {
|
||||
Count int `json:"count"`
|
||||
Posts []feedAuthorItem `json:"posts"`
|
||||
}
|
||||
h.getJSON("/feed/foryou?pub="+alice.PubKeyHex()+"&limit=20", &fy)
|
||||
|
||||
// Expected: only carol's post. The others are excluded.
|
||||
seen := map[string]bool{}
|
||||
for _, p := range fy.Posts {
|
||||
seen[p.PostID] = true
|
||||
}
|
||||
if seen[postOwn.PostID] {
|
||||
t.Errorf("ForYou included alice's own post %s", postOwn.PostID)
|
||||
}
|
||||
if seen[postBob.PostID] {
|
||||
t.Errorf("ForYou included followed author bob's post %s", postBob.PostID)
|
||||
}
|
||||
if seen[postDave.PostID] {
|
||||
t.Errorf("ForYou included already-liked post from dave %s", postDave.PostID)
|
||||
}
|
||||
if !seen[postCarol.PostID] {
|
||||
t.Errorf("ForYou missing carol's post %s (should surface)", postCarol.PostID)
|
||||
}
|
||||
}
|
||||
|
||||
// TestE2EHashtagSearch: a tag returns only posts that used it.
|
||||
func TestE2EHashtagSearch(t *testing.T) {
|
||||
h := newFeedHarness(t)
|
||||
alice := h.newUser("alice")
|
||||
h.fund(alice, 10*blockchain.Token)
|
||||
|
||||
goPost := h.publish(alice, "learning #golang today", nil)
|
||||
h.commitCreatePost(alice, goPost)
|
||||
rustPost := h.publish(alice, "later — #rust", nil)
|
||||
h.commitCreatePost(alice, rustPost)
|
||||
untagged := h.publish(alice, "no tags", nil)
|
||||
h.commitCreatePost(alice, untagged)
|
||||
|
||||
var tag struct {
|
||||
Tag string `json:"tag"`
|
||||
Count int `json:"count"`
|
||||
Posts []feedAuthorItem `json:"posts"`
|
||||
}
|
||||
h.getJSON("/feed/hashtag/golang", &tag)
|
||||
if tag.Count != 1 || tag.Posts[0].PostID != goPost.PostID {
|
||||
t.Errorf("hashtag(golang): got %+v, want [%s]", tag, goPost.PostID)
|
||||
}
|
||||
h.getJSON("/feed/hashtag/rust", &tag)
|
||||
if tag.Count != 1 || tag.Posts[0].PostID != rustPost.PostID {
|
||||
t.Errorf("hashtag(rust): got %+v, want [%s]", tag, rustPost.PostID)
|
||||
}
|
||||
}
|
||||
|
||||
// TestE2EScrubberStripsEXIF: uploaded image with EXIF canary comes back
|
||||
// without the canary in the stored body. Proves server-side scrub is
|
||||
// mandatory and working at the HTTP boundary.
|
||||
func TestE2EScrubberStripsEXIF(t *testing.T) {
|
||||
h := newFeedHarness(t)
|
||||
alice := h.newUser("alice")
|
||||
h.fund(alice, 1*blockchain.Token)
|
||||
|
||||
// Build a JPEG with an injected EXIF segment containing a canary.
|
||||
var jpegBuf bytes.Buffer
|
||||
img := image.NewRGBA(image.Rect(0, 0, 16, 16))
|
||||
for y := 0; y < 16; y++ {
|
||||
for x := 0; x < 16; x++ {
|
||||
img.Set(x, y, color.RGBA{uint8(x * 16), uint8(y * 16), 100, 255})
|
||||
}
|
||||
}
|
||||
if err := jpeg.Encode(&jpegBuf, img, &jpeg.Options{Quality: 80}); err != nil {
|
||||
t.Fatalf("jpeg encode: %v", err)
|
||||
}
|
||||
withEXIF := injectEXIFSegment(t, jpegBuf.Bytes(),
|
||||
"SUPERSECRETGPS-51.5N-0.1W-iPhone-Serial-A1B2C3")
|
||||
|
||||
// Pre-flight: the upload bytes DO contain the canary.
|
||||
if !bytes.Contains(withEXIF, []byte("SUPERSECRETGPS")) {
|
||||
t.Fatalf("test setup: canary not injected")
|
||||
}
|
||||
|
||||
pub := h.publish(alice, "look at this photo", withEXIF)
|
||||
if pub.PostID == "" {
|
||||
t.Fatalf("publish failed")
|
||||
}
|
||||
h.commitCreatePost(alice, pub)
|
||||
|
||||
// Fetch the stored body back. The attachment field is the cleaned bytes.
|
||||
var fetched struct {
|
||||
Attachment string `json:"attachment"` // base64
|
||||
}
|
||||
h.getJSON("/feed/post/"+pub.PostID, &fetched)
|
||||
if fetched.Attachment == "" {
|
||||
t.Fatalf("attachment not returned")
|
||||
}
|
||||
decoded, err := base64.StdEncoding.DecodeString(fetched.Attachment)
|
||||
if err != nil {
|
||||
t.Fatalf("decode attachment: %v", err)
|
||||
}
|
||||
if bytes.Contains(decoded, []byte("SUPERSECRETGPS")) {
|
||||
t.Errorf("CRITICAL: EXIF canary survived server-side scrub — metadata leaked")
|
||||
}
|
||||
// Sanity: still a valid JPEG after scrub.
|
||||
if _, err := jpeg.Decode(bytes.NewReader(decoded)); err != nil {
|
||||
t.Errorf("scrubbed attachment is not a valid JPEG: %v", err)
|
||||
}
|
||||
}
|
||||
|
||||
// TestE2ERejectsMIMEMismatch: claimed MIME vs magic bytes.
|
||||
func TestE2ERejectsMIMEMismatch(t *testing.T) {
|
||||
h := newFeedHarness(t)
|
||||
alice := h.newUser("alice")
|
||||
h.fund(alice, 1*blockchain.Token)
|
||||
|
||||
// Build a PNG but claim it's a JPEG.
|
||||
fake := []byte{0x89, 'P', 'N', 'G', '\r', '\n', 0x1a, '\n',
|
||||
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0}
|
||||
ts := time.Now().Unix()
|
||||
postID := "mimecheck"
|
||||
hash := sha256.Sum256(append([]byte("mislabel"), fake...))
|
||||
sig := alice.Sign([]byte(fmt.Sprintf("publish:%s:%s:%d",
|
||||
postID, hex.EncodeToString(hash[:]), ts)))
|
||||
req := feedPublishRequest{
|
||||
PostID: postID,
|
||||
Author: alice.PubKeyHex(),
|
||||
Content: "mislabel",
|
||||
AttachmentB64: base64.StdEncoding.EncodeToString(fake),
|
||||
AttachmentMIME: "image/jpeg", // LIE — it's PNG magic
|
||||
Sig: base64.StdEncoding.EncodeToString(sig),
|
||||
Ts: ts,
|
||||
}
|
||||
h.postJSONExpectStatus("/feed/publish", req, http.StatusBadRequest)
|
||||
}
|
||||
|
||||
// TestE2ERejectsBadSignature: wrong signer cannot publish.
|
||||
func TestE2ERejectsBadSignature(t *testing.T) {
|
||||
h := newFeedHarness(t)
|
||||
alice := h.newUser("alice")
|
||||
eve := h.newUser("eve")
|
||||
h.fund(alice, 1*blockchain.Token)
|
||||
h.fund(eve, 1*blockchain.Token)
|
||||
|
||||
ts := time.Now().Unix()
|
||||
postID := "forgery"
|
||||
hash := sha256.Sum256([]byte("evil"))
|
||||
// Eve signs over data but claims to be alice.
|
||||
sig := eve.Sign([]byte(fmt.Sprintf("publish:%s:%s:%d",
|
||||
postID, hex.EncodeToString(hash[:]), ts)))
|
||||
req := feedPublishRequest{
|
||||
PostID: postID,
|
||||
Author: alice.PubKeyHex(), // claim alice
|
||||
Content: "evil",
|
||||
Sig: base64.StdEncoding.EncodeToString(sig),
|
||||
Ts: ts,
|
||||
}
|
||||
h.postJSONExpectStatus("/feed/publish", req, http.StatusForbidden)
|
||||
}
|
||||
|
||||
// TestE2ERejectsStaleTimestamp: publish with ts way in the past must be rejected.
|
||||
func TestE2ERejectsStaleTimestamp(t *testing.T) {
|
||||
h := newFeedHarness(t)
|
||||
alice := h.newUser("alice")
|
||||
h.fund(alice, 1*blockchain.Token)
|
||||
|
||||
ts := time.Now().Add(-1 * time.Hour).Unix() // 1 hour stale
|
||||
postID := "stale"
|
||||
hash := sha256.Sum256([]byte("old"))
|
||||
sig := alice.Sign([]byte(fmt.Sprintf("publish:%s:%s:%d",
|
||||
postID, hex.EncodeToString(hash[:]), ts)))
|
||||
req := feedPublishRequest{
|
||||
PostID: postID,
|
||||
Author: alice.PubKeyHex(),
|
||||
Content: "old",
|
||||
Sig: base64.StdEncoding.EncodeToString(sig),
|
||||
Ts: ts,
|
||||
}
|
||||
h.postJSONExpectStatus("/feed/publish", req, http.StatusBadRequest)
|
||||
}
|
||||
|
||||
// injectEXIFSegment splices an APP1 EXIF segment with the given canary
|
||||
// string into a JPEG. Mirrors media/scrub_test.go but local to keep the
|
||||
// integration test self-contained.
|
||||
func injectEXIFSegment(t *testing.T, src []byte, canary string) []byte {
|
||||
t.Helper()
|
||||
if len(src) < 2 || src[0] != 0xFF || src[1] != 0xD8 {
|
||||
t.Fatalf("not a JPEG")
|
||||
}
|
||||
payload := []byte("Exif\x00\x00" + canary)
|
||||
segLen := len(payload) + 2
|
||||
out := make([]byte, 0, len(src)+segLen+4)
|
||||
out = append(out, src[0], src[1]) // SOI
|
||||
out = append(out, 0xFF, 0xE1, byte(segLen>>8), byte(segLen&0xff))
|
||||
out = append(out, payload...)
|
||||
out = append(out, src[2:]...)
|
||||
return out
|
||||
}
|
||||
|
||||
// Silence unused-import lint if strings gets trimmed by refactor.
|
||||
var _ = strings.TrimSpace
|
||||
var _ = context.TODO
|
||||
504
node/feed_twonode_test.go
Normal file
504
node/feed_twonode_test.go
Normal file
@@ -0,0 +1,504 @@
|
||||
// Two-node simulation: verifies that a post published on Node A is
|
||||
// discoverable and fetchable from Node B after block propagation.
|
||||
//
|
||||
// The real network uses libp2p gossipsub for blocks + an HTTP pull
|
||||
// fallback. For tests we simulate gossip by manually calling chain.AddBlock
|
||||
// on both nodes with the same block — identical to what each node does
|
||||
// after receiving a peer's gossiped block in production.
|
||||
//
|
||||
// Body ownership: only the HOSTING relay has the post body in its
|
||||
// feed mailbox. Readers on OTHER nodes see the on-chain record
|
||||
// (hosting_relay pubkey, content hash, size, author) and fetch the
|
||||
// body directly from the hosting node over HTTP. That's the design —
|
||||
// storage costs don't get amortised across the whole network, the
|
||||
// author pays one node to host, and the public reads from that one
|
||||
// node (or from replicas if/when we add post pinning in v3.0.0).
|
||||
package node
|
||||
|
||||
import (
|
||||
"crypto/sha256"
|
||||
"encoding/base64"
|
||||
"encoding/hex"
|
||||
"encoding/json"
|
||||
"fmt"
|
||||
"io"
|
||||
"net/http"
|
||||
"net/http/httptest"
|
||||
"os"
|
||||
"strings"
|
||||
"testing"
|
||||
"time"
|
||||
|
||||
"go-blockchain/blockchain"
|
||||
"go-blockchain/identity"
|
||||
"go-blockchain/media"
|
||||
"go-blockchain/relay"
|
||||
)
|
||||
|
||||
// twoNodeHarness wires two independent chain+feed instances sharing a
|
||||
// single block history (simulated gossip). Node A is the "hosting"
|
||||
// relay; Node B is the reader.
|
||||
type twoNodeHarness struct {
|
||||
t *testing.T
|
||||
|
||||
aChainDir, aFeedDir string
|
||||
bChainDir, bFeedDir string
|
||||
|
||||
aChain, bChain *blockchain.Chain
|
||||
aMailbox, bMailbox *relay.FeedMailbox
|
||||
aServer, bServer *httptest.Server
|
||||
aHostPub string
|
||||
bHostPub string
|
||||
validator *identity.Identity
|
||||
tipIndex uint64
|
||||
tipHash []byte
|
||||
}
|
||||
|
||||
func newTwoNodeHarness(t *testing.T) *twoNodeHarness {
|
||||
t.Helper()
|
||||
mkdir := func(prefix string) string {
|
||||
d, err := os.MkdirTemp("", prefix)
|
||||
if err != nil {
|
||||
t.Fatalf("MkdirTemp: %v", err)
|
||||
}
|
||||
return d
|
||||
}
|
||||
|
||||
h := &twoNodeHarness{
|
||||
t: t,
|
||||
aChainDir: mkdir("dchain-2n-chainA-*"),
|
||||
aFeedDir: mkdir("dchain-2n-feedA-*"),
|
||||
bChainDir: mkdir("dchain-2n-chainB-*"),
|
||||
bFeedDir: mkdir("dchain-2n-feedB-*"),
|
||||
}
|
||||
|
||||
var err error
|
||||
h.aChain, err = blockchain.NewChain(h.aChainDir)
|
||||
if err != nil {
|
||||
t.Fatalf("chain A: %v", err)
|
||||
}
|
||||
h.bChain, err = blockchain.NewChain(h.bChainDir)
|
||||
if err != nil {
|
||||
t.Fatalf("chain B: %v", err)
|
||||
}
|
||||
h.aMailbox, err = relay.OpenFeedMailbox(h.aFeedDir, 24*time.Hour)
|
||||
if err != nil {
|
||||
t.Fatalf("feed A: %v", err)
|
||||
}
|
||||
h.bMailbox, err = relay.OpenFeedMailbox(h.bFeedDir, 24*time.Hour)
|
||||
if err != nil {
|
||||
t.Fatalf("feed B: %v", err)
|
||||
}
|
||||
|
||||
h.validator, err = identity.Generate()
|
||||
if err != nil {
|
||||
t.Fatalf("validator: %v", err)
|
||||
}
|
||||
// Both nodes start from the same genesis — the single bootstrap
|
||||
// validator allocates the initial supply. In production this is
|
||||
// hardcoded; in tests we just generate and use it to sign blocks
|
||||
// on both chains.
|
||||
genesis := blockchain.GenesisBlock(h.validator.PubKeyHex(), h.validator.PrivKey)
|
||||
if err := h.aChain.AddBlock(genesis); err != nil {
|
||||
t.Fatalf("A genesis: %v", err)
|
||||
}
|
||||
if err := h.bChain.AddBlock(genesis); err != nil {
|
||||
t.Fatalf("B genesis: %v", err)
|
||||
}
|
||||
h.tipIndex = genesis.Index
|
||||
h.tipHash = genesis.Hash
|
||||
|
||||
// Node A hosts; Node B is a pure reader (no host_pub of its own that
|
||||
// anyone publishes to). They share a single validator because this
|
||||
// test isn't about consensus — it's about chain state propagation.
|
||||
h.aHostPub = h.validator.PubKeyHex()
|
||||
|
||||
// Node B uses a separate identity purely for its hosting_relay field
|
||||
// (never actually hosts anything in this scenario). Distinguishes A
|
||||
// from B in balance assertions.
|
||||
idB, _ := identity.Generate()
|
||||
h.bHostPub = idB.PubKeyHex()
|
||||
|
||||
scrubber := media.NewScrubber(media.SidecarConfig{})
|
||||
|
||||
aCfg := FeedConfig{
|
||||
Mailbox: h.aMailbox,
|
||||
HostingRelayPub: h.aHostPub,
|
||||
Scrubber: scrubber,
|
||||
GetPost: h.aChain.Post,
|
||||
LikeCount: h.aChain.LikeCount,
|
||||
HasLiked: h.aChain.HasLiked,
|
||||
PostsByAuthor: h.aChain.PostsByAuthor,
|
||||
Following: h.aChain.Following,
|
||||
}
|
||||
bCfg := FeedConfig{
|
||||
Mailbox: h.bMailbox,
|
||||
HostingRelayPub: h.bHostPub,
|
||||
Scrubber: scrubber,
|
||||
GetPost: h.bChain.Post,
|
||||
LikeCount: h.bChain.LikeCount,
|
||||
HasLiked: h.bChain.HasLiked,
|
||||
PostsByAuthor: h.bChain.PostsByAuthor,
|
||||
Following: h.bChain.Following,
|
||||
}
|
||||
muxA := http.NewServeMux()
|
||||
RegisterFeedRoutes(muxA, aCfg)
|
||||
h.aServer = httptest.NewServer(muxA)
|
||||
muxB := http.NewServeMux()
|
||||
RegisterFeedRoutes(muxB, bCfg)
|
||||
h.bServer = httptest.NewServer(muxB)
|
||||
|
||||
t.Cleanup(h.Close)
|
||||
return h
|
||||
}
|
||||
|
||||
func (h *twoNodeHarness) Close() {
|
||||
if h.aServer != nil {
|
||||
h.aServer.Close()
|
||||
}
|
||||
if h.bServer != nil {
|
||||
h.bServer.Close()
|
||||
}
|
||||
if h.aMailbox != nil {
|
||||
_ = h.aMailbox.Close()
|
||||
}
|
||||
if h.bMailbox != nil {
|
||||
_ = h.bMailbox.Close()
|
||||
}
|
||||
if h.aChain != nil {
|
||||
_ = h.aChain.Close()
|
||||
}
|
||||
if h.bChain != nil {
|
||||
_ = h.bChain.Close()
|
||||
}
|
||||
for _, dir := range []string{h.aChainDir, h.aFeedDir, h.bChainDir, h.bFeedDir} {
|
||||
for i := 0; i < 20; i++ {
|
||||
if err := os.RemoveAll(dir); err == nil {
|
||||
break
|
||||
}
|
||||
time.Sleep(10 * time.Millisecond)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// gossipBlock simulates libp2p block propagation: same block applied to
|
||||
// both chains. In production, AddBlock is called on each peer after the
|
||||
// gossipsub message arrives — no chain-level difference from the direct
|
||||
// call here.
|
||||
func (h *twoNodeHarness) gossipBlock(txs ...*blockchain.Transaction) {
|
||||
h.t.Helper()
|
||||
time.Sleep(2 * time.Millisecond) // distinct tx IDs
|
||||
var totalFees uint64
|
||||
for _, tx := range txs {
|
||||
totalFees += tx.Fee
|
||||
}
|
||||
b := &blockchain.Block{
|
||||
Index: h.tipIndex + 1,
|
||||
Timestamp: time.Now().UTC(),
|
||||
Transactions: txs,
|
||||
PrevHash: h.tipHash,
|
||||
Validator: h.validator.PubKeyHex(),
|
||||
TotalFees: totalFees,
|
||||
}
|
||||
b.ComputeHash()
|
||||
b.Sign(h.validator.PrivKey)
|
||||
|
||||
if err := h.aChain.AddBlock(b); err != nil {
|
||||
h.t.Fatalf("A AddBlock: %v", err)
|
||||
}
|
||||
if err := h.bChain.AddBlock(b); err != nil {
|
||||
h.t.Fatalf("B AddBlock: %v", err)
|
||||
}
|
||||
h.tipIndex = b.Index
|
||||
h.tipHash = b.Hash
|
||||
}
|
||||
|
||||
func (h *twoNodeHarness) nextTxID(from string, typ blockchain.EventType) string {
|
||||
sum := sha256.Sum256([]byte(fmt.Sprintf("%s:%s:%d", from, typ, time.Now().UnixNano())))
|
||||
return hex.EncodeToString(sum[:16])
|
||||
}
|
||||
|
||||
// fundAB transfers from validator → target, propagated to both chains.
|
||||
func (h *twoNodeHarness) fundAB(target *identity.Identity, amount uint64) {
|
||||
tx := &blockchain.Transaction{
|
||||
ID: h.nextTxID(h.validator.PubKeyHex(), blockchain.EventTransfer),
|
||||
Type: blockchain.EventTransfer,
|
||||
From: h.validator.PubKeyHex(),
|
||||
To: target.PubKeyHex(),
|
||||
Amount: amount,
|
||||
Fee: blockchain.MinFee,
|
||||
Timestamp: time.Now().UTC(),
|
||||
}
|
||||
h.gossipBlock(tx)
|
||||
}
|
||||
|
||||
// publishOnA uploads body to A's feed mailbox (only A gets the body) and
|
||||
// gossips the CREATE_POST tx to both chains (both see the metadata).
|
||||
func (h *twoNodeHarness) publishOnA(author *identity.Identity, content string) feedPublishResponse {
|
||||
h.t.Helper()
|
||||
idHash := sha256.Sum256([]byte(fmt.Sprintf("%s-%d-%s",
|
||||
author.PubKeyHex(), time.Now().UnixNano(), content)))
|
||||
postID := hex.EncodeToString(idHash[:16])
|
||||
clientHasher := sha256.New()
|
||||
clientHasher.Write([]byte(content))
|
||||
clientHash := hex.EncodeToString(clientHasher.Sum(nil))
|
||||
ts := time.Now().Unix()
|
||||
sig := author.Sign([]byte(fmt.Sprintf("publish:%s:%s:%d", postID, clientHash, ts)))
|
||||
|
||||
req := feedPublishRequest{
|
||||
PostID: postID,
|
||||
Author: author.PubKeyHex(),
|
||||
Content: content,
|
||||
Sig: base64.StdEncoding.EncodeToString(sig),
|
||||
Ts: ts,
|
||||
}
|
||||
body, _ := json.Marshal(req)
|
||||
resp, err := http.Post(h.aServer.URL+"/feed/publish", "application/json", strings.NewReader(string(body)))
|
||||
if err != nil {
|
||||
h.t.Fatalf("publish on A: %v", err)
|
||||
}
|
||||
defer resp.Body.Close()
|
||||
if resp.StatusCode >= 400 {
|
||||
raw, _ := io.ReadAll(resp.Body)
|
||||
h.t.Fatalf("publish on A → %d: %s", resp.StatusCode, string(raw))
|
||||
}
|
||||
var out feedPublishResponse
|
||||
if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
|
||||
h.t.Fatalf("decode publish: %v", err)
|
||||
}
|
||||
|
||||
// Now the ON-CHAIN CREATE_POST tx — gossiped to both nodes.
|
||||
contentHash, _ := hex.DecodeString(out.ContentHash)
|
||||
payload, _ := json.Marshal(blockchain.CreatePostPayload{
|
||||
PostID: out.PostID,
|
||||
ContentHash: contentHash,
|
||||
Size: out.Size,
|
||||
HostingRelay: out.HostingRelay,
|
||||
})
|
||||
tx := &blockchain.Transaction{
|
||||
ID: h.nextTxID(author.PubKeyHex(), blockchain.EventCreatePost),
|
||||
Type: blockchain.EventCreatePost,
|
||||
From: author.PubKeyHex(),
|
||||
Fee: out.EstimatedFeeUT,
|
||||
Payload: payload,
|
||||
Timestamp: time.Now().UTC(),
|
||||
}
|
||||
h.gossipBlock(tx)
|
||||
return out
|
||||
}
|
||||
|
||||
// likeOnB submits a LIKE_POST tx originating on Node B (simulates a
|
||||
// follower using their own node). Both chains receive the block.
|
||||
func (h *twoNodeHarness) likeOnB(liker *identity.Identity, postID string) {
|
||||
payload, _ := json.Marshal(blockchain.LikePostPayload{PostID: postID})
|
||||
tx := &blockchain.Transaction{
|
||||
ID: h.nextTxID(liker.PubKeyHex(), blockchain.EventLikePost),
|
||||
Type: blockchain.EventLikePost,
|
||||
From: liker.PubKeyHex(),
|
||||
Fee: blockchain.MinFee,
|
||||
Payload: payload,
|
||||
Timestamp: time.Now().UTC(),
|
||||
}
|
||||
h.gossipBlock(tx)
|
||||
}
|
||||
|
||||
// getBodyFromA fetches /feed/post/{id} from Node A's HTTP server.
|
||||
func (h *twoNodeHarness) getBodyFromA(postID string) (int, []byte) {
|
||||
h.t.Helper()
|
||||
resp, err := http.Get(h.aServer.URL + "/feed/post/" + postID)
|
||||
if err != nil {
|
||||
h.t.Fatalf("GET A: %v", err)
|
||||
}
|
||||
defer resp.Body.Close()
|
||||
raw, _ := io.ReadAll(resp.Body)
|
||||
return resp.StatusCode, raw
|
||||
}
|
||||
|
||||
// getBodyFromB same for Node B.
|
||||
func (h *twoNodeHarness) getBodyFromB(postID string) (int, []byte) {
|
||||
h.t.Helper()
|
||||
resp, err := http.Get(h.bServer.URL + "/feed/post/" + postID)
|
||||
if err != nil {
|
||||
h.t.Fatalf("GET B: %v", err)
|
||||
}
|
||||
defer resp.Body.Close()
|
||||
raw, _ := io.ReadAll(resp.Body)
|
||||
return resp.StatusCode, raw
|
||||
}
|
||||
|
||||
// ── Tests ─────────────────────────────────────────────────────────────────
|
||||
|
||||
// TestTwoNodePostPropagation: Alice publishes on Node A. After block
|
||||
// propagation, both chains have the record. Node B can read the
|
||||
// on-chain metadata directly, and can fetch the body from Node A (the
|
||||
// hosting relay) — which is what the client does in production.
|
||||
func TestTwoNodePostPropagation(t *testing.T) {
|
||||
h := newTwoNodeHarness(t)
|
||||
alice, _ := identity.Generate()
|
||||
h.fundAB(alice, 10*blockchain.Token)
|
||||
|
||||
pub := h.publishOnA(alice, "hello from node A")
|
||||
|
||||
// Node A chain has the record.
|
||||
recA, err := h.aChain.Post(pub.PostID)
|
||||
if err != nil || recA == nil {
|
||||
t.Fatalf("A chain.Post: %v (rec=%v)", err, recA)
|
||||
}
|
||||
// Node B chain also has the record — propagation successful.
|
||||
recB, err := h.bChain.Post(pub.PostID)
|
||||
if err != nil || recB == nil {
|
||||
t.Fatalf("B chain.Post: %v (rec=%v)", err, recB)
|
||||
}
|
||||
if recA.PostID != recB.PostID || recA.Author != recB.Author {
|
||||
t.Errorf("chains disagree: A=%+v B=%+v", recA, recB)
|
||||
}
|
||||
if recB.HostingRelay != h.aHostPub {
|
||||
t.Errorf("B sees hosting_relay=%s, want A's pub=%s", recB.HostingRelay, h.aHostPub)
|
||||
}
|
||||
|
||||
// Node A HTTP serves the body.
|
||||
statusA, _ := h.getBodyFromA(pub.PostID)
|
||||
if statusA != http.StatusOK {
|
||||
t.Errorf("A GET: status %d, want 200", statusA)
|
||||
}
|
||||
|
||||
// Node B HTTP does NOT have the body — body only lives on the hosting
|
||||
// relay. This is by design: the reader client on Node B would read
|
||||
// chain.Post(id).HostingRelay, look up its URL via /api/relays, and
|
||||
// fetch directly from Node A. Tested by the next assertion.
|
||||
statusB, _ := h.getBodyFromB(pub.PostID)
|
||||
if statusB != http.StatusNotFound {
|
||||
t.Errorf("B GET: status %d, want 404 (body lives only on hosting relay)", statusB)
|
||||
}
|
||||
|
||||
// Simulate the client routing step: use chain record from B to find
|
||||
// hosting relay, then fetch from A.
|
||||
hosting := recB.HostingRelay
|
||||
if hosting != h.aHostPub {
|
||||
t.Fatalf("hosting not A: %s", hosting)
|
||||
}
|
||||
// In production: look up hosting's URL via /api/relays. Here we
|
||||
// already know it = h.aServer.URL. Just verify the fetch works.
|
||||
statusCross, bodyCross := h.getBodyFromA(pub.PostID)
|
||||
if statusCross != http.StatusOK {
|
||||
t.Fatalf("cross-node fetch: status %d", statusCross)
|
||||
}
|
||||
var fetched struct {
|
||||
Content string `json:"content"`
|
||||
Author string `json:"author"`
|
||||
}
|
||||
if err := json.Unmarshal(bodyCross, &fetched); err != nil {
|
||||
t.Fatalf("decode cross-node body: %v", err)
|
||||
}
|
||||
if fetched.Content != "hello from node A" {
|
||||
t.Errorf("cross-node content: got %q", fetched.Content)
|
||||
}
|
||||
}
|
||||
|
||||
// TestTwoNodeLikeCounterSharedAcrossNodes: a like submitted with tx
|
||||
// origin on Node B bumps the on-chain counter — which Node A's HTTP
|
||||
// /stats then reflects. Demonstrates that engagement aggregates are
|
||||
// consistent across the mesh because they live on the chain, not in
|
||||
// any single relay's memory.
|
||||
func TestTwoNodeLikeCounterSharedAcrossNodes(t *testing.T) {
|
||||
h := newTwoNodeHarness(t)
|
||||
alice, _ := identity.Generate()
|
||||
bob, _ := identity.Generate()
|
||||
h.fundAB(alice, 10*blockchain.Token)
|
||||
h.fundAB(bob, 10*blockchain.Token)
|
||||
|
||||
pub := h.publishOnA(alice, "content for engagement test")
|
||||
h.likeOnB(bob, pub.PostID)
|
||||
|
||||
// A's HTTP stats (backed by its chain.LikeCount) should see the like.
|
||||
resp, err := http.Get(h.aServer.URL + "/feed/post/" + pub.PostID + "/stats")
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
defer resp.Body.Close()
|
||||
var stats postStatsResponse
|
||||
if err := json.NewDecoder(resp.Body).Decode(&stats); err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
if stats.Likes != 1 {
|
||||
t.Errorf("A /stats: got %d likes, want 1", stats.Likes)
|
||||
}
|
||||
|
||||
// Same for B.
|
||||
resp, err = http.Get(h.bServer.URL + "/feed/post/" + pub.PostID + "/stats")
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
defer resp.Body.Close()
|
||||
if err := json.NewDecoder(resp.Body).Decode(&stats); err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
if stats.Likes != 1 {
|
||||
t.Errorf("B /stats: got %d likes, want 1", stats.Likes)
|
||||
}
|
||||
}
|
||||
|
||||
// TestTwoNodeFollowGraphReplicates: FOLLOW tx on any node propagates to
|
||||
// both chains; B's /feed/timeline returns A-hosted posts correctly.
|
||||
func TestTwoNodeFollowGraphReplicates(t *testing.T) {
|
||||
h := newTwoNodeHarness(t)
|
||||
alice, _ := identity.Generate() // will follow bob
|
||||
bob, _ := identity.Generate() // author
|
||||
h.fundAB(alice, 10*blockchain.Token)
|
||||
h.fundAB(bob, 10*blockchain.Token)
|
||||
|
||||
// Alice follows Bob (tx gossiped to both nodes).
|
||||
followTx := &blockchain.Transaction{
|
||||
ID: h.nextTxID(alice.PubKeyHex(), blockchain.EventFollow),
|
||||
Type: blockchain.EventFollow,
|
||||
From: alice.PubKeyHex(),
|
||||
To: bob.PubKeyHex(),
|
||||
Fee: blockchain.MinFee,
|
||||
Payload: []byte(`{}`),
|
||||
Timestamp: time.Now().UTC(),
|
||||
}
|
||||
h.gossipBlock(followTx)
|
||||
|
||||
// Bob publishes on A. Alice queries timeline on B.
|
||||
bobPost := h.publishOnA(bob, "bob speaks")
|
||||
|
||||
// Alice's timeline on Node B should include Bob's post (metadata
|
||||
// lives on chain). Body fetch would go to A, but /timeline only
|
||||
// returns the enriched record — which DOES include body content
|
||||
// because B's feed_mailbox doesn't have it... hmm.
|
||||
//
|
||||
// Actually this reveals a limitation: /feed/timeline on B merges
|
||||
// chain records (available) with local mailbox bodies (missing).
|
||||
// The body parts of the response will be empty. For the e2e test we
|
||||
// just verify the metadata is there — the client is expected to
|
||||
// resolve bodies separately via the hosting_relay URL.
|
||||
resp, err := http.Get(h.bServer.URL + "/feed/timeline?follower=" + alice.PubKeyHex())
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
defer resp.Body.Close()
|
||||
var tl struct {
|
||||
Count int `json:"count"`
|
||||
Posts []feedAuthorItem `json:"posts"`
|
||||
}
|
||||
if err := json.NewDecoder(resp.Body).Decode(&tl); err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
if tl.Count != 1 {
|
||||
t.Fatalf("B timeline count: got %d, want 1", tl.Count)
|
||||
}
|
||||
if tl.Posts[0].PostID != bobPost.PostID {
|
||||
t.Errorf("B timeline[0]: got %s, want %s", tl.Posts[0].PostID, bobPost.PostID)
|
||||
}
|
||||
// Metadata must be correct even if body is empty on B.
|
||||
if tl.Posts[0].Author != bob.PubKeyHex() {
|
||||
t.Errorf("B timeline[0].author: got %s, want %s", tl.Posts[0].Author, bob.PubKeyHex())
|
||||
}
|
||||
if tl.Posts[0].HostingRelay != h.aHostPub {
|
||||
t.Errorf("B timeline[0].hosting_relay: got %s, want A (%s)", tl.Posts[0].HostingRelay, h.aHostPub)
|
||||
}
|
||||
// Body is intentionally empty on B (A hosts it). Verify.
|
||||
if tl.Posts[0].Content != "" {
|
||||
t.Errorf("B timeline[0].content: got %q, want empty (body lives on A)", tl.Posts[0].Content)
|
||||
}
|
||||
}
|
||||
Reference in New Issue
Block a user