Derivative Experimentation Prompts for Anomaly Detection (AD)
So that reviewers and readers can reproduce the results of this IEEE paper, the "Derivative Experimentation Prompts for Anomaly Detection (AD)" are provided below (rather than in the paper itself). They are numbered AD #17 through AD #25.
AD #17 (Derivative of AD #15)
(as referenced in Section III.C, Experimentation)
Prospective Value-added Proposition: Multi-Pass, Visual Attention, Uncertainty Detection, Multi-Agent, and False Positives/False Negatives preserved; Clear and Concise (minimized instruction set); Strategic Repetition.
Task:
Analyze the uploaded image and produce a simplified annotated version showing all detected objects, anomalies, and structural layouts.
Instructions:
Reason step-by-step using multiple independent reasoning paths (self-consistency). Explicitly identify false positives and false negatives, provide examples, assess likelihood and impact, note assumptions/uncertainties, and summarize conclusions from the most consistent reasoning.
Visual Grid:
Divide the image into a logical grid (e.g., 3x3 or 4x4). For each region: detect shapes/objects, classify contextually, identify anomalies. Reconcile duplicates across regions.
Multi-Agent Analysis:
Agent A: Visual detection; Agent B: Engineering interpretation; Agent C: Context assessment; Agent D: Dataset verification (USGS, NASA); Agent E: Anomaly and risk evaluation.
Confidence Scoring:
Assign confidence per object based on visual, contextual, and dataset evidence. High=3/3, Medium=2/3, Low=1/3.
Verification Checklist:
Ensure all regions analyzed, ambiguous objects reviewed, false positives/negatives checked, cross-region reconciliation done, triangulated evidence supports conclusions, anomalies highlighted, assumptions noted.
Image Overlay:
Preserve essential layout, remove minor details, overlay objects with bright outlines, highlight anomalies in red, keep base image visible.
Output:
1. Annotated image with overlays.
2. Object table: category, location, confidence, anomaly.
3. False positives.
4. False negatives.
5. Assumptions/uncertainties.
6. Final synthesis.
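For readers pre-tiling imagery before submitting it with a prompt like AD #17, the grid step ("divide the image into a logical grid, e.g., 3x3 or 4x4") can be computed directly from the image dimensions. The sketch below is illustrative only and is not part of any prompt; the function name and the `R{row}C{col}` region-labeling convention are assumptions.

```python
# Illustrative sketch (not part of the prompt): compute the pixel
# bounding boxes for an n x n visual-attention grid, so each region
# can be cropped and inspected independently.

def grid_regions(width, height, n=3):
    """Return {label: (left, top, right, bottom)} for an n x n grid.

    Labels like 'R1C2' (row 1, column 2) are an assumed convention
    for reporting approximate detection locations per region.
    """
    regions = {}
    for row in range(n):
        for col in range(n):
            left = col * width // n
            top = row * height // n
            right = (col + 1) * width // n
            bottom = (row + 1) * height // n
            regions[f"R{row + 1}C{col + 1}"] = (left, top, right, bottom)
    return regions

# Example: a 3x3 grid over a 900x600 image.
boxes = grid_regions(900, 600, n=3)
print(boxes["R1C1"])  # (0, 0, 300, 200)
print(boxes["R3C3"])  # (600, 400, 900, 600)
```

Integer floor division keeps the boxes exactly tiling the image even when the dimensions are not divisible by `n`.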
AD #18 (Derivative of AD #15)
(as referenced in Section III.C, Experimentation)
Prospective Value-added Proposition: Multi-Pass, Visual Attention, Uncertainty Detection, Multi-Agent, and False Positives/False Negatives preserved; Clear and Concise (minimized instruction set); Strategic Repetition.
Task:
Analyze the uploaded image, produce a simplified annotated version showing objects, anomalies, and layout.
Instructions:
Reason step-by-step using multiple paths (self-consistency). Identify false positives/negatives, note assumptions/uncertainties, summarize conclusions from consistent reasoning.
Visual Grid:
Divide image (3x3/4x4). Detect, classify, identify anomalies per region. Reconcile duplicates.
Agents:
A: Visual detection, B: Engineering, C: Context, D: Dataset verification, E: Anomaly/risk.
Confidence:
Assign per object (High=3/3, Medium=2/3, Low=1/3).
Verify: All regions, FP/FN, anomalies, cross-region consistency.
Output:
1. Annotated image.
2. Object table.
3. False positives.
4. False negatives.
5. Assumptions/uncertainties.
6. Final synthesis.
AD #19 (Derivative of AD #15)
(as referenced in Section III.C, Experimentation)
Prospective Value-added Proposition: Triangulated Evidence, Multi-Pass, Visual Attention, Uncertainty Detection, Multi-Agent, and False Positives/False Negatives preserved; Clear and Concise (minimized instruction set); Strategic Repetition.
Task:
Analyze the uploaded image and produce a simplified annotated version showing objects, anomalies, and layout.
Instructions:
Perform multi-pass, step-by-step reasoning. Detect uncertain/ambiguous objects, reconcile duplicates, flag false positives/negatives, note assumptions, and summarize most consistent conclusions.
Visual Attention:
Divide image into a grid (3x3/4x4). Detect, classify, and identify anomalies per region. Reconcile overlapping detections.
Triangulated Evidence:
Assign confidence per object based on visual cues, context, and dataset alignment. High=3/3, Medium=2/3, Low=1/3.
Agents:
A: Visual detection, B: Engineering/structural, C: Context, D: Dataset verification, E: Anomaly/risk.
Output:
1. Annotated image.
2. Object table (category, location, confidence, anomaly).
3. Uncertain/ambiguous detections.
4. False positives.
5. False negatives.
6. Assumptions/uncertainties.
7. Final synthesis.
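The triangulated-evidence confidence rubric used in AD #19 and its siblings (High=3/3, Medium=2/3, Low=1/3) amounts to counting how many of the three evidence channels, visual cues, contextual reasoning, and dataset alignment, support a detection. A minimal sketch, assuming boolean flags per channel (the function name and the 0/3 fallback label are assumptions, not part of the prompt):

```python
# Illustrative sketch (not part of the prompt): map triangulated
# evidence (visual, context, dataset) to the High/Medium/Low
# confidence labels used in the AD prompts.

def triangulated_confidence(visual, context, dataset):
    """Each argument is True if that evidence channel supports the detection."""
    score = sum([bool(visual), bool(context), bool(dataset)])
    labels = {3: "High (3/3)", 2: "Medium (2/3)", 1: "Low (1/3)"}
    # Zero supporting channels falls outside the prompts' rubric;
    # treating it as a candidate false positive is an assumption here.
    return labels.get(score, "Reject (0/3, likely false positive)")

print(triangulated_confidence(True, True, True))   # High (3/3)
print(triangulated_confidence(True, False, True))  # Medium (2/3)
```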
AD #20 (Derivative of AD #15)
(as referenced in Section III.C, Experimentation)
Prospective Value-added Proposition: Triangulated Evidence, Multi-Pass, Visual Attention, Uncertainty Detection, and False Positives/False Negatives preserved; Clear and Concise (minimized instruction set); Strategic Repetition.
Task:
Analyze the uploaded image, produce a simplified annotated version with objects, anomalies, and layout.
Instructions:
Multi-pass stepwise reasoning. Detect uncertain/ambiguous objects, reconcile duplicates, flag false positives/negatives, note assumptions, summarize most consistent conclusions.
Visual Attention:
Divide image into grid. Detect, classify, identify anomalies per region.
Triangulated Evidence:
Assign confidence per object based on visual, context, and dataset alignment (High=3/3, Medium=2/3, Low=1/3).
Output:
Annotated image, object table, uncertain detections, FP/FN, assumptions, final synthesis.
AD #21 (Derivative of AD #15)
(as referenced in Section III.C, Experimentation)
Prospective Value-added Proposition: Triangulated Evidence, Multi-Pass, Visual Attention, Uncertainty Detection, and False Positives/False Negatives preserved; Clear and Concise (minimized instruction set); Strategic Repetition; Prepended Strict Instructive.
Return results in sections:
AnnotatedImage, ObjectTable, UncertainDetections, FalsePositives, FalseNegatives, Assumptions, FinalSynthesis.
Analyze the uploaded image using multi-pass visual attention. Divide the image into a grid and inspect each region for objects, structures, and anomalies. Reconcile duplicates across regions. Evaluate detections using triangulated evidence (visual cues, contextual reasoning, dataset alignment) and assign confidence (High/Med/Low). Flag uncertain detections and analyze false positives and false negatives. Produce a simplified annotated image plus an object table, uncertainty notes, assumptions, and final synthesis based on the most consistent reasoning.
AD #22 (Derivative of AD #15)
(as referenced in Section III.C, Experimentation)
Prospective Value-added Proposition: Triangulated Evidence, Multi-Pass, Visual Attention, and Uncertainty Detection preserved; Clear and Concise (minimized instruction set).
Analyze the image via multi-pass reasoning; detect objects/anomalies, flag uncertain/ambiguous, reconcile duplicates, assign confidence via visual/context/dataset triangulation, output annotated image, table, FP/FN, assumptions, synthesis.
AD #23 (Derivative of AD #15)
(as referenced in Section III.C, Experimentation)
Prospective Value-added Proposition: Triangulated Evidence, Multi-Pass, Visual Attention, Uncertainty Detection, and False Positives/False Negatives preserved; Clear and Concise (minimized instruction set); Strategic Repetition; Prepended Strict Instructive.
Return results as structured sections or JSON when possible.
Task: Analyze the uploaded image and generate a simplified annotated version highlighting objects, structures, and anomalies.
Method: Use multi-pass visual attention. Divide the image into a grid and inspect each region sequentially. Detect objects, classify structures, and flag anomalies. Reconcile overlapping or duplicate detections across regions.
Evidence: Evaluate each detection using triangulated evidence (visual features, contextual reasoning, dataset/pattern agreement). Assign confidence: High (3/3), Medium (2/3), Low (1/3). Mark uncertain or ambiguous detections.
Quality Check: Identify potential false positives and false negatives. Note assumptions and uncertainties.
Output:
1. Annotated simplified image.
2. Object table (category, location, confidence, anomaly).
3. Uncertain detections.
4. False positives.
5. False negatives.
6. Assumptions.
7. Final synthesis based on the most consistent evidence.
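AD #23 asks for "structured sections or JSON when possible" but does not fix a schema. One plausible shape for the seven output sections is sketched below; every field name and value here is a hypothetical illustration, not part of the prompt or of any experimental result.

```python
import json

# Hypothetical output schema (assumed, not specified by AD #23) for
# the seven requested sections, rendered as JSON.
result = {
    "ObjectTable": [
        {"category": "building", "location": "R2C2",
         "confidence": "High (3/3)", "anomaly": False},
    ],
    "UncertainDetections": ["linear feature in R3C1"],
    "FalsePositives": [],
    "FalseNegatives": [],
    "Assumptions": ["image is nadir-view imagery"],
    "FinalSynthesis": "One structure detected; no anomalies confirmed.",
}
print(json.dumps(result, indent=2))
```

Fixing a schema like this up front makes model outputs machine-comparable across prompt variants.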
AD #24 (Derivative of AD #15)
(as referenced in Section III.C, Experimentation)
Prospective Value-added Proposition: Triangulated Evidence, Multi-Pass, Visual Attention, Uncertainty Detection, and False Positives/False Negatives preserved; Clear and Concise (minimized instruction set); Strategic Repetition; Postpended Strict Instructive.
Stage 1 – Visual Observation
Divide the image into a grid and scan each region sequentially. List visible shapes, objects, textures, edges, and structural patterns. Record approximate location for each detection. Note potential anomalies or ambiguous shapes.
Rules: Focus only on observable visual features. Avoid assumptions about purpose or meaning. Allow uncertain detections and mark them as “uncertain”. Reconcile duplicate detections across neighboring regions.
Output: raw observation table (object/shape, region, notes, uncertainty).
--------------------------------------------------
Stage 2 – Interpretation & Verification
Use the observation table to classify objects and structures. Apply triangulated evidence: visual features, contextual reasoning, and known patterns/datasets. Assign confidence per object (High=3/3, Medium=2/3, Low=1/3).
Evaluate:
- uncertain detections
- possible false positives
- possible false negatives
- spatial relationships and structural layout
Generate a simplified annotated image highlighting objects and anomalies.
Output:
1. Annotated image.
2. Object table (category, location, confidence, anomaly).
3. Uncertain detections.
4. False positives.
5. False negatives.
6. Assumptions/uncertainties.
7. Final synthesis.
--------------------------------------------------
Re-check detections with Medium/Low confidence and attempt to confirm or reject them using contextual and spatial reasoning.
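The reconciliation rule that recurs in these prompts ("reconcile duplicate detections across neighboring regions", as in Stage 1 of AD #24) can be approximated by merging same-category detections whose bounding boxes overlap. The IoU threshold, tuple structure, and function names below are illustrative assumptions, not part of any prompt.

```python
# Illustrative sketch (not part of the prompt): merge duplicate
# detections reported by neighboring grid regions, keeping one
# entry per physical object.

def iou(a, b):
    """Intersection-over-union of two (left, top, right, bottom) boxes."""
    left, top = max(a[0], b[0]), max(a[1], b[1])
    right, bottom = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, right - left) * max(0, bottom - top)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0

def reconcile(detections, threshold=0.5):
    """detections: list of (category, box). Same-category boxes whose
    IoU exceeds the threshold are treated as one detection."""
    merged = []
    for cat, box in detections:
        if any(cat == c and iou(box, b) > threshold for c, b in merged):
            continue  # duplicate of an already-kept detection
        merged.append((cat, box))
    return merged

# A road spanning two regions is reported twice; it merges to one entry.
dets = [("road", (0, 0, 100, 10)), ("road", (0, 0, 95, 10)),
        ("tank", (50, 50, 60, 60))]
print(len(reconcile(dets)))  # 2
```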
AD #25 (Derivative of AD #15)
(as referenced in Section III.C, Experimentation)
Prospective Value-added Proposition: Uncertainty is explicitly addressed, wherein Step 1 illuminates the uncertain detections, Step 2 facilitates the winnowing of false positives/false negatives via triangulated reasoning, and Step 3 incorporates validation and reaffirmation. False Positives/False Negatives and Visual Attention are also preserved.
Step 1: Visual Observation & Grid Analysis
- Divide the image into a grid of regions for systematic scanning. List all observable visual features (shapes, objects, textures, edges, structural patterns) within each region. For each detection, record its approximate location.
- Classify each observation by confidence: certain or uncertain.
- If multiple regions detect the same feature, consolidate these observations.
Output:
- Observation Table: (Object/Shape, Region, Location, Uncertainty)
--------------------------------------------------
Step 2: Interpretation & Classification
- Cross-check the observations against known patterns or datasets to classify detected objects and structures.
- For each observation, assign a confidence rating (High=3/3, Medium=2/3, Low=1/3).
- Evaluate possible false positives/false negatives, ambiguous detections, and spatial relationships between objects.
Output:
- Annotated Image: Highlight objects, anomalies, and uncertainties.
- Object Table: (Category, Location, Confidence, Anomaly)
- Notes on False Positives/Negatives, Uncertainties, and Assumptions
--------------------------------------------------
Step 3: Verification & Final Synthesis
- Focus on objects with Medium/Low confidence. Use contextual and spatial reasoning to confirm or reject their classification.
- Update the Object Table, correcting any misclassifications, and reassess the image for overlooked features.
- Provide a final synthesis of findings, including any remaining uncertainties or unresolved issues.
Output:
- Updated Object Table (with refined confidence and anomalies).
- Final Annotated Image.
- Summary of Uncertainties/Remaining Questions.
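Step 3's verification pass above (re-checking Medium/Low confidence objects and correcting misclassifications) can be sketched as a filter-and-update over the Object Table. The table structure, the `confirm` callback, and the promotion-to-High-on-confirmation rule are all assumptions for illustration, not part of the prompt.

```python
# Illustrative sketch (not part of the prompt): Step 3 verification.
# Re-check Medium/Low confidence rows and confirm or reject each.

def verify(object_table, confirm):
    """object_table: list of dicts with 'category' and 'confidence'.
    confirm(row) -> bool stands in for the contextual/spatial
    reasoning the prompt asks the model to apply."""
    verified = []
    for row in object_table:
        if row["confidence"] in ("Medium (2/3)", "Low (1/3)"):
            if not confirm(row):
                continue  # rejected: treated as a false positive
            # Promoting confirmed rows to High is an assumed convention.
            row = {**row, "confidence": "High (3/3)"}
        verified.append(row)
    return verified

table = [
    {"category": "bridge", "confidence": "High (3/3)"},
    {"category": "shadow artifact", "confidence": "Low (1/3)"},
]
kept = verify(table, confirm=lambda r: r["category"] != "shadow artifact")
print(len(kept))  # 1
```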