Derivative Experimentation Scripts for AD #23
So that reviewers and readers can reproduce the results reported in this IEEE paper, the "Derivative Experimentation Scripts for AD #23" are provided below (rather than in the paper body) in Python, Go (Golang), and Rust:
Python Script for AD #23 (Derivative of AD #15)
(as referenced in Section III-C, Experimentation)
Prospective Efficacy Value-Added Proposition: Triangulated Evidence, Multi-Pass Visual Attention, and Uncertainty Detection Preserved; Clear and Concise (minimized instruction set); Prepended Strict Instructive.
import openai
import base64
import json
from PIL import Image
import io
# ------------------------
# Configuration for AD #23
# ------------------------
openai.api_key = "PERTINENT_OPENAI_API_KEY" # Pertinent API key
IMAGE_FILE = "AD23_input_image.png"
OUTPUT_JSON = "AD23_image_analysis_output.json"
ANNOTATED_IMAGE_FILE = "AD23_annotated_image.png"
# ---------------------------------------
# 1. Read and encode the image for AD #23
# ---------------------------------------
with open(IMAGE_FILE, "rb") as f:
    image_bytes = f.read()
image_base64 = base64.b64encode(image_bytes).decode("utf-8")
# ------------------------------------------------
# 2. AD #23 Reasoning and Value-Added Propositions
# ------------------------------------------------
prompt = """
Return results as structured sections or JSON when possible.
Task: Analyze the uploaded image and generate a simplified annotated version highlighting objects, structures, and anomalies.
Method: Use multi-pass visual attention. Divide the image into a grid and inspect each region sequentially. Detect objects, classify structures, and flag anomalies. Reconcile overlapping or duplicate detections across regions.
Evidence: Evaluate each detection using triangulated evidence (visual features, contextual reasoning, dataset/pattern agreement). Assign confidence: High (3/3), Medium (2/3), Low (1/3). Mark uncertain or ambiguous detections.
Quality Check: Identify potential false positives and false negatives. Note assumptions and uncertainties.
Output:
1. Annotated simplified image
2. Object table (category, location, confidence, anomaly)
3. Uncertain detections
4. False positives
5. False negatives
6. Assumptions
7. Final synthesis based on the most consistent evidence.
"""
# -------------------------------
# 3. Initiate the OpenAI API call
# -------------------------------
response = openai.Image.create(
    model="gpt-5.0-mini",
    prompt=prompt,
    images=[image_base64],
    size="1024x1024",
    response_format="b64_json"
)
# ----------------------------------------------
# 4. Extract the AD #23 annotated image/metadata
# ----------------------------------------------
annotated_b64 = response['data'][0]['b64_json']
annotated_bytes = base64.b64decode(annotated_b64)
# Save annotated image
with open(ANNOTATED_IMAGE_FILE, "wb") as f:
    f.write(annotated_bytes)
# Optional: Display image
img = Image.open(io.BytesIO(annotated_bytes))
img.show()
# ----------------------------------------
# 5. Save the AD #23-related JSON metadata
# ----------------------------------------
analysis_output = {
    "annotated_image_file": ANNOTATED_IMAGE_FILE,
    "object_detection_table": response['data'][0].get('objects', []),
    "false_positive_analysis": response['data'][0].get('false_positives', []),
    "false_negative_analysis": response['data'][0].get('false_negatives', []),
    "assumptions_uncertainties": response['data'][0].get('assumptions', []),
    "final_synthesis": response['data'][0].get('final_synthesis', "")
}
with open(OUTPUT_JSON, "w") as f:
    json.dump(analysis_output, f, indent=2)
print(f"Annotated image saved as: {ANNOTATED_IMAGE_FILE}")
print(f"Analysis metadata saved as: {OUTPUT_JSON}")
Go (Golang) Script for AD #23 (Derivative of AD #15)
(as referenced in Section III-C, Experimentation)
Prospective Efficacy Value-Added Proposition: Triangulated Evidence, Multi-Pass Visual Attention, and Uncertainty Detection Preserved; Clear and Concise (minimized instruction set); Prepended Strict Instructive.
package main
import (
    "bufio"
    "bytes"
    "encoding/json"
    "fmt"
    "io"
    "net/http"
    "os"
    "strings"
)
type ChatMessage struct {
    Role    string `json:"role"`
    Content string `json:"content"`
}

type ChatRequest struct {
    Model    string        `json:"model"`
    Messages []ChatMessage `json:"messages"`
}

type ChatResponseChoice struct {
    Message ChatMessage `json:"message"`
}

type ChatResponse struct {
    Choices []ChatResponseChoice `json:"choices"`
}
func generatePrompt(context string) string {
    return fmt.Sprintf(`
Return results as structured sections or JSON when possible.
Context: %s
Task: Analyze the uploaded image and generate a simplified annotated version highlighting objects, structures, and anomalies.
Method: Use multi-pass visual attention. Divide the image into a grid and inspect each region sequentially. Detect objects, classify structures, and flag anomalies. Reconcile overlapping or duplicate detections across regions.
Evidence: Evaluate each detection using triangulated evidence (visual features, contextual reasoning, dataset/pattern agreement). Assign confidence: High (3/3), Medium (2/3), Low (1/3). Mark uncertain or ambiguous detections.
Quality Check: Identify potential false positives and false negatives. Note assumptions and uncertainties.
Output:
1. Annotated simplified image
2. Object table (category, location, confidence, anomaly)
3. Uncertain detections
4. False positives
5. False negatives
6. Assumptions
7. Final synthesis based on the most consistent evidence.
`, context)
}
func callOpenAI(prompt string) (string, error) {
    apiKey := os.Getenv("PERTINENT_OPENAI_API_KEY")
    if apiKey == "" {
        return "", fmt.Errorf("PERTINENT_OPENAI_API_KEY environment variable not set")
    }
    requestBody := ChatRequest{
        Model: "gpt-5.0-mini",
        Messages: []ChatMessage{
            {Role: "system", Content: "Variant #15"},
            {Role: "user", Content: prompt},
        },
    }
    bodyBytes, err := json.Marshal(requestBody)
    if err != nil {
        return "", err
    }
    req, err := http.NewRequest("POST", "https://api.openai.com/v1/chat/completions", bytes.NewBuffer(bodyBytes))
    if err != nil {
        return "", err
    }
    req.Header.Set("Content-Type", "application/json")
    req.Header.Set("Authorization", "Bearer "+apiKey)
    client := &http.Client{}
    resp, err := client.Do(req)
    if err != nil {
        return "", err
    }
    defer resp.Body.Close()
    respBody, err := io.ReadAll(resp.Body)
    if err != nil {
        return "", err
    }
    if resp.StatusCode != 200 {
        return "", fmt.Errorf("OpenAI API error: %s", string(respBody))
    }
    var chatResp ChatResponse
    if err := json.Unmarshal(respBody, &chatResp); err != nil {
        return "", err
    }
    if len(chatResp.Choices) > 0 {
        return chatResp.Choices[0].Message.Content, nil
    }
    return "", fmt.Errorf("no response from OpenAI API")
}
func main() {
    fmt.Println("Enter a brief description of the image and analysis goal:")
    // Read the full line so multi-word descriptions are captured.
    reader := bufio.NewReader(os.Stdin)
    context, _ := reader.ReadString('\n')
    context = strings.TrimSpace(context)
    prompt := generatePrompt(context)
    response, err := callOpenAI(prompt)
    if err != nil {
        fmt.Println("Error:", err)
        return
    }
    fmt.Println("\n=== AD #23 ===\n")
    fmt.Println(response)
}
Rust Script for AD #23 (Derivative of AD #15)
(as referenced in Section III-C, Experimentation)
Prospective Efficacy Value-Added Proposition: Triangulated Evidence, Multi-Pass Visual Attention, and Uncertainty Detection Preserved; Clear and Concise (minimized instruction set); Prepended Strict Instructive.
use std::fs::File;
use std::io::{Read, Write};
use base64::{engine::general_purpose, Engine as _};
fn main() -> std::io::Result<()> {
    // ------------------------
    // Configuration for AD #23
    // ------------------------
    let image_path = "input_image.png";
    let output_prompt_file = "generated_prompt.txt";
    let grid_size = "3x3";
    let agents = "Agent A — Visual Detection Specialist, Agent B — Infrastructure/Engineering Analyst, Agent C — Context Analyst, Agent D — Dataset Verification Analyst, Agent E — Anomaly & Risk Evaluator";
    let custom_instructions = "";

    // ---------------------------------------
    // 1. Read and encode the image for AD #23
    // ---------------------------------------
    let mut file = File::open(image_path)?;
    let mut buffer = Vec::new();
    file.read_to_end(&mut buffer)?;
    let image_base64 = general_purpose::STANDARD.encode(&buffer);
    // ------------------------------------------------
    // 2. AD #23 Reasoning and Value-Added Propositions
    // ------------------------------------------------
    // The configured grid size, agent roster, and custom instructions are
    // interpolated into labeled positions in the prompt.
    let prompt = format!(
        r#"Return results as structured sections or JSON when possible.
Task: Analyze the uploaded image and generate a simplified annotated version highlighting objects, structures, and anomalies.
Method: Use multi-pass visual attention. Divide the image into a {} grid and inspect each region sequentially. Detect objects, classify structures, and flag anomalies. Reconcile overlapping or duplicate detections across regions.
Agents: {}
Evidence: Evaluate each detection using triangulated evidence (visual features, contextual reasoning, dataset/pattern agreement). Assign confidence: High (3/3), Medium (2/3), Low (1/3). Mark uncertain or ambiguous detections.
Quality Check: Identify potential false positives and false negatives. Note assumptions and uncertainties.
Additional Instructions: {}
Output:
1. Annotated simplified image
2. Object table (category, location, confidence, anomaly)
3. Uncertain detections
4. False positives
5. False negatives
6. Assumptions
7. Final synthesis based on the most consistent evidence.
[Image Base64: {}]
"#,
        grid_size, agents, custom_instructions, image_base64
    );
    // ------------------------------------
    // 3. Save the AD #23 generated prompt
    // ------------------------------------
    let mut output_file = File::create(output_prompt_file)?;
    output_file.write_all(prompt.as_bytes())?;
    println!("Generated prompt saved to {}", output_prompt_file);

    Ok(())
}
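The Rust variant stops at writing generated_prompt.txt rather than calling the API. As one possible follow-on step, the short Python driver below reads that file and submits it as a chat completion; it is a sketch only, assuming the same pre-1.0 openai SDK style used in the Python script above, with the API key as a placeholder and the model name carried over from that script.

# Sketch only: submit the prompt file produced by the Rust script as a chat
# completion. The API key is a placeholder; the model name follows the Python
# script above.
import openai

openai.api_key = "PERTINENT_OPENAI_API_KEY"

with open("generated_prompt.txt", "r") as f:
    prompt_text = f.read()

response = openai.ChatCompletion.create(
    model="gpt-5.0-mini",
    messages=[{"role": "user", "content": prompt_text}],
)
print(response["choices"][0]["message"]["content"])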
In brief, the Python script is useful for rapid prototyping (though slower at runtime), the Go (Golang) version offers high-throughput advantages, and the Rust version can deliver the highest performance and is well suited to bulk processing.