These sites cover technology, ideas, and experiments. I use modern tools—including AI—because they are useful. I do not use them to outsource responsibility. Every post ultimately stands on my judgment.
How AI is used here
Writing and editing support. AI may be used for drafting, restructuring, language polishing, outlining, or sanity-checking readability. It can also help with quick translations and alternative phrasings. The final text is curated, revised, and owned by me.
Research assistance. AI can help scan large amounts of material, extract candidate facts, propose angles, or generate summaries of sources. It is a speed tool, not an authority. If something matters, it must survive verification against primary sources or other reliable references.
Data and code work. AI may assist with code snippets, debugging ideas, or exploratory analysis. Anything that ends up published is tested and reviewed, and I assume responsibility for errors.
Automation. When content is produced from structured inputs (for example, a repeatable format driven by a known dataset), the goal is to reduce busywork and keep the interesting work human: interpretation, critique, and synthesis. If a post is fundamentally automated, that will be obvious from the format and context—even if there is no special label.
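To make that concrete, here is a minimal sketch of what "a repeatable format driven by a known dataset" can look like. The dataset file, its column names, and the output folder are hypothetical examples, not a description of any real pipeline used on these sites; the point is only that the mechanical step (filling a fixed template) is automated while the commentary stays human.

```python
# Minimal sketch: turn rows of a structured dataset into draft posts.
# "releases.csv", its columns (title, date, slug), and the "drafts" folder
# are hypothetical placeholders, not a real pipeline.
import csv
from pathlib import Path

TEMPLATE = """\
# {title}

Released: {date}

## Key facts (from the dataset)
{facts}

## Commentary
<!-- Written by a human: interpretation, critique, synthesis. -->
"""

def build_drafts(dataset: Path, out_dir: Path) -> None:
    out_dir.mkdir(parents=True, exist_ok=True)
    with dataset.open(newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            # The repetitive part: list every field from the dataset row.
            facts = "\n".join(f"- {key}: {value}" for key, value in row.items())
            draft = TEMPLATE.format(title=row["title"], date=row["date"], facts=facts)
            (out_dir / f"{row['slug']}.md").write_text(draft, encoding="utf-8")

if __name__ == "__main__":
    build_drafts(Path("releases.csv"), Path("drafts"))
```

The template deliberately leaves the "Commentary" section empty: the script only removes busywork, and everything that requires judgment is added by hand afterwards.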
Visuals and images
Most images on these sites are AI-generated. Some visuals (especially diagrams or explanatory graphics) are made by me. I generally do not add a special notice to distinguish between the two.
What matters is how to interpret them: unless a post explicitly says otherwise, images should be read as illustrative, not documentary. They are meant to set tone, convey an idea, or provide a conceptual visualization—not to serve as evidence that something happened exactly as depicted.
Accuracy and “confident nonsense”
AI can produce plausible text that is wrong. Because of that, AI output is never treated as a source. When a post contains concrete factual claims (numbers, quotes, timelines, attributions), I aim to verify them. If verification is uncertain, I try to state that uncertainty clearly rather than passing it off as certainty.
Corrections
Mistakes happen. When I find an error—whether introduced by me, by a source, or by an AI-assisted step—I correct it. If the change affects meaning, I prefer to note that a correction was made.
What will change, and what will not
Tools will keep evolving. The policy will evolve with them. The invariant part is simple: I use AI to move faster and explore more, not to lower standards or dodge accountability.