Visualize how different prompting strategies structure and tokenize their inputs before they are sent to the LLM
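The idea above can be sketched minimally: build the same task under two common prompting strategies (zero-shot and few-shot), then tokenize and display each resulting input. The whitespace tokenizer, strategy names, and helper functions below are illustrative assumptions, not part of the original; a real tool would use the target model's actual tokenizer (for example a BPE tokenizer).

```python
# Sketch: compare how two prompting strategies structure and tokenize
# the same task before it reaches the LLM. The whitespace tokenizer is
# a stand-in for a real subword tokenizer, used here for illustration.

def toy_tokenize(text: str) -> list[str]:
    """Naive whitespace tokenizer (illustrative only)."""
    return text.split()

def zero_shot(question: str) -> str:
    # Bare question, no demonstrations.
    return f"Q: {question}\nA:"

def few_shot(question: str) -> str:
    # Prepend worked examples before the actual question.
    examples = "Q: 2 + 2?\nA: 4\nQ: 3 + 5?\nA: 8\n"
    return examples + f"Q: {question}\nA:"

def visualize(name: str, prompt: str) -> None:
    tokens = toy_tokenize(prompt)
    print(f"{name}: {len(tokens)} tokens -> {tokens}")

question = "7 + 6?"
visualize("zero-shot", zero_shot(question))
visualize("few-shot", few_shot(question))
```

Running this shows the few-shot prompt consuming noticeably more tokens than the zero-shot one for the same question, which is exactly the kind of structural difference such a visualization makes apparent.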