Step-by-step guides for every platform. Learn how to create, configure, and share AI skills.
Already have a skill to share? Submit it directly to our directory.
Submit a Skill

OpenAI (ChatGPT): 3 guides
Go to chat.openai.com and click your profile → My GPTs → Create a GPT
In the 'Create' tab, describe what you want your GPT to do in natural language
Configure name, description, instructions, and conversation starters
Upload knowledge files (PDFs, docs) for your GPT to reference
Add Actions to connect external APIs (optional)
Set visibility: Only me, Anyone with a link, or Public
Click 'Create' and share the link
Tip: For best results, write clear and specific instructions. Include example outputs and edge cases in your instructions.
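As an illustration of that tip, an instruction block for a hypothetical changelog-writing GPT might look like this (the product, format, and edge cases are invented for the example):

```text
You are a release-notes writer for a SaaS product.
- Input: a list of merged pull request titles.
- Output: a Markdown changelog grouped under Added / Changed / Fixed.
- Edge cases: if a title is ambiguous, file it under Changed;
  if the list is empty, reply "No changes this release."
Example output:
## Added
- Dark mode toggle in settings
```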
Click your profile icon → Settings → Personalization → Custom Instructions
Fill in 'What would you like ChatGPT to know about you?' with context about your role and needs
Fill in 'How would you like ChatGPT to respond?' with output format preferences, tone, and constraints
These instructions apply to all new conversations automatically
Tip: Custom instructions are great for setting default behavior like 'Always respond in Turkish' or 'Always include code examples'.
In GPT Builder, go to the 'Configure' tab → Actions
Click 'Create new action' and paste your OpenAPI schema
Set the authentication method (None, API Key, or OAuth)
Test the action with sample requests
The GPT can now call your API during conversations
Tip: You can use services like Zapier or Make.com to create no-code API endpoints for your actions.
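A minimal OpenAPI schema for an Action might look like the sketch below; the weather endpoint and its URL are invented for illustration:

```yaml
openapi: 3.1.0
info:
  title: Weather Lookup (example)
  version: "1.0"
servers:
  - url: https://api.example.com   # hypothetical endpoint
paths:
  /weather:
    get:
      operationId: getWeather      # the GPT refers to actions by operationId
      summary: Get current weather for a city
      parameters:
        - name: city
          in: query
          required: true
          schema:
            type: string
      responses:
        "200":
          description: Current conditions
```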
Claude (Anthropic): 4 guides
Go to claude.ai and click 'Projects' in the sidebar
Click 'Create Project' and give it a name and description
Add custom instructions that define how Claude should behave in this project
Upload files (PDFs, code, docs) to the project knowledge base — Claude can reference up to 200K tokens
Start conversations within the project — Claude will use your instructions and files automatically
Tip: Projects are perfect for domain-specific work. Upload your company docs, style guides, or codebases for context-aware responses.
When using the Claude API, include a 'system' message at the beginning of your conversation
Define Claude's role, behavior, output format, and constraints in the system prompt
Include few-shot examples of desired inputs and outputs
Use XML tags like <instructions>, <context>, <output_format> for clear structure
Tip: Claude responds exceptionally well to structured prompts with XML tags. Be specific about what to include AND exclude.
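The XML-tag pattern above can be sketched in Python. The helper below just assembles a system prompt string to pass as the 'system' message; the role and rules shown are placeholders:

```python
def build_system_prompt(role: str, context: str, output_format: str) -> str:
    """Assemble a system prompt using the XML-tag structure Claude parses well."""
    return (
        f"<instructions>\nYou are {role}.\n</instructions>\n"
        f"<context>\n{context}\n</context>\n"
        f"<output_format>\n{output_format}\n</output_format>"
    )

# Example: a support-triage assistant (placeholder values)
system = build_system_prompt(
    role="a support-ticket triage assistant",
    context="Tickets come from the billing and onboarding teams.",
    output_format='Reply with a JSON object: {"priority": ..., "team": ...}',
)
print(system)
```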
Install Claude Code: npm install -g @anthropic-ai/claude-code
Create a CLAUDE.md file in your project root with instructions and context
Add custom slash commands in .claude/commands/ directory as markdown files
Each .md file becomes a /command — the file content is the prompt template
Use $ARGUMENTS placeholder for dynamic input
Tip: Claude Code skills are powerful for automating development workflows. Create commands for code review, testing, deployment, and more.
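Putting the steps above together, a hypothetical code-review command saved as .claude/commands/review.md might contain:

```markdown
Review the following file for bugs, unclear naming, and missing error
handling, then summarize your findings as a bullet list:

$ARGUMENTS
```

Running /review src/app.ts would substitute the file path for $ARGUMENTS.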
Claude Desktop and Claude Code support Model Context Protocol (MCP) tools
Scaffold your own server with npx @anthropic-ai/create-mcp-server, or install existing servers from npm
Configure in claude_desktop_config.json or .claude/settings.json
Claude can now use the MCP tools during conversations to access databases, APIs, files, etc.
Tip: MCP tools give Claude real-world capabilities. Connect databases, GitHub, Slack, and more.
Google Gemini: 4 guides
Go to aistudio.google.com and click 'Create new prompt'
Choose 'Structured Prompt' for consistent, template-based outputs
Define input columns (variables) and output columns (expected results)
Add example rows showing input-output pairs for few-shot learning
Test with new inputs and iterate on your examples
Save and share via the 'Get Code' button (Python, JavaScript, REST)
Tip: Structured prompts are ideal for classification, extraction, and transformation tasks where you need consistent output formats.
Click 'Create new prompt' → 'Chat Prompt'
Write system instructions defining the AI's role and behavior
Add tools (function declarations) if your prompt needs to call external APIs
Configure model settings: temperature, top-p, max output tokens
Test in the playground and export code when ready
Tip: Use the 'Safety settings' to customize content filtering for your specific use case.
Get your API key from aistudio.google.com → 'Get API Key'
Install the SDK: pip install google-genai (Python) or npm install @google/genai
Initialize the client with your API key
Call generateContent() (generate_content() in the Python SDK) for text, image, audio, and video inputs
Add tool declarations for function calling capabilities
Tip: Gemini supports multimodal inputs natively — send images, audio, and video alongside text for rich AI interactions.
For production, use Vertex AI in Google Cloud Console
Create an endpoint and deploy your tuned model or use Gemini directly
Set up authentication with service accounts
Configure autoscaling, monitoring, and logging
Use Vertex AI Pipelines for automated ML workflows
Tip: Vertex AI provides enterprise features like VPC, CMEK encryption, and SLA guarantees that AI Studio doesn't.
Antigravity: 3 guides
Open the Antigravity dashboard and click 'New Workflow'
Choose a trigger: Schedule, Webhook, Form Submission, or Event
Drag and drop action blocks from the sidebar: Send Email, API Call, Database Query, etc.
Connect blocks with arrows to define the flow
Add conditional branches (If/Else) for decision logic
Configure each block's settings (API endpoints, email templates, etc.)
Test the workflow with sample data before activating
Toggle the workflow to 'Active' to start running
Tip: Start simple with 2-3 blocks and gradually add complexity. Use the 'Test' button after each change.
Browse the Template Gallery for pre-built workflows
Click 'Use Template' to create a copy in your workspace
Customize the template: change trigger settings, update API keys, modify conditions
Connect your accounts (Slack, Gmail, CRM, etc.) when prompted
Test and activate
Tip: Templates are the fastest way to get started. Customize them rather than building from scratch.
Go to Settings → Integrations → Add New
Choose 'Custom API' for services not in the built-in list
Enter the API base URL and authentication details
Define the available actions (endpoints) with request/response schemas
Your custom integration now appears in the workflow builder sidebar
Tip: Use webhook blocks for real-time integrations with any service that supports webhooks.
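The webhook pattern is generic: the other service POSTs a JSON event to a URL you expose. As a sketch of what the receiving side does, here is a minimal JSON webhook receiver using only the Python standard library (the event shape is invented):

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class WebhookHandler(BaseHTTPRequestHandler):
    """Minimal receiver: parse a JSON POST body and acknowledge it."""

    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        event = json.loads(self.rfile.read(length) or b"{}")
        # A real workflow block would dispatch on the event type here.
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(json.dumps({"received": event.get("type")}).encode())

    def log_message(self, *args):  # keep console output quiet
        pass

def serve(port: int = 0) -> HTTPServer:
    """Bind the receiver (port 0 picks a free port); caller runs and shuts it down."""
    return HTTPServer(("127.0.0.1", port), WebhookHandler)
```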
Model Context Protocol (MCP): 4 guides
Find MCP servers on npm, GitHub, or this directory
Install: npm install -g @modelcontextprotocol/server-name (substituting the server you want, e.g. server-filesystem)
Configure in your AI client's settings file (e.g., claude_desktop_config.json)
Add the server command and any required environment variables
Restart your AI client — the tools become available automatically
The AI can now use the MCP tools during conversations
Tip: Popular MCP servers: filesystem, github, slack, postgres, sqlite, brave-search. Check our MCP category for 1,500+ options.
Initialize: npx @anthropic-ai/create-mcp-server my-server
Define tools in src/index.ts using the MCP SDK
Each tool needs: name, description, input schema (JSON Schema), and a handler function
Add resources for data the AI can read (files, database records, etc.)
Add prompts for reusable prompt templates
Test locally: npx tsx src/index.ts
Publish to npm for others to use: npm publish
Tip: Focus on clear tool descriptions — the AI uses these to decide when and how to call your tools.
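The three required pieces of a tool show up in its descriptor, sketched here as a plain Python dict in the shape an MCP tools listing uses; the search_notes tool is hypothetical, and its handler would live in your server code:

```python
# An MCP tool descriptor: name, description, and a JSON Schema for inputs.
search_notes_tool = {
    "name": "search_notes",
    "description": "Full-text search over the user's notes; returns matching titles.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "query": {"type": "string", "description": "Search terms"},
            "limit": {"type": "integer", "description": "Max results", "default": 10},
        },
        "required": ["query"],
    },
}
```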
For Claude Desktop: Edit ~/Library/Application Support/Claude/claude_desktop_config.json (Mac) or %APPDATA%/Claude/claude_desktop_config.json (Windows)
For Claude Code: Edit .claude/settings.json in your project or ~/.claude/settings.json globally
Add your server to the 'mcpServers' object with command and args
Set environment variables with the 'env' field for API keys
Use 'stdio' transport for local servers, 'sse' for remote ones
Tip: Use environment variables for API keys — never hardcode secrets in config files.
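Putting those steps together, a claude_desktop_config.json entry for a hypothetical server might look like this; the server name, package, and key value are placeholders, and the key should come from your environment or secret manager rather than be committed anywhere:

```json
{
  "mcpServers": {
    "notes": {
      "command": "npx",
      "args": ["-y", "@example/mcp-server-notes"],
      "env": {
        "NOTES_API_KEY": "your-key-here"
      }
    }
  }
}
```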
MCP is an open standard — it works with any compatible AI client
For custom integrations, use the MCP client SDK (@modelcontextprotocol/sdk)
Connect to MCP servers programmatically in your own applications
Bridge MCP to OpenAI/Gemini by translating MCP tool schemas to their function calling format
Use MCP proxy servers to expose MCP tools via REST APIs
Tip: MCP's open standard means your tools work across AI platforms without rewriting.
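Bridging to OpenAI-style function calling is mostly a field-renaming exercise, since MCP's inputSchema and OpenAI's parameters are both JSON Schema. A minimal sketch, assuming the common descriptor shapes rather than any particular SDK:

```python
def mcp_tool_to_openai(tool: dict) -> dict:
    """Translate an MCP tool descriptor into OpenAI's function-calling shape."""
    return {
        "type": "function",
        "function": {
            "name": tool["name"],
            "description": tool.get("description", ""),
            # Both sides use JSON Schema, so the schema passes through as-is.
            "parameters": tool.get("inputSchema", {"type": "object", "properties": {}}),
        },
    }

# Hypothetical MCP tool descriptor used for the example:
mcp_tool = {
    "name": "search_notes",
    "description": "Search the user's notes.",
    "inputSchema": {
        "type": "object",
        "properties": {"query": {"type": "string"}},
        "required": ["query"],
    },
}
openai_tool = mcp_tool_to_openai(mcp_tool)
```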
Submit your skill to our directory and help others discover it.