
Comprehensive Perplexity Module

All-in-one skill for performing AI-powered, real-time searches. Includes structured workflows, validation checks, and reusable patterns for scientific research.

Skill · Cliptics · scientific · v1.0.0 · MIT


Integrate with the Perplexity AI search API to add real-time, citation-backed web search capabilities to your applications. This skill covers API integration, search queries with source attribution, conversational search sessions, and building research-augmented AI applications.

When to Use This Skill

Choose Comprehensive Perplexity Module when you need to:

  • Add real-time web search with AI-synthesized answers to your application
  • Build research tools that return answers with source citations
  • Create conversational search interfaces that maintain context across queries
  • Augment LLM responses with current, factual information from the web

Consider alternatives when:

  • You need raw search results without AI synthesis (use Google Custom Search or Bing API)
  • You need academic literature specifically (use OpenAlex or Semantic Scholar)
  • You need your own RAG pipeline with custom documents (use vector databases)

Quick Start

```shell
pip install requests python-dotenv
```

```python
import os

import requests
from dotenv import load_dotenv

load_dotenv()  # reads PERPLEXITY_API_KEY from a local .env file
PERPLEXITY_API_KEY = os.getenv("PERPLEXITY_API_KEY")

def search(query, model="llama-3.1-sonar-large-128k-online"):
    """Search using Perplexity AI API."""
    response = requests.post(
        "https://api.perplexity.ai/chat/completions",
        headers={
            "Authorization": f"Bearer {PERPLEXITY_API_KEY}",
            "Content-Type": "application/json"
        },
        json={
            "model": model,
            "messages": [
                {"role": "system", "content": "Be precise and cite sources."},
                {"role": "user", "content": query}
            ]
        }
    )
    data = response.json()
    answer = data["choices"][0]["message"]["content"]
    citations = data.get("citations", [])
    return answer, citations

answer, sources = search("What are the latest breakthroughs in quantum computing 2024?")
print(answer)
print(f"\nSources: {sources}")
```

Core Concepts

Available Models

| Model | Description | Best For |
|---|---|---|
| llama-3.1-sonar-small-128k-online | Fast, lightweight online search | Quick lookups |
| llama-3.1-sonar-large-128k-online | Balanced accuracy and speed | General research |
| llama-3.1-sonar-huge-128k-online | Highest quality answers | Deep research |
| llama-3.1-sonar-small-128k-chat | Offline conversation | No web search needed |
| llama-3.1-sonar-large-128k-chat | Offline with more capacity | Complex reasoning |
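As a rough heuristic for choosing among these, a small dispatch helper can map task types to model names. The task categories below are our own labels for illustration; only the model names come from the table above.

```python
# Hypothetical helper: pick a Perplexity model by task type.
# The task-type keys are illustrative; the model names are from
# the table above.
MODEL_BY_TASK = {
    "quick_lookup": "llama-3.1-sonar-small-128k-online",
    "general_research": "llama-3.1-sonar-large-128k-online",
    "deep_research": "llama-3.1-sonar-huge-128k-online",
    "offline_chat": "llama-3.1-sonar-small-128k-chat",
}

def pick_model(task: str) -> str:
    """Return a model name for a task type, defaulting to the large online model."""
    return MODEL_BY_TASK.get(task, "llama-3.1-sonar-large-128k-online")
```

An unknown task type falls back to the balanced large online model, which is a safe default for most research queries.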

Research Assistant with Citations

```python
import requests
import json
from datetime import datetime

class PerplexityResearcher:
    def __init__(self, api_key):
        self.api_key = api_key
        self.base_url = "https://api.perplexity.ai/chat/completions"
        self.conversation_history = []

    def research(self, query, follow_up=False):
        """Conduct research with citation tracking."""
        messages = [
            {"role": "system", "content": (
                "You are a research assistant. Provide detailed, "
                "factual answers with specific data points. "
                "Always cite your sources."
            )}
        ]
        if follow_up and self.conversation_history:
            messages.extend(self.conversation_history)
        messages.append({"role": "user", "content": query})

        response = requests.post(
            self.base_url,
            headers={
                "Authorization": f"Bearer {self.api_key}",
                "Content-Type": "application/json"
            },
            json={
                "model": "llama-3.1-sonar-large-128k-online",
                "messages": messages,
                "temperature": 0.2,
                "max_tokens": 2048,
                "return_citations": True
            }
        )
        data = response.json()
        answer = data["choices"][0]["message"]["content"]
        citations = data.get("citations", [])

        # Track conversation so follow-up queries keep context
        self.conversation_history.append(
            {"role": "user", "content": query}
        )
        self.conversation_history.append(
            {"role": "assistant", "content": answer}
        )

        return {
            "answer": answer,
            "citations": citations,
            "timestamp": datetime.now().isoformat(),
            "query": query
        }

    def multi_angle_research(self, topic, angles=None):
        """Research a topic from multiple angles."""
        if angles is None:
            angles = [
                f"What is {topic} and what are the key facts?",
                f"What are the latest developments in {topic}?",
                f"What are the main challenges and controversies around {topic}?",
                f"What do experts predict about the future of {topic}?"
            ]
        results = []
        for angle in angles:
            result = self.research(angle)
            results.append(result)
        return results

# Usage
researcher = PerplexityResearcher(PERPLEXITY_API_KEY)
findings = researcher.multi_angle_research("mRNA vaccine technology")
for f in findings:
    print(f"\nQ: {f['query']}")
    print(f"A: {f['answer'][:200]}...")
    print(f"Sources: {len(f['citations'])}")
```

Configuration

| Parameter | Description | Default |
|---|---|---|
| model | Perplexity model to use | "llama-3.1-sonar-large-128k-online" |
| temperature | Response randomness (0-2) | 0.2 |
| max_tokens | Maximum response length | 1024 |
| return_citations | Include source URLs | true |
| search_recency_filter | Time filter (day, week, month, year) | None |
| top_p | Nucleus sampling parameter | 0.9 |
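These parameters can be combined into a single request payload. The `build_payload` helper below is our own sketch, not part of any Perplexity SDK; it simply assembles the JSON body that the earlier examples pass to `requests.post`.

```python
def build_payload(query, model="llama-3.1-sonar-large-128k-online",
                  temperature=0.2, max_tokens=1024, recency=None, top_p=0.9):
    """Assemble a chat-completions request body using the parameters above."""
    payload = {
        "model": model,
        "messages": [
            {"role": "system", "content": "Be precise and cite sources."},
            {"role": "user", "content": query},
        ],
        "temperature": temperature,
        "max_tokens": max_tokens,
        "top_p": top_p,
        "return_citations": True,
    }
    if recency:  # one of "day", "week", "month", "year"
        payload["search_recency_filter"] = recency
    return payload
```

Keeping the optional `search_recency_filter` out of the payload unless requested avoids sending a null filter on every call.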

Best Practices

  1. Use low temperature for factual queries — Set temperature=0.1-0.2 for research and fact-finding queries. Higher temperatures introduce variation that can reduce factual accuracy. Only increase temperature for creative or brainstorming queries.

  2. Leverage conversation history for follow-ups — Send previous messages in the conversation to enable follow-up questions like "Can you elaborate on point 3?" without repeating context. This produces more coherent research sessions.

  3. Apply recency filters for time-sensitive queries — Use search_recency_filter="week" for current events or "month" for recent developments. Without a filter, the model may include outdated information alongside current data.

  4. Validate citations before publishing — Perplexity provides source URLs, but always verify that the cited source actually supports the claimed fact. AI synthesis can occasionally misattribute information or draw incorrect inferences from sources.

  5. Cache responses for repeated queries — API calls cost money and take time. Cache responses with a TTL appropriate for your use case (hours for current events, days for stable topics). Use the query string as the cache key.
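The caching advice in point 5 can be sketched as a small in-memory TTL cache keyed on the query string. The fetch function is injected, so any search helper (such as `search()` from the Quick Start) can be plugged in; this is a minimal sketch, not a production cache.

```python
import time

class TTLCache:
    """Minimal in-memory cache with per-entry expiry, keyed by query string."""

    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self.store = {}  # query -> (expires_at, value)

    def get_or_fetch(self, query, fetch):
        now = time.time()
        hit = self.store.get(query)
        if hit and hit[0] > now:
            return hit[1]  # still fresh: skip the API call
        value = fetch(query)  # e.g. the search() helper from Quick Start
        self.store[query] = (now + self.ttl, value)
        return value
```

An hour-scale TTL (`ttl_seconds=3600`) suits current-events queries; multi-day TTLs suit stable topics.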

Common Issues

API returns 401 Unauthorized — Verify your API key is correct and hasn't expired. Perplexity API keys are separate from Perplexity Pro subscriptions. Check that the key is set in your environment variables without leading/trailing whitespace.

Answers lack specific citations — Some queries return answers without inline citations. Add explicit instructions in the system prompt: "Cite specific sources for each claim." Also try the larger model (sonar-huge) which tends to provide more detailed source attribution.

Rate limiting on high-volume queries — Perplexity enforces rate limits that vary by plan tier. Implement exponential backoff: wait 1s after the first 429 response, then 2s, 4s, etc. For bulk research, space queries at least 1 second apart to stay within limits.
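The backoff schedule described above (1s, then 2s, 4s, ...) can be sketched as a retry wrapper. Here `do_request` stands in for any function returning an HTTP status code and body, and `sleep` is injectable so the wrapper can be tested without real delays.

```python
import time

def with_backoff(do_request, max_retries=5, base_delay=1.0, sleep=time.sleep):
    """Retry do_request() on 429 responses with exponential backoff.

    do_request must return (status_code, body). sleep is injectable for tests.
    """
    for attempt in range(max_retries):
        status, body = do_request()
        if status != 429:
            return status, body
        sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, ...
    return do_request()  # final attempt after the last wait
```

Wrapping the `requests.post` call from the earlier examples in a closure and passing it to `with_backoff` keeps the retry policy separate from the request logic.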
