Skill · Cliptics · scientific · v1.0.0 · MIT

LabArchives Integration Elite

Integrate with the LabArchives electronic lab notebook platform for automated research documentation, data management, and experiment tracking. This skill enables programmatic access to notebooks, entries, attachments, and folder structures through the LabArchives REST API.

When to Use This Skill

Choose LabArchives Integration Elite when you need to:

  • Automate lab notebook entry creation from experimental pipelines
  • Upload instrument data files and analysis results to LabArchives
  • Build custom integrations between LabArchives and LIMS or data analysis tools
  • Generate reports from notebook entries across multiple experiments

Consider alternatives when:

  • You need a general electronic lab notebook without API integration (use the platform directly)
  • You need version-controlled code notebooks (use Jupyter or Git-based workflows)
  • You need inventory management without lab notebook features (use a dedicated LIMS)

Quick Start

```bash
# Install required packages
pip install requests python-dotenv
```
```python
import hashlib
import hmac
import time

import requests


class LabArchivesClient:
    BASE_URL = "https://api.labarchives.com/api"

    def __init__(self, access_key, password):
        self.access_key = access_key
        self.password = password

    def _sign_request(self, method, params):
        """Generate an HMAC-SHA1 signature for API authentication."""
        sig_string = f"{method}\n{params}\n{int(time.time())}"
        return hmac.new(
            self.password.encode(),
            sig_string.encode(),
            hashlib.sha1,
        ).hexdigest()

    def get_notebooks(self):
        """Retrieve all notebooks for the authenticated user."""
        response = requests.get(
            f"{self.BASE_URL}/notebooks",
            params={
                "access_key_id": self.access_key,
                # Exact signing parameter names vary by API version.
                "sig": self._sign_request("get_notebooks", self.access_key),
            },
            timeout=30,
        )
        response.raise_for_status()
        return response.json()

    def create_entry(self, notebook_id, folder_id, title, content):
        """Create a new notebook entry."""
        response = requests.post(
            f"{self.BASE_URL}/entries",
            json={
                "notebook_id": notebook_id,
                "folder_id": folder_id,
                "title": title,
                "content": content,
            },
            params={
                "access_key_id": self.access_key,
                "sig": self._sign_request("create_entry", self.access_key),
            },
            timeout=30,
        )
        response.raise_for_status()
        return response.json()


# Initialize client
client = LabArchivesClient("your-access-key", "your-password")
notebooks = client.get_notebooks()
```

Core Concepts

API Endpoints

| Endpoint | Method | Description |
| --- | --- | --- |
| `/notebooks` | GET | List all notebooks |
| `/notebooks/{id}/folders` | GET | List folders in a notebook |
| `/entries` | POST | Create a new entry |
| `/entries/{id}` | PUT | Update an existing entry |
| `/entries/{id}/attachments` | POST | Upload a file attachment |
| `/search` | GET | Search across notebooks |
| `/reports/generate` | POST | Generate a notebook report |
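The update endpoint can be sketched as a small helper in the same style as the Quick Start client. The payload field names here mirror `create_entry` and are assumptions, not a documented schema:

```python
BASE_URL = "https://api.labarchives.com/api"


def build_entry_update(entry_id, title=None, content=None):
    """Build the URL and JSON payload for PUT /entries/{id}.

    Only the fields actually supplied are included, so an update can
    change the title, the content, or both without clobbering the rest.
    """
    payload = {}
    if title is not None:
        payload["title"] = title
    if content is not None:
        payload["content"] = content
    return f"{BASE_URL}/entries/{entry_id}", payload


def update_entry(access_key, entry_id, **fields):
    """Send the update and return the parsed JSON response."""
    import requests  # deferred so the builder above stays dependency-free

    url, payload = build_entry_update(entry_id, **fields)
    response = requests.put(url, json=payload, params={"access_key_id": access_key})
    response.raise_for_status()
    return response.json()
```

Building the URL and payload in a separate function keeps the request logic trivially testable without hitting the network.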

Automated Experiment Logging

```python
import json
from datetime import datetime


def log_experiment(client, notebook_id, folder_id, experiment_data):
    """Log a complete experiment with metadata and results."""
    title = (
        f"Experiment - {experiment_data['name']} - "
        f"{datetime.now().strftime('%Y-%m-%d')}"
    )

    # Format content as structured HTML
    parameter_rows = "".join(
        f"<tr><td>{p['name']}</td><td>{p['value']}</td><td>{p['unit']}</td></tr>"
        for p in experiment_data.get("parameters", [])
    )
    content = f"""
    <h2>Experiment: {experiment_data['name']}</h2>
    <p><strong>Date:</strong> {datetime.now().isoformat()}</p>
    <p><strong>Operator:</strong> {experiment_data.get('operator', 'N/A')}</p>
    <h3>Protocol</h3>
    <p>{experiment_data.get('protocol', '')}</p>
    <h3>Parameters</h3>
    <table border="1">
    <tr><th>Parameter</th><th>Value</th><th>Unit</th></tr>
    {parameter_rows}
    </table>
    <h3>Results</h3>
    <pre>{json.dumps(experiment_data.get('results', {}), indent=2)}</pre>
    <h3>Observations</h3>
    <p>{experiment_data.get('observations', '')}</p>
    """

    entry = client.create_entry(notebook_id, folder_id, title, content)

    # Upload any data files; upload_attachment corresponds to the
    # POST /entries/{id}/attachments endpoint (not shown in Quick Start).
    for filepath in experiment_data.get("data_files", []):
        client.upload_attachment(entry["id"], filepath)

    return entry


# Example usage
experiment = {
    "name": "PCR Amplification - Gene X",
    "operator": "Dr. Smith",
    "protocol": "Standard PCR with Taq polymerase",
    "parameters": [
        {"name": "Annealing Temp", "value": "58", "unit": "°C"},
        {"name": "Cycles", "value": "35", "unit": "cycles"},
        {"name": "Template", "value": "50", "unit": "ng"},
    ],
    "results": {"bands_detected": 1, "size_bp": 1250},
    "observations": "Clean single band at expected size",
}

log_experiment(client, "nb-123", "folder-456", experiment)
```

Configuration

| Parameter | Description | Default |
| --- | --- | --- |
| `access_key_id` | LabArchives API access key | Required |
| `password` | Account password for signing | Required |
| `base_url` | API base URL | `"https://api.labarchives.com/api"` |
| `timeout` | Request timeout in seconds | `30` |
| `max_attachment_size` | Maximum upload file size (MB) | `250` |
| `auto_versioning` | Create entry versions on update | `true` |
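These parameters can be collected into a single client config. A minimal sketch, assuming `LABARCHIVES_*` environment variable names (which this skill does not prescribe); the `python-dotenv` package installed in the Quick Start can populate `os.environ` from a local `.env` file first:

```python
import os

# Defaults from the configuration table above.
DEFAULTS = {
    "base_url": "https://api.labarchives.com/api",
    "timeout": 30,
    "max_attachment_size": 250,  # MB
    "auto_versioning": True,
}


def load_config(env=None):
    """Merge the table defaults with LABARCHIVES_* environment variables.

    Raises KeyError if either required credential is missing, so a
    misconfigured pipeline fails at startup rather than mid-run.
    """
    env = os.environ if env is None else env
    config = dict(DEFAULTS)
    config["access_key_id"] = env["LABARCHIVES_ACCESS_KEY"]
    config["password"] = env["LABARCHIVES_PASSWORD"]
    if "LABARCHIVES_TIMEOUT" in env:
        config["timeout"] = int(env["LABARCHIVES_TIMEOUT"])
    return config
```

Keeping credentials in the environment rather than in code also avoids committing them to a shared notebook or repository.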

Best Practices

  1. Structure notebooks by project — Create separate notebooks for each research project and use a consistent folder hierarchy (e.g., Protocols, Raw Data, Analysis, Reports). This makes cross-referencing and auditing straightforward.

  2. Attach raw data files directly — Upload instrument output files (CSV, FASTA, images) as attachments rather than copying data into entry text. This preserves the original file format and enables reproducibility.

  3. Use templates for recurring experiments — Create entry templates with pre-filled sections (Protocol, Materials, Parameters, Results, Notes) so that all experiments of the same type have a consistent structure.

  4. Implement error handling for API calls — Network timeouts and rate limits are common with cloud APIs. Wrap all API calls in try/except blocks with retry logic and log failures so no experimental data is lost.

  5. Maintain an audit trail — Enable auto-versioning on entries and avoid deleting old entries. Regulatory compliance (GLP, GxP) requires a complete record of all modifications with timestamps and user identity.
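The error-handling practice above can be sketched as a generic retry wrapper; the exception types and backoff schedule here are illustrative choices, not part of the LabArchives API:

```python
import time


def with_retries(call, attempts=3, backoff=2.0,
                 retry_on=(TimeoutError, ConnectionError)):
    """Run `call` (a zero-argument function), retrying transient failures.

    Waits backoff, 2*backoff, 4*backoff, ... seconds between attempts.
    With the requests library you would pass
    retry_on=(requests.Timeout, requests.ConnectionError) instead.
    """
    last_exc = None
    for attempt in range(attempts):
        try:
            return call()
        except retry_on as exc:
            last_exc = exc
            if attempt < attempts - 1:
                time.sleep(backoff * (2 ** attempt))
    raise last_exc
```

Usage is e.g. `notebooks = with_retries(client.get_notebooks)`; log the final exception before re-raising so no experimental data is silently dropped.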

Common Issues

Authentication signature mismatch — The HMAC-SHA1 signature must include the exact method name and parameters in the correct order. Double-check that your timestamp is in UTC seconds (not milliseconds) and that the signing string uses newline separators between components.
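Both failure modes are easy to check in isolation, assuming the signing scheme from the Quick Start client (method, parameters, and timestamp joined by newlines); the method and parameter values below are hypothetical:

```python
import time

method = "get_notebooks"          # hypothetical method name
params = "access_key_id=AKID"     # hypothetical parameter string

ts = int(time.time())             # correct: UTC epoch seconds (10 digits today)
bad_ts = int(time.time() * 1000)  # wrong for signing: milliseconds (13 digits)

# Newline separators between the three components.
sig_string = f"{method}\n{params}\n{ts}"
assert sig_string.count("\n") == 2
assert len(str(ts)) == 10 and len(str(bad_ts)) == 13
```

The digit count is a quick way to spot the seconds-vs-milliseconds bug in request logs.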

Large file uploads timing out — Files over 100 MB often hit server-side timeouts. Split large datasets into smaller chunks or compress them before uploading. Use multipart upload when available, and set your client timeout to at least 120 seconds for large files.
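Compression before upload needs only the standard library; a sketch (the `.gz` naming convention is a choice here, not a LabArchives requirement):

```python
import gzip
import shutil
from pathlib import Path


def gzip_for_upload(filepath, level=6):
    """Compress a data file to `<name>.gz` beside the original.

    Returns the compressed path. Text-based instrument formats (CSV,
    FASTA) typically shrink severalfold, which keeps uploads well under
    server-side timeout limits.
    """
    src = Path(filepath)
    dst = src.with_suffix(src.suffix + ".gz")
    with open(src, "rb") as f_in, gzip.open(dst, "wb", compresslevel=level) as f_out:
        shutil.copyfileobj(f_in, f_out)
    return dst
```

Attach the returned path with the attachments endpoint as usual; the original file stays on disk untouched.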

Notebook search returning stale results — LabArchives indexes notebook entries asynchronously, so recently created or updated entries may not appear in search results immediately. Wait 30-60 seconds after creating entries before searching, or query entries directly by ID when you need immediate access.
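When polling is unavoidable, a small grace-period loop keeps pipelines robust; `search_fn` stands for any callable wrapping the `/search` endpoint:

```python
import time


def search_with_grace(search_fn, query, max_wait=60, interval=10):
    """Poll `search_fn(query)` until it returns hits or `max_wait` seconds pass.

    Useful right after creating entries, while the asynchronous search
    index catches up; returns whatever the final call produced.
    """
    deadline = time.monotonic() + max_wait
    while True:
        hits = search_fn(query)
        if hits or time.monotonic() >= deadline:
            return hits
        time.sleep(interval)
```

For entries your own pipeline just created, fetching by ID directly remains the faster and more reliable option.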
