Cherith

Senior Full Stack Engineer (GenAI)

Labs · Remote · Full-time

You will support the AI Lead and Senior Data Analyst by building and shipping the core product and platform capabilities for the Cherith GRTI Search Platform. That includes the core search experience and APIs (retrieval pipelines, reranking, guardrails, and UI), plus the surrounding workflows that make it production-ready for multiple partners (onboarding, content management, access controls, analytics, and integrations).

Who We Are

We apply cutting-edge technology in AI, Machine Learning, and Natural Language Understanding to Faith, Discipleship, and Christian Living, backed by rigorous measurement and responsible data practices. We build useful applications for personal spiritual growth, design tools that strengthen faith communities such as small groups and churches, and offer our expertise to international mission organizations.

Our AI Initiative

Our AI mission is to place faithful AI in the hands of trusted pastors and ministries so that their impact will be broader and deeper. To that end, we are researching and developing projects where trust, accuracy, and governance are first-class requirements:

  • A platform to support the GRTI network of ministries and organizations. This platform is a central, searchable library of biblical resources, alongside integrations that bring search into GRTI partner applications and address their core pain points. We will also provide analytics so partners can understand usage, content coverage, and quality over time.
  • A mature profiling algorithm and recommendation system that understands user behaviour and ensures the technical system aids spiritual growth rather than detracting from it, using a centred-set approach to profiling with careful safeguards.
  • Fine-tuning a Christian LLM to further enhance our AI efforts and improve accuracy within our existing and future systems, with strong data stewardship and clear evaluation standards.

About the Role

You are a product-minded engineer who enjoys owning features end-to-end, from data contracts to front-end polish. You will collaborate closely with the AI Lead on quality and safety, and with the Senior Data Analyst on metrics, dashboards, and partner reporting. You will also support implementation work for centred-set profiling and recommendations, and contribute engineering support for the Christian LLM lifecycle (data workflows, evaluation hooks, deployment practices), without needing to be the primary model researcher.

Outcomes You Will Drive in the First 12 Months

  • Ship partner-friendly platform features (search modalities, onboarding, content upload, dashboards, usage metrics, and admin tools) behind flags with safe rollout and rollback.
  • Support a production-ready, multi-tenant MVP with solid partner isolation, privacy-aware access controls, and clear auditability.
  • Improve answer precision and faithfulness through hybrid retrieval and reranker iterations, measured by automated evaluations on a monthly review cycle.
  • Reduce p95 latency and cost per query via caching, efficient context construction, and cost and usage guardrails.
  • Maintain healthy DORA metrics with small, frequent, reversible deployments, backed by strong observability and incident-ready runbooks.

What You Will Do

  • Build and operate RAG services in Python (APIs, retrieval, reranking, prompt orchestration) with guardrails and safe rendering to mitigate prompt injection, insecure output handling, and data leakage.
  • Deliver multi-tenant platform foundations (tenant configuration, partner isolation, privacy-aware access controls), plus partner integration options (APIs, SDK patterns, embed-friendly components) with rate limits and clear documentation.
  • Own ingestion and content management workflows end-to-end (partner onboarding, upload, parsing, chunking, indexing, re-indexing, corpus health, and failure handling).
  • Partner with the AI Lead to implement evaluation hooks, regression tests, release gates, and quality, safety, and latency monitoring across CI and deployments.
  • Partner with the Senior Data Analyst to instrument analytics and deliver partner-facing dashboards for usage, quality trends, content coverage, and onboarding health.
  • Build and maintain an accessible React UI across the product (search, onboarding, content overview, theological judge views, metrics, admin experiences).
  • Improve performance and cost with practical engineering (caching, batching, streaming, retrieval efficiency) and guardrails to prevent runaway usage.

What You Will Bring

  • 7 or more years building production web services and user interfaces, with strong end-to-end ownership.
  • 5 or more years delivering production-ready Python and React.
  • 5 or more years of experience with search or retrieval systems (RAG, BM25, Lucene, Elasticsearch, or Solr), plus vector search or embeddings.
  • Strong API design skills (clear contracts, versioning, backwards compatibility) and experience building reliable backend services.
  • Experience shipping multi-tenant or multi-customer systems with real access controls, privacy constraints, and auditability.
  • Practical strength in testing (unit and integration), CI, and observability (metrics, tracing) tied to user experience.
  • Security-minded development habits for user-generated content and external data sources.
  • Comfort co-engineering with AI tools (Copilot, Cursor, Windsurf, or similar) and working in Agile and cloud environments (GCP, AWS, Azure).

Nice to Have

  • Strong grasp of modern AI patterns (LLMs, SLMs, embeddings, agents, multimodal, reasoning-first models) and practical orchestration.
  • Experience with hybrid retrieval and reranking (for example cross-encoders) and prompt or context optimisation.
  • Experience implementing AI governance in production (audit logs, prompt and model versioning, release gates, lightweight risk reviews).
  • Familiarity with LLM application security best practices (for example OWASP guidance), including abuse monitoring and cost and usage controls.
  • Experience building evaluation harnesses for LLM systems, including offline regression sets and production monitoring.
  • Experience supporting fine-tuning delivery workflows (dataset hygiene, eval-driven iteration, safe rollout), even if not the primary ML researcher.
  • Background in accessibility or design systems.
  • Bachelor's degree or equivalent practical experience.

Location and Travel

U.S.-based remote. Possible minimal travel for team onsites or partner sessions.

Interested in this role?

Send us a note with a bit about yourself and why this role caught your eye. No formal cover letter required.

Apply via Email