Privacy-Preserving Prompt Processing Framework for Cloud-Edge Collaborative LLMs on KubeEdge-Ianvs #204

@shenjiaxing

Description:
With the widespread adoption of Large Language Models (LLMs) across industries, user privacy protection has emerged as a critical challenge. Traditional cloud-based LLM deployments require users to upload sensitive prompts to remote servers, creating significant privacy risks, while purely edge-deployed lightweight models offer limited performance. This project aims to develop a privacy-preserving cloud-edge collaborative inference framework based on KubeEdge-Ianvs that applies irreversible transformations to sensitive prompts at the edge, ensuring that the original data cannot be reconstructed even under state-of-the-art embedding inversion attacks, while maintaining model utility. The framework will address the privacy-utility tradeoff in collaborative LLM inference and provide quantitative evaluation methods.
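The issue leaves the exact transformation open. One well-studied candidate for an irreversible edge-side step is metric (dχ-) differential privacy applied to token embeddings before they leave the device; the sketch below illustrates that idea under that assumption. The function name, the default epsilon, and the mechanism choice are illustrative, not part of Ianvs.

```python
import torch

def privatize_embeddings(embeddings: torch.Tensor, epsilon: float = 10.0) -> torch.Tensor:
    """Perturb token embeddings at the edge so the cloud never sees raw representations.

    Sketch of the multivariate-Laplace mechanism used for metric differential
    privacy on embeddings: the noise direction is uniform on the unit sphere
    and the noise magnitude is Gamma(d, epsilon)-distributed. Larger epsilon
    means less noise (more utility, weaker privacy). Hypothetical helper,
    not an Ianvs API.
    """
    d = embeddings.shape[-1]
    # Uniform directions: normalize Gaussian samples onto the unit sphere.
    directions = torch.randn_like(embeddings)
    directions = directions / directions.norm(dim=-1, keepdim=True)
    # Noise magnitudes calibrated to the privacy budget epsilon.
    magnitudes = torch.distributions.Gamma(float(d), epsilon).sample(embeddings.shape[:-1])
    return embeddings + magnitudes.unsqueeze(-1).to(embeddings.dtype) * directions
```

The privatized embeddings (or tokens re-mapped to their nearest vocabulary entries) would then be sent to the cloud model, while the original prompt never leaves the edge device.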

Expected Outcomes:

  1. Implement an end-to-end privacy-preserving cloud-edge collaborative LLM inference framework in KubeEdge-Ianvs, supporting separation of edge-side prompt processing and cloud-based model inference
  2. Design and implement a set of privacy-utility tradeoff evaluation methods, including model utility metrics and privacy protection metrics (a sketch of both appears after this list)
  3. Provide resource optimization solutions adapted to different edge device capabilities. (Optional)
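
As a purely illustrative example of what the evaluation methods in item 2 could measure, the sketch below pairs a utility proxy (agreement between predictions on clean and privatized inputs) with a privacy proxy (the token recovery rate of a nearest-neighbour embedding inversion attack). All names and metric choices here are assumptions, not a specification.

```python
import torch
import torch.nn.functional as F

def utility_score(clean_logits: torch.Tensor, private_logits: torch.Tensor) -> float:
    """Utility proxy: how often the model's prediction is unchanged by
    privatization (higher is better)."""
    return (clean_logits.argmax(-1) == private_logits.argmax(-1)).float().mean().item()

def inversion_risk(private_emb: torch.Tensor, vocab_emb: torch.Tensor,
                   true_token_ids: torch.Tensor) -> float:
    """Privacy proxy: fraction of tokens a nearest-neighbour inversion
    attack recovers from privatized embeddings (lower is better)."""
    # Cosine similarity of each privatized embedding against the full vocabulary.
    sims = F.normalize(private_emb, dim=-1) @ F.normalize(vocab_emb, dim=-1).T
    recovered = sims.argmax(-1)
    return (recovered == true_token_ids).float().mean().item()
```

Sweeping the privacy budget (e.g. the epsilon in the transformation above) and plotting utility_score against inversion_risk would yield the quantitative privacy-utility tradeoff curve the proposal asks for.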

Recommended Skills:

  - Python, PyTorch/TensorFlow
  - LLM
  - KubeEdge, Kubernetes
  - Federated learning, differential privacy
