Issue 4 | GPTSecurity Weekly
GPTSecurity is a community that covers cutting-edge academic research and hands-on experience sharing, bringing together knowledge on security applications of Generative Pre-trained Transformers (GPT), AI-Generated Content (AIGC), and Large Language Models (LLMs). Here you can find the latest research papers, blog posts, practical tools, and preset prompts related to GPT/AIGC/LLM. To give a better overview of the past week's contributions, they are summarized below.
Security Papers
Security of GPT Itself
A Survey of Safety and Trustworthiness of Large Language Models through the Lens of Verification and Validation
https://arxiv.org/pdf/2305.11391.pdf
Taxonomy of Risks posed by Language Models
https://dl.acm.org/doi/10.1145/3531146.3533088
Threat Detection
FlowTransformer: A Transformer Framework for Flow-based Network Intrusion Detection Systems
https://arxiv.org/pdf/2304.14746.pdf
Software Supply Chain Security
CODAMOSA: Escaping Coverage Plateaus in Test Generation with Pre-trained Large Language Models
A paper from Microsoft presented at ICSE 2023 that uses LLMs to mitigate the "coverage plateau" problem in traditional fuzzing (a rough sketch of the idea follows the link below).
https://www.carolemieux.com/codamosa_icse23.pdf
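As a rough illustration of the idea behind the paper (not its actual implementation), the sketch below shows a search-based test-generation loop that falls back to an LLM for fresh test cases once coverage stops improving. All names here (run_search_iteration, query_llm_for_tests, PLATEAU_LIMIT, etc.) are hypothetical placeholders.

```python
# Conceptual sketch: escape coverage plateaus in test generation with an LLM.
# All helpers below are hypothetical stand-ins, not the CODAMOSA API.
import random
from typing import List, Set

PLATEAU_LIMIT = 10  # iterations without coverage gain before asking the LLM


def run_search_iteration(test_suite: List[str]) -> Set[int]:
    """Placeholder for one round of search-based test generation (e.g. mutation),
    returning the set of covered line ids."""
    return {random.randint(0, 30) for _ in test_suite}


def query_llm_for_tests(uncovered_hint: str) -> List[str]:
    """Placeholder for prompting an LLM about an under-covered function and
    parsing the generated test cases it returns."""
    return [f"def test_generated():\n    # exercise {uncovered_hint}\n    pass"]


def generate_tests(max_iterations: int = 100) -> List[str]:
    test_suite: List[str] = ["def test_seed():\n    pass"]
    covered: Set[int] = set()
    stalled = 0

    for _ in range(max_iterations):
        new_coverage = run_search_iteration(test_suite)
        if new_coverage - covered:       # coverage still growing
            covered |= new_coverage
            stalled = 0
        else:
            stalled += 1

        # Coverage plateau: hand an under-covered target to the LLM and
        # feed its generated tests back into the search population.
        if stalled >= PLATEAU_LIMIT:
            test_suite += query_llm_for_tests("some_uncovered_function")
            stalled = 0

    return test_suite


if __name__ == "__main__":
    print(f"generated {len(generate_tests())} test cases")
```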
BadGPT: Exploring Security Vulnerabilities of ChatGPT via Backdoor Attacks to InstructGPT
https://www.ndss-symposium.org/wp-content/uploads/2023/02/NDSS2023Poster_paper_7966.pdf