
Issue 1 | GPTSecurity Weekly

Author: 云起无垠
  • 2023-10-18, Beijing



GPTSecurity is a community covering cutting-edge academic research and hands-on experience sharing, bringing together knowledge on security applications of Generative Pre-trained Transformers (GPT), AI-Generated Content (AIGC), and Large Language Models (LLMs). Here you can find the latest research papers, blog posts, practical tools, and preset prompts for GPT/AIGC/LLM. To keep track of the past week's contributions, they are summarized below.


Security Papers


1. Industry News


Security LLMs Enter an Explosive Growth Phase: Google Cloud Has Integrated Them Across Its Entire Security Product Line | RSAC 2023


https://mp.weixin.qq.com/s/5Aywrqk7B6YCiLRbojNCuQ


2. Software Supply Chain Security


1) Papers


Large Language Models are Zero-Shot Fuzzers: Fuzzing Deep-Learning Libraries via Large Language Models


https://arxiv.org/pdf/2212.14834.pdf


SecurityEval Dataset: Mining Vulnerability Examples to Evaluate Machine Learning-Based Code Generation Techniques


https://dl.acm.org/doi/abs/10.1145/3549035.3561184


LLMSecEval: A Dataset of Natural Language Prompts for Security Evaluation


https://arxiv.org/pdf/2303.09384.pdf


DiverseVul: A New Vulnerable Source Code Dataset for Deep Learning Based Vulnerability Detection


https://arxiv.org/pdf/2304.00409.pdf


2) Blogs


How I Used GPT to Automatically Generate Nuclei POCs


https://mp.weixin.qq.com/s/Z8cTUItmbwuWbRTAU_Y3pg


3. Security of ChatGPT Itself


1) Papers


  • GPT-4 Technical Report


https://arxiv.org/abs/2303.08774


  • Ignore Previous Prompt: Attack Techniques For Language Models


https://arxiv.org/abs/2211.09527


  • More than you've asked for: A Comprehensive Analysis of Novel Prompt Injection Threats to Application-Integrated Large Language Models


https://arxiv.org/abs/2302.12173


  • Exploiting Programmatic Behavior of LLMs: Dual-Use Through Standard Security Attacks


https://arxiv.org/abs/2302.05733


  • RealToxicityPrompts: Evaluating Neural Toxic Degeneration in Language Models


https://arxiv.org/abs/2009.11462


  • Understanding the Capabilities, Limitations, and Societal Impact of Large Language Models


https://arxiv.org/abs/2102.02503


  • Taxonomy of Risks posed by Language Models


https://dl.acm.org/doi/10.1145/3531146.3533088


  • Survey of Hallucination in Natural Language Generation


https://arxiv.org/abs/2202.03629


  • Red Teaming Language Models to Reduce Harms: Methods, Scaling Behaviors, and Lessons Learned


https://arxiv.org/abs/2209.07858


  • Pop Quiz! Can a Large Language Model Help With Reverse Engineering?


https://arxiv.org/abs/2202.01142


  • Evaluating Large Language Models Trained on Code


https://arxiv.org/abs/2107.03374


  • Is GitHub's Copilot as Bad as Humans at Introducing Vulnerabilities in Code?


https://arxiv.org/abs/2204.04741


  • Using Large Language Models to Enhance Programming Error Messages


https://arxiv.org/abs/2210.11630


  • Controlling Large Language Models to Generate Secure and Vulnerable Code


https://arxiv.org/abs/2302.05319


  • Systematically Finding Security Vulnerabilities in Black-Box Code Generation Models


https://arxiv.org/abs/2302.04012


  • SecurityEval dataset: mining vulnerability examples to evaluate machine learning-based code generation techniques


https://dl.acm.org/doi/abs/10.1145/3549035.3561184


  • Assessing the quality of GitHub copilot’s code generation


https://dl.acm.org/doi/abs/10.1145/3558489.3559072


  • Can we generate shellcodes via natural language? An empirical study


https://link.springer.com/article/10.1007/s10515-022-00331-3


2) Blogs


  • Using ChatGPT to Generate an Encoder and a Matching WebShell


https://mp.weixin.qq.com/s/I9IhkZZ3YrxblWIxWMXAWA


  • Using ChatGPT to Generate Phishing Emails and Phishing Websites


https://www.richardosgood.com/posts/using-openai-chat-for-phishing/


  • Chatting Our Way Into Creating a Polymorphic Malware


https://www.cyberark.com/resources/threat-research-blog/chatting-our-way-into-creating-a-polymorphic-malware


  • Hacking Humans with AI as a Service


https://media.defcon.org/DEF%20CON%2029/DEF%20CON%2029%20presentations/Eugene%20Lim%20Glenice%20Tan%20Tan%20Kee%20Hock%20-%20Hacking%20Humans%20with%20AI%20as%20a%20Service.pdf


  • Jailbreaking ChatGPT by Building a Virtual Machine Inside It


https://www.engraved.blog/building-a-virtual-machine-inside/


  • ChatGPT can boost your Threat Modeling skills


https://infosecwriteups.com/chatgpt-can-boost-your-threat-modeling-skills-ab82149d0140


  • In-Depth: Analysis of an In-the-Wild 0-day Prompt Injection Vulnerability in the LangChain Framework


https://mp.weixin.qq.com/s/wFJ8TPBiS74RzjeNk7lRsw


  • Security Risks in LLMs: Prompt Injection in VirusTotal Code Insight as a Case Study


https://mp.weixin.qq.com/s/U2yPGOmzlvlF6WeNd7B7ww


4. Threat Detection


ChatGPT-powered threat analysis: using ChatGPT to check every npm and PyPI package for security issues, including data exfiltration, SQL injection vulnerabilities, credential leaks, privilege escalation, backdoors, malicious installs, and prompt poisoning.


https://socket.dev/blog/introducing-socket-ai-chatgpt-powered-threat-analysis
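
To make this concrete, below is a minimal sketch of the pattern the post describes: read each source file from an unpacked package and ask an LLM to flag the threat categories listed above. This is illustrative only, not Socket's implementation; the model name, prompt wording, and file-size cap are assumptions, and it uses the pre-1.0 `openai` Python client.

```python
# Minimal sketch: LLM-based package threat scan (illustrative, not Socket's code).
import os
import pathlib

import openai  # pre-1.0 client assumed

openai.api_key = os.environ["OPENAI_API_KEY"]

# Threat categories taken from the post above.
CATEGORIES = ("data exfiltration, SQL injection, credential leaks, "
              "privilege escalation, backdoors, malicious installs, "
              "prompt poisoning")

def scan_package(pkg_dir: str) -> None:
    """Ask the model to review every .py/.js file in an unpacked package."""
    for path in pathlib.Path(pkg_dir).rglob("*"):
        if path.suffix not in {".py", ".js"}:
            continue
        source = path.read_text(errors="ignore")[:8000]  # crude context cap
        resp = openai.ChatCompletion.create(
            model="gpt-4",  # assumed model name
            messages=[{
                "role": "user",
                "content": (f"Review this file for {CATEGORIES}. "
                            f"Reply 'CLEAN' or list findings.\n\n{source}"),
            }],
        )
        print(path, "->", resp.choices[0].message.content)

scan_package("./unpacked-package")  # hypothetical path
```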


Security Tools


1. PentestGPT


Overview: an automated penetration-testing tool based on GPT. Built on top of ChatGPT, it runs interactively, guiding penetration testers on both overall progress and specific operations. PentestGPT can solve easy- to medium-difficulty HackTheBox machines and other CTF challenges. Compared with Auto-GPT, it differs in the following ways (a minimal sketch of its interactive loop appears after the link below):


1) Auto-GPT can be used for security testing, but it is not optimized for security-related tasks.


2) PentestGPT is purpose-built for penetration testing through custom session interaction.


3) At present, PentestGPT does not rely on search engines.


Link: https://github.com/GreyDGL/PentestGPT
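
As the sketch below shows (illustrative only, not PentestGPT's actual code), the core of such a tool is a session loop: the tester pastes tool output, the model suggests the next step, and the full history stays in context so the model can track overall progress. The system prompt and model name are assumptions; the pre-1.0 `openai` Python client is used.

```python
# Minimal sketch of an LLM-guided pentest session loop (not PentestGPT's code).
import os

import openai  # pre-1.0 client assumed

openai.api_key = os.environ["OPENAI_API_KEY"]

SYSTEM = ("You are a penetration-testing assistant. Track overall test "
          "progress and, for each tool output pasted in, suggest the "
          "single most promising next operation.")  # assumed prompt

history = [{"role": "system", "content": SYSTEM}]

while True:
    tool_output = input("paste tool output (or 'quit'): ")
    if tool_output.strip().lower() == "quit":
        break
    history.append({"role": "user", "content": tool_output})
    resp = openai.ChatCompletion.create(model="gpt-4", messages=history)
    step = resp.choices[0].message.content
    history.append({"role": "assistant", "content": step})  # keep session state
    print(step)
```

Keeping the whole exchange in `history` is what distinguishes this custom session interaction from firing one-off prompts.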


2. Audit GPT


Overview: an intelligent tool that fine-tunes GPT for blockchain security audit tasks.


Link: https://github.com/fuzzland/audit_gpt


3. IATelligence


Overview: IATelligence is a Python script that extracts the Import Address Table (IAT) of a PE file and queries GPT for more information about the imported APIs and their related ATT&CK matrix techniques.


Link: https://github.com/fr0gger/IATelligence
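
As a rough sketch of the described workflow (not the project's actual code), the snippet below parses a PE's import table with the `pefile` library and asks GPT to explain each imported API and map it to an ATT&CK technique. The model name and prompt wording are assumptions; the pre-1.0 `openai` client is used.

```python
# Minimal sketch: extract a PE's IAT and ask GPT about the APIs (illustrative).
import os
import sys

import openai  # pre-1.0 client assumed
import pefile

openai.api_key = os.environ["OPENAI_API_KEY"]

def extract_iat(path: str) -> list[str]:
    """Return 'dll!function' entries from the PE's import table."""
    pe = pefile.PE(path)
    entries = []
    for entry in getattr(pe, "DIRECTORY_ENTRY_IMPORT", []):
        dll = entry.dll.decode(errors="ignore")
        for imp in entry.imports:
            name = imp.name.decode(errors="ignore") if imp.name else f"ord{imp.ordinal}"
            entries.append(f"{dll}!{name}")
    return entries

iat = extract_iat(sys.argv[1])
resp = openai.ChatCompletion.create(
    model="gpt-4",  # assumed model name
    messages=[{
        "role": "user",
        "content": ("For each imported Windows API below, briefly explain "
                    "what it does and name the most relevant MITRE ATT&CK "
                    "technique ID:\n" + "\n".join(iat[:100])),  # cap prompt size
    }],
)
print(resp.choices[0].message.content)
```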
