diff --git "a/data.jsonl" "b/data.jsonl" --- "a/data.jsonl" +++ "b/data.jsonl" @@ -911,7 +911,7 @@ {"organization": "3b1b", "repo_name": "manim", "base_commit": "6880ebcbc2525b2f3c0731439bef7ff981b4b5b4", "is_iss": 1, "iss_html_url": "https://github.com/3b1b/manim/issues/924", "iss_label": "", "title": "Reconsidering TEX_USE_CTEX / using XeLaTeX", "body": "I worked on manim back in 2018. I added the function for using CTeX (XeLaTeX package for Chinese) and XeLaTeX instead of LaTeX using the flag `TEX_USE_CTEX` in constants.py (#315).\r\n\r\nI have stopped working on manim since 2019, but over the months there are apparently more and more people who want to use LaTeX rendering in non-English languages, and even on very old issues I still occasionally see people asking how to do that... Looking back at my change I really should have **decoupled using CTeX (TeX template) from XeLaTeX (rendering tool)**. This has caused a *lot* of confusions and made weird hacks/fixes necessary for only using XeLaTeX, especially for a language that is not Chinese or English, with the most recent #858 and #840. It really should have been a flag `TEX_USE_XELATEX` and another flag `TEMPLATE_TEX_NAME`, and the flag `TEX_USE_CTEX` is such that when it is `True`, `TEX_USE_XELATEX` is `True` and `TEMPLATE_TEX_NAME` is `\"ctex_template.tex\"`; otherwise `TEX_USE_XELATEX` is `False` and `TEMPLATE_TEX_NAME` is `\"tex_template.tex\"`. Then set `TEMPLATE_TEX_FILE` to `os.path.join(os.path.dirname(os.path.realpath(__file__)), TEMPLATE_TEX_NAME)`. Corresponding logic: constants.py lines 74\u201379.\r\n\r\nIt might be even better to set it dynamically using a function or as a parameter of `TexMobject()`, (see issues like #891). I looked at the source code and this is definitely possible. The options I can think of are\r\n1. Use the current `TEX_USE_CTEX`\r\n2. Add flags `TEX_USE_XELATEX` and `TEMPLATE_TEX_NAME`, and rework `TEX_USE_CTEX`\r\n3. Add parameters for `TexMobject()` like `use_xelatex=False` and `tex_template=\"tex_template.tex\"`\r\n4. Use the flags of 2. as a default, and make it possible to change the default using 3.\r\n\r\nNot really sure if this is the right place to raise this issue.\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [{"pro": "ManimCommunity"}, {"pro": "manim", "path": ["manim/utils/tex_templates.py"]}], "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["manim/utils/tex_templates.py"], "doc": [], "test": [], "config": [], "asset": ["ManimCommunity"]}} {"organization": "3b1b", "repo_name": "manim", "base_commit": "49582c35919097585699598ad0ca49fe3f2117b5", "is_iss": 0, "iss_html_url": "https://github.com/3b1b/manim/issues/660", "iss_label": "", "title": "ColorByCaracter help ", "body": "I want to color only theta of ```{ e }^{ i\\theta }```\r\n\r\nI was going through ColorByCaracter in 3_text_like_arrays.py . \r\nBut I fail to understand how you people separate the tex formula into arrays. I know about arrays but I can only copy the tex code from [Daum Equation Editor](http://s1.daumcdn.net/editor/fp/service_nc/pencil/Pencil_chromestore.html) and paste it. 
I don't know how to divide them into arrays.\r\n\r\nPlease help me.\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "49582c35919097585699598ad0ca49fe3f2117b5", "files": [{"path": "manimlib/mobject/svg/tex_mobject.py", "Loc": {"('TexMobject', None, 132)": {"mod": []}}, "status": "modified"}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "max_topk": 1, "file_topk": 1, "loctype": {"code": ["manimlib/mobject/svg/tex_mobject.py"], "doc": [], "test": [], "config": [], "asset": []}} {"organization": "3b1b", "repo_name": "manim", "base_commit": "32abbb9371308e8dff7410de387fe78e64b6fe7a", "is_iss": 0, "iss_html_url": "https://github.com/3b1b/manim/issues/700", "iss_label": "", "title": "OSError: No file matching Suv.svg in image directory", "body": "I've tried putting the .SVG image into */media/designs/svg_images. But when I want to quote it in the .py file it still reports errors:\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"/home/jason/Documents/manim/manimlib/extract_scene.py\", line 155, in main\r\n scene = SceneClass(**scene_kwargs)\r\n File \"/home/jason/Documents/manim/manimlib/scene/scene.py\", line 53, in __init__\r\n self.construct()\r\n File \"SVGTEST.py\", line 44, in construct\r\n height=height_size\r\n File \"/home/jason/Documents/manim/manimlib/mobject/svg/svg_mobject.py\", line 45, in __init__\r\n self.ensure_valid_file()\r\n File \"/home/jason/Documents/manim/manimlib/mobject/svg/svg_mobject.py\", line 63, in ensure_valid_file\r\n self.file_name)\r\nOSError: No file matching MYSVG.svg in image directory\r\n\r\n```\r\n(Manjaro Linux, Texlive)", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "32abbb9371308e8dff7410de387fe78e64b6fe7a", "files": [{"path": "manimlib/mobject/svg/svg_mobject.py", "Loc": {"('SVGMobject', 'ensure_valid_file', 49)": {"mod": []}}, "status": "modified"}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "max_topk": 1, "file_topk": 1, "loctype": {"code": ["manimlib/mobject/svg/svg_mobject.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "3b1b", "repo_name": "manim", "base_commit": "b74e5ca254bccc1575b4c7b7de3c1cb2010aac75", "is_iss": 0, "iss_html_url": "https://github.com/3b1b/manim/issues/694", "iss_label": "", "title": "can't graph trigonometric function of secx, cscx, cotx, tanx,...", "body": "source code:\r\n\r\nclass PlotFunctions(GraphScene):\r\n CONFIG = {\r\n \"x_min\" : -10,\r\n \"x_max\" : 10.3,\r\n \"y_min\" : -1.5,\r\n \"y_max\" : 1.5,\r\n \"graph_origin\" : ORIGIN ,\r\n \"function_color\" : RED ,\r\n \"axes_color\" : GREEN,\r\n \"x_labeled_nums\" :range(-10,12,2),\r\n\r\n }\r\n def construct(self):\r\n self.setup_axes(animate=True)\r\n func_graph=self.get_graph(self.func_to_graph,self.function_color)\r\n func_graph2=self.get_graph(self.func_to_graph2)\r\n vert_line = self.get_vertical_line_to_graph(TAU,func_graph,color=YELLOW)\r\n graph_lab = self.get_graph_label(func_graph, label = \"\\\\cos(x)\")\r\n graph_lab2=self.get_graph_label(func_graph2,label = \"\\\\sin(x)\", x_val=-10, direction=UP/2)\r\n two_pi = TexMobject(\"x = 2 \\\\pi\")\r\n label_coord = self.input_to_graph_point(TAU,func_graph)\r\n two_pi.next_to(label_coord,RIGHT+UP)\r\n\r\n\r\n\r\n 
self.play(ShowCreation(func_graph),ShowCreation(func_graph2))\r\n self.play(ShowCreation(vert_line), ShowCreation(graph_lab), ShowCreation(graph_lab2),ShowCreation(two_pi))\r\n\r\n\r\n def func_to_graph(self,x):\r\n #return np.cos(x)\r\n return np.tan(x)\r\n\r\n def func_to_graph2(self,x):\r\n return np.sin(x)\r\n\r\nI replaced \"return np.cos(x)\" to \"return np.tan(x)\"...i got this:\r\n![image](https://user-images.githubusercontent.com/36161299/63267544-e140a700-c2c4-11e9-9164-a14d37ee8673.png)\r\n\r\nand then I replaced \"return np.cos(x)\" to \"return np.sec(x)/cot(x)/csc(x)\"...i got this:\r\nAttributeError: module 'numpy' has no attribute 'sec'...\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "b74e5ca254bccc1575b4c7b7de3c1cb2010aac75", "files": [{"path": "manimlib/mobject/types/vectorized_mobject.py", "Loc": {"('VGroup', None, 868)": {"mod": []}}, "status": "modified"}, {"Loc": {"": [17]}, "path": null}]}, "own_code_loc": [{"Loc": [17], "path": null}], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "3", "loc_way": "comment", "loc_scope": "3\n+\n0", "info_type": "Code"}, "max_topk": 1, "file_topk": 1, "loctype": {"code": [null, "manimlib/mobject/types/vectorized_mobject.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "3b1b", "repo_name": "manim", "base_commit": "b74e5ca254bccc1575b4c7b7de3c1cb2010aac75", "is_iss": 0, "iss_html_url": "https://github.com/3b1b/manim/issues/694", "iss_label": "", "title": "can't graph trigonometric function of secx, cscx, cotx, tanx,...", "body": "source code:\r\n\r\nclass PlotFunctions(GraphScene):\r\n CONFIG = {\r\n \"x_min\" : -10,\r\n \"x_max\" : 10.3,\r\n \"y_min\" : -1.5,\r\n \"y_max\" : 1.5,\r\n \"graph_origin\" : ORIGIN ,\r\n \"function_color\" : RED ,\r\n \"axes_color\" : GREEN,\r\n \"x_labeled_nums\" :range(-10,12,2),\r\n\r\n }\r\n def construct(self):\r\n self.setup_axes(animate=True)\r\n func_graph=self.get_graph(self.func_to_graph,self.function_color)\r\n func_graph2=self.get_graph(self.func_to_graph2)\r\n vert_line = self.get_vertical_line_to_graph(TAU,func_graph,color=YELLOW)\r\n graph_lab = self.get_graph_label(func_graph, label = \"\\\\cos(x)\")\r\n graph_lab2=self.get_graph_label(func_graph2,label = \"\\\\sin(x)\", x_val=-10, direction=UP/2)\r\n two_pi = TexMobject(\"x = 2 \\\\pi\")\r\n label_coord = self.input_to_graph_point(TAU,func_graph)\r\n two_pi.next_to(label_coord,RIGHT+UP)\r\n\r\n\r\n\r\n self.play(ShowCreation(func_graph),ShowCreation(func_graph2))\r\n self.play(ShowCreation(vert_line), ShowCreation(graph_lab), ShowCreation(graph_lab2),ShowCreation(two_pi))\r\n\r\n\r\n def func_to_graph(self,x):\r\n #return np.cos(x)\r\n return np.tan(x)\r\n\r\n def func_to_graph2(self,x):\r\n return np.sin(x)\r\n\r\nI replaced \"return np.cos(x)\" to \"return np.tan(x)\"...i got this:\r\n![image](https://user-images.githubusercontent.com/36161299/63267544-e140a700-c2c4-11e9-9164-a14d37ee8673.png)\r\n\r\nand then I replaced \"return np.cos(x)\" to \"return np.sec(x)/cot(x)/csc(x)\"...i got this:\r\nAttributeError: module 'numpy' has no attribute 'sec'...\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "b74e5ca254bccc1575b4c7b7de3c1cb2010aac75", "files": [{"path": "manimlib/mobject/types/vectorized_mobject.py", "Loc": {"('VGroup', None, 868)": {"mod": []}}, "status": "modified"}, {"Loc": {"": {"mod": [17]}}, "path": null}]}, "own_code_loc": [{"Loc": [17], "path": null}], "ass_file_loc": 
[], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "3", "loc_way": "comment", "loc_scope": "3\n+\n0", "info_type": "Code"}, "max_topk": 1, "file_topk": 1, "loctype": {"code": [null, "manimlib/mobject/types/vectorized_mobject.py"], "doc": [], "test": [], "config": [], "asset": []}} {"organization": "3b1b", "repo_name": "manim", "base_commit": "fc153bb49a529e8cbb02dd1514f06387cbf0ee6e", "is_iss": 0, "iss_html_url": "https://github.com/3b1b/manim/issues/1206", "iss_label": "", "title": "Manim can't find my png file", "body": "I'm new to coding and am trying to learn manim, which I'm using on my macbook pro. I'm trying to create a scene where manim draws a png file I saved. I saved the png file as \"shirt.png\" in my manim folder. I then ran the following code:\r\n\r\n\r\n```\r\nfrom manimlib.imports import *\r\n\r\nclass OutFit(Scene):\r\n\tdef construct(self):\r\n\t\t\r\n\t\tshirt = ImageMobject(\"shirt\")\r\n\t\t\r\n\t\tself.play(Write(shirt))\r\n```\r\nI've looked up several ways of how to get manim to do images and some solutions, but since I'm pretty new at this I don't always understand the answers I've found from other people's issues or if it applies to mine. I keep getting this error response:\r\n\r\nraise IOError(\"File {} not Found\".format(file_name))\r\nOSError: File shirt not Found\r\n\r\nAny help is much appreciated. \r\n\r\n\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "fc153bb49a529e8cbb02dd1514f06387cbf0ee6e", "files": [{"path": "manimlib/animation/fading.py", "Loc": {"('FadeIn', None, 34)": {"mod": []}}, "status": "modified"}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "max_topk": 1, "file_topk": 1, "loctype": {"code": ["manimlib/animation/fading.py"], "doc": [], "test": [], "config": [], "asset": []}} {"organization": "3b1b", "repo_name": "manim", "base_commit": "64c960041b5b9dcb0aac50019268a3bdf69d9563", "is_iss": 0, "iss_html_url": "https://github.com/3b1b/manim/issues/608", "iss_label": "", "title": "What is VMobject exactly?", "body": "Can anyone explain what is the purpose of `VMobject` and how it differs from `Mobject`?\r\n\r\nI am trying to make some `old_projects` work. For example, I had to change `PMobject` to inherit from `VMobject` instead of `Mobject` in order to fix `NumberLineScene`. I do not know if it is correct thing to do or how will it affect the other scripts because I am unable to find the fundamental differences between the two objects. The wiki does not explain a lot, so please tell some detailed information.\r\n\r\nI dug commit histories and saw \r\n\r\n> \"Starting to vectorize all things\"\r\n\r\n kind of commit messages when the `VMobject` class is added to the engine. 
What does it mean \"Vectorize\" in this context?", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "64c960041b5b9dcb0aac50019268a3bdf69d9563", "files": [{"path": "manimlib/mobject/types/vectorized_mobject.py", "Loc": {}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "5", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "max_topk": 0, "file_topk": 0, "loctype": {"code": ["manimlib/mobject/types/vectorized_mobject.py"], "doc": [], "test": [], "config": [], "asset": []}} {"organization": "All-Hands-AI", "repo_name": "OpenHands", "base_commit": "a2779fe2f6c9ab29508676f21242b1c6b88e2f67", "is_iss": 0, "iss_html_url": "https://github.com/All-Hands-AI/OpenHands/issues/5229", "iss_label": "documentation\nenhancement\nfix-me", "title": "[Documentation]: Micro-agents", "body": "**What problem or use case are you trying to solve?**\r\n\r\nCurrently in the `openhands/agenthub/codeact_agent` directory, we have an implementation of micro agents, but this is not documented.\r\n\r\nTo do so, we can:\r\n1. read the implementation of codeact agent\r\n2. read an example microagent in `openhands/agenthub/codeact_agent/micro/github.md`\r\n3. add documentation to `openhands/agenthub/codeact_agent/README.md`\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "a2779fe2f6c9ab29508676f21242b1c6b88e2f67", "files": [{"path": "microagents/README.md", "Loc": {}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "comment", "loc_scope": "0", "info_type": "Doc"}, "max_topk": 0, "file_topk": 0, "loctype": {"code": [], "doc": ["microagents/README.md"], "test": [], "config": [], "asset": []}} @@ -923,13 +923,13 @@ {"organization": "psf", "repo_name": "requests", "base_commit": "2de907ad778de270911acaffe93883f0e2729a4a", "is_iss": 1, "iss_html_url": "https://github.com/psf/requests/issues/4602", "iss_label": "", "title": "Chunk-encoded request doesn't recognize iter_content generator", "body": "Passing a generator created by iter_content() as request data raises \"TypeError: sendall() argument 1 must be string or buffer, not generator\".\r\n\r\n## Expected Result\r\n\r\nThe POST request successfully delives the content from the GET request.\r\n\r\n## Actual Result\r\n\r\nA TypeError is raised:\r\n```\r\nTraceback (most recent call last):\r\n File \"..\\test.py\", line 7, in \r\n PostForward(\"http://myhost/img/foo.png\", \"http://myotherhost/convert\")\r\n File \"..\\test.py\", line 6, in PostForward\r\n return requests.post(url=dst, data=data, headers={'Content-Length': length})\r\n File \"C:\\Python27\\lib\\site-packages\\requests\\api.py\", line 112, in post\r\n return request('post', url, data=data, json=json, **kwargs)\r\n File \"C:\\Python27\\lib\\site-packages\\requests\\api.py\", line 58, in request\r\n return session.request(method=method, url=url, **kwargs)\r\n File \"C:\\Python27\\lib\\site-packages\\requests\\sessions.py\", line 508, in request\r\n resp = self.send(prep, **send_kwargs)\r\n File \"C:\\Python27\\lib\\site-packages\\requests\\sessions.py\", line 618, in send\r\n r = adapter.send(request, **kwargs)\r\n File \"C:\\Python27\\lib\\site-packages\\requests\\adapters.py\", line 440, in send\r\n timeout=timeout\r\n File \"C:\\Python27\\lib\\site-packages\\urllib3\\connectionpool.py\", line 601, in urlopen\r\n chunked=chunked)\r\n File 
\"C:\\Python27\\lib\\site-packages\\urllib3\\connectionpool.py\", line 357, in _make_request\r\n conn.request(method, url, **httplib_request_kw)\r\n File \"C:\\Python27\\lib\\httplib.py\", line 1042, in request\r\n self._send_request(method, url, body, headers)\r\n File \"C:\\Python27\\lib\\httplib.py\", line 1082, in _send_request\r\n self.endheaders(body)\r\n File \"C:\\Python27\\lib\\httplib.py\", line 1038, in endheaders\r\n self._send_output(message_body)\r\n File \"C:\\Python27\\lib\\httplib.py\", line 886, in _send_output\r\n self.send(message_body)\r\n File \"C:\\Python27\\lib\\httplib.py\", line 858, in send\r\n self.sock.sendall(data)\r\n File \"C:\\Python27\\lib\\socket.py\", line 228, in meth\r\n return getattr(self._sock,name)(*args)\r\nTypeError: sendall() argument 1 must be string or buffer, not generator\r\n```\r\n\r\n## Reproduction Steps\r\n\r\n```python\r\nimport requests\r\ndef PostForward(src, dst):\r\n\twith requests.get(url=src, stream=True) as srcResponse:\r\n\t\tlength = srcResponse.headers['Content-Length']\r\n\t\tdata = srcResponse.iter_content(1024)\r\n\t\treturn requests.post(url=dst, data=data, headers={'Content-Length': length})\r\nPostForward(\"http://myhost/img/foo.png\", \"http://myotherhost/convert\")\r\n```\r\n\r\n## System Information\r\n\r\n $ python -m requests.help\r\n\r\n```\r\n{\r\n \"chardet\": {\r\n \"version\": \"3.0.4\"\r\n },\r\n \"cryptography\": {\r\n \"version\": \"\"\r\n },\r\n \"idna\": {\r\n \"version\": \"2.6\"\r\n },\r\n \"implementation\": {\r\n \"name\": \"CPython\",\r\n \"version\": \"2.7.14\"\r\n },\r\n \"platform\": {\r\n \"release\": \"10\",\r\n \"system\": \"Windows\"\r\n },\r\n \"pyOpenSSL\": {\r\n \"openssl_version\": \"\",\r\n \"version\": null\r\n },\r\n \"requests\": {\r\n \"version\": \"2.18.4\"\r\n },\r\n \"system_ssl\": {\r\n \"version\": \"100020bf\"\r\n },\r\n \"urllib3\": {\r\n \"version\": \"1.22\"\r\n },\r\n \"using_pyopenssl\": false\r\n}\r\n```", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [{"pro": "requests"}, {"pro": "toolbelt", "path": ["requests_toolbelt/streaming_iterator.py"]}], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "2", "info_type": "Code"}, "loctype": {"code": ["requests_toolbelt/streaming_iterator.py"], "doc": [], "test": [], "config": [], "asset": ["requests"]}} {"organization": "psf", "repo_name": "requests", "base_commit": "f17ef753d2c1f4db0d7f5aec51261da1db20d611", "is_iss": 1, "iss_html_url": "https://github.com/psf/requests/issues/3031", "iss_label": "Needs Info\nQuestion/Not a bug", "title": "[WinError 10048] Only one usage of each socket address ...", "body": "I notice that despite using requests.Session() - I still seem to be creating new connections/sockets which eventually exhaust (TIME_WAIT) and I get the following error:\n\n> [WinError 10048] Only one usage of each socket address (protocol/network address/port) is normally permitted',))\n\n```\ns = requests.Session()\ndata = zip(url_routes, cycle(s))\ncalc_routes = pool.map(processRequest, data)\n\n```\n\nI posted a bit more [here](http://stackoverflow.com/questions/35793908/python-multiprocessing-associate-a-process-with-a-session), however not sure how to address this\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {}, "own_code_loc": [{"Loc": [8], "path": null}], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", 
"loc_scope": "3", "info_type": "Code"}, "loctype": {"code": [null], "doc": [], "test": [], "config": [], "asset": []}} {"organization": "psf", "repo_name": "requests", "base_commit": "6f659a41794045292b836859f1281d33eeed8260", "is_iss": 0, "iss_html_url": "https://github.com/psf/requests/issues/3740", "iss_label": "", "title": "File download weirdness", "body": "I noticed this issue building conda recipes. Conda uses requests to download files from the internet.\r\n\r\nThe file that is being fetched is: https://dakota.sandia.gov/sites/default/files/distributions/public/dakota-6.5-public.src.tar.gz\r\n(link found here: https://dakota.sandia.gov/download.html)\r\n\r\nDownloading with curl -O\r\nfilesize: 78MB\r\nmd5: 02c46e904d40bba6b308065db34c1ad7\r\n\r\nDownloading with urllib2 (from the standard library):\r\nfilesize: 78MB\r\nmd5: 02c46e904d40bba6b308065db34c1ad7\r\n\r\nDownloading with requests-2.12.1 (supplied with conda)\r\nfilesize: 248MB\r\nmd5: 41e4268140d850756812510512d8eee8\r\ntar -tf doesn't indicate any corruption.\r\n\r\nI'm not sure what is different with this particular URL, but the other files I tried with requests worked. I don't know where the extra 170MB is coming from?\r\n\r\ncode used to download files:\r\n```python\r\ndef download_file(url, fn):\r\n r = requests.get(url, stream=True)\r\n with open(fn, 'wb') as f:\r\n for chunk in r.iter_content(chunk_size=1024): \r\n if chunk:\r\n f.write(chunk)\r\n\r\ndef download_urllib2(url, fn):\r\n f = urllib2.urlopen(url)\r\n with open(fn, 'wb') as fh:\r\n for x in iter(lambda: f.read(1024), b''):\r\n fh.write(x)\r\n```", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "6f659a41794045292b836859f1281d33eeed8260", "files": [{"path": "docs/user/quickstart.rst", "Loc": {"(None, None, 166)": {"mod": [166]}}, "status": "modified"}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Doc"}, "max_topk": 1, "file_topk": 1, "loctype": {"code": [], "doc": ["docs/user/quickstart.rst"], "test": [], "config": [], "asset": []}} -{"organization": "psf", "repo_name": "requests", "base_commit": "62176a1ca7207db37273365b4691ed599203b828", "is_iss": 0, "iss_html_url": "https://github.com/psf/requests/issues/3849", "iss_label": "", "title": "Received response with content-encoding: gzip, but failed to decode it", "body": "```python\r\nimport requests\r\n\r\nrequests.get('http://gett.bike/')\r\n```\r\nThis code raises the following exception:\r\n```python\r\nContentDecodingError: ('Received response with content-encoding: gzip, but failed to decode it.',\r\nerror('Error -3 while decompressing data: incorrect data check',))\r\n```\r\nArch linux x64\r\nrequests==2.13.0\r\npython=3.6.0", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "62176a1ca7207db37273365b4691ed599203b828", "files": [{"path": "src/requests/api.py", "Loc": {"(None, 'request', 14)": {"mod": [24]}}, "status": "modified"}, {"Loc": {"": [4]}, "path": null}]}, "own_code_loc": [{"Loc": [4], "path": null}], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "3", "info_type": "Code"}, "max_topk": 1, "file_topk": 1, "loctype": {"code": [null, "src/requests/api.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "psf", "repo_name": "requests", "base_commit": "62176a1ca7207db37273365b4691ed599203b828", 
"is_iss": 0, "iss_html_url": "https://github.com/psf/requests/issues/3849", "iss_label": "", "title": "Received response with content-encoding: gzip, but failed to decode it", "body": "```python\r\nimport requests\r\n\r\nrequests.get('http://gett.bike/')\r\n```\r\nThis code raises the following exception:\r\n```python\r\nContentDecodingError: ('Received response with content-encoding: gzip, but failed to decode it.',\r\nerror('Error -3 while decompressing data: incorrect data check',))\r\n```\r\nArch linux x64\r\nrequests==2.13.0\r\npython=3.6.0", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "62176a1ca7207db37273365b4691ed599203b828", "files": [{"path": "src/requests/api.py", "Loc": {"(None, 'request', 14)": {"mod": [24]}}, "status": "modified"}, {"Loc": {"": {"mod": [4]}}, "path": null}]}, "own_code_loc": [{"Loc": [4], "path": null}], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "3", "info_type": "Code"}, "max_topk": 1, "file_topk": 1, "loctype": {"code": [null, "src/requests/api.py"], "doc": [], "test": [], "config": [], "asset": []}} {"organization": "psf", "repo_name": "requests", "base_commit": "057722af23edf3f69bf7bdfed7c6c32cbe1ce2e7", "is_iss": 1, "iss_html_url": "https://github.com/psf/requests/issues/3015", "iss_label": "", "title": "Ability to set timeout after response", "body": "For devs who use this great library, it would be very beneficial to be able to set the timeout AFTER initial connection. There are a few scenarios where this is useful but one of the main patterns/use cases is this:\n\n```\n\nimport requests\nimport socket\n\n# May or may not subclass threading.Thread\nclass Getter(object):\n def __init__(self):\n self.request = requests.get(url, stream=True)\n\n def run(self):\n with open(path, 'r+b') as file:\n\n bytes_consumed = 0\n while True:\n try:\n\n chunk = self.request.raw.read(size)\n if not chunk:\n break\n chunk_length = len(chunk)\n\n file.write(chunk)\n bytes_consumed += chunk_length\n\n except socket.timeout:\n # handle incomplete download by using range header next time, etc.\n```\n\nHandling incomplete downloads due to connection loss is common and especially important when downloading large or many files (or both). As you can see, this can be achieved in a fairly straightforward way. The issue is there is really no good way to write tests for this. Each method would involve OS specific code which would also be a no-go for CI services.\n\nWhat would be an option is the ability to set the timeout after establishing a connection. This way in a test you could do \"r.timeout = (None, 0.00001)\" and during reading it would simulate a timeout.\n\nTo my knowledge this is no way currently to inject a new Timeout class retroactively. 
Is this correct?\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {}, "own_code_loc": [{"Loc": [20], "path": null}], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "3", "info_type": "Code"}, "loctype": {"code": [null], "doc": [], "test": [], "config": [], "asset": []}} {"organization": "psf", "repo_name": "requests", "base_commit": "1285f576ae0a848de27af10d917c19b60940d1fa", "is_iss": 1, "iss_html_url": "https://github.com/psf/requests/issues/3774", "iss_label": "", "title": "bad handshake error with ssl3", "body": "I have an inhouse IIS server with ssl3 but an expired certificate, so I used requests without certificate verification and it was working fine with requests 2.11.1. But after I upgrade requests to 2.12.0, there was an error occured. \r\n\r\nthe code is:\r\n...\r\nrequests.get('https://10.192.8.89:8080/yps_report', verify=False)\r\n...\r\n\r\nerror message:\r\nTraceback (most recent call last):\r\n File \"c:\\python35\\lib\\site-packages\\requests\\packages\\urllib3\\contrib\\pyopenssl.py\", line 417, in wrap_socket\r\n cnx.do_handshake()\r\n File \"c:\\python35\\lib\\site-packages\\OpenSSL\\SSL.py\", line 1426, in do_handshake\r\n self._raise_ssl_error(self._ssl, result)\r\n File \"c:\\python35\\lib\\site-packages\\OpenSSL\\SSL.py\", line 1167, in _raise_ssl_error\r\n raise SysCallError(-1, \"Unexpected EOF\")\r\nOpenSSL.SSL.SysCallError: (-1, 'Unexpected EOF')\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"c:\\python35\\lib\\site-packages\\requests\\packages\\urllib3\\connectionpool.py\", line 594, in urlopen\r\n chunked=chunked)\r\n File \"c:\\python35\\lib\\site-packages\\requests\\packages\\urllib3\\connectionpool.py\", line 350, in _make_request\r\n self._validate_conn(conn)\r\n File \"c:\\python35\\lib\\site-packages\\requests\\packages\\urllib3\\connectionpool.py\", line 835, in _validate_conn\r\n conn.connect()\r\n File \"c:\\python35\\lib\\site-packages\\requests\\packages\\urllib3\\connection.py\", line 323, in connect\r\n ssl_context=context)\r\n File \"c:\\python35\\lib\\site-packages\\requests\\packages\\urllib3\\util\\ssl_.py\", line 324, in ssl_wrap_socket\r\n return context.wrap_socket(sock, server_hostname=server_hostname)\r\n File \"c:\\python35\\lib\\site-packages\\requests\\packages\\urllib3\\contrib\\pyopenssl.py\", line 424, in wrap_socket\r\n raise ssl.SSLError('bad handshake: %r' % e)\r\nssl.SSLError: (\"bad handshake: SysCallError(-1, 'Unexpected EOF')\",)\r\n...\r\n\r\nI tried to downgrade requests to 2.11.1 and the error was gone. 
I have no idea how to fix this.\r\nfrom requests.adapters import HTTPAdapter\nfrom requests.packages.urllib3.util.ssl_ import create_urllib3_context\n\n# This is the 2.11 Requests cipher string.\nCIPHERS = (\n 'ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:ECDH+HIGH:'\n 'DH+HIGH:ECDH+3DES:DH+3DES:RSA+AESGCM:RSA+AES:RSA+HIGH:RSA+3DES:!aNULL:'\n '!eNULL:!MD5'\n)\n\nclass DESAdapter(HTTPAdapter):\n def init_poolmanager(self, *args, **kwargs):\n context = create_urllib3_context(ciphers=CIPHERS)\n kwargs['ssl_context'] = context\n return super(HTTPAdapter, self).init_poolmanager(*args, **kwargs)\n\ns = requests.Session()\ns.mount('https://10.192.8.89', DESAdapter())", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {}, "own_code_loc": [{"Loc": [41], "path": null}], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "3\n\u9700\u8981\u5c06\u4e0b\u9762\u7684user\u7684\u4e00\u4e2acomment\u4e2duser\u7684\u4ee3\u7801\u653e\u5165\u5176\u4e2d", "info_type": "Code"}, "loctype": {"code": [null], "doc": [], "test": [], "config": [], "asset": []}} {"organization": "ansible", "repo_name": "ansible", "base_commit": "a6d4c3ff7cf43c24be6622102cee834fc5096496", "is_iss": 1, "iss_html_url": "https://github.com/ansible/ansible/issues/78759", "iss_label": "module\nsupport:core\nbug\naffects_2.9", "title": "\"Invalid data passed to 'loop', it requires a list, got this instead: .", "body": "### Summary\r\n\r\nWhen trying to pass a variable called i.e. sysctl.values to loop, I will get the above error.\r\n\r\n\r\n### Issue Type\r\n\r\nBug Report\r\n\r\n### Component Name\r\n\r\ndebug (only used for debugging)\r\n\r\n### Ansible Version\r\n\r\n```console\r\n$ ansible --version\r\nansible 2.9.27\r\n config file = /home/rf/.ansible.cfg\r\n configured module search path = ['/home/rf/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']\r\n ansible python module location = /usr/lib/python3.10/site-packages/ansible\r\n executable location = /usr/bin/ansible\r\n python version = 3.10.6 (main, Aug 2 2022, 00:00:00) [GCC 11.3.1 20220421 (Red Hat 11.3.1-2)]\r\n```\r\n\r\n\r\n### Configuration\r\n\r\n```console\r\n# if using a version older than ansible-core 2.12 you should omit the '-t all'\r\n$ ansible-config dump --only-changed -t all\r\n\r\n[I] -2-> ansible-config dump --only-changed\r\nANSIBLE_PIPELINING(/home/rf/.ansible.cfg) = True\r\nANSIBLE_SSH_ARGS(/home/rf/.ansible.cfg) = -o ControlMaster=auto -o ControlPersist=60s\r\nDEFAULT_FORKS(/home/rf/.ansible.cfg) = 50\r\nDEFAULT_HOST_LIST(/home/rf/.ansible.cfg) = ['/home/rf/hosts']\r\nINVENTORY_CACHE_ENABLED(/home/rf/.ansible.cfg) = True\r\n```\r\n\r\n\r\n### OS / Environment\r\n\r\nFedora 36\r\n\r\n### Steps to Reproduce\r\n\r\n\r\n```yaml (paste below)\r\n- name: Test\r\n hosts: localhost\r\n gather_facts: True\r\n tasks:\r\n - debug:\r\n msg: \"{{ item }}\"\r\n loop: \"{{ sysctl2 }}\"\r\n - debug:\r\n msg: \"{{ item }}\"\r\n loop: \"{{ sysctl.values }}\"\r\n vars:\r\n sysctl:\r\n values:\r\n - { name: \"net.ipv4.ip_forward\", value: \"1\" }\r\n sysctl2:\r\n - { name: \"net.ipv4.ip_forward\", value: \"1\" }\r\n```\r\n\r\n\r\n\r\n\r\n### Expected Results\r\n\r\nOutput of debug using sysctl.values\r\n\r\n### Actual Results\r\n\r\n```console\r\nPLAY [Test] ********************************************************************************************************************************************************************************************\r\n\r\nTASK 
[Gathering Facts] *********************************************************************************************************************************************************************************\r\nok: [localhost]\r\n\r\nTASK [debug] *******************************************************************************************************************************************************************************************\r\nok: [localhost] => (item={'name': 'net.ipv4.ip_forward', 'value': '1'}) => {\r\n \"msg\": {\r\n \"name\": \"net.ipv4.ip_forward\",\r\n \"value\": \"1\"\r\n }\r\n}\r\n\r\nTASK [debug] *******************************************************************************************************************************************************************************************\r\nfatal: [localhost]: FAILED! => {\"msg\": \"Invalid data passed to 'loop', it requires a list, got this instead: . Hint: If you passed a list/dict of just one element, try adding wantlist=True to your lookup invocation or use q/query instead of lookup.\"}\r\n\r\nPLAY RECAP *********************************************************************************************************************************************************************************************\r\nlocalhost : ok=2 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0\r\n```\r\n```\r\n\r\n\r\n### Code of Conduct\r\n\r\n- [X] I agree to follow the Ansible Code of Conduct", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {}, "own_code_loc": [{"Loc": [59], "path": null}], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "3", "loc_way": "comment", "loc_scope": "3", "info_type": "Code"}, "loctype": {"code": [null], "doc": [], "test": [], "config": [], "asset": []}} {"organization": "ansible", "repo_name": "ansible", "base_commit": "8af920c8924b2fd9a0e4192c3c7e6085b687bfdc", "is_iss": 0, "iss_html_url": "https://github.com/ansible/ansible/issues/82382", "iss_label": "bug\naffects_2.16", "title": "Ansible core 2.16.1 broke AnsibleUnsafeBytes iteration", "body": "### Summary\r\n\r\nUpgrading form 2.16.0 to 2.16.1 (Ansible 9.0.1 to 9.1.0), iterating over AnsibleUnsafeBytes does not create a list of numbers anymore.\r\n\r\n### Issue Type\r\n\r\nBug Report\r\n\r\n### Component Name\r\n\r\ncore, unsafe_proxy\r\n\r\n### Ansible Version\r\n\r\n```console\r\n$ ansible --version\r\n\r\n\r\nansible [core 2.16.1]\r\n config file = /etc/ansible/ansible.cfg\r\n configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']\r\n ansible python module location = /usr/local/lib/python3.12/site-packages/ansible\r\n ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections\r\n executable location = /usr/local/bin/ansible\r\n python version = 3.12.0 (main, Nov 29 2023, 03:32:06) [GCC 10.2.1 20210110] (/usr/local/bin/python)\r\n jinja version = 3.1.2\r\n libyaml = True\r\n```\r\n\r\n\r\n### Configuration\r\n\r\n```console\r\n# if using a version older than ansible-core 2.12 you should omit the '-t all'\r\n$ ansible-config dump --only-changed -t all\r\n\r\n\r\n/bin/sh: 1: less: not found\r\n```\r\n\r\n(sorry, dockerized environment)\r\n\r\n\r\n### OS / Environment\r\n\r\nDebian bullseye / 11 (in python docker image: `python:3.12.0-bullseye`), ansible via pip (`ansible==9.1.0`)\r\n\r\n### Steps to Reproduce\r\n\r\n\r\n```py\r\nfrom ansible.utils.unsafe_proxy import AnsibleUnsafeText \r\nx = AnsibleUnsafeText(\"asdf\")\r\ny = 
x.encode(\"utf8\")\r\nlist(y)\r\n```\r\n\r\n### Expected Results\r\n\r\n```\r\n[97, 115, 100, 102]\r\n```\r\n\r\nThis is what happens on 2.16.0.\r\n\r\n### Actual Results\r\n\r\n```console\r\n[b'\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00', b'\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00', b'\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00', b'\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00']\r\n```\r\n\r\n\r\n### Code of Conduct\r\n\r\n- [X] I agree to follow the Ansible Code of Conduct", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "8af920c8924b2fd9a0e4192c3c7e6085b687bfdc", "files": [{"path": "Version", "Loc": {}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "comment", "loc_scope": "0", "info_type": "Other"}, "max_topk": 0, "file_topk": 0, "loctype": {"code": [], "doc": [], "test": [], "config": [], "asset": ["Version"]}} {"organization": "ansible", "repo_name": "ansible", "base_commit": "bcf9cd1e2a01d8e111a28db157ebc255a5592dca", "is_iss": 1, "iss_html_url": "https://github.com/ansible/ansible/issues/20085", "iss_label": "cloud\naffects_2.1\nmodule\ndocker\nbug", "title": "docker_container task fail on exit code", "body": "Unless i'm missing something i expect that if I were to do something like the following the task would fail? 
But it does not \ud83d\ude1f \r\n\r\n```yaml\r\n tasks:\r\n docker_container:\r\n name: \"exit-test\"\r\n image: \"ubuntu:latest\"\r\n command: \"bash -c 'exit 123'\"\r\n```\r\n\r\n##### ISSUE TYPE\r\n - Bug Report\r\n\r\n##### COMPONENT NAME\r\ndocker_container\r\n\r\n##### ANSIBLE VERSION\r\n```\r\n2.1.1.0\r\n```\r\n\r\n##### OS / ENVIRONMENT\r\nN/A\r\n\r\n##### STEPS TO REPRODUCE\r\n```yaml\r\n tasks:\r\n docker_container:\r\n name: \"exit-test\"\r\n image: \"ubuntu:latest\"\r\n command: \"bash -c 'exit 123'\"\r\n```\r\n##### EXPECTED RESULTS\r\nShould fail the task\r\n\r\n##### ACTUAL RESULTS\r\nTask is ok.", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [{"org": "ansible", "pro": "ansible-modules-core", "path": ["cloud/docker/docker_container.py"]}], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "2", "info_type": "Code"}, "loctype": {"code": ["cloud/docker/docker_container.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "ansible", "repo_name": "ansible", "base_commit": "d5324c11a0c389d2ede8375e2024cb37b9eb8ce5", "is_iss": 0, "iss_html_url": "https://github.com/ansible/ansible/issues/19352", "iss_label": "affects_2.0\nmodule\nsupport:core\nbug\nfiles", "title": "Template update convert \\n to actual new line", "body": "##### ISSUE TYPE\r\n\r\n Bug Report\r\n\r\n##### COMPONENT NAME\r\n\r\ntemplate\r\n\r\n##### ANSIBLE VERSION\r\n\r\n2.0 and higher\r\nCONFIGURATION\r\n```\r\n[ssh_connection]\r\ncontrol_path = %(directory)s/%%C\r\n```\r\n##### OS / ENVIRONMENT\r\n\r\nMac OS X 10.11.6\r\nCentos 6.x, 7.x\r\nSUMMARY\r\n\r\nIn the input .j2 file, we substitute a variable with an environment variable that has a line/string that contains a grok expression containing `(?m)\\n` . The output generated by the template module in versions 2.0 and later, treats the \\n as actual line break. Where as versions up to 1.9.6 retains the literal `(?m)\\n` without replacing the \\n with an actual line break. We see the line break after we upgraded the Ansible version to 2.x.\r\n\r\nAny way we can work around this issue? Thank you for your help.\r\n##### STEPS TO REPRODUCE\r\n\r\nOur execution flow is probably not the nicest - we want to reengineer it soon. 
Basic steps:\r\n\r\n Run a shell script with ansible-playbook command that pass in an env variable with `(?m)\\n` literal.\r\n Playbook calls a main yaml file and assigns shell environment var to a included task yaml file.\r\n The task yaml file invokes the template module.\r\n\r\nIn the snippet below I stripped out other lines/vars for clarity.\r\n\r\nmain shell\r\n```\r\nset GROK_PATTERN_GENERAL_ERROR_PG=\"%{TIMESTAMP_ISO8601} ERROR \\[%{USER:handlerName}\\] %{USER:className}%{GREEDYDATA:errorline1}((?m)\\n%{USER:logerror}%{GREEDYDATA})\"\r\n```\r\n```\r\nansible-playbook -i ../common/host.inventory \\\r\n -${VERBOSE} \\\r\n t.yml \\\r\n ${CHECK_ONLY} \\\r\n --extra-vars \"hosts='${HOST}'\r\n xlogstash_grok_general_error='${GROK_PATTERN_GENERAL_ERROR_PG}'\r\n \"\r\n```\r\nt.yml\r\n```\r\n---\r\n- hosts: 127.0.0.1\r\n connection: local\r\n\r\n tasks:\r\n - include_vars: ../common/defaults/main.yml\r\n - name: generate logstash kafka logscan filter config file\r\n include: tasks/t.yml\r\n vars:\r\n logstash_grok_general_error: \"{{xlogstash_grok_general_error}}\"\r\n```\r\ntasks/t.yml\r\n```\r\n---\r\n - name: generate logstash kafka logscan filter config file\r\n template: src=../common/templates/my.conf.j2\r\n dest=\"./500-filter.conf\"\r\n```\r\nmy.conf.j2\r\n```\r\n grok {\r\n break_on_match => \"true\"\r\n match => [\r\n \"message\", \"{{logstash_grok_general_error}}\"\r\n ]\r\n }\r\n```\r\nNote the `(?m)\\n` are still on the same line.\r\n##### EXPECTED RESULTS\r\n```\r\n grok {\r\n break_on_match => \"true\"\r\n match => [\r\n \"message\", \"%{TIMESTAMP_ISO8601} ERROR \\[%{USER:handlerName}\\] %{USER:className}%{GREEDYDATA:errorline1}((?m)\\n%{USER:logerror}%{GREEDYDATA})\"\r\n ]\r\n }\r\n```\r\n##### ACTUAL RESULTS\r\n\r\nNote `(?m)\\n` now has the `\\n` as actual line break.\r\n```\r\n grok {\r\n break_on_match => \"true\"\r\n match => [\r\n \"message\", \"%{TIMESTAMP_ISO8601} ERROR \\[%{USER:handlerName}\\] %{USER:className}%{GREEDYDATA:errorline1}((?m)\r\n%{USER:logerror}%{GREEDYDATA})\"\r\n ]\r\n }\r\n```", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "d5324c11a0c389d2ede8375e2024cb37b9eb8ce5", "files": [{"path": "lib/ansible/template/__init__.py", "Loc": {}}, {"path": "t.yml", "Loc": {"": [60]}}]}, "own_code_loc": [{"path": "t.yml", "Loc": [60]}], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "5", "loc_way": "comment", "loc_scope": "3\n+\n0", "info_type": "Code"}, "max_topk": 0, "file_topk": 0, "loctype": {"code": ["lib/ansible/template/__init__.py"], "doc": [], "test": [], "config": ["t.yml"], "asset": []}} +{"organization": "ansible", "repo_name": "ansible", "base_commit": "d5324c11a0c389d2ede8375e2024cb37b9eb8ce5", "is_iss": 0, "iss_html_url": "https://github.com/ansible/ansible/issues/19352", "iss_label": "affects_2.0\nmodule\nsupport:core\nbug\nfiles", "title": "Template update convert \\n to actual new line", "body": "##### ISSUE TYPE\r\n\r\n Bug Report\r\n\r\n##### COMPONENT NAME\r\n\r\ntemplate\r\n\r\n##### ANSIBLE VERSION\r\n\r\n2.0 and higher\r\nCONFIGURATION\r\n```\r\n[ssh_connection]\r\ncontrol_path = %(directory)s/%%C\r\n```\r\n##### OS / ENVIRONMENT\r\n\r\nMac OS X 10.11.6\r\nCentos 6.x, 7.x\r\nSUMMARY\r\n\r\nIn the input .j2 file, we substitute a variable with an environment variable that has a line/string that contains a grok expression containing `(?m)\\n` . The output generated by the template module in versions 2.0 and later, treats the \\n as actual line break. 
Where as versions up to 1.9.6 retains the literal `(?m)\\n` without replacing the \\n with an actual line break. We see the line break after we upgraded the Ansible version to 2.x.\r\n\r\nAny way we can work around this issue? Thank you for your help.\r\n##### STEPS TO REPRODUCE\r\n\r\nOur execution flow is probably not the nicest - we want to reengineer it soon. Basic steps:\r\n\r\n Run a shell script with ansible-playbook command that pass in an env variable with `(?m)\\n` literal.\r\n Playbook calls a main yaml file and assigns shell environment var to a included task yaml file.\r\n The task yaml file invokes the template module.\r\n\r\nIn the snippet below I stripped out other lines/vars for clarity.\r\n\r\nmain shell\r\n```\r\nset GROK_PATTERN_GENERAL_ERROR_PG=\"%{TIMESTAMP_ISO8601} ERROR \\[%{USER:handlerName}\\] %{USER:className}%{GREEDYDATA:errorline1}((?m)\\n%{USER:logerror}%{GREEDYDATA})\"\r\n```\r\n```\r\nansible-playbook -i ../common/host.inventory \\\r\n -${VERBOSE} \\\r\n t.yml \\\r\n ${CHECK_ONLY} \\\r\n --extra-vars \"hosts='${HOST}'\r\n xlogstash_grok_general_error='${GROK_PATTERN_GENERAL_ERROR_PG}'\r\n \"\r\n```\r\nt.yml\r\n```\r\n---\r\n- hosts: 127.0.0.1\r\n connection: local\r\n\r\n tasks:\r\n - include_vars: ../common/defaults/main.yml\r\n - name: generate logstash kafka logscan filter config file\r\n include: tasks/t.yml\r\n vars:\r\n logstash_grok_general_error: \"{{xlogstash_grok_general_error}}\"\r\n```\r\ntasks/t.yml\r\n```\r\n---\r\n - name: generate logstash kafka logscan filter config file\r\n template: src=../common/templates/my.conf.j2\r\n dest=\"./500-filter.conf\"\r\n```\r\nmy.conf.j2\r\n```\r\n grok {\r\n break_on_match => \"true\"\r\n match => [\r\n \"message\", \"{{logstash_grok_general_error}}\"\r\n ]\r\n }\r\n```\r\nNote the `(?m)\\n` are still on the same line.\r\n##### EXPECTED RESULTS\r\n```\r\n grok {\r\n break_on_match => \"true\"\r\n match => [\r\n \"message\", \"%{TIMESTAMP_ISO8601} ERROR \\[%{USER:handlerName}\\] %{USER:className}%{GREEDYDATA:errorline1}((?m)\\n%{USER:logerror}%{GREEDYDATA})\"\r\n ]\r\n }\r\n```\r\n##### ACTUAL RESULTS\r\n\r\nNote `(?m)\\n` now has the `\\n` as actual line break.\r\n```\r\n grok {\r\n break_on_match => \"true\"\r\n match => [\r\n \"message\", \"%{TIMESTAMP_ISO8601} ERROR \\[%{USER:handlerName}\\] %{USER:className}%{GREEDYDATA:errorline1}((?m)\r\n%{USER:logerror}%{GREEDYDATA})\"\r\n ]\r\n }\r\n```", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "d5324c11a0c389d2ede8375e2024cb37b9eb8ce5", "files": [{"path": "lib/ansible/template/__init__.py", "Loc": {}}, {"path": "t.yml", "Loc": {"": {"mod": [60]}}}]}, "own_code_loc": [{"path": "t.yml", "Loc": [60]}], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "5", "loc_way": "comment", "loc_scope": "3\n+\n0", "info_type": "Code"}, "max_topk": 0, "file_topk": 0, "loctype": {"code": ["lib/ansible/template/__init__.py"], "doc": [], "test": [], "config": ["t.yml"], "asset": []}} {"organization": "ansible", "repo_name": "ansible", "base_commit": "a29fcfa9952ff40e389a5e93c880bc2a23e3f2e7", "is_iss": 0, "iss_html_url": "https://github.com/ansible/ansible/issues/73922", "iss_label": "python3\nmodule\nsupport:core\nbug\naffects_2.10", "title": "cron: Remove/delete an environment variable", "body": "### Summary\r\n\r\nWith `env=yes`, `cron` add environment variable (with the `name` & `value`) parameters.\r\nI though that having `env` + `state=absent` would remove said variable, but that's not the case 
(the cron file is actually removed).\r\nAs such there is no way to remove a variable and the more obvious way to attempt to do it results in a surprising result.\r\n\r\n### Issue Type\r\n\r\nBug Report\r\n\r\n### Component Name\r\n\r\nansible.builtin.cron\r\n\r\n### Ansible Version\r\n\r\n```console\r\n$ ansible --version\r\nansible 2.10.5\r\n config file = /home/user/.ansible.cfg\r\n configured module search path = ['/usr/share/ansible']\r\n ansible python module location = /home/user/.local/lib/python3.8/site-packages/ansible\r\n executable location = /home/user/.local/bin/ansible\r\n python version = 3.8.5 (default, Jan 27 2021, 15:41:15) [GCC 9.3.0]\r\n\r\n```\r\n\r\n\r\n### Configuration\r\n\r\n```console (paste below)\r\n$ ansible-config dump --only-changed\r\n\r\n```\r\n\r\n\r\n### OS / Environment\r\n\r\nUbuntu 20.04\r\n\r\n### Steps to Reproduce\r\n\r\n```yaml\r\n cron:\r\n cron_file: foobar\r\n user: root\r\n env: yes\r\n name: \"VAR\"\r\n value: \"False\"\r\n state: absent\r\n```\r\n\r\n\r\n### Expected Results\r\n\r\nThe \"VAR\" variable is removed from /etc/cron.d/foobar\r\n\r\n### Actual Results\r\n\r\n/etc/cron.d/foobar is removed.\r\nThere is no way to remove the \"VAR\" variable.", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "a29fcfa9952ff40e389a5e93c880bc2a23e3f2e7", "files": [{"path": "lib/ansible/modules/cron.py", "Loc": {"(None, None, None)": {"mod": [15]}}, "status": "modified"}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "4", "loc_way": "comment", "loc_scope": "0", "info_type": "Doc"}, "max_topk": 1, "file_topk": 1, "loctype": {"code": ["lib/ansible/modules/cron.py"], "doc": [], "test": [], "config": [], "asset": []}} {"organization": "ansible", "repo_name": "ansible", "base_commit": "7490044bbe28029afa9e3099d86eae9fda5f88b7", "is_iss": 1, "iss_html_url": "https://github.com/ansible/ansible/issues/11351", "iss_label": "affects_2.0\naffects_2.3\nc:executor/playbook_executor\nsupport:core\nfeature\nP3", "title": "enable do/until with async tasks", "body": "##### ISSUE TYPE\nFeature Idea\n\n##### COMPONENT NAME\ncore\n\n##### ANSIBLE VERSION\n2.0\n\n##### CONFIGURATION\n\n\n##### OS / ENVIRONMENT\n\n\n##### SUMMARY\nWhen a task is marked as async, there is no way to loop until a condition is met.\nWith poll:0 and async_status you can poll for async task to complete but you cannot repeat the original async task itself until a condition is met.\n\n```\ncat /tmp/async-test.yml \n\n---\n# Run through the test of an async command\n\n- hosts: all\n tasks:\n - name: \"Check an async command\"\n command: /bin/sleep 3\n async: 5\n poll: 1\n register: command_result\n until: command_result.failed\n retries: 5\n delay: 10\n```\n\n```\n$ansible-playbook -i localhost, /tmp/async-test.yml \n ____________\n< PLAY [all] >\n ------------\n \\ ^__^\n \\ (oo)\\_______\n (__)\\ )\\/\\\n ||----w |\n || ||\n\n\n _________________\n< GATHERING FACTS >\n -----------------\n \\ ^__^\n \\ (oo)\\_______\n (__)\\ )\\/\\\n ||----w |\n || ||\n\n\nok: [localhost]\n ______________________________\n< TASK: Check an async command >\n ------------------------------\n \\ ^__^\n \\ (oo)\\_______\n (__)\\ )\\/\\\n ||----w |\n || ||\n\n\nfatal: [localhost] => error while evaluating conditional: command_result.failed: {% if command_result.failed %} True {% else %} False {% endif %}\n\nFATAL: all hosts have already failed -- aborting\n ____________\n< PLAY RECAP >\n ------------\n \\ ^__^\n \\ 
(oo)\\_______\n (__)\\ )\\/\\\n ||----w |\n || ||\n\n\n to retry, use: --limit @/opt/ashishkh/async-test.retry\n\nlocalhost : ok=1 changed=0 unreachable=2 failed=0 \n```\n\n\n##### STEPS TO REPRODUCE\n\n\n##### EXPECTED RESULTS\n\n\n##### ACTUAL RESULTS\n\n\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {}, "own_code_loc": [{"path": "/tmp/async-test.yml", "Loc": [33]}], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "1", "info_type": "Config"}, "loctype": {"code": [], "doc": [], "test": [], "config": ["/tmp/async-test.yml"], "asset": []}} {"organization": "ansible", "repo_name": "ansible", "base_commit": "833970483100bfe89123a5718606234115921aec", "is_iss": 1, "iss_html_url": "https://github.com/ansible/ansible/issues/67993", "iss_label": "cloud\naws\nopenstack\nmodule\nsupport:community\naffects_2.5\nbug\ntraceback\nsystem", "title": "Stickiness type 'lb_cookie' is not supported for target groups with the TCP protocol(unable to disable stickiness not supported in NLB)", "body": "##### SUMMARY\r\nWe are using Ansible 2.5 to deploy AWS resources in our environment. From March 02, 2019 our deployment is failing with the below error.\r\n\r\nERROR:\r\n=====\r\nTASK [immutable_server : target group for analytics-tst-plebos loadbalancer] ***\r\nAn exception occurred during task execution. To see the full traceback, use -vvv. The error was: InvalidConfigurationRequestException:\r\nAn error occurred (InvalidConfigurationRequest) when calling the ModifyTargetGroupAttributes operation: \r\nStickiness type 'lb_cookie' is not supported for target groups with the TCP protocol\r\n17:21:08 fatal: [localhost]: FAILED! => {\"changed\": false, \"error\": {\"code\": \"InvalidConfigurationRequest\", \"message\": \"Stickiness type 'lb_cookie'\r\nis not supported for target groups with the TCP protocol\", \"type\": \"Sender\"}, \"msg\": \"An error occurred (InvalidConfigurationRequest) \r\nwhen calling the ModifyTargetGroupAttributes operation: Stickiness type 'lb_cookie' is not supported for target groups with the TCP protocol\", \r\n\"response_metadata\": {\"http_headers\": {\"connection\": \"close\", \"content-length\": \"359\", \"content-type\": \"text/xml\", \"date\": \"Tue, 03 Mar 2020 11:51:08 GMT\", \r\n\"x-amzn-requestid\": \"23b0ca87-e0fb-4b84-b93b-ae5b1363df53\"}, \"http_status_code\": 400, \"request_id\": \"23b0ca87-e0fb-4b84-b93b-ae5b1363df53\", \"retry_attempts\": 0}}\r\n\r\n##### ISSUE TYPE\r\n- Bug Report - Unable to disable stickiness not supported in NLB\r\n\r\n##### COMPONENT NAME\r\n- name: \"target group for {{ server_name }} loadbalancer\"\r\n elb_target_group:\r\n state: present\r\n name: \"{{ server_name }}-elb\"\r\n protocol: tcp\r\n port: 80\r\n target_type: instance\r\n deregistration_delay_timeout: 35\r\n modify_targets: False\r\n vpc_id: \"{{ vpc_out.vpcs.0.id }}\"\r\n health_check_protocol: \"{{ load_balancer_ping_protocol | default('http') }}\"\r\n health_check_port: \"{{ load_balancer_ping_port | default('80') }}\"\r\n health_check_path: \"{{ load_balancer_ping_path | default('/elb/ping')}}\"\r\n health_check_interval: 30\r\n unhealthy_threshold_count: 2\r\n healthy_threshold_count: 2\r\n stickiness_enabled: False\r\n tags: \"{{ aws.tags_as_dict }}\"\r\n register: target_group_out\r\n\r\n##### ANSIBLE VERSION\r\n```paste below\r\nAnsible version = 2.5.0\r\n```\r\n\r\n##### CONFIGURATION\r\n\r\n```paste below\r\n- name: \"target group for {{ server_name }} loadbalancer\"\r\n 
elb_target_group:\r\n state: present\r\n name: \"{{ server_name }}-elb\"\r\n protocol: tcp\r\n port: 80\r\n target_type: instance\r\n deregistration_delay_timeout: 35\r\n modify_targets: False\r\n vpc_id: \"{{ vpc_out.vpcs.0.id }}\"\r\n health_check_protocol: \"{{ load_balancer_ping_protocol | default('http') }}\"\r\n health_check_port: \"{{ load_balancer_ping_port | default('80') }}\"\r\n health_check_path: \"{{ load_balancer_ping_path | default('/elb/ping')}}\"\r\n health_check_interval: 30\r\n unhealthy_threshold_count: 2\r\n healthy_threshold_count: 2\r\n stickiness_enabled: False\r\n tags: \"{{ aws.tags_as_dict }}\"\r\n register: target_group_out\r\n\r\n```\r\n\r\n##### OS / ENVIRONMENT\r\nUbuntu 18.04 LTS / AWS environment\r\n\r\n\r\n##### STEPS TO REPRODUCE\r\nKindly use the below playbook to deploy loadbalancer using Ansible on AWS cloud.\r\n\r\n\r\n```yaml\r\n- name: \"target group for {{ server_name }} loadbalancer\"\r\n elb_target_group:\r\n state: present\r\n name: \"{{ server_name }}-elb\"\r\n protocol: tcp\r\n port: 80\r\n target_type: instance\r\n deregistration_delay_timeout: 35\r\n modify_targets: False\r\n vpc_id: \"{{ vpc_out.vpcs.0.id }}\"\r\n health_check_protocol: \"{{ load_balancer_ping_protocol | default('http') }}\"\r\n health_check_port: \"{{ load_balancer_ping_port | default('80') }}\"\r\n health_check_path: \"{{ load_balancer_ping_path | default('/elb/ping')}}\"\r\n health_check_interval: 30\r\n unhealthy_threshold_count: 2\r\n healthy_threshold_count: 2\r\n stickiness_enabled: False\r\n tags: \"{{ aws.tags_as_dict }}\"\r\n register: target_group_out\r\n```\r\n\r\n\r\n\r\n##### EXPECTED RESULTS\r\nAn AWS Network loadbalancer will be created.\r\n\r\n\r\n##### ACTUAL RESULTS\r\nThe deployment fails with below error.\r\n\r\n\r\n```paste below\r\n TASK [immutable_server : target group for analytics-tst-plebos loadbalancer] ***\r\n17:21:08 An exception occurred during task execution. To see the full traceback, use -vvv. The error was: InvalidConfigurationRequestException:\r\nAn error occurred (InvalidConfigurationRequest) when calling the ModifyTargetGroupAttributes operation: \r\nStickiness type 'lb_cookie' is not supported for target groups with the TCP protocol\r\n17:21:08 fatal: [localhost]: FAILED! 
=> {\"changed\": false, \"error\": {\"code\": \"InvalidConfigurationRequest\", \"message\": \"Stickiness type 'lb_cookie'\r\nis not supported for target groups with the TCP protocol\", \"type\": \"Sender\"}, \"msg\": \"An error occurred (InvalidConfigurationRequest) \r\nwhen calling the ModifyTargetGroupAttributes operation: Stickiness type 'lb_cookie' is not supported for target groups with the TCP protocol\", \r\n\"response_metadata\": {\"http_headers\": {\"connection\": \"close\", \"content-length\": \"359\", \"content-type\": \"text/xml\", \"date\": \"Tue, 03 Mar 2020 11:51:08 GMT\", \r\n\"x-amzn-requestid\": \"23b0ca87-e0fb-4b84-b93b-ae5b1363df53\"}, \"http_status_code\": 400, \"request_id\": \"23b0ca87-e0fb-4b84-b93b-ae5b1363df53\", \"retry_attempts\": 0}}\r\n\r\n```\r\n\r\n##### References\r\nI can see a similar issue occurred for terraform users as well.\r\n\r\nhttps://github.com/terraform-providers/terraform-provider-aws/issues/10494\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {}, "own_code_loc": [{"Loc": [20], "path": null}], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "3", "info_type": "Code"}, "loctype": {"code": [null], "doc": [], "test": [], "config": [], "asset": []}} @@ -1002,14 +1002,14 @@ {"organization": "THUDM", "repo_name": "ChatGLM-6B", "base_commit": "a14bc1d32452d92613551eb5d523e00950913710", "is_iss": 0, "iss_html_url": "https://github.com/THUDM/ChatGLM-6B/issues/353", "iss_label": "enhancement", "title": "[Help] \u5982\u4f55\u652f\u6301\u591a\u663e\u5361", "body": "### Is there an existing issue for this?\n\n- [X] I have searched the existing issues\n\n### Current Behavior\n\n\u516c\u53f8\u5185\u90e8\u4f7f\u7528\uff0c\u88c5\u4e862\u5361\uff0c\u53d1\u73b0\u9ed8\u8ba4\u914d\u7f6e\u53ea\u67091\u5361\u5728\u8dd1\uff0c\u8bf7\u95ee\u5982\u4f55\u4f7f\u7528\u624d\u53ef\u4ee5\u4f7f\u7528\u591a\u5361\n\n### Expected Behavior\n\n_No response_\n\n### Steps To Reproduce\n\n\u65e0\n\n### Environment\n\n```markdown\nOS: Ubuntu 20.04\r\nPython: 3.8\r\nTransformers: 4.26.1\r\nPyTorch: 1.12\r\nCUDA Support: True\n```\n\n\n### Anything else?\n\n_No response_", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "a14bc1d32452d92613551eb5d523e00950913710", "files": [{"path": "README.md", "Loc": {}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3\n\u5982\u4f55\u652f\u6301\u591a\u663e\u5361", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "max_topk": 0, "file_topk": 0, "loctype": {"code": [], "doc": ["README.md"], "test": [], "config": [], "asset": []}} {"organization": "huggingface", "repo_name": "transformers", "base_commit": "34f28b2a1342fd72c2e4d4e5613855bfb9f35d34", "is_iss": 0, "iss_html_url": "https://github.com/huggingface/transformers/issues/1225", "iss_label": "wontfix", "title": "Bert output last hidden state", "body": "## \u2753 Questions & Help\r\n\r\nHi,\r\n\r\nSuppose we have an utterance of length 24 (considering special tokens) and we right-pad it with 0 to max length of 64.\r\nIf we use Bert pertained model to get the last hidden states, the output would be of size [1, 64, 768]. \r\nCan we use just the first 24 as the hidden states of the utterance? 
I mean is it right to say that the output[0, :24, :] has all the required information?\r\nI realized that from index 24:64, the outputs has float values as well.", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "34f28b2a1342fd72c2e4d4e5613855bfb9f35d34", "files": [{"path": "src/transformers/models/bert/modeling_bert.py", "Loc": {"('BertSelfAttention', 'forward', 276)": {"mod": [279]}}, "status": "modified"}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "5", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "max_topk": 1, "file_topk": 1, "loctype": {"code": ["src/transformers/models/bert/modeling_bert.py"], "doc": [], "test": [], "config": [], "asset": []}} {"organization": "huggingface", "repo_name": "transformers", "base_commit": "82c7e879876822864b5ceaf2c99eb01159266bcd", "is_iss": 0, "iss_html_url": "https://github.com/huggingface/transformers/issues/27200", "iss_label": "", "title": "dataset download error in speech recognition examples", "body": "### System Info\n\n- `transformers` version: 4.35.0.dev0\r\n- Platform: Linux-5.15.0-43-generic-x86_64-with-glibc2.17\r\n- Python version: 3.8.18\r\n- Huggingface_hub version: 0.17.3\r\n- Safetensors version: 0.4.0\r\n- Accelerate version: 0.24.1\r\n- Accelerate config: not found\r\n- PyTorch version (GPU?): 1.10.0+cu111 (True)\r\n- Tensorflow version (GPU?): not installed (NA)\r\n- Flax version (CPU?/GPU?/TPU?): not installed (NA)\r\n- Jax version: not installed\r\n- JaxLib version: not installed\r\n- Using GPU in script?: \r\n- Using distributed or parallel set-up in script?: \n\n### Who can help?\n\n@stevhliu and @MKhalusova\n\n### Information\n\n- [x] The official example scripts\n- [ ] My own modified scripts\n\n### Tasks\n\n- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)\n- [ ] My own task or dataset (give details below)\n\n### Reproduction\n\nCUDA_VISIBLE_DEVICES=0 python run_speech_recognition_ctc.py \\\r\n\t--dataset_name=\"common_voice\" \\\r\n\t--model_name_or_path=\"facebook/wav2vec2-large-xlsr-53\" \\\r\n\t--dataset_config_name=\"tr\" \\\r\n\t--output_dir=\"./wav2vec2-common_voice-tr-demo\" \\\r\n\t--overwrite_output_dir \\\r\n\t--num_train_epochs=\"15\" \\\r\n\t--per_device_train_batch_size=\"16\" \\\r\n\t--gradient_accumulation_steps=\"2\" \\\r\n\t--learning_rate=\"3e-4\" \\\r\n\t--warmup_steps=\"500\" \\\r\n\t--evaluation_strategy=\"steps\" \\\r\n\t--text_column_name=\"sentence\" \\\r\n\t--length_column_name=\"input_length\" \\\r\n\t--save_steps=\"400\" \\\r\n\t--eval_steps=\"100\" \\\r\n\t--layerdrop=\"0.0\" \\\r\n\t--save_total_limit=\"3\" \\\r\n\t--freeze_feature_encoder \\\r\n\t--gradient_checkpointing \\\r\n\t--chars_to_ignore , ? . ! 
- \\; \\: \\\" \u201c % \u2018 \u201d \ufffd \\\r\n\t--fp16 \\\r\n\t--group_by_length \\\r\n\t--push_to_hub \\\r\n\t--do_train --do_eval \n\n### Expected behavior\n\nWhen I run the default command, which set `dataset_name` as \"common_voice\", and I got a warning:\r\n```\r\n/home/xintong/.cache/huggingface/modules/datasets_modules/datasets/common_voice/220833898d6a60c50f621126e51fb22eb2dfe5244392c70dccd8e6e2f055f4bf/common_voice.py:634: FutureWarning: \r\n This version of the Common Voice dataset is deprecated.\r\n You can download the latest one with\r\n >>> load_dataset(\"mozilla-foundation/common_voice_11_0\", \"en\")\r\n \r\n warnings.warn(\r\nGenerating train split: 0%| | 0/1831 [00:00>> from transformers import AutoTokenizer, FlaxAutoModelForCausalLM\r\n\r\n>>> tokenizer = AutoTokenizer.from_pretrained(\"gpt2-medium\")\r\n>>> model = FlaxAutoModelForCausalLM.from_pretrained(\"gpt2-medium\")\r\n>>> input_context = \"The dog\"\r\n>>> # encode input context\r\n>>> input_ids = tokenizer(input_context, return_tensors=\"jax\").input_ids\r\n>>> # generate candidates using sampling\r\n>>> outputs = model.generate(input_ids=input_ids, max_length=20, top_k=30, do_sample=True)\r\n\r\nTypeError: JAX only supports number and bool dtypes, got dtype object in array\r\n```\r\n\r\n@patrickvonplaten @patil-suraj ", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "0e82f0cbc28b41b3d87a5e4069dc0e20bacc2494", "files": [{"path": "src/transformers/models/gpt2/modeling_flax_gpt2.py", "Loc": {"('FlaxGPT2LMHeadModule', None, 553)": {"mod": []}}, "status": "modified"}, {"path": "src/transformers/models/gpt2/tokenization_gpt2_fast.py", "Loc": {"('GPT2TokenizerFast', None, 70)": {"mod": []}}, "status": "modified"}, {"Loc": {"": [6, 7]}, "path": null}]}, "own_code_loc": [{"Loc": [6, 7], "path": null}], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "3", "info_type": "Code"}, "max_topk": 2, "file_topk": 2, "loctype": {"code": [null, "src/transformers/models/gpt2/tokenization_gpt2_fast.py", "src/transformers/models/gpt2/modeling_flax_gpt2.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "huggingface", "repo_name": "transformers", "base_commit": "0e82f0cbc28b41b3d87a5e4069dc0e20bacc2494", "is_iss": 0, "iss_html_url": "https://github.com/huggingface/transformers/issues/12081", "iss_label": "", "title": "GPT2 Flax \"TypeError: JAX only supports number and bool dtypes, got dtype object in array\"", "body": "On GPU\r\n\r\n```\r\n>>> from transformers import AutoTokenizer, FlaxAutoModelForCausalLM\r\n\r\n>>> tokenizer = AutoTokenizer.from_pretrained(\"gpt2-medium\")\r\n>>> model = FlaxAutoModelForCausalLM.from_pretrained(\"gpt2-medium\")\r\n>>> input_context = \"The dog\"\r\n>>> # encode input context\r\n>>> input_ids = tokenizer(input_context, return_tensors=\"jax\").input_ids\r\n>>> # generate candidates using sampling\r\n>>> outputs = model.generate(input_ids=input_ids, max_length=20, top_k=30, do_sample=True)\r\n\r\nTypeError: JAX only supports number and bool dtypes, got dtype object in array\r\n```\r\n\r\n@patrickvonplaten @patil-suraj ", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "0e82f0cbc28b41b3d87a5e4069dc0e20bacc2494", "files": [{"path": "src/transformers/models/gpt2/modeling_flax_gpt2.py", "Loc": {"('FlaxGPT2LMHeadModule', None, 553)": {"mod": []}}, "status": "modified"}, {"path": 
"src/transformers/models/gpt2/tokenization_gpt2_fast.py", "Loc": {"('GPT2TokenizerFast', None, 70)": {"mod": []}}, "status": "modified"}, {"Loc": {"": {"mod": [6, 7]}}, "path": null}]}, "own_code_loc": [{"Loc": [6, 7], "path": null}], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "3", "info_type": "Code"}, "max_topk": 2, "file_topk": 2, "loctype": {"code": [null, "src/transformers/models/gpt2/tokenization_gpt2_fast.py", "src/transformers/models/gpt2/modeling_flax_gpt2.py"], "doc": [], "test": [], "config": [], "asset": []}} {"organization": "huggingface", "repo_name": "transformers", "base_commit": "322037e842e5e89080918c824998c17722df6f19", "is_iss": 0, "iss_html_url": "https://github.com/huggingface/transformers/issues/10079", "iss_label": "", "title": "Unclear error \"NotImplementedError: \"while saving tokenizer. How fix it?", "body": "Here is my tokenizer code and how I save it to a json file\" /content/bert-datas7.json\"\r\n\r\n````\r\nfrom tokenizers import normalizers\r\nfrom tokenizers.normalizers import Lowercase, NFD, StripAccents\r\n\r\nbert_tokenizer.pre_tokenizer = Whitespace()\r\n\r\nfrom tokenizers.processors import TemplateProcessing\r\n\r\nbert_tokenizer.post_processor = TemplateProcessing(\r\n single=\"[CLS] $A [SEP]\",\r\n pair=\"[CLS] $A [SEP] $B:1 [SEP]:1\",\r\n special_tokens=[\r\n (\"[CLS]\", 1),\r\n (\"[SEP]\", 2),\r\n (\"[PAD]\", 3),\r\n ],\r\n \r\n)\r\nfrom tokenizers.trainers import WordPieceTrainer\r\n\r\ntrainer = WordPieceTrainer(\r\n vocab_size=30522, special_tokens=[\"[UNK]\", \"[CLS]\", \"[SEP]\", \"[PAD]\", \"[MASK]\"], pad_to_max_length=True\r\n)\r\nfiles = [f\"/content/For_ITMO.txt\" for split in [\"test\", \"train\", \"valid\"]]\r\nbert_tokenizer.train(trainer, files)\r\n\r\nmodel_files = bert_tokenizer.model.save(\"data\", \"/content/For_ITMO.txt\")\r\n\r\nbert_tokenizer.model = WordPiece.from_file(*model_files, unk_token=\"[UNK]\", pad_to_max_length=True)\r\n\r\nbert_tokenizer.save(\"/content/bert-datas7.json\") \r\n````\r\n\r\nWhen I output tokenizer name_or_path = nothing is displayed. This is normal?\r\n\r\n\r\n````\r\ntokenizer = PreTrainedTokenizerFast(tokenizer_file='/content/bert-datas7.json')\r\ntokenizer.add_special_tokens({'pad_token': '[PAD]'})\r\n\r\nprint(tokenizer)\r\n>>> PreTrainedTokenizerFast(name_or_path='', vocab_size=1435, model_max_len=1000000000000000019884624838656, is_fast=True, padding_side='right', special_tokens={'pad_token': '[PAD]'})\r\n````\r\nAlso, when I try to save my tokenizer, I get an error without explanation. 
How can I rewrite the code so that all this???\r\n#9658 \r\n#10039 \r\n[For_ITMO.txt-vocab (1) (1).txt](https://github.com/huggingface/transformers/files/5945659/For_ITMO.txt-vocab.1.1.txt)\r\n \r\n````\r\ntokenizer.save_pretrained(\"/content/tokennizerrrr\")\r\n\r\nNotImplementedError Traceback (most recent call last)\r\n in ()\r\n----> 1 tokenizer.save_pretrained(\"/content/tokennizerrrr\")\r\n\r\n2 frames\r\n/usr/local/lib/python3.6/dist-packages/transformers/tokenization_utils_base.py in save_vocabulary(self, save_directory, filename_prefix)\r\n 2042 :obj:`Tuple(str)`: Paths to the files saved.\r\n 2043 \"\"\"\r\n-> 2044 raise NotImplementedError\r\n 2045 \r\n 2046 def tokenize(self, text: str, pair: Optional[str] = None, add_special_tokens: bool = False, **kwargs) -> List[str]:\r\n\r\nNotImplementedError: \r\n````\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "322037e842e5e89080918c824998c17722df6f19", "files": [{"path": "src/transformers/tokenization_utils_fast.py", "Loc": {"('PreTrainedTokenizerFast', '_save_pretrained', 505)": {"mod": [509]}}, "status": "modified"}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "max_topk": 1, "file_topk": 1, "loctype": {"code": ["src/transformers/tokenization_utils_fast.py"], "doc": [], "test": [], "config": [], "asset": []}} {"organization": "huggingface", "repo_name": "transformers", "base_commit": "77a257fc210a56f1fd0d75166ecd654cf58111f3", "is_iss": 1, "iss_html_url": "https://github.com/huggingface/transformers/issues/8403", "iss_label": "", "title": "[s2s finetune] huge increase in memory demands with --fp16 native amp", "body": "While working on https://github.com/huggingface/transformers/issues/8353 I discovered that `--fp16` causes a 10x+ increase in gpu memory demands.\r\n\r\ne.g. I can run bs=12 w/o `--fp16` \r\n\r\n```\r\ncd examples/seq2seq\r\nexport BS=12; rm -rf distilbart-cnn-12-6; python finetune.py --learning_rate=3e-5 --gpus 1 \\\r\n--do_train --do_predict --val_check_interval 0.25 --n_val 500 --num_train_epochs 2 --freeze_encoder \\\r\n--freeze_embeds --data_dir cnn_dm --max_target_length 142 --val_max_target_length=142 \\\r\n--train_batch_size=$BS --eval_batch_size=$BS --gradient_accumulation_steps 1 \\\r\n--model_name_or_path sshleifer/student_cnn_12_6 --tokenizer_name facebook/bart-large \\\r\n--warmup_steps 500 --output_dir distilbart-cnn-12-6\r\n\r\n```\r\nBut if I add:\r\n```\r\n--fp16\r\n```\r\n\r\n(w/ or w/o `--fp16_opt_level O1`)\r\n\r\nI get OOM even with bs=1 on a 8GB card and it barely manages on a 24GB card - I think the increase in memory demand is more than 10x.\r\n\r\nThe OOM either right away when it does the sanity check step, or after just 10-20 batches - so within a few secs\r\n\r\nThis is with pytorch-1.6. Same goes for pytorch-1.7 and 1.8-nightly.\r\n\r\nI wasn't able to test `--fp16` with pytorch-1.5, since I can't build apex on ubuntu-20.04. Without `--fp16` pytorch-1.5 works the same as pytorch-1.6 gpu memory-wise.\r\n\r\nI tested with pytorch-1.5 + apex and there is no problem there. 
Memory consumption is about half.\r\n\r\nHere is the table of the batch sizes that fit into a 8gb rtx-1070 (bigger BS leads to an instant OOM):\r\n\r\nbs | version\r\n---|--------\r\n12 | pt15\r\n20 | pt15+fp16\r\n12 | pt16\r\n1 | pt16+fp16\r\n\r\n\r\n\r\nIf you'd like to reproduce the problem here are the full steps:\r\n\r\n```\r\n# prep library\r\ngit clone https://github.com/huggingface/transformers\r\ncd transformers\r\npip install -e .[dev]\r\npip install -r examples/requirements.txt\r\ncd examples/seq2seq\r\n\r\n# prep data\r\nwget https://cdn-datasets.huggingface.co/summarization/cnn_dm_v2.tgz\r\ntar -xzvf cnn_dm_v2.tgz # empty lines removed\r\nmv cnn_cln cnn_dm\r\n\r\n# run\r\nexport BS=12; \r\nrm -rf distilbart-cnn-12-6\r\npython finetune.py --learning_rate=3e-5 --gpus 1 \\\r\n--do_train --do_predict --val_check_interval 0.25 --n_val 500 --num_train_epochs 2 --freeze_encoder \\\r\n--freeze_embeds --data_dir cnn_dm --max_target_length 142 --val_max_target_length=142 \\\r\n--train_batch_size=$BS --eval_batch_size=$BS --gradient_accumulation_steps 1 \\\r\n--model_name_or_path sshleifer/student_cnn_12_6 --tokenizer_name facebook/bart-large \\\r\n--warmup_steps 500 --output_dir distilbart-cnn-12-6 \r\n```\r\n\r\nThis issue is to track the problem and hopefully finding a solution.\r\n\r\n@sshleifer ", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/pytorch/pytorch/commit/57bffc3a8e4fee0cce31e1ff1f662ccf7b16db57", "file_loc": {}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [{"org": "pytorch", "pro": "pytorch", "path": ["{'base_commit': '57bffc3a8e4fee0cce31e1ff1f662ccf7b16db57', 'files': [{'path': 'aten/src/ATen/autocast_mode.cpp', 'status': 'modified', 'Loc': {\"(None, 'cached_cast', 67)\": {'mod': [71]}}}, {'path': 'test/test_cuda.py', 'status': 'modified', 'Loc': {\"('TestCuda', None, 92)\": {'add': [2708]}}}]}"]}], "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "commit", "loc_scope": "2", "info_type": "Code"}, "loctype": {"code": ["aten/src/ATen/autocast_mode.cpp"], "doc": [], "test": ["test/test_cuda.py"], "config": [], "asset": ["pytorch"]}} {"organization": "huggingface", "repo_name": "transformers", "base_commit": "1a688709b34b10bd372e3e0860c8d39d170ebf53", "is_iss": 0, "iss_html_url": "https://github.com/huggingface/transformers/issues/17201", "iss_label": "", "title": "a memory leak in qqp prediction using bart", "body": "### System Info\n\n```shell\n- `transformers` version: 4.19.0.dev0\r\n- Platform: Linux-5.11.0-43-generic-x86_64-with-glibc2.17\r\n- Python version: 3.8.10\r\n- Huggingface_hub version: 0.4.0\r\n- PyTorch version (GPU?): 1.10.1 (True)\r\n- Tensorflow version (GPU?): not installed (NA)\r\n- Flax version (CPU?/GPU?/TPU?): not installed (NA)\r\n- Jax version: not installed\r\n- JaxLib version: not installed\r\n- Using GPU in script?: Yes\r\n- Using distributed or parallel set-up in script?: No\n```\n\n\n### Who can help?\n\n@sgugger\n\n### Information\n\n- [X] The official example scripts\n- [ ] My own modified scripts\n\n### Tasks\n\n- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)\n- [ ] My own task or dataset (give details below)\n\n### Reproduction\n\nI met the same issue #11011. If not using `--eval_accumulation_steps`, it caused CUDA out of memory. If using it, it caused out of RAM and killed by system.\r\n\r\nI only did prediction on GLUE QQP dataset using bart without fine-tuning. 
Considering QQP having a large test set (300k), the prediction got slower and slower, and finally got out of memory.\r\n\r\nThis is the script to reproduce:\r\n```\r\nCUDA_VISIBLE_DEVICES=0 python run_glue.py --model_name_or_path facebook/bart-large --task_name qqp --output_dir bart-large_qqp --eval_accumulation_steps 100 --do_predict --per_device_eval_batch_size 24\r\n```\n\n### Expected behavior\n\n```shell\nPrediction without out memory.\n```\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "1a688709b34b10bd372e3e0860c8d39d170ebf53", "files": [{"path": "src/transformers/trainer.py", "Loc": {"('Trainer', 'evaluation_loop', 2549)": {"mod": [2635]}}, "status": "modified"}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "2\nOr\n5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "max_topk": 1, "file_topk": 1, "loctype": {"code": ["src/transformers/trainer.py"], "doc": [], "test": [], "config": [], "asset": []}} {"organization": "huggingface", "repo_name": "transformers", "base_commit": "cef2e40e0f8eaad13b8d32817a48fdddc32eb2a5", "is_iss": 0, "iss_html_url": "https://github.com/huggingface/transformers/issues/28435", "iss_label": "", "title": "Skip some weights for load_in_8bit and keep them as fp16/32?", "body": "### Feature request\r\n\r\nHello,\r\n\r\nI am looking for a way to load a checkpoint where I only load some of the weights in 8 bit and keep others in 16/32 bit.\r\n\r\n### Motivation\r\n\r\nMy motivation is for vision-language models like Llava or BLIP2 where I want to load the LLM part in 8 bit but the image encoder should stay in 16 bit because I notice performance degradations with CLIP in 8 bit and also want to be able to train this part without LoRA.\r\n\r\nAs far as I can see in the documentation, issues and with Google (both here and for bitsandbytes), there is currently no way to do this.\r\n\r\n### Your contribution\r\n\r\nI can in theory help implement something like this but I don't know where and how in the code this should be done.", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "cef2e40e0f8eaad13b8d32817a48fdddc32eb2a5", "files": [{"path": "src/transformers/modeling_utils.py", "Loc": {"('PreTrainedModel', 'from_pretrained', 2528)": {"mod": [3524]}}, "status": "modified"}, {"path": "src/transformers/utils/quantization_config.py", "Loc": {"('BitsAndBytesConfig', None, 151)": {"mod": [176]}}, "status": "modified"}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "max_topk": 2, "file_topk": 2, "loctype": {"code": ["src/transformers/modeling_utils.py", "src/transformers/utils/quantization_config.py"], "doc": [], "test": [], "config": [], "asset": []}} {"organization": "huggingface", "repo_name": "transformers", "base_commit": "705ca7f21b2b557e0cfd5d0853b297fa53489d20", "is_iss": 1, "iss_html_url": "https://github.com/huggingface/transformers/issues/14938", "iss_label": "", "title": "Question: Object of type EncoderDecoderConfig is not JSON serializable", "body": "Hi.\r\nAn error occurred when I used Trainer to train and save EncoderDecoderModel.\r\n\r\n```python\r\n File \"/home/jwli/ljw/study/hotpotqa/roberta_seq2seq/roberta_for_seq2seq.py\", line 482, in \r\n run(model_args, data_args, training_args)\r\n File \"/home/jwli/ljw/study/hotpotqa/roberta_seq2seq/roberta_for_seq2seq.py\", line 
465, in run\r\n train_result = trainer.train(resume_from_checkpoint=checkpoint)\r\n File \"/home/jwli/anaconda3/envs/study/lib/python3.7/site-packages/transformers/trainer.py\", line 1391, in train\r\n self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_keys_for_eval)\r\n File \"/home/jwli/anaconda3/envs/study/lib/python3.7/site-packages/transformers/trainer.py\", line 1495, in _maybe_log_save_evaluate\r\n self._save_checkpoint(model, trial, metrics=metrics)\r\n File \"/home/jwli/anaconda3/envs/study/lib/python3.7/site-packages/transformers/trainer.py\", line 1557, in _save_checkpoint\r\n self.save_model(output_dir)\r\n File \"/home/jwli/anaconda3/envs/study/lib/python3.7/site-packages/transformers/trainer.py\", line 1961, in save_model\r\n self._save(output_dir)\r\n File \"/home/jwli/anaconda3/envs/study/lib/python3.7/site-packages/transformers/trainer.py\", line 2009, in _save\r\n self.model.save_pretrained(output_dir, state_dict=state_dict)\r\n File \"/home/jwli/anaconda3/envs/study/lib/python3.7/site-packages/transformers/modeling_utils.py\", line 1053, in save_pretrained\r\n model_to_save.config.save_pretrained(save_directory)\r\n File \"/home/jwli/anaconda3/envs/study/lib/python3.7/site-packages/transformers/configuration_utils.py\", line 416, in save_pretrained\r\n self.to_json_file(output_config_file, use_diff=True)\r\n File \"/home/jwli/anaconda3/envs/study/lib/python3.7/site-packages/transformers/configuration_utils.py\", line 739, in to_json_file\r\n writer.write(self.to_json_string(use_diff=use_diff))\r\n File \"/home/jwli/anaconda3/envs/study/lib/python3.7/site-packages/transformers/configuration_utils.py\", line 725, in to_json_string\r\n return json.dumps(config_dict, indent=2, sort_keys=True) + \"\\n\"\r\n File \"/home/jwli/anaconda3/envs/study/lib/python3.7/json/__init__.py\", line 238, in dumps\r\n **kw).encode(obj)\r\n File \"/home/jwli/anaconda3/envs/study/lib/python3.7/json/encoder.py\", line 201, in encode\r\n chunks = list(chunks)\r\n File \"/home/jwli/anaconda3/envs/study/lib/python3.7/json/encoder.py\", line 431, in _iterencode\r\n yield from _iterencode_dict(o, _current_indent_level)\r\n File \"/home/jwli/anaconda3/envs/study/lib/python3.7/json/encoder.py\", line 405, in _iterencode_dict\r\n yield from chunks\r\n File \"/home/jwli/anaconda3/envs/study/lib/python3.7/json/encoder.py\", line 438, in _iterencode\r\n o = _default(o)\r\n File \"/home/jwli/anaconda3/envs/study/lib/python3.7/json/encoder.py\", line 179, in default\r\n raise TypeError(f'Object of type {o.__class__.__name__} '\r\nTypeError: Object of type EncoderDecoderConfig is not JSON serializable\r\n```\r\nMy model and Config define the following code. 
\r\n```python\r\n tokenizer = RobertaTokenizerFast.from_pretrained(model_args.tokenizer_name)\r\n encoder_config = RobertaConfig.from_pretrained(model_args.encoder_model_name_or_path)\r\n decoder_config = RobertaConfig.from_pretrained(model_args.decoder_model_name_or_path)\r\n encoder_decoder_config = EncoderDecoderConfig.from_encoder_decoder_configs(encoder_config, decoder_config)\r\n model = RobertaForSeq2Seq.from_encoder_decoder_pretrained(model_args.encoder_model_name_or_path,\r\n model_args.decoder_model_name_or_path,\r\n config=encoder_decoder_config, tie_encoder_decoder=True)\r\n model.config.decoder_start_token_id = tokenizer.bos_token_id\r\n model.config.eos_token_id = tokenizer.eos_token_id\r\n model.config.max_length = 64\r\n model.config.early_stopping = True\r\n model.config.no_repeat_ngram_size = 3\r\n model.config.length_penalty = 2.0\r\n model.config.num_beams = 4\r\n model.config.pad_token_id = tokenizer.pad_token_id\r\n```\r\nThis error occurred because EncoderDecoderConfig cannot be converted to json format. But I don't know how to modify it.\r\n```python\r\nERROR OCCURRED:\r\n\r\n if use_diff is True:\r\n config_dict = self.to_diff_dict()\r\n else:\r\n config_dict = self.to_dict()\r\n return json.dumps(config_dict, indent=2, sort_keys=True) + \"\\n\"\r\n```\r\n\r\nI look forward to your help! Thanks!\r\n @jplu @patrickvonplaten ", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {}, "own_code_loc": [{"Loc": [46, 47], "path": null}], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "3", "info_type": "Code"}, "loctype": {"code": [null], "doc": [], "test": [], "config": [], "asset": []}} {"organization": "huggingface", "repo_name": "transformers", "base_commit": "45d21502f0b67eb8a5ad244d469dcc0dfb7517a7", "is_iss": 0, "iss_html_url": "https://github.com/huggingface/transformers/issues/653", "iss_label": "", "title": "Different Results from version 0.4.0 to version 0.5.0", "body": "Hi, I found the results after training is different from version 0.4.0 to version 0.5.0. I have fixed all initialization to reproduce the results. And I also test version 0.2.0 and 0.3.0, the results are the same to version 0.4.0, but from version 0.5.0 +, the results is different. I am wondering that have you trained a new model, so the weights changed? 
", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "45d21502f0b67eb8a5ad244d469dcc0dfb7517a7", "files": [{"path": "pytorch_pretrained_bert/modeling.py", "Loc": {"('BertPreTrainedModel', 'init_bert_weights', 515)": {"mod": []}}, "status": "modified"}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "max_topk": 1, "file_topk": 1, "loctype": {"code": ["pytorch_pretrained_bert/modeling.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "huggingface", "repo_name": "transformers", "base_commit": "1c8c2d9ab34b8c8d326db9e0608f8e54cfccb885", "is_iss": 0, "iss_html_url": "https://github.com/huggingface/transformers/issues/10202", "iss_label": "", "title": "Fast Tokenizers instantiated via vocab/merge files do not respect skip_special_tokens=True", "body": "## Environment info\r\n- `transformers` version: 4.3.2\r\n- Platform: macOS-11.2.1-x86_64-i386-64bit\r\n- Python version: 3.9.1\r\n- PyTorch version (GPU?): 1.7.1 (False)\r\n- Tensorflow version (GPU?): not installed (NA)\r\n- Using GPU in script?: No\r\n- Using distributed or parallel set-up in script?: No\r\n\r\n## Information\r\n\r\nSee title; this issue does not reproduce with slow tokenizers. Does not reproduce with serialized tokenizers.\r\n\r\nFound while investigating https://github.com/minimaxir/aitextgen/issues/88\r\n\r\n## To reproduce\r\n\r\nUsing [gpt2_merges.txt](https://github.com/minimaxir/aitextgen/blob/master/aitextgen/static/gpt2_merges.txt) and [gpt2_vocab.json](https://github.com/minimaxir/aitextgen/blob/master/aitextgen/static/gpt2_vocab.json) as linked:\r\n\r\n```py\r\nfrom transformers import AutoModelForCausalLM, GPT2Tokenizer, GPT2TokenizerFast\r\n\r\nmodel = AutoModelForCausalLM.from_pretrained(\"distilgpt2\")\r\n\r\noutputs = model.generate(max_length=40)\r\n\r\n# tensor([[50256, 383, 471, 13, 50, 13, 2732, 286, 4796, 468,\r\n# 587, 10240, 262, 1918, 286, 257, 1966, 5349, 5797, 508,\r\n# 373, 2823, 290, 2923, 416, 257, 23128, 287, 262, 471,\r\n# 13, 50, 13, 13241, 319, 3583, 13, 198, 198, 198]])\r\n\r\ntokenizer_fast = GPT2TokenizerFast(vocab_file=\"gpt2_vocab.json\", merges_file=\"gpt2_merges.txt\")\r\ntokenizer_fast.decode(outputs[0], skip_special_tokens=True)\r\n\r\n# '<|endoftext|> The U.S. Department of Justice has been investigating the death of a former FBI agent who was shot and killed by a gunman in the U.S. Capitol on Wednesday.\\n\\n\\n'\r\n\r\ntokenizer_slow = GPT2Tokenizer(vocab_file=\"gpt2_vocab.json\", merges_file=\"gpt2_merges.txt\")\r\ntokenizer_slow.decode(outputs[0], skip_special_tokens=True)\r\n\r\n# ' The U.S. Department of Justice has been investigating the death of a former FBI agent who was shot and killed by a gunman in the U.S. 
Capitol on Wednesday.\\n\\n\\n'\r\n\r\n```\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "1c8c2d9ab34b8c8d326db9e0608f8e54cfccb885", "files": [{"path": "src/transformers/tokenization_utils_base.py", "Loc": {"('SpecialTokensMixin', 'add_special_tokens', 900)": {"mod": []}}, "status": "modified"}, {"Loc": {"": [33]}, "path": null}]}, "own_code_loc": [{"Loc": [33], "path": null}], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "5", "loc_way": "Comment points out the problem in the user's code and gives the API that needs to be used\nProblem in the user's own code; another issue points out the commit\nI think this is happening because when you load it from the vocab and merge files, it doesn't know <|endoftext|> is a special token. For the skip_special_tokens to work, I believe it would be necessary to add them to the tokenizer:\ntokenizer_fast.add_special_tokens({\n \"additional_special_tokens\": \"<|endoftext|>\"\n})\n", "loc_scope": "0", "info_type": "Code"}, "max_topk": 1, "file_topk": 1, "loctype": {"code": ["src/transformers/tokenization_utils_base.py", null], "doc": [], "test": [], "config": [], "asset": []}}
+{"organization": "huggingface", "repo_name": "transformers", "base_commit": "1c8c2d9ab34b8c8d326db9e0608f8e54cfccb885", "is_iss": 0, "iss_html_url": "https://github.com/huggingface/transformers/issues/10202", "iss_label": "", "title": "Fast Tokenizers instantiated via vocab/merge files do not respect skip_special_tokens=True", "body": "## Environment info\r\n- `transformers` version: 4.3.2\r\n- Platform: macOS-11.2.1-x86_64-i386-64bit\r\n- Python version: 3.9.1\r\n- PyTorch version (GPU?): 1.7.1 (False)\r\n- Tensorflow version (GPU?): not installed (NA)\r\n- Using GPU in script?: No\r\n- Using distributed or parallel set-up in script?: No\r\n\r\n## Information\r\n\r\nSee title; this issue does not reproduce with slow tokenizers. Does not reproduce with serialized tokenizers.\r\n\r\nFound while investigating https://github.com/minimaxir/aitextgen/issues/88\r\n\r\n## To reproduce\r\n\r\nUsing [gpt2_merges.txt](https://github.com/minimaxir/aitextgen/blob/master/aitextgen/static/gpt2_merges.txt) and [gpt2_vocab.json](https://github.com/minimaxir/aitextgen/blob/master/aitextgen/static/gpt2_vocab.json) as linked:\r\n\r\n```py\r\nfrom transformers import AutoModelForCausalLM, GPT2Tokenizer, GPT2TokenizerFast\r\n\r\nmodel = AutoModelForCausalLM.from_pretrained(\"distilgpt2\")\r\n\r\noutputs = model.generate(max_length=40)\r\n\r\n# tensor([[50256, 383, 471, 13, 50, 13, 2732, 286, 4796, 468,\r\n# 587, 10240, 262, 1918, 286, 257, 1966, 5349, 5797, 508,\r\n# 373, 2823, 290, 2923, 416, 257, 23128, 287, 262, 471,\r\n# 13, 50, 13, 13241, 319, 3583, 13, 198, 198, 198]])\r\n\r\ntokenizer_fast = GPT2TokenizerFast(vocab_file=\"gpt2_vocab.json\", merges_file=\"gpt2_merges.txt\")\r\ntokenizer_fast.decode(outputs[0], skip_special_tokens=True)\r\n\r\n# '<|endoftext|> The U.S. Department of Justice has been investigating the death of a former FBI agent who was shot and killed by a gunman in the U.S. Capitol on Wednesday.\\n\\n\\n'\r\n\r\ntokenizer_slow = GPT2Tokenizer(vocab_file=\"gpt2_vocab.json\", merges_file=\"gpt2_merges.txt\")\r\ntokenizer_slow.decode(outputs[0], skip_special_tokens=True)\r\n\r\n# ' The U.S. Department of Justice has been investigating the death of a former FBI agent who was shot and killed by a gunman in the U.S. 
Capitol on Wednesday.\\n\\n\\n'\r\n\r\n```\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "1c8c2d9ab34b8c8d326db9e0608f8e54cfccb885", "files": [{"path": "src/transformers/tokenization_utils_base.py", "Loc": {"('SpecialTokensMixin', 'add_special_tokens', 900)": {"mod": []}}, "status": "modified"}, {"Loc": {"": {"mod": [33]}}, "path": null}]}, "own_code_loc": [{"Loc": [33], "path": null}], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "5", "loc_way": "Comment points out the problem in the user's code and gives the API that needs to be used\nProblem in the user's own code; another issue points out the commit\nI think this is happening because when you load it from the vocab and merge files, it doesn't know <|endoftext|> is a special token. For the skip_special_tokens to work, I believe it would be necessary to add them to the tokenizer:\ntokenizer_fast.add_special_tokens({\n \"additional_special_tokens\": \"<|endoftext|>\"\n})\n", "loc_scope": "0", "info_type": "Code"}, "max_topk": 1, "file_topk": 1, "loctype": {"code": ["src/transformers/tokenization_utils_base.py", null], "doc": [], "test": [], "config": [], "asset": []}}
{"organization": "huggingface", "repo_name": "transformers", "base_commit": "5bcbdff15922b1d0eeb035879630ca61c292122a", "is_iss": 0, "iss_html_url": "https://github.com/huggingface/transformers/issues/32661", "iss_label": "bug", "title": "RoBERTa config defaults are inconsistent with fairseq implementation", "body": "### System Info\n\n python 3.12, transformers 4.14, latest mac os\n\n### Who can help?\n\n_No response_\n\n### Information\n\n- [ ] The official example scripts\n- [ ] My own modified scripts\n\n### Tasks\n\n- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)\n- [ ] My own task or dataset (give details below)\n\n### Reproduction\n\nfrom transformers import RobertaConfig\r\nmy_config = RobertaConfig()\r\nroberta_config = RobertaConfig.from_pretrained(\"roberta-base\")\r\n\r\nassert (\r\n my_config.max_position_embeddings == roberta_config.max_position_embeddings\r\n), \"%d %d\" % (my_config.max_position_embeddings, roberta_config.max_position_embeddings)\n\n### Expected behavior\n\nThe config defaults should correspond the the base model?\r\n\r\nThis is an implementation detail, but it did send me on a debugging spree as it hid as a sticky CUDA assertion error.\r\n```Assertion `srcIndex < srcSelectDimSize` failed```\r\n\r\nThe problem is that by default if you create the position_ids yourself or if you let transformers roberta_modelling take care of it (it also does it the way fairseq implemented it), it will create indeces that are out of bounds with the default configuration as everything is shifted by pad_token_id.\r\n\r\nThis is more of a heads up. 
Do transformers generally provide defaults aligned with the original models, or are the defaults here meant to be agnostic of that?", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "5bcbdff15922b1d0eeb035879630ca61c292122a", "files": [{"path": "src/transformers/models/roberta/configuration_roberta.py", "Loc": {"('RobertaConfig', None, 29)": {"mod": [59]}}, "status": "modified"}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "2", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "max_topk": 1, "file_topk": 1, "loctype": {"code": ["src/transformers/models/roberta/configuration_roberta.py"], "doc": [], "test": [], "config": [], "asset": []}} {"organization": "geekan", "repo_name": "MetaGPT", "base_commit": "f0df3144d68ed288f5ccce0c34d3939f8462ba98", "is_iss": 0, "iss_html_url": "https://github.com/geekan/MetaGPT/issues/1345", "iss_label": "", "title": "Not able to run any MetaGPT examples", "body": "Referred Issue #1322 , but not able to resolve the issue. I added azure based api endpoint and api key in config2.yaml\r\n\r\n\r\n\u2502 105 \u2502 \u2502 typer.echo(\"Missing argument 'IDEA'. Run 'metagpt --help' for more information.\" \u2502\r\n\u2502 106 \u2502 \u2502 raise typer.Exit() \u2502\r\n\u2502 107 \u2502 \u2502\r\n\u2502 \u2771 108 \u2502 return generate_repo( \u2502\r\n\u2502 109 \u2502 \u2502 idea, \u2502\r\n\u2502 110 \u2502 \u2502 investment, \u2502\r\n\u2502 111 \u2502 \u2502 n_round, \u2502\r\n\u2502 \u2502\r\n\\metagpt\\software_company.py:30 in generate_repo \u2502\r\n\u2502 \u2502\r\n\u2502 27 \u2502 recover_path=None, \u2502\r\n\u2502 28 ) -> ProjectRepo: \u2502\r\n\u2502 29 \u2502 \"\"\"Run the startup logic. Can be called from CLI or other Python scripts.\"\"\" \u2502\r\n\u2502 \u2771 30 \u2502 from metagpt.config2 import config \u2502\r\n\u2502 31 \u2502 from metagpt.context import Context \u2502\r\n\u2502 32 \u2502 from metagpt.roles import ( \u2502\r\n\u2502 33 \u2502 \u2502 Architect, \u2502\r\n\u2502 \u2502\r\n\\new_meta_env\\Lib\\site-packages\\metagpt-0.8.1-py3.11.egg\\metagpt\\ \u2502\r\n\u2502 config2.py:164 in \u2502\r\n\u2502 \u2502\r\n\u2502 161 \u2502 return result \u2502\r\n\u2502 162 \u2502\r\n\u2502 163 \u2502\r\n\u2502 \u2771 164 config = Config.default() \u2502\r\n\\new_meta_env\\Lib\\site-packages\\metagpt-0.8.1-py3.11.egg\\metagpt\\ \u2502\r\n\u2502 config2.py:106 in default \u2502\r\n\u2502 \u2502\r\n\u2502 103 \u2502 \u2502 dicts = [dict(os.environ)] \u2502\r\n\u2502 104 \u2502 \u2502 dicts += [Config.read_yaml(path) for path in default_config_paths] \u2502\r\n\u2502 105 \u2502 \u2502 final = merge_dict(dicts) \u2502\r\n\u2502 \u2771 106 \u2502 \u2502 return Config(**final) \u2502\r\n\u2502 107 \u2502 \u2502\r\n\u2502 108 \u2502 @classmethod \u2502\r\n\u2502 109 \u2502 def from_llm_config(cls, llm_config: dict): \u2502\r\n\u2502 \u2502\r\n\\new_meta_env\\Lib\\site-packages\\pydantic\\main.py:176 in __init__ \u2502\r\n\u2502 \u2502\r\n\u2502 173 \u2502 \u2502 \"\"\" \u2502\r\n\u2502 174 \u2502 \u2502 # `__tracebackhide__` tells pytest and some other tools to omit this function fr \u2502\r\n\u2502 175 \u2502 \u2502 __tracebackhide__ = True \u2502\r\n\u2502 \u2771 176 \u2502 \u2502 self.__pydantic_validator__.validate_python(data, self_instance=self) \u2502\r\n\u2502 177 \u2502 \u2502\r\n\u2502 178 \u2502 # The following line sets a flag that we use to determine when `__init__` gets overr \u2502\r\n\u2502 179 \u2502 
__init__.__pydantic_base_init__ = True # pyright: ignore[reportFunctionMemberAccess \u2502\r\n\u2570\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256f\r\nValidationError: 1 validation error for Config\r\nllm\r\n Field required [type=missing, input_value={'ALLUSERSPROFILE': 'C:\\\\..._INIT_AT_FORK': 'FALSE'}, input_type=dict]\r\n For further information visit https://errors.pydantic.dev/2.7/v/missing", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "f0df3144d68ed288f5ccce0c34d3939f8462ba98", "files": [{"path": "config/config2.yaml", "Loc": {}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Config"}, "max_topk": 0, "file_topk": 0, "loctype": {"code": [], "doc": [], "test": [], "config": ["config/config2.yaml"], "asset": []}} {"organization": "geekan", "repo_name": "MetaGPT", "base_commit": "e43aaec9322054f4dec92f44627533816588663b", "is_iss": 0, "iss_html_url": "https://github.com/geekan/MetaGPT/issues/576", "iss_label": "", "title": "\u8bf7\u95eemetagpt\u662f\u5426\u652f\u6301\u5411\u91cf\u6570\u636e\uff0c\u6784\u5efa\u81ea\u5df1\u7684\u77e5\u8bc6\u5e93", "body": "\u8bf7\u95eemetagpt\u662f\u5426\u652f\u6301\u5411\u91cf\u6570\u636e\uff0c\u6784\u5efa\u81ea\u5df1\u7684\u77e5\u8bc6\u5e93", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "e43aaec9322054f4dec92f44627533816588663b", "files": [{"path": "/metagpt/document_store", "Loc": {}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "max_topk": 0, "file_topk": 0, "loctype": {"code": [], "doc": ["/metagpt/document_store"], "test": [], "config": [], "asset": []}} @@ -1019,7 +1019,7 @@ {"organization": "geekan", "repo_name": "MetaGPT", "base_commit": "dfa33fcdaade1e4f8019835bf065d372d76724ae", "is_iss": 0, "iss_html_url": "https://github.com/geekan/MetaGPT/issues/924", "iss_label": "", "title": "GLM4\u4e00\u76f4\u62a5\u9519", "body": "2024-02-22 16:50:26.666 | ERROR | metagpt.utils.common:log_it:476 - Finished call to 'metagpt.actions.action_node.ActionNode._aask_v1' after 80.109(s), this was the 5th time calling it. 
exp: 1 validation error for PM_NODE_AN\r\n Value error, Missing fields: {'Full API spec', 'Required Python packages', 'Required Other language third-party packages'} [type=value_error, input_value={'Required JavaScript pac...ation and development.'}, input_type=dict]\r\n For further information visit https://errors.pydantic.dev/2.5/v/value_error", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "dfa33fcdaade1e4f8019835bf065d372d76724ae", "files": [{"path": "config/config2.yaml", "Loc": {}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Config\nCode"}, "max_topk": 0, "file_topk": 0, "loctype": {"code": [], "doc": [], "test": [], "config": ["config/config2.yaml"], "asset": []}} {"organization": "geekan", "repo_name": "MetaGPT", "base_commit": "80a189ad4a1546f8c1a9dbe00c42725868c35e5e", "is_iss": 0, "iss_html_url": "https://github.com/geekan/MetaGPT/issues/135", "iss_label": "", "title": "failed to launch chromium browser process errors", "body": "get errors on launch of browser process; below is the error from terminal which happens for all browser processes trying to launch.\r\n\r\n```\r\nINFO | metagpt.utils.mermaid:mermaid_to_file:38 - Generating /Users/lopezdp/DevOps/Ai_MetaGPT/workspace/test_app/resources/competitive_analysis.pdf..\r\n\r\nError: Failed to launch the browser process! spawn /usr/bin/chromium ENOENT\r\n\r\n\r\nTROUBLESHOOTING: https://pptr.dev/troubleshooting\r\n\r\n at ChildProcess.onClose (file:///Users/lopezdp/DevOps/Ai_MetaGPT/node_modules/@puppeteer/browsers/lib/esm/launch.js:253:24)\r\n at ChildProcess.emit (node:events:513:28)\r\n at Process.ChildProcess._handle.onexit (node:internal/child_process:291:12)\r\n at onErrorNT (node:internal/child_process:485:16)\r\n at processTicksAndRejections (node:internal/process/task_queues:83:21)\r\n```\r\n\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "80a189ad4a1546f8c1a9dbe00c42725868c35e5e", "files": [{"path": "config/puppeteer-config.json", "Loc": {}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "0", "info_type": "Config"}, "max_topk": 0, "file_topk": 0, "loctype": {"code": ["config/puppeteer-config.json"], "doc": [], "test": [], "config": [], "asset": []}} {"organization": "geekan", "repo_name": "MetaGPT", "base_commit": "8d98ce34e54eb6250f1f2cf60f5d4dd66d462a5d", "is_iss": 0, "iss_html_url": "https://github.com/geekan/MetaGPT/issues/1115", "iss_label": "", "title": "The following error appears on every run", "body": "![image](https://github.com/geekan/MetaGPT/assets/115678682/1fb58e0b-47a7-4e1f-a7b7-924ea9adedb0)\r\n\r\n2024-03-27 11:15:59.019 | ERROR | metagpt.utils.common:wrapper:631 - Exception occurs, start to serialize the project, exp:\r\nTraceback (most recent call last):\r\n File \"D:\\andconda\\envs\\metagpt\\lib\\site-packages\\tenacity\\__init__.py\", line 382, in __call__\r\n result = fn(*args, **kwargs)\r\n File \"d:\\\u4e0b\u8f7d\\metagpt-main\\metagpt\\utils\\repair_llm_raw_output.py\", line 296, in retry_parse_json_text\r\n parsed_data = CustomDecoder(strict=False).decode(output)\r\njson.decoder.JSONDecodeError: Unterminated string starting at: line 13 column 25 (char 3485)\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nTraceback (most recent call last):\r\n File 
\"D:\\andconda\\envs\\metagpt\\lib\\site-packages\\tenacity\\_asyncio.py\", line 50, in __call__\r\n result = await fn(*args, **kwargs)\r\n File \"d:\\\u4e0b\u8f7d\\metagpt-main\\metagpt\\actions\\action_node.py\", line 425, in _aask_v1\r\n parsed_data = llm_output_postprocess(\r\ntenacity.RetryError: RetryError[]\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nTraceback (most recent call last):\r\n File \"d:\\\u4e0b\u8f7d\\metagpt-main\\metagpt\\utils\\common.py\", line 640, in wrapper\r\n return await func(self, *args, **kwargs)\r\n File \"d:\\\u4e0b\u8f7d\\metagpt-main\\metagpt\\roles\\role.py\", line 550, in run\r\n rsp = await self.react()\r\ntenacity.RetryError: RetryError[]\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"d:\\\u4e0b\u8f7d\\metagpt-main\\metagpt\\utils\\common.py\", line 626, in wrapper\r\n result = await func(self, *args, **kwargs)\r\n File \"d:\\\u4e0b\u8f7d\\metagpt-main\\metagpt\\team.py\", line 134, in run\r\n await self.env.run()\r\nException: Traceback (most recent call last):\r\n File \"D:\\andconda\\envs\\metagpt\\lib\\site-packages\\tenacity\\__init__.py\", line 382, in __call__\r\n result = fn(*args, **kwargs)\r\n File \"d:\\\u4e0b\u8f7d\\metagpt-main\\metagpt\\utils\\repair_llm_raw_output.py\", line 296, in retry_parse_json_text\r\n parsed_data = CustomDecoder(strict=False).decode(output)\r\n File \"d:\\\u4e0b\u8f7d\\metagpt-main\\metagpt\\utils\\custom_decoder.py\", line 297, in decode\r\n return super().decode(s)\r\n File \"D:\\andconda\\envs\\metagpt\\lib\\json\\decoder.py\", line 337, in decode\r\n obj, end = self.raw_decode(s, idx=_w(s, 0).end())\r\n File \"D:\\andconda\\envs\\metagpt\\lib\\json\\decoder.py\", line 353, in raw_decode\r\n obj, end = self.scan_once(s, idx)\r\n File \"d:\\\u4e0b\u8f7d\\metagpt-main\\metagpt\\utils\\custom_decoder.py\", line 65, in scan_once\r\n return _scan_once(string, idx)\r\n File \"d:\\\u4e0b\u8f7d\\metagpt-main\\metagpt\\utils\\custom_decoder.py\", line 36, in _scan_once\r\n return parse_object((string, idx + 1), strict, _scan_once, object_hook, object_pairs_hook, memo)\r\n File \"d:\\\u4e0b\u8f7d\\metagpt-main\\metagpt\\utils\\custom_decoder.py\", line 164, in JSONObject\r\n value, end = scan_once(s, end)\r\n File \"d:\\\u4e0b\u8f7d\\metagpt-main\\metagpt\\utils\\custom_decoder.py\", line 34, in _scan_once\r\n return parse_string(string, idx + 1, strict, delimiter=nextchar)\r\n File \"d:\\\u4e0b\u8f7d\\metagpt-main\\metagpt\\utils\\custom_decoder.py\", line 227, in py_scanstring\r\n raise JSONDecodeError(\"Unterminated string starting at\", s, begin)\r\njson.decoder.JSONDecodeError: Unterminated string starting at: line 13 column 25 (char 3485)\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nTraceback (most recent call last):\r\n File \"D:\\andconda\\envs\\metagpt\\lib\\site-packages\\tenacity\\_asyncio.py\", line 50, in __call__\r\n result = await fn(*args, **kwargs)\r\n File \"d:\\\u4e0b\u8f7d\\metagpt-main\\metagpt\\actions\\action_node.py\", line 425, in _aask_v1\r\n parsed_data = llm_output_postprocess(\r\n File \"d:\\\u4e0b\u8f7d\\metagpt-main\\metagpt\\provider\\postprocess\\llm_output_postprocess.py\", line 19, in llm_output_postprocess\r\n result = postprocess_plugin.run(output=output, schema=schema, req_key=req_key)\r\n File \"d:\\\u4e0b\u8f7d\\metagpt-main\\metagpt\\provider\\postprocess\\base_postprocess_plugin.py\", line 68, in run\r\n new_output = 
self.run_repair_llm_output(output=output, schema=schema, req_key=req_key)\r\n File \"d:\\\u4e0b\u8f7d\\metagpt-main\\metagpt\\provider\\postprocess\\base_postprocess_plugin.py\", line 32, in run_repair_llm_output\r\n parsed_data = self.run_retry_parse_json_text(content)\r\n File \"d:\\\u4e0b\u8f7d\\metagpt-main\\metagpt\\provider\\postprocess\\base_postprocess_plugin.py\", line 47, in run_retry_parse_json_text\r\n parsed_data = retry_parse_json_text(output=content) # should use output=content\r\n File \"D:\\andconda\\envs\\metagpt\\lib\\site-packages\\tenacity\\__init__.py\", line 289, in wrapped_f\r\n return self(f, *args, **kw)\r\n File \"D:\\andconda\\envs\\metagpt\\lib\\site-packages\\tenacity\\__init__.py\", line 379, in __call__\r\n do = self.iter(retry_state=retry_state)\r\n File \"D:\\andconda\\envs\\metagpt\\lib\\site-packages\\tenacity\\__init__.py\", line 326, in iter\r\n raise retry_exc from fut.exception()\r\ntenacity.RetryError: RetryError[]\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nTraceback (most recent call last):\r\n File \"d:\\\u4e0b\u8f7d\\metagpt-main\\metagpt\\utils\\common.py\", line 640, in wrapper\r\n return await func(self, *args, **kwargs)\r\n File \"d:\\\u4e0b\u8f7d\\metagpt-main\\metagpt\\roles\\role.py\", line 550, in run\r\n rsp = await self.react()\r\n File \"d:\\\u4e0b\u8f7d\\metagpt-main\\metagpt\\roles\\role.py\", line 517, in react\r\n rsp = await self._react()\r\n File \"d:\\\u4e0b\u8f7d\\metagpt-main\\metagpt\\roles\\role.py\", line 463, in _react\r\n rsp = await self._act()\r\n File \"d:\\\u4e0b\u8f7d\\metagpt-main\\metagpt\\roles\\role.py\", line 392, in _act\r\n response = await self.rc.todo.run(self.rc.history)\r\n File \"d:\\\u4e0b\u8f7d\\metagpt-main\\metagpt\\actions\\design_api.py\", line 58, in run\r\n doc = await self._update_system_design(filename=filename)\r\n File \"d:\\\u4e0b\u8f7d\\metagpt-main\\metagpt\\actions\\design_api.py\", line 86, in _update_system_design\r\n system_design = await self._new_system_design(context=prd.content)\r\n File \"d:\\\u4e0b\u8f7d\\metagpt-main\\metagpt\\actions\\design_api.py\", line 73, in _new_system_design\r\n node = await DESIGN_API_NODE.fill(context=context, llm=self.llm)\r\n File \"d:\\\u4e0b\u8f7d\\metagpt-main\\metagpt\\actions\\action_node.py\", line 505, in fill\r\n return await self.simple_fill(schema=schema, mode=mode, images=images, timeout=timeout, exclude=exclude)\r\n File \"d:\\\u4e0b\u8f7d\\metagpt-main\\metagpt\\actions\\action_node.py\", line 457, in simple_fill\r\n content, scontent = await self._aask_v1(\r\n File \"D:\\andconda\\envs\\metagpt\\lib\\site-packages\\tenacity\\_asyncio.py\", line 88, in async_wrapped\r\n return await fn(*args, **kwargs)\r\n File \"D:\\andconda\\envs\\metagpt\\lib\\site-packages\\tenacity\\_asyncio.py\", line 47, in __call__\r\n do = self.iter(retry_state=retry_state)\r\n File \"D:\\andconda\\envs\\metagpt\\lib\\site-packages\\tenacity\\__init__.py\", line 326, in iter\r\n raise retry_exc from fut.exception()\r\ntenacity.RetryError: RetryError[]", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "8d98ce34e54eb6250f1f2cf60f5d4dd66d462a5d", "files": [{"path": "metagpt/strategy/planner.py", "Loc": {"('Planner', 'update_plan', 68)": {"mod": [75]}}, "status": "modified"}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "2", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "max_topk": 1, "file_topk": 1, "loctype": 
{"code": ["metagpt/strategy/planner.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "geekan", "repo_name": "MetaGPT", "base_commit": "bdf9d224b5a05228897553a29214adc074fbc465", "is_iss": 0, "iss_html_url": "https://github.com/geekan/MetaGPT/issues/754", "iss_label": "", "title": "SubscriptionRunner", "body": "import asyncio\r\nfrom metagpt.subscription import SubscriptionRunner\r\nfrom metagpt.roles import Searcher\r\nfrom metagpt.schema import Message\r\n\r\nasync def trigger():\r\n while True:\r\n yield Message(\"the latest news about OpenAI\")\r\n await asyncio.sleep(1)\r\n\r\n\r\nasync def callback(msg: Message):\r\n print(msg.content)\r\n\r\n\r\n# async def main():\r\n# aa = trigger()\r\n# async for i in aa:\r\n# await callback(i)\r\nasync def main():\r\n pd = SubscriptionRunner()\r\n await pd.subscribe(Searcher(), trigger(), callback)\r\n await pd.run()\r\n\r\nasyncio.run(main())\r\n\u5728\u521b\u5efaRunner\u65f6\u5019\u62a5\u9519\uff0c0.6.3\u7248\u672c\r\nTraceback (most recent call last):\r\n File \"e:\\tmp\\metatest\\OSSWatcher .py\", line 44, in \r\n asyncio.run(main())\r\n File \"C:\\Users\\888888\\.conda\\envs\\mp\\Lib\\asyncio\\runners.py\", line 190, in run\r\n return runner.run(main)\r\n ^^^^^^^^^^^^^^^^\r\n File \"C:\\Users\\uweih034\\.conda\\envs\\mp\\Lib\\asyncio\\runners.py\", line 118, in run\r\n return self._loop.run_until_complete(task)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"C:\\Users\\888888\\.conda\\envs\\mp\\Lib\\asyncio\\base_events.py\", line 653, in run_until_complete\r\n return future.result()\r\n ^^^^^^^^^^^^^^^\r\n File \"e:\\tmp\\metatest\\OSSWatcher .py\", line 40, in main\r\n pd = SubscriptionRunner()\r\n ^^^^^^^^^^^^^^^^^^^^\r\n File \"C:\\Users\\888888\\.conda\\envs\\mp\\Lib\\site-packages\\pydantic\\main.py\", line 164, in __init__\r\n __pydantic_self__.__pydantic_validator__.validate_python(data, self_instance=__pydantic_self__)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"C:\\Users\\888888\\.conda\\envs\\mp\\Lib\\site-packages\\pydantic\\_internal\\_mock_val_ser.py\", line 47, in __getattr__\r\n raise PydanticUserError(self._error_message, code=self._code)\r\npydantic.errors.PydanticUserError: `SubscriptionRunner` is not fully defined; you should define `Environment`, then call `SubscriptionRunner.model_rebuild()`.\r\n\r\nFor further information visit https://errors.pydantic.dev/2.5/u/class-not-fully-defined", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "bdf9d224b5a05228897553a29214adc074fbc465", "files": [{"path": "metagpt/environment.py", "Loc": {"('Environment', None, 27)": {"mod": []}}, "status": "modified"}, {"Loc": {"": [21]}, "path": null}]}, "own_code_loc": [{"Loc": [21], "path": null}], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "3", "info_type": "Code"}, "max_topk": 1, "file_topk": 1, "loctype": {"code": [null, "metagpt/environment.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "geekan", "repo_name": "MetaGPT", "base_commit": "bdf9d224b5a05228897553a29214adc074fbc465", "is_iss": 0, "iss_html_url": "https://github.com/geekan/MetaGPT/issues/754", "iss_label": "", "title": "SubscriptionRunner", "body": "import asyncio\r\nfrom metagpt.subscription import SubscriptionRunner\r\nfrom metagpt.roles import Searcher\r\nfrom metagpt.schema import Message\r\n\r\nasync def trigger():\r\n while True:\r\n yield Message(\"the latest news 
about OpenAI\")\r\n await asyncio.sleep(1)\r\n\r\n\r\nasync def callback(msg: Message):\r\n print(msg.content)\r\n\r\n\r\n# async def main():\r\n# aa = trigger()\r\n# async for i in aa:\r\n# await callback(i)\r\nasync def main():\r\n pd = SubscriptionRunner()\r\n await pd.subscribe(Searcher(), trigger(), callback)\r\n await pd.run()\r\n\r\nasyncio.run(main())\r\n\u5728\u521b\u5efaRunner\u65f6\u5019\u62a5\u9519\uff0c0.6.3\u7248\u672c\r\nTraceback (most recent call last):\r\n File \"e:\\tmp\\metatest\\OSSWatcher .py\", line 44, in \r\n asyncio.run(main())\r\n File \"C:\\Users\\888888\\.conda\\envs\\mp\\Lib\\asyncio\\runners.py\", line 190, in run\r\n return runner.run(main)\r\n ^^^^^^^^^^^^^^^^\r\n File \"C:\\Users\\uweih034\\.conda\\envs\\mp\\Lib\\asyncio\\runners.py\", line 118, in run\r\n return self._loop.run_until_complete(task)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"C:\\Users\\888888\\.conda\\envs\\mp\\Lib\\asyncio\\base_events.py\", line 653, in run_until_complete\r\n return future.result()\r\n ^^^^^^^^^^^^^^^\r\n File \"e:\\tmp\\metatest\\OSSWatcher .py\", line 40, in main\r\n pd = SubscriptionRunner()\r\n ^^^^^^^^^^^^^^^^^^^^\r\n File \"C:\\Users\\888888\\.conda\\envs\\mp\\Lib\\site-packages\\pydantic\\main.py\", line 164, in __init__\r\n __pydantic_self__.__pydantic_validator__.validate_python(data, self_instance=__pydantic_self__)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"C:\\Users\\888888\\.conda\\envs\\mp\\Lib\\site-packages\\pydantic\\_internal\\_mock_val_ser.py\", line 47, in __getattr__\r\n raise PydanticUserError(self._error_message, code=self._code)\r\npydantic.errors.PydanticUserError: `SubscriptionRunner` is not fully defined; you should define `Environment`, then call `SubscriptionRunner.model_rebuild()`.\r\n\r\nFor further information visit https://errors.pydantic.dev/2.5/u/class-not-fully-defined", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "bdf9d224b5a05228897553a29214adc074fbc465", "files": [{"path": "metagpt/environment.py", "Loc": {"('Environment', None, 27)": {"mod": []}}, "status": "modified"}, {"Loc": {"": {"mod": [21]}}, "path": null}]}, "own_code_loc": [{"Loc": [21], "path": null}], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "3", "info_type": "Code"}, "max_topk": 1, "file_topk": 1, "loctype": {"code": [null, "metagpt/environment.py"], "doc": [], "test": [], "config": [], "asset": []}} {"organization": "geekan", "repo_name": "MetaGPT", "base_commit": "f88fa9e2df09c28f867bda54ec24fa25b50be830", "is_iss": 0, "iss_html_url": "https://github.com/geekan/MetaGPT/issues/178", "iss_label": "", "title": "Specify Directory of pdf documents as Knowledge Base", "body": "Hi, how can we specify any folder which includes pdf documents as a knowledge base and create a new Role of Document Controller to extract specific information from within the documents in KB?\r\n\r\nAny help would be highly appreciated\r\n\r\nThanks much appreciated", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "f88fa9e2df09c28f867bda54ec24fa25b50be830", "files": [{"path": "metagpt/document_store", "Loc": {}}, {"path": "tests/metagpt/document_store", "Loc": {}}, {"path": "examples/search_kb.py", "Loc": {}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "max_topk": 0, 
"file_topk": 0, "loctype": {"code": ["examples/search_kb.py"], "doc": ["metagpt/document_store", "tests/metagpt/document_store"], "test": [], "config": [], "asset": []}} {"organization": "langflow-ai", "repo_name": "langflow", "base_commit": "7e756b9db56677636e6920c1e6628d13e980aec7", "is_iss": 1, "iss_html_url": "https://github.com/langflow-ai/langflow/issues/6006", "iss_label": "bug", "title": "All custom components throw errors after update to latest version", "body": "### Bug Description\n\n```\n[01/29/25 00:15:00] ERROR 2025-01-29 00:15:00 - ERROR - chat - Error building vertices: Error serializing vertex build response: Unable to serialize unknown type: chat.py:405\n \n``` \n\n### Reproduction\n\n1. langflow updated to v1.1.2 from v1.1.1\n2. all previously created custom components throwing error:\n\n[01/29/25 00:24:09] ERROR 2025-01-29 00:24:09 - ERROR - chat - Error building vertices: Error serializing vertex build response: Unable to serialize unknown type: chat.py:405\n \n\n### Expected behavior\n\nLangflow should build tool correctly, as on previous version. \n\nSimplified failing code:\n```python\nfrom langflow.custom import Component\nfrom langflow.io import Output\nfrom langflow.schema import Data\nfrom langflow.field_typing import Tool\nfrom langchain.tools import StructuredTool\nfrom pydantic import BaseModel, Field\n\nclass MinimalSchema(BaseModel):\n input_text: str = Field(..., description=\"Text Input\")\n\nclass SimpleToolComponentMinimalSchema(Component):\n display_name = \"Simple Tool Minimal Schema Test\"\n description = \"Component with StructuredTool and minimal schema\"\n outputs = [Output(display_name=\"Tool\", name=\"test_tool\", method=\"build_tool\")]\n\n class MinimalSchema(BaseModel): # Define inner schema\n input_text: str = Field(..., description=\"Text Input\")\n\n def build_tool(self) -> Tool:\n return StructuredTool.from_function( # Return directly - simplified\n name=\"minimal_tool\",\n description=\"Minimal tool for testing schema\",\n func=self.run_tool,\n args_schema=SimpleToolComponentMinimalSchema.MinimalSchema\n )\n\n def run_tool(self, input_text: str) -> str:\n return f\"Tool received: {input_text}\"\n``` \n\n\n### Who can help?\n\n_No response_\n\n### Operating System\n\nwsl Ubuntu latest\n\n### Langflow Version\n\n1.1.2\n\n### Python Version\n\n3.12\n\n### Screenshot\n\n_No response_\n\n### Flow File\n\n_No response_", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {}, "own_code_loc": [{"Loc": [40], "path": null}], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "3", "info_type": "Code"}, "loctype": {"code": [null], "doc": [], "test": [], "config": [], "asset": []}} {"organization": "langflow-ai", "repo_name": "langflow", "base_commit": "19818db68b507332be71f30dd90d16bf4c7d6f83", "is_iss": 0, "iss_html_url": "https://github.com/langflow-ai/langflow/issues/3718", "iss_label": "enhancement", "title": "Add pgVector in the building instructions for the PostgreSQL Docker image", "body": "### Feature Request\r\n\r\nInclude the pgVector component with the Docker build instructions. This would provide the use with a fully functional PostgreSQL Vector DB, ready to be used inside LangFlow.\r\n\r\n### Motivation\r\n\r\nI am not a programmer, neither I do have proper knowledge of SQL, but I liked to play with some RAG ideas and LangFlow seems perfect. 
\r\nSo, after installing the Docker version for development of LangFlow, I noticed that the PostgreSQL server is missing the pgVector component, or at least that is what I understood from the error messages. \r\nPerhaps, it would be useful if the pgVector could be included in the Docker container, so having the user to just activate it on the SQL database. Anyway, I might be wrong, so in that case please forgive me.\r\n\r\n### Your Contribution\r\n\r\nAfter looking into the repository and searching around, with the help of AI (of course!), I found that the Docker instructions for the PostgreSQL server are defined inside the file \\docker\\cdk.Dockerfile (hope it's correct), and these might be the instructions to include pgVector:\r\n\r\n```\r\nFROM --platform=linux/amd64 python:3.10-slim\r\n\r\nWORKDIR /app\r\n\r\n# Install Poetry and build dependencies\r\nRUN apt-get update && apt-get install -y \\\r\n gcc \\\r\n g++ \\\r\n curl \\\r\n build-essential \\\r\n git \\\r\n postgresql-server-dev-all \\\r\n && rm -rf /var/lib/apt/lists/*\r\n\r\n# Install Poetry\r\nRUN curl -sSL https://install.python-poetry.org | python3 -\r\n\r\n# Add Poetry to PATH\r\nENV PATH=\"${PATH}:/root/.local/bin\"\r\n\r\n# Copy the pyproject.toml and poetry.lock files\r\nCOPY poetry.lock pyproject.toml ./\r\n\r\n# Copy the rest of the application codes\r\nCOPY ./ ./\r\n\r\n# Install dependencies\r\nRUN poetry config virtualenvs.create false && poetry install --no-interaction --no-ansi\r\n\r\n# Install pgvector extension\r\nRUN git clone https://github.com/pgvector/pgvector.git /tmp/pgvector && \\\r\n cd /tmp/pgvector && \\\r\n make && \\\r\n make install && \\\r\n rm -rf /tmp/pgvector\r\n\r\n# Install additional dependencies\r\nRUN poetry add botocore\r\nRUN poetry add pymysql\r\n\r\n# Command to run your application\r\nCMD [\"sh\", \"./container-cmd-cdk.sh\"]\r\n```\r\n\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "19818db68b507332be71f30dd90d16bf4c7d6f83", "files": [{"path": "docker_example/docker-compose.yml", "Loc": {}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3\nor\n4", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Config\nCode"}, "max_topk": 0, "file_topk": 0, "loctype": {"code": [], "doc": ["docker_example/docker-compose.yml"], "test": [], "config": [], "asset": []}} @@ -1035,7 +1035,7 @@ {"organization": "fastapi", "repo_name": "fastapi", "base_commit": "c6aa28bea2f751a91078bd8d845133ff83f352bf", "is_iss": 1, "iss_html_url": "https://github.com/fastapi/fastapi/issues/5422", "iss_label": "question\nquestion-migrate", "title": "Unidirectional websocket connections where only the server pushes data to the clients", "body": "### First Check\n\n- [X] I added a very descriptive title to this issue.\n- [X] I used the GitHub search to find a similar issue and didn't find it.\n- [X] I searched the FastAPI documentation, with the integrated search.\n- [X] I already searched in Google \"How to X in FastAPI\" and didn't find any information.\n- [X] I already read and followed all the tutorial in the docs and didn't find an answer.\n- [X] I already checked if it is not related to FastAPI but to [Pydantic](https://github.com/samuelcolvin/pydantic).\n- [X] I already checked if it is not related to FastAPI but to [Swagger UI](https://github.com/swagger-api/swagger-ui).\n- [X] I already checked if it is not related to FastAPI but to [ReDoc](https://github.com/Redocly/redoc).\n\n### Commit to Help\n\n- 
[X] I commit to help with one of those options \ud83d\udc46\n\n### Example Code\n\n```python\n@app.websocket(\"/ws\")\r\nasync def websocket_endpoint(websocket: WebSocket):\r\n await websocket.accept()\r\n while True:\r\n data = await websocket.receive_text()\r\n await websocket.send_text(f\"Message text was: {data}\")\n```\n\n\n### Description\n\nHello,\r\nIs there a way I could send data to clients over websocket without listening for when clients send data back. I'm trying to have a websocket endpoint where the server is pushing data to the client in a unidirectional way without the option for the client to send responses back. There doesn't seem to be any code that I could find that supports this since all the documentation seems to require that the server is listening for a `websocket.recieve_text()`. Any help would be much appreciated, thanks.\n\n### Operating System\n\nLinux\n\n### Operating System Details\n\n_No response_\n\n### FastAPI Version\n\n0.81.0\n\n### Python Version\n\n3.8.13\n\n### Additional Context\n\n_No response_", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {}, "own_code_loc": [{"Loc": [23], "path": null}], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "3", "loc_way": "comment", "loc_scope": "3", "info_type": "Code"}, "loctype": {"code": [null], "doc": [], "test": [], "config": [], "asset": []}} {"organization": "fastapi", "repo_name": "fastapi", "base_commit": "55afb70b3717969565499f5dcaef54b1f0acc7da", "is_iss": 1, "iss_html_url": "https://github.com/fastapi/fastapi/issues/891", "iss_label": "question\nanswered\nquestion-migrate", "title": "SQL related tables and corresponding nested pydantic models in async", "body": "Really impressed with FastAPI so far... 
I have search docs github, tickets and googled the issue described below.\r\n\r\n### Description\r\n\r\nHow best to work with related tables and corresponding nested pydantic models whilst persisting data in a relational database in an async application?\r\n\r\n### Additional context\r\n\r\nI have been attempting to extend the example in the docs \r\nhttps://fastapi.tiangolo.com/advanced/async-sql-databases/\r\nwhich relies on https://github.com/encode/databases\r\n\r\nUsing three test pydantic models as an example:\r\n\r\n```\r\nclass UserModel(BaseModel):\r\n id: int\r\n title: str = Field(..., min_length=2, max_length=50)\r\n firstname: str = Field(..., min_length=1, max_length=50)\r\n firstname: str = Field(..., min_length=1, max_length=50)\r\n username: str = Field(..., min_length=3, max_length=50)\r\n email: str = Field(..., min_length=3, max_length=50)\r\n favourite_book: int = Field(...)\r\n\r\nclass FavouriteBook(BaseModel):\r\n id: int\r\n title: str = Field(...)\r\n author: str = Field(...)\r\n\r\n\r\nclass ExtendedUser(BaseModel):\r\n id: int\r\n title: str = Field(..., min_length=2, max_length=50)\r\n firstname: str = Field(..., min_length=1, max_length=50)\r\n firstname: str = Field(..., min_length=1, max_length=50)\r\n username: str = Field(..., min_length=3, max_length=50)\r\n email: str = Field(..., min_length=3, max_length=50)\r\n favourite_book: FavouriteBook\r\n\r\n```\r\n\r\nthe route would ideally be along the lines of...\r\n\r\n```\r\n@router.get(\"/extended\", response_model=List[ExtendedUser])\r\nasync def list():\r\n query = **sqlAlchemy/databases call that works**\r\n return database.fetch_all(query=query)\r\n\r\n```\r\n\r\n\r\nHow can a user create a route that returns the nested ExtendedUser from the database without resorting to performing two queries? \r\nAn SQL join is a standard way to do this with a single query. However, this does not work with SQLAlchemy core as the two tables contain 'id' and 'title' columns. \r\nIt is possible to work with SQLAlchemy orm - but not in an async way as far as I know. (async is my reason for using FastAPI ). 
I could rename the columns to something unique ( but to rename 'id' column seems like poor database design to me).\r\n\r\n\r\n\r\n\r\n\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {}, "own_code_loc": [{"Loc": [31], "path": null}], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "3", "loc_way": "comment", "loc_scope": "3", "info_type": "Code"}, "loctype": {"code": [null], "doc": [], "test": [], "config": [], "asset": []}} {"organization": "fastapi", "repo_name": "fastapi", "base_commit": "1760da0efa55585c19835d81afa8ca386036c325", "is_iss": 0, "iss_html_url": "https://github.com/fastapi/fastapi/issues/3882", "iss_label": "question\nquestion-migrate", "title": "Doing work after the HTTP response has been sent", "body": "### First Check\n\n- [X] I added a very descriptive title to this issue.\n- [X] I used the GitHub search to find a similar issue and didn't find it.\n- [X] I searched the FastAPI documentation, with the integrated search.\n- [X] I already searched in Google \"How to X in FastAPI\" and didn't find any information.\n- [X] I already read and followed all the tutorial in the docs and didn't find an answer.\n- [X] I already checked if it is not related to FastAPI but to [Pydantic](https://github.com/samuelcolvin/pydantic).\n- [X] I already checked if it is not related to FastAPI but to [Swagger UI](https://github.com/swagger-api/swagger-ui).\n- [X] I already checked if it is not related to FastAPI but to [ReDoc](https://github.com/Redocly/redoc).\n\n### Commit to Help\n\n- [X] I commit to help with one of those options \ud83d\udc46\n\n### Example Code\n\n```python\nfrom fastapi import FastAPI, Request\r\n\r\napp = FastAPI()\r\n\r\n@app.middleware(\"http\")\r\nasync def write_log(request: Request, call_next):\r\n response = await call_next(request)\r\n # write log\r\n return response\n```\n\n\n### Description\n\nI want to log data for each request, however since my application is latency sensitive, I would want to return as quickly as possible. Is there an equivalent to Symfony's \"[terminate](https://symfony.com/doc/current/reference/events.html#kernel-terminate)\" event (which I guess is the `request_finished` signal in Django)? 
The idea is to do the log writing after the HTTP response has been sent.\r\n\r\nThe above code is from the middleware documentation, but it basically means the code for writing the log will be executed before the response is sent.\n\n### Operating System\n\nLinux\n\n### Operating System Details\n\n_No response_\n\n### FastAPI Version\n\n0.65.1\n\n### Python Version\n\n3.8.5\n\n### Additional Context\n\n_No response_", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "1760da0efa55585c19835d81afa8ca386036c325", "files": [{"path": "fastapi/background.py", "Loc": {}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "max_topk": 0, "file_topk": 0, "loctype": {"code": ["fastapi/background.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "fastapi", "repo_name": "fastapi", "base_commit": "a0e4d38bea74940de013e04a6d6f399d62f04280", "is_iss": 0, "iss_html_url": "https://github.com/fastapi/fastapi/issues/1498", "iss_label": "question\nreviewed\nquestion-migrate", "title": "RedirectResponse from a POST request route to GET request route shows 405 Error code.", "body": "_Summary of the total issue is:_ **How to do a Post/Redirect/Get (PRG) in FastAPI?**\r\n\r\n_This is not necessarily a bug, rather a question._\r\n### Things i tried:\r\nI want to redirect response from 2nd route to 1st route. This [Issue#199](https://github.com/tiangolo/fastapi/issues/199) here explains **GET to GET** but not a **POST to GET**. **N.B:** `I have done this type of POST -> GET redirecting in flask, it was working there but not here.` And also this [Issue#863](https://github.com/tiangolo/fastapi/issues/863) has the same problem but doesn't really solves the problem. 
To re produce the error check the bottom.\r\n\r\n```Python3\r\n#1st route (GET request)\r\n@admin_content_edit_router.get('/admin/edit_content/set_category')\r\nasync def set_category(request:Request):\r\n return templates.TemplateResponse(\"admin/category_edit.html\", {'request': request})\r\n\r\n#2nd route (POST request)\r\n@admin_content_edit_router.post('/admin/edit_content/add_category')\r\nasync def add_category(request:Request):\r\n # here forms are getting processed\r\n return RedirectResponse(app.url_path_for('set_category')) # from here to 1st route\r\n```\r\nBut it shows :\r\n```Python3\r\n {\"detail\":\"Method Not Allowed\"}\r\n```\r\nFull traceback:\r\n```Python3\r\nINFO: 127.0.0.1:58415 - \"POST /admin/edit_content/add_category HTTP/1.1\" 307 Temporary Redirect\r\nINFO: 127.0.0.1:58415 - \"POST /admin/edit_content/set_category HTTP/1.1\" 405 Method Not Allowed\r\nERROR: Exception in callback _SelectorSocketTransport._read_ready()\r\nhandle: \r\nTraceback (most recent call last):\r\n File \"c:\\users\\aminp\\appdata\\local\\programs\\python\\python36\\lib\\asyncio\\events.py\", line 145, in _run\r\n self._callback(*self._args)\r\n File \"c:\\users\\aminp\\appdata\\local\\programs\\python\\python36\\lib\\asyncio\\selector_events.py\", line 730, in _read_ready\r\n self._protocol.data_received(data)\r\n File \"c:\\users\\aminp\\appdata\\local\\programs\\python\\python36\\lib\\site-packages\\uvicorn\\protocols\\http\\h11_impl.py\", line 162, in data_received\r\n self.handle_events()\r\n File \"c:\\users\\aminp\\appdata\\local\\programs\\python\\python36\\lib\\site-packages\\uvicorn\\protocols\\http\\h11_impl.py\", line 247, in handle_events\r\n self.transport.resume_reading()\r\n File \"c:\\users\\aminp\\appdata\\local\\programs\\python\\python36\\lib\\asyncio\\selector_events.py\", line 711, in resume_reading\r\n raise RuntimeError('Not paused')\r\nRuntimeError: Not paused\r\n```\r\n\r\nBut when i do a GET to GET redirect response it works without any issue but a POST to GET blows things up! Am i completely missing something here? i did look up in starlette doc here on reverse route lookup but nothing helps. 
[https://www.starlette.io/routing/#reverse-url-lookups](url)\r\n\r\nQuick Re produce the error:\r\n```Python3\r\n\r\nfrom fastapi import FastAPI\r\nfrom starlette.responses import RedirectResponse\r\nimport os\r\nfrom starlette.status import HTTP_302_FOUND,HTTP_303_SEE_OTHER\r\n\r\napp = FastAPI()\r\n\r\n@app.post(\"/\")\r\nasync def login():\r\n # HTTP_302_FOUND,HTTP_303_SEE_OTHER : None is working:(\r\n return RedirectResponse(url=\"/ressource/1\",status_code=HTTP_303_SEE_OTHER)\r\n\r\n@app.get(\"/ressource/{r_id}\")\r\nasync def get_ressource(r_id:str):\r\n return {\"r_id\": r_id}\r\n\r\nif __name__ == '__main__':\r\n os.system(\"uvicorn tes:app --host 0.0.0.0 --port 80\")\r\n```\r\n\r\n\r\n\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "a0e4d38bea74940de013e04a6d6f399d62f04280", "files": [{"Loc": {"": [58]}, "path": null}]}, "own_code_loc": [{"Loc": [58], "path": null}], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "3", "info_type": "Code"}, "max_topk": 0, "file_topk": 0, "loctype": {"code": [null], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "fastapi", "repo_name": "fastapi", "base_commit": "a0e4d38bea74940de013e04a6d6f399d62f04280", "is_iss": 0, "iss_html_url": "https://github.com/fastapi/fastapi/issues/1498", "iss_label": "question\nreviewed\nquestion-migrate", "title": "RedirectResponse from a POST request route to GET request route shows 405 Error code.", "body": "_Summary of the total issue is:_ **How to do a Post/Redirect/Get (PRG) in FastAPI?**\r\n\r\n_This is not necessarily a bug, rather a question._\r\n### Things i tried:\r\nI want to redirect response from 2nd route to 1st route. This [Issue#199](https://github.com/tiangolo/fastapi/issues/199) here explains **GET to GET** but not a **POST to GET**. **N.B:** `I have done this type of POST -> GET redirecting in flask, it was working there but not here.` And also this [Issue#863](https://github.com/tiangolo/fastapi/issues/863) has the same problem but doesn't really solves the problem. 
To re produce the error check the bottom.\r\n\r\n```Python3\r\n#1st route (GET request)\r\n@admin_content_edit_router.get('/admin/edit_content/set_category')\r\nasync def set_category(request:Request):\r\n return templates.TemplateResponse(\"admin/category_edit.html\", {'request': request})\r\n\r\n#2nd route (POST request)\r\n@admin_content_edit_router.post('/admin/edit_content/add_category')\r\nasync def add_category(request:Request):\r\n # here forms are getting processed\r\n return RedirectResponse(app.url_path_for('set_category')) # from here to 1st route\r\n```\r\nBut it shows :\r\n```Python3\r\n {\"detail\":\"Method Not Allowed\"}\r\n```\r\nFull traceback:\r\n```Python3\r\nINFO: 127.0.0.1:58415 - \"POST /admin/edit_content/add_category HTTP/1.1\" 307 Temporary Redirect\r\nINFO: 127.0.0.1:58415 - \"POST /admin/edit_content/set_category HTTP/1.1\" 405 Method Not Allowed\r\nERROR: Exception in callback _SelectorSocketTransport._read_ready()\r\nhandle: \r\nTraceback (most recent call last):\r\n File \"c:\\users\\aminp\\appdata\\local\\programs\\python\\python36\\lib\\asyncio\\events.py\", line 145, in _run\r\n self._callback(*self._args)\r\n File \"c:\\users\\aminp\\appdata\\local\\programs\\python\\python36\\lib\\asyncio\\selector_events.py\", line 730, in _read_ready\r\n self._protocol.data_received(data)\r\n File \"c:\\users\\aminp\\appdata\\local\\programs\\python\\python36\\lib\\site-packages\\uvicorn\\protocols\\http\\h11_impl.py\", line 162, in data_received\r\n self.handle_events()\r\n File \"c:\\users\\aminp\\appdata\\local\\programs\\python\\python36\\lib\\site-packages\\uvicorn\\protocols\\http\\h11_impl.py\", line 247, in handle_events\r\n self.transport.resume_reading()\r\n File \"c:\\users\\aminp\\appdata\\local\\programs\\python\\python36\\lib\\asyncio\\selector_events.py\", line 711, in resume_reading\r\n raise RuntimeError('Not paused')\r\nRuntimeError: Not paused\r\n```\r\n\r\nBut when i do a GET to GET redirect response it works without any issue but a POST to GET blows things up! Am i completely missing something here? i did look up in starlette doc here on reverse route lookup but nothing helps. 
[https://www.starlette.io/routing/#reverse-url-lookups](url)\r\n\r\nQuick Re produce the error:\r\n```Python3\r\n\r\nfrom fastapi import FastAPI\r\nfrom starlette.responses import RedirectResponse\r\nimport os\r\nfrom starlette.status import HTTP_302_FOUND,HTTP_303_SEE_OTHER\r\n\r\napp = FastAPI()\r\n\r\n@app.post(\"/\")\r\nasync def login():\r\n # HTTP_302_FOUND,HTTP_303_SEE_OTHER : None is working:(\r\n return RedirectResponse(url=\"/ressource/1\",status_code=HTTP_303_SEE_OTHER)\r\n\r\n@app.get(\"/ressource/{r_id}\")\r\nasync def get_ressource(r_id:str):\r\n return {\"r_id\": r_id}\r\n\r\nif __name__ == '__main__':\r\n os.system(\"uvicorn tes:app --host 0.0.0.0 --port 80\")\r\n```\r\n\r\n\r\n\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "a0e4d38bea74940de013e04a6d6f399d62f04280", "files": [{"Loc": {"": {"mod": [58]}}, "path": null}]}, "own_code_loc": [{"Loc": [58], "path": null}], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "3", "info_type": "Code"}, "max_topk": 0, "file_topk": 0, "loctype": {"code": [null], "doc": [], "test": [], "config": [], "asset": []}} {"organization": "fastapi", "repo_name": "fastapi", "base_commit": "b93f8a709ab3923d1268dbc845f41985c0302b33", "is_iss": 0, "iss_html_url": "https://github.com/fastapi/fastapi/issues/4551", "iss_label": "question\nquestion-migrate", "title": "Attribute not found while testing a Beanie Model inside fast api", "body": "### First Check\r\n\r\n- [X] I added a very descriptive title to this issue.\r\n- [X] I used the GitHub search to find a similar issue and didn't find it.\r\n- [X] I searched the FastAPI documentation, with the integrated search.\r\n- [X] I already searched in Google \"How to X in FastAPI\" and didn't find any information.\r\n- [X] I already read and followed all the tutorial in the docs and didn't find an answer.\r\n- [X] I already checked if it is not related to FastAPI but to [Pydantic](https://github.com/samuelcolvin/pydantic).\r\n- [x] I already checked if it is not related to FastAPI but to [Swagger UI](https://github.com/swagger-api/swagger-ui).\r\n- [X] I already checked if it is not related to FastAPI but to [ReDoc](https://github.com/Redocly/redoc).\r\n\r\n### Commit to Help\r\n\r\n- [X] I commit to help with one of those options \ud83d\udc46\r\n\r\n### Example Code\r\n\r\n```python\r\nMy Code:\r\n\r\n\r\nMy Route:\r\n\r\n@router.post(\"/login\")\r\nasync def internalLogin(request: Request,\r\n email: str = Form(...),\r\n password: str = Form(...)):\r\n try:\r\n res, token = await Controller.internalLogin(email=email, password=password)\r\n if res:\r\n return {\"message\": \"Success\"}\r\n else:\r\n return {\"message\": \"Failure\"}\r\n except DocumentNotFound as documentNotFoundException:\r\n return {\"message\": \"Error\"}\r\n```\r\n\r\nController:\r\n```\r\n@staticmethod\r\n async def internalLogin(email: str, password: str) -> List[bool | str]:\r\n logger.info(message=\"Inside OpenApi Controller\", fileName=__name__, functionName=\"OpenApiController\")\r\n try:\r\n user = await internalUserDb(email=email)\r\n if user is not None and user.verifyPassword(password):\r\n print(\"Logged In\")\r\n return [True, \"\"]\r\n else:\r\n print(\"Failed)\r\n return [False, \"\"]\r\n except DocumentNotFound as documentNotFound:\r\n raise documentNotFound\r\n\r\n```\r\n\r\nDB:\r\n\r\n```\r\nasync def internalUserDb(email: str) -> InternalUserModel:\r\n try:\r\n user: InternalUserModel = await 
InternalUserModel.find_one(InternalUserModel.email == email)\r\n return user\r\n except DocumentNotFound as documentNotFound:\r\n raise documentNotFound\r\n```\r\n\r\nMy TestCode:\r\n\r\n```\r\n@pytest.mark.anyio\r\nasync def testLogin():\r\n response = await asyncClient.post(\"/internalLogin\",\r\n data={\"email\": \"sample@mail.com\", \"password\": \"samplePass\"})\r\n assert response.status_code == 303\r\n```\r\n\r\nMy error while testing is: \r\n\r\n```\r\nFAILED Tests/TestLogin.py::testLogin[asyncio] - AttributeError: type object 'InternalUserModel' has no attribute 'email'\r\nFAILED Tests/TestLogin.py::testLogin[trio] - AttributeError: type object 'InternalUserModel' has no attribute 'email'\r\n```\r\n\r\n\r\n### Description\r\n\r\nHello, I am new to FastAPI. I am trying to test the fast api with PyTest. Normal tests are working perfectly fine but I am using MongoDB as backend to store my data. While I try to test the route that does some data fetching from database it shows error like `attribute not inside the model`. I am using Beanie ODM for MongoDB.\r\n\r\n### Operating System\r\n\r\nmacOS\r\n\r\n### Operating System Details\r\n\r\n_No response_\r\n\r\n### FastAPI Version\r\n\r\n0.73\r\n\r\n### Python Version\r\n\r\n3.10\r\n\r\n### Additional Context\r\n\r\n_No response_", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "b93f8a709ab3923d1268dbc845f41985c0302b33", "files": [{"path": "docs/en/docs/advanced/testing-events.md", "Loc": {}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Doc"}, "max_topk": 0, "file_topk": 0, "loctype": {"code": [], "doc": ["docs/en/docs/advanced/testing-events.md"], "test": [], "config": [], "asset": []}} {"organization": "fastapi", "repo_name": "fastapi", "base_commit": "78b07cb809e97f400e196ff3d89862b9d5bd5dc2", "is_iss": 0, "iss_html_url": "https://github.com/fastapi/fastapi/issues/4587", "iss_label": "question\nquestion-migrate", "title": "Use the raw response in Reponse classes", "body": "### First Check\r\n\r\n- [X] I added a very descriptive title to this issue.\r\n- [X] I used the GitHub search to find a similar issue and didn't find it.\r\n- [X] I searched the FastAPI documentation, with the integrated search.\r\n- [X] I already searched in Google \"How to X in FastAPI\" and didn't find any information.\r\n- [X] I already read and followed all the tutorial in the docs and didn't find an answer.\r\n- [X] I already checked if it is not related to FastAPI but to [Pydantic](https://github.com/samuelcolvin/pydantic).\r\n- [X] I already checked if it is not related to FastAPI but to [Swagger UI](https://github.com/swagger-api/swagger-ui).\r\n- [X] I already checked if it is not related to FastAPI but to [ReDoc](https://github.com/Redocly/redoc).\r\n\r\n### Commit to Help\r\n\r\n- [X] I commit to help with one of those options \ud83d\udc46\r\n\r\n### Example Code\r\n\r\n```python\r\nclass CustomEncoder():\r\n def encode(self, dict_data)\r\n return dict_data\r\n\r\nclass PhotonJSONResponse(JSONResponse):\r\n def __init__(self, content: typing.Any = None, status_code: int = 200, headers: dict = None, media_type: str = None,\r\n background: BackgroundTask = None) -> None:\r\n # Fetch the untouched response in the upper stacks\r\n current_frame = inspect.currentframe()\r\n self.raw_response = None\r\n while current_frame.f_back:\r\n if 'raw_response' in current_frame.f_locals:\r\n self.raw_response 
= current_frame.f_locals['raw_response']\r\n break\r\n current_frame = current_frame.f_back\r\n \r\n self._encoder = CustomEncoder()\r\n super().__init__(content, status_code, headers, media_type, background)\r\n\r\n def render(self, content: Any) -> bytes:\r\n dict_data = self._encoder.encode(self.raw_response)\r\n return super().render(dict_data)\r\n```\r\n\r\n\r\n### Description\r\n\r\nI want to access the raw response that hasn't been through the json_encoder inside my response class. This is because I have custom types that are handled in a custom encoder. I have looked through the relevant fastapi code and I can't find a way to override the encoder for all requests either. As you can see in the example code I currently use reflection to fetch the raw_response in the upper stack frame, however this is not very reliable. I also can't seem to do this using an APIRoute implementation because it would require re-implementing the route handler which is messy, maybe it would be more relevant in there though.\r\n\r\n### Operating System\r\n\r\nWindows\r\n\r\n### Operating System Details\r\n\r\n_No response_\r\n\r\n### FastAPI Version\r\n\r\n0.63.0\r\n\r\n### Python Version\r\n\r\n3.8.12\r\n\r\n### Additional Context\r\n\r\n_No response_", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "78b07cb809e97f400e196ff3d89862b9d5bd5dc2", "files": [{"path": "fastapi/routing.py", "Loc": {"('APIRoute', None, 300)": {"mod": []}}, "status": "modified"}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "max_topk": 1, "file_topk": 1, "loctype": {"code": ["fastapi/routing.py"], "doc": [], "test": [], "config": [], "asset": []}} {"organization": "oobabooga", "repo_name": "text-generation-webui", "base_commit": "ecd92d6a4e9a7c74d2bf436571f2c7e96beb9f92", "is_iss": 0, "iss_html_url": "https://github.com/oobabooga/text-generation-webui/issues/3341", "iss_label": "bug", "title": "state isn't clearly understood how to incorporate for script.py", "body": "### Describe the bug\n\nI see that output_modifier and a few other functions require state object, which is not defined in script.py nor are any of the existing plugins (that I looked at) use a state object.\r\n\r\nAs a result, I am unable to use the functions. 
I get a message about needing to pass state\n\n### Is there an existing issue for this?\n\n- [X] I have searched the existing issues\n\n### Reproduction\n\ntry to use this snippet\r\n\r\nhttps://github.com/ChobPT/oobaboogas-webui-langchain_agent/blob/main/script.py#L185-L190\r\n\r\n```\r\ndef input_modifier(string):\r\n if string[:3] == \"/do Story\":\r\n print('hi')\r\n string += ' Tell me a story.'\r\n else:\r\n output_modifier(string.split(\"###\")[0].split(\"Human:\")[0])\r\n return string.replace('/do ', '')\r\n\r\n```\n\n### Screenshot\n\n_No response_\n\n### Logs\n\n```shell\nFile \"/home/user/oobabooga_linux/text-generation-webui/extensions/helloworld/script.py\", line 144, in input_modifier\r\n output_modifier(string.split(\"###\")[0].split(\"Human:\")[0],state_dict)\r\nNameError: name 'state_dict' is not defined\r\n\r\n```\r\n```\r\n File \"/home/user/oobabooga_linux/text-generation-webui/extensions/helloworld/script.py\", line 144, in input_modifier\r\n output_modifier(string.split(\"###\")[0].split(\"Human:\")[0],state)\r\nNameError: name 'state' is not defined\r\n\r\n```\r\n\r\n```\r\n output_modifier(string.split(\"###\")[0].split(\"Human:\")[0])\r\nTypeError: output_modifier() missing 1 required positional argument: 'state'\r\n\r\n```\r\n\r\nand if I removed state from output_modifier (as you see in my snippet above w print) I get no modified output nor print statement at console\r\nOutput generated in 1.99 seconds (9.06 tokens/s, 18 tokens, context 66, seed 123523724)\r\nTraceback (most recent call last):\r\n File \"/home/user/oobabooga_linux/text-generation-webui/server.py\", line 1181, in \r\n time.sleep(0.5)\n```\n\n\n### System Info\n\n```shell\npython 3.9 oracle linux 8.5\n```\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "ecd92d6a4e9a7c74d2bf436571f2c7e96beb9f92", "files": []}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [{"org": "ChobPT", "pro": "oobaboogas-webui-langchain_agent", "path": ["script.py"]}], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "3", "info_type": "Code"}, "max_topk": 0, "file_topk": 0, "loctype": {"code": ["script.py"], "doc": [], "test": [], "config": [], "asset": []}}