{"query":"Is there a melt command in Snowflake?\n\nIs there a Snowflake command that will transform a table like this:\n```\na,b,c\n1,10,0.1\n2,11,0.12\n3,12,0.13\n```\nto a table like this:\n```\nkey,value\na,1\na,2\na,3\nb,10\nb,11\nb,13\nc,0.1\nc,0.12\nc,0.13\n```\n?\n\nThis operation is often called melt in other tabular systems, but the basic idea is to convert the table into a list of key value pairs.\n\nThere is an UNPIVOT in SnowSQL, but as I understand it UNPIVOT requires to manually specify every single column. This doesn't seem practical for a large number of columns.","reasoning":"The desired behavior looks like some kind of table transformation. We should try to find some operation to make a two-dimensional table flat.","id":"0","excluded_ids":["N\/A"],"gold_ids_long":["snowflake_docs\/flatten.txt"],"gold_ids":["snowflake_docs\/flatten_2_0.txt","snowflake_docs\/flatten_1_0.txt","snowflake_docs\/flatten_6_3.txt","snowflake_docs\/flatten_4_0.txt","snowflake_docs\/flatten_6_0.txt","snowflake_docs\/flatten_6_1.txt","snowflake_docs\/flatten_6_2.txt","snowflake_docs\/flatten_6_4.txt","snowflake_docs\/flatten_3_0.txt"],"gold_answer":"Snowflake's SQL is powerful enough to perform such operation without help of\nthird-party tools or other extensions.\n\nData prep:\n\n \n \n CREATE OR REPLACE TABLE t(a INT, b INT, c DECIMAL(10,2))\n AS\n SELECT 1,10,0.1\n UNION SELECT 2,11,0.12\n UNION SELECT 3,12,0.13;\n \n\n[ ![enter image description here](https:\/\/i.sstatic.net\/o3qiS.png)\n](https:\/\/i.sstatic.net\/o3qiS.png)\n\nQuery(aka \"dynamic\" UNPIVOT):\n\n \n \n SELECT f.KEY, f.VALUE\n FROM (SELECT OBJECT_CONSTRUCT_KEEP_NULL(*) AS j FROM t) AS s\n ,TABLE(FLATTEN(input => s.j)) f\n ORDER BY f.KEY;\n \n\nOutput:\n\n[ ![enter image description here](https:\/\/i.sstatic.net\/HGmXI.png)\n](https:\/\/i.sstatic.net\/HGmXI.png)\n\n* * *\n\nHow does it work?\n\n 1. Transform row into JSON(row 1 becomes ` { \"A\": 1,\"B\": 10,\"C\": 0.1 } ` ) \n 2. 
Parse the JSON into key-value pairs using FLATTEN"} {"query":"Python split a column value into multiple columns and keep remaining column same\n\nThe task is to split the values of column A to into different columns and have values of corresponding column2 values and need column3 to contain the corresponding value of the group.\n\n```\nColumn1 Column2 Column3\nGroup1 Value1 V1\nGroup1 Value2 V2\nGroup1 Value3 V3\nGroup1 Value4 V4\nGroup2 Value1 x1\nGroup2 Value2 x2\nGroup2 Value3 x3\nGroup2 Value4 x4\nGroup3 Value1 y1\nGroup3 Value2 y2\n```\n\nto a table like this:\n\n```\nGroup1 Group2 Group3 Column3.Group1 Column3.Group2 Column3.Group3\nValue1 Value1 Value1 v1 x1 y1\nValue2 Value2 Value2 v2 x2 y1\nValue3 Value3 NaN v3 x3 NaN\nValue4 Value4 NaN v4 x4 NaN\n```\n\nAny way to achieve this output in python?","reasoning":"The desired behavior looks like some kind of table transformation. We should try to find some operation that makes column 1's values become the header values of the new table and lets the original column 2's values become the values of the new table. Then rearrange column 3's values into new columns corresponding to the new table's values. 
If there is no corresponding value, set the value into NaN.","id":"1","excluded_ids":["N\/A"],"gold_ids_long":["Python_pandas_functions_with_style\/DataFrame.txt"],"gold_ids":["Python_pandas_functions_with_style\/DataFrame_93_5.txt","Python_pandas_functions_with_style\/DataFrame_93_4.txt"],"gold_answer":"tmp = df.assign(cc=df.groupby('Column1').cumcount())\n out = pd.concat(\n [tmp.pivot(index='cc', columns='Column1', values='Column2'), \n tmp.pivot(index='cc', columns='Column1', values='Column3').add_prefix('Column3.')\n ], axis=1).rename_axis(index=None, columns=None)\n \n\nout:\n\n \n \n Group1 Group2 Group3 Column3.Group1 Column3.Group2 Column3.Group3\n 0 Value1 Value1 Value1 V1 x1 y1\n 1 Value2 Value2 Value2 V2 x2 y2\n 2 Value3 Value3 NaN V3 x3 NaN\n 3 Value4 Value4 NaN V4 x4 NaN"} {"query":"Python type hints: what should I use for a variable that can be any iterable sequence?\n\n```\ndef mysum(x)->int:\n s = 0\n for i in x:\n s += i\n return s\n```\n\nThe argument x can be list[int] or set[int], it can also be d.keys() where d is a dict, it can be range(10), as well as possibly many other sequences, where the item is of type int. What is the correct type-hint for x?","reasoning":"Programmers want to indicate the type of Python function parameters, but can only indicate the base type (such as int, float, etc.). 
If they want to indicate that the type of the parameter is any iterable sequence, they need to import an additional library.","id":"2","excluded_ids":["N\/A"],"gold_ids_long":["Python_development_tools\/typing.txt"],"gold_ids":["Python_development_tools\/typing_12_1.txt","Python_development_tools\/typing_12_0.txt"],"gold_answer":"You can use [ ` typing.Iterable `\n](https:\/\/docs.python.org\/library\/typing.html#typing.Iterable) :\n\n \n \n from typing import Iterable\n \n def mysum(x: Iterable[int]) -> int:\n s = 0\n for i in x:\n s += i\n return s\n \n\n* * *\n\nEdit: ` typing.Iterable ` is an alias for [ ` collections.abc.Iterable `\n](https:\/\/docs.python.org\/library\/collections.abc.html#collections.abc.Iterable)\n, so you should use that instead, as suggested in the comments."} {"query":"I have below scenario where list str columns need to be merged with the dataframe.\n\n```\ncolumns = [\"server\", \"ip\"]\n \ndataframes = [\n df1,\n df2,\n df3,\n df4,\n]\n\ndf_res = pd.merge(dataframes, columns )\n```\n\ndf1 name , server , df1,df2,df3 and df4 contains column \"server\" and \"ip\" and other columns too.\n\nSo, i want to merge all the columns with server and ip in the df_res.\n\nBut i am getting issue as below:\n\nCan only merge Series or DataFrame objects, a was passed. Please help.","reasoning":"The programmer wants to extract specific columns from the given dataframe and form a new table.","id":"3","excluded_ids":["N\/A"],"gold_ids_long":["Python_pandas_functions_with_style\/General_Function.txt"],"gold_ids":["Python_pandas_functions_with_style\/General_Function_5_1.txt","Python_pandas_functions_with_style\/General_Function_5_2.txt"],"gold_answer":"Use ` concat ` instead of ` merge ` to stack datframes vertically.\n\n \n \n df_res = pd.concat([df[columns] for df in dataframes],ignore_index = True)\n \n\nThis will create a new dataframe df_res with columns =['server', 'ip'] by\nstacking all rows vertically. 
It also replaces the original index with a new\nindex."} {"query":"I have a custom PyTorch model that bottlenecks my application due to how it is currently used.\n\nThe application is a web server built in Flask that receives job submissions for the PyTorch model to process. Due to the processing time of each job, I use Celery to handle the computation, where Flask queues the tasks for Celery to execute.\n\nEach job consists of loading the PyTorch model from the disk, moving the model and data to a GPU, and making a prediction on the data submitted. However, loading the model takes around 6 seconds. In many instances, that is a magnitude or two larger than prediction time.\n\nThus, is it possible to load the model and move it to a GPU on server startup (specifically when the Celery worker starts), avoiding the time needed to load the model and copy it to the GPU every job? Ideally, I'd want to load the model and copy it to every available GPU on server startup, leaving each Celery job to choose an available GPU and copy the data over. Currently, I only have one GPU, so a multi-GPU solution is not a requirement at the moment, but I'm planning ahead.\n\nFurther, the memory constraints of the model and data allow for only one job per GPU at a time, so I have a single Celery worker that processes jobs sequentially. This could reduce the complexity of the solution due to avoiding multiple jobs attempting to use the model in shared memory at the same time, so I figured I'd mention it.\n\nHow can I deal with it?","reasoning":"The application is a Flask-based web server that receives job submissions for the PyTorch model to process. Each job involves loading the model, moving it to a GPU, and making predictions on the submitted data. However, the model loading process takes around 6 seconds, which is significantly longer than the prediction time. 
Therefore, a method is needed that can load the model and move it to a GPU during server startup.","id":"4","excluded_ids":["N\/A"],"gold_ids_long":["pytorch_torch_tensor_functions\/pytorch_torch_tensor_functions.txt"],"gold_ids":["pytorch_torch_tensor_functions\/pytorch_torch_tensor_functions_294_0.txt"],"gold_answer":"Yes, there are ways to share a PyTorch model across multiple processes without\ncreating copies.\n\n**torch.multiprocessing** and **model.share_memory_()** :\n\nThis method utilizes the **torch.multiprocessing** module from PyTorch. You\ncan call **model.share_memory_()** on your model to allocate shared memory for\nits parameters. This allows all processes to access the same model parameters\nin shared memory, avoiding redundant copies. This approach is efficient for\ntraining a model in parallel across multiple CPU cores.\n\nSome resources for further exploration: [\nhttps:\/\/www.geeksforgeeks.org\/getting-started-with-pytorch\/\n](https:\/\/www.geeksforgeeks.org\/getting-started-with-pytorch\/)"} {"query":"Dropping elements from lists in a nested Polars column\n\nHow do I get this behaviour:\n\n```\npl.Series(['abc_remove_def', 'remove_abc_def', 'abc_def_remove']).str.split('_').map_elements(lambda x: [y for y in x if y != 'remove']).list.join('_')\n```\n\nWithout using the slower map_elements? I have tried using .list.eval and pl.element() but I can't find anything that actually excludes elements from a list by name (i.e. 
by the word 'remove' in this case)","reasoning":"Use the fastest method to remove elements from the given list.","id":"5","excluded_ids":["N\/A"],"gold_ids_long":["more_polar_functions\/Expression.txt","more_polar_functions\/List.txt"],"gold_ids":["more_polar_functions\/List_6_26.txt","more_polar_functions\/Expression_65_25.txt"],"gold_answer":"[ ` list.eval ` ](https:\/\/docs.pola.rs\/py-\npolars\/html\/reference\/series\/api\/polars.Series.list.eval.html) , in\ncombination with [ ` filter ` ](https:\/\/docs.pola.rs\/py-\npolars\/html\/reference\/expressions\/api\/polars.Expr.filter.html) would work as\nfollowing:\n\n \n \n # list_eval\n (pl\n .Series(['abc_remove_def', 'remove_abc_def', 'abc_def_remove']).str.split('_')\n .list.eval(pl.element().filter(pl.element() != 'remove'))\n )\n \n\nThat said, [ ` list.set_difference ` ](https:\/\/docs.pola.rs\/py-\npolars\/html\/reference\/series\/api\/polars.Series.list.set_difference.html) as\nsuggested by @jqurious is the most straightforward and fastest:\n\n \n \n # list_set_difference\n (pl\n .Series(['abc_remove_def', 'remove_abc_def', 'abc_def_remove']).str.split('_')\n .list.set_difference(['remove'])\n )\n \n\nOutput:\n\n \n \n shape: (3,)\n Series: '' [list[str]]\n [\n [\"abc\", \"def\"]\n [\"abc\", \"def\"]\n [\"abc\", \"def\"]\n ]\n \n\n##### Timings and differences\n\nlists of 3 items [ ![polars timing comparison remove item\nlist](https:\/\/i.sstatic.net\/gWbsB.png) ](https:\/\/i.sstatic.net\/gWbsB.png)\n\nlists of 100 items with many duplicates [ ![polars timing comparison remove\nitem list](https:\/\/i.sstatic.net\/LnYJ6.png) ](https:\/\/i.sstatic.net\/LnYJ6.png)\n\nlists of 100 items without duplicates [ ![polars timing comparison remove item\nlist](https:\/\/i.sstatic.net\/qpe3L.png) ](https:\/\/i.sstatic.net\/qpe3L.png)\n\n_NB. 
the timings exclude the creation of the Series._\n\nAdditionally, it is important to note that ` list.set_difference ` would also\nremove duplicated values.\n\nFor instance on:\n\n \n \n s = pl.Series(['abc_remove_abc_def', 'remove_abc_def']).str.split('_')\n \n # output after set_difference\n shape: (2,)\n Series: '' [list[str]]\n [\n [\"abc\", \"def\"]\n [\"def\", \"abc\"]\n ]\n \n # output for the other approaches\n shape: (2,)\n Series: '' [list[str]]\n [\n [\"abc\", \"abc\", \"def\"]\n [\"abc\", \"def\"]\n ]"} {"query":"I'm trying to write an LLM class as below:\n\n```\nfrom langchain_openai import ChatOpenAI\n\nclass LLM:\n def __init__(self,model_name):\n super().__init__(model_name=model_name)\n self.model_name = model_name \n\nclass OpenAILLM(LLM,ChatOpenAI):\n def __init__(self,model_name):\n super().__init__(model_name=model_name)\n self.model_name = model_name\n```\nwhich works fine but when I try to add another variable in __init__ of OpenAILLM class as below\n\n```\nfrom langchain_openai import ChatOpenAI\n\nclass LLM:\n def __init__(self,model_name):\n super().__init__(model_name=model_name)\n self.model_name = model_name \n\nclass OpenAILLM(LLM,ChatOpenAI):\n def __init__(self,model_name):\n super().__init__(model_name=model_name)\n self.model_name = model_name\n self.var = 'var'\n```\n\nThe object creation for OpenAILLM fails and gives ValueError: \"OpenAILLM\" object has no field \"var\"\n\nCan anyone help me how I can add more variables?\n\nAnother thing I tried is the get_var function define below works but the set_var function defined below gives same error as above\n\n```\nfrom langchain_openai import ChatOpenAI\n\nclass LLM():\n var = 1\n def __init__(self,model_name):\n super().__init__(model_name=model_name)\n self.model_name = model_name \n\nclass OpenAILLM(LLM,ChatOpenAI):\n def __init__(self,model_name):\n super().__init__(model_name=model_name)\n self.model_name = model_name\n def get_var(self):\n return self.var\n def set_var(self, 
var):\n print('self var',self.var)\n self.var = var\n return True\n```\n\nThe mro of OpenAILLM is as below\n\n```\n(__main__.OpenAILLM,\n __main__.LLM,\n langchain_openai.chat_models.base.ChatOpenAI,\n langchain_core.language_models.chat_models.BaseChatModel,\n langchain_core.language_models.base.BaseLanguageModel,\n langchain_core.runnables.base.RunnableSerializable,\n langchain_core.load.serializable.Serializable,\n pydantic.v1.main.BaseModel,\n pydantic.v1.utils.Representation,\n langchain_core.runnables.base.Runnable,\n typing.Generic,\n abc.ABC,\n object)\n```\n\nI think this has to do something with pydantic, but I'm not sure. And I don't know how to resolve it. Please help.","reasoning":"When trying to add another variable in the __init__ method of the OpenAILLM class, a ValueError is raised and states that the \"OpenAILLM\" object has no field \"var\". When defining `get_var` and `set_var` functions in the OpenAILLM class, the same error is encountered. The OpenAILLM class definition might not allow programmers to add new members. ","id":"6","excluded_ids":["N\/A"],"gold_ids_long":["python_data_model\/python_data_model.txt"],"gold_ids":["python_data_model\/python_data_model_49_1.txt","python_data_model\/python_data_model_49_0.txt"],"gold_answer":"There is a ` __slots__ ` attribute defined on the class ` OpenAILLM ` (by\nPydantic, which I'm not familiar with).\n\nRefer to [ https:\/\/docs.python.org\/3\/reference\/datamodel.html#slots\n](https:\/\/docs.python.org\/3\/reference\/datamodel.html#slots) to see how `\n__slots__ ` works.\n\nSimply said, if a ` __slots__ ` is defined on a class, its members are set in\nstone and you may not add additional members to its **direct** instances. 
And\nI believe that ` ChatOpenAI ` is not meant to be subclassed, so just avoid\nthat usage."} {"query":"Create a New Column Based on the Value of two Columns in Pandas with conditionals\n\nI have a dataframe with two different columns i need to use to calculate a score:\n```\nid Pos Player GW VP Final Drop TournamentPoints\n0 1 1 Alessio Bianchi 2 7.0 5.0 NaN\n1 2 2 Gianluca Bianco 2 7.0 0.0 NaN\n2 3 2 Sara Rossi 1 5.0 0.0 NaN\n3 4 2 Gabriele Verdi 1 4.5 0.0 NaN\n4 5 2 Luca Gialli 1 3.0 0.0 NaN\n```\n\nTournament points is calculated from GW and VP with a formula:\n```\ndf['TournamentPoints'] = ((number_of_players \/ 10) * (df[\"VP\"] + 1)) + (df['GW'] * x)\n```\nwhere number_of_players and X are calculated previously.\n\nHowever i need another step:\n1. add 50 to the row with the highest value in \"Final\" columns (in this case Alessio Bianchi)\n2. if two rows have the same value in \"Final\" and it's the highest, only the row with the lowest \"Pos\" must receive the 50 boost","reasoning":"The last column's values are computed from a combination of the two previous columns. Whether a row receives the bonus depends on the maximum value in one of those columns. If two rows share that maximum value, a tie must be broken. 
One of the tied rows is chosen based on the value of a second column.","id":"7","excluded_ids":["N\/A"],"gold_ids_long":["Python_pandas_functions\/Series.txt"],"gold_ids":["Python_pandas_functions\/Series_286_6.txt","Python_pandas_functions\/Series_286_5.txt"],"gold_answer":"Assuming Pos is already sorted, you can use [ ` idxmax `\n](https:\/\/pandas.pydata.org\/pandas-\ndocs\/stable\/reference\/api\/pandas.Series.idxmax.html) , this will select the\nfirst row that has the maximum value:\n\n \n \n df['TournamentPoints'] = ((number_of_players \/ 10) * (df[\"VP\"] + 1)) + (df['GW'] * x)\n \n df.loc[df['Final'].idxmax(), 'TournamentPoints'] += 50\n \n\nIf Pos is not sorted:\n\n \n \n df.loc[df.sort_values(by='Pos')['Final'].idxmax(), 'TournamentPoints'] += 50"} {"query":"I set up a local simple http server in python on windows.\n\nI set up an HTML file with javascript that uses setInterval() to request a JSON in the local directory every second. (Then updates the webpage with json content.)\n\nGreat, this works. Until after 15-60 seconds, the browser stops making requests, based on the server log in the terminal. The response 304 always precedes the break.\n\nHow can I avoid this and force the server always serve the json file every second?\n\nVery new to this attempt and had to do tricks to make the browser even make the requests for local jsons or so due to security protocols. In part why the server is necessary, as via file:\/\/\/ it can't be done.\n\nMy thought is change something in the server so if it wants to respond with 304, it will give a 200. I hope if it's always 200, the browser won't pause requests.\n\nMy other thought is ensure the json is always changing, hopefully avoiding the idea of it cached.","reasoning":"The issue is about caching. The browser always fetches the JSON file from the cache. The browser is receiving a 304 response from the server, indicating that the content has not been modified since the last request, so it uses the cached version. 
Therefore, it should avoid using cache.","id":"8","excluded_ids":["N\/A"],"gold_ids_long":["Mmdn_HTTP\/HTTP_headers.txt"],"gold_ids":["Mmdn_HTTP\/HTTP_headers_38_12.txt","Mmdn_HTTP\/HTTP_headers_38_13.txt","Mmdn_HTTP\/HTTP_headers_38_10.txt","Mmdn_HTTP\/HTTP_headers_38_11.txt","Mmdn_HTTP\/HTTP_headers_38_9.txt","Mmdn_HTTP\/HTTP_headers_38_8.txt"],"gold_answer":"It seems like your issue is related to caching. The browser is receiving a 304\nresponse from the server, indicating that the content has not been modified\nsince the last request, so it's using the cached version instead of making a\nnew request.\n\nTo force the browser to always fetch the JSON file from the server instead of\nusing the cache, you can try to disable caching on the server side. You can\nmodify your Python HTTP server to include headers that prevent caching. You\ncan do this by setting the [ Cache-Control ](https:\/\/developer.mozilla.org\/en-\nUS\/docs\/Web\/HTTP\/Headers\/Cache-Control) headers to no-cache.\n\n \n \n import sys\n from http.server import HTTPServer, SimpleHTTPRequestHandler, test\n \n \n class CORSRequestHandler(SimpleHTTPRequestHandler):\n def end_headers(self):\n self.send_header('Access-Control-Allow-Origin', '*')\n self.send_header('Cache-Control', 'no-cache')\n SimpleHTTPRequestHandler.end_headers(self)\n \n \n if __name__ == '__main__':\n test(CORSRequestHandler, HTTPServer, port=int(sys.argv[1]) if len(sys.argv) > 1 else 8000)\n \n\nIn my understanding it looks something like this: **main.py**\n\n \n \n import os\n import sys\n from http.server import HTTPServer, SimpleHTTPRequestHandler\n \n \n class CORSRequestHandler(SimpleHTTPRequestHandler):\n def end_headers(self):\n self.send_header('Access-Control-Allow-Origin', '*')\n self.send_header('Cache-Control', 'no-cache')\n super().end_headers()\n \n def do_GET(self):\n if self.path == '\/data.json':\n self.send_response(200)\n self.send_header('Content-type', 'application\/json')\n self.end_headers()\n with 
open(os.path.join(os.getcwd(), 'data.json'), 'rb') as file:\n self.wfile.write(file.read())\n else:\n super().do_GET()\n \n \n def run_server(port):\n server_address = ('', port)\n httpd = HTTPServer(server_address, CORSRequestHandler)\n print(f\"Server running on port {port}\")\n httpd.serve_forever()\n \n \n if __name__ == '__main__':\n port = int(sys.argv[1]) if len(sys.argv) > 1 else 8000\n run_server(port)\n \n \n\n**index.html**\n\n \n \n \n \n \n \n JSON<\/title>\n <\/head>\n <body>\n <h1>JSON<\/h1>\n <div id=\"my-content\"><\/div>\n <script>\n function fetchData() {\n fetch('http:\/\/localhost:8000\/data.json')\n .then(response => response.json())\n .then(data => {\n document.getElementById('my-content').innerText = JSON.stringify(data, null, 2);\n })\n .catch(error => console.error('Error fetching data:', error));\n }\n \n setInterval(fetchData, 1000);\n <\/script>\n <\/body>\n <\/html>"} {"query":"I have a dataframe with people and the food they like:\n\n```\ndf_class = pl.DataFrame(\n {\n 'people': ['alan', 'bob', 'charlie'],\n 'food': [['orange', 'apple'], ['banana', 'cherry'], ['banana', 'grape']]\n }\n)\nprint(df_class)\n\nshape: (3, 2)\n\u250c\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2510\n\u2502 people \u2506 food \u2502\n\u2502 --- \u2506 --- \u2502\n\u2502 str \u2506 list[str] \u2502\n\u255e\u2550\u2550\u2550\u2550\u2550\u256a\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2561\n\u2502 alan \u2506 [\"orange\", \"apple\"] \u2502\n\u2502 bob \u2506 [\"banana\", \"cherry\"] \u2502\n\u2502 charlie \u2506 [\"banana\", \"grape\"] \u2502\n\u2514\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\n```\n\nAnd, I have a data structure with animals and the things they like to eat:\n\n```\nanimals = [\n ('squirrel', ('acorn', 'almond')),\n ('parrot', ('cracker', 'grape', 'guava')),\n ('dog', ('chicken', 'bone')),\n ('monkey', 
('banana', 'plants'))\n]\n```\n\nI want to add a new column pets in df_class, such that pets is a list of the animals that have at least one food in common with the corresponding person:\n\n```\ndf_class.with_columns(pets=???) # <-- not sure what to do here\n\nshape: (3, 3)\n\u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510\n\u2502 people \u2506 fruits \u2506 pets \u2502\n\u2502 --- \u2506 --- \u2506 --- \u2502\n\u2502 str \u2506 list[str] \u2506 list[str] \u2502\n\u255e\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u256a\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u256a\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2561\n\u2502 alan \u2506 [\"orange\", \"apple\"] \u2506 [] \u2502\n\u2502 bob \u2506 [\"banana\", \"cherry\"] \u2506 [\"monkey\"] \u2502\n\u2502 charlie \u2506 [\"banana\", \"grape\"] \u2506 [\"monkey\", \"parrot\"] \u2502\n\u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\n```\n\n1. I have some flexibility in the data structure for animals, in case for e.g. this is easier to do with some sort of set intersection\n2. the order of pets is unimportant\n3. pets should contain unique values\n4. 
I'm looking for a single expression that would achieve the desired result, so as to fit within a larger framework of a list of expressions that perform transformations on other columns of my actual dataset\n\nSide note: my title seems kind of clunky and I'm open to suggestions to reword so that it's easier to find by others that might be trying to solve a similar problem.","reasoning":"The programmer wants to add a new column of data (such as the data structure is like, name: information) to the original table, and this set of data will be related to each column in the original table. For example, if the second column has the same information with new piece of data, a new corresponding name can be added to the new column.","id":"9","excluded_ids":["N\/A"],"gold_ids_long":["polar_functions\/Expression.txt"],"gold_ids":["polar_functions\/Expression_63_25.txt"],"gold_answer":"It looks like a ` join ` on the exploded lists.\n\nIt can be kept as a \"single expression\" by putting it inside [ `\n.map_batches() `\n](https:\/\/docs.pola.rs\/docs\/python\/dev\/reference\/expressions\/api\/polars.map_batches.html#polars-\nmap-batches)\n\n \n \n df_class.with_columns(pets = \n pl.col(\"food\").map_batches(lambda col:\n pl.LazyFrame(col)\n .with_row_index()\n .explode(\"food\")\n .join(\n pl.LazyFrame(animals, schema=[\"pets\", \"food\"]).explode(\"food\"),\n on = \"food\",\n how = \"left\"\n )\n .group_by(\"index\", maintain_order=True)\n .agg(\n pl.col(\"pets\").unique().drop_nulls()\n )\n .collect()\n .get_column(\"pets\")\n )\n )\n \n \n \n shape: (3, 3)\n \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510\n \u2502 people \u2506 food \u2506 pets \u2502\n \u2502 --- \u2506 --- \u2506 --- 
\u2502\n \u2502 str \u2506 list[str] \u2506 list[str] \u2502\n \u255e\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u256a\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u256a\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2561\n \u2502 alan \u2506 [\"orange\", \"apple\"] \u2506 [] \u2502\n \u2502 bob \u2506 [\"banana\", \"cherry\"] \u2506 [\"monkey\"] \u2502\n \u2502 charlie \u2506 [\"banana\", \"grape\"] \u2506 [\"monkey\", \"parrot\"] \u2502\n \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\n \n\n### Explanation\n\nWe add a row index to the \"left\" frame and [ explode\n](https:\/\/docs.pola.rs\/docs\/python\/dev\/reference\/dataframe\/api\/polars.DataFrame.explode.html#polars.DataFrame.explode)\nthe lists.\n\n(The row index will allow us to rebuild the rows later on.)\n\n \n \n df_class_long = df_class.with_row_index().explode(\"food\")\n \n # shape: (6, 3)\n # \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510\n # \u2502 index \u2506 people \u2506 food \u2502\n # \u2502 --- \u2506 --- \u2506 --- \u2502\n # \u2502 u32 \u2506 str \u2506 str \u2502\n # \u255e\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u256a\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u256a\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2561\n # \u2502 0 \u2506 alan \u2506 orange \u2502\n # \u2502 0 \u2506 alan \u2506 apple \u2502\n # \u2502 1 \u2506 bob \u2506 banana \u2502\n # \u2502 1 \u2506 bob \u2506 
cherry \u2502\n # \u2502 2 \u2506 charlie \u2506 banana \u2502\n # \u2502 2 \u2506 charlie \u2506 grape \u2502\n # \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\n \n \n \n df_pets_long = pl.DataFrame(animals, schema=[\"pets\", \"food\"]).explode(\"food\")\n \n # shape: (9, 2)\n # \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510\n # \u2502 pets \u2506 food \u2502\n # \u2502 --- \u2506 --- \u2502\n # \u2502 str \u2506 str \u2502\n # \u255e\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u256a\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2561\n # \u2502 squirrel \u2506 acorn \u2502\n # \u2502 squirrel \u2506 almond \u2502\n # \u2502 parrot \u2506 cracker \u2502\n # \u2502 parrot \u2506 grape \u2502\n # \u2502 parrot \u2506 guava \u2502\n # \u2502 dog \u2506 chicken \u2502\n # \u2502 dog \u2506 bone \u2502\n # \u2502 monkey \u2506 banana \u2502\n # \u2502 monkey \u2506 plants \u2502\n # \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\n \n\nWe then use a [ Left Join ](https:\/\/docs.pola.rs\/user-\nguide\/transformations\/joins\/#left-join) to find the \"intersections\" (whilst\nkeeping all the rows from the left side).\n\n \n \n df_class_long.join(df_pets_long, on=\"food\", how=\"left\")\n \n # shape: (6, 4)\n # \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510\n # \u2502 index \u2506 people \u2506 food \u2506 pets \u2502\n # \u2502 --- \u2506 --- \u2506 --- \u2506 --- \u2502\n # \u2502 u32 \u2506 str \u2506 str \u2506 str \u2502\n # 
\u255e\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u256a\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u256a\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u256a\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2561\n # \u2502 0 \u2506 alan \u2506 orange \u2506 null \u2502\n # \u2502 0 \u2506 alan \u2506 apple \u2506 null \u2502\n # \u2502 1 \u2506 bob \u2506 banana \u2506 monkey \u2502\n # \u2502 1 \u2506 bob \u2506 cherry \u2506 null \u2502\n # \u2502 2 \u2506 charlie \u2506 banana \u2506 monkey \u2502\n # \u2502 2 \u2506 charlie \u2506 grape \u2506 parrot \u2502\n # \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\n \n\nWe can then rebuild the \"rows\" with [ ` .group_by() `\n](https:\/\/docs.pola.rs\/docs\/python\/dev\/reference\/dataframe\/api\/polars.DataFrame.group_by.html#polars.DataFrame.group_by)\n\n * You stated you want [ ` .unique() ` ](https:\/\/docs.pola.rs\/docs\/python\/dev\/reference\/expressions\/api\/polars.Expr.unique.html#polars.Expr.unique) pets only. \n * We also [ drop nulls. 
](https:\/\/docs.pola.rs\/docs\/python\/dev\/reference\/expressions\/api\/polars.Expr.drop_nulls.html#polars.Expr.drop_nulls)\n\n \n \n (df_class_long.join(df_pets_long, on=\"food\", how=\"left\")\n .group_by(\"index\", maintain_order=True) # we need to retain row order\n .agg(pl.col(\"pets\").unique().drop_nulls())\n )\n \n # shape: (3, 2)\n # \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510\n # \u2502 index \u2506 pets \u2502\n # \u2502 --- \u2506 --- \u2502\n # \u2502 u32 \u2506 list[str] \u2502\n # \u255e\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u256a\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2561\n # \u2502 0 \u2506 [] \u2502\n # \u2502 1 \u2506 [\"monkey\"] \u2502\n # \u2502 2 \u2506 [\"monkey\", \"parrot\"] \u2502\n # \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518"} {"query":"I\u00b4ve got the following DataFrame:\n\n value\nA B\n111 2024-03-22 00:00:00 1\n111 2024-03-22 01:00:00 2\n111 2024-03-22 02:00:00 3\n222 2024-03-22 00:00:00 4\n222 2024-03-22 01:00:00 5\n222 2024-03-22 02:00:00 6\nNow I want to resample and sum index B to days and would expect the following result:\n\n value\nA B\n111 2024-03-22 00:00:00 6\n222 2024-03-22 00:00:00 15\nHow can I achieve something like that?\n\nAnother Example would be the following:\n\n value\nA B\n111 2024-03-22 00:00:00 1\n111 2024-03-22 01:00:00 2\n111 2024-03-22 02:00:00 3\n222 2024-03-22 00:00:00 4\n222 2024-03-22 01:00:00 5\n222 2024-03-22 02:00:00 6\n333 2024-03-22 05:00:00 7\nOf which I want the following result with resampling by 1h:\n\n value\nA B\n111 2024-03-22 00:00:00 1\n111 2024-03-22 01:00:00 2\n111 2024-03-22 02:00:00 3\n111 
2024-03-22 03:00:00 0\n111 2024-03-22 04:00:00 0\n111 2024-03-22 05:00:00 0\n222 2024-03-22 00:00:00 4\n222 2024-03-22 01:00:00 5\n222 2024-03-22 02:00:00 6\n222 2024-03-22 03:00:00 0\n222 2024-03-22 04:00:00 0\n222 2024-03-22 05:00:00 0\n333 2024-03-22 00:00:00 0\n333 2024-03-22 01:00:00 0\n333 2024-03-22 02:00:00 0\n333 2024-03-22 03:00:00 0\n333 2024-03-22 04:00:00 0\n333 2024-03-22 05:00:00 7\nPandas Version: 2.0.1\n\nI tried using level on resample but that way I lose Index A.\n\nI have the same issue when I have two timestamps in the index and want one to be resampled to days and the other to hours.\n\nI\u00b4ve looked at other answers of related questions here but couldn\u00b4t find a way to get them working for me.","reasoning":"Programmers want to merge or expand the rows of a table based on the characteristics of a particular column.","id":"10","excluded_ids":["N\/A"],"gold_ids_long":["Python_pandas_functions\/DataFrame.txt"],"gold_ids":["Python_pandas_functions\/DataFrame_74_5.txt","Python_pandas_functions\/DataFrame_74_4.txt","Python_pandas_functions\/DataFrame_98_6.txt","Python_pandas_functions\/DataFrame_74_6.txt","Python_pandas_functions\/DataFrame_74_7.txt","Python_pandas_functions\/DataFrame_98_5.txt","Python_pandas_functions\/DataFrame_98_4.txt"],"gold_answer":"You need to ` groupby ` before you ` resample ` to preserve the ` A ` index.\n\n \n \n import pandas as pd\n \n df = pd.DataFrame.from_dict({'value': \n {(111, pd.Timestamp('2024-03-22 00:00:00')): 1,\n (111, pd.Timestamp('2024-03-22 01:00:00')): 2,\n (111, pd.Timestamp('2024-03-22 02:00:00')): 3,\n (222, pd.Timestamp('2024-03-22 00:00:00')): 4,\n (222, pd.Timestamp('2024-03-22 01:00:00')): 5,\n (222, pd.Timestamp('2024-03-22 02:00:00')): 6}}\n )\n \n df.groupby(level=0).resample('d', level=1).sum()\n # returns:\n value\n A B\n 111 2024-03-22 6\n 222 2024-03-22 15"} {"query":"My dataframe looks like\n\n```\ndata = {\n \"ReportName\": [\"Sample cycle\", 'Message', \"ID\", \"m1\", 
\"Uncertainty m1\", \"Message\", \"Sample cycle\", 'Message', \"ID\", \"m0\", \"Uncertainty m0\", \"Message\", \"ID\", \"m1\", \"Uncertainty m1\", \"Message\"],\n \"Values\": [ \"1\",\"NO\", \"II\", None, None, \"NO\", \"1\", \"NO\", \"ID1\", \"1.8\", \"0.43\", \"NO\", \"ID2\", \"1.5\", \"0.41\", \"NO\"],\n}\n\ndf = pd.DataFrame(data)\n```\n\nI I created a new \"ID\" column with this function\n\n```\ndef extract_id(row):\n if row['ReportName'] == 'ID':\n return row['Values']\n return None\n```\n\nNow I want fill Na with ID from ReportName == 'Sample cycle' to next 'Sample cycle'.\n\nDesired output\n```\n ReportName Values ID\n0 Sample cycle 1 None\n1 Message NO II\n2 ID II II\n3 m1 None II\n4 Uncertainty m1 None II\n5 Message NO II\n6 Sample cycle 1 None\n7 Message NO ID1\n8 ID ID1 ID1\n9 m0 1.8 ID1\n10 Uncertainty m0 0.43 ID1\n11 Message NO ID1\n12 ID ID2 ID2\n13 m1 1.5 ID2\n14 Uncertainty m1 0.41 ID2\n15 Message NO ID2\n```","reasoning":"The programmer wants to add a column and fill it with the value when \"sample cycle\" appears in the first column until the next \"sample cycle\" is encountered. 
The value of the new column will be filled with the value of the ID of the sample cycle.","id":"11","excluded_ids":["N\/A"],"gold_ids_long":["Python_pandas_functions\/GroupBy.txt"],"gold_ids":["Python_pandas_functions\/GroupBy_78_4.txt"],"gold_answer":"You can use [ ` groupby.transform ` ](https:\/\/pandas.pydata.org\/pandas-\ndocs\/stable\/reference\/api\/pandas.core.groupby.DataFrameGroupBy.transform.html#pandas.core.groupby.DataFrameGroupBy.transform)\nwith masks:\n\n \n \n # identify rows with ID\n m1 = df['ReportName'].eq('ID')\n # identify rows with \"Sample cycle\"\n # this is used both to form groups\n # and to mask the output\n m2 = df['ReportName'].eq('Sample cycle')\n \n df.loc[~m2, 'ID'] = (df['Values'].where(m1).groupby(m2.cumsum())\n .transform(lambda x: x.ffill().bfill())\n )\n \n\nOutput:\n\n \n \n ReportName Values ID\n 0 Sample cycle 1 NaN\n 1 Message NO II\n 2 ID II II\n 3 m1 None II\n 4 Uncertainty m1 None II\n 5 Message NO II\n 6 Sample cycle 1 NaN\n 7 Message NO ID1\n 8 ID ID1 ID1\n 9 m0 1.8 ID1\n 10 Uncertainty m0 0.43 ID1\n 11 Message NO ID1\n 12 ID ID2 ID2\n 13 m1 1.5 ID2\n 14 Uncertainty m1 0.41 ID2\n 15 Message NO ID2"} {"query":"I wrote a bot that changes the name of the telegram channel every N minutes. But when the name of the channel is changed, telegram automatically sends a message that the channel name has been changed\n\nI tried the bot.delete_message command, but couldn't figure out how to delete that exact message","reasoning":"The programmer wants to delete the automatic message sent by Telegram when the channel name is changed every N minutes. 
He needs a decorator to intercept and delete the messages.","id":"12","excluded_ids":["N\/A"],"gold_ids_long":["TeleBot\/TeleBot_methods.txt"],"gold_ids":["TeleBot\/TeleBot_methods_2_44.txt","TeleBot\/TeleBot_methods_2_31.txt","TeleBot\/TeleBot_methods_2_47.txt","TeleBot\/TeleBot_methods_2_29.txt","TeleBot\/TeleBot_methods_2_17.txt","TeleBot\/TeleBot_methods_2_7.txt","TeleBot\/TeleBot_methods_2_53.txt","TeleBot\/TeleBot_methods_2_18.txt","TeleBot\/TeleBot_methods_2_22.txt","TeleBot\/TeleBot_methods_2_8.txt","TeleBot\/TeleBot_methods_2_3.txt","TeleBot\/TeleBot_methods_2_56.txt","TeleBot\/TeleBot_methods_2_24.txt","TeleBot\/TeleBot_methods_2_25.txt","TeleBot\/TeleBot_methods_2_27.txt","TeleBot\/TeleBot_methods_2_55.txt","TeleBot\/TeleBot_methods_2_33.txt","TeleBot\/TeleBot_methods_2_54.txt","TeleBot\/TeleBot_methods_2_49.txt","TeleBot\/TeleBot_methods_2_21.txt","TeleBot\/TeleBot_methods_2_46.txt","TeleBot\/TeleBot_methods_2_4.txt","TeleBot\/TeleBot_methods_2_58.txt","TeleBot\/TeleBot_methods_2_43.txt","TeleBot\/TeleBot_methods_2_30.txt","TeleBot\/TeleBot_methods_2_1.txt","TeleBot\/TeleBot_methods_2_52.txt","TeleBot\/TeleBot_methods_2_32.txt","TeleBot\/TeleBot_methods_2_50.txt","TeleBot\/TeleBot_methods_2_41.txt","TeleBot\/TeleBot_methods_2_0.txt","TeleBot\/TeleBot_methods_2_26.txt","TeleBot\/TeleBot_methods_2_9.txt","TeleBot\/TeleBot_methods_2_34.txt","TeleBot\/TeleBot_methods_2_37.txt","TeleBot\/TeleBot_methods_2_39.txt","TeleBot\/TeleBot_methods_2_35.txt","TeleBot\/TeleBot_methods_2_36.txt","TeleBot\/TeleBot_methods_2_10.txt","TeleBot\/TeleBot_methods_2_57.txt","TeleBot\/TeleBot_methods_2_16.txt","TeleBot\/TeleBot_methods_2_5.txt","TeleBot\/TeleBot_methods_2_14.txt","TeleBot\/TeleBot_methods_2_45.txt","TeleBot\/TeleBot_methods_2_19.txt","TeleBot\/TeleBot_methods_2_12.txt","TeleBot\/TeleBot_methods_2_2.txt","TeleBot\/TeleBot_methods_2_6.txt","TeleBot\/TeleBot_methods_2_13.txt","TeleBot\/TeleBot_methods_2_42.txt","TeleBot\/TeleBot_methods_2_28.txt","TeleBot\/TeleBot_m
ethods_2_51.txt","TeleBot\/TeleBot_methods_2_23.txt","TeleBot\/TeleBot_methods_2_48.txt","TeleBot\/TeleBot_methods_2_38.txt","TeleBot\/TeleBot_methods_2_20.txt","TeleBot\/TeleBot_methods_2_11.txt","TeleBot\/TeleBot_methods_2_15.txt","TeleBot\/TeleBot_methods_2_40.txt"],"gold_answer":"To delete the message that Telegram sends automatically when the channel name\nis changed, you can use the **channel_post_handler** decorator provided by the\nTelebot library to intercept and delete these messages.\n\nExample: **bot.py**\n\n \n \n import telebot\n \n bot = telebot.TeleBot('BOT_TOKEN')\n \n \n @bot.channel_post_handler(content_types=['new_chat_title'])\n def channel_name_changed(message):\n try:\n bot.delete_message(message.chat.id, message.message_id)\n except Exception as e:\n print(\"Error deleting message:\", e)\n \n \n if __name__ == '__main__':\n bot.polling()\n \n\n**changer.py**\n\n \n \n import random\n from time import sleep\n \n import telebot\n \n bot = telebot.TeleBot('BOT_TOKEN')\n \n \n def change_channel_name():\n new_name = random.choice([\"Channel 1\", \"Channel 2\", \"Channel 3\", \"Channel 4\", \"Channel 5\"])\n try:\n bot.set_chat_title(\n \"@CHAT\",\n new_name\n )\n except Exception as e:\n print(\"Error changing channel name:\", e)\n \n \n def main():\n while True:\n change_channel_name()\n sleep(10)\n \n \n if __name__ == '__main__':\n main()"} {"query":"I am working on a huge denormalized table on a SQL server (10 columns x 130m rows). 
Take this as a data example:\n\n```\nimport pandas as pd\nimport numpy as np\ndata = pd.DataFrame({\n 'status' : ['pending', 'pending','pending', 'canceled','canceled','canceled', 'confirmed', 'confirmed','confirmed'],\n 'clientId' : ['A', 'B', 'C', 'A', 'D', 'C', 'A', 'B','C'],\n 'partner' : ['A', np.nan,'C', 'A',np.nan,'C', 'A', np.nan,'C'],\n 'product' : ['afiliates', 'pre-paid', 'giftcard','afiliates', 'pre-paid', 'giftcard','afiliates', 'pre-paid', 'giftcard'],\n 'brand' : ['brand_1', 'brand_2', 'brand_3','brand_1', 'brand_2', 'brand_3','brand_1', 'brand_3', 'brand_3'],\n 'gmv' : [100,100,100,100,100,100,100,100,100]})\n\ndata = data.astype({'partner':'category','status':'category','product':'category', 'brand':'category'})\n```\n\nAs you can see, many of its columns are categories\/strings that could be factorized (replaced by a small int identification to another x.1 join).\n\nMy question is if there is an easy way to extract another \"dataframe\" from each category column and factorize the main table, so the bytes transmitted over a single query could be faster! 
Is there any easy library for it?\n\nI would expect to get this output:\n\n```\n data = pd.DataFrame({\n 'status' : ['1', '1','1', '2','2','2', '3', '3','3'],\n 'clientId' : ['1', '2', '3', '1', '4', '3', '1', '2','3'],\n 'partner' : ['A', np.nan,'C', 'A',np.nan,'C', 'A', np.nan,'C'],\n 'product' : ['afiliates', 'pre-paid', 'giftcard','afiliates', 'pre-paid', 'giftcard','afiliates', 'pre-paid', 'giftcard'],\n 'brand' : ['brand_1', 'brand_2', 'brand_3','brand_1', 'brand_2', 'brand_3','brand_1', 'brand_3', 'brand_3'],\n 'gmv' : [100,100,100,100,100,100,100,100,100]})\n \nstatus_df = {1 : 'pending', 2:'canceled', 3:'confirmed'} \nclientid = {1 : 'A', 2:'B', 3:'C', 4:'D'}\n```","reasoning":"The goal is to factorize columns with different categories to optimize data transfer efficiency on SQL servers, which can improve query performance by replacing category columns with small integer identifiers.","id":"13","excluded_ids":["N\/A"],"gold_ids_long":["Python_pandas_functions\/General_Function.txt"],"gold_ids":["Python_pandas_functions\/General_Function_27_0.txt","Python_pandas_functions\/General_Function_27_1.txt","Python_pandas_functions\/General_Function_27_2.txt"],"gold_answer":"You can use [ ` factorize ` ](https:\/\/pandas.pydata.org\/pandas-\ndocs\/stable\/reference\/api\/pandas.factorize.html) to do this. 
For example:\n\n \n \n codes, uniques = pd.factorize(data['status'])\n data['status'] = codes\n status_df = pd.DataFrame(uniques)\n \n\nOutput (data):\n\n \n \n status clientId partner product brand gmv\n 0 0 A A afiliates brand_1 100\n 1 0 B NaN pre-paid brand_2 100\n 2 0 C C giftcard brand_3 100\n 3 1 A A afiliates brand_1 100\n 4 1 D NaN pre-paid brand_2 100\n 5 1 C C giftcard brand_3 100\n 6 2 A A afiliates brand_1 100\n 7 2 B NaN pre-paid brand_3 100\n 8 2 C C giftcard brand_3 100\n \n\nOutput (status_df):\n\n \n \n 0\n 0 pending\n 1 canceled\n 2 confirmed\n \n\nFor columns like ` partner ` , where there are ` NaN ` values, you can choose\nto have them replaced with ` -1 ` (the default behaviour), or to have ` NaN `\nincluded in ` partner_df ` (along with its own index) by specifying `\nuse_na_sentinel=False ` ."} {"query":"Is any supertype and also a subtype of all types in TypeScript?\n\nI am teaching a class on TypeScript fundamentals and I am trying to really understand the relationships between basic TS types. In all articles I saw, they put the any type very close to the top of the type hierarchy. Either\n\n+---------+\n| unknown |\n+---------+\n |\n v\n +-----+\n | any |\n +-----+\n |\n v\nall other types\nOr maybe\n\n+--------------+\n| any, unknown |\n+--------------+\n |\n v\nall other types\nBut I do not see it that way. You can assign value of all other types to an unknown variable but you cannot assign unknown value to anything. That puts it on top. But with any, you can assign all values of all types to any and also assign an any value to variables of all types. So it behaves as uknown and never at the same time. From this I would argue, that any stands completely aside of the type hierarchy tree not fitting anywhere in it. 
I would say any sidesteps the type system completely and in essence means \"turn off typechecking\"\n\nAm I wrong?","reasoning":"The questioner wants to confirm whether their understanding of relationships between basic TS types (unknown, any, all other types) is correct, and hopes that there is a literature or a definition to support their understanding.","id":"14","excluded_ids":["N\/A"],"gold_ids_long":["TypeScript\/Top_type.txt"],"gold_ids":["TypeScript\/Top_type_1_0.txt"],"gold_answer":"From the [ docs ](https:\/\/www.typescriptlang.org\/docs\/handbook\/basic-\ntypes.html#any) :\n\n> The any type is a powerful way to work with existing JavaScript, allowing\n> you to gradually opt-in and opt-out of type checking during compilation.\n\nSo i guess your understanding is correct. ` any ` is a way to opt out of type\nchecking.\n\nBut it (and also, unknown) are top types as all types are assignable to them.\n\nFrom [ wiki definition\n](https:\/\/en.wikipedia.org\/wiki\/Top_type#:%7E:text=The%20top%20type%20is%20sometimes,object%20of%20the%20type%20system.)\nof top types:\n\n> The top type is sometimes called also universal type, or universal supertype\n> as all other types in the type system of interest are subtypes of it, and in\n> most cases, it contains every possible object of the type system.\n\nIt seems that all that is required from a top type is all other types to be\nassignable to it, which is true of both ` unknown ` and ` any ` ."} {"query":"Better option than pandas iterrows\n\nI have following table in pandas. the table contains time and the price of the product.\n\nFor analysis purposes, I want to have 2 columns which would contain the next time when the product is more than $100 price change & less than $100 price change.\n\ne.g. 
if I am at cell 09:19 cell the next price more than $100 would be 14:02 & less than $100 would be 11:39 so 14:02 & 11:39 should come in 09:19 row in respective columns.\n\nSame way against cell 09:56, next price more than $100 would be 14:02 & less than $100 would be 12:18 so these 2 values would come in against the row of 09:56.\n\n```\nTable\nTime Price Up_Time Down_Time\n09:19:00 3252.25 \n09:24:00 3259.9 \n09:56:00 3199.4 \n10:17:00 3222.5 \n10:43:00 3191.25 \n11:39:00 3143 \n12:18:00 2991.7 \n13:20:00 3196.35 \n13:26:00 3176.1 \n13:34:00 3198.85 \n13:37:00 3260.75 \n14:00:00 3160.85 \n14:02:00 3450 \n14:19:00 3060.5 \n14:30:00 2968.7 \n14:31:00 2895.8 \n14:52:00 2880.7 \n14:53:00 2901.55 \n14:55:00 2885.55 \n14:57:00 2839.05 \n14:58:00 2871.5 \n15:00:00 2718.95\n```\n \nI am using following code, which works but takes 15-20 mins for 1 dataset.\n\n```\nfor i, row in df.iterrows():\n time_up = np.nan\n time_down = np.nan\n\n for j in range(i+1, len(df)):\n diff = df.iloc[j]['Price'] - row['Price']\n if diff > 100:\n time_up = df.iloc[j]['Time']\n elif diff < -100:\n time_down = df.iloc[j]['Time']\n\n if not pd.isna(time_up) or not pd.isna(time_down):\n break\n\n df.at[i, 'Up_Time'] = time_up\n df.at[i, 'Down_Time'] = time_down\n```\n\nIs there any efficient ways to do it?","reasoning":"Programmers are maintaining a fluctuating dollar schedule. He\/She wants to add two new columns to the table, recording the time when there is a price difference greater than or equal to +100 USD compared to the current time, and the time when the price difference is greater than or equal to -100 USD. 
He wants to know a more efficient function to improve processing speed.","id":"15","excluded_ids":["N\/A"],"gold_ids_long":["Python_pandas_functions\/DataFrame.txt"],"gold_ids":["Python_pandas_functions\/DataFrame_148_4.txt","Python_pandas_functions\/DataFrame_148_5.txt"],"gold_answer":"You do need to compare each row's ` Price ` value with all the rows that come\nafter it, so some amount of iteration is necessary. You can do that with `\napply ` and a function using numpy to find the first value which meets the\nchange requirement of >100 or <-100:\n\n \n \n def updown(row, df):\n rownum = row.name\n up = (row['Price'] < df.loc[rownum:, 'Price'] - 100).argmax()\n down = (row['Price'] > df.loc[rownum:, 'Price'] + 100).argmax()\n return (\n df.loc[up + rownum, 'Time'] if up > 0 else pd.NaT,\n df.loc[down + rownum, 'Time'] if down > 0 else pd.NaT\n )\n \n df[['Up_Time', 'Down_Time']] = df.apply(updown, axis=1, result_type='expand', df=df)\n \n\nOutput:\n\n \n \n Time Price Up_Time Down_Time\n 0 09:19:00 3252.25 14:02:00 11:39:00\n 1 09:24:00 3259.90 14:02:00 11:39:00\n 2 09:56:00 3199.40 14:02:00 12:18:00\n 3 10:17:00 3222.50 14:02:00 12:18:00\n 4 10:43:00 3191.25 14:02:00 12:18:00\n 5 11:39:00 3143.00 13:37:00 12:18:00\n 6 12:18:00 2991.70 13:20:00 14:52:00\n 7 13:20:00 3196.35 14:02:00 14:19:00\n 8 13:26:00 3176.10 14:02:00 14:19:00\n 9 13:34:00 3198.85 14:02:00 14:19:00\n 10 13:37:00 3260.75 14:02:00 14:19:00\n 11 14:00:00 3160.85 14:02:00 14:19:00\n 12 14:02:00 3450.00 NaT 14:19:00\n 13 14:19:00 3060.50 NaT 14:31:00\n 14 14:30:00 2968.70 NaT 14:57:00\n 15 14:31:00 2895.80 NaT 15:00:00\n 16 14:52:00 2880.70 NaT 15:00:00\n 17 14:53:00 2901.55 NaT 15:00:00\n 18 14:55:00 2885.55 NaT 15:00:00\n 19 14:57:00 2839.05 NaT 15:00:00\n 20 14:58:00 2871.50 NaT 15:00:00\n 21 15:00:00 2718.95 NaT NaT"} {"query":"Trying to interact with a contract using is function selector, how do i get the block hash?\n\nI have the below code in a index.js file, but when i run it i get an 
error message: The method eth_sendTransaction does not exist\/is not available. Please can anyone help?\n\n```\nrequire('dotenv').config();\nrequire('events').EventEmitter.defaultMaxListeners = 0\n\nconst { ethers } = require('ethers');\n\n\/\/ Provider and contract address\nconst provider = new ethers.providers.JsonRpcProvider('https:\/\/polygon-mumbai.infura.io\/v3\/7d803b173d114ba8a1bffafff7ff541a');\nconst wallet = new ethers.Wallet(process.env.KEY, provider)\n\/\/const signer = wallet.provider.getSigner(wallet.address);\nconst contractAddress = '0x57C98f1f2BC34A0054CBc1257fcc9333c1b6730c';\n\n\/\/ Function selector\nconst functionSelector = '0x838ad0ee';\n\n\n\/\/ Call the contract\nconst send = async () => {\n try {\n const result = await wallet.call({\n to: contractAddress,\n data: functionSelector\n });\n console.log(\"Result:\", result);\n} catch (error) {\n console.error(\"Error:\", error);\n}\n}\n\nsend()\n```","reasoning":"The desired behavior is to record the transaction. However, the Mumbai TestNet explorer does not work. Therefore, if the behavior of sending a read-write transaction that will be stored on the chain is required, another method should be used.","id":"16","excluded_ids":["N\/A"],"gold_ids_long":["Providers\/Providers.txt"],"gold_ids":["Providers\/Providers_9_0.txt","Providers\/Providers_51_0.txt"],"gold_answer":"const result = await wallet.call({\n to: contractAddress,\n data: functionSelector\n });\n \n\n> _The code produces and output but it does not record the transaction on the\n> Mumbai TestNet explorer_\n\nCalls are read-only actions that are not recorded on the chain. 
This way, you\ncan retrieve data from the chain without paying transaction fees, but you\ncan't modify any state with a call.\n\nIf you want to send a (read-write) transaction that will be stored on the\nchain, one of the ways to do it is to invoke the [ wallet.sendTransaction()\n](https:\/\/docs.ethers.org\/v6\/api\/providers\/#Signer-sendTransaction) method _(`\nWallet ` type inherits from the ` Signer ` type) _ ."} {"query":"Selenium Python how to run an XPath error handling exception on loop with multiple def?\n\nis it possible to schedule my code by running it with schedule.every(10).seconds.do(example) or similar? I'm trying to schedule the Error Handling part in my code (XPath) which works with a While loop although because its in a While loop it refuses to run the other def functions, I want the XPath\/error handling part to run in a loop with my Selenium window but detect and not interfere with the other def functions. it should just detect the XPath being there or not being there then run an exception if it doesn't detect.. 
Does anyone have a solution?\n\n```\ndef example():\r\n options = Options()\r\n options.add_experimental_option(\"detach\", True)\r\n\r\n driver = webdriver.Chrome(service=Service(ChromeDriverManager().install()),\r\n options=options)\r\n\r\n driver.get(\"example.com\")\r\n driver.set_window_position(0, 0)\r\n driver.set_window_size(750, 512)\r\n\r\n while True:\r\n e = driver.find_elements(By.XPATH,\"\/html\/body\/div\/div\/div\/div\/div[2]\/div\/div\/div[2]\/div[2]\/div[1]\/span\/button\/span\")\r\n if not e:\r\n print(\"Element not found\")\r\n pyautogui.moveTo(89, 56)\r\n time.sleep(1)\r\n pyautogui.click()\r\n\r\n time.sleep(10)\r\n\r\n\r\ndef example2():\r\n options = Options()\r\n options.add_experimental_option(\"detach\", True)\r\n\r\n driver = webdriver.Firefox(service=FirefoxService(GeckoDriverManager().install()))\r\n\r\n driver.get(\"example.com\")\r\n driver.set_window_position(750, 0)\r\n driver.set_window_size(750, 512)\r\n\r\n while True:\r\n e = driver.find_elements(By.XPATH,\"\/html\/body\/div\/div\/div\/div\/div[2]\/div\/div\/div[2]\/div[2]\/div[1]\/span\/button\/span\")\r\n if not e:\r\n print(\"Element not found\")\r\n pyautogui.moveTo(850, 57)\r\n time.sleep(1)\r\n pyautogui.click()\r\n\r\nschedule.every().day.at(\"22:59\").do(example)\r\nschedule.every().day.at(\"22:59\").do(example2)\r\n\r\nwhile True:\r\n schedule.run_pending()\r\n time.sleep(1)\n```","reasoning":"Try to schedule the Error Handling part in the code while the Selenium window is still working. The while loop will refuse to run the other def functions. 
Python's default execution model is single-threaded, so another mechanism such as threading is needed to keep both loops running.","id":"17","excluded_ids":["N\/A"],"gold_ids_long":["Python_Threading_Schedule\/Python_threading.txt","Python_Threading_Schedule\/Python_schedule.txt"],"gold_ids":["Python_Threading_Schedule\/Python_threading__2_2.txt","Python_Threading_Schedule\/Python_threading__2_1.txt","Python_Threading_Schedule\/Python_schedule__1_0.txt","Python_Threading_Schedule\/Python_threading__2_0.txt"],"gold_answer":"Your current approach tries to use a blocking while True loop inside your example and example2 functions, which continuously checks for the presence of an element by its XPath. This will indeed block any other code execution including scheduled tasks, as Python's default execution model is single-threaded. If one part of the code is in an infinite loop or a long-running operation, it effectively freezes everything else.\n\nMy suggestion is that you should use Python's threading library to run your blocking loops in separate threads. 
This allows your main program to continue running and executing other scheduled tasks without being blocked by the infinite loops in example and example2.\n\nimport threading\nimport schedule\nimport time\n# Import other necessary modules like Selenium, pyautogui, etc.\n\ndef example():\n # Your existing code for example function\n\ndef example2():\n # Your existing code for example2 function\n\n# Schedule tasks as before\nschedule.every().day.at(\"22:59\").do(example)\nschedule.every().day.at(\"22:59\").do(example2)\n\n# Run the example functions in their threads\nexample_thread = threading.Thread(target=example)\nexample2_thread = threading.Thread(target=example2)\n\nexample_thread.start()\nexample2_thread.start()\n\n# Run the scheduler in the main thread\nwhile True:\n schedule.run_pending()\n time.sleep(1)\nThere is another approach that you can use.Instead of using a while True loop within your example and example2 functions, consider restructuring your code to check for the element at scheduled intervals. This can be done directly with the schedule library by scheduling the check itself, instead of putting it in an infinite loop.\n\nFor instance, you could create a function dedicated to checking the element's presence and then schedule this function to run every 10 seconds or so, alongside your other tasks. This avoids the need for a blocking loop and allows the scheduler to manage when the checks occur.\n\nFor handling exceptions such as an element not being found (which seems to be your primary concern), make sure to wrap the relevant part of your code in a try-except block. This will catch exceptions related to the element not being present and allow your code to react accordingly (e.g., clicking somewhere else on the page).\n\nPlease be cautious about accessing shared resources from multiple threads simultaneously, as this can lead to race conditions. Use threading locks if necessary to avoid such issues.\n\nHope it helps. 
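As a postscript, the try-except idea above can be sketched without a real browser. Here `safe_check` and `find_elements` are illustrative names (stand-ins for a wrapper around `driver.find_elements`), not parts of the original code:

```python
def safe_check(find_elements, xpath):
    # Wrap the element lookup so a lookup error (e.g. a lost browser
    # session) is caught instead of killing the scheduler loop.
    try:
        elements = find_elements(xpath)
    except Exception as exc:
        print("Lookup failed:", exc)
        return False
    if not elements:
        print("Element not found")
        return False
    return True

# Simulated lookups -- no browser is needed for the sketch:
found = safe_check(lambda xp: ["span"], "//span")   # element present
missing = safe_check(lambda xp: [], "//span")       # element absent
```

Scheduling such a check every few seconds (e.g. with `schedule.every(10).seconds.do(...)`) then replaces the blocking `while True` loop.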
Thanks"} {"query":"How to create an image from a string in python\n\nI'm currently having trouble creating an image from a binary string of data in my Python program. I receive the binary data via a socket but when I try the methods I read about in Imaging Library Handbook like this:\n\n\u00b7\u00b7\u00b7\nbuff = StringIO.StringIO() #buffer where image is stored\n#Then I concatenate data by doing a \nbuff.write(data) #the data from the socket\nim = Image.open(buff)\n\u00b7\u00b7\u00b7\nI get an exception to the effect of \"image type not recognized\". I know that I am receiving the data correctly because if I write the image to a file and then open the file it works:\n\n\u00b7\u00b7\u00b7\nbuff = StringIO.StringIO() #buffer where image is stored\nbuff.write(data) #data is from the socket\noutput = open(\"tmp.jpg\", 'wb')\noutput.write(buff)\noutput.close()\nim = Image.open(\"tmp.jpg\")\nim.show()\n\u00b7\u00b7\u00b7\nI figure I am probably doing something wrong in using the StringIO class but I'm not sure","reasoning":"The programmer tries to create an image from a string. However, the data is received and the content is correct. The possible reason is wrong use of StringIO functions.","id":"18","excluded_ids":["N\/A"],"gold_ids_long":["Python_StringIO\/Python_StringIO.txt"],"gold_ids":["Python_StringIO\/Python_StringIO_3_4.txt","Python_StringIO\/Python_StringIO_3_5.txt","Python_StringIO\/Python_StringIO_3_7.txt","Python_StringIO\/Python_StringIO_3_1.txt","Python_StringIO\/Python_StringIO_3_2.txt","Python_StringIO\/Python_StringIO_3_8.txt","Python_StringIO\/Python_StringIO_3_0.txt","Python_StringIO\/Python_StringIO_3_6.txt","Python_StringIO\/Python_StringIO_3_3.txt"],"gold_answer":"I suspect that you're not ` seek ` -ing back to the beginning of the buffer\nbefore you pass the StringIO object to PIL. 
Here's some code that demonstrates\nthe problem and solution:\n\n \n \n >>> buff = StringIO.StringIO()\n >>> buff.write(open('map.png', 'rb').read())\n >>> \n >>> #seek back to the beginning so the whole thing will be read by PIL\n >>> buff.seek(0)\n >>>\n >>> Image.open(buff)\n <PngImagePlugin.PngImageFile instance at 0x00BD7DC8>\n >>> \n >>> #that worked.. but if we try again:\n >>> Image.open(buff)\n Traceback (most recent call last):\n File \"<stdin>\", line 1, in <module>\n File \"c:\\python25\\lib\\site-packages\\pil-1.1.6-py2.5-win32.egg\\Image.py\", line 1916, in open\n raise IOError(\"cannot identify image file\")\n IOError: cannot identify image file\n \n\nMake sure you call ` buff.seek(0) ` before reading any StringIO objects.\nOtherwise you'll be reading from the end of the buffer, which will look like\nan empty file and is likely causing the error you're seeing."} {"query":"Retrieve bounded time periods as a step function where beginning of period returns 1, end of period returns 0\n\nI have a table in Microsoft SQL Server called Downtime that looks like this (I have omitted some irrelevant columns and many entries):\n\n```\nID StartTime EndTime\n5 2024-03-27 09:07:20.653 2024-03-27 09:09:07.690\n17 2024-03-27 09:59:50.557 2024-03-27 10:59:50.137\n24 2024-03-27 11:04:07.497 2024-03-27 11:07:02.657\n```\n\nI need to write a query that turns the above data into this format:\n\n```\nt_stamp CurrentlyDown\n2024-03-27 09:07:20.653 1\n2024-03-27 09:09:07.690 0\n2024-03-27 09:59:50.557 1\n2024-03-27 10:59:50.137 0\n2024-03-27 11:04:07.497 1\n2024-03-27 11:07:02.657 0\n```\n\nIn words, this query should split each original entry into two entries (one t_stamp for StartTime and one t_stamp for EndTime) and return a value (CurrentlyDown) of 1 (if t_stamp is from the StartTime column) or 0 (if t_stamp is from the EndTime column).\n\nI can think to try two things:\n\nA self join around the ID field with a CASE statement checking the timestamp fields\nTwo CTE's (one 
focused on grabbing StartTimes and the other focused on EndTimes) with a final query to join these two CTE's together around the ID column. Maybe just one CTE is needed here?\n\nI am concerned with performance so I want to do this as efficiently as possible. I am far from a SQL expert, so I don't really know which path is best to take (if either).","reasoning":"This is a SQL table issue where the author wants the original entries to be split into two entries by rearranging the StartTime and EndTime that were originally in each row into columns. The StartTime is then represented by a 1 and the EndTime by a 0. The author would like to have very efficient ways of dealing with this that could be different from what he has proposed for larger tables.","id":"19","excluded_ids":["N\/A"],"gold_ids_long":["U_SQL_cross_apply\/cross_apply.txt"],"gold_ids":["U_SQL_cross_apply\/cross_apply_0_0.txt","U_SQL_cross_apply\/cross_apply_0_1.txt"],"gold_answer":"Here is one option using ` CROSS APPLY `\n\n**Example**\n\n \n \n Select B.* \n From YourTable A\n Cross Apply ( values (StartTime,1)\n ,(EndTime ,0)\n ) B(t_stamp,currentlydown)\n \n\n**Results**\n\n \n \n t_stamp currentlydown\n 2024-03-27 09:07:20.653 1\n 2024-03-27 09:09:07.690 0\n 2024-03-27 09:59:50.557 1\n 2024-03-27 10:59:50.137 0\n 2024-03-27 11:04:07.497 1\n 2024-03-27 11:07:02.657 0"} {"query":"Why does JavaScript return different results for RegExp test() method with empty object?\n\nI recently came across a code snippet (a joke) where the test() method of JavaScript's RegExp object was used with an empty object as an argument. 
Here's the code:\n\n```\nconsole.log(new RegExp({}).test('mom')); \/\/ true\nconsole.log(new RegExp({}).test('dad')); \/\/ false\n```\n\nCan someone explain why is it happens?","reasoning":"It is not clear to the questioner why RegExp's test() function returns different results, and he\/she does not seem to have a deep understanding of RegExp's constructor.","id":"20","excluded_ids":["N\/A"],"gold_ids_long":["Mmdn_RegExp\/Mmdn_RegExp.txt"],"gold_ids":["Mmdn_RegExp\/Mmdn_RegExp_2_3.txt","Mmdn_RegExp\/Mmdn_RegExp_2_2.txt"],"gold_answer":"This is a curious fact. [ RegExp constructor\n](https:\/\/developer.mozilla.org\/en-\nUS\/docs\/Web\/JavaScript\/Reference\/Global_Objects\/RegExp) accepts a string as\nits first argument. Since you are passing ` {} ` it gets coerced to string,\nand the coercion of an object is the literal string ` [object Object] ` .\n\nBy a fortuitous coincidence, the square brackets have a precise meaning in a\nregular expression, and it means \"one of these characters of the set\".\n\nThus, the regular expression is equal to ` [objectO ] ` . In other words, your\ncode is equal to:\n\n \n \n console.log(new RegExp('[object Object]').test('mom'));\n \n\nwhich is equal to:\n\n \n \n console.log(new RegExp('[objectO ]').test('mom'));\n \n\nwhich means: tests ` true ` if any of these characters is present: o, b, j, e,\nc, t, O, space. ` mom ` satisfies this condition, while ` dad ` doesn't."} {"query":"Problem generating causal sentences with js [closed]\n\nI am creating a sentence translation app, but I have a problem with the generation of causal sentences. I have to generate the sentences with an external url, but in my code instead of generating me a whole sentence, it only generates one character. 
Can you help me understand where I am going wrong?\n\n```\n<main>\n <section class=\"panel translation\">\n <div class=\"translation-flag\"><\/div>\n <button class=\"favorites\">\u2b50<\/button>\n <div class=\"translation-result\">\n <p class=\"translation-text\">Traduzione<\/p>\n <\/div>\n <\/section>\n\n <section class=\"panel controls\">\n <input class=\"text-input\" type=\"text\" placeholder=\"inserisci il testo da tradurre\">\n \n <button class=\"lang-button\" data-lang=\"en\">\ud83c\uddec\ud83c\udde7<\/button>\n <button class=\"lang-button\" data-lang=\"fr\">\ud83c\uddeb\ud83c\uddf7<\/button>\n <button class=\"lang-button\" data-lang=\"es\">\ud83c\uddea\ud83c\uddf8<\/button>\n <button class=\"reset-button\">\u274c<\/button>\n <button class=\"random-button\">\n <i class=\"fa-solid fa-dice icon-dice\"><\/i>\n <\/button>\n <\/section>\n\n <div class=\"panel translate\">\n <h1>Le tue parole preferite<\/h1>\n <ul class=\"translate-favorites\">\n \n <\/ul> \n\n <\/p>\n <\/div>\n<\/main>\n\n\n\n const langButtons = document.querySelectorAll('.lang-button');\nconst textInput = document.querySelector('.text-input');\nconst translationText = document.querySelector('.translation-text');\nconst translationFlag = document.querySelector('.translation-flag');\nconst resetButton = document.querySelector('.reset-button');\nconst randomButton = document.querySelector('.random-button');\n\n\n\/\/Funzione per aggiornare la lista delle traduzioni preferite\nfunction updateFavoriteList(translation) {\n const translateFavorite = document.querySelector('.translate-favorites');\n translateFavorite.innerHTML = '';\n translation.forEach(function(translateTexts) {\n const li = document.createElement('li');\n li.textContent = translateTexts;\n translateFavorite.appendChild(li);\n });\n}\n\nasync function translate(text, lang, flag) {\n const url = `https:\/\/api.mymemory.translated.net\/get?q=${text}&langpair=it|${lang}`;\n const response = await fetch(url);\n const jsonData = await 
response.json();\n const result = jsonData.responseData.translatedText;\n console.log(result);\n\n translationText.innerText = result;\n translationFlag.innerText = flag;\n}\n\nlangButtons.forEach(function(langButton) {\n langButton.addEventListener('click', function() {\n\n \/\/ recupero il testo dal campo di input e rimuovo eventuali spazi extra\n \/\/ all'inizio e alla fine della stringa inserita con il metodo .trim()\n \/\/ https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/JavaScript\/Reference\/Global_Objects\/String\/trim\n const text = textInput.value.trim();\n\n \/\/ recupero il codice lingua dal data-attribute del pulsante\n const lang = langButton.dataset.lang;\n \/\/ recupero la bandierina dalla testo del pulsante\n const flag = langButton.innerText;\n\n \/\/ se il campo di input ha effettvamente del testo\n \/\/ invoco la funzione e faccio partire la chiamata alle API\n if(text.length > 0) {\n translate(text, lang, flag);\n }\n });\n});\n\nrandomButton.addEventListener('click', function() {\n const randomUrls = 'https:\/\/random-word-api.herokuapp.com\/word?length=5';\n const randomIndex = Math.floor(Math.random() * randomUrls.length);\n const randomWork = randomUrls[randomIndex];\n console.log(randomWork);\n textInput.value = randomWork;\n});\n\n\n\nresetButton.addEventListener('click', reset);\n```\n\nThe new code for the my question is:\n```\nasync function randomString() {\n const response = await fetch('https:\/\/random-word-api.herokuapp.com\/word?lang=it');\n const work = await response.json();\n console.log(work);\n textInput.value = work;\n}\n\nrandomButton.addEventListener('click', function() {\n randomString();\n});\n```","reasoning":"The programmer is creating a sentence translation app, the app also uses an external url to generate the causal sentence. However, the generation only contains one character. 
It seems that he should use another function to solve.","id":"21","excluded_ids":["N\/A"],"gold_ids_long":["Mmdn_fetch_File\/Mmdn_fetch.txt"],"gold_ids":["Mmdn_fetch_File\/Mmdn_fetch_7_3.txt","Mmdn_fetch_File\/Mmdn_fetch_7_2.txt","Mmdn_fetch_File\/Mmdn_fetch_7_1.txt","Mmdn_fetch_File\/Mmdn_fetch_7_5.txt","Mmdn_fetch_File\/Mmdn_fetch_7_4.txt","Mmdn_fetch_File\/Mmdn_fetch_7_6.txt"],"gold_answer":"# Page not found\n\nThis question was removed from Stack Overflow for reasons of moderation.\nPlease refer to the help center for [ possible explanations why a question might be removed ](\/help\/deleted-questions)."} {"query":"CannotDeliverBroadcastException On Samsung devices running Android 13 (API 33)\n\nWe got lot of crash reports from Firebase regarding this crash.\n\n```\nFatal Exception: android.app.RemoteServiceException$CannotDeliverBroadcastException: can't deliver broadcast\n at android.app.ActivityThread.throwRemoteServiceException(ActivityThread.java:2219)\n at android.app.ActivityThread.-$$Nest$mthrowRemoteServiceException()\n at android.app.ActivityThread$H.handleMessage(ActivityThread.java:2508)\n at android.os.Handler.dispatchMessage(Handler.java:106)\n at android.os.Looper.loopOnce(Looper.java:226)\n at android.os.Looper.loop(Looper.java:313)\n at 
android.app.ActivityThread.main(ActivityThread.java:8762)\n at java.lang.reflect.Method.invoke(Method.java)\n at com.android.internal.os.RuntimeInit$MethodAndArgsCaller.run(RuntimeInit.java:604)\n at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:1067)\n```\n\nCrash happens only on Samsung devices running android 13 (API 33)\n\nWe followed this post CannotDeliverBroadcastException only on Pixel devices running Android 12\n\nThe thing is - we can't just silence\/absorb this crash, as some of our app functionality is build on the broadcast (BroadcastReceiver, registered in Manifest).\n\nWe need to actually prevent this crash (somehow :))\n\nWe checked also this one on google issue tracker but it seems there is no solution https:\/\/issuetracker.google.com\/issues\/245258072\n\nAny of you found a solution for this? Thanks","reasoning":"The programmer got a crash report related to broadcast on Samsung devices running Android 13. He\/she has checked a relevant solution, which is about solving a BroadcastException on Pixel devices running Android 12. Maybe the real cause of the crash is not the broadcast itself but something else.","id":"22","excluded_ids":["N\/A"],"gold_ids_long":["android_memory_battery\/android_memory.txt"],"gold_ids":["android_memory_battery\/android_memory_2_9.txt","android_memory_battery\/android_memory_2_5.txt","android_memory_battery\/android_memory_2_8.txt","android_memory_battery\/android_memory_2_7.txt","android_memory_battery\/android_memory_2_6.txt"],"gold_answer":"This may not be a broadcast bug at all, but a knock-on error caused by memory\nfragmentation (OOM).\n\nProfile your app while it is running and make sure it is not running out of\nits allocated memory.\n\nHere are some very [ good suggestions\n](https:\/\/developer.android.com\/topic\/performance\/memory) to try. 
Bouncing the\nservice (kill\/relaunch) on a timer might prevent this behavior too."} {"query":"How to control Argo(helm) flow with Script output\n\nI have a script defined in Argo WorkflowTemplate as follows. This may either print status:true or status:false (I have simplified this)\n\n```\n- name:util-script\n script:\n image: python3\n command: [\"python3\"]\n script |\n print(status:true)\n```\n\nIs there a way to control dag flow based on the above script output? I know helm provides flow control like Helm Doc - Flow control\n\nFollowing is what I tried so far, but this always jumps to else condition. anything Im doing wrong? Appreciate any input on this\n\n```\n{{- if contains \"status:true\" `tasks.util-script.outputs.result` }}\n # even result is `status:true`, does not reach here\n{{ else }}\n # always reach here\n{{ end }}\n```\n\nI have verified tasks.util-script.outputs.result indeed returns the expected result.","reasoning":"The programmer has defined a script in the Argo WorkflowTemplate that prints either `status:true` or `status:false`. The programmer wants to control the workflow based on the script's output but is unable to correctly evaluate the output. They have used the `contains` function to check if the output contains `status:true`, but it always goes to the else condition. The programmer has verified that `tasks.util-script.outputs.result` indeed returns the expected result. Another conditional script should be used.","id":"23","excluded_ids":["N\/A"],"gold_ids_long":["Argo_Workflows_Walk_through\/Argo_Workflows_Walk_through.txt"],"gold_ids":["Argo_Workflows_Walk_through\/Argo_Workflows_Walk_through_4_3.txt"],"gold_answer":"Helm executes before the workflow runs. That's why you'd end up in the else\nblock.\n\n 1. Your task is not evaluated \n 2. 
When template is rendered, \n \n `tasks.util-script.outputs.result`\n\ncreates a literal string, which doesn't contain status text\n\nYou need to use a ` when ` conditional, as shown in the documentation - [\nhttps:\/\/argo-workflows.readthedocs.io\/en\/stable\/walk-through\/conditionals\/\n](https:\/\/argo-workflows.readthedocs.io\/en\/stable\/walk-through\/conditionals\/)\n\n \n \n - name: next\n template: next\n when: {{`{{tasks.util-script.outputs.result}}`}} == \"status:true\""} {"query":"How can I convert a PredictResponse to JSON?\n\nI have a VertexAI project I want to access. I'm currently trying two approaches, via a React frontend and via a Python backend, which I would then connect to the FE. I posted a question about making requests to VertexAI from Node here.\n\nIn the python approach, I'm able to make the request and receive the correct response. However, in order for it to be accessible by the FE, I would need to convert it to JSON. I'm struggling with how to do that.\n\nHere's the code I'm using:\n\n```\n# The AI Platform services require regional API endpoints.\nclient_options = {\"api_endpoint\": api_endpoint}\n# Initialize client that will be used to create and send requests.\n# This client only needs to be created once, and can be reused for multiple requests.\nclient = PredictionServiceClient(\n client_options=client_options, credentials=credentials\n)\ninstance = schema.predict.instance.TextClassificationPredictionInstance(\n content=content,\n).to_value()\ninstances = [instance]\nparameters_dict = {}\nparameters = json_format.ParseDict(parameters_dict, Value())\nendpoint = client.endpoint_path(\n project=project_id, location=compute_region, endpoint=endpoint_id\n)\nresponse = client.predict(\n endpoint=endpoint, instances=instances, parameters=parameters\n)\nresponse_dict = [dict(prediction) for prediction in response.predictions]\n```\n\n`response_dict` is printable, but I can't convert `response` to json using `json.dumps` 
because:\n\n```\nTypeError: Object of type PredictResponse is not JSON serializable\n```\nThis is the error that has been plaguing me in every attempt. DuetAI simply tells me to use `json.dumps`.\n\nEDIT\nHere's the working code using the accepted response:\n```\n ...\n response = client.predict(\n endpoint=endpoint, instances=instances, parameters=parameters\n )\n\n predictions = MessageToDict(response._pb)\n predictions = predictions[\"predictions\"][0]\n```","reasoning":"The programmer wants to access a project through Python. Although he can make the request and receive the correct response, he does not know how to convert it into JSON, which is a necessary step for it to be accessed by the FE. It seems that he\/she does not notice the transmission format for each JSON-related function.","id":"24","excluded_ids":["N\/A"],"gold_ids_long":["google_protobuf\/google_protobuf.txt"],"gold_ids":["google_protobuf\/google_protobuf_20_1.txt","google_protobuf\/google_protobuf_20_0.txt"],"gold_answer":"Since ` json.dumps ` requires serialization of your response\/s, I believe it's\nbetter to use [ ` json_format.MessageToDict `\n](https:\/\/stackoverflow.com\/questions\/73588294\/unable-to-convert-protobuf-\nbinary-file-to-json-in-python-bytes-object-has-no?rq=3) instead of `\njson_format.ParseDict ` .\n\nThey work quite the opposite:\n\n * ` ParseDict ` takes a JSON dictionary representation and merges it into a pre-existing Protobuf message object. It parses the JSON data and populates the corresponding fields in the Protobuf message. \n\n * ` MessageToDict ` takes a Protobuf message object and converts it into a Python dictionary representation which can then be easily serialized to JSON using ` json.dumps ` . \n\nI found some online resources which discuss the ` MessageToDict ` function of\nthe protobuf library. 
Please check them out for more information.\n\n * [ Unable to convert protobuf binary file to json in python - bytes' object has no attribute 'DESCRIPTOR' ](https:\/\/stackoverflow.com\/questions\/73588294\/unable-to-convert-protobuf-binary-file-to-json-in-python-bytes-object-has-no?rq=3)\n\n * [ JSON to Protobuf in Python ](https:\/\/stackoverflow.com\/questions\/60345426\/json-to-protobuf-in-python?rq=3)\n\n * [ Unable to parse Vertex job responses lack ](https:\/\/cloud.google.com\/knowledge\/kb\/unable-to-parse-vertex-job-responses-lack-000004947)"} {"query":"How can I initialise a constexpr array with values using std::generate\n\nFor example, if I wanted a constexpr std::array<int,100> initialised with all the multiples of 3 from 1-300 at compile time how can I do this?\n\nMy first thought was to use std::generate, something like:\n\n```\nconstexpr std::array<int,100> a { std::generate(a.begin(), a.end(), [n=0]()mutable{ return n+=3; });\n```\n\nI get an error such as `<source>:9:52: error: void value not ignored as it ought to be`.\n\nand I can't use std::generate after this because of course, it's read only at that point.","reasoning":"An attempt was made to initialize a constexpr array using std::generate and a lambda expression to generate multiples of 3 between 1 and 300. However, an error was encountered in the code: <source>:9:52: error: void value not ignored as it ought to be. Additionally, since the array is read-only after initialization, it is not possible to use std::generate after that point. A helper alias template is needed to overcome these challenges.","id":"25","excluded_ids":["N\/A"],"gold_ids_long":["cpp_utility\/cpp_classes.txt"],"gold_ids":["cpp_utility\/cpp_classes_5_0.txt","cpp_utility\/cpp_classes_5_4.txt","cpp_utility\/cpp_classes_5_3.txt"],"gold_answer":"You can use ` index_sequence ` :\n\n \n \n template <size_t ... Indices>\n constexpr auto gen(std::index_sequence<Indices...>) {\n return std::array<int,100>{ (Indices * 3)... 
};\n }\n \n int main() {\n constexpr std::array<int,100> a = gen(std::make_index_sequence<100>());\n }"} {"query":"django add filtes in template is not work expectedly\n\nI am a Django and Python beginner, and I encountered a problem while using Django. I hope to use a filter in the template to get the string I need, but the result below is not what I expected.\n\n```\n# filter definition\n\n@register.filter\ndef to_class_name(obj):\n return obj.__class__.__name__\n```\n\n```\n# HTML template for UpdateView (This template will be used by multiple models.)\n# object = Order()\n\n{{ object|to_class_name }} #reulst: Order\n\n{{ 'wms'|add:object|to_class_name }} #result: str, expect: object\n\n```\n\nI roughly understand that the issue lies with the order, but it seems I can't add parentheses to modify it.\n\n```\n{{ 'wms'|add:(object|to_class_name) }} #cause SyntaxError\n```\n\nIs there any way to solve this problem? Or is there a better way to determine the page data I need to output based on the model class when multiple models share one template? Thank you all.","reasoning":"The programmer encountered an issue while using Django. The aim is to use a filter in the template to retrieve the desired string, but the resulting output does not match the expectations. He\/she wants another syntax, which is a better approach to determine the page data to output based on the model class when multiple models share one template.","id":"26","excluded_ids":["N\/A"],"gold_ids_long":["Django_template\/Built-in_template_tags_filters.txt"],"gold_ids":["Django_template\/Built-in_template_tags_filters_38_0.txt"],"gold_answer":"> I roughly understand that the issue lies with the order, but it seems I\n> can't add parentheses to modify it.\n\nYou can use variables. [ link\n](https:\/\/docs.djangoproject.com\/en\/5.0\/ref\/templates\/builtins\/#with)\n\nOne way is to use ` with ` tag.\n\n> it caches a complex variable under a simpler name. 
The variable is available\n> until the ` {% endwith %} ` tag appears.\n\neg.\n\n \n \n {% with firstname=\"Tobias\" %}\n <h1>Hello {{ firstname }}, how are you?<\/h1>\n {% endwith %}\n \n\nIn your case, something like this\n\n \n \n {% with class_name=object|to_class_name %}\n {{ 'wms'|add:class_name }}\n {% endwith %}\n \n\nAnother approach is to use ` firstof ` tag. [ link\n](https:\/\/docs.djangoproject.com\/en\/5.0\/ref\/templates\/builtins\/#firstof)\n\n \n \n {% firstof object|to_class_name as class_name %}\n {{ 'wms'|add:class_name }}"} {"query":"Saving a scipy.sparse matrix directly as a regular txt file\n\nI have a scipy.sparse matrix (csr_matrix()). But I need to save it to a file not in the .npz format but as a regular .txt or .csv file. My problem is that I don't have enough memory to convert the sparse matrix into a regular np.array() and then save it to a file. Is there a way to have the data as a sparse matrix in memory but save it directly as a regular matrix to the disk? Or is there a way to \"unzip\" a .npz file without loading it into memory inside Python?","reasoning":"Due to memory constraints, converting the sparse matrix into a regular np.array() and subsequently saving it becomes problematic. The challenge is to find a solution that allows the data to remain as a sparse matrix in memory while directly saving it as a regular matrix to the disk.","id":"27","excluded_ids":["N\/A"],"gold_ids_long":["scipy_io\/scipy_io.txt"],"gold_ids":["scipy_io\/scipy_io_18_0.txt","scipy_io\/scipy_io_20_0.txt"],"gold_answer":"Answer to new question:\n\n \n \n import numpy as np\n from scipy import sparse, io\n A = sparse.eye(5, format='csr') * np.pi\n np.set_printoptions(precision=16, linewidth=1000)\n with open('matrix.txt', 'a') as f:\n for row in A:\n f.write(str(row.toarray()[0]))\n f.write('\\n')\n \n # [3.141592653589793 0. 0. 0. 0. ]\n # [0. 3.141592653589793 0. 0. 0. ]\n # [0. 0. 3.141592653589793 0. 0. ]\n # [0. 0. 0. 3.141592653589793 0. ]\n # [0. 0. 
0. 0. 3.141592653589793]\n \n\nAnd with begin\/end brackets:\n\n \n \n import numpy as np\n from scipy import sparse, io\n A = sparse.eye(5, format='csr') * np.pi\n np.set_printoptions(precision=16, linewidth=1000)\n with open('matrix.txt', 'a') as f:\n for i, row in enumerate(A):\n f.write('[' if (i == 0) else ' ')\n f.write(str(row.toarray()[0]))\n f.write(']' if (i == A.shape[0] - 1) else '\\n')\n \n # [[3.141592653589793 0. 0. 0. 0. ]\n # [0. 3.141592653589793 0. 0. 0. ]\n # [0. 0. 3.141592653589793 0. 0. ]\n # [0. 0. 0. 3.141592653589793 0. ]\n # [0. 0. 0. 0. 3.141592653589793]]\n \n\nYou may have to fiddle with ` set_printoptions ` depending on your data.\n\n* * *\n\nAnswer to original question, which did not require that the matrix be written\nas dense.\n\n[ Harwell-Boeing format\n](https:\/\/docs.scipy.org\/doc\/scipy\/reference\/io.html#harwell-boeing-files) is\nplain text:\n\n \n \n import numpy as np\n from scipy import sparse, io\n A = sparse.eye(3, format='csr') * np.pi\n \n # Default title 0 \n # 3 1 1 1\n # RUA 3 3 3 0\n # (40I2) (40I2) (3E25.16) \n # 1 2 3 4\n # 1 2 3\n # 3.1415926535897931E+00 3.1415926535897931E+00 3.1415926535897931E+00\n \n io.hb_write('matrix.txt', A) # saves as matrix.txt\n A2 = io.hb_read('matrix.txt')\n assert not (A2 != A).nnz # efficient check for equality\n \n\nSo is [ Matrix Market\n](https:\/\/docs.scipy.org\/doc\/scipy\/reference\/io.html#matrix-market-files) :\n\n \n \n io.mmwrite('matrix', A) # saves as matrix.mtx\n \n # %%MatrixMarket matrix coordinate real symmetric\n # %\n # 3 3 3\n # 1 1 3.141592653589793e+00\n # 2 2 3.141592653589793e+00\n # 3 3 3.141592653589793e+00\n \n A2 = io.mmread('matrix')\n assert not (A2 != A).nnz\n \n\n* * *\n\nIf you want an even simpler _format_ , although it involves more code:\n\n \n \n import numpy as np\n from scipy import sparse\n A = sparse.eye(10, format='csr')*np.pi\n \n np.savetxt('data.txt', A.data)\n np.savetxt('indices.txt', A.indices, fmt='%i')\n np.savetxt('indptr.txt', 
A.indptr, fmt='%i')\n \n\nTo load:\n\n \n \n data = np.loadtxt('data.txt')\n indices = np.loadtxt('indices.txt', dtype=np.int32)\n indptr = np.loadtxt('indptr.txt', dtype=np.int32)\n \n A2 = sparse.csr_matrix((data, indices, indptr))\n assert not (A2 != A).nnz\n \n\nBut the important idea is that all you need to save are the ` data ` , `\nindices ` , and ` indptr ` attributes of the ` csr_matrix ` ."} {"query":"dbms_random.value() in Snowflake - Oracle to snowflake conversion\n\nBelow is the oracle sql query that I got to convert to snowflake. In here, i am blocked in creating dbms_random.value() in snowflake\n\n```\nselect emp_id, emp_name, emp_mob,\n(case when dbms_random.value() >= 0.85 then 'Y' else 'N' end) as tag\nfrom eds_dwg.employee_data\n```\n\nCan someone help me on this?\n\nThanks","reasoning":"Trying to convert a piece of oracle sql code to a snowflake version. But the author doesn't know how to mimic dbms_random.value() with snowflake's function.","id":"28","excluded_ids":["N\/A"],"gold_ids_long":["snowflake_random_controlled_distribution\/snowflake_controlled_distribution.txt","snowflake_random_controlled_distribution\/snowflake_random.txt"],"gold_ids":["snowflake_random_controlled_distribution\/snowflake_controlled_distribution_0_1.txt","snowflake_random_controlled_distribution\/snowflake_random_2_1.txt","snowflake_random_controlled_distribution\/snowflake_controlled_distribution_4_1.txt"],"gold_answer":"You can use Snowflake Data generation functions: [\nhttps:\/\/docs.snowflake.com\/en\/sql-reference\/functions-data-generation.html\n](https:\/\/docs.snowflake.com\/en\/sql-reference\/functions-data-generation.html)\n\nNORMAL() returns a floating point number with a specified mean and standard\ndeviation. 
Something like this with correct adaptions of the parameters could\nto the trick: [ https:\/\/docs.snowflake.com\/en\/sql-\nreference\/functions\/normal.html ](https:\/\/docs.snowflake.com\/en\/sql-\nreference\/functions\/normal.html)\n\nAn alternative can be using UNIFORM(): [ https:\/\/docs.snowflake.com\/en\/sql-\nreference\/functions\/uniform.html ](https:\/\/docs.snowflake.com\/en\/sql-\nreference\/functions\/uniform.html)\n\nExample from docs to generate a value between 0 and 1:\n\n \n \n select uniform(0::float, 1::float, random()) from table(generator(rowcount => 5));"} {"query":"Inconsistent delay when using RegisterHotkey and PeekMessagew In Go\n\nI'm trying to write a simple Go program that will listen for global windows hotkeys while in the background and send API calls when specific ones have been pressed. This is my first time learning how to work with Windows messages so this is probably just me not understanding how they work.\n\nMy calls to PeekMessageW often return 0 even though hotkeys are being pressed, and then after a minute or so suddenly returns them all at once. Is this just expected behaviour?\n\nI'm doing everything on the same goroutine. 
First I create a new window and save the HWND like this:\n\n```\nfunc createWindow(\n user32 *syscall.DLL,\n dwExStyle uint32,\n lpClassName, lpWindowName *uint16,\n dwStyle uint32,\n x, y, nWidth, nHeight int,\n hWndParent, hMenu, hInstance uintptr,\n lpParam unsafe.Pointer,\n) uintptr {\n procCreateWindowEx := user32.MustFindProc(\"CreateWindowExW\")\n\n ret, _, _ := procCreateWindowEx.Call(\n uintptr(dwExStyle),\n uintptr(unsafe.Pointer(lpClassName)),\n uintptr(unsafe.Pointer(lpWindowName)),\n uintptr(dwStyle),\n uintptr(x),\n uintptr(y),\n uintptr(nWidth),\n uintptr(nHeight),\n hWndParent,\n hMenu,\n hInstance,\n uintptr(lpParam),\n )\n return ret\n}\n\nfunc GiveSimpleWindowPls(user32 *syscall.DLL) uintptr {\n var hwnd uintptr\n className, _ := syscall.UTF16PtrFromString(\"STATIC\")\n windowName, _ := syscall.UTF16PtrFromString(\"Simple Window\")\n\n hwnd = createWindow(\n user32,\n 0,\n className,\n windowName,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n nil,\n )\n\n if hwnd != 0 {\n fmt.Println(\"HWND:\", hwnd)\n } else {\n fmt.Println(\"Could not create window\")\n }\n return hwnd\n}\n```\n\nI then register my hotkeys:\n\n```\nfunc Register(user32 *syscall.DLL, hwnd uintptr) (map[int]*Hotkey, error) {\n reghotkey := user32.MustFindProc(\"RegisterHotKey\")\n\n \/\/ Hotkeys to listen to:\n \/\/ (hardcoded for now)\n keys := map[int]*Hotkey{\n 6: {6, ModAlt + ModCtrl + ModShift, '6'},\n 7: {7, ModAlt + ModCtrl + ModShift, '7'},\n 8: {8, ModAlt + ModCtrl + ModShift, '8'},\n 9: {9, ModAlt + ModCtrl + ModShift, '9'},\n 10: {10, ModAlt + ModCtrl + ModShift, '0'},\n }\n\n \/\/ Register hotkeys:\n for _, v := range keys {\n r1, _, err := reghotkey.Call(hwnd, uintptr(v.Id), uintptr(v.Modifiers), uintptr(v.KeyCode))\n if r1 != 1 {\n return nil, fmt.Errorf(\"error registering hotkey %#v: %w\", v, err)\n }\n }\n return keys, nil\n}\n```\n\nAnd finally call PeekMessageW in a loop:\n\n```\nconst WM_HOTKEY = 0x0312\n\nfunc Listen(keys map[int]*Hotkey, switcher Switcher, hwnd 
uintptr) {\n peekmsg := user32.MustFindProc(\"PeekMessageW\")\n\n for {\n var msg = &MSG{}\n a, _, _ := peekmsg.Call(uintptr(unsafe.Pointer(msg)), hwnd, WM_HOTKEY, WM_HOTKEY, 1)\n fmt.Printf(\"%#v %d \\n\", msg, a)\n\n if a == 0 {\n time.Sleep(time.Millisecond * 50)\n continue\n }\n\n if key, ok := keys[int(msg.WPARAM)]; ok {\n fmt.Println(\"Hotkey was pressed:\", key)\n\n switcher.Switch(key.Id) \/\/ for now this just does nothing and returns\n }\n }\n}\n```\n\nAnd at first this seems to work perfectly, but every minute or so PeekMessageW starts to only return 0 even though I am pressing my hotkeys. Then after a while, it returns all the messages one after the other, as if the queue had congested for a while and finally gets released.\r\n\r\nAm I wrong in expecting to be able to peek WM_HOTKEY messages immediately (or at least within a couple seconds) after the hotkey is pressed? The program sometimes stops receiving these messages for up to a minute at a time, and then suddenly processes them all at once. Is the queue being blocked by some other process I have running that Windows gives priority?\r\n\r\nThis is my first time trying to figure out how this works, but I've spent hours searching for a solution online and can't see where I'm going wrong.","reasoning":"The issue involves writing a Go program that listens for global Windows hotkeys in the background and sends API calls when specific hotkeys are pressed. However, there are inconsistencies with the delay when using RegisterHotkey and PeekMessageW. The PeekMessageW function sometimes returns 0 even when hotkeys are pressed, and after a minute or so, it suddenly returns all the messages at once. The question is whether this behavior is expected or if there is an error in the code. 
Maybe another function needs to be used here.","id":"29","excluded_ids":["N\/A"],"gold_ids_long":["go_runtime\/go_runtime.txt"],"gold_ids":["go_runtime\/go_runtime_15_0.txt"],"gold_answer":"After several red herrings I finally found the solution to the problem. I\nneeded to call runtime.LockOSThread() before everything to lock my goroutine\nto the current system thread, as the hotkey messages are thread specific."} {"query":"Command-line to reverse byte order\/change endianess\n\nI'm hacking around in some scripts trying to parse some data written by Java's `DataOutputStream#writeLong(...)`. Since Java always seems to write big endian, I have a problem feeding the bytes to `od`. This is due to the fact that `od` always assumes that the endianess matches the endianess of the arch that you are currently on, and I'm on a little endian machine.\n\nI'm looking for an easy one-liner to reverse the byte order. Let's say that you know that the last 8 bytes of a file is a long written by the aforementioned `writeLong(...)` method. My current best attempt to print this long is\n\n```\ntail -c 8 file | tac | od -t d8\n```\n\n, but `tac` only seems to work on text (fair enough). I've found some references to `dd conv=swab`, but this only swaps bytes in pairs, and cannot reverse these eight bytes.\n\nDoes anyone know a good one-liner for this?","reasoning":"A better easy one-liner to reverse the byte order is required.
This can be used on data returned by DataOutputStream#writeLong(...), where Java always seems to write big endian.","id":"30","excluded_ids":["N\/A"],"gold_ids_long":["linux_man_1\/linux_man_1.txt"],"gold_ids":["linux_man_1\/linux_man_1_227_11.txt","linux_man_1\/linux_man_1_227_1.txt","linux_man_1\/linux_man_1_227_7.txt","linux_man_1\/linux_man_1_227_12.txt","linux_man_1\/linux_man_1_227_5.txt","linux_man_1\/linux_man_1_227_0.txt","linux_man_1\/linux_man_1_227_8.txt","linux_man_1\/linux_man_1_227_9.txt","linux_man_1\/linux_man_1_227_4.txt","linux_man_1\/linux_man_1_227_6.txt","linux_man_1\/linux_man_1_227_3.txt","linux_man_1\/linux_man_1_227_2.txt","linux_man_1\/linux_man_1_227_10.txt"],"gold_answer":"You could use objcopy:\n\n \n \n $ objcopy -I binary -O binary --reverse-bytes=num inputfile.bin outputfile.bin\n \n\nwhere num is either 2 or 4."} {"query":"How to disable Swagger ui documentation in Fastapi for production server?\n\nHow to disable Swagger ui documentation in Fastapi for production server? I need to disable FastAPI documentation","reasoning":"The programmer wants to disable FastAPI documentation.","id":"31","excluded_ids":["N\/A"],"gold_ids_long":["fastapi_security_advanced_user_guide\/fastapi_security.txt"],"gold_ids":["fastapi_security_advanced_user_guide\/fastapi_security_5_3.txt","fastapi_security_advanced_user_guide\/fastapi_security_5_5.txt","fastapi_security_advanced_user_guide\/fastapi_security_5_4.txt"],"gold_answer":"[ Construct your app with ` docs_url=None ` and ` redoc_url=None `\n](https:\/\/fastapi.tiangolo.com\/tutorial\/metadata\/#docs-urls) and FastAPI won't\nmount those endpoints.\n\n \n \n app = FastAPI(docs_url=None, redoc_url=None)"} {"query":"Django-Forms with json fields\n\nI am looking to accept json data in a form field and then validate it using some database operations. The data will mostly consist of an array of integers.
So can you please help me with how I can do so.\n\nI have tried to google this but didn't get any decent answer. Please help.","reasoning":"A validation using some database operations is required on JSON data in a form field. The JSON data mostly consists of an array of integers.","id":"32","excluded_ids":["N\/A"],"gold_ids_long":["django_CharField\/django_CharField.txt"],"gold_ids":["django_CharField\/django_CharField_26_0.txt"],"gold_answer":"You need to take it as text input using ` CharField ` . And in the clean\nmethod of this field, you can validate it as per your requirement to check if\ninput is valid.\n\nSomething like:\n\n \n \n class myForm(forms.Form):\n jsonfield = forms.CharField(max_length=1024)\n \n def clean_jsonfield(self):\n jdata = self.cleaned_data['jsonfield']\n try:\n json_data = json.loads(jdata) #loads string as json\n #validate json_data\n except:\n raise forms.ValidationError(\"Invalid data in jsonfield\")\n #if json data not valid:\n #raise forms.ValidationError(\"Invalid data in jsonfield\")\n return jdata\n \n\nYou may also find a custom field for JSON data input."} {"query":"Vectorization with multiple rows and columns of dataframe instead of one\n\nCurrently working on building a csv file that includes historical stock info, with not only historical prices but also momentum indicators.
I've successfully added indicators by looping through an entire dataframe (w\/ > 25,000,000 rows), but it takes too long (30 - 36 h).\n\nWhat I'm trying to accomplish: I'd like to start with the 3 day high:\n\n```\nhigh = stock_df.loc[x+1:x+4, \"High\"].max(axis=0)\n```\n\nand divide that by the day 0 low:\n```\nlow = stock_df.loc[x, \"Low\"] \n```\n\nwithout iterating through the entire dataframe as shown below:\n```\nstock_ticker_list = df.Symbol.unique()\nfor ticker in stock_ticker_list:\n # return dataframe that's historical info for one stock\n print(ticker)\n stock_df = df.loc[df.Symbol == ticker]\n start = stock_df.index[stock_df['Symbol'] == ticker][0]\n for x in range(start, start + len(stock_df) - 2):\n try:\n high = stock_df.loc[x+1:x+4, \"High\"].max(axis=0) \n low = stock_df.loc[x, \"Low\"] \n df2.loc[x,\"H\/L\"] = high\/low\n except:\n df2.loc[x,\"H\/L\"] = pd.NA\n```\n\nI've looked through the documentation and found methods like pandas.Series.pct_change and pandas.Series.div but it does not appear as though these functions will work without me also creating a column for the 3 day high. I tried to create a column for the three day high\n\n```\ns = stock_df[\"High\"] \nstock_df['Three_day_high'] = max([s.diff(-1),s.diff(-2),s.diff(-3)]) + s\n```\n\nbut got a ValueError (`ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().`)","reasoning":"By looping through an entire dataframe with greater than 25,000,000 rows, adding indicators takes 30 to 36 hours. Now, the programmer wants to start with the 3 day high and divide that by the day 0 low.
It is desired to find a vectorised method to efficiently calculate the required ratio without using loop and to improve computational performance.","id":"33","excluded_ids":["N\/A"],"gold_ids_long":["pandas_user_guide\/pandas_user_guide.txt"],"gold_ids":["pandas_user_guide\/pandas_user_guide_13_3.txt","pandas_user_guide\/pandas_user_guide_13_6.txt","pandas_user_guide\/pandas_user_guide_13_5.txt","pandas_user_guide\/pandas_user_guide_13_7.txt","pandas_user_guide\/pandas_user_guide_13_2.txt","pandas_user_guide\/pandas_user_guide_13_8.txt","pandas_user_guide\/pandas_user_guide_13_1.txt","pandas_user_guide\/pandas_user_guide_13_4.txt","pandas_user_guide\/pandas_user_guide_13_0.txt"],"gold_answer":"Consider windowing operations: [ https:\/\/pandas.pydata.org\/pandas-\ndocs\/stable\/user_guide\/window.html ](https:\/\/pandas.pydata.org\/pandas-\ndocs\/stable\/user_guide\/window.html)\n\n \n \n import pandas as pd\n import numpy as np\n \n # for testing, have generated a 50 million row dataframe with random numbers in the range 500 to 1500\n l1 = np.round(500 + 1000 * np.random.rand(5000000,1), 0)\n df = pd.DataFrame(l1, columns = [\"Val\"])\n \n # rolling window applied to the Val column\n %timeit df[\"Val\"].rolling(window=3).max() \/ df[\"Val\"] # optional timing function\n df[\"H\"] = df[\"Val\"].rolling(window=3, closed = 'left').max() # optional to show 3 day high\n df[\"HL\"] = df[\"Val\"].rolling(window=3, closed = 'left').max() \/ df[\"Val\"]\n df[:7]\n \n\n1.07 s \u00b1 10.4 ms per loop (mean \u00b1 std. dev. of 7 runs, 1 loop each) [ ![enter\nimage description here](https:\/\/i.sstatic.net\/n3863.png)\n](https:\/\/i.sstatic.net\/n3863.png)"} {"query":"What effect does the `virtual` modifier have on an interface member?\n\nSay I have an interface like interface IThing { virtual string A => \"A\"; }. What effect does the virtual keyword have on implementing types here? 
Surprisingly, I couldn't find anything regarding this on either SO, dotnet GitHub pages and discussions, nor on learn.microsoft.com, possibly buried under all the unrelated content where the words interface and virtual appear in any combination. From my quick testing it doesn't appear to have any sensible effect. My intuition tells me to expect from the implementing classes' hierarchy to have been introduced the given virtual members, as if the base implementing class has declared them itself, but it's not the case.\n\nFor example:\n```\nclass Thing: IThing\n{ \n} \n \nclass Thing2: Thing\n{\n override public string A => \"!\"; \/\/ ERROR: No suitable method found to override.\n}\n```\n\nIf I declare the A property on Thing, but do not declare it explicitly as virtual, it still won't compile. I also have the freedom apparently to just define them w\/o the modifier despite it being present in the interface and it compiles. And code using IThing only sees the default implementation of A regardless of what I do unless the class implements the interface directly and not through inheritance in this setup.\n\nCan somebody clarify the use of the virtual modifier on interface members? 
I use the latest stable language version of C#.","reasoning":"The programmer wants to know clearly about the use of the virtual modifier on interface members, and he\/she uses the latest stable language version of C#.","id":"34","excluded_ids":["N\/A"],"gold_ids_long":["Microsoft_interfaces_attributes\/default_interface_methods.txt"],"gold_ids":["Microsoft_interfaces_attributes\/default_interface_methods_14_0.txt"],"gold_answer":"The following:\n\n \n \n interface IThing { virtual string A => \"A\"; }\n \n\nIs the so-called [ default interface method ](https:\/\/learn.microsoft.com\/en-\nus\/dotnet\/csharp\/language-reference\/proposals\/csharp-8.0\/default-interface-\nmethods) and virtual as far as I can see actually does nothing because:\n\n> The ` virtual ` modifier may be used on a function member that would\n> otherwise be implicitly ` virtual `\n\nI.e. it is just an explicit declaration of the fact that an **interface** method is\nvirtual since the language team decided to not restrict such things.\n\nSee also the [ Virtual Modifier vs Sealed Modifier\n](https:\/\/learn.microsoft.com\/en-us\/dotnet\/csharp\/language-\nreference\/proposals\/csharp-8.0\/default-interface-methods#virtual-modifier-vs-\nsealed-modifier) part of the spec:\n\n> Decisions: Made in the LDM 2017-04-05:\n>\n> * non- ` virtual ` should be explicitly expressed through ` sealed ` or `\n> private ` .\n> * ` sealed ` is the keyword to make interface instance members with bodies\n> non- ` virtual `\n> * We want to allow all modifiers in interfaces\n> * ...\n>\n\nNote that default implemented ` IThing.A ` is not part of the ` Thing ` ,\nhence you can't do ` new Thing().A ` , you need to cast to the interface\nfirst.\n\nIf you want to override the ` IThing.A ` in the ` Thing2 ` then you can\nimplement the interface directly:\n\n \n \n class Thing2 : Thing, IThing\n {\n public string A => \"!\"; \n }\n \n Console.WriteLine(((IThing)new Thing()).A); \/\/ Prints \"A\"\n Console.WriteLine(((IThing)new 
Thing2()).A); \/\/ Prints \"!\"\n \n\nAnother way would be declaring ` public virtual string A ` in the ` Thing ` so\nyour current code for ` Thing2 ` works:\n\n \n \n class Thing : IThing\n {\n public virtual string A => \"A\";\n } \n \n class Thing2 : Thing\n {\n public override string A => \"!\"; \n }\n \n\nTo understand the meaning of ` sealed ` \/ ` virtual ` identifiers in interfaces\nyou can create a second interface:\n\n \n \n interface IThing { string A => \"A\"; }\n interface IThing2 : IThing { string IThing.A => \"B\"; }\n class Thing : IThing2 {}\n \n Console.WriteLine(((IThing)new Thing()).A); \/\/ Prints \"B\"\n \n\nWhile if you declare ` IThing.A ` as ` sealed ` then ` IThing2 ` will not\ncompile:\n\n \n \n interface IThing { sealed string A => \"A\"; }\n \n interface IThing2 : IThing\n {\n \/\/ Does not compile:\n \/\/ string IThing.A => \"B\";\n }\n \n class AnotherThing : IThing\n {\n \/\/ Does not compile either:\n string IThing.A => \"B\";\n }\n \n\nAlso check out the [ why virtual is allowed while implementing the interface\nmethods? ](https:\/\/stackoverflow.com\/q\/4470446\/2501279) linked by [ wohlstad\n](https:\/\/stackoverflow.com\/users\/18519921\/wohlstad) in the comments."} {"query":"Child component not re-rendering after parent component gets updated\n\nI have 1 parent(Container) and 2 (List, SelectedItem) child components. Initially I render all data to Container and send all data to list and first item as selectedItem to selected Item component. 
So far so good.\n\nWhen a user clicks an item in the List component, it updates the selected item of the Parent through a function. The parent is able to update its state, but it is not re-rendering the selected item component.\n\nList Component:\n```\n import \".\/SelectedImage.css\"\n\nfunction ListSection({items, updateSelectedItem}) {\n return (\n <div className=\"list-section\">\n <ul>\n {\n (items).map((item) => {\n return <li key={item.id} onClick={() => updateSelectedItem(item.id)}>{item.name}<\/li>\n })\n }\n <\/ul>\n <\/div>\n );\n}\n\nexport default ListSection;\n```\n\nContainer Component\n```\nimport LeftSection from '.\/LeftSection';\nimport logo from '.\/logo.svg';\nimport RightSection from '.\/RightSection ';\nimport SelectedItem from '.\/SelectedItem';\nimport \".\/App.css\";\nimport ListSection from '.\/ListSection';\nimport { Component, useState } from 'react';\n\nconst products = [\n {id: 1, name: \"Lime\", size: [\"large\", \"medium\", \"small\"], category: \"Juice\", image: \"lime-juice.jpg\"},\n {id: 2, name: \"Orange\", size: [\"large\", \"medium\", \"small\"], category: \"Juice\", image: \"orange-juice.jpg\"},\n {id: 3, name: \"Mango\", size: [\"large\", \"medium\", \"small\"], category: \"Juice\", image: \"mango-juice.jpg\"},\n]\n\nfunction Container() {\n let [selectItem, setSelectItem] = useState(products[0]);\n\n function chooseSelectedItem(id) {\n let item = products.filter((value) => {\n console.log(value.id);\n return (value.id == id)\n })\n setSelectItem(item[0]);\n console.log(item[0]);\n }\n\n return (\n <div className=\"Container\">\n <ListSection items={products} updateSelectedItem={chooseSelectedItem}\/>\n <SelectedItem CurrentSelectedItem={selectItem}\/>\n <\/div>\n );\n}\n\nexport default Container;\n```\n\nSelectedItemComponent\n\n```\nimport QuantityBar from \".\/QuantityBar\";\nimport \".\/SelectedImage.css\"\nimport { useState } from \"react\";\n\nfunction SelectedItem({CurrentSelectedItem}) {\n let [item, setItem] = 
useState(CurrentSelectedItem); \n\n return (\n <div className=\"selected-item\">\n {\/* <img src={item.image} className=\"selected-image\" \/> *\/}\n {item.name}\n {\/* <QuantityBar itemCount={0} \/> *\/}\n <\/div>\n );\n}\n\nexport default SelectedItem;\n```","reasoning":"The state of the parent (Container) can be updated through a function, however the selected item component is not re-rendered. Another function is needed to solve the update of selected item component.","id":"35","excluded_ids":["N\/A"],"gold_ids_long":["react_hooks_components\/react_hooks.txt"],"gold_ids":["react_hooks_components\/react_hooks_16_10.txt","react_hooks_components\/react_hooks_16_3.txt","react_hooks_components\/react_hooks_16_11.txt","react_hooks_components\/react_hooks_16_9.txt","react_hooks_components\/react_hooks_16_7.txt","react_hooks_components\/react_hooks_16_14.txt","react_hooks_components\/react_hooks_16_4.txt","react_hooks_components\/react_hooks_16_13.txt","react_hooks_components\/react_hooks_16_5.txt","react_hooks_components\/react_hooks_16_12.txt","react_hooks_components\/react_hooks_16_8.txt","react_hooks_components\/react_hooks_16_6.txt","react_hooks_components\/react_hooks_16_2.txt"],"gold_answer":"Since you're initializing the state using ` useState ` with the prop `\nCurrentSelectedItem ` , any changes to ` CurrentSelectedItem ` won't be\nreflected in the state of the component.\n\nYou can use the ` useEffect ` hook to update the state of ` SelectedItem `\nwhenever ` CurrentSelectedItem ` changes:\n\n \n \n function SelectedItem({ CurrentSelectedItem }) {\n const [item, setItem] = useState(CurrentSelectedItem);\n \n useEffect(() => {\n setItem(CurrentSelectedItem);\n }, [CurrentSelectedItem]);\n \n ...\n }\n \n\nOr if ` SelectedItem ` is solely dependent on ` CurrentSelectedItem ` , you\ncan remove the local state in ` SelectedItem ` and directly use `\nCurrentSelectedItem ` as a prop, like\n\n \n \n function SelectedItem({ CurrentSelectedItem }) {\n return (\n <div 
className=\"selected-item\">\n {\/* <img src={CurrentSelectedItem.image} className=\"selected-image\" \/> *\/}\n {CurrentSelectedItem.name}\n {\/* <QuantityBar itemCount={0} \/> *\/}\n <\/div>\n );\n }"} {"query":"How to show python\/openCV outcome images on ssh-client, not in ssh server?\n\nI am connected to a ssh server using ssh -Y username@adress. On the server I run python2.7 using IDLE. If I use matplotlib I can see the outcome graphs on client. This suggests the graphical forwarding has no problem. However, when I am using OpenCV:\n\n```\ncv2.imshow('img_final', img_final)\ncv2.waitKey(0)\ncv2.destroyAllWindows()\n```\n\nIt opens and show the image in the ssh server screen, not in the client ssh computer.\n\nI did search and research, and in response of typical trobleshooting: - On my computer running client-ssh, echo $DISPLAY responds :0. It runs xterm. -On my server ssh computer, My sshd_config file seems to be ok (X11Forwarding yes). echo $DISPLAY shows localhost:10.0.\n\nMoreover, I can use imageviewer such as 'feh' and shows images on client without any problem.\n\nI do not think I have a configuration problem, because server is able to display graphics on client.\n\nIs there a way to execute python scripts on server, and show outcome images from OpenCV directly on client (as MAtplotlib does) ?\n\nThanks","reasoning":"When connecting to a ssh server and using matplotlib library, it is able to show the outcome graphs on client, which means the graphical forwarding works well. However, when trying to OpenCV library, the outcomes graphs are shown on server instead. 
Another method related to OpenCV is needed.","id":"36","excluded_ids":["N\/A"],"gold_ids_long":["picamera_basic_advanced_recipes\/picamera_advanced_recipes.txt"],"gold_ids":["picamera_basic_advanced_recipes\/picamera_advanced_recipes_7_2.txt","picamera_basic_advanced_recipes\/picamera_advanced_recipes_7_1.txt","picamera_basic_advanced_recipes\/picamera_advanced_recipes_7_0.txt"],"gold_answer":"If it is of use to you, you could do something like a continuous capture but\nstoring each frame as an image:\n\n \n \n def deferred_init(self):\n self.total_frames = 200\n for i in range(self.total_frames):\n self.stream = self.camera.capture_sequence(\n ['image%02d.jpg' % i]\n )\n return self.stream\n \n\n(This snippet was in an object, hence the self.x) This is assuming you have\ncamera = PiCamera and the usual initialization as well. Anyway, once called a)\nyour working dir will be cluttered (recommend writing to a different\nfolder) and b) you can view your images from ssh with whatever you prefer.\nThis way tests that a capture is working but also allows you to view from ssh."} {"query":"Selenium code working fine on main laptop, but not on my other one\n\nI wrote some code in Pycharm last year, to loop through VAT numbers entered into the Gov website, to make sure they were still valid. It still works fine on the original laptop, but not on my other laptop, even though the code is exactly the same (the only adjustment was for the location of the spreadsheet). I have given my code down below. 
When I try to run it on my other laptop, it comes up with the following error.\n\n```\nTraceback (most recent call last): File \"C:\\Users\\neils\\Documents\\Pycharm Projects\\VATChecker\\VATChecker.py\", line 30, in VAT = web.find_element_by_xpath('\/\/*[@id=\"target\"]') ^^^^^^^^^^^^^^^^^^^^^^^^^ AttributeError: 'WebDriver' object has no attribute 'find_element_by_xpath'\n\nProcess finished with exit code 1\n```\n\nOne thing I also notice, is that the import sys and import datetime are both greyed out. I guess that's because it crashed before these imports were used?\n\nI should mention, the version of Chrome is the same for both laptops (version 123), and I have the same chromedriver installed for both. Both laptops are windows 64 bit, in case you're thinking that might be the issue.\n\nAre you able to advise me what the problem is please? Many thanks in advance.\n\n```\nimport datetime\nimport sys\n\nfrom selenium import webdriver\nfrom openpyxl import workbook, load_workbook\nfrom datetime import date\n\nweb = webdriver.Chrome()\n\nwb = load_workbook('C:\\\\Users\\\\neils\\\\Documents\\\\NSO\\\\Self Billing agreements\\\\VATMusiciansCheckerUK.xlsx', data_only=True, read_only=False)\n\nws = wb.active\nx=2\ny=1\ncurrent_datetime = datetime.datetime.now()\ncurrent_datetime.strftime('%x %X')\n\ninvalid = \"\"\n\nwhile ws.cell(x,1).value !=None:\n\n ws.cell(x,9).value = \"\"\n\n web.get(\"https:\/\/www.tax.service.gov.uk\/check-vat-number\/enter-vat-details\")\n\n web.implicitly_wait(10)\n\n VatNumber = ws.cell(x,4).value\n\n VAT = web.find_element_by_xpath('\/\/*[@id=\"target\"]')\n VAT.send_keys(VatNumber)\n\n VAT.submit()\n\n web.implicitly_wait(4)\n\n registered = web.find_element_by_xpath('\/html\/body\/div[2]')\n if (registered.text.find(\"Invalid\")) > 0:\n ws.cell(x,9).value = \"Invalid VAT number\"\n invalid = invalid + str(y) + \" \" + ws.cell(x,1).value + \" \" + ws.cell(x,2).value + \", \"\n y=y+1\n else:\n ws.cell(x,9).value = \"Valid VAT number\"\n\n 
ws.cell(x,6).value = current_datetime\n\n x=x+1\n\nif invalid == \"\":\n\n print(\"All VAT records are correct\")\nelse:\n print(\"Invalid VAT records are \" + invalid)\n\n\nwb.save('C:\\\\Users\\\\neils\\\\Documents\\\\NSO\\\\Self Billing agreements\\\\VATMusiciansCheckerUK.xlsx')\n```","reasoning":"The version of Chrome is the same for both laptops (version 123); however, the new laptop cannot run the same code that ran on the original computer. This seems to be a problem with the line of code mentioned in the Traceback.","id":"37","excluded_ids":["N\/A"],"gold_ids_long":["selenium_locating_strategies_basics\/locating_strategies.txt"],"gold_ids":["selenium_locating_strategies_basics\/locating_strategies_0_22.txt","selenium_locating_strategies_basics\/locating_strategies_0_23.txt"],"gold_answer":"Use\n\n \n \n web.find_element(By.XPATH, \"xpath\")\n \n\nInstead of\n\n \n \n web.find_element_by_xpath(\"xpath\")\n \n\nDon't forget to add\n\n \n \n from selenium.webdriver.common.by import By\n \n\nFor more reference refer to [ this\n](https:\/\/www.geeksforgeeks.org\/find_element_by_xpath-driver-method-selenium-\npython\/) ."} {"query":"Getting a list of occurrences for a given vCalendar\n\nI'm trying to use vobject, without success, to get a list of datetime objects for all event occurrences (see the RRULE parameter that sets the event to be repeated daily until a date) for a given vCalendar, which seems to be well-formatted (apparently):\n\n```\nBEGIN:VCALENDAR\nCALSCALE:GREGORIAN\nPRODID:iCalendar-Ruby\nVERSION:2.0\nBEGIN:VEVENT\nDTEND:20110325T200000\nDTSTAMP:20110926T135132\nDTSTART:20110325T080000\nEXDATE:\nRRULE:FREQ=DAILY;UNTIL=20110331;INTERVAL=1\nSEQUENCE:0\nUID:2011-09-26T13:51:32+02:00_944954531@cultura0306.gnuine.com\nEND:VEVENT\nEND:VCALENDAR\n```\n\nDocumentation is not really friendly, and google results are scarce...Any idea or example? 
(the example can be for vobject or any other library or method).\n\nThanks!\n\nH.","reasoning":"The programmer failed to use \"vobject\" to get a list of datetime objects for all event occurences. The format of VCALENDAR object is not strictly valid.","id":"38","excluded_ids":["N\/A"],"gold_ids_long":["iCalendar_3\/iCalendar.txt"],"gold_ids":["iCalendar_3\/iCalendar_16_1.txt"],"gold_answer":"The VCALENDAR object, as supplied, is not strictly valid: the EXDATE property\nmust have one or more date-time or date values (see [ 3.8.5.1. Exception Date-\nTimes ](http:\/\/icalendar.org\/iCalendar-RFC-5545\/3-8-5-1-exception-date-\ntimes.html) ).\n\nDespite that, _vobject_ parses the data successfully, ending up with an empty\nlist of exceptions in the VEVENT object.\n\nTo get the list of occurrences, try:\n\n \n \n >>> s = \"BEGIN:VCALENDAR ... END:VCALENDAR\"\n >>> ical = vobject.readOne(s)\n >>> rrule_set = ical.vevent.getrruleset()\n >>> print(list(rrule_set))\n [datetime.datetime(2011, 3, 25, 8, 0), datetime.datetime(2011, 3, 26, 8, 0), datetime.datetime(2011, 3, 27, 8, 0), datetime.datetime(2011, 3, 28, 8, 0), datetime.datetime(2011, 3, 29, 8, 0), datetime.datetime(2011, 3, 30, 8, 0)]\n >>>\n \n\nIf we add a valid EXDATE property value, like\n\n \n \n EXDATE:20110327T080000\n \n\nre-parse the string, and examine the RRULE set again, we get:\n\n \n \n >>> list(ical.vevent.getrruleset())\n [datetime.datetime(2011, 3, 25, 8, 0), datetime.datetime(2011, 3, 26, 8, 0), datetime.datetime(2011, 3, 28, 8, 0), datetime.datetime(2011, 3, 29, 8, 0), datetime.datetime(2011, 3, 30, 8, 0)]\n >>> \n \n\nwhich is correctly missing the 27th March, as requested."} {"query":"Issue with Passing Retrieved Documents to Large Language Model in RetrievalQA Chain\n\nI'm currently enrolled in a course on Coursera where I'm learning to implement a retrieval-based question-answering (RetrievalQA) system in Python. 
The course provides code that utilizes the RetrievalQA.from_chain_type() method to create a RetrievalQA chain with both a large language model (LLM) and a vector retriever.\n\nUpon reviewing the provided code, it's evident that relevant documents are retrieved from the vector store using vectordb.similarity_search(). However, there doesn't appear to be a clear step for explicitly passing these retrieved documents to the LLM for question-answering within the RetrievalQA chain.\n\nMy understanding is that in a typical RetrievalQA process, relevant documents retrieved from the vector store are subsequently passed to the LLM. This ensures that the LLM can utilize the retrieved information to generate accurate responses to user queries.\n\nI'm seeking clarification on the proper methodology for integrating the retrieved documents into the RetrievalQA chain to ensure effective utilization by the LLM. Any insights, suggestions, or code examples on how to achieve this integration would be greatly appreciated. 
Thank you for your assistance!\n\n```\nfrom langchain.vectorstores import Chroma\nfrom langchain.embeddings.openai import OpenAIEmbeddings\npersist_directory = 'docs\/chroma\/'\nembedding = OpenAIEmbeddings()\nvectordb = Chroma(persist_directory=persist_directory, embedding_function=embedding)\nquestion = \"What are major topics for this class?\"\ndocs = vectordb.similarity_search(question,k=3)\nlen(docs)\nfrom langchain.chat_models import ChatOpenAI\nllm = ChatOpenAI(model_name=llm_name, temperature=0)\nfrom langchain.chains import RetrievalQA\n\nqa_chain = RetrievalQA.from_chain_type(\n llm,\n retriever=vectordb.as_retriever()\n)\nresult = qa_chain({\"query\": question})\nresult[\"result\"]\n```","reasoning":"Any insights, suggestions, or code examples on how to integrate the retrieved documents into the RetrievalQA chain are welcome.","id":"39","excluded_ids":["N\/A"],"gold_ids_long":["langchain\/LangChain.txt"],"gold_ids":["langchain\/LangChain_34_2.txt"],"gold_answer":"You are passing the documents using the retriever. You need to build the\nprompt using the prompt template. Following is a sample of how you can use the\ndata yourself\n\n \n \n query = \"\"\"Use the given below context to answer the user query. 
\n {context}\n Question: What are major topics for this class?\"\"\"\n \n QA_CHAIN_PROMPT = PromptTemplate.from_template(query)\n qa_chain = RetrievalQA.from_chain_type(\n llm,\n retriever=vectordb.as_retriever(),\n return_source_documents=True,\n chain_type_kwargs={\"prompt\": QA_CHAIN_PROMPT}\n \n\n)\n\nYou can read more about [ langchain retreivers\n](https:\/\/python.langchain.com\/docs\/modules\/data_connection\/retrievers\/) here.\n\nYou should also read the [ Langchain QA private data\n](https:\/\/medium.com\/@onkarmishra\/using-langchain-for-question-answering-on-\nown-data-3af0a82789ed) article as it uses retrieval prompt with prompt."} {"query":"How i can send TOO MANY requests to DEFFERENT sites and get responses?\n\nHow can I send a lot of requests to DIFFERENT sites, I have a database of sites (1kk) and need to check whether they are alive or not, conditionally if you just do it through grequests(python) chunks of software (100 requests in 10 threads ~128second) it will take 12.5 days, but for me it's too long and I I am sure that this can be done much faster. Can you tell me what I can use in this case? I'm just collecting information about the main page of the sites.\r\n\r\nHere is my code, I want to improve it somehow, what can you recommend? I tried to throw every request into the stream, but it feels like something is blocking it, I will use a proxy so that my IP is not blocked due to more requests Help who can!\n\n```\ndef start_parting(urls:list,chunk_num,chunks):\r\n if len(urls)>0:\r\n chunk_num+=1\r\n print(f'Chunck [{Fore.CYAN}{chunk_num}\/{chunks}{Style.RESET_ALL}] started! 
Length: {len(urls)}')\r\n headers = {\r\n \"User-Agent\": \"Mozilla\/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit\/537.36 (KHTML, like Gecko) Chrome\/123.0.0.0 Safari\/537.36\"}\r\n rs = [grequests.get(url.split(' ')[0].strip(), headers=headers,timeout=10) for url in urls]\r\n responses = grequests.map(rs)\r\n for response in responses:\r\n if response is None:\r\n continue\r\n if response.status_code == 200:\r\n check_pattern = r'(pat1|pat2)'\r\n match = re.search(check_pattern, response.text, re.IGNORECASE)\r\n if match:\r\n site = match.group(1)\r\n print(f'Site {site}')\r\n print(f'Chunck [{Fore.LIGHTCYAN_EX}{chunk_num}\/{chunks}{Style.RESET_ALL}] ended!')\r\n\r\ndef test_sites_for_file(file,num_threads = 10,chunk_size=100):\r\n print('Start check!')\r\n urls = file.readlines()\r\n with ThreadPoolExecutor(max_workers=num_threads) as executor:\r\n parts = [urls[i:i + chunk_size] for i in range(0, len(urls), chunk_size)]\r\n finals = [executor.submit(start_parting, part , part_num,len(parts)) for part_num,part in enumerate(parts)]\r\n t = time.time()\r\n for final in as_completed(finals):\r\n pass\r\n print(f'Resultate: {time.time()-t}')\n```","reasoning":"The existing code about grequest takes at least 10 days to collect the information about the main pages of the sites. The speed is that 100 requests in 10 threas requires about 128 seconds. A much faster grequest method is required.","id":"40","excluded_ids":["N\/A"],"gold_ids_long":["gorequest\/gorequest_readme.txt"],"gold_ids":["gorequest\/gorequest_readme_27_0.txt"],"gold_answer":"You can try switching to HTTP HEAD (via grequests.head) instead of GET. That, in principle, should be faster, because the bodies of these pages, which you're going to ignore anyway, won't have to be transmitted. Other than that, I don't think there's much you can do to speed this up purely in software, if you've already parallelized it. 
Making a huge amount of HTTP requests takes time."} {"query":"How can I observe the attribute lookup chain in Python?\n\nI am trying to delve into Python's class descriptors, and have come across an article explaining the lookup chain of class attributes. Namely, it is said that the first priority is the __get__ method of the data descriptor (that is, a class that implements __set__ or __delete__ method. Then comes the dict of the object followed by get method of non-data descriptor.\n\nThis seems extremely confusing to me, and I am hopeless to find a real example demonstrating how, say, data descriptors come before dict lookup of an instance. I would really appreciate a minimum example with prints that illustrates this theoretical stuff.","reasoning":"The first priority is the `__get__` method of the data descriptor. Then comes the `dict` of the object followed by `get` method of non-data descriptor. An example is needed to explain why data descriptors come before dict lookup of an instance.","id":"41","excluded_ids":["N\/A"],"gold_ids_long":["python_descriptors_attribute\/Peter_Lamut_blog.txt","python_descriptors_attribute\/real_python.txt"],"gold_ids":["python_descriptors_attribute\/real_python_0_7.txt","python_descriptors_attribute\/Peter_Lamut_blog_8_1.txt","python_descriptors_attribute\/real_python_0_5.txt","python_descriptors_attribute\/real_python_0_8.txt","python_descriptors_attribute\/real_python_0_4.txt","python_descriptors_attribute\/real_python_0_3.txt","python_descriptors_attribute\/real_python_0_6.txt","python_descriptors_attribute\/Peter_Lamut_blog_8_0.txt","python_descriptors_attribute\/real_python_0_1.txt","python_descriptors_attribute\/real_python_0_2.txt","python_descriptors_attribute\/Peter_Lamut_blog_8_2.txt"],"gold_answer":"For a summary from [ Real Python ](https:\/\/realpython.com\/python-descriptors\/)\n:\n\n> * First, you'll get the result returned from the ` __get__ ` method of the\n> ` data descriptor ` named after the attribute you're 
looking for.\n> * If that fails, then you'll get the value of your object's ` __dict__ `\n> for the key named after the attribute your're looking for.\n> * If that fails, then you'll get the result returned from the ` __get__ `\n> method of the ` non-data descriptor ` named after the attribute you're\n> looking for.\n> * If that fails, then you'll get the value of your object type's `\n> __dict__ ` for the key named after the attribute you're looking for.\n> * If that fails, then you'll get the value of your object parent type's `\n> __dict__ ` for the key named after the attribute you're looking for.\n> * If that fails, then the previous steps is repeated for all the parent's\n> types in the ` method resolution order ` of your object.\n> * If everything else has failed, then you'll get an ` AttributeError `\n> exception.\n>\n\nAnother summary from [ Peter Lamut's blog\n](https:\/\/blog.peterlamut.com\/2018\/11\/04\/python-attribute-lookup-explained-in-\ndetail\/) :\n\n> * Check the class hierarchy using MRO (Method Resolution Order), but do\n> not examine metaclasses:\n> * If a data (overriding) descriptor is found in the class hierarchy,\n> call its ` __get__() ` method;\n> * Otherwise, check the ` instance.__dict__ ` (assuming no ` __slots__ `\n> for the sake of example). 
If an attribute is there, return it;\n> * If attribute not in ` instance.__dict__ ` but found in the class\n> hierarchy:\n> * If non-data descriptor, calll its ` __get__() ` method;\n> * If not a descriptor, return the attribute itself;\n> * If still not found, invoke ` __getattr__() ` , if implemented on a\n> class;\n> * Finally give up and raise ` AttributeError ` .\n>"} {"query":"EF Core ignore column when requesting a list, but include when getting by Id\n\nI have a list of invoices, which have generated files which are stored in the database.\n\nI want to avoid loading the file data itself when retrieving a list of the invoices, but load it when the user clicks the download button for it.\n\nIs there a way to achieve this? I saw the ignore for fluent configuration but it ignores it all the time.\n\nExample code:\n\nClass:\n```\npublic class InvoiceFile\n{\n public Guid Id { get; set; }\n public string FileName { get; set; }\n public byte[] File { get; set; }\n\n public Guid InvoiceId { get; set; }\n public Invoice Invoice { get; set; }\n}\n```\n\nI want the File property to be ignored in the first query, but loaded in the second:\n\n```\nvar invoices = await _payoutDbContext.Invoices\n .Include(x => x.InvoiceFiles)\n .AsNoTracking()\n .AsSplitQuery()\n .ToListAsync();\nvar file - await _payoutDbContext.InvoiceFiles\n .Where(x => x.Id == invoiceId)\n .AsNoTracking()\n .FirstOrDefaultAsync();\n```","reasoning":"Programmer want the data to be loaded when the user clicks the download button for it. However, the ignore for fluent configuration ignores it all the time. 
Another custom is required since the programmer does not use tracking.","id":"42","excluded_ids":["N\/A"],"gold_ids_long":["Entity_Framework_Core\/Entity_Framework_Core.txt"],"gold_ids":["Entity_Framework_Core\/Entity_Framework_Core_0_0.txt","Entity_Framework_Core\/Entity_Framework_Core_0_2.txt","Entity_Framework_Core\/Entity_Framework_Core_0_1.txt"],"gold_answer":"You cannot control what to load and what do not without ` Select ` , full\nentity is loaded by default. But you can configure your entity using [ Table\nsplitting ](https:\/\/learn.microsoft.com\/en-us\/ef\/core\/modeling\/table-\nsplitting#table-splitting)\n\nWith ` Select ` it is easy and preferrable, but I recommend to use special DTO\nclassses for that:\n\n \n \n var invoices = await _payoutDbContext.Invoices\n .Select(x => new InvoiceDto\n {\n Id = x.Id\n \/\/ ... other properties\n \n InvoiceFiles = x.InvoiceFiles.Select(f => new InvoiceFileDto\n {\n Id = f.Id,\n FileName = f.FileName\n }).ToList()\n })\n .AsSplitQuery()\n .ToListAsync();"} {"query":"Static data member of template class type: constexpr vs. 
const constinit\n\nI have a class:\n\n```\n#include <array>\n\ntemplate<class T, std::size_t N>\n requires std::is_arithmetic_v<T> && (N >= 1)\nclass Vector\n{\n static constexpr std::size_t Dimension = N;\n std::array<T, Dimension> Elements;\n\npublic:\n constexpr Vector() noexcept : Elements{} {}\n constexpr ~Vector() = default;\n static constexpr Vector ZeroVector{};\n};\n\nint main()\n{\n Vector<float, 7> boo = Vector<float, 7>::ZeroVector;\n}\n```\n\nThe above fails to compile on compiler explorer with MSVC and Clang (trunk) with C++23 compiler flags, but it compiles on GCC (trunk) with C++23 compiler flags.\n\nClang gives the following error:\n\n```\n<source>:13:29: error: constexpr variable cannot have non-literal type 'const Vector<float, 7>'\n 13 | static constexpr Vector ZeroVector{};\n | ^\n<source>:18:28: note: in instantiation of template class 'Vector<float, 7>' requested here\n 18 | Vector<float, 7> boo = Vector<float, 7>::ZeroVector;\n | ^\n<source>:13:29: note: incomplete type 'const Vector<float, 7>' is not a literal type\n 13 | static constexpr Vector ZeroVector{};\n | ^\n<source>:5:7: note: definition of 'Vector<float, 7>' is not complete until the closing '}'\n 5 | class Vector\n | ^\n1 error generated.\nCompiler returned: 1\n```\n\nMSVC gives the following error:\n```\n<source>(13): error C2027: use of undefined type 'Vector<float,7>'\n<source>(5): note: see declaration of 'Vector<float,7>'\n<source>(13): note: the template instantiation context (the oldest one first) is\n<source>(18): note: see reference to class template instantiation 'Vector<float,7>' being compiled\nCompiler returned: 2\n```\n\nWhen I change constexpr to const constinit for ZeroVector, this compiles on all three major compilers when the definition is moved outside the class like so:\n\n```\ntemplate<class T, size_t N> requires std::is_arithmetic_v<T> && (N >= 1)\nconst constinit Vector<T, N> Vector<T, N>::ZeroVector{};\n```\n\nSo why does constexpr compile only on GCC and 
const constinit compiles on all three major compilers?","reasoning":"A C++ class template, Vector, includes a constexpr static member, ZeroVector. While this code compiles successfully with the GCC compiler as C++23, it fails to compile with MSVC and Clang compilers. The compilation errors indicate that the constexpr variable has a non-literal type. By replacing constexpr with const constinit and moving the definition of ZeroVector outside the class, the code can be compiled successfully across major compilers. Others also encountered the same bug before. Reason(s) is (are) required.","id":"43","excluded_ids":["N\/A"],"gold_ids_long":["C++_Classes_Templates\/gcc_bugs.txt","C++_Classes_Templates\/C++_Classes.txt"],"gold_ids":["C++_Classes_Templates\/C++_Classes_7_3.txt","C++_Classes_Templates\/C++_Classes_7_4.txt","C++_Classes_Templates\/gcc_bugs_0_0.txt","C++_Classes_Templates\/C++_Classes_7_5.txt","C++_Classes_Templates\/gcc_bugs_0_1.txt"],"gold_answer":"This is an [ old gcc bug\n](https:\/\/gcc.gnu.org\/bugzilla\/show_bug.cgi?id=107945) and the program is\n**ill-formed** .\n\nWhen using ` constexpr ` or ` inline ` with the declaration of a static data\nmember, the type must be a **complete type** . But ` Vector<T,N> ` is not\ncomplete at the point of the static data member's definition\/declaration.\n\nFrom [ static data member ](https:\/\/en.cppreference.com\/w\/cpp\/language\/static)\n:\n\n> However, if the declaration uses constexpr or inline (since C++17)\n> specifier, **the member must be declared to have complete type.**\n\n* * *\n\nHere is the old gcc bug report:\n\n[ static constexpr incomplete (depedent) data member of a class template and\nin-member initialized incorrectly accepted\n](https:\/\/gcc.gnu.org\/bugzilla\/show_bug.cgi?id=107945)"} {"query":"What is \"Verbose_name\" and \"Ordering\" in class Meta? 
And please explain a little about Meta Class in django\n\n```\nfrom django.db import models\r\nimport uuid\r\nclass Book(models.Model):\r\n name=models.CharField(max_length=100)\r\n isbn=models.UUIDField(default=uuid.uuid4, \r\n primary_key=True)\r\n writer=models.CharField(max_length=100)\r\n \r\n class Meta:\r\n ordering=['name']\r\n ordering='User MetaData\n```","reasoning":"The programmer want to know clearly about the \"verbose_name\" and \"ordering\" in class Meta, and the introduction about Meta Class as well. An article and\/or code is required to have a clear explanation.","id":"44","excluded_ids":["N\/A"],"gold_ids_long":["Django_meta\/django_meta.txt","Django_meta\/django_models.txt"],"gold_ids":["Django_meta\/django_meta_24_0.txt","Django_meta\/django_models_5_0.txt","Django_meta\/django_meta_13_0.txt"],"gold_answer":"With class Meta you can give to your model metata such as database table name\nor ordering options. You can check it out in [ documentation\n](https:\/\/docs.djangoproject.com\/en\/3.2\/topics\/db\/models\/#meta-options)\n\nBy using \"verbose_name\" in class Meta you can specify human-readable name for\nsingular object. [ Documentation\n](https:\/\/docs.djangoproject.com\/en\/4.0\/ref\/models\/options\/#django.db.models.Options.verbose_name)\n\nBy using 'ordering' in class Meta you can specify in which order you will get\nlist of objects. By ['-field_name'] you specify descending order. By\n['field_name'] you specify ascending order. You can make ordering by several\nfields: ordering = ['field1', 'field2']. [ Documentation\n](https:\/\/docs.djangoproject.com\/en\/4.0\/ref\/models\/options\/#django.db.models.Options.ordering)"} {"query":"Next.js (react) to format date and time using moment or not\n\nwhat is best for my next front-end app to handle date format using Moment.js or JS functions? 
Data come from the backend in the date type and I want it to appear in a readable format according to performance and loading time.\n\n```\nMon Apr 01 2024 14:47:56 GMT+0300 (GMT+03:00) \n```\nThis must format to:\n```\n01\/04\/2024 02:48 pm\n```","reasoning":"The best method related to Moment.js is required to handle date format for a front-end app.","id":"45","excluded_ids":["N\/A"],"gold_ids_long":["Moment_js_Documentation\/Moment_js_Documentation.txt"],"gold_ids":["Moment_js_Documentation\/Moment_js_Documentation_71_1.txt","Moment_js_Documentation\/Moment_js_Documentation_71_0.txt"],"gold_answer":"**Problem :**\n\n> _Format date using library or manually using Javascript_\n\n**Possible Solution :**\n\n * If you need date formatting just once or twice in few pages, (in your whole project) then do it manually. \n\n * If there are **multiple formats needed or logic** needed to convert date is **getting complex then use library** which will save time & it will also reduce code to almost one line (depending on conditions). \n\n * Also size of library is small & offers many functionality. \n\n * You may further also look at functions provided by library which will help you deciding. \n\n_Small Example to demonstrate ease of use (make necessary changes wherever\nrequired):_\n\n \n \n let From = new Date(\"Mon Apr 01 2024 14:47:56 GMT+0300 (GMT+03:00)\");\n console.log(\"With Library :\")\n console.log(moment(From).format(\"DD\/MM\/YYYY hh:mm a\"))\n \/\/ LIBRARY HANDLES ACCORDING TO SPECIFIED FORMAT\n \n console.log(\"\\n\")\n \n console.log(\"Without Library :\")\n console.log(`${From.getDate()}\/${From.getMonth()+1}\/${From.getFullYear()} ${From.getHours() % 12 || 12}:${From.getMinutes()} ${From.getHours() > 12 ? \"pm\" : \"am\"}`)\n \n \/\/ HERE I HAD TO HANDLE 12HR. 
FORMAT & ALSO AM PM\n \n \n <script src=\"https:\/\/cdnjs.cloudflare.com\/ajax\/libs\/moment.js\/2.30.1\/moment.min.js\"><\/script>\n\n**Please Read :**\n\n * **Format :** [ https:\/\/momentjs.com\/docs\/#\/displaying\/format\/ ](https:\/\/momentjs.com\/docs\/#\/displaying\/format\/)"} {"query":"What is the purpose of leading slash in HTML URLs?\n\nI have noticed that some blogs posts have links using a value starting with \/ in the href.\n\nFor example:\n```\n<a href=\"\/somedir\/somepage.html\">My Page<\/a>\n\nDoes the leading `\/` mean the path is starting from the site root?\r\n\r\nIn other words, if the site URL is `www.mysite.com`, the effective `href` value is `www.mysite.com\/somedir\/somepage.html`?\r\n\r\nIs this a convention accepted in all browsers?","reasoning":"An article or a code is required to explain why some blog post links use values starting with `\/` in `href`.","id":"46","excluded_ids":["N\/A"],"gold_ids_long":["URI_General_Syntax\/URI_General_Syntax.txt"],"gold_ids":["URI_General_Syntax\/URI_General_Syntax_17_0.txt"],"gold_answer":"It's important to start a URL with a ` \/ ` so that the intended URL is\nreturned instead of its parent or child.\n\nLet's say if your browsing ` \/cream-cakes\/ ` then you have a link on the page\nthat has ` blah.html ` without the forward slash it is going to attempt to\nvisit page ` \/cream-cakes\/blah.html ` while with the forward slash it'll\nassume you mean the top level which will be ` domain.com\/blah.html ` .\n\nGenerally, it's best to always use ` \/ ` in my experience as its more friendly\nwhen you change the structure of your site, though there is no right or wrong\nassuming that the intended page gets returned."} {"query":"Why does ExecuteNonQuery() always return -1?\n\nIve used this method before to return the amount of rows changed. 
I am it to run an insert method, the insert runs fine in the stored procedure, but the return value from `ExecuteNonQuery()` always returns `-1`.\n\nHere is my C# code:\n\n```\nint ret = 0;\r\n\r\nusing (SqlConnection conn = new SqlConnection(this.ConnectionString))\r\n{\r\n using (SqlCommand cmd = new SqlCommand(QueryName, conn))\r\n {\r\n conn.Open();\r\n\r\n if (Params != null)\r\n cmd.Parameters.AddRange(Params);\r\n\r\n cmd.CommandType = CommandType.StoredProcedure;\r\n \r\n ret = cmd.ExecuteNonQuery();\r\n\r\n conn.Close();\r\n }\r\n}\r\n\r\nreturn ret;\n```\nWhy do I get -1 instead of the actual number of rows changed?","reasoning":"The function `ExecuteNonQuery()` always returns -1 rather than the actual number of rows changed. The definition of this function needs to rechecked.","id":"47","excluded_ids":["N\/A"],"gold_ids_long":["NET_Methods_Properties\/NET_Methods.txt"],"gold_ids":["NET_Methods_Properties\/NET_Methods_4_1.txt","NET_Methods_Properties\/NET_Methods_4_0.txt"],"gold_answer":"From MSDN:\n\nIf you use this method to call a store procedure that perform UPDATE\/INSERT in a table the method return -1 if the stored procedure has the SET NOCOUNT at ON value."} {"query":"SharePoint Online Error: The remote server returned an error: (403) Forbidden\n\nI would like to connect a SharePoint Online with the current user on .NET with Single Sign On. I don't want to specify a username and password in my code. 
Unfortunately, I've the following error message on ExecuteQuery():\n\nThe remote server returned an error: (403) Forbidden\n\nMy code:\n```\nstring siteCollectionUrl = \"https:\/\/xxx.sharepoint.com\/teams\/yyyy\";\r\nSystem.Net.ICredentials credentials = System.Net.CredentialCache.DefaultNetworkCredentials;\r\nSharePoint.ClientContext context = new SharePoint.ClientContext(siteCollectionUrl);\r\ncontext.Credentials = credentials;\r\nSharePoint.Web web = context.Web;\r\ncontext.Load(web);\r\ncontext.ExecuteQuery();\r\nstring tt = web.Title;\n```","reasoning":"The programmer wants to connect a SharePoint Online with the current user on .NET with Single Sign On without specifying a username and password in my code. An authentication is required for client.","id":"48","excluded_ids":["N\/A"],"gold_ids_long":["Learn_SharePointOnline_Permission\/Learn_SharePointOnline.txt"],"gold_ids":["Learn_SharePointOnline_Permission\/Learn_SharePointOnline_0_0.txt"],"gold_answer":"You have to use the [ `\nMicrosoft.SharePoint.Client.SharePointOnlineCredentials `\n](https:\/\/learn.microsoft.com\/en-us\/previous-versions\/office\/sharepoint-\ncsom\/jj164693\\(v=office.15\\)) class for authentication.\n\nLike this:\n\n \n \n String name = \"[[email\u00a0protected]](\/cdn-cgi\/l\/email-protection)\";\n String password = \"xxxx\";\n SecureString securePassword = new SecureString();\n foreach (char c in password.ToCharArray())\n {\n securePassword.AppendChar(c);\n }\n var credentials = new SharePointOnlineCredentials(name, securePassword);\n string siteCollectionUrl = \"https:\/\/xxx.sharepoint.com\/teams\/yyyy\";\n \n ClientContext ctx = new ClientContext(siteCollectionUrl );\n ctx.Credentials = credentials;"} {"query":"Accessing user provided env variables in cloudfoundry in Spring Boot application\n\nI have the following user provided env variable defined for my app hosted in cloudfoundry\/pivotal webservices:\n\n```\nMY_VAR=test\n```\nI am trying to access like 
so:\n```\nSystem.getProperty(\"MY_VAR\")\n```\nbut I am getting null in return. Any ideas as to what I am doing wrong would be appreciated.","reasoning":"When accessing the user-provided env variable for his\/her app, it always returns null. Some functions are required to specify values that get injected into his\/her application without the application code explicitly retrieving the value.","id":"49","excluded_ids":["N\/A"],"gold_ids_long":["spring_io\/spring_io.txt"],"gold_ids":["spring_io\/spring_io_3_18.txt","spring_io\/spring_io_3_3.txt","spring_io\/spring_io_3_10.txt","spring_io\/spring_io_3_11.txt","spring_io\/spring_io_3_1.txt","spring_io\/spring_io_3_0.txt","spring_io\/spring_io_3_12.txt","spring_io\/spring_io_3_8.txt","spring_io\/spring_io_3_19.txt","spring_io\/spring_io_3_15.txt","spring_io\/spring_io_3_6.txt","spring_io\/spring_io_3_17.txt","spring_io\/spring_io_3_9.txt","spring_io\/spring_io_3_5.txt","spring_io\/spring_io_3_13.txt","spring_io\/spring_io_3_7.txt","spring_io\/spring_io_3_4.txt","spring_io\/spring_io_3_14.txt","spring_io\/spring_io_3_16.txt","spring_io\/spring_io_3_2.txt"],"gold_answer":"Environment variables and system properties are two different things. If you\nset an environment variable with ` cf set-env my-app MY_VAR test ` then you\nwould retrieve it in Java with ` System.getenv(\"MY_VAR\") ` , not with `\nSystem.getProperty ` .\n\nA better option is to take advantage of the Spring environment abstraction\nwith features like the ` @Value ` annotation. 
As shown in the [ Spring Boot\ndocumentation ](http:\/\/docs.spring.io\/spring-\nboot\/docs\/current\/reference\/html\/boot-features-external-config.html) , this\nallows you to specify values that get injected into your application as\nenvironment variables, system properties, static configuration, or external\nconfiguration without the application code explicitly retrieving the value.\n\n**Note** : You'll have to restage your application for your app instances to\npick up the new env var."} {"query":"Container on custom network - can't reach from host machine\n\nI created two Docker images and ran them. One is a simple Flask application (for testing purposes only) and the other is a PostgreSQL.\nI attached the code, Dockerfiles, and commands I use to run them below.\nI have tested simple commands like curl from the Flask container interactive mode, it communicates with the DB container successfully and does what it needs to do.\nBut here's the problem - whenever I try to reach it using the host machine (and I tried this on both MacOS and Ubuntu machines), by using localhost:8080 or 127.0.0.1:8080 or even the network's IP (the network I created, command is below), it simply won't reach the Flask container. Meaning I can't even check the container's logs to see what is happening because it doesn't get there.\nI checked the firewall to make sure nothing is blocked and I have tried mapping the container to a different port on my machines... 
didn't work.\nWould love some help trying to understand why is that happening.\n\napp.py:\n\n```\nfrom flask import Flask, request, jsonify\nfrom flask_sqlalchemy import SQLAlchemy\nfrom flask_migrate import Migrate\n\napp = Flask(__name__)\napp.config['SQLALCHEMY_DATABASE_URI'] = 'postgresql:\/\/postgres:postgres@my-postgres-container:5432\/db'\n\ndb = SQLAlchemy(app)\n\nmigrate = Migrate(app, db)\n\nclass MyUser(db.Model):\n id = db.Column(db.Integer, primary_key=True)\n name = db.Column(db.String(50), unique=True, nullable=False)\n\n@app.route('\/users', methods=['POST'])\ndef create_user():\n data = request.get_json()\n new_user = MyUser(name=data['name'])\n db.session.add(new_user)\n db.session.commit()\n return jsonify({'message': 'New user created!'})\n\n@app.route('\/users', methods=['GET'])\ndef get_all_users():\n users = MyUser.query.all()\n output = []\n for user in users:\n user_data = {'id': user.id, 'name': user.name}\n output.append(user_data)\n return jsonify({'users': output})\n\nif __name__ == '__main__':\n app.run(debug=True, host='0.0.0.0')\n```\n\nDockerfile-app:\n\n```\nFROM python:3.8-slim\n\nWORKDIR \/app\n\nCOPY requirements.txt .\nRUN pip install --no-cache-dir -r requirements.txt\n\nCOPY . 
.\n\nEXPOSE 5000\n\nENTRYPOINT [\"bash\", \"-c\"]\nCMD [\"flask db init && flask db migrate && flask db upgrade && flask run\"]\n```\n\nDockerfile-db:\n```\nFROM postgres:latest\n\nENV POSTGRES_USER postgres\nENV POSTGRES_PASSWORD postgres\nENV POSTGRES_DB db\n\nEXPOSE 5432\n\nCMD [\"postgres\"]\n```\nrequirements.txt:\n```\nalembic==1.13.1\nblinker==1.7.0\nclick==8.1.7\nFlask==3.0.2\nFlask-Migrate==4.0.7\nFlask-SQLAlchemy==3.1.1\nitsdangerous==2.1.2\nJinja2==3.1.3\nMako==1.3.2\nMarkupSafe==2.1.5\npsycopg2-binary==2.9.9\nSQLAlchemy==2.0.29\ntyping_extensions==4.10.0\nWerkzeug==3.0.1\n```\n\nCommands I use to run them:\n```\ndocker network create my-network\ndocker volume create my-volume\ndocker build -t my-postgres -f Dockerfile-db .\ndocker run -d -p 5432:5432 --name my-postgres-container --network my-network -v my-volume:\/var\/lib\/postgresql\/data my-postgres\ndocker build -t my-app -f Dockerfile-app .\ndocker run -d -p 8080:5000 --name my-app-container --network my-network my-app\n```","reasoning":"After using `localhost:8080` or `127.0.0.1:8080` or even the network's IP, the Flask container is not reached. The firewall is checked and the programmer has tried to map the container to a different port. The instruction related to connection is needed to be updated.","id":"50","excluded_ids":["N\/A"],"gold_ids_long":["flask_documentation\/flask_documentation.txt"],"gold_ids":["flask_documentation\/flask_documentation_1_0.txt"],"gold_answer":"Your app binds to 127.0.0.1, so it'll only accept connections from inside the\ncontainer. 
To get it to bind to 0.0.0.0 so it'll accept connections from\noutside, you need to have the ` --host=0.0.0.0 ` option on your ` flask run `\ncommand.\n\nChange the last line in your Dockerfile-app from\n\n \n \n CMD [\"flask db init && flask db migrate && flask db upgrade && flask run\"]\n \n\nto\n\n \n \n CMD [\"flask db init && flask db migrate && flask db upgrade && flask run --host=0.0.0.0\"]\n \n\nand it'll work.\n\nI can see in your app.py file that you try to bind to 0.0.0.0. For some reason\nthat doesn't work. I'm not a Python expert, so I don't know why."} {"query":"R piping: object not found. What am I missing?\n\nSo this is really a basic question, however I am stuck with trying to use the following basic code:\n\n```\ndc_test |> table(testVar)\n```\n\nIt leads to:\n```\nError: Object 'testVar' not found\n```\ndc_test is a data.frame. The column testVar is \"character\" class.\r\n\r\nIt works without an error if i directly use the table() function, however this is not as handy for including filtering and other operations:\n```\ntable(dc_test$testVar)\n```\n-> I get the table without an error.\r\n\r\nDoes the table() function not work with piping or am I using it wrong?\r\n\r\nI tried\n```\ndc_test |> table(testVar)\r\ndc_test |> table(data = _$testVar)\n```","reasoning":"The definition of function \"table\" is required for telling the difference between 2 relative codes written by the programmer. 
Another function is required to make the programmer be able to use column names without quotes or `$`.","id":"51","excluded_ids":["N\/A"],"gold_ids_long":["R_documentation\/R_functions.txt"],"gold_ids":["R_documentation\/R_functions_1_1.txt","R_documentation\/R_functions_0_2.txt","R_documentation\/R_functions_0_1.txt","R_documentation\/R_functions_0_0.txt","R_documentation\/R_functions_1_0.txt"],"gold_answer":"` dc_test |> table(testVar) ` is exactly the same as ` table(dc_test, testVar)\n` , and if you run that line you will get the same exact error.\n\nThe pipe doesn't let you use column names without ` $ ` . Many ` dplyr `\nfunctions do let you do that, and they are often used with pipes, but you can\nuse pipes without ` dplyr ` and you can use ` dplyr ` without pipes. They're\nunrelated, just often combined.\n\nWhat you can do is pipe into the ` with() ` function. The whole point of the `\nwith() ` function is to let you use column names without quotes or ` $ ` . So\nthis should work:\n\n \n \n dc_test |> with(table(testVar))\n \n ## exact equivalent to\n with(dc_test, table(testVar))\n \n\nOr you could use ` count ` , the ` dplyr ` version of ` table ` :\n\n \n \n library(dplyr)\n dc_test |> count(testVar)"} {"query":"Getting error \"Get-LMFunctionList : The security token included in the request is invalid\" when using PowerShell function Get-LMFunctionList\n\nI am trying to run the PowerShell command Get-LMFunctionList -Region \"eu-west-1\" after importing the module AWS.Tools.Lambda. I am using PowerShell version 5.1 and AWS.Tools.Lambda version 4.1.530.\n\nWhen I run the command I get the error message\n```\nGet-LMFunctionList : The security token included in the request is invalid\n```\n\nIf I try to run the equivalent command via the AWS CLI (i.e. aws lambda list-functions --region eu-west-1) everything runs fine and I get a list of functions with their details. 
So it looks like this is specific to the PowerShell command.\n\nI have tried all the suggestions listed in the Stack Overflow article How can I resolve the error \"The security token included in the request is invalid\" when running aws iam upload-server-certificate? but none of these answers have solved my issue.\n\n1. I created a new access key and secret key and added them to the credentials file\n2. I regenerated the .aws\\credentials file using the command aws configure\n3. I have restarted the session multiple times\n4. I am not using a session key to connect\n5. It is not a profile problem because it works when I use the AWS CLI\n6. There were no cache files at ~\\.aws\\cli\\cache\\","reasoning":"When using the PowerShell function Get-LMFunctionList, the error \"The security token included in the request is invalid\" appears. Various workarounds have been tried, but none of them work. It is worth noting that using the AWS CLI commands works fine, which suggests that the problem is specific to PowerShell commands.The description for \"AWS Credentials\" needs to be re-checked.","id":"52","excluded_ids":["N\/A"],"gold_ids_long":["aws_legacy_credentials_and_S3\/aws_legacy_credentials.txt"],"gold_ids":["aws_legacy_credentials_and_S3\/aws_legacy_credentials_0_0.txt","aws_legacy_credentials_and_S3\/aws_legacy_credentials_0_4.txt","aws_legacy_credentials_and_S3\/aws_legacy_credentials_0_1.txt","aws_legacy_credentials_and_S3\/aws_legacy_credentials_0_2.txt","aws_legacy_credentials_and_S3\/aws_legacy_credentials_0_3.txt"],"gold_answer":"The problem was that the default profile included in the ` .aws\\credentials `\nfile contained the correct credentials but the default profile that has been\ncreated in the ` AWS SDK ` (i.e. 
stored in the ` NetSDKCredentialsFile `\nstore) had not been updated.\n\nAs described in the article [ Using AWS Credentials\n](https:\/\/docs.aws.amazon.com\/powershell\/latest\/userguide\/specifying-your-aws-\ncredentials.html) shared by @jdweng the AWS SDK takes priority over the shared\ncredentials in the ` .aws\\credentials ` file.\n\nIn my particular case, I had a default profile created in the `\n.aws\\credentials ` file and a default profile created in the AWS SDK store,\nwhich was outdated. However because the AWS SDK credentials were taking\npriority, only the out-of-date credentials were being used.\n\nI could see the default profiles created in both stores when I ran the command\n` Get-AWSCredential -ListProfileDetail ` .\n\nThe solution was removing the default profile from the AWS SDK using the `\nRemove-AWSCredentialProfile -ProfileName default ` command. After that, the\ncorrect credentials in the shared credentials file were used.\n\nNote that I tried to update the credentials in the AWS SDK and that worked but\nI found that the region-specific configuration stored in the ` .aws\\config `\nwas then ignored. So I reverted to using the shared credentials (i.e. all the\nfiles in the ` .aws\\ ` folder)."} {"query":"Quantization and torch_dtype in huggingface transformer\n\nNot sure if its the right forum to ask but.\r\n\r\nAssuming i have a gptq model that is 4bit. how does using from_pretrained(torch_dtype=torch.float16) work? In my understanding 4 bit meaning changing the weights from either 32-bit precision to 4bit precision using quantization methods.\r\n\r\nHowever, calling it the torch_dtype=torch.float16 would mean the weights are in 16 bits? 
Am i missing something here.","reasoning":"The description and basic logic of `from_pretrained(torch_dtype=torch.float16)` are needed.","id":"53","excluded_ids":["N\/A"],"gold_ids_long":["hugging_face\/hugging_face.txt"],"gold_ids":["hugging_face\/hugging_face_0_3.txt","hugging_face\/hugging_face_0_2.txt","hugging_face\/hugging_face_0_1.txt"],"gold_answer":"GPTQ is a Post-Training Quantization method. This means a GPTQ model was\ncreated in full precision and then compressed. Not all values will be in 4\nbits unless every weight and activation layer has been quantized.\n\nThe [ GPTQ method ](https:\/\/huggingface.co\/blog\/gptq-integration) does not do\nthis:\n\n> Specifically, GPTQ adopts a mixed int4\/fp16 quantization scheme where\n> weights are quantized as int4 while activations remain in float16.\n\nAs these values need to be multiplied together, this means that,\n\n> during inference, weights are dequantized on the fly and the actual compute\n> is performed in float16.\n\nIn a Hugging Face quantization blog post from Aug 2023, they talk about the\npossibility of quantizing activations as well in the [ Room for Improvement\n](https:\/\/huggingface.co\/blog\/gptq-integration#room-for-improvement) section.\nHowever, at that time there were no open source implementations.\n\nSince then, they have released [ Quanto\n](https:\/\/github.com\/huggingface\/quanto) . This does support quantizing\nactivations. It looks promising but it is not yet quicker than other\nquantization methods. It is in beta and the docs say to expect breaking\nchanges in the API and serialization. There are some accuracy and perplexity [\nbenchmarks ](https:\/\/github.com\/huggingface\/quanto\/tree\/main\/bench\/generation)\nwhich look pretty good with most models. 
Surprisingly, at the moment it is [\nslower\n](https:\/\/github.com\/huggingface\/quanto\/blob\/b9ee78335a6f0f90363da5909b5b749a1beaa4ce\/README.md?plain=1#L87)\nthan 16-bit models due to lack of optimized kernels, but that seems to be\nsomething they're working on.\n\nSo this does not just apply to GPTQ. You will find yourself using float16 with\nany of the popular quantization methods at the moment. For example, [\nActivation-aware Weight Quantization (AWQ)\n](https:\/\/huggingface.co\/docs\/transformers\/main\/en\/quantization?bnb=4-bit#awq)\nalso preserves in full precision a small percentage of the weights that are\nimportant for performance. [ This ](https:\/\/huggingface.co\/blog\/overview-\nquantization-transformers) is a useful blog post comparing GPTQ with other\nquantization methods."} {"query":"Is it possible to sort a ES6 map object?\n\nIs it possible to sort the entries of a es6 map object?\n```\nvar map = new Map();\r\nmap.set('2-1', foo);\r\nmap.set('0-1', bar);\n```\n\nresults in:\r\n```\nmap.entries = {\r\n 0: {\"2-1\", foo },\r\n 1: {\"0-1\", bar }\r\n}\n```\n\nIs it possible to sort the entries based on their keys?\r\n```\nmap.entries = {\r\n 0: {\"0-1\", bar },\r\n 1: {\"2-1\", foo }\r\n}\n```","reasoning":"The function(s) is\/are needed to sort the entries of a es6 map object based on their keys.","id":"54","excluded_ids":["N\/A"],"gold_ids_long":["Mmdn_built_in_objects\/Mmdn_built_in_objects.txt"],"gold_ids":["Mmdn_built_in_objects\/Mmdn_built_in_objects_106_3.txt","Mmdn_built_in_objects\/Mmdn_built_in_objects_106_4.txt","Mmdn_built_in_objects\/Mmdn_built_in_objects_81_4.txt","Mmdn_built_in_objects\/Mmdn_built_in_objects_81_5.txt","Mmdn_built_in_objects\/Mmdn_built_in_objects_81_3.txt","Mmdn_built_in_objects\/Mmdn_built_in_objects_106_5.txt"],"gold_answer":"According MDN documentation:\n\n> A Map object iterates its elements in insertion order.\n\nYou could do it this way:\n\n \n \n var map = new Map();\n map.set('2-1', \"foo\");\n 
map.set('0-1', \"bar\");\n map.set('3-1', \"baz\");\n \n var mapAsc = new Map([...map.entries()].sort());\n \n console.log(mapAsc)\n\nUsing ` .sort() ` , remember that the array is sorted according to each\ncharacter's Unicode code point value, according to the string conversion of\neach element. So ` 2-1, 0-1, 3-1 ` will be sorted correctly."} {"query":"Count number of items in an array with a specific property value\n\nI have a Person() class:\n```\nclass Person : NSObject {\n\n var firstName : String\n var lastName : String\n var imageFor : UIImage?\n var isManager : Bool?\n\n init (firstName : String, lastName: String, isManager : Bool) {\n self.firstName = firstName\n self.lastName = lastName\n self.isManager = isManager\n }\n}\n```\n\nI have an array of Person()\n```\nvar peopleArray = [Person]()\n```\n\nI want to count the number of people in the array who have\n```\nisManager: true\n```\n\nI feel this is out there, but I can't find it, or find the search parameters.\r\n\r\nThanks.","reasoning":"One class is defined with some initial attributes involved. The programmer creates an array of this class type. One method is required to count the number of elements that have a certain attribute value.","id":"55","excluded_ids":["N\/A"],"gold_ids_long":["swift_array_methods\/swift_array_methods.txt"],"gold_ids":["swift_array_methods\/swift_array_methods_0_2.txt"],"gold_answer":"Use the ` filter ` method:\n\n \n \n let managersCount = peopleArray.filter { (person : Person) -> Bool in\n return person.isManager!\n }.count\n \n\nor even simpler:\n\n \n \n let moreCount = peopleArray.filter{ $0.isManager! }.count"} {"query":"Refreshing UI with FutureBuilder or StreamBuilder\n\nI have been using a FutureBuilder to read data from an api, and update the UI when it completes. All good, but now I want to refresh this every 5 minutes. I thought I could do this with a Timer.
My code\n\n```\nclass _MyTileState extends State<MyTile> {\n\n late Future<MyData?> myFuture = fetchMyData();\n\n @override\n Widget build(BuildContext context) {\n return Column(\n children: [\n FutureBuilder<MyData?>(\n future: myFuture, \n builder: (BuildContext context, AsyncSnapshot<MyData?> snapshot) {\n if (snapshot.connectionState == ConnectionState.waiting) {\n return CircularProgressIndicator();\n } else if (snapshot.hasError) {\n return Text('Error: ${snapshot.error}');\n } else {\n return Text(snapshot.data.title);\n }\n },\n ),\n ],\n );\n }\n\n Future<MyData?> fetchMyData() async {\n var myData = await readMyData();\n return myData;\n }\n```\n\nIf I add a timer I get an error message : The instance member 'fetchMyData' can't be accessed in an initializer\n```\nTimer timer = Timer(Duration(minutes: 5), () {\n myFuture = fetchMyData();\n setState(() {});\n});\n```\n\nWhile trying to sort this, it has become apparent that FutureBuilder should not be used to refresh. Instead use a StreamBuilder. But this just displays the CircularProgressIndicator and only appears to run once\n```\n StreamBuilder<MyData>(\n stream: fetchMyDataStream(),\n builder: (context, snapshot) {\n if (snapshot.connectionState == ConnectionState.active) {\n return Text(snapshot.data.title);\n }\n\n return CircularProgressIndicator();\n }\n ),\n\n\n Stream<MyData> get fetchMyDataStream() async* {\n print(\"start fetch my data stream\");\n \n await Future.delayed(Duration(seconds: 5)); \/\/ set to seconds for testing\n\n var myData = await readMyData();\n yield myData;\n }\n}\n```\nI am trying to work out which way I should be going - FutureBuilder or StreamBuilder? And for the preferred option, what I am doing wrong.","reasoning":"Using FutureBuilder with a Timer in an initialiser results in the error message \"The instance member 'fetchMyData' can't be accessed in an initializer\". This is because the instance member cannot be accessed in an initialiser. 
Now wanting to refresh the data, the programmer considers using a StreamBuilder, but when using it, only the CircularProgressIndicator is displayed and it seems to run only once. There needs to be a solution that can provide real-time data streaming and support periodic refreshing.","id":"56","excluded_ids":["N\/A"],"gold_ids_long":["flutter_classes\/flutter_classes.txt"],"gold_ids":["flutter_classes\/flutter_classes_0_0.txt","flutter_classes\/flutter_classes_0_1.txt","flutter_classes\/flutter_classes_1_0.txt","flutter_classes\/flutter_classes_0_2.txt","flutter_classes\/flutter_classes_1_1.txt"],"gold_answer":"You may simplify it:\n\n 1. use initState() for an initial fetch and to start a periodic timer. You can update the UI (loading, loaded, error). \n 2. do not forget to dispose of the timer. \n\n[ ![enter image description here](https:\/\/i.sstatic.net\/HZlrH.png)\n](https:\/\/i.sstatic.net\/HZlrH.png)"} {"query":"What is the best way to slice a dataframe up to the first instance of a mask?\n\nThis is my DataFrame:\n```\nimport pandas as pd\ndf = pd.DataFrame(\n {\n 'a': [10, 15, 20, 25, 30, 35, 40, 45, 50, 55, 60, 65, 70],\n 'b': [1, 1, 1, -1, -1, -2, -1, 2, 2, -2, -2, 1, -2],\n }\n)\n```\n\nThe mask is:\n```\nmask = (\n (df.b == -2) &\n (df.b.shift(1) > 0)\n)\n```\n\nExpected output: slicing df up to the first instance of the mask:\n```\n a b\n0 10 1\n1 15 1\n2 20 1\n3 25 -1\n4 30 -1\n5 35 -2\n6 40 -1\n7 45 2\n8 50 2\n```\n\nThe first instance of the mask is at row 9. So I want to slice the df up to this index.\n\nThis is what I have tried.
It works but I am not sure if it is the best way:\n```\nidx = df.loc[mask.cumsum().eq(1) & mask].index[0]\nresult = df.iloc[:idx]\n```","reasoning":"A function is required to slice the `df` up to the given index.","id":"57","excluded_ids":["N\/A"],"gold_ids_long":["pandas_series\/pandas_series.txt"],"gold_ids":["pandas_series\/pandas_series_48_5.txt","pandas_series\/pandas_series_48_6.txt"],"gold_answer":"You can filter by inverted mask with [ ` Series.cummax `\n](http:\/\/pandas.pydata.org\/pandas-\ndocs\/stable\/reference\/api\/pandas.Series.cummax.html) :\n\n \n \n out = df[~mask.cummax()]\n print (out)\n \n a b\n 0 10 1\n 1 15 1\n 2 20 1\n 3 25 -1\n 4 30 -1\n 5 35 -2\n 6 40 -1\n 7 45 2\n 8 50 2\n \n\n**How it working:**\n\n \n \n print (df.assign(mask=mask,\n cumax=mask.cummax(),\n inv_cummax=~mask.cummax()))\n \n a b mask cumax inv_cummax\n 0 10 1 False False True\n 1 15 1 False False True\n 2 20 1 False False True\n 3 25 -1 False False True\n 4 30 -1 False False True\n 5 35 -2 False False True\n 6 40 -1 False False True\n 7 45 2 False False True\n 8 50 2 False False True\n 9 55 -2 True True False\n 10 60 -2 False True False\n 11 65 1 False True False\n 12 70 -2 True True False"} {"query":"Is there a reason why I am getting a null-related error in this code\n\n```\ndata class Person(val name: String, val age: Int)\n\nfun checkNameOccurrence(people: List<Person>): Map<String, Int>{\n \/\/ The map has a key that represents a name and a value of how many times that name is repeated\n val namesMap: MutableMap<String, Int> = mutableMapOf()\n for(person in people){\n if(person.name in namesMap.keys){\n namesMap[person.name] += 1\n }\n namesMap[person.name] = 1\n }\n return namesMap\n}\n```\n\nI don't understand the reason for the null-related error in the code. Isn't that code block only meant to run if the key is found in the map, how is there still be a null error. 
I'm partly new to Kotlin and I've only worked on Python before (there weren't any issues with null values in Python)","reasoning":"The code always raises the null-related error at `namesMap[person.name] += 1`. Therefore, a Kotlin method is required to enforce a non-null value.","id":"58","excluded_ids":["N\/A"],"gold_ids_long":["kotlin_ObservableProperty_delegates\/ObservableProperty.txt"],"gold_ids":["kotlin_ObservableProperty_delegates\/ObservableProperty_0_0.txt"],"gold_answer":"The compiler is just not smart enough to understand ` namesMap[person.name] `\ncan't be null in this case. You can use ` !! ` or ` getValue() ` to enforce a\nnon-null value:\n\n \n \n namesMap[person.name] = namesMap.getValue(person.name) + 1\n \n\nHowever, there are much shorter and cleaner alternatives to your code. First,\nwe don't have to check if the value exists, but just default to 0:\n\n \n \n namesMap[person.name] = namesMap.getOrDefault(person.name, 0) + 1\n \n\nThis replaces the whole if-else block, not only the single line from the\noriginal code.\n\nEven better, the whole function could be replaced with:\n\n \n \n people.groupingBy { it.name }.eachCount()\n \n\nIt groups people by the name and counts how many times each name existed in\nthe list."} {"query":"How can we check the format of the date coming in Payload using Dataweave\n\nWe have a requirement where the source is a file and it has a date field that can contain different date formats. For example MM\/dd\/yyyy or MM\/dd\/yy or MM-dd-yyyy or MM-dd-yy. We want to finally convert this date field into MM-dd-yyyy\n\nHow can we check the date format from source and accordingly transform it into required format?
or is there a way where we can transform any date format into desired format?\n\nInput:\n```\n[\n {\n \"date\": \"01-03-2023\"\n },\n {\n \"date\": \"01-03-23\"\n },\n {\n \"date\": \"01\/03\/2023\"\n },\n {\n \"date\": \"01\/03\/23\"\n }\n]\n```\n\nExpected Output: format \"MM-dd-yyyy\"\n```\n[\n {\n \"date\": \"01-03-2023\"\n },\n {\n \"date\": \"01-03-2023\"\n },\n {\n \"date\": \"01-03-2023\"\n },\n {\n \"date\": \"01-03-2023\"\n }\n]\n```","reasoning":"The desired behavior looks like some kind of format transformation. There are different types of date format. Therefore, some functions are required to try different date patterns to convert the inputs.","id":"59","excluded_ids":["N\/A"],"gold_ids_long":["mulesoft_runtime_Objects\/mulesoft_runtime.txt"],"gold_ids":["mulesoft_runtime_Objects\/mulesoft_runtime_6_4.txt"],"gold_answer":"You can use the ` try() ` \/ ` orElseTry() ` \/ ` orElse() ` functions to try\ndifferent date patterns to convert the inputs. If no pattern matches then I\nreturned null to signal the error (in the last ` orElse() ` . You can add\nadditional input patterns by adding ` orElseTry() ` calls. 
One benefit of this\nsolution is that it avoids using string manipulation for handling dates, which is\nan anti-pattern.\n\n \n \n %dw 2.0\n output application\/json\n import * from dw::Runtime\n fun attemptDateParsing(d:String)=\n (try(() -> d as Date {format: \"MM-dd-yyyy\"} as String { format: \"MM-dd-yyyy\" }) \n orElseTry (() -> d as Date {format: \"MM-dd-yy\"} as String { format: \"MM-dd-yyyy\" }) \n orElseTry (() -> d as Date {format: \"MM\/dd\/yyyy\"} as String { format: \"MM-dd-yyyy\" }) \n orElseTry (() -> d as Date {format: \"MM\/dd\/yy\"} as String { format: \"MM-dd-yyyy\" }) \n orElse (() -> null))\n ---\n payload map { date: attemptDateParsing($.date)}\n \n\nOutput:\n\n \n \n [\n {\n \"date\": \"01-03-2023\"\n },\n {\n \"date\": \"01-03-2023\"\n },\n {\n \"date\": \"01-03-2023\"\n },\n {\n \"date\": \"01-03-2023\"\n }\n ]"} {"query":"How to print the result from a loop in one line on the console in Javascript\n\nI'm trying to print out the result from a loop onto the console but it keeps printing on a new line instead of printing everything on the same line.\n\nThis is the output I'm expecting:\n```\n...17\u00b0C in 1 days ... 21\u00b0C in 2 days ... 21\u00b0C in 3 days\n```\n\nBut this is the output I keep getting:\n```\n... 17\u00b0C in 1 days\n\n... 21\u00b0C in 2 days\n\n... 23\u00b0C in 3 days\n```\n\nI've tried most of the answers I found but they are not working.\n```\nconst arr =[17, 21, 23];\n\nfunction printForcast(arr) { \n for (let i = 0; i < arr.length; i++) {\n let days = i;\n\n if(i <= days){\n days +=1;\n }\n\n console.log(`... ${arr[i]}\u00b0C in ${days} days`); \n }\n return arr;\n}\n\nprintForcast(arr);\n<!DOCTYPE html>\n```","reasoning":"The programmer hopes to print the result from a loop in one line on the console in Javascript. However, the code keeps printing on a new line.
An explanation is needed.","id":"60","excluded_ids":["N\/A"],"gold_ids_long":["Mmdn_Instance_console\/Mmdn_Instance_console.txt"],"gold_ids":["Mmdn_Instance_console\/Mmdn_Instance_console_3_1.txt"],"gold_answer":"Since ` console.log() ` will always output a newline, you'll need to [ append\n( ` += ` ) ](https:\/\/stackoverflow.com\/questions\/31845895\/how-can-i-build-\nconcatenate-strings-in-javascript) your output to a string, and then log that\nafter the loop:\n\n \n \n const arr =[17, 21, 23];\n \n function printForcast(arr) {\n \n let output = '';\n \n for (let i = 0; i < arr.length; i++) {\n let days = i;\n \n if(i <= days){\n days +=1;\n }\n output += `... ${arr[i]}\u00b0C in ${days} days`; \n }\n \n console.log(output);\n }\n \n printForcast(arr);"} {"query":"Do GTK file chooser dialogs come with localized strings for buttons and titles?\n\nOn Windows and macOS, file chooser dialogs are built into the operating system, and there is a way for an application to tell the dialog to localize the text on buttons and titles to the system language (usually by not explicitly setting the button and title text).\n\nOn Linux, while there is no system file chooser dialog, the vast majority of applications use GTK. However, GTK's file chooser dialogs require users to provide strings for the button and title text:\n\n```\ndialog = gtk_file_chooser_dialog_new (\"Open File\",\n parent_window,\n action,\n \"_Cancel\",\n GTK_RESPONSE_CANCEL,\n \"_Open\",\n GTK_RESPONSE_ACCEPT,\n NULL);\n```\n\nLeaving those strings empty or null leads to the buttons and title having no text at all, instead of having some default text.\n\nDoes GTK have its own set of localized default button and title text that application developers can tap into?\n\nNote that many Linux applications use the gettext internationalization library, but that depends on the application to provide a translation database for every language they wish to support, which is not what I'm looking for.
I'm looking for a way for my application to show proper dialog text in every language that GTK knows about, even if my application doesn't have translated strings for them.\n\nNote also that GTK message dialogs created with `gtk_message_dialog_new` can have correctly translated button text, assuming that the user has installed the correct language packs, as no text is explicitly specified (you just pass `GTK_BUTTONS_OK_CANCEL` to it to get correctly translated \"OK\" and \"Cancel\" buttons), but there seems to be no equivalent for file chooser dialogs.","reasoning":"The programmer wants to have correctly translated button text by using `gtk_message_dialog_new` rather than `gtk_file_chooser_dialog_new`. However, it might not work. An explanation is required on the difference between these two functions.","id":"61","excluded_ids":["N\/A"],"gold_ids_long":["Gtk_FileChooserDialog_MessageDialog\/MessageDialog.txt","Gtk_FileChooserDialog_MessageDialog\/Gtk_FileChooserDialog.txt"],"gold_ids":["Gtk_FileChooserDialog_MessageDialog\/Gtk_FileChooserDialog_189_0.txt","Gtk_FileChooserDialog_MessageDialog\/MessageDialog_127_0.txt"],"gold_answer":"Short answer: no.\n\nThere is an essential difference between the behavior of ` gtk_message_dialog\n` and ` gtk_file_chooser_dialog ` . With gnome focusing on flatpak\ndevelopment, while ` gtk_message_dialog ` will always use the GTK dialog, a\ncall from an application to ` gtk_file_chooser_dialog ` will not always use\nthe GTK chooser dialog. If the application is running in a QT environment,\nthis call will open the standard QT environment chooser dialog. The same thing\ngoes for any other environment.\n\nWhen this happens, the translation of the content of the buttons may differ\nfrom the way GTK does it and therefore the application is responsible for\ncarrying out this translation, since we do not know whether or not the other\nenvironment will do this, or how it will do it.
"} {"query":"invalid hook call, one works but the other does not\n\nI am trying to modularise my code, so I'm trying to separate the data that is being read.\n\nHere is my previous code that works without error:\n\n```\nfunction Home({navigation, route}): React.JSX.Element {\n const [showAddBillModal, setShowAddBillModal] = useState(false)\n const isDarkMode = useColorScheme() === 'dark';\n const backgroundStyle = {\n backgroundColor: isDarkMode ? Colors.darker : Colors.lighter,\n };\n\n const toggleShowAddBillModal = () => {\n setShowAddBillModal(!showAddBillModal)\n }\n\n \/\/ const user = getLoggedInUser() \/\/works\n\n return (...)\n```\n\nBut when I try to modularise it by separating the user data read into another file\/class:\n\nlogin page:\n```\nconst login = (text, navigation) =>{\nlet userData = getLoggedInUser(text)\nif(userData !== null || userData !== undefined){\n () => navigation.navigate('Home', {user: userData})\n}\n}\n\nfunction Login({ navigation }): React.JSX.Element {\nconst [name, onChangeNameText] = React.useState('');\nconst isDarkMode = useColorScheme() === 'dark';\nconst backgroundStyle = {\n backgroundColor: isDarkMode ?
Colors.darker : Colors.lighter,\n};\n\n\/\/ const user = getLoggedInUser()\nreturn (....\n <Pressable\n style={[styles.button, styles.buttonClose]}\n onPress={() => login(name, navigation)}\n >...\n)\n}\n```\n\nfile to read data:\n```\nexport default getLoggedInUser = async (name) => {\nconst [userData, setUserData] = useState(null)\n\/\/ const userData = null\ntry{\n console.log(\"test\")\n await firestore()\n .collection('users')\n .get()\n .then(querySnapshot => {\n querySnapshot.forEach(doc => {\n if (doc.data().name == name) {\n setUserData(doc)\n \/\/ return doc\n \/\/ userData = doc\n \/\/ return userData\n }\n });\n });\n return userData\n}\ncatch(error){\n console.log(error)\n}\n}\n```\n\ni get invalid hook call here i have tried normally doing \"function name() {hook ....}\" but it also gives the same error, its working everywhere except this part","reasoning":"When the programmer call React hooks outside React components or custom React hooks, he\/she gets an invalid hook call. A reason is required to explain why this error happens.","id":"62","excluded_ids":["N\/A"],"gold_ids_long":["react_dev\/react_dev.txt"],"gold_ids":["react_dev\/react_dev_1_1.txt","react_dev\/react_dev_1_0.txt"],"gold_answer":"You cannot call React hooks outside React components or custom React hooks as\nthis breaks the [ Rules of React Hooks ](https:\/\/react.dev\/warnings\/invalid-\nhook-call-warning#breaking-rules-of-hooks) .\n\nYou could convert your ` getLoggedInUser ` function into a React hook so that\nit can also use the ` useState ` hook. 
I'd suggest returning a function the UI\ncan call to initiate the data fetching from your firestore backend.\n\nExample Implementation:\n\n \n \n export default useGetLoggedInUser = () => {\n const [userData, setUserData] = useState(null);\n \n const getLoggedInUser = async (name) => {\n try {\n const users = await firestore()\n .collection('users')\n .get();\n \n let userData = null;\n querySnapshot.forEach(doc => {\n if (doc.data().name == name) {\n setUserData(doc);\n }\n });\n \n return userData;\n } catch(error) {\n console.log(error);\n \/\/ re-throw for callers\n throw error;\n }\n };\n \n return {\n userData,\n getLoggedInUser,\n };\n }\n \n\nMove the ` login ` handler into the ` Login ` component to close over ` name `\nand ` navigator ` in callback scope. Call your new ` useGetLoggedInUser ` hook\nand access the returned ` getLoggedInUser ` function, to be called in the `\nlogin ` callback. ` await ` its return value, and if truthy, issue the\nnavigation action.\n\n \n \n function Login({ navigation }): React.JSX.Element {\n const [name, onChangeNameText] = React.useState('');\n const isDarkMode = useColorScheme() === 'dark';\n \n const { getLoggedInUser } = useGetLoggedInUser();\n \n const backgroundStyle = {\n backgroundColor: isDarkMode ? 
Colors.darker : Colors.lighter,\n };\n \n const login = async () => {\n try {\n const user = await getLoggedInUser(name);\n \n if (user) {\n navigation.navigate('Home', { user });\n }\n } catch(error) {\n \/\/ handle\/ignore\/etc\n }\n };\n \n return (\n ...\n <Pressable\n style={[styles.button, styles.buttonClose]}\n onPress={login}\n >\n ...\n <\/Pressable>\n );\n }"} {"query":"Why Send requires Sync in this case?\n\nOn this very simple example, I require `OnConsume` to be a `Send` function so I can send it to threads.\n\n```\nuse std::sync::Arc;\n\ntrait EncodedPacket {}\n\npub type OnConsume = Arc<dyn Fn() -> Option<Box<dyn EncodedPacket>> + Send>;\n\npub trait Decoder: Send{}\n\npub struct DummyDecoder {\n pub on_consume: Option<OnConsume>,\n}\n\nimpl Decoder for DummyDecoder {}\n```\n\nI'm getting this error about `Sync` though.\n\nError:\n```\nerror[E0277]: `(dyn Fn() -> Option<Box<(dyn EncodedPacket + 'static)>> + Send + 'static)` cannot be shared between threads safely\n --> src\/lib.rs:14:6\n |\n7 | pub trait Decoder: Send\n | ---- required by this bound in `Decoder`\n...\n14 | impl Decoder for DummyDecoder {\n | ^^^^^^^ `(dyn Fn() -> Option<Box<(dyn EncodedPacket + 'static)>> + Send + 'static)` cannot be shared between threads safely\n |\n = help: the trait `Sync` is not implemented for `(dyn Fn() -> Option<Box<(dyn EncodedPacket + 'static)>> + Send + 'static)`\n = note: required because of the requirements on the impl of `Send` for `Arc<(dyn Fn() -> Option<Box<(dyn EncodedPacket + 'static)>> + Send + 'static)>`\n = note: required because it appears within the type `Option<Arc<(dyn Fn() -> Option<Box<(dyn EncodedPacket + 'static)>> + Send + 'static)>>`\nnote: required because it appears within the type `DummyDecoder`\n --> src\/lib.rs:10:12\n |\n10 | pub struct DummyDecoder {\n```\n\nWhy is it that it requires `Sync` for `(dyn Fn() -> Option<Box<(dyn EncodedPacket + 'static)>> + Send + 'static)`?","reasoning":"When writing `send` function inside `Arc`, 
an error is raised stating that the code line with the `send` function \"cannot be shared between threads safely\" and that `Sync` should be implemented as well. The definition of `Arc` should be rechecked to find the answer.","id":"63","excluded_ids":["N\/A"],"gold_ids_long":["rust_std_library_types_Traits\/rust_std_library_types.txt"],"gold_ids":["rust_std_library_types_Traits\/rust_std_library_types_1_3.txt"],"gold_answer":"If you look at the docs for ` Arc ` you will see:\n\n \n \n impl<T> Send for Arc<T>\n where\n T: Sync + Send + ?Sized, \n \n\nThat is, ` Arc<T> ` implements ` Send ` only when ` T ` implements ` Sync ` as\nwell. Here's why: if you put something in an ` Arc ` , clone the ` Arc ` , and\n_send_ the clone to another thread, then the thing is now _shared_ between\nthose two threads \u2014 and therefore must be required to be ` Sync ` .\n\nYou _can_ put a non- ` Sync ` type in an ` Arc ` \u2014 there's no trait bound that\nprevents this \u2014 but you can't ever send such an ` Arc ` . That's why the\ncompiler attributes this to the ` impl Decoder for DummyDecoder ` rather than\nsomething else:\n\n 1. You write ` impl Decoder for DummyDecoder {} ` , \n 2. which requires ` DummyDecoder: Send ` because of the supertrait, \n 3. which requires ` Option<OnConsume>: Send ` because of the struct field, \n 4. which requires ` Arc<dyn Fn() -> Option<Box<dyn EncodedPacket>> + Send>: Send ` , \n 5. which requires ` (dyn Fn() -> Option<Box<dyn EncodedPacket>> + Send): Sync ` because of the bounds on ` impl Send for Arc ` , \n 6. which is false, because ` Sync ` isn't in the list of traits of the ` dyn ` trait object.
\n\nSo, you need to add ` + Sync ` to your ` dyn Fn ` (or wrap it in a ` Mutex `\nif you don't want to require the function to do its own thread-safety).\n\nIn general, when you want thread-safety, these sets of trait bounds for\nfunctions make sense:\n\n * ` Fn() + Send + Sync `\n * ` FnMut() + Send `\n * ` FnOnce() + Send `\n\nIt's still _possible_ to send ` Fn + Send ` across threads and call it, but\nsince the point of ` Fn ` is to be shareable, you might as well include the `\nSync ` bound and make it shareable across threads, unless there's some\nparticular reason the actual functions are likely to be obligated to be `\n!Sync ` \u2014 in which case you should consider making them also ` FnMut ` , to\ngive the functions opportunity to take advantage of their non-sharedness."} {"query":"Vector in a HashMap [duplicate]\n\nI know how to use, String I know how to use, Vector But I am facing issue to use HashMap\n\n```\n#[derive(Serialize)]\npub struct TestStructs {\n pub ttStrngg: String,\n pub ttVctorr: Vec<String>,\n pub ttHshMpp: HashMap<String, Vec<String>>,\n}\n```\n\nHere is what I am trying\n\nDon't have any issues with String\n```\nttStrngg: \"Stringgg\".to_string()\n```\n\nDon't have any issues with Vector\n```\nttVctorr: vec![\"VecStr1\".to_string(), \"VecStr2\".to_string()]\n```\n\nBut do have an issue with HashMap\n```\nttHshMpp: \"HMK1\".to_string(), vec![\"K1V1\".to_string(), \"K1V2\".to_string()]\n```\n\nI've shared what I tried and now looking for that missing thing in HashMap\n\nHere is the Rust Playground link to try your own\n\nAnd here is the error\n\n```\n Compiling playground v0.0.1 (\/playground)\nerror: expected one of `,`, `:`, or `}`, found `!`\n --> src\/main.rs:9:46\n |\n6 | let ttStrctts = TestStructs {\n | ----------- while parsing this struct\n...\n9 | ttHshMpp: \"HMK1\".to_string(), vec![\"K1V1\".to_string(), \"K1V2\".to_string()]\n | ---^ expected one of `,`, `:`, or `}`\n | |\n | while parsing this struct field\n\nerror[E0308]: 
mismatched types\n --> src\/main.rs:9:23\n |\n9 | ttHshMpp: \"HMK1\".to_string(), vec![\"K1V1\".to_string(), \"K1V2\".to_string()]\n | ^^^^^^^^^^^^^^^^^^ expected `HashMap<String, Vec<String>>`, found `String`\n |\n = note: expected struct `HashMap<std::string::String, Vec<std::string::String>>`\n found struct `std::string::String`\n```","reasoning":"The programmer is unfamiliar with `HashMap`, especially how to construct one from `(k, v)` pairs and arrays. The relevant docs are required.","id":"64","excluded_ids":["N\/A"],"gold_ids_long":["r_std_hashmap\/r_std_hashmap.txt"],"gold_ids":["r_std_hashmap\/r_std_hashmap_16_0.txt","r_std_hashmap\/r_std_hashmap_16_2.txt","r_std_hashmap\/r_std_hashmap_16_1.txt"],"gold_answer":"](..\/..\/std\/index.html)\n\n[ ![logo](..\/..\/static.files\/rust-logo-151179464ae7ed46.svg)\n](..\/..\/std\/index.html)\n\n## [ std ](..\/..\/std\/index.html) 1.80.1\n\n(3f5fd8dd4 2024-08-06)\n\n## HashMap\n\n### Methods\n\n * capacity \n * clear \n * contains_key \n * drain \n * entry \n * extract_if \n * get \n * get_key_value \n * get_many_mut \n * get_many_unchecked_mut \n * get_mut \n * hasher \n * insert \n * into_keys \n * into_values \n * is_empty \n * iter \n * iter_mut \n * keys \n * len \n * new \n * raw_entry \n * raw_entry_mut \n * remove \n * remove_entry \n * reserve \n * retain \n * shrink_to \n * shrink_to_fit \n * try_insert \n * try_reserve \n * values \n * values_mut \n * with_capacity \n * with_capacity_and_hasher \n * with_hasher \n\n### Trait Implementations\n\n * Clone \n * Debug \n * Default \n * Eq \n * Extend<(&'a K, &'a V)>\n * Extend<(K, V)>\n * From<[(K, V); N]>\n * FromIterator<(K, V)>\n * Index<&Q>\n * IntoIterator \n * IntoIterator \n * IntoIterator \n * PartialEq \n * UnwindSafe \n\n### Auto Trait Implementations\n\n * Freeze \n * RefUnwindSafe \n * Send \n * Sync \n * Unpin \n\n### Blanket Implementations\n\n * Any \n * Borrow<T>\n * BorrowMut<T>\n * From<T>\n * Into<U>\n * ToOwned \n * 
TryFrom<U>\n * TryInto<U>\n\n## [ In std::collections ](index.html)\n\n# Struct [ std ](..\/index.html) :: [ collections ](index.html) :: HashMap\nCopy item path\n\n1.0.0 \u00b7 [ source ](..\/..\/src\/std\/collections\/hash\/map.rs.html#213-215) \u00b7 [\n\u2212 ]\n\n \n \n pub struct HashMap<K, V, S = [RandomState](hash_map\/struct.RandomState.html \"struct std::collections::hash_map::RandomState\")> { \/* private fields *\/ }\n\nExpand description\n\nA [ hash map ](index.html#use-a-hashmap-when \"mod std::collections\")\nimplemented with quadratic probing and SIMD lookup.\n\nBy default, ` HashMap ` uses a hashing algorithm selected to provide\nresistance against HashDoS attacks. The algorithm is randomly seeded, and a\nreasonable best-effort is made to generate this seed from a high quality,\nsecure source of randomness provided by the host without blocking the program.\nBecause of this, the randomness of the seed depends on the output quality of\nthe system\u2019s random number coroutine when the seed is created. In\nparticular, seeds generated when the system\u2019s entropy pool is abnormally low\nsuch as during system boot may be of a lower quality.\n\nThe default hashing algorithm is currently SipHash 1-3, though this is subject\nto change at any point in the future.
While its performance is very\ncompetitive for medium sized keys, other hashing algorithms will outperform it\nfor small keys such as integers as well as large keys such as long strings,\nthough those algorithms will typically _not_ protect against attacks such as\nHashDoS.\n\nThe hashing algorithm can be replaced on a per- ` HashMap ` basis using the [\n` default ` ](..\/default\/trait.Default.html#tymethod.default \"associated\nfunction std::default::Default::default\") , [ ` with_hasher `\n](hash_map\/struct.HashMap.html#method.with_hasher \"associated function\nstd::collections::hash_map::HashMap::with_hasher\") , and [ `\nwith_capacity_and_hasher `\n](hash_map\/struct.HashMap.html#method.with_capacity_and_hasher \"associated\nfunction std::collections::hash_map::HashMap::with_capacity_and_hasher\")\nmethods. There are many alternative [ hashing algorithms available on\ncrates.io ](https:\/\/crates.io\/keywords\/hasher) .\n\nIt is required that the keys implement the [ ` Eq ` ](..\/cmp\/trait.Eq.html\n\"trait std::cmp::Eq\") and [ ` Hash ` ](..\/hash\/trait.Hash.html \"trait\nstd::hash::Hash\") traits, although this can frequently be achieved by using `\n#[derive(PartialEq, Eq, Hash)] ` . If you implement these yourself, it is\nimportant that the following property holds:\n\n \n \n k1 == k2 -> hash(k1) == hash(k2)\n \n\nIn other words, if two keys are equal, their hashes must be equal. Violating\nthis property is a logic error.\n\nIt is also a logic error for a key to be modified in such a way that the\nkey\u2019s hash, as determined by the [ ` Hash ` ](..\/hash\/trait.Hash.html \"trait\nstd::hash::Hash\") trait, or its equality, as determined by the [ ` Eq `\n](..\/cmp\/trait.Eq.html \"trait std::cmp::Eq\") trait, changes while it is in the\nmap.
This is normally only possible through [ ` Cell `\n](..\/cell\/struct.Cell.html \"struct std::cell::Cell\") , [ ` RefCell `\n](..\/cell\/struct.RefCell.html \"struct std::cell::RefCell\") , global state,\nI\/O, or unsafe code.\n\nThe behavior resulting from either logic error is not specified, but will be\nencapsulated to the ` HashMap ` that observed the logic error and not result\nin undefined behavior. This could include panics, incorrect results, aborts,\nmemory leaks, and non-termination.\n\nThe hash table implementation is a Rust port of Google\u2019s [ SwissTable\n](https:\/\/abseil.io\/blog\/20180927-swisstables) . The original C++ version of\nSwissTable can be found [ here ](https:\/\/github.com\/abseil\/abseil-\ncpp\/blob\/master\/absl\/container\/internal\/raw_hash_set.h) , and this [ CppCon\ntalk ](https:\/\/www.youtube.com\/watch?v=ncHmEUmJZf4) gives an overview of how\nthe algorithm works.\n\n## \u00a7 Examples\n\n \n \n use std::collections::HashMap;\n \n \/\/ Type inference lets us omit an explicit type signature (which\n \/\/ would be `HashMap<String, String>` in this example).\n let mut book_reviews = HashMap::new();\n \n \/\/ Review some books.\n book_reviews.insert(\n \"Adventures of Huckleberry Finn\".to_string(),\n \"My favorite book.\".to_string(),\n );\n book_reviews.insert(\n \"Grimms' Fairy Tales\".to_string(),\n \"Masterpiece.\".to_string(),\n );\n book_reviews.insert(\n \"Pride and Prejudice\".to_string(),\n \"Very enjoyable.\".to_string(),\n );\n book_reviews.insert(\n \"The Adventures of Sherlock Holmes\".to_string(),\n \"Eye lyked it alot.\".to_string(),\n );\n \n \/\/ Check for a specific one.\n \/\/ When collections store owned values (String), they can still be\n \/\/ queried using references (&str).\n if !book_reviews.contains_key(\"Les Mis\u00e9rables\") {\n println!(\"We've got {} reviews, but Les Mis\u00e9rables ain't one.\",\n book_reviews.len());\n }\n \n \/\/ oops, this review has a lot of
spelling mistakes, let's delete it.\n book_reviews.remove(\"The Adventures of Sherlock Holmes\");\n \n \/\/ Look up the values associated with some keys.\n let to_find = [\"Pride and Prejudice\", \"Alice's Adventure in Wonderland\"];\n for &book in &to_find {\n match book_reviews.get(book) {\n Some(review) => println!(\"{book}: {review}\"),\n None => println!(\"{book} is unreviewed.\")\n }\n }\n \n \/\/ Look up the value for a key (will panic if the key is not found).\n println!(\"Review for Jane: {}\", book_reviews[\"Pride and Prejudice\"]);\n \n \/\/ Iterate over everything.\n for (book, review) in &book_reviews {\n println!(\"{book}: \\\"{review}\\\"\");\n }\n\n[ Run ](https:\/\/play.rust-\nlang.org\/?code=%23!%5Ballow\\(unused\\)%5D%0Afn+main\\(\\)+%7B%0A++++use+std::collections::HashMap;%0A++++%0A++++\/\/+Type+inference+lets+us+omit+an+explicit+type+signature+\\(which%0A++++\/\/+would+be+%60HashMap%3CString,+String%3E%60+in+this+example\\).%0A++++let+mut+book_reviews+=+HashMap::new\\(\\);%0A++++%0A++++\/\/+Review+some+books.%0A++++book_reviews.insert\\(%0A++++++++%22Adventures+of+Huckleberry+Finn%22.to_string\\(\\),%0A++++++++%22My+favorite+book.%22.to_string\\(\\),%0A++++\\);%0A++++book_reviews.insert\\(%0A++++++++%22Grimms'+Fairy+Tales%22.to_string\\(\\),%0A++++++++%22Masterpiece.%22.to_string\\(\\),%0A++++\\);%0A++++book_reviews.insert\\(%0A++++++++%22Pride+and+Prejudice%22.to_string\\(\\),%0A++++++++%22Very+enjoyable.%22.to_string\\(\\),%0A++++\\);%0A++++book_reviews.insert\\(%0A++++++++%22The+Adventures+of+Sherlock+Holmes%22.to_string\\(\\),%0A++++++++%22Eye+lyked+it+alot.%22.to_string\\(\\),%0A++++\\);%0A++++%0A++++\/\/+Check+for+a+specific+one.%0A++++\/\/+When+collections+store+owned+values+\\(String\\),+they+can+still+be%0A++++\/\/+queried+using+references+\\(%26str\\).%0A++++if+!book_reviews.contains_key\\(%22Les+Mis%C3%A9rables%22\\)+%7B%0A++++++++println!\\(%22We've+got+%7B%7D+reviews,+but+Les+Mis%C3%A9rables+ain't+one.%22,%0A+++++++++++++++++book_re
views.len\\(\\)\\);%0A++++%7D%0A++++%0A++++\/\/+oops,+this+review+has+a+lot+of+spelling+mistakes,+let's+delete+it.%0A++++book_reviews.remove\\(%22The+Adventures+of+Sherlock+Holmes%22\\);%0A++++%0A++++\/\/+Look+up+the+values+associated+with+some+keys.%0A++++let+to_find+=+%5B%22Pride+and+Prejudice%22,+%22Alice's+Adventure+in+Wonderland%22%5D;%0A++++for+%26book+in+%26to_find+%7B%0A++++++++match+book_reviews.get\\(book\\)+%7B%0A++++++++++++Some\\(review\\)+=%3E+println!\\(%22%7Bbook%7D:+%7Breview%7D%22\\),%0A++++++++++++None+=%3E+println!\\(%22%7Bbook%7D+is+unreviewed.%22\\)%0A++++++++%7D%0A++++%7D%0A++++%0A++++\/\/+Look+up+the+value+for+a+key+\\(will+panic+if+the+key+is+not+found\\).%0A++++println!\\(%22Review+for+Jane:+%7B%7D%22,+book_reviews%5B%22Pride+and+Prejudice%22%5D\\);%0A++++%0A++++\/\/+Iterate+over+everything.%0A++++for+\\(book,+review\\)+in+%26book_reviews+%7B%0A++++++++println!\\(%22%7Bbook%7D:+%5C%22%7Breview%7D%5C%22%22\\);%0A++++%7D%0A%7D&edition=2021)\n\nA ` HashMap ` with a known list of items can be initialized from an array:\n\n \n \n use std::collections::HashMap;\n \n let solar_distance = HashMap::from([\n (\"Mercury\", 0.4),\n (\"Venus\", 0.7),\n (\"Earth\", 1.0),\n (\"Mars\", 1.5),\n ]);\n\n[ Run ](https:\/\/play.rust-\nlang.org\/?code=%23!%5Ballow\\(unused\\)%5D%0Afn+main\\(\\)+%7B%0A++++use+std::collections::HashMap;%0A++++%0A++++let+solar_distance+=+HashMap::from\\(%5B%0A++++++++\\(%22Mercury%22,+0.4\\),%0A++++++++\\(%22Venus%22,+0.7\\),%0A++++++++\\(%22Earth%22,+1.0\\),%0A++++++++\\(%22Mars%22,+1.5\\),%0A++++%5D\\);%0A%7D&edition=2021)\n\n` HashMap ` implements an ` Entry ` API , which allows for complex methods\nof getting, setting, updating and removing keys and their values:\n\n \n \n use std::collections::HashMap;\n \n \/\/ type inference lets us omit an explicit type signature (which\n \/\/ would be `HashMap<&str, u8>` in this example).\n let mut player_stats = HashMap::new();\n \n fn random_stat_buff() -> u8 {\n \/\/ could actually 
return some random value here - let's just return\n \/\/ some fixed value for now\n 42\n }\n \n \/\/ insert a key only if it doesn't already exist\n player_stats.entry(\"health\").or_insert(100);\n \n \/\/ insert a key using a function that provides a new value only if it\n \/\/ doesn't already exist\n player_stats.entry(\"defence\").or_insert_with(random_stat_buff);\n \n \/\/ update a key, guarding against the key possibly not being set\n let stat = player_stats.entry(\"attack\").or_insert(100);\n *stat += random_stat_buff();\n \n \/\/ modify an entry before an insert with in-place mutation\n player_stats.entry(\"mana\").and_modify(|mana| *mana += 200).or_insert(100);\n\n[ Run ](https:\/\/play.rust-\nlang.org\/?code=%23!%5Ballow\\(unused\\)%5D%0Afn+main\\(\\)+%7B%0A++++use+std::collections::HashMap;%0A++++%0A++++\/\/+type+inference+lets+us+omit+an+explicit+type+signature+\\(which%0A++++\/\/+would+be+%60HashMap%3C%26str,+u8%3E%60+in+this+example\\).%0A++++let+mut+player_stats+=+HashMap::new\\(\\);%0A++++%0A++++fn+random_stat_buff\\(\\)+-%3E+u8+%7B%0A++++++++\/\/+could+actually+return+some+random+value+here+-+let's+just+return%0A++++++++\/\/+some+fixed+value+for+now%0A++++++++42%0A++++%7D%0A++++%0A++++\/\/+insert+a+key+only+if+it+doesn't+already+exist%0A++++player_stats.entry\\(%22health%22\\).or_insert\\(100\\);%0A++++%0A++++\/\/+insert+a+key+using+a+function+that+provides+a+new+value+only+if+it%0A++++\/\/+doesn't+already+exist%0A++++player_stats.entry\\(%22defence%22\\).or_insert_with\\(random_stat_buff\\);%0A++++%0A++++\/\/+update+a+key,+guarding+against+the+key+possibly+not+being+set%0A++++let+stat+=+player_stats.entry\\(%22attack%22\\).or_insert\\(100\\);%0A++++*stat+%2B=+random_stat_buff\\(\\);%0A++++%0A++++\/\/+modify+an+entry+before+an+insert+with+in-\nplace+mutation%0A++++player_stats.entry\\(%22mana%22\\).and_modify\\(%7Cmana%7C+*mana+%2B=+200\\).or_insert\\(100\\);%0A%7D&edition=2021)\n\nThe easiest way to use ` HashMap ` with a custom key type is to 
derive [ ` Eq\n` ](..\/cmp\/trait.Eq.html \"trait std::cmp::Eq\") and [ ` Hash `\n](..\/hash\/trait.Hash.html \"trait std::hash::Hash\") . We must also derive [ `\nPartialEq ` ](..\/cmp\/trait.PartialEq.html \"trait std::cmp::PartialEq\") .\n\n \n \n use std::collections::HashMap;\n \n #[derive(Hash, Eq, PartialEq, Debug)]\n struct Viking {\n name: String,\n country: String,\n }\n \n impl Viking {\n \/\/\/ Creates a new Viking.\n fn new(name: &str, country: &str) -> Viking {\n Viking { name: name.to_string(), country: country.to_string() }\n }\n }\n \n \/\/ Use a HashMap to store the vikings' health points.\n let vikings = HashMap::from([\n (Viking::new(\"Einar\", \"Norway\"), 25),\n (Viking::new(\"Olaf\", \"Denmark\"), 24),\n (Viking::new(\"Harald\", \"Iceland\"), 12),\n ]);\n \n \/\/ Use derived implementation to print the status of the vikings.\n for (viking, health) in &vikings {\n println!(\"{viking:?} has {health} hp\");\n }\n\n[ Run ](https:\/\/play.rust-\nlang.org\/?code=%23!%5Ballow\\(unused\\)%5D%0Afn+main\\(\\)+%7B%0A++++use+std::collections::HashMap;%0A++++%0A++++%23%5Bderive\\(Hash,+Eq,+PartialEq,+Debug\\)%5D%0A++++struct+Viking+%7B%0A++++++++name:+String,%0A++++++++country:+String,%0A++++%7D%0A++++%0A++++impl+Viking+%7B%0A++++++++\/\/\/+Creates+a+new+Viking.%0A++++++++fn+new\\(name:+%26str,+country:+%26str\\)+-%3E+Viking+%7B%0A++++++++++++Viking+%7B+name:+name.to_string\\(\\),+country:+country.to_string\\(\\)+%7D%0A++++++++%7D%0A++++%7D%0A++++%0A++++\/\/+Use+a+HashMap+to+store+the+vikings'+health+points.%0A++++let+vikings+=+HashMap::from\\(%5B%0A++++++++\\(Viking::new\\(%22Einar%22,+%22Norway%22\\),+25\\),%0A++++++++\\(Viking::new\\(%22Olaf%22,+%22Denmark%22\\),+24\\),%0A++++++++\\(Viking::new\\(%22Harald%22,+%22Iceland%22\\),+12\\),%0A++++%5D\\);%0A++++%0A++++\/\/+Use+derived+implementation+to+print+the+status+of+the+vikings.%0A++++for+\\(viking,+health\\)+in+%26vikings+%7B%0A++++++++println!\\(%22%7Bviking:?%7D+has+%7Bhealth%7D+hp%22\\);%0A++++%7D%0A
%7D&edition=2021)\n\n## Implementations \u00c2\u00a7\n\n[ source ](..\/..\/src\/std\/collections\/hash\/map.rs.html#217-254) \u00c2\u00a7\n\n### impl<K, V> [ HashMap ](hash_map\/struct.HashMap.html \"struct\nstd::collections::hash_map::HashMap\") <K, V, [ RandomState\n](hash_map\/struct.RandomState.html \"struct\nstd::collections::hash_map::RandomState\") >\n\n1.0.0 \u00c2\u00b7 [ source ](..\/..\/src\/std\/collections\/hash\/map.rs.html#232-234)\n\n#### pub fn new () -> [ HashMap ](hash_map\/struct.HashMap.html \"struct\nstd::collections::hash_map::HashMap\") <K, V, [ RandomState\n](hash_map\/struct.RandomState.html \"struct\nstd::collections::hash_map::RandomState\") >\n\nCreates an empty ` HashMap ` .\n\nThe hash map is initially created with a capacity of 0, so it will not\nallocate until it is first inserted into.\n\n##### \u00c2\u00a7 Examples\n\n \n \n use std::collections::HashMap;\n let mut map: HashMap<&str, i32> = HashMap::new();\n\n[ Run ](https:\/\/play.rust-\nlang.org\/?code=%23!%5Ballow\\(unused\\)%5D%0Afn+main\\(\\)+%7B%0A++++use+std::collections::HashMap;%0A++++let+mut+map:+HashMap%3C%26str,+i32%3E+=+HashMap::new\\(\\);%0A%7D&edition=2021)\n\n1.0.0 \u00c2\u00b7 [ source ](..\/..\/src\/std\/collections\/hash\/map.rs.html#251-253)\n\n#### pub fn with_capacity (capacity: [ usize ](..\/primitive.usize.html) )\n-> [ HashMap ](hash_map\/struct.HashMap.html \"struct\nstd::collections::hash_map::HashMap\") <K, V, [ RandomState\n](hash_map\/struct.RandomState.html \"struct\nstd::collections::hash_map::RandomState\") >\n\nCreates an empty ` HashMap ` with at least the specified capacity.\n\nThe hash map will be able to hold at least ` capacity ` elements without\nreallocating. This method is allowed to allocate for more elements than `\ncapacity ` . 
If `capacity` is 0, the hash map will not allocate.

##### Examples

    use std::collections::HashMap;
    let mut map: HashMap<&str, i32> = HashMap::with_capacity(10);

[source](../../src/std/collections/hash/map.rs.html#256-731)

### impl<K, V, S> [HashMap](hash_map/struct.HashMap.html)<K, V, S>

1.7.0 (const: [unstable](https://github.com/rust-lang/rust/issues/102575)) · [source](../../src/std/collections/hash/map.rs.html#283-285)

#### pub fn with_hasher(hash_builder: S) -> [HashMap](hash_map/struct.HashMap.html)<K, V, S>

Creates an empty `HashMap` which will use the given hash builder to hash keys.

The created map has the default initial capacity.

Warning: `hash_builder` is normally randomly generated, and is designed to allow HashMaps to be resistant to attacks that cause many collisions and very poor performance. Setting it manually using this function can expose a DoS attack vector.

The `hash_builder` passed should implement the [`BuildHasher`](../hash/trait.BuildHasher.html) trait for the HashMap to be useful, see its documentation for details.

##### Examples

    use std::collections::HashMap;
    use std::hash::RandomState;

    let s = RandomState::new();
    let mut map = HashMap::with_hasher(s);
    map.insert(1, 2);

1.7.0 · [source](../../src/std/collections/hash/map.rs.html#314-316)

#### pub fn with_capacity_and_hasher(capacity: [usize](../primitive.usize.html), hasher: S) -> [HashMap](hash_map/struct.HashMap.html)<K, V, S>

Creates an empty `HashMap` with at least the specified capacity, using `hasher` to hash the keys.

The hash map will be able to hold at least `capacity` elements without reallocating. This method is allowed to allocate for more elements than `capacity`. If `capacity` is 0, the hash map will not allocate.

Warning: `hasher` is normally randomly generated, and is designed to allow HashMaps to be resistant to attacks that cause many collisions and very poor performance. Setting it manually using this function can expose a DoS attack vector.

The `hasher` passed should implement the [`BuildHasher`](../hash/trait.BuildHasher.html) trait for the HashMap to be useful, see its documentation for details.

##### Examples

    use std::collections::HashMap;
    use std::hash::RandomState;

    let s = RandomState::new();
    let mut map = HashMap::with_capacity_and_hasher(10, s);
    map.insert(1, 2);

1.0.0 · [source](../../src/std/collections/hash/map.rs.html#332-334)

#### pub fn capacity(&self) -> [usize](../primitive.usize.html)

Returns the number of elements the map can hold without reallocating.

This number is a lower bound; the `HashMap<K, V>` might be able to hold more, but is guaranteed to be able to hold at least this many.

##### Examples

    use std::collections::HashMap;
    let map: HashMap<i32, i32> = HashMap::with_capacity(100);
    assert!(map.capacity() >= 100);

1.0.0 · [source](../../src/std/collections/hash/map.rs.html#361-363)

#### pub fn keys(&self) -> [Keys](hash_map/struct.Keys.html)<'_, K, V>

An iterator visiting all keys in arbitrary order.
The iterator element type is `&'a K`.

##### Examples

    use std::collections::HashMap;

    let map = HashMap::from([
        ("a", 1),
        ("b", 2),
        ("c", 3),
    ]);

    for key in map.keys() {
        println!("{key}");
    }

##### Performance

In the current implementation, iterating over keys takes O(capacity) time instead of O(len) because it internally visits empty buckets too.

1.54.0 · [source](../../src/std/collections/hash/map.rs.html#394-396)

#### pub fn into_keys(self) -> [IntoKeys](hash_map/struct.IntoKeys.html)<K, V>

Creates a consuming iterator visiting all the keys in arbitrary order. The map cannot be used after calling this. The iterator element type is `K`.

##### Examples

    use std::collections::HashMap;

    let map = HashMap::from([
        ("a", 1),
        ("b", 2),
        ("c", 3),
    ]);

    let mut vec: Vec<&str> = map.into_keys().collect();
    // The `IntoKeys` iterator produces keys in arbitrary order, so the
    // keys must be sorted to test them against a sorted array.
    vec.sort_unstable();
    assert_eq!(vec, ["a", "b", "c"]);

##### Performance

In the current implementation, iterating over keys takes O(capacity) time instead of O(len) because it internally visits empty buckets too.

1.0.0 · [source](../../src/std/collections/hash/map.rs.html#423-425)

#### pub fn values(&self) -> [Values](hash_map/struct.Values.html)<'_, K, V>

An iterator visiting all values in arbitrary order. The iterator element type is `&'a V`.

##### Examples

    use std::collections::HashMap;

    let map = HashMap::from([
        ("a", 1),
        ("b", 2),
        ("c", 3),
    ]);

    for val in map.values() {
        println!("{val}");
    }

##### Performance

In the current implementation, iterating over values takes O(capacity) time instead of O(len) because it internally visits empty buckets too.

1.10.0 · [source](../../src/std/collections/hash/map.rs.html#456-458)

#### pub fn values_mut(&mut self) -> [ValuesMut](hash_map/struct.ValuesMut.html)<'_, K, V>

An iterator visiting all values mutably in arbitrary order. The iterator element type is `&'a mut V`.

##### Examples

    use std::collections::HashMap;

    let mut map = HashMap::from([
        ("a", 1),
        ("b", 2),
        ("c", 3),
    ]);

    for val in map.values_mut() {
        *val = *val + 10;
    }

    for val in map.values() {
        println!("{val}");
    }

##### Performance

In the current implementation, iterating over values takes O(capacity) time instead of O(len) because it internally visits empty buckets too.

1.54.0 · [source](../../src/std/collections/hash/map.rs.html#489-491)

#### pub fn into_values(self) -> [IntoValues](hash_map/struct.IntoValues.html)<K, V>

Creates a consuming iterator visiting all the values in arbitrary order. The map cannot be used after calling this. The iterator element type is `V`.

##### Examples

    use std::collections::HashMap;

    let map = HashMap::from([
        ("a", 1),
        ("b", 2),
        ("c", 3),
    ]);

    let mut vec: Vec<i32> = map.into_values().collect();
    // The `IntoValues` iterator produces values in arbitrary order, so
    // the values must be sorted to test them against a sorted array.
    vec.sort_unstable();
    assert_eq!(vec, [1, 2, 3]);

##### Performance

In the current implementation, iterating over values takes O(capacity) time instead of O(len) because it internally visits empty buckets too.

1.0.0 · [source](../../src/std/collections/hash/map.rs.html#518-520)

#### pub fn iter(&self) -> [Iter](hash_map/struct.Iter.html)<'_, K, V>

An iterator visiting all key-value pairs in arbitrary order. The iterator element type is `(&'a K, &'a V)`.

##### Examples

    use std::collections::HashMap;

    let map = HashMap::from([
        ("a", 1),
        ("b", 2),
        ("c", 3),
    ]);

    for (key, val) in map.iter() {
        println!("key: {key} val: {val}");
    }

##### Performance

In the current implementation, iterating over map takes O(capacity) time instead of O(len) because it internally visits empty buckets too.

1.0.0 · [source](../../src/std/collections/hash/map.rs.html#553-555)

#### pub fn iter_mut(&mut self) -> [IterMut](hash_map/struct.IterMut.html)<'_, K, V>

An iterator visiting all key-value pairs in arbitrary order, with mutable references to the values.
The iterator element type is `(&'a K, &'a mut V)`.

##### Examples

    use std::collections::HashMap;

    let mut map = HashMap::from([
        ("a", 1),
        ("b", 2),
        ("c", 3),
    ]);

    // Update all values
    for (_, val) in map.iter_mut() {
        *val *= 2;
    }

    for (key, val) in &map {
        println!("key: {key} val: {val}");
    }

##### Performance

In the current implementation, iterating over map takes O(capacity) time instead of O(len) because it internally visits empty buckets too.

1.0.0 · [source](../../src/std/collections/hash/map.rs.html#570-572)

#### pub fn len(&self) -> [usize](../primitive.usize.html)

Returns the number of elements in the map.

##### Examples

    use std::collections::HashMap;

    let mut a = HashMap::new();
    assert_eq!(a.len(), 0);
    a.insert(1, "a");
    assert_eq!(a.len(), 1);

1.0.0 · [source](../../src/std/collections/hash/map.rs.html#588-590)

#### pub fn is_empty(&self) -> [bool](../primitive.bool.html)

Returns `true` if the map contains no elements.

##### Examples

    use std::collections::HashMap;

    let mut a = HashMap::new();
    assert!(a.is_empty());
    a.insert(1, "a");
    assert!(!a.is_empty());

1.6.0 · [source](../../src/std/collections/hash/map.rs.html#618-620)

#### pub fn drain(&mut self) -> [Drain](hash_map/struct.Drain.html)<'_, K, V>

Clears the map, returning all key-value pairs as an iterator. Keeps the allocated memory for reuse.

If the returned iterator is dropped before being fully consumed, it drops the remaining key-value pairs. The returned iterator keeps a mutable borrow on the map to optimize its implementation.

##### Examples

    use std::collections::HashMap;

    let mut a = HashMap::new();
    a.insert(1, "a");
    a.insert(2, "b");

    for (k, v) in a.drain().take(1) {
        assert!(k == 1 || k == 2);
        assert!(v == "a" || v == "b");
    }

    assert!(a.is_empty());

[source](../../src/std/collections/hash/map.rs.html#659-664)

#### pub fn extract_if<F>(&mut self, pred: F) -> [ExtractIf](hash_map/struct.ExtractIf.html)<'_, K, V, F>

where F: [FnMut](../ops/trait.FnMut.html)([&K](../primitive.reference.html), [&mut V](../primitive.reference.html)) -> [bool](../primitive.bool.html),

🔬 This is a nightly-only experimental API. (`hash_extract_if` [#59618](https://github.com/rust-lang/rust/issues/59618))

Creates an iterator which uses a closure to determine if an element should be removed.

If the closure returns true, the element is removed from the map and yielded. If the closure returns false, or panics, the element remains in the map and will not be yielded.

Note that `extract_if` lets you mutate every value in the filter closure, regardless of whether you choose to keep or remove it.

If the returned `ExtractIf` is not exhausted, e.g. because it is dropped without iterating or the iteration short-circuits, then the remaining elements will be retained. Use [`retain`](hash_map/struct.HashMap.html#method.retain) with a negated predicate if you do not need the returned iterator.

##### Examples

Splitting a map into even and odd keys, reusing the original map:

    #![feature(hash_extract_if)]
    use std::collections::HashMap;

    let mut map: HashMap<i32, i32> = (0..8).map(|x| (x, x)).collect();
    let extracted: HashMap<i32, i32> = map.extract_if(|k, _v| k % 2 == 0).collect();

    let mut evens = extracted.keys().copied().collect::<Vec<_>>();
    let mut odds = map.keys().copied().collect::<Vec<_>>();
    evens.sort();
    odds.sort();

    assert_eq!(evens, vec![0, 2, 4, 6]);
    assert_eq!(odds, vec![1, 3, 5, 7]);

1.18.0 · [source](../../src/std/collections/hash/map.rs.html#688-693)

#### pub fn retain<F>(&mut self, f: F)

where F: [FnMut](../ops/trait.FnMut.html)([&K](../primitive.reference.html), [&mut V](../primitive.reference.html)) -> [bool](../primitive.bool.html),

Retains only the elements specified by the predicate.

In other words, remove all pairs `(k, v)` for which `f(&k, &mut v)` returns `false`.
The elements are visited in unsorted (and unspecified)\norder.\n\n##### \u00c2\u00a7 Examples\n\n \n \n use std::collections::HashMap;\n \n let mut map: HashMap<i32, i32> = (0..8).map(|x| (x, x*10)).collect();\n map.retain(|&k, _| k % 2 == 0);\n assert_eq!(map.len(), 4);\n\n[ Run ](https:\/\/play.rust-\nlang.org\/?code=%23!%5Ballow\\(unused\\)%5D%0Afn+main\\(\\)+%7B%0A++++use+std::collections::HashMap;%0A++++%0A++++let+mut+map:+HashMap%3Ci32,+i32%3E+=+\\(0..8\\).map\\(%7Cx%7C+\\(x,+x*10\\)\\).collect\\(\\);%0A++++map.retain\\(%7C%26k,+_%7C+k+%25+2+==+0\\);%0A++++assert_eq!\\(map.len\\(\\),+4\\);%0A%7D&edition=2021)\n\n##### \u00c2\u00a7 Performance\n\nIn the current implementation, this operation takes O(capacity) time instead\nof O(len) because it internally visits empty buckets too.\n\n1.0.0 \u00c2\u00b7 [ source ](..\/..\/src\/std\/collections\/hash\/map.rs.html#710-712)\n\n#### pub fn clear (&mut self)\n\nClears the map, removing all key-value pairs. Keeps the allocated memory for\nreuse.\n\n##### \u00c2\u00a7 Examples\n\n \n \n use std::collections::HashMap;\n \n let mut a = HashMap::new();\n a.insert(1, \"a\");\n a.clear();\n assert!(a.is_empty());\n\n[ Run ](https:\/\/play.rust-\nlang.org\/?code=%23!%5Ballow\\(unused\\)%5D%0Afn+main\\(\\)+%7B%0A++++use+std::collections::HashMap;%0A++++%0A++++let+mut+a+=+HashMap::new\\(\\);%0A++++a.insert\\(1,+%22a%22\\);%0A++++a.clear\\(\\);%0A++++assert!\\(a.is_empty\\(\\)\\);%0A%7D&edition=2021)\n\n1.9.0 \u00c2\u00b7 [ source ](..\/..\/src\/std\/collections\/hash\/map.rs.html#728-730)\n\n#### pub fn hasher (&self) -> [ &S ](..\/primitive.reference.html)\n\nReturns a reference to the map\u00e2\u0080\u0099s [ ` BuildHasher `\n](..\/hash\/trait.BuildHasher.html \"trait std::hash::BuildHasher\") .\n\n##### \u00c2\u00a7 Examples\n\n \n \n use std::collections::HashMap;\n use std::hash::RandomState;\n \n let hasher = RandomState::new();\n let map: HashMap<i32, i32> = HashMap::with_hasher(hasher);\n let hasher: &RandomState = 
map.hasher();\n\n[ Run ](https:\/\/play.rust-\nlang.org\/?code=%23!%5Ballow\\(unused\\)%5D%0Afn+main\\(\\)+%7B%0A++++use+std::collections::HashMap;%0A++++use+std::hash::RandomState;%0A++++%0A++++let+hasher+=+RandomState::new\\(\\);%0A++++let+map:+HashMap%3Ci32,+i32%3E+=+HashMap::with_hasher\\(hasher\\);%0A++++let+hasher:+%26RandomState+=+map.hasher\\(\\);%0A%7D&edition=2021)\n\n[ source ](..\/..\/src\/std\/collections\/hash\/map.rs.html#733-1196) \u00c2\u00a7\n\n### impl<K, V, S> [ HashMap ](hash_map\/struct.HashMap.html \"struct\nstd::collections::hash_map::HashMap\") <K, V, S>\n\nwhere K: [ Eq ](..\/cmp\/trait.Eq.html \"trait std::cmp::Eq\") \\+ [ Hash\n](..\/hash\/trait.Hash.html \"trait std::hash::Hash\") , S: [ BuildHasher\n](..\/hash\/trait.BuildHasher.html \"trait std::hash::BuildHasher\") ,\n\n1.0.0 \u00c2\u00b7 [ source ](..\/..\/src\/std\/collections\/hash\/map.rs.html#757-759)\n\n#### pub fn reserve (&mut self, additional: [ usize\n](..\/primitive.usize.html) )\n\nReserves capacity for at least ` additional ` more elements to be inserted in\nthe ` HashMap ` . The collection may reserve more space to speculatively avoid\nfrequent reallocations. After calling ` reserve ` , capacity will be greater\nthan or equal to ` self.len() + additional ` . 
Does nothing if capacity is already sufficient.

##### § Panics

Panics if the new allocation size overflows [`usize`](../primitive.usize.html "primitive usize").

##### § Examples

    use std::collections::HashMap;
    let mut map: HashMap<&str, i32> = HashMap::new();
    map.reserve(10);

1.57.0 · [source](../../src/std/collections/hash/map.rs.html#783-785)

#### pub fn try_reserve(&mut self, additional: [usize](../primitive.usize.html)) -> [Result](../result/enum.Result.html "enum std::result::Result")<[()](../primitive.unit.html), [TryReserveError](struct.TryReserveError.html "struct std::collections::TryReserveError")>

Tries to reserve capacity for at least `additional` more elements to be inserted in the `HashMap`. The collection may reserve more space to speculatively avoid frequent reallocations. After calling `try_reserve`, capacity will be greater than or equal to `self.len() + additional` if it returns `Ok(())`.
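To make the error path concrete, here is a small sketch (an illustrative addition, not from the upstream docs): requesting an impossible capacity makes `try_reserve` report the failure instead of panicking or aborting:

```rust
use std::collections::HashMap;

fn main() {
    let mut map: HashMap<u8, u8> = HashMap::new();
    // A request this large overflows the allocation size, so the
    // reservation fails with an error instead of panicking.
    assert!(map.try_reserve(usize::MAX).is_err());
    // A reasonable request still succeeds.
    assert!(map.try_reserve(8).is_ok());
}
```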
Does nothing if capacity is already sufficient.

##### § Errors

If the capacity overflows, or the allocator reports a failure, then an error is returned.

##### § Examples

    use std::collections::HashMap;

    let mut map: HashMap<&str, isize> = HashMap::new();
    map.try_reserve(10).expect("why is the test harness OOMing on a handful of bytes?");

1.0.0 · [source](../../src/std/collections/hash/map.rs.html#805-807)

#### pub fn shrink_to_fit(&mut self)

Shrinks the capacity of the map as much as possible. It will drop down as much as possible while maintaining the internal rules and possibly leaving some space in accordance with the resize policy.

##### § Examples

    use std::collections::HashMap;

    let mut map: HashMap<i32, i32> = HashMap::with_capacity(100);
    map.insert(1, 2);
    map.insert(3, 4);
    assert!(map.capacity() >= 100);
    map.shrink_to_fit();
    assert!(map.capacity() >= 2);

1.56.0 · [source](../../src/std/collections/hash/map.rs.html#831-833)

#### pub fn shrink_to(&mut self, min_capacity: [usize](../primitive.usize.html))

Shrinks the capacity of the map with a lower limit.
It will drop down no lower than the supplied limit while maintaining the internal rules and possibly leaving some space in accordance with the resize policy.

If the current capacity is less than the lower limit, this is a no-op.

##### § Examples

    use std::collections::HashMap;

    let mut map: HashMap<i32, i32> = HashMap::with_capacity(100);
    map.insert(1, 2);
    map.insert(3, 4);
    assert!(map.capacity() >= 100);
    map.shrink_to(10);
    assert!(map.capacity() >= 10);
    map.shrink_to(0);
    assert!(map.capacity() >= 2);

1.0.0 · [source](../../src/std/collections/hash/map.rs.html#855-857)

#### pub fn entry(&mut self, key: K) -> [Entry](hash_map/enum.Entry.html "enum std::collections::hash_map::Entry")<'_, K, V>

Gets the given key's corresponding entry in the map for in-place manipulation.

##### § Examples

    use std::collections::HashMap;

    let mut letters = HashMap::new();

    for ch in "a short treatise on fungi".chars() {
        letters.entry(ch).and_modify(|counter| *counter += 1).or_insert(1);
    }

    assert_eq!(letters[&'s'], 2);
    assert_eq!(letters[&'t'], 3);
    assert_eq!(letters[&'u'], 1);
    assert_eq!(letters.get(&'y'), None);

1.0.0 · [source](../../src/std/collections/hash/map.rs.html#877-883)

#### pub fn get<Q>(&self, k: [&Q](../primitive.reference.html)) -> [Option](../option/enum.Option.html "enum std::option::Option")<[&V](../primitive.reference.html)> where K: [Borrow](../borrow/trait.Borrow.html "trait std::borrow::Borrow")<Q>, Q: [Hash](../hash/trait.Hash.html "trait std::hash::Hash") + [Eq](../cmp/trait.Eq.html "trait std::cmp::Eq") + ?
[Sized](../marker/trait.Sized.html "trait std::marker::Sized"),

Returns a reference to the value corresponding to the key.

The key may be any borrowed form of the map's key type, but [`Hash`](../hash/trait.Hash.html "trait std::hash::Hash") and [`Eq`](../cmp/trait.Eq.html "trait std::cmp::Eq") on the borrowed form _must_ match those for the key type.

##### § Examples

    use std::collections::HashMap;

    let mut map = HashMap::new();
    map.insert(1, "a");
    assert_eq!(map.get(&1), Some(&"a"));
    assert_eq!(map.get(&2), None);

1.40.0 · [source](../../src/std/collections/hash/map.rs.html#903-909)

#### pub fn get_key_value<Q>(&self, k: [&Q](../primitive.reference.html)) -> [Option](../option/enum.Option.html "enum std::option::Option")<([&K](../primitive.reference.html), [&V](../primitive.reference.html))> where K: [Borrow](../borrow/trait.Borrow.html "trait std::borrow::Borrow")<Q>, Q: [Hash](../hash/trait.Hash.html "trait std::hash::Hash") + [Eq](../cmp/trait.Eq.html "trait std::cmp::Eq") + ?
[Sized](../marker/trait.Sized.html "trait std::marker::Sized"),

Returns the key-value pair corresponding to the supplied key.

The supplied key may be any borrowed form of the map's key type, but [`Hash`](../hash/trait.Hash.html "trait std::hash::Hash") and [`Eq`](../cmp/trait.Eq.html "trait std::cmp::Eq") on the borrowed form _must_ match those for the key type.

##### § Examples

    use std::collections::HashMap;

    let mut map = HashMap::new();
    map.insert(1, "a");
    assert_eq!(map.get_key_value(&1), Some((&1, &"a")));
    assert_eq!(map.get_key_value(&2), None);

[source](../../src/std/collections/hash/map.rs.html#957-963)

#### pub fn get_many_mut<Q, const N: [usize](../primitive.usize.html)>(&mut self, ks: [[&Q](../primitive.reference.html); [N](../primitive.array.html)]) -> [Option](../option/enum.Option.html "enum std::option::Option")<[[&mut V](../primitive.reference.html); [N](../primitive.array.html)]> where K: [Borrow](../borrow/trait.Borrow.html "trait std::borrow::Borrow")<Q>, Q: [Hash](../hash/trait.Hash.html "trait std::hash::Hash") + [Eq](../cmp/trait.Eq.html "trait std::cmp::Eq") + ?[Sized](../marker/trait.Sized.html "trait std::marker::Sized"),

🔬 This is a nightly-only experimental API.
(`map_many_mut` [#97601](https://github.com/rust-lang/rust/issues/97601))

Attempts to get mutable references to `N` values in the map at once.

Returns an array of length `N` with the results of each query. For soundness, at most one mutable reference will be returned to any value. `None` will be returned if any of the keys are duplicates or missing.

##### § Examples

    #![feature(map_many_mut)]
    use std::collections::HashMap;

    let mut libraries = HashMap::new();
    libraries.insert("Bodleian Library".to_string(), 1602);
    libraries.insert("Athenæum".to_string(), 1807);
    libraries.insert("Herzogin-Anna-Amalia-Bibliothek".to_string(), 1691);
    libraries.insert("Library of Congress".to_string(), 1800);

    let got = libraries.get_many_mut([
        "Athenæum",
        "Library of Congress",
    ]);
    assert_eq!(
        got,
        Some([
            &mut 1807,
            &mut 1800,
        ]),
    );

    // Missing keys result in None
    let got = libraries.get_many_mut([
        "Athenæum",
        "New York Public Library",
    ]);
    assert_eq!(got, None);

    // Duplicate keys result in None
    let got = libraries.get_many_mut([
        "Athenæum",
        "Athenæum",
    ]);
    assert_eq!(got, None);

[source](../../src/std/collections/hash/map.rs.html#1013-1022)

#### pub unsafe fn get_many_unchecked_mut<Q, const N: [usize](../primitive.usize.html)>(&mut self, ks: [[&Q](../primitive.reference.html); [N](../primitive.array.html)]) -> [Option](../option/enum.Option.html "enum std::option::Option")<[[&mut V](../primitive.reference.html); [N](../primitive.array.html)]> where K: [Borrow](../borrow/trait.Borrow.html "trait std::borrow::Borrow")<Q>, Q: [Hash](../hash/trait.Hash.html "trait std::hash::Hash") + [Eq](../cmp/trait.Eq.html "trait std::cmp::Eq") + ?
[Sized](../marker/trait.Sized.html "trait std::marker::Sized"),

🔬 This is a nightly-only experimental API. (`map_many_mut` [#97601](https://github.com/rust-lang/rust/issues/97601))

Attempts to get mutable references to `N` values in the map at once, without validating that the values are unique.

Returns an array of length `N` with the results of each query. `None` will be returned if any of the keys are missing.

For a safe alternative see [`get_many_mut`](hash_map/struct.HashMap.html#method.get_many_mut "method std::collections::hash_map::HashMap::get_many_mut").

##### § Safety

Calling this method with overlapping keys is _[undefined behavior](https://doc.rust-lang.org/reference/behavior-considered-undefined.html)_ even if the resulting references are not used.

##### § Examples

    #![feature(map_many_mut)]
    use std::collections::HashMap;

    let mut libraries = HashMap::new();
    libraries.insert("Bodleian Library".to_string(), 1602);
    libraries.insert("Athenæum".to_string(), 1807);
    libraries.insert("Herzogin-Anna-Amalia-Bibliothek".to_string(), 1691);
    libraries.insert("Library of Congress".to_string(), 1800);

    let got = libraries.get_many_mut([
        "Athenæum",
        "Library of Congress",
    ]);
    assert_eq!(
        got,
        Some([
            &mut 1807,
            &mut 1800,
        ]),
    );

    // Missing keys result in None
    let got = libraries.get_many_mut([
        "Athenæum",
        "New York Public Library",
    ]);
    assert_eq!(got, None);

1.0.0 · [source](../../src/std/collections/hash/map.rs.html#1042-1048)

#### pub fn contains_key<Q>(&self, k: [&Q](../primitive.reference.html)) -> [bool](../primitive.bool.html) where K: [Borrow](../borrow/trait.Borrow.html "trait std::borrow::Borrow")<Q>, Q: [Hash](../hash/trait.Hash.html "trait std::hash::Hash") + [Eq](../cmp/trait.Eq.html "trait std::cmp::Eq") + ?
[Sized](../marker/trait.Sized.html "trait std::marker::Sized"),

Returns `true` if the map contains a value for the specified key.

The key may be any borrowed form of the map's key type, but [`Hash`](../hash/trait.Hash.html "trait std::hash::Hash") and [`Eq`](../cmp/trait.Eq.html "trait std::cmp::Eq") on the borrowed form _must_ match those for the key type.

##### § Examples

    use std::collections::HashMap;

    let mut map = HashMap::new();
    map.insert(1, "a");
    assert_eq!(map.contains_key(&1), true);
    assert_eq!(map.contains_key(&2), false);

1.0.0 · [source](../../src/std/collections/hash/map.rs.html#1070-1076)

#### pub fn get_mut<Q>(&mut self, k: [&Q](../primitive.reference.html)) -> [Option](../option/enum.Option.html "enum std::option::Option")<[&mut V](../primitive.reference.html)> where K: [Borrow](../borrow/trait.Borrow.html "trait std::borrow::Borrow")<Q>, Q: [Hash](../hash/trait.Hash.html "trait std::hash::Hash") + [Eq](../cmp/trait.Eq.html "trait std::cmp::Eq") + ?
[Sized](../marker/trait.Sized.html "trait std::marker::Sized"),

Returns a mutable reference to the value corresponding to the key.

The key may be any borrowed form of the map's key type, but [`Hash`](../hash/trait.Hash.html "trait std::hash::Hash") and [`Eq`](../cmp/trait.Eq.html "trait std::cmp::Eq") on the borrowed form _must_ match those for the key type.

##### § Examples

    use std::collections::HashMap;

    let mut map = HashMap::new();
    map.insert(1, "a");
    if let Some(x) = map.get_mut(&1) {
        *x = "b";
    }
    assert_eq!(map[&1], "b");

1.0.0 · [source](../../src/std/collections/hash/map.rs.html#1105-1107)

#### pub fn insert(&mut self, k: K, v: V) -> [Option](../option/enum.Option.html "enum std::option::Option")<V>

Inserts a key-value pair into the map.

If the map did not have this key present, [`None`](../option/enum.Option.html#variant.None "variant std::option::Option::None") is returned.

If the map did have this key present, the value is updated, and the old value is returned. The key is not updated, though; this matters for types that can be `==` without being identical.
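The "key is not updated" behavior can be observed with a key type whose `Eq` and `Hash` deliberately ignore part of its data. The `Key` type below is a hypothetical illustration added here, not from the upstream docs:

```rust
use std::collections::HashMap;
use std::hash::{Hash, Hasher};

// Hypothetical key type: equality and hashing ignore `label`,
// so two Key values can be == without being identical.
#[derive(Debug)]
struct Key {
    id: u32,
    label: &'static str,
}

impl PartialEq for Key {
    fn eq(&self, other: &Self) -> bool {
        self.id == other.id
    }
}
impl Eq for Key {}
impl Hash for Key {
    fn hash<H: Hasher>(&self, state: &mut H) {
        self.id.hash(state);
    }
}

fn main() {
    let mut map = HashMap::new();
    map.insert(Key { id: 1, label: "first" }, "a");
    // Same id, so the value is replaced, but the ORIGINAL key is kept.
    assert_eq!(map.insert(Key { id: 1, label: "second" }, "b"), Some("a"));
    let (k, v) = map.get_key_value(&Key { id: 1, label: "ignored" }).unwrap();
    assert_eq!(k.label, "first");
    assert_eq!(*v, "b");
}
```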
See the [module-level documentation](index.html#insert-and-complex-keys "mod std::collections") for more.

##### § Examples

    use std::collections::HashMap;

    let mut map = HashMap::new();
    assert_eq!(map.insert(37, "a"), None);
    assert_eq!(map.is_empty(), false);

    map.insert(37, "b");
    assert_eq!(map.insert(37, "c"), Some("b"));
    assert_eq!(map[&37], "c");

[source](../../src/std/collections/hash/map.rs.html#1133-1138)

#### pub fn try_insert(&mut self, key: K, value: V) -> [Result](../result/enum.Result.html "enum std::result::Result")<[&mut V](../primitive.reference.html), [OccupiedError](hash_map/struct.OccupiedError.html "struct std::collections::hash_map::OccupiedError")<'_, K, V>>

🔬 This is a nightly-only experimental API.
(`map_try_insert` [#82766](https://github.com/rust-lang/rust/issues/82766))

Tries to insert a key-value pair into the map, and returns a mutable reference to the value in the entry.

If the map already had this key present, nothing is updated, and an error containing the occupied entry and the value is returned.

##### § Examples

Basic usage:

    #![feature(map_try_insert)]

    use std::collections::HashMap;

    let mut map = HashMap::new();
    assert_eq!(map.try_insert(37, "a").unwrap(), &"a");

    let err = map.try_insert(37, "b").unwrap_err();
    assert_eq!(err.entry.key(), &37);
    assert_eq!(err.entry.get(), &"a");
    assert_eq!(err.value, "b");

1.0.0 · [source](../../src/std/collections/hash/map.rs.html#1160-1166)

#### pub fn remove<Q>(&mut self, k: [&Q](../primitive.reference.html)) -> [Option](../option/enum.Option.html "enum std::option::Option")<V> where K: [Borrow](../borrow/trait.Borrow.html "trait std::borrow::Borrow")<Q>, Q: [Hash](../hash/trait.Hash.html "trait std::hash::Hash") + [Eq](../cmp/trait.Eq.html "trait std::cmp::Eq") + ?
[Sized](../marker/trait.Sized.html "trait std::marker::Sized"),

Removes a key from the map, returning the value at the key if the key was previously in the map.

The key may be any borrowed form of the map's key type, but [`Hash`](../hash/trait.Hash.html "trait std::hash::Hash") and [`Eq`](../cmp/trait.Eq.html "trait std::cmp::Eq") on the borrowed form _must_ match those for the key type.

##### § Examples

    use std::collections::HashMap;

    let mut map = HashMap::new();
    map.insert(1, "a");
    assert_eq!(map.remove(&1), Some("a"));
    assert_eq!(map.remove(&1), None);

1.27.0 · [source](../../src/std/collections/hash/map.rs.html#1189-1195)

#### pub fn remove_entry<Q>(&mut self, k: [&Q](../primitive.reference.html)) -> [Option](../option/enum.Option.html "enum std::option::Option")<[(K, V)](../primitive.tuple.html)> where K: [Borrow](../borrow/trait.Borrow.html "trait std::borrow::Borrow")<Q>, Q: [Hash](../hash/trait.Hash.html "trait std::hash::Hash") + [Eq](../cmp/trait.Eq.html "trait std::cmp::Eq") + ?
[Sized](../marker/trait.Sized.html "trait std::marker::Sized"),

Removes a key from the map, returning the stored key and value if the key was previously in the map.

The key may be any borrowed form of the map's key type, but [`Hash`](../hash/trait.Hash.html "trait std::hash::Hash") and [`Eq`](../cmp/trait.Eq.html "trait std::cmp::Eq") on the borrowed form _must_ match those for the key type.

##### § Examples

    use std::collections::HashMap;

    let mut map = HashMap::new();
    map.insert(1, "a");
    assert_eq!(map.remove_entry(&1), Some((1, "a")));
    assert_eq!(map.remove(&1), None);

[source](../../src/std/collections/hash/map.rs.html#1198-1259) §

### impl<K, V, S> [HashMap](hash_map/struct.HashMap.html "struct std::collections::hash_map::HashMap")<K, V, S> where S: [BuildHasher](../hash/trait.BuildHasher.html "trait std::hash::BuildHasher"),

[source](../../src/std/collections/hash/map.rs.html#1235-1237)

#### pub fn raw_entry_mut(&mut self) -> [RawEntryBuilderMut](hash_map/struct.RawEntryBuilderMut.html "struct std::collections::hash_map::RawEntryBuilderMut")<'_, K, V, S>

🔬 This is a nightly-only experimental API. (`hash_raw_entry` [#56167](https://github.com/rust-lang/rust/issues/56167))

Creates a raw entry builder for the HashMap.

Raw entries provide the lowest level of control for searching and manipulating a map. They must be manually initialized with a hash and then manually searched.
After this, insertions into a vacant entry still require an owned key to be provided.

Raw entries are useful for such exotic situations as:

* Hash memoization
* Deferring the creation of an owned key until it is known to be required
* Using a search key that doesn't work with the Borrow trait
* Using custom comparison logic without newtype wrappers

Because raw entries provide much more low-level control, it's much easier to put the HashMap into an inconsistent state which, while memory-safe, will cause the map to produce seemingly random results. Higher-level and more foolproof APIs like `entry` should be preferred when possible.

In particular, the hash used to initialize the raw entry must still be consistent with the hash of the key that is ultimately stored in the entry. This is because implementations of HashMap may need to recompute hashes when resizing, at which point only the keys are available.

Raw entries give mutable access to the keys. This must not be used to modify how the key would compare or hash, as the map will not re-evaluate where the key should go, meaning the keys may become "lost" if their location does not reflect their state. For instance, if you change a key so that the map now contains keys which compare equal, search may start acting erratically, with two keys randomly masking each other. Implementations are free to assume this doesn't happen (within the limits of memory-safety).

[source](../../src/std/collections/hash/map.rs.html#1256-1258)

#### pub fn raw_entry(&self) -> [RawEntryBuilder](hash_map/struct.RawEntryBuilder.html "struct std::collections::hash_map::RawEntryBuilder")<'_, K, V, S>

🔬 This is a nightly-only experimental API.
(`hash_raw_entry` [#56167](https://github.com/rust-lang/rust/issues/56167))

Creates a raw immutable entry builder for the HashMap.

Raw entries provide the lowest level of control for searching and manipulating a map. They must be manually initialized with a hash and then manually searched.

This is useful for

* Hash memoization
* Using a search key that doesn't work with the Borrow trait
* Using custom comparison logic without newtype wrappers

Unless you are in such a situation, higher-level and more foolproof APIs like `get` should be preferred.

Immutable raw entries have very limited use; you might instead want `raw_entry_mut`.

## Trait Implementations §

1.0.0 · [source](../../src/std/collections/hash/map.rs.html#1262-1277) §

### impl<K, V, S> [Clone](../clone/trait.Clone.html "trait std::clone::Clone") for [HashMap](hash_map/struct.HashMap.html "struct std::collections::hash_map::HashMap")<K, V, S> where K: [Clone](../clone/trait.Clone.html "trait std::clone::Clone"), V: [Clone](../clone/trait.Clone.html "trait std::clone::Clone"), S: [Clone](../clone/trait.Clone.html "trait std::clone::Clone"),

[source](../../src/std/collections/hash/map.rs.html#1269-1271) §

#### fn [clone](../clone/trait.Clone.html#tymethod.clone)(&self) -> Self

Returns a copy of the value. [Read more](../clone/trait.Clone.html#tymethod.clone)

[source](../../src/std/collections/hash/map.rs.html#1274-1276) §

#### fn [clone_from](../clone/trait.Clone.html#method.clone_from)(&mut self, source: [&Self](../primitive.reference.html))

Performs copy-assignment from `source`.
[Read more](../clone/trait.Clone.html#method.clone_from)

1.0.0 · [source](../../src/std/collections/hash/map.rs.html#1305-1313) §

### impl<K, V, S> [Debug](../fmt/trait.Debug.html "trait std::fmt::Debug") for [HashMap](hash_map/struct.HashMap.html "struct std::collections::hash_map::HashMap")<K, V, S> where K: [Debug](../fmt/trait.Debug.html "trait std::fmt::Debug"), V: [Debug](../fmt/trait.Debug.html "trait std::fmt::Debug"),

[source](../../src/std/collections/hash/map.rs.html#1310-1312) §

#### fn [fmt](../fmt/trait.Debug.html#tymethod.fmt)(&self, f: &mut [Formatter](../fmt/struct.Formatter.html "struct std::fmt::Formatter")<'_>) -> [Result](../fmt/type.Result.html "type std::fmt::Result")

Formats the value using the given formatter. [Read more](../fmt/trait.Debug.html#tymethod.fmt)

1.0.0 · [source](../../src/std/collections/hash/map.rs.html#1316-1325) §

### impl<K, V, S> [Default](../default/trait.Default.html "trait std::default::Default") for [HashMap](hash_map/struct.HashMap.html "struct std::collections::hash_map::HashMap")<K, V, S> where S: [Default](../default/trait.Default.html "trait std::default::Default"),

[source](../../src/std/collections/hash/map.rs.html#1322-1324) §

#### fn [default](../default/trait.Default.html#tymethod.default)() -> [HashMap](hash_map/struct.HashMap.html "struct std::collections::hash_map::HashMap")<K, V, S>

Creates an empty `HashMap<K, V, S>`, with the `Default` value for the hasher.

1.4.0 · [source](../../src/std/collections/hash/map.rs.html#3159-3179) §

### impl<'a, K, V, S> [Extend](../iter/trait.Extend.html "trait std::iter::Extend")<([&'a K](../primitive.reference.html), [&'a V](../primitive.reference.html))> for [HashMap](hash_map/struct.HashMap.html "struct std::collections::hash_map::HashMap")<K, V, S> where K: [Eq](../cmp/trait.Eq.html "trait std::cmp::Eq") + [Hash](../hash/trait.Hash.html "trait std::hash::Hash") + [Copy](../marker/trait.Copy.html "trait std::marker::Copy"), V: [Copy](../marker/trait.Copy.html "trait std::marker::Copy"), S: [BuildHasher](../hash/trait.BuildHasher.html "trait std::hash::BuildHasher"),

[source](../../src/std/collections/hash/map.rs.html#3166-3168) §

#### fn [extend](../iter/trait.Extend.html#tymethod.extend)<T: [IntoIterator](../iter/trait.IntoIterator.html "trait std::iter::IntoIterator")<Item = ([&'a K](../primitive.reference.html), [&'a V](../primitive.reference.html))>>(&mut self, iter: T)

Extends a collection with the contents of an iterator. [Read more](../iter/trait.Extend.html#tymethod.extend)

[source](../../src/std/collections/hash/map.rs.html#3171-3173) §

#### fn [extend_one](../iter/trait.Extend.html#method.extend_one)(&mut self, (k, v): ([&'a K](../primitive.reference.html), [&'a V](../primitive.reference.html)))

🔬 This is a nightly-only experimental API. (`extend_one` [#72631](https://github.com/rust-lang/rust/issues/72631))

Extends a collection with exactly one element.

[source](../../src/std/collections/hash/map.rs.html#3176-3178) §

#### fn [extend_reserve](../iter/trait.Extend.html#method.extend_reserve)(&mut self, additional: [usize](../primitive.usize.html))

🔬 This is a nightly-only experimental API.
( ` extend_one ` [ #72631\n](https:\/\/github.com\/rust-lang\/rust\/issues\/72631) )\n\nReserves capacity in a collection for the given number of additional elements.\n[ Read more ](..\/iter\/trait.Extend.html#method.extend_reserve)\n\n1.0.0 \u00c2\u00b7 [ source ](..\/..\/src\/std\/collections\/hash\/map.rs.html#3137-3156) \u00c2\u00a7\n\n### impl<K, V, S> [ Extend ](..\/iter\/trait.Extend.html \"trait\nstd::iter::Extend\") < [ (K, V) ](..\/primitive.tuple.html) > for [ HashMap\n](hash_map\/struct.HashMap.html \"struct std::collections::hash_map::HashMap\")\n<K, V, S>\n\nwhere K: [ Eq ](..\/cmp\/trait.Eq.html \"trait std::cmp::Eq\") \\+ [ Hash\n](..\/hash\/trait.Hash.html \"trait std::hash::Hash\") , S: [ BuildHasher\n](..\/hash\/trait.BuildHasher.html \"trait std::hash::BuildHasher\") ,\n\nInserts all new key-values from the iterator and replaces values with existing\nkeys with new values returned from the iterator.\n\n[ source ](..\/..\/src\/std\/collections\/hash\/map.rs.html#3143-3145) \u00c2\u00a7\n\n#### fn [ extend ](..\/iter\/trait.Extend.html#tymethod.extend) <T: [\nIntoIterator ](..\/iter\/trait.IntoIterator.html \"trait\nstd::iter::IntoIterator\") <Item = [ (K, V) ](..\/primitive.tuple.html) >>(&mut\nself, iter: T)\n\nExtends a collection with the contents of an iterator. [ Read more\n](..\/iter\/trait.Extend.html#tymethod.extend)\n\n[ source ](..\/..\/src\/std\/collections\/hash\/map.rs.html#3148-3150) \u00c2\u00a7\n\n#### fn [ extend_one ](..\/iter\/trait.Extend.html#method.extend_one) (&mut\nself, (k, v): [ (K, V) ](..\/primitive.tuple.html) )\n\n\u00f0\u009f\u0094\u00ac This is a nightly-only experimental API. 
( ` extend_one ` [ #72631\n](https:\/\/github.com\/rust-lang\/rust\/issues\/72631) )\n\nExtends a collection with exactly one element.\n\n[ source ](..\/..\/src\/std\/collections\/hash\/map.rs.html#3153-3155) \u00c2\u00a7\n\n#### fn [ extend_reserve ](..\/iter\/trait.Extend.html#method.extend_reserve)\n(&mut self, additional: [ usize ](..\/primitive.usize.html) )\n\n\u00f0\u009f\u0094\u00ac This is a nightly-only experimental API. ( ` extend_one ` [ #72631\n](https:\/\/github.com\/rust-lang\/rust\/issues\/72631) )\n\nReserves capacity in a collection for the given number of additional elements.\n[ Read more ](..\/iter\/trait.Extend.html#method.extend_reserve)\n\n1.56.0 \u00c2\u00b7 [ source ](..\/..\/src\/std\/collections\/hash\/map.rs.html#1360-1376) \u00c2\u00a7\n\n### impl<K, V, const N: [ usize ](..\/primitive.usize.html) > [ From\n](..\/convert\/trait.From.html \"trait std::convert::From\") <[ [ (K, V)\n](..\/primitive.tuple.html) ; [ N ](..\/primitive.array.html) ]> for [ HashMap\n](hash_map\/struct.HashMap.html \"struct std::collections::hash_map::HashMap\")\n<K, V, [ RandomState ](hash_map\/struct.RandomState.html \"struct\nstd::collections::hash_map::RandomState\") >\n\nwhere K: [ Eq ](..\/cmp\/trait.Eq.html \"trait std::cmp::Eq\") \\+ [ Hash\n](..\/hash\/trait.Hash.html \"trait std::hash::Hash\") ,\n\n[ source ](..\/..\/src\/std\/collections\/hash\/map.rs.html#1373-1375) \u00c2\u00a7\n\n#### fn [ from ](..\/convert\/trait.From.html#tymethod.from) (arr: [ [ (K, V)\n](..\/primitive.tuple.html) ; [ N ](..\/primitive.array.html) ]) -> Self\n\n##### \u00c2\u00a7 Examples\n\n \n \n use std::collections::HashMap;\n \n let map1 = HashMap::from([(1, 2), (3, 4)]);\n let map2: HashMap<_, _> = [(1, 2), (3, 4)].into();\n assert_eq!(map1, map2);\n\n[ Run 
](https:\/\/play.rust-\nlang.org\/?code=%23!%5Ballow\\(unused\\)%5D%0Afn+main\\(\\)+%7B%0A++++use+std::collections::HashMap;%0A++++%0A++++let+map1+=+HashMap::from\\(%5B\\(1,+2\\),+\\(3,+4\\)%5D\\);%0A++++let+map2:+HashMap%3C_,+_%3E+=+%5B\\(1,+2\\),+\\(3,+4\\)%5D.into\\(\\);%0A++++assert_eq!\\(map1,+map2\\);%0A%7D&edition=2021)\n\n1.0.0 \u00c2\u00b7 [ source ](..\/..\/src\/std\/collections\/hash\/map.rs.html#3122-3132) \u00c2\u00a7\n\n### impl<K, V, S> [ FromIterator ](..\/iter\/trait.FromIterator.html \"trait\nstd::iter::FromIterator\") < [ (K, V) ](..\/primitive.tuple.html) > for [\nHashMap ](hash_map\/struct.HashMap.html \"struct\nstd::collections::hash_map::HashMap\") <K, V, S>\n\nwhere K: [ Eq ](..\/cmp\/trait.Eq.html \"trait std::cmp::Eq\") \\+ [ Hash\n](..\/hash\/trait.Hash.html \"trait std::hash::Hash\") , S: [ BuildHasher\n](..\/hash\/trait.BuildHasher.html \"trait std::hash::BuildHasher\") \\+ [ Default\n](..\/default\/trait.Default.html \"trait std::default::Default\") ,\n\n[ source ](..\/..\/src\/std\/collections\/hash\/map.rs.html#3127-3131) \u00c2\u00a7\n\n#### fn [ from_iter ](..\/iter\/trait.FromIterator.html#tymethod.from_iter) <T:\n[ IntoIterator ](..\/iter\/trait.IntoIterator.html \"trait\nstd::iter::IntoIterator\") <Item = [ (K, V) ](..\/primitive.tuple.html) >>(iter:\nT) -> [ HashMap ](hash_map\/struct.HashMap.html \"struct\nstd::collections::hash_map::HashMap\") <K, V, S>\n\nCreates a value from an iterator. 
[ Read more\n](..\/iter\/trait.FromIterator.html#tymethod.from_iter)\n\n1.0.0 \u00b7 [ source ](..\/..\/src\/std\/collections\/hash\/map.rs.html#1328-1345)\n\n### impl<K, Q, V, S> Index<&Q> for HashMap<K, V, S>\n\nwhere K: Eq + Hash + Borrow<Q>, Q: Eq + Hash + ?Sized, S: BuildHasher,\n\n[ source ](..\/..\/src\/std\/collections\/hash\/map.rs.html#1342-1344)\n\n#### fn index(&self, key: &Q) -> &V\n\nReturns a reference to the value corresponding to the supplied key.\n\n##### Panics\n\nPanics if the key is not present in the ` HashMap ` .\n\n#### type Output = V\n\nThe returned type after indexing.\n\n1.0.0 \u00b7 [ source ](..\/..\/src\/std\/collections\/hash\/map.rs.html#2175-2184)\n\n### impl<'a, K, V, S> IntoIterator for &'a HashMap<K, V, S>\n\n#### type Item = (&'a K, &'a V)\n\nThe type of the elements being iterated over.\n\n#### type IntoIter = Iter<'a, K, V>\n\nWhich kind of iterator are we turning this into?\n\n[ source ](..\/..\/src\/std\/collections\/hash\/map.rs.html#2181-2183)\n\n#### fn into_iter(self) -> Iter<'a, K, V>\n\nCreates an iterator from a value. [ Read more\n](..\/iter\/trait.IntoIterator.html#tymethod.into_iter)\n\n1.0.0 \u00b7 [ source ](..\/..\/src\/std\/collections\/hash\/map.rs.html#2187-2196)\n\n### impl<'a, K, V, S> IntoIterator for &'a mut HashMap<K, V, S>\n\n#### type Item = (&'a K, &'a mut V)\n\nThe type of the elements being iterated over.\n\n#### type IntoIter = IterMut<'a, K, V>\n\nWhich kind of iterator are we turning this into?\n\n[ source ](..\/..\/src\/std\/collections\/hash\/map.rs.html#2193-2195)\n\n#### fn into_iter(self) -> IterMut<'a, K, V>\n\nCreates an iterator from a value. 
[ Read more\n](..\/iter\/trait.IntoIterator.html#tymethod.into_iter)\n\n1.0.0 \u00b7 [ source ](..\/..\/src\/std\/collections\/hash\/map.rs.html#2199-2226)\n\n### impl<K, V, S> IntoIterator for HashMap<K, V, S>\n\n[ source ](..\/..\/src\/std\/collections\/hash\/map.rs.html#2223-2225)\n\n#### fn into_iter(self) -> IntoIter<K, V>\n\nCreates a consuming iterator, that is, one that moves each key-value pair out\nof the map in arbitrary order. The map cannot be used after calling this.\n\n##### Examples\n\n \n \n use std::collections::HashMap;\n \n let map = HashMap::from([\n (\"a\", 1),\n (\"b\", 2),\n (\"c\", 3),\n ]);\n \n \/\/ Not possible with .iter()\n let vec: Vec<(&str, i32)> = map.into_iter().collect();\n\n#### type Item = (K, V)\n\nThe type of the elements being iterated over.\n\n#### type IntoIter = IntoIter<K, V>\n\nWhich kind of iterator are we turning this into?\n\n1.0.0 \u00b7 [ source ](..\/..\/src\/std\/collections\/hash\/map.rs.html#1280-1293)\n\n### impl<K, V, S> PartialEq for HashMap<K, V, S>\n\nwhere K: Eq + Hash, V: PartialEq, S: BuildHasher,\n\n[ source ](..\/..\/src\/std\/collections\/hash\/map.rs.html#1286-1292)\n\n#### fn eq(&self, other: &HashMap<K, V, S>) -> bool\n\nThis method tests for ` self ` and ` other ` values to be equal, and is used\nby ` == ` .\n\n1.0.0 \u00b7 [ source ](https:\/\/doc.rust-lang.org\/1.80.1\/src\/core\/cmp.rs.html#263)\n\n#### fn ne(&self, other: &Rhs) -> bool\n\nThis method tests for ` != ` . 
The default implementation is almost always\nsufficient, and should not be overridden without very good reason.\n\n1.0.0 \u00b7 [ source ](..\/..\/src\/std\/collections\/hash\/map.rs.html#1296-1302)\n\n### impl<K, V, S> Eq for HashMap<K, V, S>\n\nwhere K: Eq + Hash, V: Eq, S: BuildHasher,\n\n1.36.0 \u00b7 [ source ](..\/..\/src\/std\/panic.rs.html#82-88)\n\n### impl<K, V, S> UnwindSafe for HashMap<K, V, S>\n\nwhere K: UnwindSafe, V: UnwindSafe, S: UnwindSafe,\n\n## Auto Trait Implementations\n\n### impl<K, V, S> Freeze for HashMap<K, V, S>\n\nwhere S: Freeze,\n\n### impl<K, V, S> RefUnwindSafe for HashMap<K, V, S>\n\nwhere S: RefUnwindSafe, K: RefUnwindSafe, V: RefUnwindSafe,\n\n### impl<K, V, S> Send for HashMap<K, V, S>\n\nwhere S: Send, K: Send, V: Send,\n\n### impl<K, V, S> Sync for HashMap<K, V, S>\n\nwhere S: Sync, K: Sync, V: Sync,\n\n### impl<K, V, S> Unpin for HashMap<K, V, S>\n\nwhere S: Unpin, K: Unpin, V: Unpin,\n\n## Blanket Implementations\n\n[ source ](https:\/\/doc.rust-lang.org\/1.80.1\/src\/core\/any.rs.html#140)\n\n### impl<T> Any for T\n\nwhere T: 'static + ?Sized,\n\n[ source ](https:\/\/doc.rust-lang.org\/1.80.1\/src\/core\/any.rs.html#141)\n\n#### fn type_id(&self) -> TypeId\n\nGets the ` TypeId ` of ` self ` . [ Read more\n](..\/any\/trait.Any.html#tymethod.type_id)\n\n[ source ](https:\/\/doc.rust-lang.org\/1.80.1\/src\/core\/borrow.rs.html#208)\n\n### impl<T> Borrow<T> for T\n\nwhere T: ?Sized,\n\n[ source ](https:\/\/doc.rust-lang.org\/1.80.1\/src\/core\/borrow.rs.html#210)\n\n#### fn borrow(&self) -> &T\n\nImmutably borrows from an owned value. [ Read more\n](..\/borrow\/trait.Borrow.html#tymethod.borrow)\n\n[ source ](https:\/\/doc.rust-lang.org\/1.80.1\/src\/core\/borrow.rs.html#216)\n\n### impl<T> BorrowMut<T> for T\n\nwhere T: ?Sized,\n\n[ source ](https:\/\/doc.rust-lang.org\/1.80.1\/src\/core\/borrow.rs.html#217)\n\n#### fn borrow_mut(&mut self) -> &mut T\n\nMutably borrows from an owned value. [ Read more\n](..\/borrow\/trait.BorrowMut.html#tymethod.borrow_mut)\n\n[ source ](https:\/\/doc.rust-lang.org\/1.80.1\/src\/core\/convert\/mod.rs.html#765)\n\n### impl<T> From<T> for T\n\n[ source ](https:\/\/doc.rust-lang.org\/1.80.1\/src\/core\/convert\/mod.rs.html#768)\n\n#### fn from(t: T) -> T\n\nReturns the argument unchanged.\n\n[ source ](https:\/\/doc.rust-lang.org\/1.80.1\/src\/core\/convert\/mod.rs.html#748-750)\n\n### impl<T, U> Into<U> for T\n\nwhere U: From<T>,\n\n[ source ](https:\/\/doc.rust-lang.org\/1.80.1\/src\/core\/convert\/mod.rs.html#758)\n\n#### fn into(self) -> U\n\nCalls ` U::from(self) ` .\n\nThat is, this conversion is whatever the implementation of ` From<T> for U `\nchooses to do.\n\n[ source ](https:\/\/doc.rust-lang.org\/1.80.1\/src\/alloc\/borrow.rs.html#83-85)\n\n### impl<T> ToOwned for T\n\nwhere T: Clone,\n\n#### type Owned = T\n\nThe resulting type after obtaining ownership.\n\n[ source ](https:\/\/doc.rust-lang.org\/1.80.1\/src\/alloc\/borrow.rs.html#88)\n\n#### fn to_owned(&self) -> T\n\nCreates owned data from borrowed data, usually by cloning. [ Read more\n](..\/borrow\/trait.ToOwned.html#tymethod.to_owned)\n\n[ source ](https:\/\/doc.rust-lang.org\/1.80.1\/src\/alloc\/borrow.rs.html#92)\n\n#### fn clone_into(&self, target: &mut T)\n\nUses borrowed data to replace owned data, usually by cloning. [ Read more\n](..\/borrow\/trait.ToOwned.html#method.clone_into)\n\n[ source ](https:\/\/doc.rust-lang.org\/1.80.1\/src\/core\/convert\/mod.rs.html#805-807)\n\n### impl<T, U> TryFrom<U> for T\n\nwhere U: Into<T>,\n\n#### type Error = Infallible\n\nThe type returned in the event of a conversion error.\n\n[ source ](https:\/\/doc.rust-lang.org\/1.80.1\/src\/core\/convert\/mod.rs.html#812)\n\n#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>\n\nPerforms the conversion.\n\n[ source ](https:\/\/doc.rust-lang.org\/1.80.1\/src\/core\/convert\/mod.rs.html#790-792)\n\n### impl<T, U> TryInto<U> for T\n\nwhere U: TryFrom<T>,\n\n#### type Error = <U as TryFrom<T>>:: [ Error 
](..\/convert\/trait.TryFrom.html#associatedtype.Error \"type\nstd::convert::TryFrom::Error\")\n\nThe type returned in the event of a conversion error.\n\n[ source ](https:\/\/doc.rust-lang.org\/1.80.1\/src\/core\/convert\/mod.rs.html#797)\n\n#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>\n\nPerforms the conversion."} {"query":"Type of an object that contains CSS properties\n\nI have a function that takes an element on the page and adds CSS to its style attribute. I want the argument that is passed to it to be an object with keys such as height, minWidth, flexDirection, etc.\n\n```\nfunction addStyle (el: HTMLElement, style: Style): void {\n for (const property in style) {\n el.style[property] = style[property]\n }\n}\n```\n\nMy problem is in typing this object. It's obviously not feasible to define every single CSS property myself. I'm pretty sure that TypeScript should be able to do this itself, I'm just not sure how. This is my best guess:\n```\ntype Style = Partial<CSSStyleDeclaration>\n```\n\n...but that results in the error \"Type 'string | undefined' is not assignable to type 'string'\"\n\nThat specific error can be easily overridden, but I'm wary that it indicates I'm doing something wrong or unintuitive.\n\nIs there a standard way to type an object that contains CSS properties?","reasoning":"Because CSS contains many properties that correspond to different data types, such as height, minWidth, and so on, one solution is to use the language tools and built-in types provided by TypeScript to construct a new type definition that covers all CSS property names. 
Instead of declaring types one by one.","id":"65","excluded_ids":["N\/A"],"gold_ids_long":["Mmdn_object\/Mmdn_object.txt"],"gold_ids":["Mmdn_object\/Mmdn_object_7_3.txt","Mmdn_object\/Mmdn_object_7_1.txt","Mmdn_object\/Mmdn_object_7_2.txt"],"gold_answer":"I don't think there's a good way to do this sort of thing. When typing a\nproperty that may or may not exist on an object, TypeScript does not\ndifferentiate between the property existing and containing the value `\nundefined ` , and the property not existing at all.\n\nFor example, disregarding styles and everything else, consider typing an object which may have the ` foo ` property which is a string. You can't do ` { foo: string } ` because the property may not exist. You can do ` { foo: string | undefined } ` . You can also do ` { foo?: string } ` , but this type is identical to the previous one. I don't think there are any other options: you can't say \"This property may or may not exist, but if it does exist, it does not contain ` undefined ` .\" \n\nA CSSStyleDeclaration is the same sort of situation, except with lots more\nproperties. ` Partial<CSSStyleDeclaration> ` is the right way to go, but\nunfortunately, since TypeScript won't distinguish between partial types having\nproperties containing ` undefined ` and not having those properties at all,\nyou have to assert that the value _does_ exist if dealing with an individual\nvalue.\n\nBut, there's another way to bypass having to reference and assert individual\nvalues, which is to use ` Object.assign ` :\n\n \n \n type Style = Partial<CSSStyleDeclaration>;\n function addStyle(el: HTMLElement, style: Style) {\n Object.assign(el.style, style);\n }\n \n\n(also remember that there's no need to specify the return type of a function\nif TS can determine it automatically - less code to read and write is usually\na good thing)"} {"query":"Docker Swarm with image versions externalized to .env file\n\nI used to externalize my image versions to my .env file. 
This makes it easy to maintain, and I don't modify my docker-compose.yml file just to upgrade a version, so I'm sure I won't delete a line by mistake or whatever.\n\nBut when I try to deploy my services with stack to the swarm, the docker engine complains that my image is not a correct repository\/tag, with the exact following message:\n\n```\nError response from daemon: rpc error: code = 3 desc = ContainerSpec: \"GROUP\/IMAGE:\" is not a valid repository\/tag\n```\n\nTo fix this, I can fix the image version directly in the docker-compose.yml file. Is there any logic here, or is that a bug? But this mixes the fixed parts of the docker-compose file with the variable ones.\n\nCheers, Olivier\n\n","reasoning":"The problem of not being able to read the image version information from the .env file when deploying in Docker Swarm is actually due to the difference in parsing capabilities between Swarm's yaml parser and docker-compose.\nWe need a way to solve this compatibility problem. It would be better to allow the continued use of .env files to store variable information such as image versions, while also meeting the requirements of Swarm deployments, thus achieving versioning flexibility and maintainability.","id":"66","excluded_ids":["N\/A"],"gold_ids_long":["docker_compose_buildx\/docker_compose.txt"],"gold_ids":["docker_compose_buildx\/docker_compose_16_7.txt"],"gold_answer":"The yaml parser in ` docker stack deploy ` doesn't have all the same features\nof that in ` docker-compose ` . However, you can use ` docker-compose config `\nto output a yaml file after it's done all the variable substitutions,\nextending other files, and merging multiple files together. This effectively\nturns ` docker-compose ` into a preprocessor."} {"query":"Sort tbody list which is populated with Javascript getList?\n\nI have a router with a DHCP page which is not sorted by the internal IP number, instead it is fully random. I have full access to the html and javascript, and I can modify this without any issues. 
However I cannot figure out how to make the list sorted by IP-address by default. I have no need to be able to manually click and sort, I just want the list to always be sorted by IP-address.\n\nI cannot figure out if it is possible to sort a getList, or if it has to be done in the randerBandlist function.\n\nThe list html:\n```\n <table class=\"table\">\n <thead>\n <tr>\n <th width=\"30\"><\/th>\n <th><%:\u8bbe\u5907\u540d\u79f0%><\/th>\n <th><%:IP\u5730\u5740%><\/th>\n <th><%:MAC\u5730\u5740%><\/th>\n <th width=\"150\" class=\"center\"><%:\u64cd\u4f5c%><\/th>\n <\/tr>\n <\/thead>\n <tbody id=\"bandlist\">\n <tr>\n <td class=\"center\" colspan=\"5\"><%:\u67e5\u8be2\u4e2d...%><\/td>\n <\/tr>\n <\/tbody>\n <\/table>\n```\n\nThis is the part that handles the list in question:\n```\ngetList = function( callback ){\n $.getJSON('<%=luci.dispatcher.build_url(\"api\", \"xqnetwork\",\"macbind_info\")%>',{},function(rsp){\n if ( rsp.code != 0 ) {\n return;\n }\n callback( rsp );\n });\n },\n randerBandlist = function( rsp ){\n var tpl = $( '#tplbandlist' ).html(),\n container = $( '#bandlist' ),\n bandlistdata = rsp.list,\n tpldata = [],\n mac, ip, dname;\n currentList = {};\n if ( bandlistdata.length > 0 ) {\n for (var i = 0; i < bandlistdata.length; i++) {\n mac = bandlistdata[i].mac.toUpperCase();\n ip = bandlistdata[i].ip;\n dname = StringH.encode4Html( bandlistdata[i].name );\n tpldata.push({\n dname: dname,\n ip: ip,\n mac: mac\n });\n currentList[ip] = 1;\n }\n }\n container.html( tpl.tmpl( {bandlist: tpldata} ) );\n },\nreturn {\n init: function(){\n getList( randerBandlist );\n addEvent();\n }\n}\n```\n\nAlso included the full function, however I think the function above is the one that needs modifying:\n```\nvar ModelDhcpband = (function(){\nvar lanIP = '<%=lanip%>',\n ipprefix = (function(ip){\n var arr = ip.split('.');\n arr.pop();\n return arr.join('.') + '.';\n })(lanIP),\n currentList = {},\n \/\/ get repeat set\n getRepeat = function( data ){\n data = data 
|| [];\n var repeat = [];\n var _currentList = $.extend( {}, currentList);\n for (var i = 0; i < data.length; i++) {\n var v = data[i];\n if ( typeof( _currentList[v] ) == 'undefined' ) {\n _currentList[v] = 1;\n } else {\n repeat.push(v);\n }\n }\n return repeat;\n },\n getList = function( callback ){\n $.getJSON('<%=luci.dispatcher.build_url(\"api\", \"xqnetwork\",\"macbind_info\")%>',{},function(rsp){\n if ( rsp.code != 0 ) {\n return;\n }\n callback( rsp );\n });\n },\n randerBandlist = function( rsp ){\n var tpl = $( '#tplbandlist' ).html(),\n container = $( '#bandlist' ),\n bandlistdata = rsp.list,\n tpldata = [],\n mac, ip, dname;\n currentList = {};\n if ( bandlistdata.length > 0 ) {\n for (var i = 0; i < bandlistdata.length; i++) {\n mac = bandlistdata[i].mac.toUpperCase();\n ip = bandlistdata[i].ip;\n dname = StringH.encode4Html( bandlistdata[i].name );\n tpldata.push({\n dname: dname,\n ip: ip,\n mac: mac\n });\n currentList[ip] = 1;\n }\n }\n container.html( tpl.tmpl( {bandlist: tpldata} ) );\n },\n randerDevlist = function( rsp ){\n var tpl = $( '#tpldevlist' ).html(),\n devlistdata = rsp.devicelist,\n tpldata = [],\n randerDom,\n mac, ip, dname, tag;\n if ( devlistdata.length > 0 ) {\n for (var i = 0; i < devlistdata.length; i++) {\n mac = devlistdata[i].mac.toUpperCase();\n ip = devlistdata[i].ip;\n iplast = (function( ip ){\n var arr = ip.split('.');\n return arr[arr.length - 1];\n })( ip );\n dname = StringH.encode4HtmlValue( devlistdata[i].name );\n tag = devlistdata[i].tag;\n if ( tag != 2 ) {\n tpldata.push({\n dname: dname,\n ip: iplast,\n mac: mac\n });\n }\n }\n }\n randerDom = tpl.tmpl( {\n devlist: tpldata,\n ipprefix: ipprefix\n } );\n $.dialog({\n title: '<%:\u7ed1\u5b9a\u8bbe\u5907%>',\n content: randerDom,\n lock: true,\n width: 828,\n padding: '30px'\n });\n $.pub( 'done', {id: '#addlist'} );\n },\n serializeForm = function( form ){\n var ips = $( '.ip', form ),\n dnames = $( '.dname', form ),\n macs = $( '.mac', form ),\n data = [],\n 
item;\n ips.each(function( idx, el ){\n item = {\n ip: ipprefix + $.trim( el.value ),\n mac: $.trim( macs.eq( idx ).val() ),\n name: $.trim( dnames.eq( idx ).val() )\n };\n data.push( item );\n });\n\n return ObjectH.stringify( data );\n }\n unbind = function( e, mac ){\n e.preventDefault();\n var that = this,\n $this = $(that),\n requestURL = '<%=luci.dispatcher.build_url(\"api\", \"xqnetwork\", \"mac_unbind\")%>',\n requestData = {\n mac: mac\n };\n $.pub( 'loading:start' );\n $.ajax({\n url: requestURL,\n type: 'POST',\n data: requestData,\n dataType: 'json',\n success: function( rsp ){\n if ( rsp.code !== 0 ) {\n $.alert( rsp.msg );\n } else {\n getList( randerBandlist );\n $( '#dellist' ).hide();\n }\n $.pub( 'loading:stop' );\n }\n });\n },\n addEvent = function(){\n \/\/ unband\n $('body').delegate( '.btn-unband' ,'click', function( e ){\n var that = this,\n $this = $( that ),\n mac = $this.attr('data-mac'),\n ok = function(){\n unbind.call( that, e, mac );\n };\n $.confirm( '<%:\u4f60\u786e\u5b9a\u8981\u89e3\u9664\u6b64\u9879\u7ed1\u5b9a\uff1f%>', ok );\n });\n \/\/ band\n $( 'body' ).delegate( '#addbandlist', 'submit', function( e ){\n e.preventDefault();\n var form = e.target,\n formName = form.name,\n formels = $( 'input', form ),\n requestURL = '<%=luci.dispatcher.build_url(\"api\", \"xqnetwork\", \"mac_bind\")%>',\n requestData,\n rules,\n name,\n display,\n validator,\n formdata,\n iplist = [],\n formdata,\n formdataojb,\n repeatIP,\n validerules = [];\n\n validator = Valid.checkAll( $('#addbandlist')[0] );\n\n if ( validator) {\n formdata = serializeForm( form );\n formdataojb = StringH.evalExp( formdata );\n for (var i = 0; i < formdataojb.length; i++) {\n iplist.push(formdataojb[i]['ip']);\n }\n repeatIP = getRepeat( iplist );\n if ( repeatIP.length !== 0 ) {\n $.alert( '<%:\u5b58\u5728IP\u51b2\u7a81\uff0c\u8bf7\u68c0\u67e5\u8f93\u5165\u9879%>' + repeatIP.join(' , ') );\n return;\n }\n\n requestData = {\n data: formdata\n };\n $.pub( 'wait', {id: 
'#submitbandlist'} );\n $.ajax({\n url: requestURL,\n data: requestData,\n type: 'POST',\n dataType: 'json',\n success: function( rsp ){\n if ( rsp.code == 0 ) {\n location.reload( 1 );\n } else {\n $.alert( rsp.msg );\n }\n $.pub( 'done', {id: '#submitbandlist'} );\n }\n });\n }\n\n });\n\n \/\/ add a item\n $( 'body' ).delegate( '#addoneitem', 'click', function( e ){\n e.preventDefault();\n var tpl = $( '#tpldevitem' ).html(),\n form = $( '#banditems' ),\n btnSubmit = $( '#submitbandlist' ),\n lastidx = (function(){\n if ( form.find('tr').length > 0 ) {\n return form.find('tr:last').attr( 'data-idx' );\n }\n return 0;\n }()),\n lastidx = parseInt( lastidx, 10 ),\n item = tpl.tmpl({\n idx: lastidx + 1,\n ipprefix: ipprefix\n });\n if ( !isNaN( lastidx ) ) {\n form.append( item );\n if ( form.find( 'tr' ).length > 0 ) {\n btnSubmit.show();\n } else {\n btnSubmit.hide();\n }\n } else {\n $.alert('<%:\u51fa\u73b0\u5f02\u5e38\uff0c\u8bf7\u5237\u65b0\u9875\u9762%>');\n }\n } );\n\n \/\/ del a item\n $( 'body' ).delegate( '.btn-del-item', 'click', function( e ){\n e.preventDefault();\n var tar = e.target,\n tr = $( tar ).parents( 'tr' ),\n form = $( '#banditems' ),\n btnSubmit = $( '#submitbandlist' ),\n isEmpty = (function(){\n var empty = true;\n tr.find('input').each(function(){\n if ( this.value !== '') {\n empty = false;\n return false;\n }\n });\n return empty;\n }()),\n ok = function(){\n tr.remove();\n if ( form.find( 'tr' ).length > 0 ) {\n btnSubmit.show();\n } else {\n btnSubmit.hide();\n }\n };\n if ( isEmpty ) {\n ok();\n } else {\n $.confirm( '<%:\u786e\u5b9a\u8981\u5220\u9664\u8fd9\u9879\u6570\u636e\u5417%>', ok );\n }\n } );\n\n \/\/ open add dlg\n $( '#addlist' ).click(function( e ){\n e.preventDefault();\n $.pub( 'wait', {id: '#addlist'} );\n getList( randerDevlist );\n });\n\n \/\/ check for del\n $( 'body' ).delegate( '.bandmac', 'click', function( e ){\n if ( $('.bandmac:checked').length > 0 ) {\n $( '#dellist' ).show();\n } else {\n $( '#dellist' 
).hide();\n }\n } );\n\n \/\/ del all\n $( '#dellist' ).on( 'click', function( e ){\n e.preventDefault();\n if ( $('.bandmac:checked').length == 0 ) {\n $.alert( '<%:\u4f60\u8fd8\u672a\u9009\u62e9\u4efb\u4f55\u8bbe\u5907%>' );\n return;\n }\n var that = this,\n $this = $(that),\n mac = (function(){\n var tmp = [];\n $('.bandmac:checked').each(function(){\n tmp.push( this.value );\n });\n return tmp.join( ';' );\n }()),\n ok = function(){\n unbind.call( that, e, mac );\n };\n\n $.confirm( '<%:\u786e\u8ba4\u8981\u89e3\u9664\u9009\u4e2d\u9879\u76ee\u7684\u7ed1\u5b9a\u5173\u7cfb\uff1f%>', ok );\n } );\n };\n\ncurrentList[lanIP] = 1;\n\nreturn {\n init: function(){\n getList( randerBandlist );\n \n```","reasoning":"The core code for sorting the array of objects containing IP addresses before rendering the DHCP list is located in the randerBandlist function. The tpldata itself needs to be sorted before pushing objects into the tpldata array, and a related method needs to be used. It will sort by each digit of the IP address.","id":"67","excluded_ids":["N\/A"],"gold_ids_long":["Mmdn_Methods_Properties\/Mmdn_methods.txt"],"gold_ids":["Mmdn_Methods_Properties\/Mmdn_methods_15_4.txt","Mmdn_Methods_Properties\/Mmdn_methods_15_6.txt","Mmdn_Methods_Properties\/Mmdn_methods_15_5.txt","Mmdn_Methods_Properties\/Mmdn_methods_15_3.txt"],"gold_answer":"As far as I understand, this is the part of your code preparing the list of\nobjects containing the ip address before rendering:\n\n \n \n if ( bandlistdata.length > 0 ) {\n for (var i = 0; i < bandlistdata.length; i++) {\n mac = bandlistdata[i].mac.toUpperCase();\n ip = bandlistdata[i].ip;\n dname = StringH.encode4Html( bandlistdata[i].name );\n tpldata.push({\n dname: dname,\n ip: ip,\n mac: mac\n });\n currentList[ip] = 1;\n }\n }\n \n\nSo the route you are looking for is sorting the ` tpldata ` array before\ncalling:\n\n \n \n container.html( tpl.tmpl( {bandlist: tpldata} ) );\n \n\nThis is how you could sort the array, by calling 
the [ sort\n](https:\/\/developer.mozilla.org\/en-\nUS\/docs\/Web\/JavaScript\/Reference\/Global_Objects\/Array\/sort) array method with\na callback that will split the octets of each ip address to compare and will\nevalute their parts individually to determine which one comes first:\n\n \n \n const array = [\n { ip: \"192.168.1.1\" },\n { ip: \"10.0.0.1\" },\n { ip: \"192.168.1.2\" },\n { ip: \"172.16.0.1\" }\n ];\n \n array.sort((a, b) => {\n \/\/splitting the parts of ip address held by a and b\n const aParts = a.ip.split('.').map(Number);\n const bParts = b.ip.split('.').map(Number);\n \n \/\/comparing the parts of the ip address a and b to determine which comes first\n for (let i = 0; i < 4; i++) {\n if (aParts[i] !== bParts[i]) {\n return aParts[i] - bParts[i];\n }\n }\n \n \/\/ips are the same\n return 0;\n });\n \n console.log(array);\n\nSo in your scenario just replace ` array ` with ` tpldata ` and call ` sort `\njust before setting the container html."} {"query":"setattr for `__call__` method of an object doesn't work in Python\n\nI though that for python object `obj()` is equivalent to `obj.__call__()`. But it looks like it doesn't hold when `setattr` was used.\n\n```\nIn [46]: class A:\n ...: def __call__(self):\n ...: return 1\n ...:\n\nIn [47]: a = A()\n\nIn [48]: a()\nOut[48]: 1\n\nIn [49]: a.__call__()\nOut[49]: 1\n\nIn [50]: getattr(a, '__call__')\nOut[50]: <bound method A.__call__ of <__main__.A object at 0x10a3d2a00>>\n\nIn [51]: setattr(a, '__call__', lambda: 100)\n\nIn [52]: a()\nOut[52]: 1\n\nIn [53]: a.__call__()\nOut[53]: 100\n```\n\nWhy is it so? And how I can set the `call` method in runtime?\n\nNotice in Python version is 3.9.18 and 3.11.4\n\nThe snippet of code above, I'd expected a() returns 100","reasoning":"The programmer thought that \"obj() is equivalent to obj.__call__()\". 
However, after using setattr(a, '__call__', lambda: 100) to try to set a new __call__ attribute for instance a, the result shows that a() still returns 1, and only a.__call__() returns the newly set 100. Hence the correct way of running implicit calls to special methods is needed.","id":"68","excluded_ids":["N\/A"],"gold_ids_long":["python_data_model\/python_data_model.txt"],"gold_ids":["python_data_model\/python_data_model_13_0.txt"],"gold_answer":"The issue is because ` __call__ ` is defined at the class level, not at the\ninstance level. See the documentation (from Python 2, when this was\nintroduced) [ here\n](https:\/\/docs.python.org\/2\/reference\/datamodel.html#special-method-lookup-\nfor-new-style-classes) .\n\nYou'd need to make sure to define an instance-level call function and apply `\nsetattr ` to that:\n\n \n \n class MyClass:\n def __call__(self, *args, **kwargs): \n return self._call(self, *args, **kwargs)\n \n >>> a = MyClass()\n >>> setattr(a, '_call', lambda self: 100)\n >>> a()\n 100\n \n\nOr, you can patch the class itself, if you want every instance of ` MyClass()\n` to have the exact same ` __call__ ` function. However, consider carefully if\nyou want to do this:\n\n \n \n >>> setattr(MyClass, __call__, lambda self: \"foo\")\n >>> a()\n \"foo\"\n \n\n**Explanation:** Prioritising the class definition of ` __call__ ` over any\ninstance-specific definition is common to the various special method\nnames\/operator overloads in Python's [ data model\n](https:\/\/docs.python.org\/3\/reference\/datamodel.html#special-method-names) .\n\nIt comes from when new-style classes were introduced in Python 2. To quote the\n\"order of precedence\" introduced in ( [ PEP 252\n](https:\/\/peps.python.org\/pep-0252\/) ):\n\n> There\u2019s a conflict between names stored in the instance dict and names\n> stored in the type dict. If both dicts have an entry with the same key,\n> which one should we return? [...]\n>\n> 1. Look in the type dict. 
If you find a data descriptor, use its get()\n> method to produce the result. This takes care of special attributes like\n> **dict** and **class** .\n> 2. Look in the instance dict. If you find anything, that\u2019s it. (This takes\n> care of the requirement that normally the instance dict overrides the class\n> dict.)\n>\n\nThe \"type dict\" refers to class-level definitions, such as the MyClass' `\n__call__ ` definition, while the \"instance dict\" refers to values we have set\non a specific instance of ` MyClass ` , such as ` a ` above.\n\nIn your question, the initial ` __call__ ` definition is coming from step 1\n(the type dict), while any values you set like so are set in the instance\ndict, which has lower priority:\n\n \n \n setattr(a, '__call__', lambda self: 100) \n a.__call__ = lambda self: 100\n \n\nYou can find more information about how this came to be in [ this related\nStackOverflow question ](https:\/\/stackoverflow.com\/a\/53835975\/7662085) which\nalso points to [ this interesting external article\n](https:\/\/blog.ionelmc.ro\/2015\/02\/09\/understanding-python-metaclasses\/#object-\nattribute-lookup) ."} {"query":"How to replace all CRLF of a text file with ; with W10 cmd batch file that can use Sed, Awk (Gnuwin32)\n\nI have this input with CRLF line endings:\n```\n1019\n1020\n1028\n1021\n```\n\nI want to remove `CRLF` at end of each lines using `sed`, (or `awk`) from `Gnuwin32` using a Windows 10 batch script, (not Powershell).\n\nI want to get the following result inside a text file, without any semicolon or CRLF at the end:\n```\n1019;1020;1028;1021\n```\n\nIt doesn't work with the following lines in the batch file, (it seems there is a problem with GNUwin32 Sed that adds new CRLF at end of each processed line):\n```\nREM This to generate the input example :\n(echo 1019& echo 1020& echo 1028& echo 1021) > test_in.txt\n\nREM This is the first try for getting the desired 1-line output with semicolumn :\n(echo 1019& echo 1020& echo 1028& echo 
1021) | .\\GnuWin32\\bin\\sed -e \"s\/ *$\/\/g\" | .\\GnuWin32\\bin\\sed -e \"s\/\\r\\n\/;\/\" > test_out.txt\n\nREM This is the second try for getting the desired 1-line output with semicolumn :\nREM (echo 1019& echo 1020& echo 1028& echo 1021) | .\\GnuWin32\\bin\\sed -e \"s\/ *$\/\/g\" | .\\GnuWin32\\bin\\sed -b -e \"s\/\\x0d\\x0a\/;\/g\" > test_out.txt\n\nREM This is the third try for getting the desired 1-line output with semicolumn :\nREM (echo 1019& echo 1020& echo 1028& echo 1021) | .\\GnuWin32\\bin\\sed -e \"s\/ *$\/\/g\" | .\\GnuWin32\\bin\\awk \"{gsub(\\\"\\\\\\\\r\\\\\\\\n\\\",\\\";\\\")};1\" > test_out.txt\n\nREM This is the fourth try for getting the desired 1-line output with semicolumn :\nREM (echo 1019& echo 1020& echo 1028& echo 1021) | .\\GnuWin32\\bin\\sed -e \"s\/ *$\/\/g\" | .\\GnuWin32\\bin\\awk -v FS=\"\\r\\n\" -v OFS=\";\" -v RS=\"\\\\$\\\\$\\\\$\\\\$\" -v ORS=\"\\r\\n\" \"{$1=$1}1\" > test_out.txt\n```","reasoning":"The programmer tries to remove CRLF at end of each lines using sed and\/or awk and replace CRLF with semicolumn. However, both 2 could not generate the desired answer. 
Therefore, another viable command line needs to be added to the combination of commands that can do this text line processing task correctly.","id":"69","excluded_ids":["N\/A"],"gold_ids_long":["GNU_operating_fields_and_characters\/GNU_operating_fields_and_characters.txt"],"gold_ids":["GNU_operating_fields_and_characters\/GNU_operating_fields_and_characters_11_0.txt"],"gold_answer":"As you are on Windows, here is a pure batch solution:\n\n \n \n @echo off\n setlocal enabledelayedexpansion\n del test_out.txt 2>nul\n REM This to generate the input example :\n (echo 1019& echo 1020& echo 1028& echo 1021) > test_in.txt\n \n set \"delimiter=\"\n (for \/f %%a in (test_in.txt) do (\n <nul set \/p \"=!delimiter!%%a\" & set \"delimiter=;\"\n ))>test_out.txt\n REM when you need a CRLF at the end of the line:\n echo\/>>test_out.txt\n \n\nThis uses a trick to write without a line ending: ` <nul set \/p =string ` and\nredirects the whole loop in one go to the resulting file (which does access\nthe disk only once, instead of once per line, which in turn makes it _much_\nfaster on big input files (not noticeable with your mere ~100 lines though))"} {"query":"GitHub Actions: how can I run a workflow created on a non-'master' branch from the 'workflow_dispatch' event?\n\nFor actions working on a third party repository, I would like to be able to create an action on a branch and execute it on the workflow_dispatch event. I have not succeeded in doing this, but I have discovered the following:\n\nThe Action tab will change the branch where it finds workflows and action code based on the branch relating to the last executed workflow. e.g. if some workflow is executed from the Action tab using the Run Workflow button and the Use Workflow From dropdown is set to some branch, Branch-A, then the contents of the Workflows panel on the left hand side of the Actions tab will be taken from Branch-A's version of .github\/.\nThe This workflow has a workflow_dispatch event trigger. 
text does not change with the branch. It seems to be taken from master. Alternatively, it may be being taken from the last set of results. I have not tested for that because either way it is not helpful behaviour.\nThe workaround is the execute on a push event which is OK, but that seems out of kilter with GitHub's high standards of design.\n\nDoes the above sound a) about right and b) whichever way you look at it, not optimal behaviour? Or, is there a better approach to building and testing actions?","reasoning":"The programmer wants to run a workflow created on a non-master branch from the `workflow_dispatch` event. He has tried several methods, but he is not sure whether they are correct. A better approach is required.","id":"70","excluded_ids":["N\/A"],"gold_ids_long":["github_using_workflows_jobs\/github_using_workflows.txt"],"gold_ids":["github_using_workflows_jobs\/github_using_workflows_9_6.txt"],"gold_answer":"You can run a workflow that is still in development in a branch ` branch-name\n` from the command line, with the GitHub CLI. The [ documentation\n](https:\/\/docs.github.com\/en\/actions\/managing-workflow-runs\/manually-running-\na-workflow#running-a-workflow-using-github-cli) says:\n\n> To run a workflow on a branch other than the repository's default branch,\n> use the --ref flag.\n>\n> ` gh workflow run workflow-name --ref branch-name `\n\nTo list valid workflow names, use ` gh workflow list ` .\n\nTo add input parameters, run it like this:\n\n` gh workflow run workflow-name --ref branch-name -f myparameter=myvalue `"} {"query":"Trying to delete entries gives: Inconsistent datatypes: expected %s got %s\" in Oracle\n\nI am trying to delete some entries based on 'name'. I have a table t1 with different columns, one of them being name and of type NCLOB. I have a method to delete these entries in Java, and I am using a named query. My question is, how can I solve the NCLOB issue so I'll be able to remove the data. 
I don't want to change the column type. The name query looks like this: `DELETE FROM table1 t1 WHERE t1.name = :name` How can I solve this?","reasoning":"When trying to delete an entry based on the 'name' column in Oracle, an error is encountered: \"Expected data type does not match actual data type: expected %s actual %s\". The programmer is trying to delete some entries in table t1 where the name column is of type NCLOB. He\/she is using a method in Java and a named query to perform the delete operation with the query statement: `DELETE FROM table1 t1 WHERE t1.name = :name`. To resolve the \"Inconsistent datatypes\" error that occurs when deleting an NCLOB type name column entry in Oracle, you can use the DBMS_LOB package. The DBMS_LOB package should be used to compare NCLOB data instead of using the = operator directly.","id":"71","excluded_ids":["N\/A"],"gold_ids_long":["DBMS_LOB_LIBCACHE\/oracle_DBMS_LOB.txt"],"gold_ids":["DBMS_LOB_LIBCACHE\/oracle_DBMS_LOB_9_2.txt","DBMS_LOB_LIBCACHE\/oracle_DBMS_LOB_9_12.txt","DBMS_LOB_LIBCACHE\/oracle_DBMS_LOB_9_4.txt","DBMS_LOB_LIBCACHE\/oracle_DBMS_LOB_9_33.txt","DBMS_LOB_LIBCACHE\/oracle_DBMS_LOB_9_15.txt","DBMS_LOB_LIBCACHE\/oracle_DBMS_LOB_9_9.txt","DBMS_LOB_LIBCACHE\/oracle_DBMS_LOB_9_29.txt","DBMS_LOB_LIBCACHE\/oracle_DBMS_LOB_9_0.txt","DBMS_LOB_LIBCACHE\/oracle_DBMS_LOB_9_10.txt","DBMS_LOB_LIBCACHE\/oracle_DBMS_LOB_9_28.txt","DBMS_LOB_LIBCACHE\/oracle_DBMS_LOB_9_23.txt","DBMS_LOB_LIBCACHE\/oracle_DBMS_LOB_9_31.txt","DBMS_LOB_LIBCACHE\/oracle_DBMS_LOB_9_11.txt","DBMS_LOB_LIBCACHE\/oracle_DBMS_LOB_9_3.txt","DBMS_LOB_LIBCACHE\/oracle_DBMS_LOB_9_14.txt","DBMS_LOB_LIBCACHE\/oracle_DBMS_LOB_9_32.txt","DBMS_LOB_LIBCACHE\/oracle_DBMS_LOB_9_21.txt","DBMS_LOB_LIBCACHE\/oracle_DBMS_LOB_9_18.txt","DBMS_LOB_LIBCACHE\/oracle_DBMS_LOB_9_19.txt","DBMS_LOB_LIBCACHE\/oracle_DBMS_LOB_9_25.txt","DBMS_LOB_LIBCACHE\/oracle_DBMS_LOB_9_20.txt","DBMS_LOB_LIBCACHE\/oracle_DBMS_LOB_9_16.txt","DBMS_LOB_LIBCACHE\/oracle_DBMS_LOB_9_17.txt
","DBMS_LOB_LIBCACHE\/oracle_DBMS_LOB_9_6.txt","DBMS_LOB_LIBCACHE\/oracle_DBMS_LOB_9_22.txt","DBMS_LOB_LIBCACHE\/oracle_DBMS_LOB_9_26.txt","DBMS_LOB_LIBCACHE\/oracle_DBMS_LOB_9_5.txt","DBMS_LOB_LIBCACHE\/oracle_DBMS_LOB_9_24.txt","DBMS_LOB_LIBCACHE\/oracle_DBMS_LOB_9_27.txt","DBMS_LOB_LIBCACHE\/oracle_DBMS_LOB_9_7.txt","DBMS_LOB_LIBCACHE\/oracle_DBMS_LOB_9_30.txt","DBMS_LOB_LIBCACHE\/oracle_DBMS_LOB_9_8.txt","DBMS_LOB_LIBCACHE\/oracle_DBMS_LOB_9_13.txt"],"gold_answer":"You cannot apply an equality predicate to a LOB of any kind. Instead, either\nsubstring out and cast a ` nvarchar2 ` and apply the predicate to it, or use `\ndbms_lob.compare ` , or use ` LIKE ` without a ` % ` :\n\n \n \n delete from table1 t1 where dbms_lob.compare(t1.name,:name) = 0\n \n\nor\n\n \n \n delete from table1 t1 WHERE CAST(SUBSTR(t1.name,1,2000) AS nvarchar2(2000)) = :name\n \n\nor\n\n \n \n delete from table1 t1 where t1.name like :name\n \n\nBut while this works, it raises the more important question of why you are\nusing a LOB datatype for an identifier column. The fact that you find yourself\ndoing this suggests that a LOB is the incorrect datatype for your needs. The\nabove DMLs will work, but will not perform well. LOBs are meant for\nunstructured data that should not contain pieces of information that are\nprogrammatically meaningful."} {"query":"How to keep the shell window open after running a PowerShell script?\n\nI have a very short PowerShell script that connects to a server and imports the AD module. I'd like to run the script simply by double clicking, but I'm afraid the window immediately closes after the last line.\n\nHow can I sort this out?","reasoning":"The programmer wants to keep the window open after the script finishes running. 
The relevant article is required to solve this.","id":"72","excluded_ids":["N\/A"],"gold_ids_long":["Daniel_Schroeder\/Daniel_Schroeder.txt"],"gold_ids":["Daniel_Schroeder\/Daniel_Schroeder_4_0.txt","Daniel_Schroeder\/Daniel_Schroeder_4_3.txt","Daniel_Schroeder\/Daniel_Schroeder_4_2.txt","Daniel_Schroeder\/Daniel_Schroeder_4_1.txt"],"gold_answer":"You basically have 3 options to prevent the PowerShell Console window from\nclosing, that I describe [ in more detail on my blog post\n](http:\/\/blog.danskingdom.com\/keep-powershell-console-window-open-after-\nscript-finishes-running\/) .\n\n 1. **One-time Fix:** Run your script from the PowerShell Console, or launch the PowerShell process using the -NoExit switch. e.g. ` PowerShell -NoExit \"C:\\SomeFolder\\SomeScript.ps1\" `\n 2. **Per-script Fix:** Add a prompt for input to the end of your script file. e.g. ` Read-Host -Prompt \"Press Enter to exit\" `\n 3. **Global Fix:** Change your registry key by adding the ` -NoExit ` switch to always leave the PowerShell Console window open after the script finishes running. 
\n\n \n \n Registry Key: HKEY_CLASSES_ROOT\\Applications\\powershell.exe\\shell\\open\\command\n Description: Key used when you right-click a .ps1 file and choose Open With -> Windows PowerShell.\n Default Value: \"C:\\Windows\\System32\\WindowsPowerShell\\v1.0\\powershell.exe\" \"%1\"\n Desired Value: \"C:\\Windows\\System32\\WindowsPowerShell\\v1.0\\powershell.exe\" \"& \\\"%1\\\"\"\n \n Registry Key: HKEY_CLASSES_ROOT\\Microsoft.PowerShellScript.1\\Shell\\0\\Command\n Description: Key used when you right-click a .ps1 file and choose Run with PowerShell (shows up depending on which Windows OS and Updates you have installed).\n Default Value: \"C:\\Windows\\System32\\WindowsPowerShell\\v1.0\\powershell.exe\" \"-Command\" \"if((Get-ExecutionPolicy ) -ne 'AllSigned') { Set-ExecutionPolicy -Scope Process Bypass }; & '%1'\"\n Desired Value: \"C:\\Windows\\System32\\WindowsPowerShell\\v1.0\\powershell.exe\" -NoExit \"-Command\" \"if((Get-ExecutionPolicy ) -ne 'AllSigned') { Set-ExecutionPolicy -Scope Process Bypass }; & \\\"%1\\\"\"\n \n \n\nSee my blog for more information and a script to download that will make the\nregistry change for you."} {"query":"Typing a function param to be any JSON serializable object\n\nwhat would be the best type for this function?\nI want basically any input type that` json.dumps` can process (with or without `JSONEncoder`)\n\nFor now i'm using `Union[List[Any], Dict[Any, Any]]` but it's not exhaustive and mypy complains `Explicit \"Any\" is not allowed`.\n\n```\nimport json\nfrom typing import List, Dict\nfrom datetime import date\n\n\ndef my_function_doing_stuff_then_serializing(input: Union[List[Any], Dict[Any, Any]], **kwargs) -> None:\n json.dumps(input, **kwargs)\n```\n\nso I can do this\n\n```\nimport json\nfrom datetime import date\nfrom somewhere import my_function_doing_stuff_then_serializing\n\n\nclass DateEncoder(json.JSONEncoder):\n def default(self, obj: Any) -> Any:\n if isinstance(obj, date):\n return obj.isoformat()\n 
return super().default(obj)\n\nmy_function_doing_stuff_then_serializing([date.today()], cls=DateEncoder)\n```","reasoning":"The programmer wants to type a function parameter to be any JSON serializable object. The current choice has an error `Explicit \"Any\" is not allowed`. Therefore, a special marker, which is defined inside Python, is required indicating that an assignment should be recognized as a proper type alias definition by type checkers. ","id":"73","excluded_ids":["N\/A"],"gold_ids_long":["python_typing_tokenize\/python_typing.txt"],"gold_ids":["python_typing_tokenize\/python_typing_2_0.txt"],"gold_answer":"Using a current version of mypy, you can now use recursive a ` TypeAlias ` ,\nwhich is a \"special marker indicating that an assignment should be recognized\nas a proper type alias definition by type checkers\".\n\nIn your case, you could do either:\n\n \n \n from typing import List, Dict, Union\n JsonType = Union[None, int, str, bool, List[JsonType], Dict[str, JsonType]]\n \n\nor, using the TypeAlias marker for checkers:\n\n \n \n from typing import List, Dict, Union, TypeAlias\n JsonType: TypeAlias = Union[None, int, str, bool, List[JsonType], Dict[str, JsonType]]\n \n\nAnd [ as mentioned in this discussion\n](https:\/\/github.com\/python\/typing\/issues\/182) , there are many more ways to\ndefine such aliases according to your specific needs."} {"query":"Hamcrest matcher for checking return value of method in collection\n\n`hasProperty` can be used with `hasItem` to check for the value of a given property, eg:\n```\nMatcher hasName = Matchers.<Person>hasProperty(\"name\", is(\"Winkleburger\"));\nassertThat(names, hasItem(hasName));\n```\n\nThis is fine when name is a property, ie: there is a method called `getName()`.\n\nIs there a matcher that will check a method that isn't a property? 
ie: in this case, it will check the return value of the method `name()` rather than `getName()`, for items in the collection.","reasoning":"The question is about using the Hamcrest matcher to check the return value of a method in a collection, not the value of a property. The sample code uses hasProperty and hasItem matchers to check the value of the name property of the Person object. A matcher is needed that can check the return value of a method that is not a property, or a way to help an existing matcher do so.","id":"74","excluded_ids":["N\/A"],"gold_ids_long":["hamcrest_classes\/hamcrest_classes.txt"],"gold_ids":["hamcrest_classes\/hamcrest_classes_36_1.txt","hamcrest_classes\/hamcrest_classes_36_0.txt"],"gold_answer":"You can use another Hamcrest built-in for this, a [ FeatureMatcher\n](http:\/\/hamcrest.org\/JavaHamcrest\/javadoc\/1.3\/org\/hamcrest\/FeatureMatcher.html)\n. These are designed to be combined with other matchers after they transform\nyour input into something else. So in your case you would do something like\nthis:\n\n \n \n @Test\n public void test1() {\n List<Person> names = new ArrayList<>();\n names.add(new Person(\"Bob\"));\n names.add(new Person(\"i\"));\n \n assertThat(names, hasItem(name(equalTo(\"Winkleburger\"))));\n }\n \n private FeatureMatcher<Person, String> name(Matcher<String> matcher) {\n return new FeatureMatcher<Person, String>(matcher, \"name\", \"name\") {\n @Override\n protected String featureValueOf(Person actual) {\n return actual.name();\n }\n };\n }\n \n\nThe benefit you'll get with this over a custom matcher is that it's fully\nreusable and composable with other matchers as all it does is the data-\nextraction then defers to whatever other matcher you want. You're also going\nto get appropriate diagnostics, e.g. 
in the above example you will if you\nchange the assert to a value that's not present you will receive:\n\n \n \n java.lang.AssertionError: \n Expected: a collection containing name \"Batman\"\n but: name was \"Bob\", name was \"Winkleburger\""} {"query":"Can javascript sort like windows?\n\nI need a way in Javascript to sort strings as in Windows but it seems it's impossible.\n\nWindows explorer sorts like this:\n```\n1.jpg - 2.jpg - 3.jpg - ....\n```\n\nWhile Javascript sorts like this:\n```\n1.jpg - 10.jpg - 11.jpg - 2.jpg -...\n```\n\nWindows sorts based on the numeral value in the filename while Javascript just sorts by a characters' ASCII code.\n\nSometimes filenames aren't just numbers or text but a combination of both, e.G.:\n```\n\"mark 01 in school.jpg\"\n\"mark 02 in school.jpg\"\n\"john 05 in theater.jpg\"\n```\n\nWhat I need is a Javascript function that sorts like shown above.\n\nMy question is: is there a function in JS or how can I implement one on my own?","reasoning":"A JS function or related algorithm that can sort strings like what Windows can do is required.","id":"75","excluded_ids":["N\/A"],"gold_ids_long":["Miigon_blog\/Miigon_blog.txt"],"gold_ids":["Miigon_blog\/Miigon_blog_0_1.txt","Miigon_blog\/Miigon_blog_0_0.txt"],"gold_answer":"You will have to write your own sort function and pass it as argument to sort\nmethod. Simple example:\n\n \n \n your_array.sort(sortfunction)\n \n function sortfunction(a, b){\n var num1 = extractNumberFrom(a);\n var num2 = extractNumberFrom(b);\n return num1 - num2;\n }"} {"query":"How can I subtract a number from string elements in R?\n\nI have a long string. 
The part is\n```\nx <- \"Text1 q10_1 text2 q17 text3 q22_5 ...\"\n```\n\nHow can I subtract 1 from each number after \"q\" letter to obtain the following?\n```\ny <- \"Text1 q9_1 text2 q16 text3 q21_5 ...\"\n```\n\nI can extract all my numbers from x:\n```\nnumbers <- stringr::str_extract_all(x, \"(?<=q)\\\\d+\")\nnumbers <- as.integer(numbers[[1]]) - 1\n```\n\nBut how can I update x with these new numbers?\n\nThe following is not working\n```\nstringr::str_replace_all(x, \"(?<=q)\\\\d+\", as.character(numbers))\n```","reasoning":"The programmer wants to subtract 1 from each number after the \"q\" letter. The use of regular expressions is recommended: a solution can capture the part of the string to be replaced, then perform the modification and reinsertion.","id":"76","excluded_ids":["N\/A"],"gold_ids_long":["R_base_tools\/R_base_all.txt","R_base_tools\/R_micropan.txt"],"gold_ids":["R_base_tools\/R_base_all_416_0.txt","R_base_tools\/R_base_all_416_1.txt","R_base_tools\/R_micropan_26_0.txt"],"gold_answer":"I learned today that ` stringr::str_replace_all ` will take a function:\n\n \n \n stringr::str_replace_all(\n x, \n \"(?<=q)\\\\d+\", \n \\(x) as.character(as.integer(x) - 1)\n )"} {"query":"Model translation in Django Rest Framework\n\nI'm developing an API with Django Rest Framework, and I'd need some models with a few fields that should support translation in multiple languages then, of course, serializers should have to retrieve the field with the expected language. I've thought about two options: adding extra fields to the model (one field for language) or creating another model with all texts in every language. On the other hand, I've seen that there are some libraries such as django-modeltranslation that are intended to solve that issue, however, I'd like to know some opinions about them. What do you think? 
What would you recommend to me?\n\nThank you very much","reasoning":"Design options for implementing a multilingual translation model in the Django Rest Framework are addressed through the use of third-party libraries such as django-modeltranslation. The related notes need to be found and looked at carefully.","id":"77","excluded_ids":["N\/A"],"gold_ids_long":["django_modeltranslation\/django_modeltranslation.txt"],"gold_ids":["django_modeltranslation\/django_modeltranslation_2_0.txt"],"gold_answer":"As per [ documentation ](https:\/\/django-\nmodeltranslation.readthedocs.io\/en\/latest\/caveats.html#using-in-combination-\nwith-django-rest-framework) ,\n\n## Using in combination with django-rest-framework\n\nWhen creating a new viewset , make sure to override get_queryset method, using\nqueryset as a property won\u2019t work because it is being evaluated once, before\nany language was set.\n\n## ****\n\nSo depending on your needs you can create a class inheriting from drf (for\nexample ` ViewSet ` ), obtaining the language code from request and replacing\nthe ` get_queryset ` method to filter by language.\n\n## EDIT\n\nTo implement this, you would typically:\n\nDetect the language from the request, using either the URL, query parameters,\nor headers. Modify the queryset in the get_queryset method of your viewset to\nfilter or annotate results with the correct translation, based on the detected\nlanguage. For example:\n\n \n \n from django.utils.translation import get_language\n from rest_framework import viewsets\n \n class YourModelViewSet(viewsets.ModelViewSet):\n queryset = YourModel.objects.all()\n serializer_class = YourModelSerializer\n \n def get_queryset(self):\n lang = get_language()\n # Adjust the query based on the language, e.g., filtering or annotating\n return super().get_queryset().filter(language=lang)"} {"query":"Lossy compression method: uint16 to a uint8?\n\nI'm looking for suggestions on lossy data compression methods. 
I need to compress a uint16 to a uint8 such that the resolution loss increases with increasing uint16 values. I am currently using the following:\n\n```\ndef log2_compress(x: np.uint16) -> np.uint8:\n # Add l to avoid log2(0)\n y = math.log2(x + 1)\n # log2(1) = 0.\n # log2(65536) = 16.\n # Scale from [0,16] to [0,255].\n return np.uint8((y \/ 16) * 255)\n```\n\nThis is simple but has the downside that it is wasteful on the `uint8` side, namely\n\n```\nlog2_compress(0) = 0\nlog2_compress(1) = 15\nlog2_compress(2) = 25\n```\nso `[1,14]`, `[16,24]`, etc. aren't used. Can someone suggest a method similar to `log2_compress` but uses more (all?) of the `uint8` bits? I am not really concerned about performance either.","reasoning":"There is a need for a method of lossy compression of uint16 to uint8, which requires that the loss of resolution increases as the value of uint16 increases. The log2_compress method currently used is simple but has a low utilisation rate in the uint8 range. One possible approach is to use mathematical graphical analyses, such as rising slopes implying smaller uint16 input values.","id":"78","excluded_ids":["N\/A"],"gold_ids_long":["wiki_function\/wiki_function.txt"],"gold_ids":["wiki_function\/wiki_function_1_5.txt","wiki_function\/wiki_function_1_4.txt"],"gold_answer":"Depending on what you mean by\n\n> such that the resolution loss increases with increasing ` uint16 ` values\n\nyou might want to try different compression functions. Without any further\ninformation about the context, we will assume that you are looking for an\nincreasing [ concave function\n](https:\/\/en.wikipedia.org\/wiki\/Concave_function) to map [0~65535] to [0~255].\n\nYou can try first the function ` sqrt ` :\n\n \n \n def sqrt_compress(x: np.uint16) -> np.uint8:\n return np.uint8(math.sqrt(x))\n \n\nIf the smoothness of this function is not exactly what you need, you can try\nother powers rather that 1\/2 for the square root. 
You can add the power as a\nparameter ` p ` that will have an effect on the smoothness of the compression\nfunction:\n\n \n \n def power_compress_p(x: np.uint16, p: float = 2) -> np.uint8:\n \"\"\"Compression function. Higher values of p use fewer bits.\"\"\"\n return np.uint8(2 ** (8 - 16 \/ p) * x ** (1 \/ p))\n \n\nFor your use case, use ` p > 1 ` .\n\nThe power functions obviously are not the only compression functions that exist; you\ncould use a dictionary to define a map function if you want better control\nover the granularity."} {"query":"How can I get LLM to only respond in JSON strings?\n\nThis is how I am defining the executor\n```\nconst executor = await initializeAgentExecutorWithOptions(tools, model, {\n agentType: 'chat-conversational-react-description',\n verbose: false,\n});\n```\n\nWhenever I prompt the AI I have this statement at the end.\n```\ntype SomeObject = {\n field1: number,\n field2: number,\n}\n\n- It is very critical that you answer only as the above object and JSON stringify it as a single string.\n Don't include any other verbose explanations and don't include the markdown syntax anywhere.\n```\n\nThe `SomeObject` is just an example. Usually it will have a proper object type. When I use the `executor` to get a response from the AI, half the time I get the proper JSON string, but the other half of the time the AI completely ignores my instructions and gives me a long verbose answer in just plain English...\n\nHow can I make sure I always get the structured data answer I want? Maybe using the `agentType: 'chat-conversational-react-description'` isn't the right approach here?","reasoning":"The programmer wanted a guaranteed way to ensure the output is JSON. 
However his problem was solved in a feature update from Openai, and the article in question needs to be found.","id":"79","excluded_ids":["N\/A"],"gold_ids_long":["Openai_Text_Generation\/Openai_Text_Generation.txt"],"gold_ids":["Openai_Text_Generation\/Openai_Text_Generation_0_4.txt","Openai_Text_Generation\/Openai_Text_Generation_0_0.txt","Openai_Text_Generation\/Openai_Text_Generation_0_3.txt","Openai_Text_Generation\/Openai_Text_Generation_0_1.txt"],"gold_answer":"## Update Nov. 6, 2023\n\nOpenAI announced today a new \u201cJSON Mode\u201d at the DevDay Keynote. When activated\nthe model will only generate responses using the ` JSON ` format.\n\nYou can refer to the official docs [ here\n](https:\/\/platform.openai.com\/docs\/guides\/text-generation\/json-mode) .\n\n## Original Answer\n\nThat's a great question and ` LangChain ` provides an easy solution. Look at\nLangChain's ` Output Parsers ` if you want a quick answer. It is the\nrecommended way to process LLM output into a specified format.\n\nHere's the official link from the docs:\n\n * **JavaScript** : [ https:\/\/js.langchain.com\/docs\/modules\/model_io\/output_parsers\/ ](https:\/\/js.langchain.com\/docs\/modules\/model_io\/output_parsers\/)\n * Python: [ https:\/\/python.langchain.com\/docs\/modules\/model_io\/output_parsers\/ ](https:\/\/python.langchain.com\/docs\/modules\/model_io\/output_parsers\/)\n\n* * *\n\nSide note: I wrote an introductory tutorial about this particular issue but\nfor Python, so if anyone else is interested in more details you can [ check it\nout here ](https:\/\/www.gettingstarted.ai\/how-to-langchain-output-parsers-\nconvert-text-to-objects\/) .\n\nThe example below does not use ` initializeAgentExecutorWithOptions ` , but\nwill ensure that the output is processed as ` JSON ` without specifying this\nexplicitly in your system prompt.\n\n## How it works\n\nIn order to tell LangChain that we'll need to convert the LLM response to a `\nJSON ` output, we'll need to define a ` 
StructuredOutputParser ` and pass it\nto our ` chain ` .\n\n### Defining our ` parser ` :\n\nHere's an example:\n\n \n \n \/\/ Let's define our parser\n const parser = StructuredOutputParser.fromZodSchema(\n z.object({\n field1: z.string().describe(\"first field\"),\n field2: z.string().describe(\"second field\")\n })\n );\n \n\n### Adding it to our ` Chain ` :\n\n \n \n \/\/ We can then add it to our chain\n const chain = RunnableSequence.from([\n PromptTemplate.fromTemplate(...),\n new OpenAI({ temperature: 0 }),\n parser, \/\/ <-- this line\n ]);\n \n\n### Invoking our chain with ` format_instructions ` :\n\n \n \n \/\/ Finally, we'll pass the format instructions to the invoke method\n const response = await chain.invoke({\n question: \"What is the capital of France?\",\n format_instructions: parser.getFormatInstructions(), \/\/ <-- this line\n });\n \n\nGo ahead and log the ` parser.getFormatInstructions() ` method before you call\n` invoke ` if you'd like to see the output.\n\nWhen we pass ` parser.getFormatInstructions() ` to the ` format_instructions `\nproperty, this lets LangChain append the desired ` JSON ` schema that we\ndefined in step 1 to our prompt before sending it to the large language model.\n\nAs a final point, it is absolutely **critical** to make sure your query\/prompt\nis relevant and produces values that could be interpreted as the properties in\nyour object ` SomeObject ` that are defined in the ` parser ` .\n\nPlease give this a try, and let me know if you're able to consistently output\n` JSON ` ."} {"query":"Enqueue javascript with type=\"module\"\n\nI want to use `countUp.js` on my custom theme in Wordpress.\n\nWhen I add the file with `wp_enqueue_script()`, I get an error:\n\n```\nUncaught SyntaxError: Unexpected token 'export'\n```\n\nI've read that it can be fixed by setting `type=\"module\"` on the `<script>` tag, but I don't know how to do that as that option doesn't exist in `wp_enqueue_script()`...\n\nCan anyone help 
me?","reasoning":"In order to properly load JavaScript files with ES6 module syntax in WordPress (such as countUp.js), it is necessary to explore the hook functions and filter mechanisms provided by WordPress. These built-in tools allow developers to modify and extend core functionality at specific stages of execution. Consider using the script_loader_tag filter to inject the type=\"module\" attribute into the output <script> tag by adding an appropriate callback function. This approach provides an elegant solution without modifying the core code, taking full advantage of WordPress' powerful extensibility. Of course, the implementation details require reviewing the usage and parameters of the WordPress hook functions, and weighing compatibility with the existing theme framework along with other factors.","id":"80","excluded_ids":["N\/A"],"gold_ids_long":["WP_Scripts\/WP_Scripts.txt"],"gold_ids":["WP_Scripts\/WP_Scripts_14_1.txt","WP_Scripts\/WP_Scripts_14_0.txt"],"gold_answer":"One can add attributes to a script by applying filter ' [ script_loader_tag\n](https:\/\/developer.wordpress.org\/reference\/hooks\/script_loader_tag\/) '.\n\nUse ` add_filter('script_loader_tag', 'add_type_attribute' , 10, 3); ` to add\nthe filter.\n\nDefine the callback function like the example given on the link above:\n\n \n \n function add_type_attribute($tag, $handle, $src) {\n \/\/ if not your script, do nothing and return original $tag\n if ( 'your-script-handle' !== $handle ) {\n return $tag;\n }\n \/\/ change the script tag by adding type=\"module\" and return it.\n $tag = '<script type=\"module\" src=\"' . esc_url( $src ) . '\"><\/script>';\n return $tag;\n }"} {"query":"python packages installing order\n\nwhen we install python packages using pip, sometimes it installs some other packages. for example installing torch requires installing numpy. imagine in a requirements.txt file there are packages with their versions, but not in a right order. how can we realize the right order? 
(for example if numpy version is specified as 1.1 but previous package in the list automatically installs numpy of last version!)\n\nI installed librosa==0.9.2 which installed scipy 1.10 during its installation. but after that there is scipy=1.8 in the list. doesn't it make a conflict?","reasoning":"When managing Python package dependencies and installation order within a project, relying solely on a requirements.txt file may lead to version conflicts and other issues. The need to explore a more standardized way of organizing project metadata is becoming increasingly apparent. An emerging standard worth considering could potentially provide a more controlled and deterministic solution, allowing developers to more precisely constrain dependency package versions during environment setup, thereby effectively avoiding potential package version conflicts. This standard is required.","id":"81","excluded_ids":["N\/A"],"gold_ids_long":["pyproject_toml_Project_Summaries\/pyproject_toml.txt"],"gold_ids":["pyproject_toml_Project_Summaries\/pyproject_toml_1_0.txt"],"gold_answer":"Installation order should not matter at all if you use modern tools (Python,\npip, etc.) and libraries that are packaged with modern techniques ( [ `\n[build-system] ` section in ` pyproject.toml `\n](https:\/\/packaging.python.org\/en\/latest\/specifications\/declaring-build-\ndependencies\/) ).\n\nIn the case that you are presenting, it could be that _pip_ had to build\n_librosa_ (because no compatible _wheel_ could be found). _pip_ does builds in\nephemeral one-time-use isolated virtual environments, so maybe _pip_ used\n_scipy 1.10_ to build _librosa_ . But what was installed in your actual\nvirtual environment was a different version of _scipy_ ( _1.8_ ), it is not an\nissue in itself. 
There are some cases where a library is only compatible at\nrun-time with the library it was built with."} {"query":"Is there a fiber api in .net?\n\nOut of more curiosity than anything I've been looking for a set of C#\/.net classes to support fibers\/co-routines (the win32 version) and haven't had any luck.\n\nDoes anybody know of such a beast?","reasoning":"The programmer wants to look for C#\/.net classes to support fibers\/co-routines. Please check whether such a class exists; if not, the official documentation should provide the reason.","id":"82","excluded_ids":["N\/A"],"gold_ids_long":["Process_Threads\/Process_Threads.txt"],"gold_ids":["Process_Threads\/Process_Threads_0_0.txt","Process_Threads\/Process_Threads_0_1.txt"],"gold_answer":"No. There isn't a Fiber API in the Framework. I suspect this is because there\nis little advantage to using them - even the [ fiber API page\n](http:\/\/msdn.microsoft.com\/en-us\/library\/ms682661%28VS.85%29.aspx) (native)\nmentions:\n\n> In general, fibers do not provide advantages over a well-designed\n> multithreaded application.\n\n.NET makes it so much easier to develop a \"well-designed\" multithreaded\napplication that I suspect there is little use for a fiber API."} {"query":"How do I compare strings in Java?\n\nI've been using the `==` operator in my program to compare all my strings so far. However, I ran into a bug, changed one of them into `.equals()` instead, and it fixed the bug.\n\nIs `==` bad? When should it and should it not be used? What's the difference?","reasoning":"The programmer wants to know the deep difference between `.equals()` and `==` in Java for String comparison. The definition of `.equals()` needs to be rechecked carefully. 
The official explanation is required.","id":"83","excluded_ids":["N\/A"],"gold_ids_long":["Java_Class_Objects_Observable\/Java_Class_Objects.txt"],"gold_ids":["Java_Class_Objects_Observable\/Java_Class_Objects_1_0.txt"],"gold_answer":"` == ` tests for reference equality (whether they are the same object).\n\n` .equals() ` tests for value equality (whether they contain the same data).\n\n[ Objects.equals()\n](http:\/\/docs.oracle.com\/javase\/7\/docs\/api\/java\/util\/Objects.html#equals\\(java.lang.Object,%20java.lang.Object\\))\nchecks for ` null ` before calling ` .equals() ` so you don't have to\n(available as of JDK7, also available in [ Guava\n](https:\/\/github.com\/google\/guava\/wiki\/CommonObjectUtilitiesExplained#equals)\n).\n\nConsequently, if you want to test whether two strings have the same value you\nwill probably want to use ` Objects.equals() ` .\n\n \n \n \/\/ These two have the same value\n new String(\"test\").equals(\"test\") \/\/ --> true \n \n \/\/ ... but they are not the same object\n new String(\"test\") == \"test\" \/\/ --> false \n \n \/\/ ... neither are these\n new String(\"test\") == new String(\"test\") \/\/ --> false \n \n \/\/ ... but these are because literals are interned by \n \/\/ the compiler and thus refer to the same object\n \"test\" == \"test\" \/\/ --> true \n \n \/\/ ... string literals are concatenated by the compiler\n \/\/ and the results are interned.\n \"test\" == \"te\" + \"st\" \/\/ --> true\n \n \/\/ ... but you should really just call Objects.equals()\n Objects.equals(\"test\", new String(\"test\")) \/\/ --> true\n Objects.equals(null, \"test\") \/\/ --> false\n Objects.equals(null, null) \/\/ --> true\n \n\nFrom the Java Language Specification [ JLS 15.21.3. 
Reference Equality\nOperators ` == ` and ` != `\n](https:\/\/docs.oracle.com\/javase\/specs\/jls\/se21\/html\/jls-15.html#jls-15.21.3)\n:\n\n> While ` == ` may be used to compare references of type ` String ` , such an\n> equality test determines whether or not the two operands refer to the same `\n> String ` object. The result is ` false ` if the operands are distinct `\n> String ` objects, even if they contain the same sequence of characters ( [\n> \u00a73.10.5\n> ](https:\/\/docs.oracle.com\/javase\/specs\/jls\/se21\/html\/jls-3.html#jls-3.10.5)\n> , [ \u00a73.10.6\n> ](https:\/\/docs.oracle.com\/javase\/specs\/jls\/se21\/html\/jls-3.html#jls-3.10.6)\n> ). The contents of two strings ` s ` and ` t ` can be tested for equality by\n> the method invocation ` s.equals(t) ` .\n\nYou almost **always** want to use ` Objects.equals() ` . In the **rare**\nsituation where you **know** you're dealing with [ interned\n](https:\/\/docs.oracle.com\/javase\/8\/docs\/api\/java\/lang\/String.html#intern--)\nstrings, you _can_ use ` == ` .\n\nFrom [ JLS 3.10.5. _String Literals_\n](https:\/\/docs.oracle.com\/javase\/specs\/jls\/se8\/html\/jls-3.html#jls-3.10.5) :\n\n> Moreover, a string literal always refers to the _same_ instance of class `\n> String ` . This is because string literals - or, more generally, strings\n> that are the values of constant expressions ( [ \u00a715.28\n> ](https:\/\/docs.oracle.com\/javase\/specs\/jls\/se8\/html\/jls-15.html#jls-15.28) )\n> - are \"interned\" so as to share unique instances, using the method `\n> String.intern ` .\n\nSimilar examples can also be found in [ JLS 3.10.5-1\n](https:\/\/docs.oracle.com\/javase\/specs\/jls\/se8\/html\/jls-3.html#d5e1634) .\n\n### Other Methods To Consider\n\n[ String.equalsIgnoreCase()\n](https:\/\/docs.oracle.com\/javase\/8\/docs\/api\/java\/lang\/String.html#equalsIgnoreCase-\njava.lang.String-) value equality that ignores case. 
Beware, however, that\nthis method can have unexpected results in various locale-related cases, see [\nthis question ](https:\/\/stackoverflow.com\/questions\/44238749\/equalsignorecase-\nnot-working-as-intended) .\n\n[ String.contentEquals()\n](https:\/\/docs.oracle.com\/javase\/8\/docs\/api\/java\/lang\/String.html#contentEquals-\njava.lang.CharSequence-) compares the content of the ` String ` with the\ncontent of any ` CharSequence ` (available since Java 1.5). Saves you from\nhaving to turn your StringBuffer, etc into a String before doing the equality\ncomparison, but leaves the null checking to you."} {"query":"Event binding on dynamically created elements?\n\nI have a bit of code where I am looping through all the select boxes on a page and binding a .hover event to them to do a bit of twiddling with their width on mouse on\/off.\n\nThis happens on page ready and works just fine.\n\nThe problem I have is that any select boxes I add via Ajax or DOM after the initial loop won't have the event bound.\n\nI have found this plugin (jQuery Live Query Plugin), but before I add another 5k to my pages with a plugin, I want to see if anyone knows a way to do this, either with jQuery directly or by another option.","reasoning":"Explore how to use jQuery to bind events to dynamically created elements without introducing additional plugin libraries. Initially, when the page is loaded, the loop binds the .hover event to the existing select element successfully, but subsequent select elements added dynamically via Ajax or DOM cannot bind the event automatically. 
A built-in jQuery method or other technique is needed that can bind events to dynamically created elements, so that the interactive behaviour of all select elements can be managed uniformly.","id":"84","excluded_ids":["N\/A"],"gold_ids_long":["jQuery_api\/jQuery_api.txt"],"gold_ids":["jQuery_api\/jQuery_api_0_1.txt","jQuery_api\/jQuery_api_0_0.txt","jQuery_api\/jQuery_api_0_5.txt","jQuery_api\/jQuery_api_0_4.txt","jQuery_api\/jQuery_api_0_3.txt","jQuery_api\/jQuery_api_0_2.txt"],"gold_answer":"**As of jQuery 1.7** you should use [ ` jQuery.fn.on `\n](https:\/\/api.jquery.com\/on\/#on-events-selector-data-handler) with the\nselector parameter filled:\n\n \n \n $(staticAncestors).on(eventName, dynamicChild, function() {});\n \n\n_Explanation:_\n\nThis is called event delegation and works as follows. The event is attached\nto a static parent ( ` staticAncestors ` ) of the element that should be\nhandled. This jQuery handler is triggered every time the event triggers on\nthis element or one of the descendant elements. The handler then checks if the\nelement that triggered the event matches your selector ( ` dynamicChild ` ).\nWhen there is a match then your custom handler function is executed.\n\n* * *\n\n**Prior to this** , the recommended approach was to use [ ` live() `\n](http:\/\/api.jquery.com\/live) :\n\n \n \n $(selector).live( eventName, function(){} );\n \n\nHowever, ` live() ` was deprecated in 1.7 in favour of ` on() ` , and\ncompletely removed in 1.9. The ` live() ` signature:\n\n \n \n $(selector).live( eventName, function(){} );\n \n\n... 
can be replaced with the following [ ` on() ` ](http:\/\/api.jquery.com\/on\/)\nsignature:\n\n \n \n $(document).on( eventName, selector, function(){} );\n \n\n* * *\n\nFor example, if your page was dynamically creating elements with the class\nname ` dosomething ` you would bind the event to **a parent which already\nexists** (this is the nub of the problem here, you need something that exists\nto bind to, don't bind to the dynamic content), this can be (and the easiest\noption) is ` document ` . Though bear in mind [ ` document ` may not be the\nmost efficient option ](https:\/\/stackoverflow.com\/questions\/12824549\/should-\nall-jquery-events-be-bound-to-document) .\n\n \n \n $(document).on('mouseover mouseout', '.dosomething', function(){\n \/\/ what you want to happen when mouseover and mouseout \n \/\/ occurs on elements that match '.dosomething'\n });\n \n\nAny parent that exists at the time the event is bound is fine. For example\n\n \n \n $('.buttons').on('click', 'button', function(){\n \/\/ do something here\n });\n \n\nwould apply to\n\n \n \n <div class=\"buttons\">\n <!-- <button>s that are generated dynamically and added here -->\n <\/div>"} {"query":"How to access the correct `this` inside a callback\n\nI have a constructor function which registers an event handler:\n```\nfunction MyConstructor(data, transport) {\n this.data = data;\n transport.on('data', function () {\n alert(this.data);\n });\n}\n\n\/\/ Mock transport object\nvar transport = {\n on: function(event, callback) {\n setTimeout(callback, 1000);\n }\n};\n\n\/\/ called as\nvar obj = new MyConstructor('foo', transport);\n```\n\nHowever, I'm not able to access the `data` property of the created object inside the callback. 
It looks like `this` does not refer to the object that was created, but to another one.\n\nI also tried to use an object method instead of an anonymous function:\n```\nfunction MyConstructor(data, transport) {\n this.data = data;\n transport.on('data', this.alert);\n}\n\nMyConstructor.prototype.alert = function() {\n alert(this.name);\n};\n```\n\nbut it exhibits the same problems.\n\nHow can I access the correct object?","reasoning":"In the constructor `MyConstructor`, the event listener is registered via `transport.on('data', callback)`, but the `callback` function does not have access to the data property of the `MyConstructor` instance internally, and `this` points to some other object. The same problem exists even if the callback is defined as a `MyConstructor.prototype.alert` instance method. A solution needs to be explored by fully checking the relevant documentation.","id":"85","excluded_ids":["N\/A"],"gold_ids_long":["Expressions_operators\/Expressions_operators.txt"],"gold_ids":["Expressions_operators\/Expressions_operators_35_12.txt","Expressions_operators\/Expressions_operators_35_11.txt","Expressions_operators\/Expressions_operators_35_9.txt","Expressions_operators\/Expressions_operators_35_10.txt","Expressions_operators\/Expressions_operators_35_13.txt","Expressions_operators\/Expressions_operators_35_15.txt","Expressions_operators\/Expressions_operators_35_14.txt"],"gold_answer":"## What you should know about ` this `\n\n` this ` (aka \"the context\") is a special keyword inside each function and its\nvalue only depends on _how_ the function was called, not how\/when\/where it was\ndefined. It is not affected by lexical scopes like other variables (except for\narrow functions, see below). 
Here are some examples:\n\n \n \n function foo() {\n console.log(this);\n }\n \n \/\/ normal function call\n foo(); \/\/ `this` will refer to `window`\n \n \/\/ as object method\n var obj = {bar: foo};\n obj.bar(); \/\/ `this` will refer to `obj`\n \n \/\/ as constructor function\n new foo(); \/\/ `this` will refer to an object that inherits from `foo.prototype`\n \n\nTo learn more about ` this ` , have a look at the [ MDN documentation\n](https:\/\/developer.mozilla.org\/en-\nUS\/docs\/Web\/JavaScript\/Reference\/Operators\/this) .\n\n* * *\n\n## How to refer to the correct ` this `\n\n### Use [ arrow functions ](https:\/\/developer.mozilla.org\/en-\nUS\/docs\/Web\/JavaScript\/Reference\/Functions\/Arrow_functions)\n\nECMAScript 6 introduced _arrow functions_ , which can be thought of as lambda\nfunctions. They don't have their own ` this ` binding. Instead, ` this ` is\nlooked up in scope just like a normal variable. That means you don't have to\ncall ` .bind ` . That's not the only special behavior they have, please refer\nto the MDN documentation for more information.\n\n \n \n function MyConstructor(data, transport) {\n this.data = data;\n transport.on('data', () => alert(this.data));\n }\n \n\n### Don't use ` this `\n\nYou actually don't want to access ` this ` in particular, but _the object it\nrefers to_ . That's why an easy solution is to simply create a new variable\nthat also refers to that object. The variable can have any name, but common\nones are ` self ` and ` that ` .\n\n \n \n function MyConstructor(data, transport) {\n this.data = data;\n var self = this;\n transport.on('data', function() {\n alert(self.data);\n });\n }\n \n\nSince ` self ` is a normal variable, it obeys lexical scope rules and is\naccessible inside the callback. 
This also has the advantage that you can\naccess the ` this ` value of the callback itself.\n\n### Explicitly set ` this ` of the callback - part 1\n\nIt might look like you have no control over the value of ` this ` because its\nvalue is set automatically, but that is actually not the case.\n\nEvery function has the method [ ` .bind ` _ [docs] _\n](https:\/\/developer.mozilla.org\/en-\nUS\/docs\/Web\/JavaScript\/Reference\/Global_Objects\/Function\/bind) , which returns\na new function with ` this ` bound to a value. The function has exactly the\nsame behavior as the one you called ` .bind ` on, only that ` this ` was set\nby you. No matter how or when that function is called, ` this ` will always\nrefer to the passed value.\n\n \n \n function MyConstructor(data, transport) {\n this.data = data;\n var boundFunction = (function() { \/\/ parenthesis are not necessary\n alert(this.data); \/\/ but might improve readability\n }).bind(this); \/\/ <- here we are calling `.bind()` \n transport.on('data', boundFunction);\n }\n \n\nIn this case, we are binding the callback's ` this ` to the value of `\nMyConstructor ` 's ` this ` .\n\n**Note:** When a binding context for jQuery, use [ ` jQuery.proxy ` _ [docs]\n_ ](http:\/\/api.jquery.com\/jQuery.proxy\/) instead. The reason to do this is so\nthat you don't need to store the reference to the function when unbinding an\nevent callback. jQuery handles that internally.\n\n### Set ` this ` of the callback - part 2\n\nSome functions\/methods which accept callbacks also accept a value to which the\ncallback's ` this ` should refer to. This is basically the same as binding it\nyourself, but the function\/method does it for you. 
[ ` Array#map ` _ [docs] _\n](https:\/\/developer.mozilla.org\/en-\nUS\/docs\/Web\/JavaScript\/Reference\/Global_Objects\/Array\/map) is such a method.\nIts signature is:\n\n \n \n array.map(callback[, thisArg])\n \n\nThe first argument is the callback and the second argument is the value ` this\n` should refer to. Here is a contrived example:\n\n \n \n var arr = [1, 2, 3];\n var obj = {multiplier: 42};\n \n var new_arr = arr.map(function(v) {\n return v * this.multiplier;\n }, obj); \/\/ <- here we are passing `obj` as second argument\n \n\n**Note:** Whether or not you can pass a value for ` this ` is usually\nmentioned in the documentation of that function\/method. For example, [\njQuery's ` $.ajax ` method _ [docs] _ ](http:\/\/api.jquery.com\/jQuery.ajax\/)\ndescribes an option called ` context ` :\n\n> This object will be made the context of all Ajax-related callbacks.\n\n* * *\n\n## Common problem: Using object methods as callbacks\/event handlers\n\nAnother common manifestation of this problem is when an object method is used\nas callback\/event handler. Functions are first-class citizens in JavaScript\nand the term \"method\" is just a colloquial term for a function that is a value\nof an object property. But that function doesn't have a specific link to its\n\"containing\" object.\n\nConsider the following example:\n\n \n \n function Foo() {\n this.data = 42,\n document.body.onclick = this.method;\n }\n \n Foo.prototype.method = function() {\n console.log(this.data);\n };\n \n\nThe function ` this.method ` is assigned as click event handler, but if the `\ndocument.body ` is clicked, the value logged will be ` undefined ` , because\ninside the event handler, ` this ` refers to the ` document.body ` , not the\ninstance of ` Foo ` . \nAs already mentioned at the beginning, what ` this ` refers to depends on how\nthe function is **called** , not how it is **defined** . 
\nIf the code was like the following, it might be more obvious that the function\ndoesn't have an implicit reference to the object:\n\n \n \n function method() {\n console.log(this.data);\n }\n \n \n function Foo() {\n this.data = 42,\n document.body.onclick = this.method;\n }\n \n Foo.prototype.method = method;\n \n\n**The solution** is the same as mentioned above: If available, use ` .bind `\nto explicitly bind ` this ` to a specific value\n\n \n \n document.body.onclick = this.method.bind(this);\n \n\nor explicitly call the function as a \"method\" of the object, by using an\nanonymous function as callback \/ event handler and assign the object ( ` this\n` ) to another variable:\n\n \n \n var self = this;\n document.body.onclick = function() {\n self.method();\n };\n \n\nor use an arrow function:\n\n \n \n document.body.onclick = () => this.method();"} {"query":"How can I disable UWP WebView2's drag and drop?\n\nIn my `UWP` application, I use a webview2 (from \"Microsoft.UI.Xaml.Controls\"). And the web content shown inside of it can be dragged and dropped.\n\nI want to disable this \"drag-and-drop\" feature, and I tried a lot of ways, but all failed. E.g., `AllowDrop=\"False\" CanDrag=\"False\"`\n\nHow can I do it?\n\nXAML code:\n```\nxmlns:controls=\"using:Microsoft.UI.Xaml.Controls\"\n\n <controls:WebView2 x:Name=\"_WebView2\" Height=\"370\" Width=\"792\"\n NavigationStarting=\"WebView2_NavigationStarting\"\/>\n```\n\nC#:\n```\nawait _WebView2.EnsureCoreWebView2Async();\n_WebView2.CoreWebView2.Navigate(sourceStr);\n```","reasoning":"WebView2 displays web content that can be dragged and dropped, and the asker now wants to turn this feature off. However, attempts such as AllowDrop=\"False\" CanDrag=\"False\" fail. 
Please provide XAML and C# code samples.","id":"86","excluded_ids":["N\/A"],"gold_ids_long":["CoreWebView2\/CoreWebView2.txt"],"gold_ids":["CoreWebView2\/CoreWebView2_1_1.txt"],"gold_answer":"Try this:\n\n \n \n await _WebView2.EnsureCoreWebView2Async();\n _WebView2.CoreWebView2.Controller.AllowExternalDrop = false;\n \n\nIt works in my test application. You can see [ the documentation\n](https:\/\/learn.microsoft.com\/en-\nus\/dotnet\/api\/microsoft.web.webview2.core.corewebview2controller.allowexternaldrop?view=webview2-dotnet-1.0.2365.46)\n."} {"query":"java.lang.OutOfMemoryError: Java heap space\n\nI am getting the following error on execution of a multi-threading program\n```\njava.lang.OutOfMemoryError: Java heap space\n```\n\nThe above error occured in one of the threads.\n\n1. Upto my knowledge, Heap space is occupied by instance variables only. If this is correct, then why this error occurred after running fine for sometime as space for instance variables are alloted at the time of object creation.\n\n2. Is there any way to increase the heap space?\n\n3. What changes should I made to my program so that It will grab less heap space?","reasoning":"During the execution of a multi-threaded Java programme, the error \"java.lang.OutOfMemoryError: Java heap space\" is encountered. A way to increase the heap space size of the Java Virtual Machine needs to be found.","id":"87","excluded_ids":["N\/A"],"gold_ids_long":["oracle\/oracle.txt"],"gold_ids":["oracle\/oracle_2_0.txt","oracle\/oracle_2_1.txt","oracle\/oracle_2_4.txt","oracle\/oracle_2_3.txt"],"gold_answer":"If you want to increase your heap space, you can use ` java -Xms<initial heap\nsize> -Xmx<maximum heap size> ` on the command line. By default, the values\nare based on the JRE version and system configuration. 
You can find out [ more\nabout the VM options on the Java website\n](http:\/\/java.sun.com\/javase\/technologies\/hotspot\/vmoptions.jsp) .\n\nHowever, I would recommend profiling your application to find out why your\nheap size is being eaten. NetBeans has a [ very good profiler\n](http:\/\/profiler.netbeans.org\/) included with it. I believe it uses the [ `\njvisualvm `\n](http:\/\/java.sun.com\/javase\/6\/docs\/technotes\/guides\/visualvm\/index.html)\nunder the hood. With a profiler, you can try to find where many objects are\nbeing created, when objects get garbage collected, and more."} {"query":"Getting Error during build: RollupError: expression expected after migrating from vue-cli to vite\n\nI just migrated from vue-cli to vite. Serving locally works fine, but during the build, I got the following error:\n```\nx Build failed in 221ms\nerror during build:\nRollupError: Expression expected\n at getRollupError (file:\/\/\/home\/pc\/Desktop\/proj\/node_modules\/rollup\/dist\/es\/shared\/parseAst.js:379:41)\n at ParseError.initialise (file:\/\/\/home\/pc\/pc\/proj\/node_modules\/rollup\/dist\/es\/shared\/node-entry.js:11172:28)\n at convertNode (file:\/\/\/home\/officeubuntu23\/pc\/proj\/node_modules\/rollup\/dist\/es\/shared\/node-entry.js:12914:10)\n at convertProgram (file:\/\/\/home\/officeubuntu23\/pc\/proj\/node_modules\/rollup\/dist\/es\/shared\/node-entry.js:12234:12)\n at Module.setSource (file:\/\/\/home\/officeubuntu23\/pc\/proj\/node_modules\/rollup\/dist\/es\/shared\/node-entry.js:14073:24)\n at async ModuleLoader.addModuleSource (file:\/\/\/home\/pc\/Desktop\/proj\/node_modules\/rollup\/dist\/es\/shared\/node-entry.js:18712:13)\n```\n\nI have tried updating node and npm, reinstalling rollup and doing `npm update` and `npm install`.","reasoning":"Build errors can be related to mismatched configurations after migration, incompatible dependency versions, build scripts not updated, etc. 
There needs to be relevant guidance documentation to help programmers sift through the errors step by step.","id":"88","excluded_ids":["N\/A"],"gold_ids_long":["vue_school\/vue_school.txt"],"gold_ids":["vue_school\/vue_school_2_2.txt","vue_school\/vue_school_2_3.txt","vue_school\/vue_school_2_1.txt","vue_school\/vue_school_2_0.txt"],"gold_answer":"We encountered that exact same error on my team as well. A team member was\nusing Linux, so he set up some symlinks to a few required resources. The rest\nof us were using Windows, so the symlinks didn't play nice with Windows\/Vite.\n\nTry checking your project for symlinks and see if that is causing the error."} {"query":"r convert month year to ordered numeric\n\nI have a dataset with column where the values are month year in format like this\n```\n M_Yr\n March 1990\n April 1990\n May 1990\n June 1990\n July 1990\n Aug 1990\n Sept 1990\n Oct 1990\n Nov 1990\n Dec 1990\n Jan 1991\n Feb 1991\n March 1991\n April 1991\n May 1991\n June 1991\n July 1991\n Aug 1991\n Sept 1991\n Oct 1991\n Nov 1991\n Dec 1991\n```\n\nI tried this approach.\n```\ndf$Col1 <- as.numeric(df$M_Yr)\n```\n\nThis does convert the `month Year` variable to numeric but the order is scrambled and not in right sequence. So I am wondering what is an efficient way to create this numeric variable without writing a lengthy `case_when` statement.\n\nAny suggestion is much appreciated. Thanks.","reasoning":"This is a question about how to convert month and year values to an ordered numeric format. The given dataset contains a column \"M_Yr\" with values in the format \"Month Year\" as strings, e.g. \"March 1990\". The goal is to convert these values to an ordered numeric representation without using a lengthy case_when statement. 
A possible solution is to leverage functions from a relative package, which is specifically designed for working with dates and times, making it convenient to parse and manipulate month and year information.","id":"89","excluded_ids":["N\/A"],"gold_ids_long":["lubridate\/lubridate.txt"],"gold_ids":["lubridate\/lubridate_4_1.txt","lubridate\/lubridate_4_0.txt"],"gold_answer":"# Page not found\n\nThis question was removed from Stack Overflow for reasons of moderation.\nPlease refer to the help center for [ possible explanations why a question\nmight be removed ](\/help\/deleted-questions)."} {"query":"Calculate weighted average by date in R\n\nI'm new to R and having some difficulty aggregating by date. I have time-series data with multiple price and quantity entries per date. 
The actual data is more complex, but it looks something like this:\n```\nprice<-c(1.50,3,1.50,3,3,2.90,3)\nquantity<-c(10,5,10,5,5,5,5)\ndate<-c('01\/09\/21','01\/09\/21','01\/16\/21','01\/16\/21','01\/23\/21','01\/30\/21','01\/30\/21')\ndf<-data.frame(date,price,quantity)\n```\n\n```\ndate price quantity\n01\/09\/21 1.5 10\n01\/09\/21 3 5\n01\/16\/21 1.5 10\n01\/16\/21 3 5\n01\/23\/21 3 5\n01\/30\/21 2.9 5\n01\/30\/21 3 5\n```\n\nI'd like to create a new data frame with only the four individual dates and a single price value for each. To do so, I'm trying to calculate the weighted average of price on each individual date, similar to the example below:\n```\ndate price_weighted\n01\/09\/21 2\n01\/16\/21 2\n01\/23\/21 3\n01\/30\/21 2.95\n```\n\nI've tried using `price_weighted<-aggregate(price~date,df,weighted.mean)`, which returns something similar to what I want, but for some reason it's calculating the average price rather than the weighted average. Any suggestions would be appreciated!","reasoning":"This is a question about calculating the weighted average of prices grouped by date in R. The given data frame contains multiple entries of prices and quantities for the same date. The goal is to create a new data frame with a single row for each unique date and the corresponding single weighted average price value calculated. 
Although attempting to use the weighted.mean function did not yield the expected result, other functions and data manipulation techniques provided by R can be explored to calculate the correct weighted average based on the grouped dates and their associated price and quantity data.","id":"90","excluded_ids":["N\/A"],"gold_ids_long":["R_base_all\/R_base_all.txt"],"gold_ids":["R_base_all\/R_base_all_207_0.txt","R_base_all\/R_base_all_404_0.txt","R_base_all\/R_base_all_404_1.txt"],"gold_answer":"You can do this with dplyr by adding up the price times the quantity divided\nby the total quantity in that day.\n\n \n \n library(dplyr)\n price<-c(1.50,3,1.50,3,3,2.90,3)\n quantity<-c(10,5,10,5,5,5,5)\n date<-c('01\/09\/21','01\/09\/21','01\/16\/21','01\/16\/21','01\/23\/21','01\/30\/21','01\/30\/21')\n df<-data.frame(date,price,quantity)\n \n df %>% \n group_by(date) %>% \n summarise(wt_mean = sum(price * quantity\/sum(quantity)))\n #> # A tibble: 4 \u00d7 2\n #> date wt_mean\n #> <chr> <dbl>\n #> 1 01\/09\/21 2 \n #> 2 01\/16\/21 2 \n #> 3 01\/23\/21 3 \n #> 4 01\/30\/21 2.95\n \n\nYou could also do this with ` by() ` or ` tapply() ` from base R:\n\n \n \n by(df, list(df$date), function(x)Hmisc::wtd.mean(x$price, weights=x$quantity))\n #> : 01\/09\/21\n #> [1] 2\n #> ------------------------------------------------------------ \n #> : 01\/16\/21\n #> [1] 2\n #> ------------------------------------------------------------ \n #> : 01\/23\/21\n #> [1] 3\n #> ------------------------------------------------------------ \n #> : 01\/30\/21\n #> [1] 2.95\n \n tapply(df, df$date, function(x)Hmisc::wtd.mean(x$price, weights=x$quantity))\n #> 01\/09\/21 01\/16\/21 01\/23\/21 01\/30\/21 \n #> 2.00 2.00 3.00 2.95\n \n\nCreated on 2024-04-13 with [ reprex v2.0.2 ](https:\/\/reprex.tidyverse.org)"} {"query":"How to convert MJD time in UTC (only date, no time) in order to make a plot\n\nI have a list of MJD time. 
I need to convert it in list UTC but only the date (year\/month\/day) and no time, so a list that I can use to make a plot with time in x-axis. Thanks\n\nI tried with astropy and the command Time('list',forma='mjd').iso but i have also the time, and I am not able to delete.","reasoning":"Utilizing the appropriate date and time formatting tools or libraries, when converting MJD values to dates, it is necessary to specify outputting only the date portion without including the time part. This may involve using specific formatting parameters or methods to control the output format.","id":"91","excluded_ids":["N\/A"],"gold_ids_long":["astropy_Time\/astropy_Time.txt"],"gold_ids":["astropy_Time\/astropy_Time_43_0.txt"],"gold_answer":"I would utilize the ` subfmt ` specification to extract only the date and not\nthe time as follows. I created an array of MJD times and then convert them to\nUTC and derive only the dates as follows\n\n \n \n from astropy.time import Time\n import numpy as np\n \n mjd_array = np.arange(51545,51550,1)\n time_array = Time(mjd_array,format='mjd')\n \n print(time_array.to_value(format='iso', subfmt='date'))\n \n \n\nThis prints the following as output\n\n \n \n ['2000-01-02' '2000-01-03' '2000-01-04' '2000-01-05' '2000-01-06']\n \n\nThis should cover your use case I believe. 
Alternatively, if I followed what\nyou were doing I would print this, and that will have both date and time\n\n \n \n print(time_array.to_value(format='iso'))\n \n \n \n ['2000-01-02 00:00:00.000' '2000-01-03 00:00:00.000'\n '2000-01-04 00:00:00.000' '2000-01-05 00:00:00.000'\n '2000-01-06 00:00:00.000']"} {"query":"applying strsplit on data.frame results in unexpected output\n\nI have one dataframe and two functions:\n\nMy dataframe:\n```\ns_words<-c(\"one,uno\",\"two,dos\",\"three,tres\",\"four,cuatro\")\nn_nums<-c(10,20,30,40)\ndf1 <- data.frame(n_nums,s_words) \n> df1\n n_nums s_words\n1 10 one,uno\n2 20 two,dos\n3 30 three,tres\n4 40 four,cuatro\n```\n\nMy two functions:\n```\nf_op1 <- function(s_input) {\n s_ret = paste0(\"***\",s_input,\"***\")\n return(s_ret)\n}\n\n\nf_op2 <- function(s_input) {\n a_segments=unlist(strsplit(s_input,split=\"\\\\W+\"))\n s_eng = a_segments[1]\n s_spa = a_segments[2]\n s_ret = paste0(\"*\",s_eng,\"***\",s_spa,\"*\")\n return(s_ret)\n}\n```\n\nWhen I apply my functions on the dataframe ....\n```\ndf1$s_op1 <- f_op1(df1$s_words)\ndf1$s_op2 <- f_op2(df1$s_words)\n```\n\nI get this:\n```\n> df1\n n_nums s_words s_op1 s_op2\n1 10 one,uno ***one,uno*** *one***uno*\n2 20 two,dos ***two,dos*** *one***uno*\n3 30 three,tres ***three,tres*** *one***uno*\n4 40 four,cuatro ***four,cuatro*** *one***uno*\n```\n\nBut I need this, something like:\n```\n> df1\n n_nums s_words s_op1 s_op2\n1 10 one,uno ***one,uno*** *one***uno*\n2 20 two,dos ***two,dos*** *two***dos*\n3 30 three,tres ***three,tres*** *three***tres*\n4 40 four,cuatro ***four,cuatro*** *four***cuatro*\n```\n\nf_op2 is only for demonstration purposes, in reality it is more complex and uses \"strsplit\". I think there is some problem with strsplit, but I'm not sure, I'm a beginner in R. 
Thanks in advance for your explanations.\n\nI have searched a lot for help but I can't find the solution.","reasoning":"When working with dataframes in R, difficulties were encountered with string manipulation, in particular when attempting to apply a specific string splitting and processing function to each element in the column, the output was not as expected. More appropriate methods need to be explored to ensure that the function acts correctly on each element to achieve the desired output format.","id":"92","excluded_ids":["N\/A"],"gold_ids_long":["R_base_all\/R_base_all.txt"],"gold_ids":["R_base_all\/R_base_all_295_0.txt","R_base_all\/R_base_all_351_0.txt","R_base_all\/R_base_all_351_2.txt","R_base_all\/R_base_all_351_1.txt","R_base_all\/R_base_all_295_1.txt"],"gold_answer":"` library(tidyverse) `\n\nAlternative approach - just mutate 2 new columns instead of writing a function\nfor each column\n\n \n \n df1 %>% \n mutate(s_op1 = str_c(\"***\", s_words, \"***\")) %>% \n mutate(s_op2 = str_c(\"*\", str_replace(s_words, \",\", \"***\"), \"*\"))"} {"query":"Trying to subscribe to oneSignal in a React Native App\n\nHello I have created a new 17.1 ReactNative app and have incorporated OneSignal.\n\nAccording to the example it should subscribe but it does not.\n\nI am also testing this on a physical device.\n\nThis runs, no errors and i get the prompt the first time to allow notifications.\n\nBut the user is not subscribed.\n\nI tried adding\n```\nOneSignal.sendTag('my app id', true);\n```\n\nbut got the error sendTag is undefined\n\nmy code looks like\n```\nconst App = () => {\n\n \/\/ Remove this method to stop OneSignal Debugging\n OneSignal.Debug.setLogLevel(LogLevel.Verbose);\n\n \/\/ OneSignal Initialization\n OneSignal.initialize(\"ONESIGNAL_APP_ID\");\n\n \/\/ requestPermission will show the native iOS or Android notification permission prompt.\n \/\/ We recommend removing the following code and instead using an In-App Message to prompt for notification 
permission\n OneSignal.Notifications.requestPermission(true);\n\n \/\/ Method for listening for notification clicks\n OneSignal.Notifications.addEventListener('click', (event) => {\n console.log('OneSignal: notification clicked:', event);\n });\n\n const isDarkMode = useColorScheme() === 'dark';\n const backgroundStyle = {\n backgroundColor: isDarkMode ? Colors.darker : Colors.lighter,\n };\n\n return (\n <SafeAreaView style={backgroundStyle}>\n <StatusBar\n barStyle={isDarkMode ? 'light-content' : 'dark-content'}\n backgroundColor={backgroundStyle.backgroundColor}\n \/>\n <ScrollView\n contentInsetAdjustmentBehavior=\"automatic\"\n style={backgroundStyle}>\n <Header \/>\n <View\n style={{\n backgroundColor: isDarkMode ? Colors.black : Colors.white,\n }}>\n <Button\n title=\"Press me\"\n onPress={subscribe}\n \/>\n <LearnMoreLinks \/>\n <\/View>\n <\/ScrollView>\n <\/SafeAreaView>\n );\n };\n```","reasoning":"After initialising OneSignal correctly, you need to call the appropriate method to subscribe the user to receive push notifications. Setting up the correct application ID before initialising OneSignal ensures that the OneSignal service gets valid application credentials and can subscribe users to push notifications for that application.","id":"93","excluded_ids":["N\/A"],"gold_ids_long":["Client_Side\/Client_Side.txt"],"gold_ids":["Client_Side\/Client_Side_14_1.txt"],"gold_answer":"I use\n\n \n \n OneSignal.setAppId(\"ONESIGNAL_APP_ID\")\n \n\ninstead of\n\n \n \n OneSignal.initialize(\"ONESIGNAL_APP_ID\")\n \n\nTo assign a user \/ phone to the OneSignal application. Maybe this could help!"} {"query":"Bot renaming user by user's command\n\nI've been having a problem with some project I 've been doing for the last 3-4 days on discord. 
It has to do with bots of course and the language I chose is javascript (discord.js).\n\nSo, the thing seems kinda simple but I am really stuck in this cause I have only a little experience with javascript.\n\nThe bot is supposed to read two values on a message, those values are a string and a number. The bot will simply nickname you the string and that number.\n\nexample: User says: john123 40 bot: renaming the user as \" John123 | 40 \"\n\nThe nicknaming command and such are the easy part, the hard one for me is how should I tell the bot \"take the string, put it left of the \"|\", take the number, put it right of the \"|\" \". I mean the bot can't even read them. Here is my try:\n\n```\nvar name = message.content.includes(String)\nvar number = message.content.includes(\"1\"|| \"2\"|| \"3\"|| \"4\"|| \"5\"|| \"6\"|| \"7\"|| \"8\"|| \"9\"|| \"10\"|| \"11\"|| \"12\"|| \"13\"|| \"14\"|| \"15\"|| \"16\"|| \"17\"|| \"18\"|| \"19\"|| \"20\"|| \"21\"|| \"22\"|| \"23\"|| \"24\"|| \"25\"|| \"26\"|| \"27\"|| \"28\"|| \"29\"|| \"30\"|| \"31\"|| \"32\"|| \"33\"|| \"34\"|| \"35\"|| \"36\"|| \"37\"|| \"38\"|| \"39\"|| \"40\")\n\nfunction theNaming (name, number){\nmessage.member.setNickname('name'|' number')\n.then(console.log)\n.catch(console.error);\n}\n```\n\n(the level is supposed to not go higher than 40 so, I thought it may work inside the include)","reasoning":"A new nickname needs to be set for a user based on the string and number in the message entered by the user, in the format \"String | Number\". Functions for character manipulation are required.","id":"94","excluded_ids":["N\/A"],"gold_ids_long":["String_prototype\/String_prototype.txt"],"gold_ids":["String_prototype\/String_prototype_30_4.txt","String_prototype\/String_prototype_30_5.txt","String_prototype\/String_prototype_30_3.txt"],"gold_answer":"> How should I tell the bot \"take the string, put it left of the \"|\", take the\n> number, put it right of the \"|\" \". 
I mean the bot can't even read them.\n\nThe bot can read them if you pass the message in proper format. Your question\nshould have mentioned that ` name ` and ` number ` will be separated by space.\nThis is how the bot recognizes ` name ` and ` number ` from entered message.\n\n[ String.split() ](https:\/\/developer.mozilla.org\/en-\nUS\/docs\/Web\/JavaScript\/Reference\/Global_Objects\/String\/split) can be use to\nsplit the message string into ` name ` and ` number ` .\n\n \n \n var message = \"john123 40\";\r\n var words = message.split(' '); \/\/ Using space as the separator.\r\n \r\n console.log(words);\n\nNow ` words ` array contains strings from the message which are separated by\nspace. And you can access them using their index.\n\n \n \n var name = words[0];\n var number = words[1];\n \n\nThen pass them as arguments to the naming function like ` theNaming(name,\nnumber); `"} {"query":"undefined kafka components for Go kafka\n\nI was trying to install one of my go files. But I bumped into this error\n```\nC:\\mygoproject>go install kafkapublisher.go\n\n\\#command-line-arguments\n.\\kafkapublisher.go:8:65: undefined: kafka.Message\n\n.\\kafkapublisher.go:10:19: undefined: kafka.NewProducer\n\n.\\kafkapublisher.go:10:38: undefined: kafka.ConfigMap\n\n.\\kafkapublisher.go:17:31: undefined: kafka.Event\n\n.\\kafkapublisher.go:19:26: undefined: kafka.Message\n```\n\nOn my kafkapublisher.go file, I already imported the kafka dependency:\n```\n import (\n \"github.com\/confluentinc\/confluent-kafka-go\/kafka\"\n \"log\"\n )\n```\n\neven on my `go.mod` file\n```\n module mymodule\n \n go 1.12\n \n require (\n github.com\/aws\/aws-lambda-go v1.15.0\n github.com\/confluentinc\/confluent-kafka-go v1.3.0\n )\n```","reasoning":"Kafka Go client is based on the C library. 
Therefore, the relevant flag needs to be set so that Go can use C libraries for the Kafka client.","id":"95","excluded_ids":["N\/A"],"gold_ids_long":["medium\/medium.txt"],"gold_ids":["medium\/medium_0_1.txt","medium\/medium_0_0.txt"],"gold_answer":"I was facing the same issues.\n\nKafka Go client is based on the C library. So setting flag ` CGO_ENABLED=1 `\nwill enable go to use C libraries for kafka client.\n\nHope it saves someone's time."} {"query":"Python requests module with multithread\n\nCurrently, I am developing the astrophotometric software for multiple telescopes. For this, I constructed a mother computer connected with multiple telescopes and these telescopes can be controlled by HTTP protocol. For synchronized operation, I am trying to control these multiple telescopes simultaneously with multithreading.\nHowever, when I retrieve image data (110MB for each, ~1.2GB for total) from multiple telescopes (10 telescopes), the data transfer speed is much slower than I expected. For seamless operation, we have 10G connection with the mother computer, and 1G connection with 10 telescopes. 
I expected ~9Gbps data transfer speed when 10 telescopes transfer the data simultaneously, but only 1.5Gbps achieved.\n\nFor simple test, I extracted the part of the code and check the time consumption.\n```\nimport requests\nimport time\nfrom astropy.time import Time\ndef request_imagearray(cam):\n client_trans_id = 1\n client_id = 1\n attribute = 'imagearray'\n\n url = f\"{cam.device.base_url}\/{attribute}\"\n hdrs = {'accept' : 'application\/imagebytes'}\n # Make Host: header safe for IPv6\n if(cam.device.address.startswith('[') and cam.device.address.startswith('[::1]')):\n hdrs['Host'] = f'{cam.device.address.split(\"%\")[0]}]'\n pdata = {\n \"ClientTransactionID\": f\"{client_trans_id}\",\n \"ClientID\": f\"{client_id}\" \n } \n\n print('START:',Time.now(), cam.device.address)\n start = time.time()\n response = requests.request(\"GET\",\"%s\/%s\" % (cam.device.base_url, attribute), params=pdata, headers=hdrs, verify = False)\n\n print('consumed time:', time.time() - start, cam.device.address)\n# %%\n\nunitnumlist = [1,2,3,5,6,7,8,9,10,11]\ncamlist = []\nfor unitnum in unitnumlist:\n #camlist.append(mainCamera(unitnum))\n Thread(target = request_imagearray, kwargs = dict(cam = mainCamera(unitnum))).start()\n#%% \n```\n\nand the output is\n```\nSTART: 2024-04-15 07:49:59.067946 10.0.106.6:11111\nSTART: 2024-04-15 07:49:59.382973 10.0.106.7:11112\nSTART: 2024-04-15 07:49:59.541495 10.0.106.8:11113\nSTART: 2024-04-15 07:49:59.647055 10.0.106.10:11111\nSTART: 2024-04-15 07:49:59.788433 10.0.106.11:11111\nSTART: 2024-04-15 07:49:59.897876 10.0.106.12:11111\nSTART: 2024-04-15 07:50:00.009254 10.0.106.13:11111\nSTART: 2024-04-15 07:50:00.157893 10.0.106.14:11111\nSTART: 2024-04-15 07:50:00.347704 10.0.106.16:11111\nSTART: 2024-04-15 07:50:00.544626 10.0.106.9:11111\nconsumed time: 5.96204686164856 10.0.106.6:11111\nconsumed time: 7.304373502731323 10.0.106.7:11112\nconsumed time: 7.58618688583374 10.0.106.8:11113\nconsumed time: 7.617574453353882 
10.0.106.10:11111\nconsumed time: 7.678117990493774 10.0.106.11:11111\nconsumed time: 7.76300311088562 10.0.106.12:11111\nconsumed time: 7.636215925216675 10.0.106.14:11111\nconsumed time: 7.785021066665649 10.0.106.13:11111\nconsumed time: 7.338518857955933 10.0.106.9:11111\nconsumed time: 11.309057235717773 10.0.106.16:11111\n```\n\nThe consumed time is much longer than I expected. I expected ~3sec for all data transferred. It seems that multithreads starts to request \"request.get\" at the same time, but all the data is not transferred at the same time.","reasoning":"While the use of multiple threads is intended to achieve parallelism, the frequent creation and destruction of request session objects can introduce additional overheads and reduce overall performance. In contrast, maintaining a pool of request sessions and reusing session objects across multiple threads reduces the overhead of connection establishment and resource allocation.","id":"96","excluded_ids":["N\/A"],"gold_ids_long":["Request_Advanced\/concurrent_futures.txt","Request_Advanced\/Request_Advanced.txt"],"gold_ids":["Request_Advanced\/concurrent_futures_2_1.txt","Request_Advanced\/concurrent_futures_2_0.txt","Request_Advanced\/Request_Advanced_1_0.txt"],"gold_answer":"Use ` requests.Session ` to open one connection instead of reopen it every `\nrequest.request ` .\n\nI would also suggest using [ ThreadPoolExecutor\n](https:\/\/docs.python.org\/3\/library\/concurrent.futures.html#concurrent.futures.ThreadPoolExecutor)\nfor a more convenient threading interface\n\n \n \n import time\n from concurrent.futures.thread import ThreadPoolExecutor\n from itertools import repeat\n \n import requests\n from astropy.time import Time\n \n \n def request_imagearray(cam, _session):\n client_trans_id = 1\n client_id = 1\n attribute = 'imagearray'\n \n url = f\"{cam.device.base_url}\/{attribute}\"\n hdrs = {'accept': 'application\/imagebytes'}\n # Make Host: header safe for IPv6\n if 
cam.device.address.startswith('[') and cam.device.address.startswith('[::1]'):\n hdrs['Host'] = f'{cam.device.address.split(\"%\")[0]}]'\n pdata = {\n \"ClientTransactionID\": f\"{client_trans_id}\",\n \"ClientID\": f\"{client_id}\"\n }\n \n print('START:', Time.now(), cam.device.address)\n start = time.time()\n response = _session.get(url, params=pdata, headers=hdrs, verify=False)\n print('consumed time:', time.time() - start, cam.device.address)\n \n \n unitnumlist = [1, 2, 3, 5, 6, 7, 8, 9, 10, 11]\n camlist = [mainCamera(unitnum) for unitnum in unitnumlist]\n with requests.Session() as session:\n with ThreadPoolExecutor(max_workers=max(len(unitnumlist), 1)) as executor:\n results = executor.map(request_imagearray, camlist, repeat(session))\n \n\nThe assignment to ` results ` is in case ` request_imagearray ` returns\nsomething."} {"query":"writing code to determine how many pages will be printed based on the string of p\n\nI am programming a printer like program in python and the question write a function \"count_pages(p) that returns how many pages will be printed based on the string p\n\nI tried doing the count_pages function to return how many pages. 
the arguments and return value are\n\n\"5-7, 2\" 4\n\n\"12-18, 20-20\" 8\n\n\"18-12, 20-20, 5-6\"\n\nI ran into the error of:\n```\nTraceback (most recent call last):\n File \"C:\/Users\/19013\/Desktop\/1900\/printer_party.py\", line 19, in <module>\n print(count_pages('5-7, 2')) # Expected output: 4\n File \"C:\/Users\/19013\/Desktop\/1900\/printer_party.py\", line 11, in count_pages\n start, end = map(int, r.split('-'))\nValueError: not enough values to unpack (expected 2, got 1)\n```\n\n```\ndef count_pages(p):\n # Split the input string by commas\n ranges = p.split(', ')\n \n # Initialize the total page count\n total_pages = 0\n \n # Iterate through each range\n for r in ranges:\n # Split the range by hyphen\n start, end = map(int, r.split('-'))\n \n # Add the number of pages in this range to the total\n total_pages += end - start + 1\n \n return total_pages\n\n# Example usage\nprint(count_pages('5-7, 2')) # Expected output: 4\nprint(count_pages('12-18, 20-20')) # Expected output: 8\nprint(count_pages('18-12, 20-20, 5-6')) # Expected output: 10\n```","reasoning":"The problem is to write a Python function `count_pages(p)` to determine how many pages the supplied string will print. The string consists of a range of page numbers separated by hyphens and individual page numbers separated by commas. The function needs to return the total number of pages printed. A way to facilitate string parsing is needed.","id":"97","excluded_ids":["N\/A"],"gold_ids_long":["re_functions\/re_functions.txt"],"gold_ids":["re_functions\/re_functions_2_2.txt","re_functions\/re_functions_2_3.txt","re_functions\/re_functions_2_0.txt","re_functions\/re_functions_2_4.txt","re_functions\/re_functions_2_1.txt"],"gold_answer":"If not using ` re ` module then you need to deal explicitly with 2 conditions\nthat are missed in your code: 1. end comes before start in the range; 2. 
there\nis no range just a single number:\n\n \n \n def count_pages(p):\n # Split the input string by commas\n ranges = p.split(', ')\n \n # Initialize the total page count\n total_pages = 0\n \n # Iterate through each range\n for r in ranges:\n # Split the range by hyphen\n if '-' in r:\n start, end = map(int, r.split('-'))\n total_pages += abs(end - start) + 1 # in case end<start\n \n else:\n total_pages += 1 # no range, just a value\n \n return total_pages\n \n\ngives:\n\n \n \n 4\n 8\n 10"} {"query":"Pydantic - How to subclass a builtin type\n\nI'm trying to make a subclass of timedelta that expects to receive milliseconds instead of seconds, but it's not currently working.\n\nAm I going against the grain? Is there a \"right\" way to achieve this with Pydantic? Or do I somehow need to tell Pydantic that `MillisecondTimedelta` is just a `timedelta`..\n\n```\nfrom datetime import timedelta\n\nfrom pydantic import BaseModel\n\n\nclass MillisecondTimedelta(timedelta):\n @classmethod\n def __get_validators__(cls):\n # timedelta expects seconds\n yield lambda v: v \/ 1000\n yield cls\n\n\nclass MyModel(BaseModel):\n td: MillisecondTimedelta\n\ndata = {\n \"td\": 7598040,\n}\n\nprint(MyModel(**data))\n```\n\nResults in:\n```\nTraceback (most recent call last):\n File \"main.py\", line 14, in <module>\n class MyModel(BaseModel):\n File \"pydantic\/main.py\", line 262, in pydantic.main.ModelMetaclass.__new__\n File \"pydantic\/fields.py\", line 315, in pydantic.fields.ModelField.infer\n File \"pydantic\/fields.py\", line 284, in pydantic.fields.ModelField.__init__\n File \"pydantic\/fields.py\", line 362, in pydantic.fields.ModelField.prepare\n File \"pydantic\/fields.py\", line 541, in pydantic.fields.ModelField.populate_validators\n File \"pydantic\/class_validators.py\", line 255, in pydantic.class_validators.prep_validators\n File \"pydantic\/class_validators.py\", line 238, in pydantic.class_validators.make_generic_validator\n File 
\"\/usr\/lib\/python3.8\/inspect.py\", line 3105, in signature\n return Signature.from_callable(obj, follow_wrapped=follow_wrapped)\n File \"\/usr\/lib\/python3.8\/inspect.py\", line 2854, in from_callable\n return _signature_from_callable(obj, sigcls=cls,\n File \"\/usr\/lib\/python3.8\/inspect.py\", line 2384, in _signature_from_callable\n raise ValueError(\nValueError: no signature found for builtin type <class '__main__.MillisecondTimedelta'>\n```","reasoning":"Programmer creates a timedelta subclass in Pydantic that expects to receive milliseconds instead of seconds, and wants to resolve an error caused by Pydantic's inability to generate signatures for built-in types. The solution is to recheck the documentation of the referenced built-in function to fill in the missing parts.","id":"98","excluded_ids":["N\/A"],"gold_ids_long":["pydantic\/pydantic.txt"],"gold_ids":["pydantic\/pydantic_0_6.txt","pydantic\/pydantic_0_9.txt","pydantic\/pydantic_0_8.txt","pydantic\/pydantic_0_7.txt","pydantic\/pydantic_0_5.txt","pydantic\/pydantic_0_4.txt","pydantic\/pydantic_0_3.txt","pydantic\/pydantic_0_0.txt","pydantic\/pydantic_0_2.txt"],"gold_answer":"As shown [ on the doc page of ` __get_validators__() ` ](https:\/\/pydantic-\ndocs.helpmanual.io\/usage\/types\/#custom-data-types) , you need to yield one or\nmore validators.\n\nThe modified class is reported below; the problem was that Pydantic\nunderstands (for a timedelta field) int and floats as seconds [ (source)\n](https:\/\/pydantic-docs.helpmanual.io\/usage\/types\/#datetime-types) .\n\n \n \n class MillisecondTimedelta(timedelta):\n @classmethod\n def __get_validators__(cls):\n yield cls.validate\n \n @classmethod\n def validate(cls, v):\n if any(isinstance(v, t) for t in (int, float)):\n return cls(milliseconds=v)\n \n\nNow everything should work correctly.\n\n \n \n >>> data = {\"td\": 1000}\n >>> print(MyModel(**data))\n td=MillisecondTimedelta(seconds=1)\n \n\nEDIT: Without a custom class and a validator, it's 
possible to use a function\nto edit the value to assign to the class constructor; it's required to\ndecorate this function, [ as seen here ](https:\/\/pydantic-\ndocs.helpmanual.io\/usage\/validators\/)\n\n \n \n class MyModel(BaseModel):\n td: timedelta\n \n @validator('td')\n def convert_to_ms(cls, v):\n return v \/ 1000\n \n\nAlso this solution is working:\n\n \n \n >>> data = {\"td\": 3000}\n >>> print(MyModel(**data))\n td=datetime.timedelta(seconds=3)"} {"query":"Loading external javascript file in index.html of React App\n\nI have this React app in which I load my external javascript file inside the tag as below index.html:\n```\n....\n<script type=\"text\/javascript\" src=\"..\/src\/assets\/externalJavascript.js\"><\/script>\n<\/head>\n```\n\nInside externalJavascript.js, I have this below:\n```\nlet queryParams = window.location.search.substring(1);\n\nconsole.log(\"Query: \", queryParams)\n\nwindow.loadFunc = function loadFunc() {\n console.log(\"Script Loaded!\");\n };\n```\n\nI access this loadFunc() inside my desired component using window.loadFunc()\n\nAll this is not working but if I make the js file inline in the index.html, it's working! Like this below:\n```\n<script>\n (function () {\n let queryParams = window.location.search.substring(1);\n\n console.log(\"Query: \", queryParams);\n\n window.loadFunc = function loadFunc() {\n console.log(\"Script Loaded!\");\n };\n })()\n <\/script>\n```\n\nWhen my app loads, I get the query in console log and also I can access the loadFunc() only when it is inline script.\n\nPlease help me to solve this.\n\nThank you!","reasoning":"When loading an external JavaScript file in the index.html of a React app, the loadFunc() function in the file doesn't work correctly, but when the same JavaScript code is inlined into the HTML it works fine. 
The value of `type` needs to be changed to resolve the issue of loading and executing external JavaScript files.","id":"99","excluded_ids":["N\/A"],"gold_ids_long":["freeCodeCamp\/freeCodeCamp.txt"],"gold_ids":["freeCodeCamp\/freeCodeCamp_0_1.txt","freeCodeCamp\/freeCodeCamp_0_0.txt"],"gold_answer":"Instead of ` \"type=text\/javascript\" ` try ` type=\"module\" `"} {"query":"DAX RLS Function using LOOKUPVALUE Parsing but not working\n\nI have a table that I'm trying to implement RLS on using a secondary table with a structure below:\n\nEmployeeTable\n```\nEmployeeID EmployeeEmail\n1 1234@email.com\n2 4567@email.com\n```\n\nFilterTable\n```\nEmployeeID ManagerHierarchy\n1 3&4&5\n2 6&7&4&5\n```\n\nThe ManagerHierarchy column is a string that shows all managers of an employee concatenated together and separated by \"&\".\n\nThe goal of the RLS is to create a filter that allows any manager to view the report and have their data only display employeeIDs wherein their own ID exists within the ManagerHierarchy column and thus only showing their subordinates.\n\nI have the below DAX expression applied on EmployeeTable that I thought would work and parses in the expression builder, but it is giving me errors:\n```\n[EmplID]=\nLOOKUPVALUE(\nFilterTable[EmployeeID], FilterTable[ManagerHIERARCHY],\n\nLOOKUPVALUE( \/\/This is to return the viewer's own employeeID to be crossed over into the FilterTable\n[EmployeeID], [EmployeeEmail], USERPRINCIPALNAME())\n)\n```\n\nThe report it gives is as follows:\n\nAn error was encountered during the evaluation of the row level security defined on EmployeeTable. Function 'LOOKUPVALUE' does not support values of type Text with values of type integer.
Consider using the VALUE or FORMAT function to convert one of the values.\n\nI've tried re-shuffling my DAX expression to convert it as such, but I haven't been able to make it work as intended.","reasoning":"When setting up row-level security with DAX, attempting to filter the EmployeeTable based on the EmployeeID contained in the ManagerHierarchy string via the LOOKUPVALUE function encountered a mismatch between the text and integer types. The definition of LOOKUPVALUE needs to be looked at again to get the correct understanding of the error.","id":"100","excluded_ids":["N\/A"],"gold_ids_long":["Learn_filter_financial_functions\/Learn_filter.txt"],"gold_ids":["Learn_filter_financial_functions\/Learn_filter_0_0.txt","Learn_filter_financial_functions\/Learn_filter_0_1.txt"],"gold_answer":"I would avoid the need for complex DAX (hard to code and test - especially in\nthe context of security) by splitting the ManagerHierarchy column into\nmultiple rows.\n\nThe **Power Query Editor** can easily handle this - select the column and from\nthe **Home** ribbon choose **Split Column \/ By Delimiter** . Specify the \"&\"\ncharacter as the delimiter (if the editor doesn't guess that), **Split at =\neach occurrence** , and then in the **Advanced options** section choose\n**Split into = Rows** .\n\nIf you need to preserve your current data structure, this could be a new\nquery, starting by Reference to your existing FilterTable query.\n\nAfter that transformation, the DAX expression you wrote can be used to apply\nRow Level Security.\n\nPersonally I would go a step further in the Query design, and add steps to\n**Merge Queries \/ Expand** to copy the Manager's Email ID onto each row of the\nFilterTable. This would make the security implementation even easier to code\nand test. It would also be much more transparent e.g.
for a security audit.\n\nFrom experience, testing and building confidence in any Power BI security\nimplementation is difficult (can't impersonate without granting access) and\nhigh-risk. Mistakes can be really embarrassing and burn confidence in your\nsolution. The best approach is always to keep the technical details of the\nsecurity design as simple as possible, so you can involve many others in\nsigning off the testing. A solution that involves complex code only understood\nby and visible to a handful of developers would typically not pass a security\naudit."} {"query":"Consume InputIterator with C++ ranges\n\nWith an input iterator, I can consume different sub-ranges out of it, e.g.\n```\nvoid test(std::input_iterator auto it) {\n while (*it < 1) ++it; \/\/ drop_while\n int n = *it++; \/\/ take(1)\n int sum = 0; while (n-- > 0) sum += (1 + *it++); \/\/ fold_left(take(n) | transform)\n int prd = 1; while (int x = *it++ \/ 2) prd *= x; \/\/ fold_left(transform | take_while)\n std::cout << sum << ' ' << prd << '\\n';\n}\n\nint main() {\n test(std::begin({0, 0, 0, 3, 400, 30, 2, 4, 6, 0}));\n}\n```\n\nIs there a way to do the same with `std::ranges`\/`std::views`?","reasoning":"The programmer attempts to use std::ranges and std::views in C++ for operations like drop_while, take(1), fold_left(take(n) | transform) and fold_left(transform | take_while). But all of these views \/ adaptors require that the input ranges conform to a common conceptual requirement, and the definition of this particular concept needs to be found in the C++ documentation to determine if this attempt is feasible.","id":"101","excluded_ids":["N\/A"],"gold_ids_long":["C++_Ranges_library\/C++_Ranges_library.txt"],"gold_ids":["C++_Ranges_library\/C++_Ranges_library_123_3.txt","C++_Ranges_library\/C++_Ranges_library_123_2.txt"],"gold_answer":"Generally speaking, no. 
At least it won't conform with `\nstd::ranges::view_interface ` nor ` std::ranges::range_adaptor_closure `\n\nEssentially all the views \/ adaptors you mentioned require the `\nstd::ranges::viewable_range ` concept, which forbids any lvalue ` input_range\n` , implying you must hand over ownership to the adaptors used, which could not\nbe split across all.\n\nIt should be noted that OP provided an array as test range, which is a `\nforward_range ` and less strict than ` input_range ` , i.e. it supports multi-\npass instead of single-pass. A better test would be ...\n\n \n \n std::generator<int> test_range() {\n static constexpr auto data = {0, 0, 0, 3, 400, 30, 2, 4, 6, 0};\n for(int num: data) {\n co_yield num;\n }\n }\n \n int main() {\n auto data = test_range();\n static_assert(std::ranges::input_range<decltype(data)>);\n test(data);\n }\n \n\nThis ought not be interpreted as ` std::ranges ` \/ ` std::views ` is less\npowerful than ` <algorithm> ` or ` <numeric> ` though, since `\nstd::accumulate(it, it+n, ...) ` creates a copy of the iterator.\n\nNevertheless, OP has raised a good example that managing your own iterator\nstays more powerful than depending on STL utilities."} {"query":"Storing Binary String in DynamoDb\n\nI am trying to store a binary string in DynamoDB using Java. However, the builder is converting the bytes somehow. My original string is a zipped string.
If I pass:\n\nH4sIAAAAAAAAA7LJKMnNsePlsslITUyxsynJLMlJtTMxMFXwyy9RcMzJyS9PTbHRhwjb6IMVARUn5adUKiSlJ+fn5BfZKpVnZJakKoHEk1PzSlKL7GwyDDHNAIrZ6EMVgOwDKoPy8tIz8yr0DfUMDfVMkZXog6wBM6COBAAAAP\/\/AwBuTqCXrQAAAA==\n\nThe value in the database is converted to: SDRzSUFBQUFBQUFBQTdMSktNbk5zZVBsc3NsSVRVeXhzeW5KTE1sSnRUTXhNRlh3eXk5UmNNekp5UzlQVGJIUmh3amI2SU1WQVJVbjVhZFVLaVNsSitmbjVCZlpLcFZuWkpha0tvSEVrMVB6U2xLTDdHd3lEREhOQUlyWjZFTVZnT3dES29QeTh0SXo4eXIwRGZVTURmVk1rWlhvZzZ3Qk02Q09CQUFBQVAvL0F3QnVUcUNYclFBQUFBPT0=\n```\nObject responseMap = mMap.get(\"Response\");\nLinkedHashMap<String, String> responseMapped = (LinkedHashMap<String, String>)responseMap;\n\nString status = statusMapped.get(\"N\");\nString response = responseMapped.get(\"B\");\n\nSdkBytes bytes = SdkBytes.fromUtf8String(response);\n\nhistoryValueMap.put(\"Status\", AttributeValue.builder().n(status).build());\nhistoryValueMap.put(\"Response\", AttributeValue.builder().b(bytes).build());\n\nhistoryMap.put(historyKey, AttributeValue.builder().m(historyValueMap).build());\n```\n\nI also tried to convert the byte array to a base64 but that didn't work either.\n```\nString status = statusMapped.get(\"N\");\nbyte[] response = responseMapped.get(\"B\").getBytes();\n\nbyte[] base64Response = Base64.getEncoder().encode(response);\nSdkBytes responseAsSdk = SdkBytes.fromByteArray(base64Response);\n\nhistoryValueMap.put(\"Status\", AttributeValue.builder().n(status).build());\nhistoryValueMap.put(\"Response\", AttributeValue.builder().b(responseAsSdk).build());\n```","reasoning":"When trying to store binary strings to DynamoDB using Java, but the original compressed strings are converted after storage, resulting in an inability to properly store and retrieve the original binary data. 
Need to find a document that uses a similar approach to properly store and retrieve binary string data.","id":"102","excluded_ids":["N\/A"],"gold_ids_long":["DynamoDB_Security\/DynamoDB.txt"],"gold_ids":["DynamoDB_Security\/DynamoDB_0_1.txt","DynamoDB_Security\/DynamoDB_0_0.txt"],"gold_answer":"Here is a really simple example from our [ documentation\n](https:\/\/docs.aws.amazon.com\/amazondynamodb\/latest\/developerguide\/JavaDocumentAPIBinaryTypeExample.html)\n:\n\n \n \n public static void createItem(String threadId, String replyDateTime) throws IOException {\n \n Table table = dynamoDB.getTable(tableName);\n \n \/\/ Craft a long message\n String messageInput = \"Long message to be compressed in a lengthy forum reply\";\n \n \/\/ Compress the long message\n ByteBuffer compressedMessage = compressString(messageInput.toString());\n \n table.putItem(new Item().withPrimaryKey(\"Id\", threadId).withString(\"ReplyDateTime\", replyDateTime)\n .withString(\"Message\", \"Long message follows\").withBinary(\"ExtendedMessage\", compressedMessage)\n .withString(\"PostedBy\", \"User A\"));\n }\n \n public static void retrieveItem(String threadId, String replyDateTime) throws IOException {\n \n Table table = dynamoDB.getTable(tableName);\n \n GetItemSpec spec = new GetItemSpec().withPrimaryKey(\"Id\", threadId, \"ReplyDateTime\", replyDateTime)\n .withConsistentRead(true);\n \n Item item = table.getItem(spec);\n \n \/\/ Uncompress the reply message and print\n String uncompressed = uncompressString(ByteBuffer.wrap(item.getBinary(\"ExtendedMessage\")));\n \n System.out.println(\"Reply message:\\n\" + \" Id: \" + item.getString(\"Id\") + \"\\n\" + \" ReplyDateTime: \"\n + item.getString(\"ReplyDateTime\") + \"\\n\" + \" PostedBy: \" + item.getString(\"PostedBy\") + \"\\n\"\n + \" Message: \"\n + item.getString(\"Message\") + \"\\n\" + \" ExtendedMessage (uncompressed): \" + uncompressed + \"\\n\");\n }\n \n \n \n private static ByteBuffer compressString(String input) throws 
IOException {\n \/\/ Compress the UTF-8 encoded String into a byte[]\n ByteArrayOutputStream baos = new ByteArrayOutputStream();\n GZIPOutputStream os = new GZIPOutputStream(baos);\n os.write(input.getBytes(\"UTF-8\"));\n os.close();\n baos.close();\n byte[] compressedBytes = baos.toByteArray();\n \n \/\/ The following code writes the compressed bytes to a ByteBuffer.\n \/\/ A simpler way to do this is by simply calling\n \/\/ ByteBuffer.wrap(compressedBytes);\n \/\/ However, the longer form below shows the importance of resetting the\n \/\/ position of the buffer\n \/\/ back to the beginning of the buffer if you are writing bytes directly\n \/\/ to it, since the SDK\n \/\/ will consider only the bytes after the current position when sending\n \/\/ data to DynamoDB.\n \/\/ Using the \"wrap\" method automatically resets the position to zero.\n ByteBuffer buffer = ByteBuffer.allocate(compressedBytes.length);\n buffer.put(compressedBytes, 0, compressedBytes.length);\n buffer.position(0); \/\/ Important: reset the position of the ByteBuffer\n \/\/ to the beginning\n return buffer;\n }\n \n private static String uncompressString(ByteBuffer input) throws IOException {\n byte[] bytes = input.array();\n ByteArrayInputStream bais = new ByteArrayInputStream(bytes);\n ByteArrayOutputStream baos = new ByteArrayOutputStream();\n GZIPInputStream is = new GZIPInputStream(bais);\n \n int chunkSize = 1024;\n byte[] buffer = new byte[chunkSize];\n int length = 0;\n while ((length = is.read(buffer, 0, chunkSize)) != -1) {\n baos.write(buffer, 0, length);\n }\n \n String result = new String(baos.toByteArray(), \"UTF-8\");\n \n is.close();\n baos.close();\n bais.close();\n \n return result;\n }"} {"query":"How can I pass a variable from CSV file to Oracle SQL query fetch in Python?\n\nI have the following piece of code where I read a csv file and connect to the database. 
Then I want to pass two columns from CSV file as variables to my query and eventually convert the result to a pd database.\n\nI have tried different ways of binding and converted columns to list but I was unsuccessful. With this piece of code I get the following error:\n```\nDatabaseError: DPY-4010: a bind variable replacement value \nfor placeholder \":HOUR\" was not provided\n```\n\nOr I get the below error when I add this part to execute():\n```\nres = connection.cursor().execute(\"SELECT HOUR,UNITSCHEDULEID,VERSIONID,MINRUNTIME FROM \n int_Stg.UnitScheduleOfferHourly WHERE Hour in :1 AND UnitScheduleId in :2\", hour, unitid)\n```\n```\nTypeError: Cursor.execute() takes from 2 to 3 positional arguments but 4 were given\n```\n\nThe following is the code I execute:\n```\nimport pandas as pd\nimport numpy as np\n\ndf = pd.read_csv(r\"csv.csv\") \n\ndf=df.dropna()\n\nunitid=df['UNITSCHEDULEID'].unique()\nhour=df['GMT_DATETIME'].unique()\n\nimport os, oracledb, csv, pyodbc\nTNS_Admin = os.environ.get('TNS_Admin', r'\\\\corp.xxx\\oracle')\noracledb.defaults.config_dir = TNS_Admin\npw = input(f'Enter password: ')\nconnection = oracledb.connect(user='xxxxx', password= pw, dsn=\"World\")\n\nres = connection.cursor().execute(\"SELECT HOUR,UNITSCHEDULEID,VERSIONID,MINRUNTIME FROM \n int_Stg.UnitScheduleOfferHourly WHERE Hour in :Hour AND UnitScheduleId in :unitid\").fetchall()\nprint(res)\n\nconnection.close()\n```","reasoning":"The user is trying to read data from a CSV file and then pass it as a variable to an Oracle database SQL query, but is encountering an error that the bound variable substitution value is not provided.
A different method than the one involved in the code needs to be found in the Python OracleDB documentation to resolve this reported error.","id":"103","excluded_ids":["N\/A"],"gold_ids_long":["Bind_variables_Managing_Transactions\/Bind_variables.txt"],"gold_ids":["Bind_variables_Managing_Transactions\/Bind_variables_13_0.txt"],"gold_answer":"As the Python OracleDB documentation for [ Binding Multiple Values to a SQL `\nWHERE IN ` Clause ](https:\/\/python-\noracledb.readthedocs.io\/en\/latest\/user_guide\/bind.html#binding-multiple-\nvalues-to-a-sql-where-in-clause) states, you need to generate a statement with\na bind variable for every value in the arrays and then pass in those values to\nthe bind variables:\n\n \n \n query = \"\"\"SELECT HOUR, UNITSCHEDULEID, VERSIONID, MINRUNTIME\n FROM int_Stg.UnitScheduleOfferHourly\n WHERE Hour in ({hour_binds})\n AND UnitScheduleId in ({id_binds})\"\"\".format(\n hour_binds=\",\".join((f\":{idx}\" for idx, _ in enumerate(hours, 1))),\n id_binds=\",\".join((f\":{idx}\" for idx, _ in enumerate(unitid, len(hours) + 1))),\n )\n \n res = connection.cursor().execute(query, (*hours, *unitid)).fetchall()\n print(res)\n \n\nIf you have more than 1000 elements in either list then split the list up into\nmultiple ` IN ` clauses.\n\n \n \n def generate_sql_in_binds(\n name: str,\n size: int,\n start: int = 1,\n max_binds: int = 1000,\n ) -> str:\n in_clauses = (\n \"{name} IN ({binds})\".format(\n name=name,\n binds=\",\".join(\n (\n f\":{b+start}\"\n for b in range(i, min(i+max_binds,size))\n )\n )\n )\n for i in range(0, size, max_binds)\n )\n return \"(\" + (\" OR \".join(in_clauses)) + \")\"\n \n query = \"\"\"SELECT HOUR, UNITSCHEDULEID, VERSIONID, MINRUNTIME\n FROM int_Stg.UnitScheduleOfferHourly\n WHERE {hour_binds}\n AND {id_binds}\"\"\".format(\n hour_binds=generate_sql_in_binds(\"hour\", len(hours)),\n id_binds=generate_sql_in_binds(\"UnitScheduleId\", len(unitid), len(hours) + 1),\n )\n \n res = 
connection.cursor().execute(query, (*hours, *unitid)).fetchall()\n print(res)"} {"query":"How to save a xml structure using python tree.write(file_name)?\n\nI wanted to add Element with Subelements to my xml file using Python. But after changing the file, the Element and Subelement look like a line, not like a xml-tree.\n\nI do this:\n```\ntree = ET.parse('existing_file.xml')\nroot = tree.getroot()\nnew_object = ET.SubElement(root, 'object')\nname = ET.SubElement(new_object, 'name')\nname.text = 'car'\ncolor = ET.SubElement(new_object, 'color')\ncolor.text = 'Red'\nnew_tree = ET.ElementTree(root)\nnew_tree.write('new_file.xml')\n```\n\nBut after this I've got a file without structure like this:\n```\n<object><name>car<\/name><color>red<\/color><\/object>\n```\n\nBut I need this:\n```\n<object>\n <name>car<\/name>\n <color>red<\/color>\n<\/object>\n```\n\nWhat do I do wrong?","reasoning":"After adding new elements and sub-elements to an XML file using Python's ElementTree (ET) library, users found that the elements and sub-elements in the resulting XML file were displayed on a single line instead of the desired tree structure.
It is necessary to find an object about xml.dom.minidom to parse the XML data in string form into a DOM object that maintains the structure, and then use the write method of that object to generate the formatted XML file.","id":"104","excluded_ids":["N\/A"],"gold_ids_long":["xml_dom\/xml_dom_minidom.txt"],"gold_ids":["xml_dom\/xml_dom_minidom_0_0.txt","xml_dom\/xml_dom_minidom_0_1.txt"],"gold_answer":"using\n\n \n \n new_tree = ET.ElementTree(root)\n ET.indent(new_tree)\n new_tree.write('new_file.xml', xml_declaration=True, short_empty_elements=True)\n \n\nor using ` xml.dom.minidom.parseString `\n\n \n \n new_object = ET.SubElement(root, 'object')\n name = ET.SubElement(new_object, 'name')\n name.text = 'car'\n color = ET.SubElement(new_object, 'color')\n color.text = 'Red'\n \n # Convert the ElementTree to a string with indentation\n xml_string = ET.tostring(root, encoding='unicode')\n \n # Use minidom to format the XML string\n dom = xml.dom.minidom.parseString(xml_string)\n formatted_xml = dom.toprettyxml()\n \n # Write the formatted XML to a new file\n with open('new_file.xml', 'w') as f:\n f.write(formatted_xml)\n \n\n* * *\n \n \n <?xml version=\"1.0\" ?>\n <object>\n <name>car<\/name>\n <color>red<\/color>\n <object>\n <name>car<\/name>\n <color>Red<\/color>\n <\/object>\n <\/object>"} {"query":"How can I build a firestore query for Android with whereGreaterthan() filters for two different fields?\n\nI need to filter the list of my documents which I am fetching from `firestore` in my android `app`. 
This is the query.\n```\n query = FirebaseFirestore.getInstance()\n .collection(\"students\")\n .whereLessThan(\"mAge\",25)\n .whereGreaterThan(\"mAge\",20)\n .whereGreaterThan(\"mGrades\",20);\n```\n\nBut I get an error in the log:\n```\njava.lang.RuntimeException: Unable to start activity ComponentInfo{dsardy.in.acchebacche\/dsardy.in.acchebacche.MainActivity}: java.lang.IllegalArgumentException: All where filters other than whereEqualTo() must be on the same field. But you have filters on 'mAge' and 'mGrades'\n```\n\nCan this be achieved? A filter with two or more fields greater than some values is an important and general query; `firestore` must have something to tackle this.","reasoning":"The programmer wants to build a firestore query for Android for two different fields. The relevant tutorial(s) needs to be re-checked to get the correct answer.","id":"105","excluded_ids":["N\/A"],"gold_ids_long":["firebase\/firestore.txt"],"gold_ids":["firebase\/firestore_1_12.txt","firebase\/firestore_0_12.txt","firebase\/firestore_1_13.txt","firebase\/firestore_0_13.txt","firebase\/firestore_0_14.txt"],"gold_answer":"**Edit: 2024\/04\/18**\n\nAs @FrankvanPuffelen mentioned in his comment:\n\n> Firestore recently added the ability to have inequality and range conditions\n> on multiple fields in a single query.\n\nBelow are docs:\n\n * [ Query with range and inequality filters on multiple fields ](https:\/\/firebase.google.com\/docs\/firestore\/query-data\/multiple-range-fields)\n * [ Optimize queries with range and inequality filters on multiple fields ](https:\/\/firebase.google.com\/docs\/firestore\/query-data\/multiple-range-optimize-indexes)\n\n* * *\n\nFirestore allows chaining multiple inequality conditions to create more\nspecific queries, but only on the same field.
As you can probably see in the [\nofficial documentation ](https:\/\/firebase.google.com\/docs\/firestore\/query-\ndata\/queries) range filters on different fields are forbidden.\n\nTo achieve what you want, you need to query your database twice, once to\nfilter data using ` .whereGreaterThan(\"mAge\",20) ` and second using `\n.whereGreaterThan(\"mGrades\",20) ` but you cannot use them in the same query.\n\nAnother way to make it happen is to store a special field that might fit the\nquery, although in real-world applications it will be almost impossible to\nstore every single way a user might query the data."} {"query":"How to make MudSelect show text of selected option instead of value?\n\n```\n<MudSelect MultiSelection=\"true\" @bind-SelectedValues=\"ViewModel.Model.GlobalSalaryAccessUserIds\">\n @foreach (var salaryAccessUser in ViewModel.GlobalSalaryAccessUsers)\n {\n <MudSelectItem Value=\"@salaryAccessUser.Id\">@GenerateSalaryAccessUserDisplayString(salaryAccessUser)<\/MudSelectItem>\n }\n <\/MudSelect>\n```\n\nNow instead of the string generated in `GenerateSalaryAccessUserDisplayString` it shows the value of the option, which is the id, when I've selected a few of them. How can I change it to show the generated string?","reasoning":"The method that can change `GenerateSalaryAccessUserDisplayString` from id to the generated string is required. A kind of change in the attributes is required for this solution.","id":"106","excluded_ids":["N\/A"],"gold_ids_long":["mudblazor\/mudblazor.txt"],"gold_ids":["mudblazor\/mudblazor_0_1.txt","mudblazor\/mudblazor_0_3.txt"],"gold_answer":"Found the answer.
You can use ` ToStringFunc ` attribute of ` MudSelect ` .\n\nIn your code behind you have to declare a ` Func<'your option value\ntype',string> ` and apply it to ` ToStringFunc ` .\n\nMy case:\n\n \n \n <MudSelect MultiSelection=\"true\" @bind-SelectedValues=\"ViewModel.Model.GlobalSalaryAccessUserIds\" Variant=\"Variant.Outlined\" Label=\"Manager\" ToStringFunc=\"ToStringConverter\">\n @foreach (var salaryAccessUser in ViewModel.GlobalSalaryAccessUsers)\n {\n <MudSelectItem Value=\"@(salaryAccessUser.Id)\">@GenerateSalaryAccessUserDisplayString(salaryAccessUser)<\/MudSelectItem>\n }\n <\/MudSelect>\n \n \/\/ ------------ Code behind ------------------\n protected Func<int, string> ToStringConverter;\n \n \/\/ Have to initialize it separately\n protected override Task OnInitializedComponentAsync()\n {\n ToStringConverter = GenerateSalaryAccessUserDisplayString;\n \n return Task.CompletedTask;\n }\n \n private string GenerateSalaryAccessUserDisplayString(int salaryAccessUserId)\n {\n var salaryAccessUser = ViewModel.GlobalSalaryAccessUsers.FirstOrDefault(u => u.Id == salaryAccessUserId);\n \n return $\"{salaryAccessUser.FirstName} {salaryAccessUser.LastName}\";\n }"} {"query":"How to record audio and video cross platform with Electron\n\nSo I was making an Electron project that records your screen and your desktop or selected app's audio with desktopCapture. I got the screen record, and at one point even got the mic to work, but at no point, no matter what I tried, I couldn't record desktop audio nor any app's audio. After some research I found that you cannot record any desktop nor app's audio with Chromium on Linux.\n\nSo what could be the solution or some other ways to try to record desktop audio.
Maybe there is some way to record desktop audio with a different library and then combine the video with audio somehow.\n\nAny suggestions would be appreciated.\n\nCode for the screen recorder itself:\n```\nvideoSelectBtn.onclick = getVideoSources;\n\nasync function getVideoSources() {\n const inputSources = await desktopCapturer.getSources({\n types: [\"window\", \"screen\", \"audio\"],\n });\n\n inputSources.forEach((source) => {\n if (source.name === \"Screen 1\") {\n selectSource(source);\n } else {\n console.log(source);\n }\n });\n}\n\nasync function selectSource(source) {\n videoSelectBtn.innerText = source.name;\n\n const constraints = {\n audio: {\n mandatory: {\n chromeMediaSource: \"desktop\",\n },\n },\n video: {\n mandatory: {\n chromeMediaSource: \"desktop\",\n },\n },\n };\n\n const stream = await navigator.mediaDevices.getUserMedia(constraints);\n```","reasoning":"The programmer fails to record desktop audio with the code he\/she has written. The official documents related to Electron need to be confirmed again to answer this question.","id":"107","excluded_ids":["N\/A"],"gold_ids_long":["Electron\/Electron.txt"],"gold_ids":["Electron\/Electron_0_1.txt"],"gold_answer":"This seems to be related to Chrome that doesn't allow to record other audio\nsources from the computer or desktop and as Electron is dependent of Chromium,\nthe engine is blocking that feature right now.\n\nThis caveat is mentioned in the official docs about mac: [\nhttps:\/\/www.electronjs.org\/docs\/latest\/api\/desktop-capturer#caveats\n](https:\/\/www.electronjs.org\/docs\/latest\/api\/desktop-capturer#caveats) \\- but\nit concerns macOS mostly."} {"query":"My api request in my asyncThunk function in slice file doesn't work when I put a dispatch method before that\n\nI have this asyncThunk action and everything was fine until I put a `dispatch` call before sending request. As a result every code after this `dispatch` call doesn't work anymore. 'first' goes in log but 'second' no. 
I don't know why this dispatch blocks next codes in my asyncThunk function.\n\nMy slice file:\n```\nexport const postLoginData = createAsyncThunk(\n 'login\/postLoginData',\n async (allData) => {\n const { dispatch, params } = allData;\n\n let loginResponse = '';\n\n console.log('first')\n dispatch(setStatus({type: 'loading', payload: 'wait'}))\n console.log('second')\n\n await postRequest('\/o\/token\/coach', params)\n .then(response => {\n loginResponse = response.data\n const data = response.data;\n if (data.access_token) {\n dispatch(loginSuccess(data))\n localStorage.setItem('Authentication', JSON.stringify(data));\n }\n })\n .catch(err => {\n if (err.response.status === 400) {\n loginResponse = { error: '400 Error' }\n }\n })\n dispatch(setStatus({ type: 'loading', payload: false }))\n console.log(loginResponse)\n return loginResponse\n }\n)\n```","reasoning":"In Redux asynchronous thunk functions, calling the store's dispatch method before sending an API request can cause subsequent code to fail. 
To solve this problem, it may be necessary to provide a dispatch method directly in one of the parameters which should be found, rather than calling dispatch directly from the store.","id":"108","excluded_ids":["N\/A"],"gold_ids_long":["reducers_and_actions_urls_rtk_query\/reducers_and_actions.txt"],"gold_ids":["reducers_and_actions_urls_rtk_query\/reducers_and_actions_0_6.txt","reducers_and_actions_urls_rtk_query\/reducers_and_actions_0_7.txt","reducers_and_actions_urls_rtk_query\/reducers_and_actions_0_4.txt","reducers_and_actions_urls_rtk_query\/reducers_and_actions_0_2.txt","reducers_and_actions_urls_rtk_query\/reducers_and_actions_0_8.txt","reducers_and_actions_urls_rtk_query\/reducers_and_actions_0_5.txt","reducers_and_actions_urls_rtk_query\/reducers_and_actions_0_0.txt","reducers_and_actions_urls_rtk_query\/reducers_and_actions_0_3.txt","reducers_and_actions_urls_rtk_query\/reducers_and_actions_0_1.txt"],"gold_answer":"You really should use the ` dispatch ` function available on the ` thunkApi `\n, the _**second** _ argument passed to the ` createAsyncThunk ` [ payload\ncreator ](https:\/\/redux-toolkit.js.org\/api\/createAsyncThunk#payloadcreator) .\n\nIt's also a bit of a Javascript anti-pattern to mix ` async\/await ` with\nPromise chains; select one or the other.\n\nBasic Example:\n\n \n \n export const postLoginData = createAsyncThunk(\n 'login\/postLoginData',\n async ({ params }, thunkApi) => {\n let loginResponse = '';\n \n thunkApi.dispatch(setStatus({ type: 'loading', payload: 'wait' }));\n \n try {\n const { data } = await postRequest('\/o\/token\/coach', params);\n loginResponse = data;\n \n if (data.access_token) {\n thunkApi.dispatch(loginSuccess(data));\n localStorage.setItem('Authentication', JSON.stringify(data));\n }\n } catch(err) {\n if (err.response.status === 400) {\n loginResponse = { error: '400 Error' };\n }\n }\n \n thunkApi.dispatch(setStatus({ type: 'loading', payload: false }));\n \n return loginResponse;\n }\n );\n \n\nSince 
you are using Redux-Toolkit and ` createAsyncAction ` you are kind of\nmissing the point by manually dispatching the ` setStatus ` and ` loginSuccess\n` actions to manage any \"loading\" and \"authentication\" states when you really\nought to be adding reducers cases that handle the dispatched `\npostLoginData.pending ` , ` postLoginData.fulfilled ` , and `\npostLoginData.rejected ` actions that _automagically_ get dispatched when you\ndispatch ` postLoginData ` to the store and the action processes.\n\nHere's a suggested refactor:\n\n \n \n export const postLoginData = createAsyncThunk(\n 'login\/postLoginData',\n async ({ params }, thunkApi) => {\n try {\n const { data } = await postRequest('\/o\/token\/coach', params);\n \n if (data.access_token) {\n localStorage.setItem('Authentication', JSON.stringify(data));\n }\n \n return data;\n } catch(error) {\n return thunkApi.rejectWithValue(error.response.message);\n }\n }\n );\n \n\nThe status slice, add reducer cases to handle the ` .pending ` , and `\n.fulfilled ` and ` .rejected ` actions.\n\n \n \n const statusSlice = createSlice({\n name: \"status\",\n initialState: {\n status: false,\n },\n extraReducers: builder => {\n builder\n ....\n .addCase(postLoginData.pending, (state) => {\n state.status = \"wait\";\n })\n .addCase(postLoginData.fulfilled, (state) => {\n state.status = false;\n })\n .addCase(postLoginData.rejected, (state) => {\n state.status = false;\n })\n ....;\n },\n });\n \n\nThe auth slice, add a reducer case to handle the ` .fulfilled ` action.\n\n \n \n const authSlice = createSlice({\n name: \"auth\",\n initialState: {\n ....\n },\n extraReducers: builder => {\n builder\n ....\n .addCase(postLoginData.fulfilled, (state, action) => {\n \/\/ update whatever state with the fetched data in action.payload\n })\n ....;\n },\n });"} {"query":"Gorm preload doesn't follow the Join conditions\n\nSo I'm trying to make a complex queries with a lot of joins and nested structs. 
I need to load the structs but Preload is not following related Joins with their conditions and its keep doing its own queries.\n\nHow can I make this work?\n\nMy joins are pretty complicated and I don't want to do the same join twice.\n```\ntype Assetinfo struct {\n Uid string\n MapPolicyApps []MapPolicyApps `gorm:\"foreignKey:app_id;references:uid;AssociationForeignKey:pol_id\"`\n}\ntype MapPolicyApps struct {\n Id int\n PolID string `gorm:\"column_name:pol_id;foreignKey:Uid\"`\n PolicyPolicy PolicyPolicy `gorm:\"foreignKey:Uid;references:PolID;AssociationForeignKey:uid\"`\n AppType string \/\/ This can be application or category\n AppId string `gorm:\"column_name:app_id;foreignKey:Uid\"` \/\/ this can be app id or application id (bad naming i know)\n Asset Assetinfo `gorm:\"foreignKey:AppId;references:Uid\"`\n CreatedAt string\n}\n\ntype PolicyPolicy struct {\n Uid string `gorm:\"primaryKey\"`\n IsEnabled bool\n MapPolicyProfiles []MapPolicyProfiles `gorm:\"foreignKey:PolID;references:Uid\"`\n}\n\nvar assets []*Assetinfo \n\nerr := Db.Table(\"application_application apps\").\n Select(\"Distinct apps.*, apps.avatar as icon, profile_asset.address as ip, apps.is_health_check as is_need_request\").\n Joins(\"LEFT JOIN map_policy_identities ON map_policy_identities.assign_type = 'user' AND map_policy_identities.assign_id = ?\", userid).\n Joins(\"LEFT JOIN policy_policy ON map_policy_identities.pol_id = policy_policy.uid AND policy_policy.is_enabled = true\").\n Joins(\"LEFT JOIN map_policy_apps ON map_policy_identities.pol_id = map_policy_apps.pol_id\").\n Joins(\"LEFT JOIN map_category_apps ON map_category_apps.category_id = map_policy_apps.app_id\").\n Joins(\"LEFT JOIN map_app_assets ON apps.uid = map_app_assets.app_id\").\n Joins(\"LEFT JOIN profile_asset ON profile_asset.uid = map_app_assets.asset_id\").\n Where(\"map_policy_apps.app_id = apps.uid OR apps.uid = map_category_apps.app_id\").\n Where(\"apps.is_enabled = true AND apps.is_visibled = true\").\n 
Preload(\"MapPolicyApps.PolicyPolicy.MapPolicyProfiles.ProfileAddr\").\n Preload(\"MapPolicyApps.PolicyPolicy.MapPolicyProfiles.ProfileTimerange\").\n Preload(\"MapPolicyApps.PolicyPolicy.MapPolicyProfiles.ProfilePosturecheck\").\n Find(&assets).Error\n```\n\nAnd Gorm can't load the structs without Preload, So if it's possible to load all nested structs with joins and without Preload, I would appreciate to know how.","reasoning":"When using Gorm to perform complex association queries, `preload` cannot correctly load nested structures in association relationships with conditions. Since the query statement contains multiple complex join conditions, multiple levels of nested structures need to be loaded. To solve this problem, try customising the SQL statement used by preload to ensure that the required nested data is loaded correctly.","id":"109","excluded_ids":["N\/A"],"gold_ids_long":["gorm_associations_advanced\/gorm_associations.txt","gorm_associations_advanced\/gorm_Advanced.txt"],"gold_ids":["gorm_associations_advanced\/gorm_Advanced_4_0.txt"],"gold_answer":"So I'm trying to make a complex queries with a lot of joins and nested\nstructs. 
I need to load the structs but Preload is not following related Joins\nwith their conditions and its keep doing its own queries.\n\nHow can I make this work?\n\nMy joins are pretty complicated and I don't want to do the same join twice.\n\n \n \n type Assetinfo struct {\n Uid string\n MapPolicyApps []MapPolicyApps `gorm:\"foreignKey:app_id;references:uid;AssociationForeignKey:pol_id\"`\n }\n type MapPolicyApps struct {\n Id int\n PolID string `gorm:\"column_name:pol_id;foreignKey:Uid\"`\n PolicyPolicy PolicyPolicy `gorm:\"foreignKey:Uid;references:PolID;AssociationForeignKey:uid\"`\n AppType string \/\/ This can be application or category\n AppId string `gorm:\"column_name:app_id;foreignKey:Uid\"` \/\/ this can be app id or application id (bad naming i know)\n Asset Assetinfo `gorm:\"foreignKey:AppId;references:Uid\"`\n CreatedAt string\n }\n \n type PolicyPolicy struct {\n Uid string `gorm:\"primaryKey\"`\n IsEnabled bool\n MapPolicyProfiles []MapPolicyProfiles `gorm:\"foreignKey:PolID;references:Uid\"`\n }\n \n var assets []*Assetinfo \n \n err := Db.Table(\"application_application apps\").\n Select(\"Distinct apps.*, apps.avatar as icon, profile_asset.address as ip, apps.is_health_check as is_need_request\").\n Joins(\"LEFT JOIN map_policy_identities ON map_policy_identities.assign_type = 'user' AND map_policy_identities.assign_id = ?\", userid).\n Joins(\"LEFT JOIN policy_policy ON map_policy_identities.pol_id = policy_policy.uid AND policy_policy.is_enabled = true\").\n Joins(\"LEFT JOIN map_policy_apps ON map_policy_identities.pol_id = map_policy_apps.pol_id\").\n Joins(\"LEFT JOIN map_category_apps ON map_category_apps.category_id = map_policy_apps.app_id\").\n Joins(\"LEFT JOIN map_app_assets ON apps.uid = map_app_assets.app_id\").\n Joins(\"LEFT JOIN profile_asset ON profile_asset.uid = map_app_assets.asset_id\").\n Where(\"map_policy_apps.app_id = apps.uid OR apps.uid = map_category_apps.app_id\").\n Where(\"apps.is_enabled = true AND apps.is_visibled = 
true\").\n Preload(\"MapPolicyApps.PolicyPolicy.MapPolicyProfiles.ProfileAddr\").\n Preload(\"MapPolicyApps.PolicyPolicy.MapPolicyProfiles.ProfileTimerange\").\n Preload(\"MapPolicyApps.PolicyPolicy.MapPolicyProfiles.ProfilePosturecheck\").\n Find(&assets).Error\n \n \n\nAnd Gorm can't load the structs without Preload, So if it's possible to load\nall nested structs with joins and without Preload, I would appreciate to know\nhow."} {"query":"Copy and delete files from SFTP folder\n\nI have to pick (remove) the files with file mask `FileName_A_*` and `FileName_B_*` from SFTP location and place them in an sharedrive.\n\nI tried using WinSCP. I have created an `HourlyFile.txt` file with below code and placed it under `C:\\Program Files (x86)\\WinSCP`. Another batch file `HourlyFile.bat` to execute the script from HourlyFile.txt\n\nHourlyFile.txt:\n```\noption batch abort\noption confirm off\nopen sftp..........\nget -filemask=\"FileName_A_*\" \/outbound\/test\/* \\\\sharedrive\nget -filemask=\"FileName_B_*\" \/outbound\/test\/* \\\\sharedrive\ndel \/outbound\/test\/FileName_A_*\ndel \/outbound\/test\/FileName_B_* \nexit\n```\n\nHourlyFile.bat:\n```\nwinscp.com \/script=HourlyFile.txt\npause\n```\n\nI tried with below options to delete the file but got the error message \"Unknown command\". 
Also the above code is copying subfolder from `\/outbound\/test\/` , which it should not.\nCommands tried:\n```\ndel \/outbound\/test\/FileName_A_*\n-del \/outbound\/test\/FileName_A_*\ndelete \/outbound\/test\/FileName_A_*\ndelete \/outbound\/test\/FileName_A_20190604_090002\ndelete \/outbound\/test\/FileName_A_20190604_090002.csv\n```","reasoning":"The programmer wants to delete files with the file masks FileName_A_* and FileName_B_* from the SFTP location. The corresponding commands need to be found.","id":"110","excluded_ids":["N\/A"],"gold_ids_long":["WinSCP\/WinSCP.txt"],"gold_ids":["WinSCP\/WinSCP_10_0.txt"],"gold_answer":"If you want to download and delete the files, you better use [ ` -delete `\nswitch of the ` get ` command\n](https:\/\/winscp.net\/eng\/docs\/scriptcommand_get#delete) . This way, you can be\nsure that WinSCP deletes only those files that were really successfully\ndownloaded.\n\n \n \n get -delete \/outbound\/test\/FileName_A_* \\\\sharedrive\\\n get -delete \/outbound\/test\/FileName_B_* \\\\sharedrive\\\n \n\n_See WinSCP article[ How do I create script that synchronizes files and\ndeletes synchronized files from source afterward?\n](https:\/\/winscp.net\/eng\/docs\/faq_delete_synchronized_files) _\n\n* * *\n\nTo answer your literal question: WinSCP has no ` del ` command. WinSCP has [ `\nrm ` command ](https:\/\/winscp.net\/eng\/docs\/scriptcommand_rm) :\n\n \n \n rm \/outbound\/test\/FileName_A_*\n rm \/outbound\/test\/FileName_B_*"} {"query":"useState set method not changing my state - react native\n\nI'm working on a part of an application dealing with a deck of cards. For however many cards a player is supposed to have, I am looping through a deck of cards held in a state, randomly picking a card, assigning it to that player and then removing it from the deck. 
This all works correctly but when the deck is empty (.length === 0), I want to invoke a shuffle() method that will set the deck equal to the discard pile which is also an array held in a useState. The discard pile is also working as expected but for some reason when trying to set the currentDeck to the discard pile, using setCurrentDeck(discardPile), the currentDeck array remains empty. (I've also tried setCurrentDeck([...discardPile])).\n\nI know this mostly likely has to do with how React handles batching hook calls, and that the value of currentDeck is not what I think it is because of the other setCurrentDeck() calls, but I'm still not able to figure out what's going on.\n\nMy code (simplified):\n```\nconst [currentDeck, setCurrentDeck] = useState(deck) \/\/ 'deck' is a hard coded array of cards\nconst [discardPile, setDiscardPile] = useState([])\nconst [faceUpCard, setFaceUpCard] = useState(null)\n\n useEffect(() => {\n if (currentDeck.length === 0) {\n const temp = discardPile;\n setCurrentDeck([...temp]);\n setDiscardPile([]);\n }\n }, [currentDeck]);\n\n function deal() {\n if (readyToDeal) {\n for (let i = 0; i < round + 2; i++) {\n for (let j = 0; j < players.length; j++) {\n const rand = Math.floor(Math.random() * currentDeck.length);\n \/\/ players are assigned cards\n currentDeck.splice(rand, 1); \/\/ card that was assigned is removed from deck\n setCurrentDeck([...currentDeck]);\n }\n }\n const rand = Math.floor(Math.random() * currentDeck.length);\n setFaceUpCard(currentDeck[rand]);\n currentDeck.splice(rand, 1);\n setCurrentDeck([...currentDeck]);\n setReadyToDeal(false);\n }\n }\n```\n\nI have also tried setting currentDeck with a shuffle method like this:\n```\n function shuffle() {\n const temp = discardPile;\n setCurrentDeck([ ...temp]);\n setDiscardPile([]);\n }\n\n function deal() {\n if (readyToDeal) {\n for (let i = 0; i < round + 2; i++) {\n for (let j = 0; j < players.length; j++) {\n if (currentDeck.length === 0) {\n shuffle();\n }\n 
const rand = Math.floor(Math.random() * currentDeck.length);\n \/\/ players are assigned cards\n currentDeck.splice(rand, 1); \/\/ card that was assigned is removed from deck\n setCurrentDeck([...currentDeck]);\n }\n }\n const rand = Math.floor(Math.random() * currentDeck.length);\n setFaceUpCard(currentDeck[rand]);\n currentDeck.splice(rand, 1);\n setCurrentDeck([...currentDeck]);\n setReadyToDeal(false);\n }\n }\n```\n\nbut still no luck. Any help or advice would be appreciated. Thank you!","reasoning":"In React Native, when trying to update the state using the set method of useState, the state does not change. This may be related to the way React handles hook calls and state updates, and requires a proper understanding of how useEffect hooks and state are used in React.","id":"111","excluded_ids":["N\/A"],"gold_ids_long":["React_hook_adding_interactivity\/React_hook.txt","React_hook_adding_interactivity\/adding_interactivity.txt"],"gold_ids":["React_hook_adding_interactivity\/React_hook_0_5.txt","React_hook_adding_interactivity\/React_hook_0_10.txt","React_hook_adding_interactivity\/adding_interactivity_0_6.txt","React_hook_adding_interactivity\/React_hook_0_6.txt","React_hook_adding_interactivity\/adding_interactivity_0_4.txt","React_hook_adding_interactivity\/React_hook_0_11.txt","React_hook_adding_interactivity\/React_hook_0_7.txt","React_hook_adding_interactivity\/adding_interactivity_0_3.txt","React_hook_adding_interactivity\/React_hook_0_4.txt","React_hook_adding_interactivity\/React_hook_0_3.txt","React_hook_adding_interactivity\/adding_interactivity_0_1.txt","React_hook_adding_interactivity\/adding_interactivity_0_2.txt","React_hook_adding_interactivity\/React_hook_0_12.txt","React_hook_adding_interactivity\/React_hook_0_8.txt","React_hook_adding_interactivity\/React_hook_0_9.txt","React_hook_adding_interactivity\/adding_interactivity_0_5.txt","React_hook_adding_interactivity\/React_hook_0_14.txt","React_hook_adding_interactivity\/React_hook_0_2.txt","Rea
ct_hook_adding_interactivity\/React_hook_0_13.txt"],"gold_answer":"You need to understand how [ ` useEffect `\n](https:\/\/react.dev\/reference\/react\/useEffect) and [ state\n](https:\/\/react.dev\/learn\/state-a-components-memory) is supposed to be used in\nReact.\n\n 1. You should not rely on _when_ or how often ` useEffect ` is executed. You should only use it to declare which state depends on which other state (see also [ \"You Might Not Need an Effect\" ](https:\/\/react.dev\/learn\/you-might-not-need-an-effect) ). \n\n 2. The \"state\" from ` setState ` (the value and the ` set... ` function) should be used to e.g. update based on some other state or on user actions. It should not be used like a normal \"variable\". \n\n> When you call ` useState ` , you are telling React that you want this\n> component to remember something\n\nNote that the complete ` for ` loop runs until it is finished **before** React\neven tries to execute ` setCurrentDeck ` . When the ` for ` loop is finished,\nall ` set... ` functions are executed, which is effectively like only the last\none was executed.\n\n### Maybe a solution:\n\nI suppose you probably should build the new deck using \"normal\" javascript\npatterns (maybe in a separate function), i.e. without using React specific\npatterns. 
Then, when the deck is created and it's time to interact with the\nuser and update the screen again, update the React state with the ready built\ndeck."} {"query":"eslint configuration not read in my project\n\nI have setup a project using nx and I have the following config\n\n.eslintrc.base.json\n```\n{\n \"root\": true,\n \"ignorePatterns\": [\"**\/*\"],\n \"plugins\": [\"@nx\", \"unused-imports\", \"import\"],\n \"overrides\": [\n {\n \"files\": [\"*.ts\", \"*.tsx\", \"*.js\", \"*.jsx\"],\n \"rules\": {\n\n \"import\/no-duplicates\": \"warn\"\n }\n },\n {\n \"files\": [\"*.ts\", \"*.tsx\"],\n \"extends\": [\"plugin:@nx\/typescript\"],\n \"rules\": {}\n },\n {\n \"files\": [\"*.js\", \"*.jsx\"],\n \"extends\": [\"plugin:@nx\/javascript\"],\n \"rules\": {}\n },\n {\n \"files\": [\"*.spec.ts\", \"*.spec.tsx\", \"*.spec.js\", \"*.spec.jsx\"],\n \"env\": {\n \"jest\": true\n },\n \"rules\": {}\n }\n ]\n}\n```\n\nthen .eslintrc.json\n```\n{\n \"ignorePatterns\": [\"!**\/*\"],\n \"overrides\": [\n {\n \"files\": [\"*.ts\", \"*.tsx\"],\n \"rules\": {}\n },\n {\n \"files\": [\"*.js\", \"*.jsx\"],\n \"rules\": {}\n },\n {\n \"files\": [\"*.spec.ts\", \"*.spec.tsx\", \"*.spec.js\", \"*.spec.jsx\"],\n \"env\": {\n \"jest\": true\n },\n \"rules\": {}\n }\n ],\n \"extends\": [\".\/.eslintrc.base.json\"]\n}\n```\n\nthen I pureposely did this in my app\n```\nimport { Response } from 'express';\nimport { Request } from 'express';\n```\n\nThis should trigger a warning, but nothing appear in vscode.\n\nmy setting is currently\n```\n{\n \"editor.formatOnSave\": true,\n \"editor.defaultFormatter\": \"esbenp.prettier-vscode\",\n \"editor.codeActionsOnSave\": {\n \"source.fixAll.eslint\": \"always\",\n \"source.fixAll.stylelint\": \"always\"\n },\n \"typescript.tsdk\": \"node_modules\/typescript\/lib\"\n}\n```\n\nWhat am I missing ? 
I tried other rules they also dont work.","reasoning":"The eslint configuration does not work in nx projects, so the duplicate-import warning is not displayed. An official guide on how to migrate to the new flat config is needed. Moreover, a setting to force eslint to use the old-style config is also needed.","id":"112","excluded_ids":["N\/A"],"gold_ids_long":["Script_Commands\/Script_Commands.txt"],"gold_ids":["Script_Commands\/Script_Commands_17_2.txt","Script_Commands\/Script_Commands_57_5.txt","Script_Commands\/Script_Commands_57_4.txt","Script_Commands\/Script_Commands_17_4.txt","Script_Commands\/Script_Commands_17_3.txt","Script_Commands\/Script_Commands_17_8.txt","Script_Commands\/Script_Commands_57_7.txt","Script_Commands\/Script_Commands_57_2.txt","Script_Commands\/Script_Commands_57_6.txt","Script_Commands\/Script_Commands_17_1.txt","Script_Commands\/Script_Commands_17_5.txt","Script_Commands\/Script_Commands_17_7.txt","Script_Commands\/Script_Commands_57_1.txt","Script_Commands\/Script_Commands_57_3.txt","Script_Commands\/Script_Commands_17_6.txt"],"gold_answer":"If you have eslint ` 9.0.0 ` , then the config format you provided will not\nwork out of the box as it is deprecated. To fix this you can either:\n\n * Migrate to new flat config using the [ official guide ](https:\/\/eslint.org\/docs\/latest\/use\/configure\/migration-guide) . You will also need to enable ` eslint.experimental.useFlatConfig ` setting in VS Code. \n * Force eslint to use old-style config by setting ` ESLINT_USE_FLAT_CONFIG ` environment variable to ` false ` as is described [ here ](https:\/\/eslint.org\/docs\/latest\/use\/configure\/configuration-files-deprecated) . \n * Downgrade eslint version in your project to let\u2019s say ` 8.57.0 ` . 
\n\nHope that helps!"} {"query":"Using discrete colors for map fill in mapboxgl instead of default interpolated colors?\n\nThe way I've typically seen mapboxgl fill properties work on choropleth maps is something like this:\n```\nmap.on('load', function () {\n map.addSource('bb', { type: 'geojson', data: data, generateId: true});\n map.addLayer({\n 'id': 'berlin',\n 'type': 'fill',\n 'source': 'bb',\n 'paint': {\n 'fill-color': {\n 'property': some_numeric_val,\n 'stops': [[4, '#feebe2'], [8, '#fbb4b9'], [12, '#f768a1'], [16, '#c51b8a'], [20, '#7a0177']]\n },\n 'fill-opacity': .65\n }\n });\n map.addLayer({\n 'id': 'berlin-stroke',\n 'type': 'line',\n 'source': 'bb',\n 'paint': {\n 'line-color': '#000',\n 'line-width': [\n 'case',\n ['boolean', ['feature-state', 'hover'], false],\n 2,\n .5\n ]\n }\n });\n });\n```\n\ni.e. the colors are created based on a property that the user selects. However, it seems like mapboxgl's default behavior is to interpolate colors. For example, if one of my geographic units has a value somewhere between the breakpoints, mapboxgl will interpolate the color, resulting in a gradient of colors.\n\nIs there a way to make the colors distinct (non-interpolated)? i.e. if the value is 4 or less, the color is #feebe2, if the value is 8 or less, the color is '#fbb4b9', for a total of 5 discrete colors in the example I have here.\n\nI have not been able to find an answer to this anywhere.
Thanks.","reasoning":"To get discrete fill colours instead of gradients in mapboxgl, you need an `expression` that returns discrete output values segmented by a range of values.","id":"113","excluded_ids":["N\/A"],"gold_ids_long":["mapbox_expressions_layers\/mapbox_expressions.txt"],"gold_ids":["mapbox_expressions_layers\/mapbox_expressions_57_0.txt"],"gold_answer":"You can use [ step expressions ](https:\/\/docs.mapbox.com\/style-\nspec\/reference\/expressions\/#step) .\n\n> Produces discrete, stepped results by evaluating a piecewise-constant\n> function defined by pairs of input and output values (\"stops\"). The input\n> may be any numeric expression (e.g., [\"get\", \"population\"]). Stop inputs\n> must be numeric literals in strictly ascending order. Returns the output\n> value of the stop just less than the input, or the first output if the input\n> is less than the first stop.\n\n**Syntax**\n\n \n \n [\"step\",\n input: number,\n stop_output_0: OutputType,\n stop_input_1: number, stop_output_1: OutputType,\n stop_input_n: number, stop_output_n: OutputType, ...\n ]: OutputType\n \n\nReference: [ Sample example from mapbox ](https:\/\/docs.mapbox.com\/mapbox-gl-\njs\/example\/cluster\/) which demonstrates a similar requirement to the one mentioned in the\nquestion.\n\nYou can try updating your code like below.\n\n \n \n map.addLayer({\n 'id': 'berlin',\n 'type': 'fill',\n 'source': 'bb',\n 'paint': {\n 'fill-color': [\n \/\/ Use step expressions (https:\/\/docs.mapbox.com\/style-spec\/reference\/expressions\/#step)\n 'step',\n \/\/ Replace some_numeric_val with the property name from which you want to get the value.\n ['get', 'some_numeric_val'], \/\/ input: number,\n \/\/ Set the color to fill with when the value is less than 5, since 5 is the first step value (stop_input_1), given in the next parameter.\n \/\/ stop_output_0: OutputType,\n '#feebe2',\n \/\/ Stop value & the color to apply from this value up to the next stop value.\n \/\/ stop_input_1: number, stop_output_1: OutputType,\n \/\/ In the current example, #fbb4b9 will apply for values between 5 and 8.\n 5, '#fbb4b9',\n \/\/ Stop value & the color to apply from this value up to the next stop value.\n \/\/ stop_input_2: number, stop_output_2: OutputType,\n \/\/ In the current example, #f768a1 will apply for values between 9 and 12.\n 9, '#f768a1',\n \/\/ Stop value & the color to apply from this value up to the next stop value.\n \/\/ stop_input_3: number, stop_output_3: OutputType,\n \/\/ In the current example, #c51b8a will apply for values between 13 and 16.\n 13, '#c51b8a',\n \/\/ Stop value & the color to apply from this value up to the next stop value.\n \/\/ stop_input_4: number, stop_output_4: OutputType,\n \/\/ In the current example, #7a0177 will apply for values between 17 and 20.\n 17, '#7a0177',\n \/\/ Stop value & the color to apply from this value onward.\n \/\/ stop_input_5: number, stop_output_5: OutputType\n \/\/ In the current example, #7a0177 will apply for values >= 21\n 21, '#7a0177'\n ],\n 'fill-opacity': .65\n }\n });"} {"query":"How can I list exported values from all of my CloudFormation templates?\n\nI'm exporting the name of the stack and the URL of my Lambda function in my CloudFormation template.\n```\nOutputs:\n LambdaInvokeURL:\n Value: !GetAtt Myurl.FunctionUrl \n Export:\n Name: !Sub \"${AWS::StackName}\"\n```\n\nI have 8 to 10 stacks exporting similar outputs.\n\nHow can I list all exported names & values across all stacks in my AWS account?\n\nShould I write a new CloudFormation template or a Lambda function to list them?","reasoning":"To view the names and values of all CloudFormation stack exports in your AWS account, you can use the commands provided by the AWS
CLI.","id":"114","excluded_ids":["N\/A"],"gold_ids_long":["cloudformation_commands\/cloudformation_commands.txt"],"gold_ids":["cloudformation_commands\/cloudformation_commands_58_1.txt","cloudformation_commands\/cloudformation_commands_58_2.txt","cloudformation_commands\/cloudformation_commands_58_0.txt"],"gold_answer":"Use the ` aws cloudformation list-exports ` [ CLI command\n](https:\/\/awscli.amazonaws.com\/v2\/documentation\/api\/latest\/reference\/cloudformation\/list-\nexports.html) , the ` ListExports ` [ API\n](https:\/\/docs.aws.amazon.com\/AWSCloudFormation\/latest\/APIReference\/API_ListExports.html)\nor any of the [ equivalent SDK methods\n](https:\/\/docs.aws.amazon.com\/AWSCloudFormation\/latest\/APIReference\/API_ListExports.html#API_ListExports_SeeAlso)\n.\n\nYou don\u2019t need to write a new CloudFormation template or a Lambda function for\nthis.\n\nSample output as per the docs:\n\n \n \n {\n \"Exports\": [\n {\n \"ExportingStackId\": \"arn:aws:cloudformation:us-west-2:123456789012:stack\/private-vpc\/99764070-b56c-xmpl-bee8-062a88d1d800\",\n \"Name\": \"private-vpc-subnet-a\",\n \"Value\": \"subnet-07b410xmplddcfa03\"\n },\n {\n \"ExportingStackId\": \"arn:aws:cloudformation:us-west-2:123456789012:stack\/private-vpc\/99764070-b56c-xmpl-bee8-062a88d1d800\",\n \"Name\": \"private-vpc-subnet-b\",\n \"Value\": \"subnet-075ed3xmplebd2fb1\"\n },\n {\n \"ExportingStackId\": \"arn:aws:cloudformation:us-west-2:123456789012:stack\/private-vpc\/99764070-b56c-xmpl-bee8-062a88d1d800\",\n \"Name\": \"private-vpc-vpcid\",\n \"Value\": \"vpc-011d7xmpl100e9841\"\n }\n ]\n }"} {"query":"How to upload folders to React with special characters in the name\n\nI have a problem with my JavaScript uploading: I am using this code:\n```\nconst handleParameterSearchUpload = async (event) => {\n const files = event.target.files;\n console.log(files);\n const folders = {};\n\n for (let i = 0; i < files.length; i++) {\n const file = files[i];\n const filePath =
file.webkitRelativePath;\n const pathParts = filePath.split('\/');\n\n \/\/ If the file is in a subfolder, add the file to the folder's list\n if (pathParts.length > 2) {\n const folderName = encodeURIComponent(pathParts[1]);\n if (!folders[folderName]) {\n folders[folderName] = [];\n }\n folders[folderName].push(file);\n }\n }\n console.log(folders.length);\n \/\/ Call processFiles for each folder\n for (const folderName in folders) {\n const folderFiles = folders[folderName];\n await processFiles(folderFiles, true);\n console.log(\"Processed\", folderName);\n }\n parameterSearchInputRef.current.value = \"\";\n };\n```\n\nto process the files in a folder.\n\nThis code is used here:\n```\n<input\n type=\"file\"\n webkitdirectory=\"true\"\n style={{ display: 'none' }}\n ref={parameterSearchInputRef} \n onChange={handleParameterSearchUpload} \n\/>\n```\n\nNow in this folder there are files and subfolders which are not empty. Unfortunately I have a problem. When I upload the folder the files are uploaded, but not the subfolders. The code is not the problem, because when I rename the folder it works fine, but with this folder name which I upload:\n```\n20240118-165159[param.defaults]CombinationParamSearch{sheets.l4_cortex_inh.params.cell.params.v_thresh_[-57.5, -58],sheets.l4_cortex_exc.AfferentConnection.base_weight_[0.0013, 0.0016, 0.0018]}\n```\n\nit doesn't work.\n\nUnfortunately I will always upload these types of folders to the webpage, so how can I resolve the issue? The subfolders have the following names:\n```\nSelfSustainedPushPull_ParameterSearch_____base_weight_0.0013_v_thresh_-57.5 SelfSustainedPushPull_ParameterSearch_____base_weight_0.0013_v_thresh_-58\n```\n\nand so on.\n\nUnfortunately the base problem is that the subfolders seem like they are not getting uploaded, because when I log to the console, neither the subfolders nor the contents inside them are getting logged. I really don't know how to resolve this issue without using packages like `fs` or `path`. Any ideas?
Unfortunately I can't just ask the users to rename the folders, because these folder names are generated by another piece of software.","reasoning":"The programmer wants to upload folders to React with special characters in the name. It is more effective to use the method based on `DirectoryReader`, and after encoding the folder name, a built-in object is required to decode it.","id":"115","excluded_ids":["N\/A"],"gold_ids_long":["File_and_Directory_Entries_built_in_objects\/built_in_objects.txt","File_and_Directory_Entries_built_in_objects\/File_and_Directory_Entries_API.txt"],"gold_ids":["File_and_Directory_Entries_built_in_objects\/built_in_objects_32_9.txt","File_and_Directory_Entries_built_in_objects\/File_and_Directory_Entries_API_0_1.txt"],"gold_answer":"As far as I know React does not like ` webkitdirectory ` and also `\nwebkitdirectory ` might soon be deprecated in favor of dropzones. Not\nreally sure about this, but that's what I've read in some discussions about\nit. Also, I don't think it is fully compatible with all browsers. See [\nbrowser compatibility ](https:\/\/developer.mozilla.org\/en-\nUS\/docs\/Web\/API\/HTMLInputElement\/webkitdirectory#browser_compatibility) .\n\nFor file uploads with directories and subdirectories, it's often more\neffective to use the standard 'HTML File API' in combination with `\nDirectoryReader ` .
Another thing to consider is recursively traversing the\ndirectory structure.\n\nHere's an example of how you can implement it:\n\n \n \n const handleParameterSearchUpload = async (event) => {\n const traverseFiles = async (files) => {\n for (let i = 0; i < files.length; i++) {\n const file = files[i];\n if (file.isDirectory) {\n const directoryReader = file.createReader();\n const entries = await new Promise((resolve) => {\n directoryReader.readEntries(resolve);\n });\n await traverseFiles(entries);\n } else {\n \/\/ Process the file\n console.log(\"Uploading file:\", file.name);\n \/\/ Upload logic\n }\n }\n };\n \n const files = event.target.files;\n console.log(\"Uploaded files:\", files);\n \n await traverseFiles(files);\n \n \/\/ Clear `event.target.value` instead, since we can access the input element\n \/\/ directly from the event object. No need for ref.\n event.target.value = \"\";\n };\n \n\nSomething I also want to mention, just in case, is that you need to use `\ndecodeURIComponent ` to get the original folder name after you encoded it.\n\nGenerally speaking, using FTP or SFTP to upload your files to the server would\nbe an ideal approach for handling folders with multiple subfolders and files.\nHTTP is not really suited for bulk file transfers. Another solution you might\nconsider is zipping the files before uploading them to the server and then\nunzipping them on the server side."} {"query":"Substituting number of simulations (\"n\") in rnorm() using a list of predetermined values (in R)\n\nI'm using rnorm() but instead of substituting the number of simulations \"n\" with just one variable, I would like to do this multiple times with \"n\" having predetermined values from a series of values. (In my project, I will need to do this 11,600+ times as there are 11,600+ predetermined values which I need to use rnorm() for - but the mean and standard deviation will be constant.
To simplify everything for this discussion, I will just assume I have 10 predetermined values representing the number of simulations I would like to do.)\n\nunit.cost <- rnorm(728, mean = 8403.86, sd = 1000)\n\nInstead of just using \"728\" as the number of simulations (\"n\"), I would want to automatically substitute iteratively using this series of values: 728, 628, 100, 150, 99, 867, 934, 11, 67, 753. (I also want to use this series of values in the same order - and not just randomly using them, as the expected output should be a data frame listing the unit.cost using the predetermined values of n. Note that both mean and sd are constant.)\n\nWhat I've tried:\n\nI am very much a beginner in R, so upon searching, it looks like a for loop would be an ideal candidate to do this setup?","reasoning":"A programmer using the R language now wishes to iteratively use one value at a time from a provided list as the number of times to simulate, while keeping the mean and standard deviation constant, and collate the results into a data frame.
An R function needs to be found that applies the same operation to each element of the list in turn and collects the results.","id":"116","excluded_ids":["N\/A"],"gold_ids_long":["R_base_all\/R_base_all.txt"],"gold_ids":["R_base_all\/R_base_all_351_1.txt","R_base_all\/R_base_all_351_2.txt","R_base_all\/R_base_all_351_0.txt"],"gold_answer":"Using ` lapply() ` , we apply the same function to each element of your\nvector, then return the results in a list (hence the ` l ` of ` lapply ` ).\n\n` rnorm ` takes the arguments ` rnorm(n, mean = 0, sd = 1) ` \- we want your\nvector of values to fill the ` n ` argument, so we specify the ` mean ` and `\nsd ` :\n\n \n \n nstouse <- c(728, 628, 100, 150, 99, 867, 934, 11, 67, 753)\n \n nsout <- lapply(nstouse, rnorm, mean = 8403.86, sd = 1000)\n \n\ngives:\n\n \n \n > str(nsout)\n List of 10\n $ : num [1:728] 9168 8497 7782 6304 8711 ...\n $ : num [1:628] 7709 6477 9047 7756 9781 ...\n $ : num [1:100] 9761 7519 8675 8413 9376 ...\n $ : num [1:150] 8378 9830 6052 7293 9908 ...\n $ : num [1:99] 7074 8290 8287 10091 10256 ...\n $ : num [1:867] 8034 9261 9053 8760 10571 ...\n $ : num [1:934] 9351 8425 8199 7613 9412 ...\n $ : num [1:11] 8540 7392 8925 8540 8259 ...\n $ : num [1:67] 8366 7445 9361 9586 7632 ...\n $ : num [1:753] 8637 8854 8402 6992 8878 ...\n \n\nFor the further request for means, I will present them in a data frame:\n\n \n \n dfout <- data.frame(n = nstouse,\n mean = sapply(nsout, mean))\n \n\ngives\n\n \n \n n mean\n 1 728 8362.516\n 2 628 8404.694\n 3 100 8536.995\n 4 150 8566.984\n 5 99 8428.155\n 6 867 8373.285\n 7 934 8388.744\n 8 11 8356.367\n 9 67 8612.405\n 10 753 8406.939"}