Memory Error when applying spaCy model to large log file

I am currently working on tokenizing a large log file that contains 39,296,844 characters. I am using the `nlp = spacy.load('en_core_web_sm')` model for this text file. Additionally, I set `nlp.max_length = 100000000000` so that I can read very large files. However, when I run

```python
doc = nlp(df.iloc[161][1], disable=['ner', 'parser', 'textcat'])
```

where `df.iloc[161][1]` contains the text of the log file, I run into the following memory error (intermediate stack frames through pandas, spaCy and thinc are elided here for brevity):

```
MemoryError                               Traceback (most recent call last)
Input In [36], in <cell line: 1>()
----> 1 df["build_log"] = df["build_log"].apply(preprocess)
...
Input In [35], in preprocess(text)
      1 def preprocess(text):
----> 2     doc = nlp(text, disable=['ner', 'parser'])
      3     lemmas = [token.lemma_ for token in doc]
...
File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\thinc\layers\maxout.py:49, in forward(model, X, is_train)
---> 49     Y = model.ops.gemm(X, W, trans2=True)
...
File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\blis\py.pyx:79, in blis.py.gemm()

MemoryError: Unable to allocate 7.87 GiB for an array with shape (7331886, 288) and data type float32
```

I have been trying to figure out the issue for a while and was wondering if anyone knew how to fix it. I thought disabling certain components would help, but that doesn't seem to be the case. Any suggestion would be greatly appreciated!
|
I don't see the point of processing ~40M characters as a single string. Do the lines separated by `\n` form logical units? If so, read the string line by line and process each line using `pipe()`:

```python
text = df.iloc[161][1]
lines = text.split('\n')
processed_lines = nlp.pipe(lines, disable=['ner', 'parser', 'textcat'])

# get, for example, lemmas, nested by line
lemmas_per_line = [[tok.lemma_ for tok in line] for line in processed_lines]

# or, if you need them as a flat list
lemmas_flat = [lem for line in lemmas_per_line for lem in line]
```

Note that even when using the faster `pipe()` I wouldn't expect spaCy to process more than ~50k characters per second, so this should take at least 10-12 minutes, or possibly much longer depending on your PC and the model used. If you need a progress bar you can use tqdm:

```python
from tqdm import tqdm
...
processed_lines = tqdm(nlp.pipe(lines, disable=['ner', 'parser', 'textcat']))
...
```
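If the lines don't form clean units, another option is to split the text into bounded chunks before feeding them to `nlp.pipe()`. A minimal pure-Python sketch (the helper name and the 100k default are my own choices, not spaCy API):

```python
def chunk_text(text, max_chars=100_000):
    """Split `text` into chunks of at most `max_chars` characters,
    breaking on newlines so no line is cut in half."""
    chunk = []
    size = 0
    for line in text.split("\n"):
        # +1 accounts for the newline re-inserted when joining
        if size + len(line) + 1 > max_chars and chunk:
            yield "\n".join(chunk)
            chunk, size = [], 0
        chunk.append(line)
        size += len(line) + 1
    if chunk:
        yield "\n".join(chunk)

# each chunk can then be passed on: nlp.pipe(chunk_text(text), ...)
```

This keeps every individual `Doc` small enough that the tok2vec allocation stays bounded, regardless of total file size.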
|
Persistent storage of data? My use case needs to store data on disk immediately when it becomes available. I'm using a Raspberry Pi and a few lasers. Once a laser is activated/deactivated, a timestamp is taken and should be stored on disk. Data is only stored when the lasers are "armed"; they can also be in an "idle" state (still working, but timestamps are ignored). Lasers can also be armed/disarmed multiple times. What would be the most efficient way of doing this? A plain csv/xml/txt file, or something else? The SD card used in the RPi is limited to 8 GB.

Another question: when using the `open()` method, should I `close()` the file once I have executed `write()`, or should I keep it open as long as the script itself is running (the script runs all the time, until the user decides to quit)?
|
Sounds like Python? If so, you can write to your file using `with`:

```python
with open('/path', 'w') as f:
    f.write('stuff')
```

and the file descriptor will close automatically when execution exits the block.

However, regarding your other questions, it depends on your use case. Why does it need to be available immediately? Will another process be reading it? How quickly will this be happening? Are there any other bits of data you need to save along with the timestamp, presumably whether the laser is on or off at that time?

Likely, a good solution for you would be a lightweight database such as SQLite. The storage on disk is approximately what it would be in a "flat" file such as the .txt or .csv you reference, it will be fast, and it eliminates any concern about managing the actual writing.
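As a sketch of the SQLite route (the file, table and column names here are invented for illustration), using only the standard library:

```python
import sqlite3
import time

conn = sqlite3.connect("laser_events.db")  # hypothetical filename
conn.execute(
    "CREATE TABLE IF NOT EXISTS events ("
    "  ts REAL NOT NULL,"          # timestamp, seconds since the epoch
    "  laser_id INTEGER NOT NULL,"
    "  state TEXT NOT NULL"        # e.g. 'activated' / 'deactivated'
    ")"
)

def record_event(laser_id, state):
    # "with conn" commits on exit, flushing the event to disk immediately
    with conn:
        conn.execute(
            "INSERT INTO events (ts, laser_id, state) VALUES (?, ?, ?)",
            (time.time(), laser_id, state),
        )

record_event(1, "activated")
record_event(1, "deactivated")
```

Each `record_event()` call is a separate transaction, so a power cut loses at most the event being written, which matches the "store immediately" requirement.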
|
Django: type-hinting backward / related_name / ForeignKey relationships

Let's say we have the following models:

```python
class Site(models.Model):
    # This is Django's built-in Site model
    pass

class Organization(models.Model):
    site = models.OneToOneField(Site)
```

And if I use this somewhere in some other class:

```python
organization = self.site.organization
```

then mypy complains:

    Site has no attribute "organization"

How can I make mypy happy here?
|
Django adds backwards relations at runtime, which aren't caught by mypy since it only does static analysis. To make mypy happy (and to make it work with your editor's autocomplete) you need to add an explicit type hint to `Site`:

```python
class Site(models.Model):
    organization: "Organization"

class Organization(models.Model):
    site = models.OneToOneField(Site)
```

The quotes around the type are needed since we are making a forward reference to `Organization` before it has been defined.

For foreign keys and many-to-many relationships you can do the same thing, but using a `QuerySet` type hint instead:

```python
class Organization(models.Model):
    site = models.OneToOneField(Site)
    employees: models.QuerySet["Employee"]

class Employee(models.Model):
    organization = models.ForeignKey(
        Organization,
        on_delete=models.CASCADE,
        related_name="employees",
    )
```

EDIT: There is a django-stubs package which is meant to integrate with mypy. I haven't used it personally, but it may provide a solution for this without having to explicitly add type hints to the models.
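When the forward-referenced model lives in another module and importing it would create a circular import, the standard library's `typing.TYPE_CHECKING` guard is one way out. A sketch (`myapp.models` is a hypothetical path, and the plain class stands in for a Django model):

```python
from typing import TYPE_CHECKING

if TYPE_CHECKING:
    # This import runs only under the type checker, so no circular
    # import happens at runtime (myapp.models is a made-up path).
    from myapp.models import Organization

class Site:  # stands in for the Django model in this sketch
    organization: "Organization"

# mypy resolves the string annotation; at runtime it is just metadata
print(Site.__annotations__)
```

The annotation stays a plain string at runtime, so Django's own machinery is unaffected.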
|
Efficient way of replacing values from a data set with values from another one

I have this code:

```python
for index, row in df.iterrows():
    for index1, row1 in df1.iterrows():
        if df['budget'].iloc[index] == 0:
            if df['production_companies'].iloc[index] == df1['production_companies'].iloc[index1] \
                    and df['release_date'].iloc[index].year == df1['release_year'].iloc[index1]:
                df['budget'].iloc[index] = df1['mean'].iloc[index1]
```

It works, but it takes too long to finish. How can I make it run faster? I also tried:

```python
df.where((df['budget'] != 0 and df['production_companies'] != df1['production_companies']
          and df['release_date'] != df1['release_year']),
         other=pd.replace(to_replace=df['budget'], value=df1['mean'], inplace=True))
```

It should be faster, but it doesn't work. How do I achieve this? Thank you!

df looks like this:

    budget;  production_companies;        release_date; title
    0;       Villealfa Filmproduction Oy; 10/21/1988;   Ariel
    0;       Villealfa Filmproduction Oy; 10/16/1986;   Shadows in Paradise
    4000000; Miramax Films;               12/25/1995;   Four Rooms
    0;       Universal Pictures;          10/15/1993;   Judgment Night
    42000;   inLoops;                     1/1/2006;     Life in Loops (A Megacities RMX)
    ...

and df1:

    production_companies;      release_year; mean
    Metro-Goldwyn-Mayer (MGM); 1998;         17500000
    Metro-Goldwyn-Mayer (MGM); 1999;         12500000
    Metro-Goldwyn-Mayer (MGM); 2000;         12000000
    Metro-Goldwyn-Mayer (MGM); 2001;         43500000
    Metro-Goldwyn-Mayer (MGM); 2002;         12000000
    Metro-Goldwyn-Mayer (MGM); 2003;         36000000
    Metro-Goldwyn-Mayer (MGM); 2004;         27500000
    ...

I want to replace the value 0 in df with the "mean" value from df1 if the year and the production company are the same.
|
Get rid of all of the loops; you can accomplish this efficiently with a merge. Here I provide some example data, since none of the data you provided will actually merge. You want to make sure `release_date` in `df` is a datetime, if it isn't already.

```python
import pandas as pd
import numpy as np

df = pd.DataFrame({
    'budget': [0, 100, 0, 1000, 0],
    'production_company': ['Villealfa Filmproduction Oy', 'Villealfa Filmproduction Oy',
                           'Villealfa Filmproduction Oy', 'Miramax Films', 'Miramax Films'],
    'release_date': ['10/21/1988', '10/18/1986', '12/25/1955', '1/1/2006', '4/13/2017'],
    'title': ['AAA', 'BBB', 'CCC', 'DDD', 'EEE']})

df1 = pd.DataFrame({
    'production_companies': ['Villealfa Filmproduction Oy', 'Villealfa Filmproduction Oy',
                             'Villealfa Filmproduction Oy', 'Miramax Films', 'Miramax Films'],
    'release_year': [1988, 1986, 1955, 2006, 2017],
    'mean': [1000000, 2000000, 30000000, 4000000, 5000000]})

df['release_date'] = pd.to_datetime(df.release_date, format='%m/%d/%Y')
#   budget           production_company release_date title
#0       0  Villealfa Filmproduction Oy   1988-10-21   AAA
#1     100  Villealfa Filmproduction Oy   1986-10-18   BBB
#2       0  Villealfa Filmproduction Oy   1955-12-25   CCC
#3    1000                Miramax Films   2006-01-01   DDD
#4       0                Miramax Films   2017-04-13   EEE
```

Then you want to replace `budget` where it is 0 with the mean, if production company and year match. As a merge this is:

```python
df.loc[df.budget == 0, 'budget'] = (
    df.merge(df1,
             left_on=['production_company', df.release_date.dt.year],
             right_on=['production_companies', 'release_year'],
             how='left')
      .loc[df.budget == 0, 'mean'])
#     budget           production_company release_date title
#0   1000000  Villealfa Filmproduction Oy   1988-10-21   AAA
#1       100  Villealfa Filmproduction Oy   1986-10-18   BBB
#2  30000000  Villealfa Filmproduction Oy   1955-12-25   CCC
#3      1000                Miramax Films   2006-01-01   DDD
#4   5000000                Miramax Films   2017-04-13   EEE
```

If you don't have mean data for a given production company and year, the 0s in `budget` will be replaced with `np.NaN`, so you can either leave them or replace them back with 0 if you want.
|
Pass wx.grid to wx.Frame (wxPython)

All I'm trying to do is have 2 classes:

1. creates a grid
2. takes the grid and puts it into a wx.Notebook

So basically one class makes the grid, and the other class takes the grid as a parameter and adds it to the wx.Notebook, but I keep getting an error that says:

    self.m_grid1 = wx.grid.Grid(self)
    TypeError: Grid(): arguments did not match any overloaded call:
      overload 1: too many arguments
      overload 2: argument 1 has unexpected type 'reportGrid'

And here is the code for the grid class, called reportGrid:

```python
class reportGrid():
    def __init__(self, list):
        self.m_grid1 = wx.grid.Grid(self)
        self.m_grid1.Create(parent=None, id=wx.ID_ANY, pos=wx.DefaultPosition,
                            size=wx.DefaultSize, style=wx.WANTS_CHARS, name="Grid")

        # Grid
        self.m_grid1.CreateGrid(7, 18)
        self.m_grid1.EnableEditing(True)
        self.m_grid1.EnableGridLines(True)
        self.m_grid1.SetGridLineColour(wx.SystemSettings.GetColour(wx.SYS_COLOUR_WINDOWTEXT))
        self.m_grid1.EnableDragGridSize(True)
        self.m_grid1.SetMargins(0, 0)

        # Columns
        self.m_grid1.EnableDragColMove(False)
        self.m_grid1.EnableDragColSize(True)
        self.m_grid1.SetColLabelSize(30)
        self.m_grid1.SetColLabelAlignment(wx.ALIGN_CENTRE, wx.ALIGN_CENTRE)

        # Rows
        self.m_grid1.EnableDragRowSize(True)
        self.m_grid1.SetRowLabelSize(80)
        self.m_grid1.SetRowLabelAlignment(wx.ALIGN_CENTRE, wx.ALIGN_CENTRE)

        # Label Appearance
        self.m_grid1.SetColLabelValue(0, "Yield")
        self.m_grid1.SetColLabelValue(1, "64CU")
        self.m_grid1.SetColLabelValue(2, "Yield")
        self.m_grid1.SetColLabelValue(3, "60CU")
        self.m_grid1.SetColLabelValue(4, "Chain")
        self.m_grid1.SetColLabelValue(5, "Logic")
        self.m_grid1.SetColLabelValue(6, "Delay")
        self.m_grid1.SetColLabelValue(7, "BIST")
        self.m_grid1.SetColLabelValue(8, "CREST")
        self.m_grid1.SetColLabelValue(9, "HSIO")
        self.m_grid1.SetColLabelValue(10, "DC-Spec")
        self.m_grid1.SetColLabelValue(11, "HBM")
        self.m_grid1.SetColLabelValue(12, "OS")
        self.m_grid1.SetColLabelValue(13, "PS")
        self.m_grid1.SetColLabelValue(14, "Alarm")
        self.m_grid1.SetColLabelValue(15, "JTAG")
        self.m_grid1.SetColLabelValue(16, "Thermal IDD")
        self.m_grid1.SetColLabelValue(17, "Insuff Config")
        self.m_grid1.SetRowLabelValue(0, "Today")
        self.m_grid1.SetRowLabelValue(1, "WTD")
        self.m_grid1.SetRowLabelValue(2, "WW45")
        self.m_grid1.SetRowLabelValue(3, "WW44")
        self.m_grid1.SetRowLabelValue(4, "WW43")
        self.m_grid1.SetRowLabelValue(5, "Monthly")
        self.m_grid1.SetRowLabelValue(6, "QTD")

        # Cell Defaults
        for i in range(len(list)):
            for j in range(len(list[i])):
                self.m_grid1.SetCellValue(i, j, list[i][j])
        self.m_grid1.SetDefaultCellAlignment(wx.ALIGN_LEFT, wx.ALIGN_TOP)
```

And here is the class that takes it as a parameter and is supposed to create the notebook:

```python
class reportFrame(wx.Frame):
    def __init__(self, parent, grid1):
        wx.Frame.__init__(self, parent, id=wx.ID_ANY, title=u"Report",
                          pos=wx.DefaultPosition, size=wx.Size(7990, 210),
                          style=wx.DEFAULT_FRAME_STYLE | wx.TAB_TRAVERSAL)

        self.SetSizeHints(wx.DefaultSize, wx.DefaultSize)
        bSizer6 = wx.BoxSizer(wx.VERTICAL)
        self.m_notebook1 = wx.Notebook(self, wx.ID_ANY, wx.DefaultPosition, wx.DefaultSize, 0)
        self.m_notebook1.SetBackgroundColour(wx.SystemSettings.GetColour(wx.SYS_COLOUR_INFOBK))
        self.m_panel2 = wx.Panel(self.m_notebook1, wx.ID_ANY, wx.DefaultPosition,
                                 wx.DefaultSize, wx.TAB_TRAVERSAL)
        bSizer14 = wx.BoxSizer(wx.HORIZONTAL)
        bSizer14.Add(grid1, 0, wx.ALL, 5)
        self.m_panel2.SetSizer(bSizer14)
        self.m_panel2.Layout()
        bSizer14.Fit(self.m_panel2)
        self.m_notebook1.AddPage(self.m_panel2, u"a page", False)
        self.m_panel3 = wx.Panel(self.m_notebook1, wx.ID_ANY, wx.DefaultPosition,
                                 wx.DefaultSize, wx.TAB_TRAVERSAL)
        bSizer17 = wx.BoxSizer(wx.VERTICAL)
        bSizer17.Add(grid1, 0, wx.ALL, 5)
        self.m_panel3.SetSizer(bSizer17)
        self.m_panel3.Layout()
        bSizer17.Fit(self.m_panel3)
        self.m_notebook1.AddPage(self.m_panel3, u"a page", True)
        bSizer6.Add(self.m_notebook1, 1, wx.EXPAND | wx.ALL, 3)
        self.SetSizer(bSizer6)
        self.Layout()
        self.Centre(wx.BOTH)
        self.Show(show=True)
```
|
In `wx.grid.Grid(self)`, `self` must be of a wx.Window (or subclass) type. In your code it is of type reportGrid, and reportGrid is neither a wx.Window nor a subclass of wx.Window.

If you want the grid on a page of the wx.Notebook, you can make reportGrid a wx.Panel (or other wx.Window subclass) itself, remembering to initialise the panel with its parent before creating the grid:

```python
class reportGrid(wx.Panel):
    def __init__(self, parent, list):
        wx.Panel.__init__(self, parent)
        self.m_grid1 = wx.grid.Grid(self)
        ...
```

and inside your notebook definition:

```python
pagegrid = reportGrid(nb, data)  # data: the nested list of cell values
nb.AddPage(pagegrid, "Grid Page")
```
|
Select the rows of a table according to an id that is in a JSON in a column of the table

I need to select the rows of a table according to the id in a JSON in one of the columns, using Pandas. Example:

    column_a | column_b | column_c
    aaaa     | bbbbb    | {'id' : cc, 'name' : xx ...}
    xxxx     | yyyy     | {'id' : ff, 'name' : gg ...}

I want to select all the rows where the id of the JSON in column_c is equal to 'cc', so the result will be:

    column_a | column_b | column_c
    aaaa     | bbbbb    | {'id' : cc, 'name' : xx ...}
|
If you have already loaded this into pandas, each cell of column_c probably holds a dictionary, which can be accessed by key. Build a boolean mask by pulling the 'id' out of each row's dictionary, then filter with it:

```python
df[df['column_c'].apply(lambda d: d.get('id') == 'cc')]
```

(If column_c contains JSON strings rather than dictionaries, parse them first, e.g. with `json.loads`.)
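A runnable sketch with toy data (the values are invented): since each cell of column_c holds a dict, a boolean mask built with `apply` selects the matching rows.

```python
import pandas as pd

df = pd.DataFrame({
    "column_a": ["aaaa", "xxxx"],
    "column_b": ["bbbbb", "yyyy"],
    "column_c": [{"id": "cc", "name": "xx"}, {"id": "ff", "name": "gg"}],
})

# keep only the rows whose embedded dict has id == 'cc'
result = df[df["column_c"].apply(lambda d: d.get("id") == "cc")]
print(result)
```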
|
How to change the decimal separator from dot to comma in Pandas when a column has NaN values?

When I open my finished files in Excel, it mangles my decimal numbers. I tried changing the dot to a comma in the decimal numbers, and it worked. I used this code:

```python
def convert_df(df):
    return df.to_csv(sep=';', decimal=',').encode('utf-8')
```

The problem is that I have some NaN values in my DataFrame. I changed the NaN values to '-' to make it look prettier. The function above does not change dot to comma in columns that contain this '-' value. I also tried this code:

```python
DF['Age'].replace('.', ',', inplace=True)
```

but this solution behaves the same way as the first one. Does anyone have a solution for this problem? Thanks for the help.
|
Here is one way to do it:

```python
import pandas as pd

df = pd.DataFrame(
    {
        "col1": [1.1, 1.2, 1.3],
        "col2": [1.1, pd.NA, 1.3],
    }
)
print(df)  # Toy dataframe
#    col1  col2
# 0   1.1   1.1
# 1   1.2  <NA>
# 2   1.3   1.3

df.fillna("-").applymap(lambda x: str(x).replace(".", ",")).to_csv(
    path_or_buf="df.csv", sep=";", index=False
)
```

When you open df.csv in Excel, the decimal separators are commas and the missing value shows as '-' (screenshot omitted).
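An alternative worth considering: `to_csv` can handle both concerns itself via its `decimal` and `na_rep` parameters, so the frame doesn't need pre-processing (a sketch with invented values):

```python
import pandas as pd

df = pd.DataFrame({"col1": [1.1, 1.2, 1.3], "col2": [1.1, None, 1.3]})

# decimal=',' converts the float separator, na_rep='-' fills the NaNs
csv_text = df.to_csv(sep=";", decimal=",", na_rep="-", index=False)
print(csv_text)
```

This also keeps the numeric columns numeric on the pandas side, instead of converting everything to strings.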
|
Linking two files by ID, then removing data values from one file by referencing the other, in Python using DataFrames

I don't think this problem is that complex; I'm just not sure how to word my search. I have two files, linked by a common ID. In one file (FileA), each row lists an upper year and a lower year. The other file (FileB) contains a range of years. I don't need the years in FileB that fall inside the intervals defined in FileA. How do I remove them by referencing the common ID? It needs to be done per ID group, which adds to the complexity.

File A:

    ID, uyear, lyear
    2341, 2005, 1995
    2341, 2013, 2010

So I don't need the years 1995-2005 and 2010-2013 for the ID 2341 in FileB.

Example FileB:

    ID, year, price
    4321, 1991, 2.45
    4321, 1992, 2.47
    4321, 1993, 3.4
    4321, 1994, 3.4
    4321, 1995, 2.34
    4321, 1996, 2.4
    3214, 1990, 2.33
    3214, 1991, 2.44
    3214, 1992, 2.55
|
I added some 2341 references to your file_b example to show that they would be filtered out:

```python
import pandas as pd

file_a = pd.DataFrame(
    data=[[2341, 2005, 1995],
          [2341, 2013, 2010]],
    columns=["id", "uyear", "year"])

file_b = pd.DataFrame(
    data=[[4321, 1991, 2.45],
          [4321, 1992, 2.47],
          [4321, 1993, 3.4],
          [4321, 1994, 3.4],
          [4321, 1995, 2.34],
          [4321, 1996, 2.44],
          [2341, 1994, 2.34],
          [2341, 1995, 2.34],
          [2341, 1996, 2.44],
          [3214, 1990, 2.33],
          [3214, 1991, 2.44],
          [3214, 1992, 2.55]],
    columns=["id", "year", "price"])
```

Note that we'd expect one of the 2341 rows to be kept: 2341 for 1994. The other two rows fall into one of the ranges in file_a.

```python
remove_indexes = (file_b
                  .assign(file_b_index=lambda x: x.index)
                  .merge(file_a, on="id", how="left")
                  .query("year_x >= year_y and year_x <= uyear")
                  .file_b_index)

file_b[~file_b.index.isin(remove_indexes)].reset_index()[["id", "year", "price"]]
```

Yields:

         id  year  price
    0  4321  1991   2.45
    1  4321  1992   2.47
    2  4321  1993   3.40
    3  4321  1994   3.40
    4  4321  1995   2.34
    5  4321  1996   2.44
    6  2341  1994   2.34
    7  3214  1990   2.33
    8  3214  1991   2.44
    9  3214  1992   2.55

The basic idea is to determine which indexes from file_b need to be removed (because they match on id and fall into at least one range), then remove those rows from the original file by index.
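The inclusion test that the merge/query pair implements is just a per-row range check. Stripped of pandas, the same logic looks like this (data values taken from the examples above):

```python
# FileA as a mapping: id -> list of (lower_year, upper_year) intervals
ranges = {2341: [(1995, 2005), (2010, 2013)]}

# A few FileB rows as (id, year, price) tuples
rows = [(4321, 1991, 2.45),
        (2341, 1994, 2.34),
        (2341, 1995, 2.34),
        (2341, 1996, 2.44),
        (3214, 1990, 2.33)]

# Keep a row only if its year falls in none of its id's intervals
kept = [r for r in rows
        if not any(lo <= r[1] <= hi for lo, hi in ranges.get(r[0], []))]
print(kept)
```

The pandas version does the same thing, but the merge evaluates the check for every (row, interval) pair at once instead of looping in Python.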
|
Is there a way to make Python search Google without typing into anything?

I'm trying to make a script where you copy the word you want, press a hotkey, and Python opens Google and searches "what is the meaning of (whatever word you want) in Hebrew", then exits once the code is complete. Is there a way to do that? This is the code:

```python
from pynput.keyboard import Key, KeyCode, Listener
import webbrowser
from googlesearch import search
import pyperclip


def function_1():
    """ One of your functions to be executed by a combination """
    query = 'what is the meaning of ' + pyperclip.paste() + ' in hebrew'
    for res in search(query, tld="co.in", num=10, stop=10, pause=2):
        webbrowser.open(res)


combination_to_function = {
    frozenset([Key.delete, KeyCode(vk=67)]): function_1  # delete + c
}

pressed_vks = set()


def get_vk(key):
    """ Get the virtual key code from a key.
    These are used so case/shift modifications are ignored. """
    return key.vk if hasattr(key, 'vk') else key.value.vk


def is_combination_pressed(combination):
    """ Check if a combination is satisfied using the keys pressed in pressed_vks """
    return all([get_vk(key) in pressed_vks for key in combination])


def on_press(key):
    """ When a key is pressed """
    vk = get_vk(key)  # Get the key's vk
    pressed_vks.add(vk)  # Add it to the set of currently pressed keys
    for combination in combination_to_function:  # Loop through each combination
        if is_combination_pressed(combination):  # Check if all keys in the combination are pressed
            combination_to_function[combination]()  # If so, execute the function


def on_release(key):
    """ When a key is released """
    vk = get_vk(key)  # Get the key's vk
    pressed_vks.remove(vk)  # Remove it from the set of currently pressed keys


with Listener(on_press=on_press, on_release=on_release) as listener:
    listener.join()
```
|
If you simply need to open the browser and execute a search, you can use this:

```python
import webbrowser


def search_google(subject):
    webbrowser.open("https://www.google.com/search?q=What is the meaning of "
                    + subject + " in Hebrew")


search_google("Sample")
```

Additional parameters can also be used; take a look at this blog post by Pete Watson-Wailes.
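Browsers usually tolerate raw spaces in a URL, but it is safer to URL-encode the query first with the standard library. A sketch (the helper name is my own):

```python
from urllib.parse import urlencode


def build_search_url(subject):
    # urlencode percent-escapes spaces and other special characters in the query
    return "https://www.google.com/search?" + urlencode(
        {"q": f"What is the meaning of {subject} in Hebrew"})


url = build_search_url("shalom")
print(url)  # the result can then be passed to webbrowser.open(url)
```

This also protects against clipboard contents that contain `&`, `#` or other characters with special meaning in URLs.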
|
Biopython PDBIO assembly chain IDs

I am using Bio.PDB to parse structures in mmCIF and PDB format. I realised that PDBIO does not deal well with two-character chain identifiers (like 'AA' or 'AB') found in assembly structures. I have made a slight change to the code that works for me: it checks the length of the chain identifier string and adds a space in front of it if it is a single character, and the formatting string is modified accordingly. These are my changes to the Bio.PDB.PDBIO module; please consider putting them in a future update.

Modified:

```python
_ATOM_FORMAT_STRING = "%s%5i %-4s%c%3s%s%4i%c %8.3f%8.3f%8.3f%s%6.2f %4s%2s%2s\n"
```

Modified:

```python
for chain in model.get_list():
    if not select.accept_chain(chain):
        continue
    chain_id = chain.get_id()
    if len(chain_id) == 1:                  # Added line
        chain_id = ' {}'.format(chain_id)   # Added line
```

Modified:

```python
fp.write("TER %5i %3s %s%4i%c \n
```
|
Stack Overflow is a site for asking questions; what you are proposing is a change to the Biopython software. Luckily, Biopython is open source, so you can create a pull request so that your change can be added to the software.

1. Go to https://github.com/biopython/biopython/blob/master/Bio/PDB/PDBIO.py
2. Click on the edit (pencil) icon in the top right corner. This will create a fork of the Biopython repository.
3. Make the changes you mentioned above in your fork and add a title and a description.
4. Click on "Propose file change". You can now visually compare your modifications side by side.
5. If everything looks OK, click on "Create pull request". This will send a pull request to the master branch of the Biopython repository, where it will be reviewed. If the authors of the Biopython software agree that this is a useful change, they will merge it into the software.
|
Django: get the value of a ChoiceField form

I have a form which contains a ChoiceField of items from my database. My question is: how can I get the selected value of my ChoiceField?

forms.py:

```python
class list_data(forms.Form):
    message = forms.CharField(widget=forms.Textarea)

    def __init__(self, author, *args, **kwargs):
        super(list_data, self).__init__(*args, **kwargs)
        self.fields['List'] = forms.ChoiceField(
            choices=[(o.id, str(o)) for o in List.objects.filter(author=author)]
        )
```

views.py:

```python
def sms(request):
    form2 = list_data(author=request.user)
    if request.method == "POST":
        form2 = list_data(request.POST)
        if form2.is_valid():
            choice = form2.cleaned_data["List"]
            print(choice)
        else:
            return render(request, "data_list/sms.html", {"form2": form2})
    return render(request, "data_list/sms.html", {"form2": form2})
```

When I press the submit button it gives me this error:

    int() argument must be a string, a bytes-like object or a number, not 'QueryDict'

So I changed `form2 = list_data(request.POST)` to `form2 = list_data(author=request.user)`; the error is gone, but it prints nothing. Thanks for helping.

models.py:

```python
class List(models.Model):
    item = models.CharField(max_length=100)
    content = models.TextField()
    site = models.CharField(max_length=11, choices=THE_SITE)
    content_list = models.TextField()
    author = models.ForeignKey(User, on_delete=models.CASCADE)

    def __str__(self):
        return self.item
```
|
In the case of a POST request, you pass request.POST as the first positional parameter, and thus as `author`, not as `data`. You can rewrite the view to:

```python
def sms(request):
    if request.method == 'POST':
        form2 = list_data(request.user, data=request.POST)
        if form2.is_valid():
            choice = form2.cleaned_data["List"]
            print(choice)
    else:
        form2 = list_data(author=request.user)
    return render(request, "data_list/sms.html", {"form2": form2})
```

I would however advise using a ModelChoiceField [Django-doc] here, which removes some boilerplate logic and lets you work with model objects:

```python
class ListDataForm(forms.Form):
    message = forms.CharField(widget=forms.Textarea)
    list = forms.ModelChoiceField(queryset=List.objects.none())

    def __init__(self, author, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.fields['list'].queryset = List.objects.filter(author=author)
```

Note that according to the PEP 8 style guidelines, classes should be written in CapWords (so ListDataForm, not list_data), and attributes in snake_case (so list, not List).
|
Set conditional constraint, Pulp

First time using PuLP, and I am trying to set a conditional constraint on a production problem I am working on. Unfortunately, I cannot find any examples in the documentation on how to do so. The objective function is to maximise revenue by informing monthly plant production of which product to produce, based on the forecast price per product minus costs (naturally there are lots of other constraints omitted here, otherwise it would be far simpler). For the below data I need to set the following constraint:

A plant can only produce a single product each month, despite having the capability to produce multiple products. That limits a plant to producing only ONE product in a month. I am quite new to PuLP, but despite trawling the documentation and S.O. I cannot find an example implementation.

Production data:

My code so far:

```python
# omitted data etl logic - it is formatted as per the above images

# Get production info
plants = LpVariable.dicts('plants',
                          ((month, plant, product) for month, plant, product in wp_df.index if month >= 4),
                          lowBound=0, cat='Integer')

# Get forecast price info by product
forecast_prices = LpVariable.dicts('price_by_prod',
                                   ((month, contract) for month, contract in fcst_diffs.index if month >= 4),
                                   lowBound=0, cat='Integer')

# Prod costs for each month, plant.
costs = LpVariable.dicts('prod costs',
                         ((month, plant) for month, plant in prod_costs_df.index),
                         lowBound=0, cat='Integer')

# Define problem
model = LpProblem('Revenue Maximising Production Optimisation', LpMaximize)

# Define objective function
model += lpSum(
    [plants[m, w, g] * wp_df.loc[(m, w, g), 'production_output'] for m, w, g in wp_df.index]
    + [costs[m, w] * costs_df.loc[(m, w), 'prod_costs_usd'] for m, w in prod_costs_df.index]
)
```

I am omitting constraints for now as I have quite a few to set. Appreciate the help, thank you.
|
Introduce a set of binary variables indexed by {plant, product, month}, which determine whether plant i is being used to make product j during month k. The variable will be 1 when this is true, and 0 otherwise.

You'll then need to add constraints so that the amount of product j being produced in plant i during month k is limited. Typically this is done with a constraint of the form amount <= b*C, where b is the binary variable and C is the capacity of that plant to make that product.

Finally, you need to constrain each plant to make only a single product during each month: for each month, and for each plant, the sum of these binary variables across all the products is limited to be <= 1.

Good luck!
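A minimal PuLP sketch of this formulation, with made-up plant/product/month names and a uniform capacity (none of these come from the question's dataframes):

```python
from pulp import LpProblem, LpMaximize, LpVariable, lpSum, PULP_CBC_CMD

plants, products, months = ["P1", "P2"], ["A", "B"], [4, 5]
capacity = 100            # assumed uniform capacity
price = {"A": 3, "B": 5}  # assumed revenue per unit

model = LpProblem("one_product_per_plant_month", LpMaximize)

# Continuous production amounts and the binary "plant i makes product j in month k" switches
amount = LpVariable.dicts("amount", (plants, products, months), lowBound=0)
use = LpVariable.dicts("use", (plants, products, months), cat="Binary")

model += lpSum(price[j] * amount[i][j][k] for i in plants for j in products for k in months)

for i in plants:
    for k in months:
        # linking constraint: no production unless the switch is on
        for j in products:
            model += amount[i][j][k] <= capacity * use[i][j][k]
        # at most one product per plant per month
        model += lpSum(use[i][j][k] for j in products) <= 1

model.solve(PULP_CBC_CMD(msg=False))
```

With these prices the solver picks product B at full capacity in every plant-month; the key point is the pair of constraints inside the loop, which is exactly the big-M linking plus the one-product cardinality constraint described above.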
|
pandas grouping and visualization

I have to do some analysis using Python3 and pandas with a dataset which is shown as a toy example:

```
    location importance    agent  count
0     London        Low  chatbot      2
1        NYC     Medium  chatbot      1
2     London       High    human      3
3     London        Low    human      4
4        NYC       High    human      1
5        NYC     Medium  chatbot      2
6  Melbourne        Low  chatbot      3
7  Melbourne        Low    human      4
8  Melbourne       High    human      5
9        NYC       High  chatbot      5
```

My aim is to group by location and then count the number of Low, Medium and/or High values in the 'importance' column for each location. So far, the code I have come up with is:

```python
data.groupby(['location', 'importance']).aggregate(np.size)
```

```
                      agent  count
location  importance
London    High            1      1
          Low             2      2
Melbourne High            1      1
          Low             2      2
NYC       High            2      2
          Medium          2      2
```

This grouping and count aggregation contains the grouping keys as the index:

```python
data.groupby(['location', 'importance']).aggregate(np.size).index
```

I don't know how to proceed next. Also, how can I visualize this? Help?
|
I think you need DataFrame.pivot_table, with `aggfunc='sum'` to aggregate duplicates, and then DataFrame.plot:

```python
df = data.pivot_table(index='location', columns='importance', values='count', aggfunc='sum')
df.plot()
```

If you need counts of pairs of location with importance, use crosstab:

```python
df = pd.crosstab(data['location'], data['importance'])
df.plot()
```
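A self-contained run of the pivot_table approach on the toy data from the question (a bar chart is usually a better fit than a line plot for categorical counts, hence the `plot.bar` note):

```python
import pandas as pd

data = pd.DataFrame({
    "location":   ["London", "NYC", "London", "London", "NYC",
                   "NYC", "Melbourne", "Melbourne", "Melbourne", "NYC"],
    "importance": ["Low", "Medium", "High", "Low", "High",
                   "Medium", "Low", "Low", "High", "High"],
    "agent":      ["chatbot", "chatbot", "human", "human", "human",
                   "chatbot", "chatbot", "human", "human", "chatbot"],
    "count":      [2, 1, 3, 4, 1, 2, 3, 4, 5, 5],
})

# Sum the 'count' column per (location, importance) cell; missing pairs become NaN
pivot = data.pivot_table(index="location", columns="importance",
                         values="count", aggfunc="sum")
print(pivot)
# pivot.plot.bar() draws one group of bars per location
```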
|
How to get first element of a list inside dictionary and add to Pandas Dataframe column in Python?

I have a dictionary like this:

```python
dict = {"key 1": ["val 1", "val 2"],
        "key 2": ["val 3", "val 4", "val 5"],
        "key 3": ["val 6", "val 7"],
        ...}
```

I also have a pandas dataframe that contains all the keys, like this:

```
     key
0  key 1
1  key 2
2  key 3
...
```

I need to add a new column to the dataframe called first_key that takes the first element of the list inside the dictionary for each key in the dict, so it ends up like this:

```
     key first_key
0  key 1     val 1
1  key 2     val 3
2  key 3     val 6
...
```

which I have had some trouble with... doing something like this doesn't work:

```python
df['first_key'] = df['key'].map(dict[WHAT HERE][0])
```

:D
|
Try:

```python
dct = {
    "key 1": ["val 1", "val 2"],
    "key 2": ["val 3", "val 4", "val 5"],
    "key 3": ["val 6", "val 7"],
}

df["first_key"] = df["key"].apply(dct.get).str[0]
print(df)
```

Prints:

```
     key first_key
0  key 1     val 1
1  key 2     val 3
2  key 3     val 6
```

Or:

```python
df["first_key"] = df["key"].map(dct).str[0]
```
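The `.str[0]` trick works because pandas' string accessor also indexes into list elements. An end-to-end run of the `.map` variant:

```python
import pandas as pd

dct = {"key 1": ["val 1", "val 2"],
       "key 2": ["val 3", "val 4", "val 5"],
       "key 3": ["val 6", "val 7"]}

df = pd.DataFrame({"key": ["key 1", "key 2", "key 3"]})

# map() turns each key into its list, then .str[0] takes element 0 of each list
df["first_key"] = df["key"].map(dct).str[0]
print(df)
```

Keys absent from the dict become NaN rather than raising, which is often what you want for partial lookups.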
|
Unbound local error does not occur consistently

I am trying to add data to my SQLite3 table, which runs on a function that takes two arguments to find a city and a neighbourhood: `def scrapecafes(city, area)`. Strangely, this works well with some of the arguments I am entering but not with others. For example, if I run `scrapecafes(melbourne, thornbury)` the code works fine, but if I run `scrapecafes(melbourne, carlton)` I get the following error:

```
UnboundLocalError: local variable 'lat' referenced before assignment
```

I know the function definitely works, but I can't figure out why I am getting the UnboundLocalError for some arguments but not for others. Here is the code:

```python
import folium
from bs4 import BeautifulSoup
import requests
from requests import get
import sqlite3
import geopandas
import geopy
from geopy.geocoders import Nominatim
from geopy.extra.rate_limiter import RateLimiter

#cafeNames
def scrapecafes(city, area):
    #url = 'https://www.broadsheet.com.au/melbourne/guides/best-cafes-thornbury' #go to the website
    url = f"https://www.broadsheet.com.au/{city}/guides/best-cafes-{area}"
    response = requests.get(url, timeout=5)

    soup_cafe_names = BeautifulSoup(response.content, "html.parser")
    type(soup_cafe_names)

    cafeNames = soup_cafe_names.findAll('h2', attrs={"class": "venue-title"})  #scrape the elements
    cafeNamesClean = [cafe.text.strip() for cafe in cafeNames]  #clean the elements
    #cafeNameTuple = [(cafe,) for cafe in cafeNamesClean]
    #print(cafeNamesClean)

    #addresses
    soup_cafe_addresses = BeautifulSoup(response.content, "html.parser")
    type(soup_cafe_addresses)
    cafeAddresses = soup_cafe_addresses.findAll(attrs={"class": "address-content"})
    cafeAddressesClean = [address.text for address in cafeAddresses]
    #cafeAddressesTuple = [(address,) for address in cafeAddressesClean]
    #print(cafeAddressesClean)

    ##geocode addresses
    locator = Nominatim(user_agent="myGeocoder")
    geocode = RateLimiter(locator.geocode, min_delay_seconds=1)

    try:
        location = []
        for item in cafeAddressesClean:
            location.append(locator.geocode(item))
        lat = [loc.latitude for loc in location]
        long = [loc.longitude for loc in location]
    except:
        pass

    #zip up for table
    fortable = list(zip(cafeNamesClean, cafeAddressesClean, lat, long))
    print(fortable)

    ##connect to database
    try:
        sqliteConnection = sqlite3.connect('25july_database.db')
        cursor = sqliteConnection.cursor()
        print("Database created and Successfully Connected to 25july_database")

        sqlite_select_Query = "select sqlite_version();"
        cursor.execute(sqlite_select_Query)
        record = cursor.fetchall()
        print("SQLite Database Version is: ", record)
        cursor.close()
    except sqlite3.Error as error:
        print("Error while connecting to sqlite", error)

    #create table
    try:
        sqlite_create_table_query = '''CREATE TABLE IF NOT EXISTS test555 (
            name TEXT NOT NULL,
            address TEXT NOT NULL,
            latitude FLOAT NOT NULL,
            longitude FLOAT NOT NULL
        );'''
        cursor = sqliteConnection.cursor()
        print("Successfully Connected to SQLite")
        cursor.execute(sqlite_create_table_query)
        sqliteConnection.commit()
        print("SQLite table created")
    except sqlite3.Error as error:
        print("Error while creating a sqlite table", error)

    ##enter data into table
    try:
        sqlite_insert_name_param = """INSERT INTO test555 (name, address, latitude, longitude) VALUES (?,?,?,?);"""
        cursor.executemany(sqlite_insert_name_param, fortable)
        sqliteConnection.commit()
        print("Total", cursor.rowcount, "Records inserted successfully into table")
        sqliteConnection.commit()
        cursor.close()
    except sqlite3.Error as error:
        print("Failed to insert data into sqlite table", error)
    finally:
        if (sqliteConnection):
            sqliteConnection.close()
            print("The SQLite connection is closed")
```
|
The problem is that geopy doesn't have coordinates for Carlton. Hence, you should change your table schema and insert NULL in those cases.

When geopy doesn't have data, it returns None, and calling anything on None throws an exception. You have to put the try/except block inside the for loop:

```python
from bs4 import BeautifulSoup
import requests
from requests import get
import sqlite3
import geopandas
import geopy
from geopy.geocoders import Nominatim
from geopy.extra.rate_limiter import RateLimiter

#cafeNames
def scrapecafes(city, area):
    #url = 'https://www.broadsheet.com.au/melbourne/guides/best-cafes-thornbury' #go to the website
    url = f"https://www.broadsheet.com.au/{city}/guides/best-cafes-{area}"
    response = requests.get(url, timeout=5)

    soup_cafe_names = BeautifulSoup(response.content, "html.parser")
    cafeNames = soup_cafe_names.findAll('h2', attrs={"class": "venue-title"})  #scrape the elements
    cafeNamesClean = [cafe.text.strip() for cafe in cafeNames]  #clean the elements
    #cafeNameTuple = [(cafe,) for cafe in cafeNamesClean]

    #addresses
    soup_cafe_addresses = BeautifulSoup(response.content, "html.parser")
    cafeAddresses = soup_cafe_addresses.findAll(attrs={"class": "address-content"})
    cafeAddressesClean = [address.text for address in cafeAddresses]
    #cafeAddressesTuple = [(address,) for address in cafeAddressesClean]

    ##geocode addresses
    locator = Nominatim(user_agent="myGeocoder")
    geocode = RateLimiter(locator.geocode, min_delay_seconds=1)

    lat = []
    long = []
    for item in cafeAddressesClean:
        try:
            location = locator.geocode(item.strip().replace(',', ''))
            lat.append(location.latitude)
            long.append(location.longitude)
        except:
            lat.append(None)
            long.append(None)

    #zip up for table
    fortable = list(zip(cafeNamesClean, cafeAddressesClean, lat, long))
    print(fortable)

    ##connect to database
    try:
        sqliteConnection = sqlite3.connect('25july_database.db')
        cursor = sqliteConnection.cursor()
        print("Database created and Successfully Connected to 25july_database")

        sqlite_select_Query = "select sqlite_version();"
        cursor.execute(sqlite_select_Query)
        record = cursor.fetchall()
        print("SQLite Database Version is: ", record)
        cursor.close()
    except sqlite3.Error as error:
        print("Error while connecting to sqlite", error)

    #create table (latitude/longitude now allow NULL)
    try:
        sqlite_create_table_query = '''CREATE TABLE IF NOT EXISTS test (
            name TEXT NOT NULL,
            address TEXT NOT NULL,
            latitude FLOAT,
            longitude FLOAT
        );'''
        cursor = sqliteConnection.cursor()
        print("Successfully Connected to SQLite")
        cursor.execute(sqlite_create_table_query)
        sqliteConnection.commit()
        print("SQLite table created")
    except sqlite3.Error as error:
        print("Error while creating a sqlite table", error)

    ##enter data into table
    try:
        sqlite_insert_name_param = """INSERT INTO test (name, address, latitude, longitude) VALUES (?,?,?,?);"""
        cursor.executemany(sqlite_insert_name_param, fortable)
        sqliteConnection.commit()
        print("Total", cursor.rowcount, "Records inserted successfully into table")
        sqliteConnection.commit()
        cursor.close()
    except sqlite3.Error as error:
        print("Failed to insert data into sqlite table", error)
    finally:
        if (sqliteConnection):
            sqliteConnection.close()
            print("The SQLite connection is closed")

scrapecafes('melbourne', 'carlton')
```
|
Cannot create a PySimpleGUI table with my data

My table does not accept data in the format that I put in var dataT:

```python
import PySimpleGUI as sg

dataT = [[''], [''], [''], [''], [''], [''], [''], [''], ['']]

def edit():
    sg.theme('Light Green 1')
    headings = ['CPF', 'NAME', 'ENDEREÇO', 'CITY', 'STATE', 'GENDER', 'EMAIL', 'BIRTH', 'FAQ']

    # ------ Window Layout ------
    layout = [
        [sg.Table(values=dataT[1:][:], headings=headings, max_col_width=55,
                  auto_size_columns=True,
                  display_row_numbers=True,
                  justification='center',
                  key='-TABLE-',
                  size=(920, 390))],
        [sg.Button('Delete')],
    ]

    # ------ Create Window ------
    window = sg.Window('MyTable', layout)

    # ------ Event Loop ------
    while True:
        event, values = window.read()
        print(event, values)
        if event is None:
            break

    window.close()

edit()
```
|
There are 9 columns for headings:

```python
headings = ['CPF', 'NAME', 'ENDEREÇO', 'CITY', 'STATE', 'GENDER', 'EMAIL', 'BIRTH', 'FAQ']
```

But here your table data means 9 rows, each with only one column:

```python
dataT = [[''], [''], [''], [''], [''], [''], [''], [''], ['']]
```

The `size` option may also be wrong; the documentation says 'DO NOT USE! Use num_rows instead'.

To avoid each column width exactly matching the length of its heading, set each column width with 2 extra chars.

Character width may not be exactly as shown for a non-monospace font, so set a monospace font with `sg.set_options`.

After all that, code as follows:

```python
import PySimpleGUI as sg

dataT = [
    ['', '', '', '', '', '', '', '', ''],
]

def edit():
    sg.theme('LightGreen1')
    sg.set_options(font=("Courier New", 12))
    headings = ['CPF', 'NAME', 'ENDEREÇO', 'CITY', 'STATE', 'GENDER', 'EMAIL', 'BIRTH', 'FAQ']

    # ------ Window Layout ------
    layout = [
        [sg.Table(values=dataT, headings=headings, max_col_width=55,
                  auto_size_columns=False,
                  col_widths=list(map(lambda i: len(i)+2, headings)),
                  display_row_numbers=True,
                  justification='center',
                  key='-TABLE-',
                  num_rows=20)],
        [sg.Button('Delete')],
    ]

    # ------ Create Window ------
    window = sg.Window('MyTable', layout)

    # ------ Event Loop ------
    while True:
        event, values = window.read()
        print(event, values)
        if event is None:
            break

    window.close()

edit()
```
|
Webscraping with beautiful soup 4, class not working

I'm trying to webscrape, as a personal exercise, the players' data from this page: https://sofifa.com/players

So I want to grab the player's ID, which is in this kind of line of HTML:

```html
<td class="col col-pi" data-col="pi"> 11111 </td>
```

So what I do is this. First I get my soup:

```python
url = 'http://sofifa.com/players'

def soup_making(url):
    my_page = requests.get(url)
    soup = bs(my_page.text, "html.parser")
    return soup

soup = soup_making(url)
```

Then I try to do my scraping with find_all:

```python
test = soup.find_all('td', {'class': 'col col-pi'})
print(test)
```

And the output is `[]`. This method has worked for other classes of the same page, but it doesn't work for this particular "col col-pi", as well as some others like "col col-name". But if I scrape this:

```html
<td class="col col-ae" data-col="ae"> 26 </td>
```

```python
test = soup.find_all('td', {'class': 'col col-ae'})
print(test)
```

This works. Does anyone know why it's working with some classes and not with others, when I'm using the same method for both? Do you recommend a better way of doing it?

Thanks for the answer @myz540. It's so weird that it's not picking up all the td classes; here is an image of the source code I see:

Example of the sofifa source code td classes
|
I went to the site and inspected the source. I copied your code and grabbed all the td elements, but I did not find any with class="col col-pi".

```python
soup = soup_making(url)
tags = soup.find_all('td')
all_td_classes = set()
for tag in tags:
    for c in tag.attrs['class']:
        all_td_classes.add(c)
print(all_td_classes)
```

Outputs:

```
{'col-oa', 'col-name', 'col-pt', 'col-vl', 'col', 'col-wg', 'col-comment', 'col-ae', 'col-tt', 'col-avatar'}
```

Where are you seeing the player ID?
|
Is it possible to choose at runtime to import uic compiled files or dynamically load the ui with QUiLoader()?

As stated in the official documentation, there are 2 ways of importing .ui files in your code:

- Option A: Generating a Python class
- Option B: Loading it directly

In my project I'm using Option A, but now I'm wondering if it would be possible to choose, at the project level, Option A or Option B at runtime, because it would avoid having to compile the widgets after each change during development.
|
In the case of Qt for Python, the option is to use loadUiType:

```python
ui_class, qt_class = loadUiType("filename.ui")

class FooWidget(qt_class):
    def __init__(self, parent=None):
        super().__init__(parent)
        self.ui = ui_class()
        self.ui.setupUi(self)
```
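To get the runtime switch the question asks about, one pattern is a small factory that picks a strategy from an environment variable, with the Qt imports deferred into each branch so only the chosen path needs its modules. This is a sketch: the names (`APP_UI_MODE`, `make_ui_class`, the `ui_main` compiled module) and the `PySide6.QtUiTools.loadUiType` location are assumptions, not taken from the question:

```python
import importlib
import os

def ui_mode():
    """'dynamic' while developing (no recompile step), 'compiled' otherwise."""
    return os.environ.get("APP_UI_MODE", "compiled")

def make_ui_class(ui_file, compiled_module):
    # Imports are deferred: neither branch runs until it is actually chosen.
    if ui_mode() == "dynamic":
        from PySide6.QtUiTools import loadUiType   # Option B: load the .ui file directly
        ui_class, _base = loadUiType(ui_file)
        return ui_class
    # Option A: use the class generated by pyside6-uic into e.g. ui_main.py
    return importlib.import_module(compiled_module).Ui_MainWindow
```

During development you would run the app with `APP_UI_MODE=dynamic`; a production build simply leaves the variable unset and uses the compiled classes.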
|
How to read into a pandas dataframe the following json?

I have the following json:

```json
[
  [
    {"A": "2017-02-02T11:57:41+0000", "B": "agent", "C": "hi how are you son."},
    {"A": "2017-02-01T22:19:58+0000", "B": "user2", "C": "M contestan"},
    {"A": "2017-02-01T22:19:42+0000", "B": "user2", "C": "preetty thanks you?"},
    {"A": "2017-02-01T22:19:28+0000", "B": "user2", "C": "the cat sat over the fox"}
  ]
]
```

How can I compose it into a pandas dataframe like this?

```
A                          B      C
2017-02-02T11:57:41+0000   agent  Hola Alex, si no has realizado la modificación de los datos afiliados, por favor confírmanos tu DNI, celular y operador para revisarlo. Gracias.
2017-02-01T16:22:30+0000   user1  Hola me han depositado un dinero a mi nombre, no tengo cuenta en este banco, puedo saber por aquí si ya puedo cobrar? DNI 42782263 gracias
```

I tried to build it with:

```python
df = pd.DataFrame.apply(lambda x: map(x.from_records, json_path))
```

and

```python
df = pd.DataFrame('../path/file.json')
```

and with read_json(). However, it is not working. So, how can I build the dataframe from the json?
|
```python
In [17]: import json
```

Assuming you have the following JSON string:

```python
In [18]: s
Out[18]: '[[{"A": "2017-02-02T11:57:41+0000", "B": "agent", "C": "Hola Alex, si no has realizado la modificación de los datos afiliados, por favor confírmanos tu DNI, celular y operador para revisarlo. Gracias."}, {"A": "2017-02-01T22:19:58+0000", "B": "user2", "C": "Me podrían ayudar?, estoy llamando al CC y no contestan"}, {"A": "2017-02-01T22:19:42+0000", "B": "user2", "C": "No me llega el sms con la clave token"}, {"A": "2017-02-01T22:19:28+0000", "B": "user2", "C": "Tengo problemas para hacer pagos de servicios desde la app"}, {"A": "2017-02-01T22:19:18+0000", "B": "user2", "C": "Buenas tardes"}], [{"A": "2017-02-01T22:19:12+0000", "B": "agent", "C": "Hola Alexander, así es, el dinero ya se encuentra disponible puedes acercarte a cualquiera de nuestras tiendas el número es 1703070024597. Buenas noches"}, {"A": "2017-02-01T16:22:30+0000", "B": "user1", "C": "Hola me han depositado un dinero a mi nombre, no tengo cuenta en este banco, puedo saber por aquí si ya puedo cobrar? DNI 42782263 gracias"}]]'
```

you can parse it:

```python
In [19]: data = json.loads(s)
```

and build a DataFrame:

```python
In [31]: pd.DataFrame.from_records(np.concatenate(data))
Out[31]:
                          A      B                                        C
0  2017-02-02T11:57:41+0000  agent  Hola Alex, si no has realizado la mo...
1  2017-02-01T22:19:58+0000  user2  Me podrían ayudar?, estoy llamando al...
2  2017-02-01T22:19:42+0000  user2     No me llega el sms con la clave token
3  2017-02-01T22:19:28+0000  user2  Tengo problemas para hacer pagos de ...
4  2017-02-01T22:19:18+0000  user2                            Buenas tardes
5  2017-02-01T22:19:12+0000  agent  Hola Alexander, así es, el dinero ya ...
6  2017-02-01T16:22:30+0000  user1  Hola me han depositado un dinero a m...
```
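The same flattening can be done without NumPy, using `itertools.chain` on the outer list of conversations; here applied to the simpler JSON from the question:

```python
import json
from itertools import chain
import pandas as pd

# The (shorter) JSON from the question, as a string
s = '''[[{"A": "2017-02-02T11:57:41+0000", "B": "agent", "C": "hi how are you son."},
        {"A": "2017-02-01T22:19:58+0000", "B": "user2", "C": "M contestan"},
        {"A": "2017-02-01T22:19:42+0000", "B": "user2", "C": "preetty thanks you?"},
        {"A": "2017-02-01T22:19:28+0000", "B": "user2", "C": "the cat sat over the fox"}]]'''

data = json.loads(s)
# chain.from_iterable flattens the outer list so from_records sees one flat list of dicts
df = pd.DataFrame.from_records(chain.from_iterable(data))
print(df)
```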
|
Python Selenium: extraction of rating given by individual reviewer

I am trying to extract Google reviews of a restaurant using Python Selenium. I tried to extract the reviews posted by each reviewer. Here is my code:

```python
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.common.action_chains import ActionChains
import time

driver = webdriver.Chrome('')
base_url = 'https://www.google.com/search?tbs=lf:1,lf_ui:9&tbm=lcl&sxsrf=AOaemvJFjYToqQmQGGnZUovsXC1CObNK1g:1633336974491&q=10+famous+restaurants+in+Dunedin&rflfq=1&num=10&sa=X&ved=2ahUKEwiTsqaxrrDzAhXe4zgGHZPODcoQjGp6BAgKEGo&biw=1280&bih=557&dpr=2#lrd=0xa82eac0dc8bdbb4b:0x4fc9070ad0f2ac70,1,,,&rlfi=hd:;si:5749134142351780976,l,CiAxMCBmYW1vdXMgcmVzdGF1cmFudHMgaW4gRHVuZWRpbiJDUjEvZ2VvL3R5cGUvZXN0YWJsaXNobWVudF9wb2kvcG9wdWxhcl93aXRoX3RvdXJpc3Rz2gENCgcI5Q8QChgFEgIIFkiDlJ7y7YCAgAhaMhAAEAEQAhgCGAQiIDEwIGZhbW91cyByZXN0YXVyYW50cyBpbiBkdW5lZGluKgQIAxACkgESaXRhbGlhbl9yZXN0YXVyYW50mgEkQ2hkRFNVaE5NRzluUzBWSlEwRm5TVU56ZW5WaFVsOUJSUkFCqgEMEAEqCCIEZm9vZCgA,y,2qOYUvKQ1C8;mv:[[-45.8349553,170.6616387],[-45.9156414,170.4803685]]'

driver.get(base_url)
WebDriverWait(driver, 10).until(EC.element_to_be_clickable((By.XPATH, "//div[./span[text()='Newest']]"))).click()

total_reviews_text = driver.find_element_by_xpath("//div[@class='review-score-container']//div//div//span//span[@class='z5jxId']").text
num_reviews = int(total_reviews_text.split()[0])

all_reviews = WebDriverWait(driver, 20).until(EC.presence_of_all_elements_located((By.CSS_SELECTOR, 'div.gws-localreviews__google-review')))
time.sleep(2)
total_reviews = len(all_reviews)
while total_reviews < num_reviews:
    driver.execute_script('arguments[0].scrollIntoView(true);', all_reviews[-1])
    WebDriverWait(driver, 5, 0.25).until_not(EC.presence_of_element_located((By.CSS_SELECTOR, 'div[class$="activityIndicator"]')))
    #all_reviews = driver.find_elements_by_css_selector('div.gws-localreviews__google-review')
    time.sleep(5)
    all_reviews = WebDriverWait(driver, 5).until(EC.presence_of_all_elements_located((By.CSS_SELECTOR, 'div.gws-localreviews__google-review')))
    print(total_reviews)
    total_reviews += 5

review_info = driver.find_elements_by_xpath("//div[@class='PuaHbe']")

for person in person_infos:
    rating = person.find_element_by_xpath("./span").get_attribute('aria-label')
    print(rating)
```

However, the above code prints 'None'. I am not sure where I made the mistake. Any help to fix the issue would be appreciated.
|
You are using a wrong XPath locator. Instead of

```python
rating = person.find_element_by_xpath("./span").get_attribute('aria-label')
```

try using

```python
rating = person.find_element_by_xpath("./g-review-stars/span").get_attribute('aria-label')
```
|
How do I extract a specific column from a dataset using pandas that's imported from a HTML file? import requestsimport osimport pandas as pdfrom bs4 import BeautifulSoup#Importing htmldf = pd.read_html(os.path.expanduser("~/Documents/HTMLSpider/HTMLSpider_test/spotgamma.html"))print (df['Latest Data'])All of the documentation I can find online states that extracting a specific column from a dataset required you to specify the name of the column header in square braces, yet this is returning a TypeError when I try to do so:> print (df['Latest Data'])TypeError: list indices must be integers or slices, not strIf you're curious as to what the dataset looks like without trying to specify the column: SpotGamma Proprietary Levels Latest Data ... NDX QQQ0 Ref Price: 4465 ... 15283 3721 SpotGamma Imp. 1 Day Move: 0.91%, ... NaN NaN2 SpotGamma Imp. 5 Day Move: 2.11% ... NaN NaN3 SpotGamma Gamma Index™: 0.48 ... 0.04 -0.084 Volatility Trigger™: 4415 ... 15075 3735 SpotGamma Absolute Gamma Strike: 4450 ... 15500 3706 Gamma Notional(MM): $157 ... $4 $-397
|
Note that

```python
df = pd.read_html(os.path.expanduser("~/Documents/HTMLSpider/HTMLSpider_test/spotgamma.html"))
```

will return a list of dataframes, not a single one.

See: https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.read_html.html ("Read HTML tables into a list of DataFrame objects.")

Better do

```python
ldf = pd.read_html(os.path.expanduser("~/Documents/HTMLSpider/HTMLSpider_test/spotgamma.html"))
```

and then

```python
df = ldf[0]  # replace 0 with the number of the dataframe you want
```

to get the first dataframe (there may be more; check `len(ldf)` to see how many you got and which one has the column you need).
|
Python serial port returning null string

Reading data from the serial port: readline() in the below code returns the null string. The data read from the serial port is a hexadecimal number like AABB00EF. PuTTY gives me the output, which means the communication is working, but nothing works via Python.

Here is the code:

```python
#!/usr/bin/python
import serial, time

ser = serial.Serial()
ser.port = "/dev/ttyUSB0"
ser.baudrate = 115200
#ser.bytesize = serial.EIGHTBITS
#ser.parity = serial.PARITY_NONE
#ser.stopbits = serial.STOPBITS_ONE
#ser.timeout = None
ser.timeout = 1
#ser.xonxoff = False
#ser.rtscts = False
#ser.dsrdtr = False
#ser.writeTimeout = 2

try:
    ser.open()
except Exception, e:
    print "error open serial port: " + str(e)
    exit()

if ser.isOpen():
    try:
        #ser.flushInput()
        #ser.flushOutput()
        #time.sleep(0.5)
        # numOfLines = 0
        # f = open('signature.txt', 'w+')
        while True:
            response = ser.readline()
            print len(response)
            #f = ser.write(response)
            print response
            # numOfLines = numOfLines + 1
        f.close()
        ser.close()
    except Exception, e1:
        print "error communicating...: " + str(e1)
else:
    print "cannot open serial port "
```
|
readline will try to read until the end of the line is reached; if there is no \r or \n then it will wait forever (if you have a timeout it might work...). Instead, try something like this:

```python
ser.setTimeout(1)
result = ser.read(1000)  # read 1000 characters or until our timeout occurs, whichever comes first
print repr(result)
```

Or just use this code:

```python
ser = serial.Serial("/dev/ttyUSB0", 115200, timeout=1)
print "OK OPENED SERIAL:", ser
time.sleep(1)
# if this is arduino ... wait longer
time.sleep(5)
ser.write("\r")  # send newline
time.sleep(0.1)
print "READ:", repr(ser.read(8))
```

You can also create a read_until method:

```python
def read_until(ser, terminator="\n"):
    resp = ""
    while not resp.endswith(terminator):
        tmp = ser.read(1)
        if not tmp:
            return resp  # timeout occurred
        resp += tmp
    return resp
```

then just use it like `read_until(ser, "\r")`.
|
When does it make sense to use a public package as a Submodule in Python vs installing using pip?

I am working on a Python project that has many open-sourced dependencies that may not be regularly maintained. I tried using packages as submodules by adding them with Git, but then I get an error saying the module I want is not available when I try to use the submodule; when I install the package with pip it works fine. This hasn't happened with every package. I am wondering why I can't use the submodule like I would the installed package, simply by importing it? (Modules seem to be missing from the submodule import compared to the pip-installed import.)

So, is it better to use these packages as submodules, or just add the required package and version number to a requirements.txt file to be installed for production deployment? (Any additional functionality required for a submodule or package is added with a wrapper.)
|
git is a development tool; you use it during development but not deployment. pip is a deployment tool; during development you use it to install necessary libraries; during deployment your users use it to install your package with dependencies.Use submodules when you need something from a remote repository in your development environment. For example, if said remote repository contains Makefile(s) or other non-python files that you need and that usually aren't installed with pip.For everything else pip is preferable.
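For the requirements.txt route, pinning exact versions keeps rarely-maintained dependencies reproducible; pip can even install straight from a Git tag when a project has no PyPI release, which covers the main reason people reach for submodules. The package names below are placeholders:

```text
# requirements.txt — exact pins for reproducible deploys (names are placeholders)
somepackage==1.4.2
# a dependency with no PyPI release can still be pip-installed from git (PEP 508 direct reference):
otherpackage @ git+https://github.com/example/otherpackage@v0.3.1
```

Users then install everything with `pip install -r requirements.txt`, with no submodule checkout step.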
|
Error on downloading scrapy image

I have a scrapy spider to fetch images and content from some ecommerce sites. Now I want to download images. I wrote a few lines of code but I got this error:

```
...
  File "/usr/lib/python2.7/pprint.py", line 238, in format
    return _safe_repr(object, context, maxlevels, level)
  File "/usr/lib/python2.7/pprint.py", line 282, in _safe_repr
    vrepr, vreadable, vrecur = saferepr(v, context, maxlevels, level)
  File "/usr/lib/python2.7/pprint.py", line 323, in _safe_repr
    rep = repr(object)
  File "/usr/local/lib/python2.7/dist-packages/Scrapy-0.23.0-py2.7.egg/scrapy/item.py", line 77, in __repr__
    return pformat(dict(self))
  File "/usr/lib/python2.7/pprint.py", line 63, in pformat
    return PrettyPrinter(indent=indent, width=width, depth=depth).pformat(object)
  File "/usr/lib/python2.7/pprint.py", line 122, in pformat
    self._format(object, sio, 0, 0, {}, 0)
  File "/usr/lib/python2.7/pprint.py", line 140, in _format
    rep = self._repr(object, context, level - 1)
  File "/usr/lib/python2.7/pprint.py", line 226, in _repr
    self._depth, level)
  ... (the same frames repeat) ...
  File "/usr/lib/python2.7/pprint.py", line 280, in _safe_repr
    for k, v in _sorted(object.items()):
  File "/usr/lib/python2.7/pprint.py", line 78, in _sorted
    with warnings.catch_warnings():
exceptions.RuntimeError: maximum recursion depth exceeded
```

My spider:

```python
from scrapy.spider import Spider
from scrapy.selector import Selector
from scrapy.http import Request
from loom.items import LoomItem
import sys
from scrapy.contrib.loader import XPathItemLoader
from scrapy.utils.response import get_base_url
from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor

class LoomSpider(CrawlSpider):
    name = "loom_org"
    allowed_domains = ["2loom.com"]
    start_urls = [
        "http://2loom.com",
        "http://2loom.com/collections/basic",
        "http://2loom.com/collections/design",
        "http://2loom.com/collections/tum-koleksiyon"
    ]

    rules = [
        Rule(SgmlLinkExtractor(allow='products'), callback='parse_items', follow=True),
        Rule(SgmlLinkExtractor(allow=()), follow=True),
    ]

    def parse_items(self, response):
        sys.setrecursionlimit(10000)
        item = LoomItem()
        items = []
        sel = Selector(response)

        name = sel.xpath('//h1[@itemprop="name"]/text()').extract()
        brand = "2loom"
        price_lower = sel.xpath('//h1[@class="product-price"]/text()').extract()
        price = "0"
        image = sel.xpath('//meta[@property="og:image"]/@content').extract()
        description = sel.xpath('//meta[@property="og:description"]/@content').extract()
        print image

        ## the image is downloaded here
        loader = XPathItemLoader(item, response=response)
        loader.add_xpath('image_urls', '//meta[@property="og:image"]/@content')

        ## the ID is split off (e.g. "10. Design | Siyah & beyaz kalpli")
        id = name[0].strip().split(". ")
        id = id[0]

        item['id'] = id
        item['name'] = name
        item['url'] = response.url
        item['image'] = loader.load_item()
        item['category'] = "Basic"
        item['description'] = description
        item["brand"] = "2Loom"
        item['price'] = price
        item['price_lower'] = price_lower
        print item
        items.append(item)
        return items
```

Items:

```python
# Define here the models for your scraped items
#
# See documentation in:
# http://doc.scrapy.org/en/latest/topics/items.html

from scrapy.item import Item, Field

class LoomItem(Item):
    # define the fields for your item here like:
    # name = Field()
    id = Field()
    name = Field()
    brand = Field()
    image = Field()
    category = Field()
    description = Field()
    price_lower = Field()
    price = Field()
    url = Field()
    images = Field()
    image_urls = Field()
```

Pipeline:

```python
from scrapy.contrib.pipeline.images import ImagesPipeline, ImageException
from scrapy.http import Request
from cStringIO import StringIO
import psycopg2
import hashlib
from scrapy.conf import settings

class MyImagePipeline(ImagesPipeline):
    def get_media_requests(self, item, info):
        return [Request(x) for x in item.get('image_urls', [])]

    def item_completed(self, results, item, info):
        item['images'] = [x for ok, x in results if ok]
        return item

    # Override the convert_image method to disable image conversion
    def convert_image(self, image, size=None):
        buf = StringIO()
        try:
            image.save(buf, image.format)
        except Exception, ex:
            raise ImageException("Cannot process image. Error: %s" % ex)
        return image, buf

    def image_key(self, url):
        image_guid = hashlib.sha1(url).hexdigest()
        return 'full/%s.jpg' % (image_guid)
```

Settings:

```python
BOT_NAME = 'loom'

SPIDER_MODULES = ['loom.spiders']
NEWSPIDER_MODULE = 'loom.spiders'
DOWNLOAD_DELAY = 5

ITEM_PIPELINES = {'scrapy.contrib.pipeline.images.ImagesPipeline': 1}
IMAGES_STORE = '/root/loom/images/'
IMAGES_THUMBS = {
    'small': (90, 90),
    'big': (300, 300),
}
USER_AGENT = "Mozilla/5.0 (Windows NT 6.0; rv:2.0) Gecko/20100101 Firefox/4.0"
IM_MODULE = 'loom.pipelines.MyImagePipeline'
ITEM_PIPELINES = ['loom.pipelines.MyImagePipeline']
LOG_LEVEL = 'INFO'
```

I don't know why I got this error. Thanks for any help.
|
Try changing the recursion limit with sys.setrecursionlimit(10000) in Spyder. My Python interpreter allowed about 900 recursions before raising a RuntimeError.
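A minimal sketch of inspecting and raising the limit (the value 10000 is just illustrative):

```python
import sys

# Inspect the current recursion limit (commonly 1000 by default in CPython).
default_limit = sys.getrecursionlimit()

# Raise the limit; note this only defers the RuntimeError/RecursionError,
# it does not make a truly unbounded recursion safe, and very deep
# recursion can still crash the interpreter with a real stack overflow.
sys.setrecursionlimit(10000)

# Restore the original limit afterwards.
sys.setrecursionlimit(default_limit)
```

If the limit you need keeps growing, it is usually a sign the algorithm should be rewritten iteratively instead.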
|
Firestore updates using python api are not persisting

I have the following code.

from firebase_admin import firestore

db = firestore.client()
collection = db.collection('word_lists')
word_list = collection.get()

for item in word_list:
    item_dict = item.to_dict()
    print item_dict['next_practice_date']
    item.reference.update({'next_practice_date': 0.0})

When I run the code the first time everything is fine, no errors. The second time I run it I expect all the prints to print 0.0, but instead many print None, particularly the ones at the end. What is going on?
|
I did not find the solution to the problem, but instead switched from "from firebase_admin import firestore" to "from google.cloud import firestore" and everything works well now.
|
Sparse matrix hstack getting error regarding subscriptability

Would someone please explain why this does not work?

from scipy.sparse import coo_matrix, hstack

row = np.array([0,3,1,0])
col = np.array([0,3,1,2])
data = np.array([4,5,7,9])
temp = coo_matrix((data, (row, col)))
temp_stack = coo_matrix([0, 11,22,33], ([0, 1,2,3], [0, 0,0,0]))
temp_res = hstack(temp, temp_stack)

I get an error that coo_matrix is not subscriptable, but I don't understand why; it appears that I am concatenating matrices of compatible dimension.
|
First note that the first argument of hstack is expected to be a tuple containing the arrays to be stacked, so you should call it with hstack((temp, temp_stack)).

Next, temp has shape (4, 4) and temp_stack has shape (1, 4). These shapes can not be hstacked. What shape do you expect the result to be? If you are trying to create a result that has shape (5, 4), use vstack:

In [28]: result = vstack((temp, temp_stack))

In [29]: result.A
Out[29]:
array([[ 4,  0,  9,  0],
       [ 0,  7,  0,  0],
       [ 0,  0,  0,  0],
       [ 0,  0,  0,  5],
       [ 0, 11, 22, 33]], dtype=int64)

If you meant for temp_stack to have shape (4, 1), then fix how it is created by adding an extra level of parentheses in the call of coo_matrix:

In [38]: temp_stack = coo_matrix(([0, 11, 22, 33], ([0, 1, 2, 3], [0, 0, 0, 0])))

In [39]: temp_stack.shape
Out[39]: (4, 1)

In [40]: result = hstack((temp, temp_stack))

In [41]: result.A
Out[41]:
array([[ 4,  0,  9,  0,  0],
       [ 0,  7,  0,  0, 11],
       [ 0,  0,  0,  0, 22],
       [ 0,  0,  0,  5, 33]], dtype=int64)

By the way, I think it is a SciPy bug that this call

temp_stack = coo_matrix([0, 11,22,33], ([0, 1,2,3], [0, 0,0,0]))

does not raise an error. That call is equivalent to

temp_stack = coo_matrix(arg1=[0, 11,22,33], shape=([0, 1,2,3], [0, 0,0,0]))

and that shape value is clearly not valid. That call to coo_matrix should raise a ValueError. I created an issue for this on the SciPy github site: https://github.com/scipy/scipy/issues/9919
|
How to assign project category to a project using JIRA rest apis

How do I assign a project category to a project using the JIRA REST APIs? My Jira server version is 6.3.13.
|
All the following is using Python!

If you are creating a new issue you can do it in two different ways, the first being a dict:

issue_dict = {
    'project': {'id': 123},
    'summary': 'New issue from jira-python',
    'description': 'Look into this one',
    'issuetype': {'name': 'Bug'},
}
new_issue = jira.create_issue(fields=issue_dict)

The second way is to do it all in the function call:

new_issue = jira.create_issue(project='PROJ_key_or_id',
                              summary='New issue from jira-python',
                              description='Look into this one',
                              issuetype={'name': 'Bug'})

However, if you are updating an existing issue then you have to use the .update() function, which would look like this:

new_issue.update(issuetype={'name': 'Bug'})

Source: http://pythonhosted.org/jira/
|
How to make multiple update in django?

I'm trying to make multiple updates in Django by checking checkboxes and then pushing the update button. This is my view.py:

def update_kel_stat(request, id, kelid):
    if request.method == "POST":
        cursor = connection.cursor()
        sql = "UPDATE keluargapeg_dipkeluargapeg SET KelStatApprov='3' WHERE (PegUser = %s AND KelID=%s )" % (id, kelid,)
        cursor.execute(sql)

where 'id' is the user parameter and 'kelid' is the row parameter; 'kelid' should become a multiple-value parameter.

This is my url.py:

url(r'^karyawan/update_status/(?P<id>\d+)/(?P<kelid>\d+)/$', views.pesan_update, name='update_pesan')

template.html, where I use JavaScript to load the URL used for the update:

<script>
function setDeleteAction() {
    if (confirm("Are you sure want to delete these rows?")) {
        document.kel.action = "{% url 'update_pesan' %}";
        document.kel.submit();
    }
}
</script>
<form method="post" action="" name="kel" enctype="multipart/form-data">
{% for keluarga in kels %}
    <tr id="{{ keluarga.KelID }}">
        <td><a href="#">{{ keluarga.KelNamaLengkap }}</a></td>
        <td>{{ keluarga.KelHubungan }}</td>
        <td class="hidden-480">{{ keluarga.KelTglLahir }}</td>
        <td>{{ keluarga.KelJenisKel }}</td>
        <td class="hidden-480">{{ keluarga.KelIjazahAkhir }}</td>
        <td>{{ keluarga.KelPekerjaan }}</td>
        {% if keluarga.KelStatApprov == '1' %}
            <td><span class="label label-sm label-danger">Draft</span></td>
        {% elif keluarga.KelStatApprov == '2' %}
            <td><span class="label label-sm label-warning">Revisi</span></td>
        {% elif keluarga.KelStatApprov == '3' %}
            <td><span class="label label-sm label-success">Setuju</span></td>
        {% endif %}
        <td>{{ keluarga.KelKetRevisi }}</td>
        <td><a href="{{ MEDIA_URL }}{{ keluarga.KelFileUpload }}">{{ keluarga.KelNamaFile }}</a></td>
        <td><input type="checkbox" name="kel[]" value="{{ keluarga.KelID }}"></td>
        <td>
            <div class="hidden-sm hidden-xs action-buttons">
                <a class="green" href="{% url 'edit_keluarga' keluarga.PegUser keluarga.KelID %}">
                    <i class="ace-icon fa fa-pencil bigger-130"></i>
                </a>
                <a class="red" href="#">
                    <i class="ace-icon fa fa-trash-o bigger-130"></i>
                </a>
            </div>
        </td>
    </tr>
{% endfor %}
<tr>
    <td>
        <button type="button" name="btn_delete" id="btn_delete" class="btn btn-success" onClick="setDeleteAction();">Approve</button>
    </td>
</tr>

How can I get multiple rows (like an array in PHP) in the view and URL?
|
Were you looking for getlist?

    QueryDict.getlist(key, default=None)

    Returns the data with the requested key, as a Python list. Returns an empty list if the key doesn't exist and no default value was provided. It's guaranteed to return a list of some sort unless the default value provided is not a list.

request.POST.getlist('kel')
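For reference, multi-valued form data works the same way outside Django; a minimal stdlib sketch of the idea (the field name kel and the values are just illustrative):

```python
from urllib.parse import parse_qs

# A POST body where the same field appears several times, as submitted
# by multiple checked checkboxes sharing one name.
body = "kel=3&kel=7&kel=12"

# parse_qs collects repeated keys into a list, which is what
# request.POST.getlist('kel') returns inside a Django view.
params = parse_qs(body)
print(params["kel"])  # → ['3', '7', '12']
```

Note that in the template above the checkboxes are named kel[] (PHP style), so in Django you would call getlist with exactly that name, or drop the brackets from the template.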
|
Open New Thunderbird Email Using Python

I'm trying to open just a new Thunderbird email and attach a file to it, for me to fill out the recipient email addresses instead of hardcoding them. I'm using Windows 7, Python 2.7 and the latest version of Thunderbird. I noticed some other questions like this but they all involved writing a Thunderbird plugin, which isn't what I want to do. I know how to do this for Outlook like below and want to do the same thing:

# open new e-mail in Outlook and attach the Map Package
outlook = win32com.client.Dispatch("Outlook.Application")
email = outlook.CreateItem(0)
email.Subject = "Map Package Area of Interest"
email.Attachments.Add(pkgPath)
email.Display()

Thanks
|
Thunderbird and other programs from Mozilla don't use win32com. Instead, they use XPCOM. See http://kb.mozillazine.org/Calling_Thunderbird_from_other_programs. There is a Python module, PyXPCOM, which could help you out with controlling Mozilla from Python, if you really want to. You can also use AutoHotKey to script Thunderbird and many other programs, too.
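Besides XPCOM, Thunderbird also accepts a -compose command-line flag, which may already cover this use case; a sketch of launching it from Python (the install path and file names here are assumptions, check your own system):

```python
import subprocess

def compose_command(exe_path, subject, attachment):
    # Thunderbird's -compose flag takes a single comma-separated
    # key=value string; recipients are left blank so they can be
    # filled in by hand in the compose window.
    spec = "subject='{}',attachment='file:///{}'".format(
        subject, attachment.replace("\\", "/"))
    return [exe_path, "-compose", spec]

cmd = compose_command(r"C:\Program Files\Mozilla Thunderbird\thunderbird.exe",
                      "Map Package Area of Interest",
                      r"C:\temp\package.mpk")

# Uncomment to actually open the compose window:
# subprocess.Popen(cmd)
```

This only opens the compose window; it does not send the message, which matches the Outlook email.Display() behaviour above.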
|
Why doesn't the child class overwrite the fields from the base class in Python, and how do I deal with that?

I created a base abstract class in Python which is the base class for all child classes and implements some functions which would otherwise be redundant to write in every child class.

class Element:
    ### SITE ###
    __sitedefs = [None]

    def getSitedefs(self):
        return self.__sitedefs

class SRL16(Element):
    ### SITE ###
    __sitedefs = ['SLICEM']

The result is logical on the one hand, because I get the value from the base class where I declared it, but on the other hand I override it in the child class. My question is how to get

srl = SRL16()
srl.getSitedefs()

to return SLICEM, not None. Probably I am misunderstanding something very basic, but please help. Best regards
|
Your problem is due to name mangling. See e.g.: What is the meaning of a single- and a double-underscore before an object name?

If you change all the __sitedefs to _sitedefs then everything should work as expected.
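To illustrate the fix: with __sitedefs, Python rewrites the access inside Element to Element._Element__sitedefs at compile time, so the subclass value is never seen. With a single underscore, normal attribute lookup starts at the instance's actual class:

```python
class Element:
    _sitedefs = [None]  # single underscore: no name mangling

    def getSitedefs(self):
        # Looked up on the instance, so a subclass-level _sitedefs
        # shadows the base-class one.
        return self._sitedefs

class SRL16(Element):
    _sitedefs = ['SLICEM']

print(SRL16().getSitedefs())    # → ['SLICEM']
print(Element().getSitedefs())  # → [None]
```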
|
What type of field do I have to use in order to associate related parent object in serializer

I have two models with a one-to-many relation. I will use the default example.

class Album(models.Model):
    id = models.UUIDField(primary_key=True, default=uuid.uuid4, editable=False)
    album_name = models.CharField(max_length=100)
    artist = models.CharField(max_length=100)

class Track(models.Model):
    album = models.ForeignKey(Album, related_name='tracks', on_delete=models.CASCADE)
    order = models.IntegerField()
    title = models.CharField(max_length=100)
    duration = models.IntegerField()

The question I have is how do I implement the serializer in order to associate a track with an album by providing only the Album 'id' key. What I want is to know which type of serializers.Field I have to declare in the serializer. Here is an example:

class TrackSerializer(serializers.Serializer):
    album = serializers.MagiclyRelatedFieldByUUID()  # <---- ???
    title = serializers.CharField()
    order = serializers.IntegerField()
    duration = serializers.IntegerField()

    class Meta:
        model = models.Track

Request looks like this:

{
    'album': '137b5a6c-dd76-11e6-bf26-cec0c932ce01',
    'title': 'my new track',
    'duration': 10,
    'order': 31
}

Updated: I managed to solve it with a HyperlinkedRelatedField by specifying view_name='album-detail', queryset=models.Album.objects.all() and lookup_field='uuid', but in this case I have to send a valid URL for the album. Is that the only way to get an instance of the related model in the serializer? So far my solution is the following:

class TrackSerializer(serializers.Serializer):
    album = serializers.HyperlinkedRelatedField(view_name='album-detail',
                                                queryset=models.Album.objects.all(),
                                                lookup_field='uuid')
|
You can even try this:

album = serializers.SlugRelatedField(
    queryset=models.Album.objects.all(),
    slug_field='uuid'
)

It will accept your model uuid to get the object.

{
    "album": "ed79716c-ba5d-4d3f-bb96-2685b38139e5",
    "title": "Eleanor Rigby",
    "order": 2,
    "duration": 206
}
|
Organize subplots using matplotlib

I am trying to plot the content of a JSON file. The script should generate 64 subplots, each consisting of 128 samples (voltage levels). "ElementSig" is a key in that JSON file for a list of 8192 samples. I am taking 128 samples at a time and generating a subplot of each, as you see in my following script:

import json
import matplotlib.pyplot as plt

json_data = open('txrx.json')
loaded_data = json.load(json_data)
json_data.close()

j = 0
E = loaded_data['ElementSig']
for i in range(64):
    plt.ylabel('E%s' % str(i+1))
    print 'E', i, ':'
    plt.figure(1)
    plt.subplot(64, 2, i+1)
    print E[0+j:127+j]
    plt.plot(E[0+j:127+j])
    j += 128
plt.show()

The result is very packed and the figures are overlapping. Any help is appreciated.
|
I got a better figure when I saved it as a .png file.

fig = plt.figure(figsize=(20, 222))
plt.subplots_adjust(top=.9, bottom=0.1, wspace=0.2, hspace=0.2)
for i in range(1, 65):
    print 'E', i, ':'
    plt.subplot(64, 2, i)
    plt.ylabel('E%s' % str(i))
    i += 1
    print E[0+j:127+j]
    plt.plot(E[0+j:127+j])
    j += 128
plt.savefig('foo.png', bbox_inches='tight')
plt.show()

Though I believe there is a better solution.
|
TypeError: cupcake_flour() missing 1 required positional argument: 'cu_flour'. What am I doing wrong?

This is my first time taking Python and I'm having a hard time understanding what I've done wrong to receive this error. This code is supposed to change grams to cups for a cupcake recipe, and this is just the first step, converting the flour. The input function works, but after that I get the above error.

user = input("How many cookies do you want to make? ")

def cupcake_flour(cu_flour):
    cu_flour = user * 100 / 120
    print(cu_flour + "cups of flour")

def main():
    cupcake_flour()

main()
|
You have defined your function cupcake_flour to take an argument, but you are not providing one when you are calling cupcake_flour(). You probably want to pass the user input to the function and then print the amount of flour needed, like so:

def cupcake_flour(cookies):
    cu_flour = cookies * 100 / 120
    print(str(cu_flour) + "cups of flour")

def main():
    num_cookies = int(input("How many cookies do you want to make? "))
    cupcake_flour(num_cookies)

main()

Note a few minor changes:

- int(input("How many cookies do you want to make? ")) since the input is supposed to be interpreted as a number (and used as such in the calculation)
- Moved the user input into main, as it makes more sense to only ask for it when main() is called
- str(cu_flour) as it needs to be a string
|
subset a python dataframe by conditions

I am trying to select the name rows with count > 250, which are called effective here, and then find the mean of their ratings:

t3=dfnew.groupby('name')['ratings']
t4=t3.count()
t5=t4[t4.values>250]
t6=t3.mean()
t6[(t6.index==t5.index)]

Obviously the problem is in the last row of my code, where I want to match t6's index with t5's index. If they match, keep it; otherwise leave it out. It is kind of like an inner join in SQL. What should I do to modify the last row?

Suppose the dataframe is like this:

Input:

name  ratings
A     1
A     2
:     :
A     251
B     1
B     2
:     :
B     230

so the intended result should be 126 ((1+251)/2).

Output:

A    126
|
t3=dfnew.groupby('name')['ratings'].agg(['count','mean'])
t5=t3[t3['count']>250]
t5

It works fine when I aggregate the two functions at the same time.
|
"view_as_windows" from skimage but in Pytorch Is there any Pytorch version of view_as_windows from skimage? I want to create the view while the tensor is on the GPU.
|
I needed the same functionality from Pytorch and ended up implementing it myself:

def view_as_windows_torch(image, shape, stride=None):
    """View tensor as overlapping rectangular windows, with a given stride.

    Parameters
    ----------
    image : `~torch.Tensor`
        4D image tensor, with the last two dimensions
        being the image dimensions
    shape : tuple of int
        Shape of the window.
    stride : tuple of int
        Stride of the windows. By default it is half of the window size.

    Returns
    -------
    windows : `~torch.Tensor`
        Tensor of overlapping windows
    """
    if stride is None:
        stride = shape[0] // 2, shape[1] // 2
    windows = image.unfold(2, shape[0], stride[0])
    return windows.unfold(3, shape[1], stride[1])

Essentially it is just two lines of Pytorch code relying on torch.Tensor.unfold. You can easily convince yourself that it does the same as skimage.util.view_as_windows:

import torch
x = torch.arange(16).reshape((1, 1, 4, 4))
patches = view_as_windows_torch(image=x, shape=(2, 2))
print(patches)

Gives:

tensor([[[[[[ 0,  1],
            [ 4,  5]],

           [[ 1,  2],
            [ 5,  6]],

           [[ 2,  3],
            [ 6,  7]]],


          [[[ 4,  5],
            [ 8,  9]],

           [[ 5,  6],
            [ 9, 10]],

           [[ 6,  7],
            [10, 11]]],


          [[[ 8,  9],
            [12, 13]],

           [[ 9, 10],
            [13, 14]],

           [[10, 11],
            [14, 15]]]]]])

I hope this helps!
|
literal_eval and boolean Logic in Python

>>> from ast import literal_eval
>>> H = {"('a','b')":1}
>>> x = ('a','b')
>>> str(x)
"('a', 'b')"
>>> list(H.keys())[0]
"('a','b')"
>>> str(x) == list(H.keys())[0]
False

Why do I get a False statement? However, when I do

>>> x == literal_eval(list(H.keys())[0])
True

I get a True statement.
|
In my tests, str(x) is "('a', 'b')". Do you notice the space after the comma?That is enough to explain why the strings are different (one contains a space while the other does not), while the tuples are equal.
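The point is worth spelling out: the first comparison is between two strings, not two tuples, so whitespace matters character for character; literal_eval compares parsed values instead:

```python
from ast import literal_eval

x = ('a', 'b')

# str() of a tuple always inserts a space after the comma...
assert str(x) == "('a', 'b')"

# ...so it differs from the space-free dictionary key as a string.
key = "('a','b')"
assert str(x) != key

# literal_eval parses the key back into a tuple, and tuple equality
# does not care how the source text was spaced.
assert literal_eval(key) == x
```

If you need tuples as dictionary keys, it is usually simpler to use the tuples themselves as keys rather than their string form.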
|
How can I find the second smallest output for my function?

I used this function to find the biggest pullback ($-wise) for my data frame column with stock prices. I need help figuring out how to get the X following outputs. Basically the plan is to join those outputs into a new data frame to get the X biggest pullbacks within my data frame.

Main question: How could I loop through the X biggest pullbacks, starting at the biggest pullback and finding the X next biggest pullbacks?

def maxdrop(p):
    bestdrop = 0
    wheredrop = -1, -1
    i = 0
    while i < len(p) - 1:
        if p[i+1] < p[i]:
            bestlocal = p[i+1]
            wherelocal = i+1
            j = i + 1
            while j < len(p) - 1 and p[j + 1] < p[i]:
                j += 1
                if p[j] < bestlocal:
                    bestlocal = p[j]
                    wherelocal = j
            if p[i] - bestlocal > bestdrop:
                bestdrop = p[i] - bestlocal
                wheredrop = i, wherelocal
            i = j+1
        else:
            i += 1
    return bestdrop, wheredrop

maxdrop(df1['price'])

Here is the current output for the code:

(782.5300000000001, (1640, 1657))
|
The strategy you can use is to first find the biggest pullback, then exclude the range where that pullback is, and then calculate the biggest pullback for all valid ranges that are left.

I made my own maxdrop function that works in a similar fashion as yours, except it only looks within specified bounds. Then alldrops returns an array of all draw-downs without overlap. Then you could sort this array by the $ draw-down to get what you want.

def maxdrop(pricearray, leftbound=0, rightbound=-1):
    # Calculate the pullback/drop by splitting the array in half,
    # then calculating the max of the first and the min of the second.
    # By testing all "splitting points" and selecting the maximum we get the biggest drop
    drops = []
    begin, end = -1, -1
    if rightbound == -1:
        rightbound = len(pricearray)
    for i in range(leftbound+1, rightbound-1):
        leftpart = pricearray[leftbound:i]
        rightpart = pricearray[i:rightbound]
        begin = pricearray.index(max(leftpart))
        end = pricearray.index(min(rightpart))
        delta = max(leftpart) - min(rightpart)
        drops.append([delta, begin, end])
    if len(drops) > 0:
        return max(drops)
    else:
        return None

def alldrops(pricearray):
    droplist = []  # Stores all the drops
    droplist.append(maxdrop(pricearray))
    while True:
        termswhereadded = False
        validranges = []  # Stores all ranges that are not part of a drawdown
        # Get ranges that are not already part of a drawdown
        for i in range(-1, len(droplist)):
            if (i == -1):
                b = 0
            else:
                b = droplist[i][2]
            if (i == len(droplist)-1):
                e = len(pricearray)-1
            else:
                e = droplist[i+1][1]
            if (b < e-1):
                validranges.append((b, e))
        # If there are no valid ranges left, we are finished
        if (len(validranges) == 0):
            break
        # Calculate the biggest drawdown in all those valid ranges
        for vrange in validranges:
            drop = maxdrop(pricearray, vrange[0], vrange[1])
            if (drop != None):
                if (drop[0] > 0):
                    droplist.append(drop)
                    termswhereadded = True
        droplist.sort(key=lambda n: n[1])
        # If no drawdown was added we are finished
        if (not termswhereadded):
            break
    return droplist

For an array with a hundred random elements you get (the first element in each array is the pullback, the second where it starts, the third where it ends):

[[0.6391820462436719, 0, 1],
 [4.945067107442718, 3, 7],
 [0.38440828483857103, 10, 11],
 [0.44438096165870533, 14, 15],
 [1.2783529599412589, 23, 24],
 [0.20126563551455945, 25, 26],
 [1.1957951552365884, 28, 30],
 [0.5895546638677374, 32, 37],
 [1.5337809447945148, 40, 41],
 [3.0108867730327518, 43, 60],
 [1.0752516082881058, 67, 68],
 [1.0413928565593054, 70, 71],
 [3.039113846862932, 82, 87],
 [6.364453213541438, 92, 99]]

Which when you plot the pullbacks in matplotlib:
|
Is there any method to replace selectROI with auto selection?

I have finished detecting faces in videos and generating a bounding box when a face is detected by the Haar Cascade classifier. Now I only want to analyze a particular part of the face, such as the forehead or cheeks, but I can only choose the place manually through selectROI in OpenCV. Is there any method to revise my code, or do I have to keep doing it manually?

import cv2 as cv
import argparse
import numpy as np

parser = argparse.ArgumentParser()
parser.add_argument('--face_cascade', help='Path to face cascade.',
                    default='opencv-3.4/data/haarcascades/haarcascade_frontalface_alt2.xml')
parser.add_argument('--camera', help='Camera divide number.', type=int, default=0)
args = parser.parse_args()

face_cascade_name = args.face_cascade
face_cascade = cv.CascadeClassifier()
if not face_cascade.load(cv.samples.findFile(face_cascade_name)):
    print('Error loading face cascade')
    exit(0)

camera_device = args.camera  # for built-in camera
cap = cv.VideoCapture(camera_device)
if not cap.isOpened:
    print('Error opening video capture')
    exit(0)

tracker = cv.TrackerCSRT_create()
roi = None

while True:
    ret, frame = cap.read()
    if frame is None:
        print('No captured frame, Break!')
        break
    frame_gray = cv.cvtColor(frame, cv.COLOR_BGR2GRAY)
    frame_gray = cv.equalizeHist(frame_gray)
    faces = face_cascade.detectMultiScale(
        frame_gray, scaleFactor=1.1, minNeighbors=3)
    for (x, y, w, h) in faces:
        cv.rectangle(frame, (x, y), (x+w, y+h), (0, 0, 255), 3)
    if roi is None:
        roi = cv.selectROI('frame', frame, False, False)
        if roi != (0, 0, 0, 0):
            tracker.init(frame, roi)
    success, rect = tracker.update(frame)
    if success:
        (x, y, w, h) = [int(i) for i in rect]
        cv.rectangle(frame, (x, y), (x+w, y+h), (0, 255, 0), 3)
    cv.imshow('Face Detection', frame)
    if (cv.waitKey(1) == ord('q') or cv.waitKey(1) == 27):
        break
|
There are different ways you can go about detecting and analysing facial regions; I am listing a few:

- You can use Dlib's landmark detector to detect facial landmarks and classify the facial regions based on each landmark's position. Example: the face portion above the eyebrow landmarks is the forehead region, etc. For more clarity see the image below.
- You can use object detectors which detect the facial regions you want, but it will be difficult to find a pre-trained model for this; you would have to train your own model.
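As a sketch of the first option: Dlib's common pre-trained shape predictor uses a fixed 68-point layout, so each facial region corresponds to a fixed range of landmark indices. The dummy coordinates below stand in for real dlib.shape_predictor output (an assumption: your predictor must use the same 68-point scheme):

```python
# Index ranges of the standard 68-point facial-landmark layout.
REGIONS = {
    "jaw": range(0, 17),
    "right_eyebrow": range(17, 22),
    "left_eyebrow": range(22, 27),
    "nose": range(27, 36),
    "right_eye": range(36, 42),
    "left_eye": range(42, 48),
    "mouth": range(48, 68),
}

def region_points(landmarks, region):
    """Pick the (x, y) points of one facial region out of 68 landmarks."""
    return [landmarks[i] for i in REGIONS[region]]

# Dummy landmarks standing in for predictor output: point i at (i, i).
landmarks = [(i, i) for i in range(68)]
print(region_points(landmarks, "right_eyebrow"))
# → [(17, 17), (18, 18), (19, 19), (20, 20), (21, 21)]
```

Note the 68-point model has no forehead landmarks, so a rough forehead ROI would have to be estimated from the box above the eyebrow points.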
|
AttributeError: module 'numexpr' has no attribute '__version__'

Trying to import some modules written below:

import numpy as np
import os.path
import pandas as pd
import math
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D

However I get an AttributeError: module 'numexpr' has no attribute '__version__', which I don't know how to solve. I've already tried to uninstall and install numpy. I've added the full error message below; I apologize if it's a bit lengthy.

AttributeError                            Traceback (most recent call last)
<ipython-input-1-c0202e9c9cc8> in <module>()
      1 import numpy as np
      2 import os.path
----> 3 import pandas as pd
      4 import math
      5 import matplotlib.pyplot as plt

~\Anaconda3\lib\site-packages\pandas\__init__.py in <module>()
     40 import pandas.core.config_init
     41
---> 42 from pandas.core.api import *
     43 from pandas.core.sparse.api import *
     44 from pandas.stats.api import *

~\Anaconda3\lib\site-packages\pandas\core\api.py in <module>()
      8 from pandas.core.dtypes.missing import isnull, notnull
      9 from pandas.core.categorical import Categorical
---> 10 from pandas.core.groupby import Grouper
     11 from pandas.io.formats.format import set_eng_float_format
     12 from pandas.core.index import (Index, CategoricalIndex, Int64Index,

~\Anaconda3\lib\site-packages\pandas\core\groupby.py in <module>()
     44 from pandas.core.base import (PandasObject, SelectionMixin, GroupByError,
     45                               DataError, SpecificationError)
---> 46 from pandas.core.index import (Index, MultiIndex,
     47                                CategoricalIndex, _ensure_index)
     48 from pandas.core.categorical import Categorical

~\Anaconda3\lib\site-packages\pandas\core\index.py in <module>()
      1 # flake8: noqa
----> 2 from pandas.core.indexes.api import *
      3 from pandas.core.indexes.multi import _sparsify

~\Anaconda3\lib\site-packages\pandas\core\indexes\api.py in <module>()
----> 1 from pandas.core.indexes.base import (Index, _new_Index,  # noqa
      2                                       _ensure_index, _get_na_value,
      3                                       InvalidIndexError)
      4 from pandas.core.indexes.category import CategoricalIndex  # noqa
      5 from pandas.core.indexes.multi import MultiIndex  # noqa

~\Anaconda3\lib\site-packages\pandas\core\indexes\base.py in <module>()
     50 import pandas.core.algorithms as algos
     51 from pandas.io.formats.printing import pprint_thing
---> 52 from pandas.core.ops import _comp_method_OBJECT_ARRAY
     53 from pandas.core.strings import StringAccessorMixin
     54 from pandas.core.config import get_option

~\Anaconda3\lib\site-packages\pandas\core\ops.py in <module>()
     17 from pandas import compat
     18 from pandas.util._decorators import Appender
---> 19 import pandas.core.computation.expressions as expressions
     20
     21 from pandas.compat import bind_method

~\Anaconda3\lib\site-packages\pandas\core\computation\__init__.py in <module>()
      8 try:
      9     import numexpr as ne
---> 10     ver = ne.__version__
     11     _NUMEXPR_INSTALLED = ver >= LooseVersion(_MIN_NUMEXPR_VERSION)
     12

AttributeError: module 'numexpr' has no attribute '__version__'
|
I experienced the same problem a while back and solved it by doing the following in Anaconda:

pip uninstall -y numpy
pip uninstall -y setuptools
pip install setuptools
pip install numpy

If you are using Anaconda3, try the same thing using pip3.
|
What is the optimal way to create a new column in Pandas dataframe based on conditions from another row?

I have a Pandas dataframe, week1_plays, in the following format. What I want to do is add a column week1_plays['distance_from_receiver'] such that for each row in the dataframe, we grab the keys gameId, playId, frameId and find the x and y position of the player with those keys and position == 'WR'. Then I'll calculate the distance from the receiver with the following function:

def get_distance(rec_x, rec_y, def_x, def_y):
    distance = np.sqrt(((def_x - rec_x)**2) + ((def_y - rec_y)**2))
    return distance

For example, using the sample provided, the row 0 input to the function would be

get_distance(91.35, 44.16, 88.89, 36.47)

The current solution I have is to use a lambda function on the dataframe as such:

week1_topReceivers['distance_from_receiver'] = week1_topReceivers.apply(
    lambda row: get_distance(
        week1_wr_position.loc[np.where((week1_topReceivers['playId'] == row['playId']) &
                                       (week1_topReceivers['frameId'] == row['frameId']) &
                                       (week1_topReceivers['gameId'] == row['frameId']))]['x'],
        week1_topReceivers.loc[np.where((week1_topReceivers['playId'] == row['playId']) &
                                        (week1_topReceivers['frameId'] == row['frameId']) &
                                        (week1_topReceivers['gameId'] == row['frameId']))]['y'],
        row['x'], row['y']),
    axis=1)

but querying the dataframe for the first two inputs takes a very long time with a large dataframe. I know there has to be a more optimal solution to this but my searches online aren't turning up any better options.

EDIT: Here is a larger sample and the expected output:

SAMPLE

x      y      o       dir     event  position  frameId  team  gameId      playId  playDirection  route
88.89  36.47  105.63  66.66   None   SS        1        home  2018090600  75      left           NaN
91.35  44.16  290.45  16.86   None   WR        1        away  2018090600  75      left           HITCH
86.31  22.01  70.12   168.91  None   FS        1        home  2018090600  75      left           NaN
73.64  28.70  103.05  219.41  None   FS        1        home  2018090600  75      left           NaN
86.48  31.12  95.90   33.36   None   MLB       1        home  2018090600  75      left           NaN
82.67  20.53  81.14   174.57  None   CB        1        home  2018090600  75      left           NaN
84.00  43.49  108.23  110.32  None   CB        1        home  2018090600  75      left           NaN
85.63  26.59  87.69   38.80   None   LB        1        home  2018090600  75      left           NaN
88.89  36.47  105.63  68.49   None   SS        2        home  2018090600  75      left           NaN
91.37  44.17  290.45  29.61   None   WR        2        away  2018090600  75      left           HITCH
86.32  22.00  70.88   119.04  None   FS        2        home  2018090600  75      left           NaN
73.64  28.70  104.57  228.17  None   FS        2        home  2018090600  75      left           NaN
86.48  31.11  101.10  30.26   None   MLB       2        home  2018090600  75      left           NaN
82.68  20.53  82.24   147.46  None   CB        2        home  2018090600  75      left           NaN
84.02  43.49  107.33  106.73  None   CB        2        home  2018090600  75      left           NaN
85.64  26.61  87.69   37.51   None   LB        2        home  2018090600  75      left           NaN
88.88  36.47  107.02  57.53   None   SS        3        home  2018090600  75      left           NaN
91.37  44.17  290.45  32.20   None   WR        3        away  2018090600  75      left           HITCH
86.33  22.00  71.88   93.49   None   FS        3        home  2018090600  75      left           NaN
73.63  28.69  104.57  227.74  None   FS        3        home  2018090600  75      left           NaN

EXPECTED OUTPUT:

x      y      o       dir     event  position  frameId  team  gameId      playId  playDirection  route  distance_from_receiver
88.89  36.47  105.63  66.66   None   SS        1        home  2018090600  75      left           NaN    8.07
91.35  44.16  290.45  16.86   None   WR        1        away  2018090600  75      left           HITCH  0.00
86.31  22.01  70.12   168.91  None   FS        1        home  2018090600  75      left           NaN    22.72
73.64  28.70  103.05  219.41  None   FS        1        home  2018090600  75      left           NaN    23.51
86.48  31.12  95.90   33.36   None   MLB       1        home  2018090600  75      left           NaN    13.92
82.67  20.53  81.14   174.57  None   CB        1        home  2018090600  75      left           NaN    25.17
84.00  43.49  108.23  110.32  None   CB        1        home  2018090600  75      left           NaN    7.38
85.63  26.59  87.69   38.80   None   LB        1        home  2018090600  75      left           NaN    18.48
88.89  36.47  105.63  68.49   None   SS        2        home  2018090600  75      left           NaN    8.09
91.37  44.17  290.45  29.61   None   WR        2        away  2018090600  75      left           HITCH  0.00
86.32  22.00  70.88   119.04  None   FS        2        home  2018090600  75      left           NaN    22.74
73.64  28.70  104.57  228.17  None   FS        2        home  2018090600  75      left           NaN    23.53
86.48  31.11  101.10  30.26   None   MLB       2        home  2018090600  75      left           NaN    13.95
82.68  20.53  82.24   147.46  None   CB        2        home  2018090600  75      left           NaN    25.19
84.02  43.49  107.33  106.73  None   CB        2        home  2018090600  75      left           NaN    7.39
85.64  26.61  87.69   37.51   None   LB        2        home  2018090600  75      left           NaN    18.47
88.88  36.47  107.02  57.53   None   SS        3        home  2018090600  75      left           NaN    8.09
91.37  44.17  290.45  32.20   None   WR        3        away  2018090600  75      left           HITCH  0.00
86.33  22.00  71.88   93.49   None   FS        3        home  2018090600  75      left           NaN    22.74
73.63  28.69  104.57  227.74  None   FS        3        home  2018090600  75      left           NaN    23.54
|
You are looking for a merge or join operation. Try something like this:

df = pd.DataFrame({'gameId': [1,1,1,1,1,1],
                   'playId': [1,1,1,1,1,1],
                   'frameId': [1,1,1,2,2,2],
                   'position': ['A','B','WR','C','WR','D'],
                   'x': [87,56,45,34,45,67],
                   'y': [25,36,47,365,25,36]})

# create a table with just the wide receiver positions:
wr = df.loc[df.position=='WR'].drop(columns='position')

# merge the wide receiver x,y values into the original table based on the keys:
df = df.merge(wr, how='outer', on=['gameId', 'playId', 'frameId'], suffixes=['', '_wr'])

# apply your function to calculate the column (avoid using apply because it's super slow)
df['dist_from_wr'] = [get_distance(x, y, x_wr, y_wr)
                      for x, y, x_wr, y_wr in zip(df.x, df.y, df.x_wr, df.y_wr)]

Note as well that you're lucky here, because your function is already vectorized (which is not always the case), so you can actually do this even more efficiently by passing entire columns as input arguments as follows:

df['dist_from_wr'] = get_distance(df.x, df.y, df.x_wr, df.y_wr)

Result:

| gameId | playId | frameId | position |   x |   y | x_wr | y_wr | dist_from_wr |
|-------:|-------:|--------:|:---------|----:|----:|-----:|-----:|-------------:|
|      1 |      1 |       1 | A        |  87 |  25 |   45 |   47 |      47.4131 |
|      1 |      1 |       1 | B        |  56 |  36 |   45 |   47 |      15.5563 |
|      1 |      1 |       1 | WR       |  45 |  47 |   45 |   47 |       0      |
|      1 |      1 |       2 | C        |  34 | 365 |   45 |   25 |     340.178  |
|      1 |      1 |       2 | WR       |  45 |  25 |   45 |   25 |       0      |
|      1 |      1 |       2 | D        |  67 |  36 |   45 |   25 |      24.5967 |
|
Python 3.8 sort - Lambda function behaving differently for lists, strings

I'm trying to sort a list of objects based on frequency of occurrence (increasing order) of characters. I'm seeing that the sort behaves differently if the list has numbers versus characters. Does anyone know why this is happening?

Below is a list of numbers sorted by frequency of occurrence:

# Sort list of numbers based on increasing order of frequency
nums = [1,1,2,2,2,3]
countMap = collections.Counter(nums)
nums.sort(key = lambda x: countMap[x])
print(nums)
# Returns correct output
[3, 1, 1, 2, 2, 2]

But if I sort a list of characters, the order of 'l' and 'o' is incorrect in the below example:

# Sort list of characters based on increasing order of frequency
alp = ['l', 'o', 'v', 'e', 'l', 'e', 'e', 't', 'c', 'o', 'd', 'e']
countMap = collections.Counter(alp)
alp.sort(key = lambda x: countMap[x])
print(alp)
# Returns below output - characters 'l' and 'o' are not in the correct sorted order
['v', 't', 'c', 'd', 'l', 'o', 'l', 'o', 'e', 'e', 'e', 'e']
# Expected output
['v', 't', 'c', 'd', 'l', 'l', 'o', 'o', 'e', 'e', 'e', 'e']
|
Sorting uses a stable sort: if two elements have the same sorting criteria, they keep their relative order/positioning (here both have a count of 2).

from collections import Counter

# Sort list of characters based on increasing order of frequency
alp = ['l', 'o', 'v', 'e', 'l', 'e', 'e', 't', 'c', 'o', 'd', 'e']
countMap = Counter(alp)
alp.sort(key = lambda x: (countMap[x], x))  # in a tie, the letter will be used to un-tie
print(alp)

['c', 'd', 't', 'v', 'l', 'l', 'o', 'o', 'e', 'e', 'e', 'e']

This fixes it by using the letter as the second criterion.

To get your exact output you can use:

# use original position as tie-breaker in case counts are identical
countMap = Counter(alp)
pos = {k: alp.index(k) for k in countMap}
alp.sort(key = lambda x: (countMap[x], pos[x]))
print(alp)

['v', 't', 'c', 'd', 'l', 'l', 'o', 'o', 'e', 'e', 'e', 'e']

See Is python's sorted() function guaranteed to be stable? or https://wiki.python.org/moin/HowTo/Sorting/ for details on sorting.
|
How do I fix a value error when using scipy.integrate odeint function? I'm an engineering student and I'm trying to figure out how to use the odeint function from the scipy.integrate module (I've only ever used ode45 in MATLAB). I'm attempting to numerically solve a simple second order mass, spring, dashpot system. Below is the code I've written (specifically I'm using Jupyter Notebook and running the latest version of Python 3):import numpy as npfrom scipy.integrate import odeintfrom matplotlib.pyplot as plt%matplotlib inline# Numerical solution to mx" + bx' + kx = f(t)# Define state vector y and its derivativedef translational(x,t,m,b,k,f): y = [x[0], x[1]] # state vector ydot = [x[1], f -b/m*x[1] - k/m*x[0]] # derivative of state vector return ydot# Parameters for the systemt = np.arange(0,10,0.01)IC = [0, 0] #[x0 v0]m = 10 # kgb = 2 # N*s/mk = 5 # N/mf = 5*np.cos(10*t)y = odeint(translational,IC,t,args=(m,b,k,f))When I execute the code it returns the following error:TypeError Traceback (most recent call last)TypeError: only size-1 arrays can be converted to Python scalarsThe above exception was the direct cause of the following exception:ValueError Traceback (most recent call last)<ipython-input-7-423018367c52> in <module> 20 k = 5 # N/m 21 f = 5*np.cos(10*t)---> 22 y = odeint(translational,IC,t,args=(m,b,k,f))~\anaconda3\lib\site-packages\scipy\integrate\odepack.py in odeint(func, y0, t, args, Dfun, col_deriv, full_output, ml, mu, rtol, atol, tcrit, h0, hmax, hmin, ixpr, mxstep, mxhnil, mxordn, mxords, printmessg, tfirst) 239 t = copy(t) 240 y0 = copy(y0)--> 241 output = _odepack.odeint(func, y0, t, args, Dfun, col_deriv, ml, mu, 242 full_output, rtol, atol, tcrit, h0, hmax, hmin, 243 ixpr, mxstep, mxhnil, mxordn, mxords,ValueError: setting an array element with a sequence.For the life of me I can't figure out what's wrong. Any help is much appreciated! Thanks.
|
f is an array of numbers, and therefore so is f - b/m*x[1] - k/m*x[0], so the return value of your function translational is not correct. Instead of attempting to precompute the values of f, what you should do is use the expression for the function in translational:

def translational(x,t,m,b,k):
    y = [x[0], x[1]]  # state vector
    f = 5*np.cos(10*t)
    ydot = [x[1], f - b/m*x[1] - k/m*x[0]]  # derivative of state vector
    return ydot

and remove f from the args parameter of the odeint function call.
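A quick scalar-only check of the corrected right-hand side, without needing scipy installed - each component of the returned derivative is now a single number, which is what odeint expects:

```python
import numpy as np

def translational(x, t, m, b, k):
    # evaluate the forcing at the current time t instead of passing in
    # a precomputed array over the whole time grid
    f = 5 * np.cos(10 * t)
    return [x[1], f - b/m*x[1] - k/m*x[0]]

# at t=0 with the question's parameters (m=10, b=2, k=5) and zero state
ydot = translational([0.0, 0.0], 0.0, 10, 2, 5)
print(ydot)
```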
|
How to build a dictionary that map nodes to its degree in networkx2.1,python3? what I try is here :def comm_deg(G): nodes = G.nodes() A=nx.adj_matrix(G) deg_dict = {} n = len(nodes) degree= A.sum(axis = 1) for i in range(n): deg_dict[nodes[i]] = degree[i,0] return deg_dictit shows that KeyError: 0, I find both using nodes[] degree[,] would occur this issuehere is the full error message:> File "/Users/shaoyupei/Desktop/code/untitled1.py", line 25, in comm_deg> deg_dict[nodes[i]] = degrees[i,0]> File "/anaconda3/lib/python3.6/site-packages/networkx/classes/reportviews.py", line 178, in __getitem__> return self._nodes[n]> KeyError: 0
|
So there's several issues here.First, there's a better way to create a dict than what you're doing. In fact it's basically already built in. G.degree is already a dict-like object so that G.degree[node] will give the degree of node.If you really want it to be a dict, the best way to do that is probablydeg_dict = dict(G.degree)Now let's look at the error you're getting. G.nodes() is not a list (it's also something dictlike). So when you set nodes=G.nodes(), then nodes isn't a list. Here nodes[0] trying to return the attributes of node 0 (and for what it's worth, if your nodes don't have any attributes nodes[node] will return an empty dict). But (I believe) 0 is not a node in your graph G. So this is the meaning of your error message.Also, as a general rule, if you ever do n=len(x) and then for i in range(n):, you almost always really want to do for name in x: or if you really need the index, you could do for i, name in enumerate(x).So if you want to use the approach you did,for i, node in nodes: deg_dist[node] = degree[i]
|
Pandas datetime filter I want to get subset of my dataframe if date is before 2022-04-22. The original df is like belowdf: date hour value0 2022-04-21 0 10 1 2022-04-21 1 12 2 2022-04-21 2 14 3 2022-04-23 0 10 4 2022-04-23 1 12 5 2022-04-23 2 14 I checked data type by df.dtypes and it told me 'date' column is 'object'.So I checked individual cell using df['date'][0] and it is datetime.date(2022, 4, 21).Also, df['date'][0] < datetime.date(2022, 4, 22) gave me 'True'However, when I wanted to apply this smaller than in whole dataframe bydf2 = df[df['date'] < datetime.date(2022, 4, 22)],it showed TypeError: '<' not supported between instances of 'str' and 'datetime.date'Why was this happening? Thanks in advance!
|
You most likely still have some string dates in one of your rows, thus the first element might be ok but a complete comparison of all values using "<" will fail. Either use timegeb's answer in the comments:

df['date'] = pd.to_datetime(df['date'])

or convert them elementwise:

import datetime
df['date'] = [datetime.datetime.strptime(d, '%Y-%m-%d') if type(d) == str else d for d in df['date']]

Both methods might fail if you have an odd string in any of your rows. In that case you can use:

def convstr2date(d):
    if type(d) == str:
        try:
            d = datetime.datetime.strptime(str(d), '%Y-%m-%d')
        except:
            d = np.datetime64('NaT')
    return d

df['date'] = [convstr2date(d) for d in df['date']]
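A minimal reproduction of the fix with a deliberately mixed column (one string, one datetime.date - the situation the error message suggests); after pd.to_datetime the whole-column comparison works:

```python
import datetime
import pandas as pd

df = pd.DataFrame({
    "date": ["2022-04-21", datetime.date(2022, 4, 23)],  # mixed str / date
    "value": [10, 14],
})
df["date"] = pd.to_datetime(df["date"])            # now a uniform datetime64 column
before = df[df["date"] < pd.Timestamp(2022, 4, 22)]
print(before["value"].tolist())
```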
|
How to get the "element name" for selenium? I have (found) a python script whose purpose is to click a certain button on a certain web page. This is the script:from selenium import webdriverfrom webdriver_manager.chrome import ChromeDriverManagerdriver = webdriver.Chrome(ChromeDriverManager().install())url = "https://url"driver.get(url)button = driver.find_element_by_class_name("name-of-the-button")button.click()As I understand, "class name" is the name of the button.My question is how do I know the class name of the specific button on that specific website? What's the proper way to get this information from?
|
What you're looking for is called the developer tools. If you're using Chrome, here's a good tutorial. Most browsers have a very similar layout.Essentialy you want to hit F12 to open up the developer tools, hit CTRL+SHIFT+C and click the button you want. On the right, the button source code should be highlighted, and you should be able to see a class= attribute.
|
Is this a correct implementation for a curried operator.add function? This blogpost proposed this implementation of a curried addition function:def addN(n): return lambda x: x + ndef plus(a, b): addA=addN(a) return addA(b)I believe my version is more correct because it uses unary functions all the way.from operator import addplus = lambda a: lambda b: add(a, b) plus(1)(2)What do you think ?
|
Your plus function is identical to the addN function from the blog post. The only differences are:You used a lambda where the blog post used a def.For some reason you used operator.add(a,b) instead of simply a+b.Usage of plus and addN will give the same results so neither one is more "correct".Note that the function naming you chose is confusing since your plus function corresponds to the addN function (rather than the plus function) from the blog post.
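A quick side-by-side check that the two spellings build the same curried adder (plain Python, no third-party code):

```python
from operator import add

def addN(n):
    return lambda x: x + n

plus = lambda a: lambda b: add(a, b)

# both return a unary function that adds its argument to the captured value
print(addN(1)(2), plus(1)(2))
```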
|
Django: Edit Function while not changing the Image data I have an edit function that I want the user to be able to edit the Picture object (tags), while keeping the old image. The form is looking for a photo, but I don't want the user to be able to change the image - just the other information. How do you pass the original image data from the picture object into the PictureForm so it validates? My view:

@csrf_protect
@login_required
def edit_picture(request, picture_id, template_name="picture/newuserpicture.html"):
    picture = get_object_or_404(Picture, id=picture_id)
    if request.user != picture.user:
        return HttpResponseForbidden()
    if request.method == 'POST':
        form = PictureForm(request.POST or None, request.FILES or None, instance=picture)
        if form.is_valid():
            form.save()
            return HttpResponseRedirect('/picture/%d/' % picture.id)
    else:
        form = PictureForm(instance=picture)
    data = {"picture": picture, "form": form}
    return render_to_response(template_name, data, context_instance=RequestContext(request))
|
I think this thread should give you a clue how to make existing fields readonly: In a Django form, how do I make a field readonly (or disabled) so that it cannot be edited? If you want to hide the picture completely and stumble across validation errors because the field is marked as required in your model definition (blank=False), another option would be to override the form's __init__ method and tweak the fields' required attribute. Something along these lines:

def __init__(self, *args, **kwargs):
    super(PictureForm, self).__init__(*args, **kwargs)
    for key in self.fields:
        self.fields[key].required = False
|
Does local GAE read and write to a local datastore file on the hard drive while it's running? I have just noticed that when I have a running instance of my GAE application, there nothing happens with the datastore file when I add or remove entries using Python code or in admin console. I can even remove the file and still have all data safe and sound in admin area and accessible from code. But when I restart my application, all data obviously goes away and I have a blank datastore. So, the question - does GAE reads all data from the file only when it starts and then deals with it in the memory, saving the data after I stop the application? Does it make any requests to the datastore file when the application is running? If it doesn't save anything to the file while it's running, then, possibly, data may be lost if the application unexpectedly stops? Please make it clear for me if you know how it works in this aspect.
|
How the datastore reads and writes its underlying files varies - the standard datastore is read on startup, and written progressively, journal-style, as the app modifies data. The SQLite backend uses a SQLite database.You shouldn't have to care, though - neither backend is designed for robustness in the face of failure, as they're development backends. You shouldn't be modifying or deleting the underlying files, either.
|
Is there any python implement of edonkey/emule I want deploy a project in google appengine to search edonkey/emule, Is there any python implement of edonkey/emule or ed2k protocol library ?
|
After 20 minutes of googling all combinations of python and edonkey/emule/ed2k and visiting all sites of all clients listed under the "eDonkey network" Wikipedia page I can say with near certainty that the answer is "No."
|
Is there a way to label the mean and median in matplotlib boxplot legend? I have the following box plot which plots some values with different mean and median values for each box; I am wondering if there is any way to label them so that they appear on the graph legend (because the current box plot plots an orange line for the median and a blue dot for the mean and it is not so clear which is which)? Also is there a way to make one legend for these subplots, instead of having a legend for each one, since they are essentially the same objects just different data?Here's a code example for one of the subplots, the other subplots are the same but have different data:fig = plt.figure()xlim = (4, 24)ylim = (0, 3700)plt.subplot(1,5,5)x_5_diff = {5: [200, 200, 291, 200, 291, 200, 291, 200, 291, 200, 291, 200, 291, 200, 291], 7: [161, 161, 179, 161, 179, 161, 179, 161, 179, 161, 179, 161, 179, 161, 179], 9: [205, 205, 109, 205, 109, 205, 109, 205, 109, 205, 109, 205, 109, 205, 109], 11: [169, 169, 95, 169, 95, 169, 95, 169, 95, 169, 95, 169, 95, 169, 95], 13: [43, 43, 70, 43, 70, 43, 70, 43, 70, 43, 70, 43, 70, 43, 70], 15: [33, 33, 39, 33, 39, 33, 39, 33, 39, 33, 39, 33, 39, 33, 39], 17: [23, 23, 126, 23, 126, 23, 126, 23, 126, 23, 126, 23, 126, 23, 126], 19: [17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17], 21: [15, 15, 120, 15, 120, 15, 120, 15, 120, 15, 120, 15, 120, 15, 120], 23: [63, 63, 25, 63, 25, 63, 25, 63, 25, 63, 25, 63, 25, 63, 25]}keys = sorted(x_5_diff)plt.boxplot([x_5_diff[k] for k in keys], positions=keys) # box-and-whisker plotplt.hlines(y = 1600, colors= 'r', xmin = 5, xmax = 23, label = "Level 1 Completed")plt.title("x = 5 enemies")plt.ylim(0,3700)plt.plot(keys, [sum(x_5_diff[k]) / len(x_5_diff[k]) for k in keys], '-o')plt.legend()plt.show()Any help would be appreciated.
|
It's a bit late, but try this:

bp = plt.boxplot([x_5_diff[k] for k in keys], positions=keys)
# You can access boxplot items using its returned dictionary
plt.legend([bp['medians'][0], bp['means'][0]], ['median', 'mean'])

(Note that bp['means'] is only populated when you pass showmeans=True to boxplot.)
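A self-contained sketch of the pattern on toy data (the Agg backend makes it runnable without a display; the data here is made up for illustration):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend, so this runs without a window
import matplotlib.pyplot as plt

data = [[1, 2, 3, 10], [2, 4, 6, 8]]
fig, ax = plt.subplots()
bp = ax.boxplot(data, showmeans=True)   # showmeans=True populates bp['means']
ax.legend([bp["medians"][0], bp["means"][0]], ["median", "mean"])
fig.savefig("legend_demo.png")
print([t.get_text() for t in ax.get_legend().get_texts()])
```

For the other part of the question - one legend shared by all subplots - call fig.legend(...) once on the figure instead of plt.legend() inside each subplot.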
|
Tensorflow 2.0 : AttributeError: module 'tensorflow' has no attribute 'matrix_band_part' While running the code tf.matrix_band_part , i get the following errorAttributeError: module 'tensorflow' has no attribute 'matrix_band_part'My tensorflow version : 2.0Any solution for this problem is needed.
|
I have found the answer, so I would like to share. The compatible version of the function for TensorFlow 2.0 is:

tf.compat.v1.matrix_band_part

(The same operation is also exposed under its new name, tf.linalg.band_part, in TF 2.x.)

Ref: https://www.tensorflow.org/api_docs/python/tf/linalg/band_part
|
Can't See Data Inside of Section Tag with Selenium I am trying to count how many buttons are on a page, and then later press them. However, to access these buttons I have to go through an iframe, some generic (div) layers, and a region (section) layer. I'm able to get through the iframe layer with

driver.switch_to.frame("iframeID")

but can't figure out how to gain access to elements within the section layers. html looks something like this:

<iframe id="iframeID" resize="" src="about:blank;" seamless="" scrolling="no" allowfullscreen="" style="height: 2135px;" xpath="1">
  #document
  <html>
    <head>...</head>
    <body>
      <section class="sectionC">
        <div class="divC">
          <button type="button" class="buttonC" data-id="1234" style="">Done</button>
        </div>
      </section>
    </body>
  </html>
</iframe>
|
It is simple to achieve with Beautiful Soup:from bs4 import BeautifulSoupsoup = BeautifulSoup(driver.page_source, 'html.parser')len(soup.find_all('button', {'type' : 'button'}))Hope this helps.
|
Python Countdown but in Year, Month, Week, Days, Hours, Minutes, Sec I would like to have my lifetime displayed in the form of a countdown. Unfortunately, Python datetime only allows days. And couldn't program a conversionthis is what i tried:#!/usr/bin/env python3import timeimport datetimefrom dateutil.relativedelta import relativedeltafrom datetime import timedeltawhile True: lebenszeit = datetime.datetime(2085,7,6) - datetime.datetime.now() jahr = str(int((lebenszeit.days)/365.25)) monate = str('%0.2d' %(int((((lebenszeit.days)*365)-int((lebenszeit.days)/365))*12))) tage = str('%0.2d' %(int(((((lebenszeit.days)/365)-int((lebenszeit.days)/365))*12)-((((lebenszeit.days)/365)-int((lebenszeit.days)/365))*12)*30))) print(jahr+"."+monate+"."+tag) i = i+1as you can see very complicated...I would like to have a countdown that should look like this ( Year, Month, Week, Days, Hours, Minutes, Secounds):68.02.04.29.07.40.44
|
Here's how I'd do it. Note that "months" is approximate, assuming 30 days per month. Using only "weeks" would be more accurate.

import datetime

lebenszeit = datetime.datetime(2085,7,6) - datetime.datetime.now()
alldays = lebenszeit.days
jahr = int(alldays/365.25)
alldays -= int(jahr * 365.25)
months = int(alldays/30.0)
alldays -= months * 30
weeks = int(alldays/7.0)
alldays -= weeks * 7
days = alldays
print(f"{jahr}.{months:02d}.{weeks:02d}.{days:02d}")
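Extending the same idea with divmod to the full Year.Month.Week.Day.Hour.Minute.Second string the question asks for (still using the approximate 365/30/7-day units, so this is a sketch rather than a calendar-exact countdown):

```python
from datetime import timedelta

def breakdown(delta):
    # approximate units: 365-day years, 30-day months, 7-day weeks
    days, secs = delta.days, delta.seconds
    years, days = divmod(days, 365)
    months, days = divmod(days, 30)
    weeks, days = divmod(days, 7)
    hours, secs = divmod(secs, 3600)
    minutes, seconds = divmod(secs, 60)
    return (f"{years}.{months:02d}.{weeks:02d}.{days:02d}."
            f"{hours:02d}.{minutes:02d}.{seconds:02d}")

print(breakdown(timedelta(days=800, hours=5, minutes=40, seconds=44)))
```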
|
Linear discriminant Analysis Sklearn I’m running LDA on a dataset and the outcome was good across all metrics. However I can’t seem to extract the top features or loadings like I can for PCA.Is anyone familiar with extracting top features / loadings from LDA when using sklearn python3?
|
try this:

import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis as LDA

X = training_input
y = training_label.ravel()
clf = LDA(n_components=1)
clf.fit(X, y)
clf.coef_
beste_Merkmal = np.argsort(clf.coef_)[0][::-1][0:25]
print('beste_Merkmal =', beste_Merkmal)
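training_input and training_label above are the asker's arrays; here is a self-contained check on synthetic data. One caveat worth adding: ranking by the raw coefficient (as above) is sign-sensitive, so ranking by the absolute coefficient is usually what you want:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = (X[:, 2] > 0).astype(int)          # only feature 2 carries signal

clf = LinearDiscriminantAnalysis().fit(X, y)
# rank features by |coefficient|; the sign depends on class ordering
ranking = np.argsort(np.abs(clf.coef_[0]))[::-1]
print(ranking[0])
```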
|
Python: Large float arithmetic for El Gamal decryption

Context
The decryption math formula for the El Gamal method is the following: m = a·b^(-k) mod p. Specifically in Python, I want to compute the following equivalent:

>>> m = (b**(-k) * a) % p

The issue in the above Python code is that the numbers inserted would overflow or result in 0.0 due to precision. Consider the following example:

>>> (15653**(-3632) * 923) % 262643
0.0

The expected answer for the above example is 152015.

More Examples

Attempts
I've tried to research a strategy to deal with this problem and found that using Python's default pow(x,y,z), which differs from math.pow(), can help. pow(x,y,z) is equivalent to x**y % z. However, I cannot use pow(x,y,z) directly. I tried to use pow(15653, -3632, 262643), but I cannot multiply the result of pow(15653, -3632) by 923 to then, as a final step, mod by 262643. In other words, instead of x**y % z, I am trying to perform (x**y * a) % z, but there is clearly a 3-parameter limit on pow(x,y,z). What can I do to compute the math formula in Python?
|
Very easily: just multiply the two, and do an explicit mod:

>>> p = 262643
>>> pow(15653, -3632, p)
86669
>>> 86669 * 923 % p
152015

Done!
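Putting the steps together for the question's numbers - note that pow with a negative exponent and a modulus (which computes the modular inverse) requires Python 3.8 or later, and the whole decryption stays in exact integer arithmetic:

```python
a, k, p = 923, 3632, 262643
# pow(base, -k, p) = modular inverse of base**k modulo p (Python 3.8+)
m = pow(15653, -k, p) * a % p
print(m)
```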
|
I have a set of points along the oval. How do I create a filled binary mask I am trying to get an filled binary mask of a contour of this image. I took a look this question SciPy Create 2D Polygon Mask; however it does not seem to like my set of data. import numpy as npfrom matplotlib.nxutils import points_inside_polynx, ny = 10, 10poly_verts = [(1,1), (5,1), (5,9),(3,2),(1,1)]# Create vertex coordinates for each grid cell...# (<0,0> is at the top left of the grid in this system)x, y = np.meshgrid(np.arange(nx), np.arange(ny))x, y = x.flatten(), y.flatten()points = np.vstack((x,y)).Tgrid = points_inside_poly(points, poly_verts)grid = grid.reshape((ny,nx))print gridI wonder if there is another way that I can try to return a binary mask or someone to explain the limitations of points_inside_polybecause it seems to end up something like this
|
I'm not sure what you're plotting at the end, but your example works for me:

import numpy as np
import matplotlib.pyplot as plt
from matplotlib.nxutils import points_inside_poly
from itertools import product, compress

pv = [(1,1),(5,1),(5,9),(3,2),(1,1)]
x, y = np.meshgrid(np.arange(10), np.arange(10))
x, y = x.flatten(), y.flatten()
xy = np.vstack((x,y)).T
grid = points_inside_poly(xy, pv)
xv, yv = zip(*pv)
xp, yp = zip(*compress(xy, grid))
plt.plot(xp, yp, 'o', color='red', label='points')
plt.plot(xv, yv, 'o', color='blue', label='vertices')
plt.xlim((0,10))
plt.ylim((0,10))
plt.legend()
plt.show()
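A note for readers on newer versions: matplotlib.nxutils was removed in later matplotlib releases, and the same boolean mask can be built with matplotlib.path.Path.contains_points. A sketch of the equivalent:

```python
import numpy as np
from matplotlib.path import Path

pv = [(1, 1), (5, 1), (5, 9), (3, 2), (1, 1)]
nx = ny = 10
x, y = np.meshgrid(np.arange(nx), np.arange(ny))
points = np.vstack((x.flatten(), y.flatten())).T

# boolean mask: True where a grid point falls inside the polygon
grid = Path(pv).contains_points(points).reshape((ny, nx))
print(grid.shape, grid.dtype)
```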
|
Django Middleware - How to edit the HTML of a Django Response object? I'm creating a custom middleware to django edit response object to act as a censor. I would like to find a way to do a kind of search and replace, replacing all instances of some word with one that I choose.I've created my middleware object, added it to my MIDDLEWARE_CLASSES in settings and have it set up to process the response. But so far, I've only found methods to add/edit cookies, set/delete dictionary items, or write to the end of the html:class CensorWare(object): def process_response(self, request, response): """ Directly edit response object here, searching for and replacing terms in the html. """ return responseThanks in advance.
|
You can simply modify the response.content string:response.content = response.content.replace("BAD", "GOOD")
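One caveat worth noting: on Python 3 / modern Django, response.content is a bytes object, so the needle and replacement must be bytes too. A standalone sketch of just the replacement step (the content string here is made up):

```python
# Django response bodies are bytes on Python 3, so use bytes patterns
content = b"<p>This is a BAD word.</p>"
censored = content.replace(b"BAD", b"GOOD")
print(censored)
```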
|
Keras model does not provide same results after converting into tensorflow-js model The Keras model performs as expected in Python, but after converting the model the results are different on the same data. I tried updating the keras and tensorflow-js versions but still the same issue.

Python code for testing:

import keras
import cv2
import numpy as np

model = keras.models.load_model("keras_model.h5")
img = cv2.imread("test_image.jpg")

def preprocessing_img(img):
    img = cv2.resize(img, (50,50))
    x = np.array(img)
    image = np.expand_dims(x, axis=0)
    return image/255

prediction_array = model.predict(preprocessing_img(img))
print(prediction_array)
print(np.argmax(prediction_array))

Results:

[[1.9591815e-16 1.0000000e+00 3.8602989e-18 3.2472009e-19 5.8910814e-11]]
1

These results are correct.

Javascript code, tfjs version:

<script type="text/javascript" src="https://cdn.jsdelivr.net/npm/@tensorflow/tfjs@0.13.5"></script>

preprocessing_img method and prediction in js:

function preprocessing_img(img) {
    let tensor = tf.fromPixels(img)
    const resized = tf.image.resizeBilinear(tensor, [50, 50]).toFloat()
    const offset = tf.scalar(255.0);
    const normalized = tf.scalar(1.0).sub(resized.div(offset));
    const batched = normalized.expandDims(0)
    return batched
}

const pred = model.predict(preprocessing_img(imgEl)).dataSync()
const class_index = tf.argMax(pred);

In this case the results are not the same, and the last index in the pred array is 1 90% of the time. I think there is something wrong with the preprocessing method of the image in javascript, since I am not an expert in javascript - or am I missing something in the javascript part?
|
It has to do with the image used for the prediction. The image needs to have completely loaded before the prediction:

imgEl.onload = function () {
    const pred = model.predict(preprocessing_img(imgEl)).dataSync()
    const class_index = tf.argMax(pred);
}

Note also that the js preprocessing computes 1 - x/255 (tf.scalar(1.0).sub(resized.div(offset))) whereas the Python code uses x/255; that normalization mismatch will also change the predictions.
|
Is there a way to make Class[key] work to extract from a static container? I'm trying to build a class that maintains an internal list of all objects of that class and can look them up by ID. While I could use myClass.get(objectID) to get the objects, I would really prefer to use myClass[objectID] but this throws TypeError: 'type' object is not subscriptable. Is there any permutation of the sample case below that would work?class Bucket(object): bucket = set() def __init__(self, id, name): self.id = id self.name = name Bucket.bucket.add(self) def get(id): return Bucket.__getitem__(None, id) def __getitem__(self, id): for i in Bucket.bucket: if i.id == id: return i.name return Noneb = Bucket("foo", "bar")print(1, Bucket.get("foo"))print(2, b["foo"])print(3, Bucket["foo"])1 bar2 barTraceback (most recent call last): File "{snip}\bucketTest.py", line 22, in <module> print(3, Bucket["foo"])TypeError: 'type' object is not subscriptableEDITWith a hint in the direction of metaclasses, I've come up with this. As I have honestly never stumbled across them before, I have to ask: am I doing this right? Am I missing some fundamental bit, or is this vaguely correct? How could I improve it?class MetaBucket(type): def __init__(cls, name, bases, dct): cls.bucket = set() def __getitem__(cls, key): for i in cls.bucket: if i.id == key: return i.nameclass Bucket(metaclass = MetaBucket): def __init__(self, id, name): self.id = id self.name = name Bucket.bucket.add(self)b = Bucket("foo", "bar")print(3, Bucket["foo"])
|
With respect to your EDIT that uses a metaclass, I'd suggest using a dict instead of a set for the bucket attribute since it makes things easier and more succinct:

class MetaBucket(type):
    def __init__(cls, name, bases, dct):
        cls.bucket = {}
    def __getitem__(cls, id):
        return cls.bucket[id]
    def __setitem__(cls, id, name):
        cls.bucket[id] = name

class Bucket(metaclass=MetaBucket):
    def __init__(self, id, name):
        self.bucket[id] = name

b = Bucket("foo", "bar")
print(3, Bucket["foo"])       # -> 3 bar
print(4, Bucket["nonesuch"])  # -> KeyError: 'nonesuch'
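On Python 3.7+ there is also a lighter-weight route that avoids the metaclass entirely: the __class_getitem__ hook. It was added for typing generics, but any class-level subscription like Bucket["foo"] is routed through it. A sketch (not from the original answer):

```python
class Bucket:
    bucket = {}

    def __init__(self, id, name):
        Bucket.bucket[id] = name

    def __class_getitem__(cls, id):
        # invoked for Bucket[id]; intended for generics, but works for
        # any class-level lookup
        return cls.bucket[id]

Bucket("foo", "bar")
print(Bucket["foo"])
```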
|
Unable to iterate over the "tr" element of a table using beautiful soup from bs4 import BeautifulSoupimport reimport urllib2url = 'http://sports.yahoo.com/nfl/players/5228/gamelog'page = urllib2.urlopen(url)soup = BeautifulSoup(page)table = soup.find(id='player-game_log-season').find('tbody').find_all('tr')for rows in tr: data = raws.find_all("td") print dataI'm trying to go through the table for a certain player's stats last year and grab their stats, however, I get a AttributeError: 'NoneType' object has no attribute 'find_all' When I try to run this code. I'm new to beautiful soup so I'm not really sure what the problem is. Also if anyone has any good tutorials to recommend me that would be awesome. Reading through the documentation is sort of confusing as I am fairly new to programming.
|
There's no tbody in the table under div#player-game_log-season, and your code has some typos:

raws -> rows
table -> tr

tr = soup.find(id='player-game_log-season').find_all('tr')
for rows in tr:
    data = rows.find_all("td")
    print data
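A self-contained sketch of the same pattern against a stub table (Python 3 print syntax; html.parser keeps the markup exactly as written, without inserting a tbody):

```python
from bs4 import BeautifulSoup

html = """<table id="player-game_log-season">
  <tr><td>Week 1</td><td>100</td></tr>
  <tr><td>Week 2</td><td>85</td></tr>
</table>"""

soup = BeautifulSoup(html, "html.parser")
rows = soup.find(id="player-game_log-season").find_all("tr")
data = [[td.get_text() for td in row.find_all("td")] for row in rows]
print(data)
```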
|
Pandas Dataframe: New Column that uses Country if Province is empty, else use the Province The meat of what I'm trying to do can be seen at the bottom.Here's the dataset I'm using: https://github.com/CSSEGISandData/COVID-19/blob/master/csse_covid_19_data/csse_covid_19_time_series/time_series_covid19_confirmed_global.csvWhat I want is to add to ['Names'] the data from ['Province/State'] if it isn't empty, else the data from ['Country/Region'].I'm building an interactive heat map using plotly, and it works. But the problem is, there are multiple markers named "Canada" (for each of the states there) and Greenland is named "Denmark," because in the CSV file, "Greenland" is under "State/Province" and "Denmark" is under "Country/Region."import pandas as pdimport plotly.graph_objects as goimport requestsfrom datetime import date, timedeltayesterday = date.today() - timedelta(days=1)confirmed_url = 'https://raw.githubusercontent.com/CSSEGISandData/COVID-19/master/csse_covid_19_data/csse_covid_19_time_series/time_series_covid19_confirmed_global.csv'deaths_url = 'https://raw.githubusercontent.com/CSSEGISandData/COVID-19/master/csse_covid_19_data/csse_covid_19_time_series/time_series_covid19_deaths_global.csv'yesterdays_date = yesterday.strftime('%-m/%d/%y') confirmed = pd.read_csv(confirmed_url)deaths = pd.read_csv(deaths_url)confirmed.iloc[0]['Country/Region'] #Testfor place in deaths[['Province/State','Country/Region']]: if place is float: deaths_names.append('Country/Region') else: deaths_names.append('Province/State')confirmed['Name'] = df(confirmed_names)deaths['Name'] = df(deaths_names)
|
This worked:

def names_column(frame, lst):
    # Makes a new column called Name
    for i in range(len(frame)):
        if type(frame['Province/State'][i]) is str:
            lst.append(frame['Province/State'][i])
        else:
            lst.append(frame['Country/Region'][i])
    frame['Name'] = lst

names_column(confirmed, confirmed_names)
names_column(deaths, deaths_names)
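A more idiomatic (and much faster) equivalent is to let pandas do the per-row fallback with fillna - empty Province/State cells come out of the CSV as NaN, so:

```python
import numpy as np
import pandas as pd

# tiny stand-in for the Johns Hopkins frame
df = pd.DataFrame({
    "Province/State": [np.nan, "Greenland", np.nan],
    "Country/Region": ["Italy", "Denmark", "Spain"],
})
# use Province/State where present, otherwise fall back to Country/Region
df["Name"] = df["Province/State"].fillna(df["Country/Region"])
print(df["Name"].tolist())
```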
|
in class, pass from method to method the values of local variables I have a problem to pass from method to method the values of local variables. I didn't put them in the constructor because I would like some processing to be done in the methodsclass Myclass: def __init__(self,nbr1,nbr2): self.nbr1 = nbr1 self.nbr2 = nbr2 def operation1(self): nbr3 =nbr1+nbr2 return nbr3 #I would like to pass the nbr3 value in the operation2 function# for some treatments def operation2(self): nbr4= nbr3*2 return nbr4, nbr3 #and return value of def operation2 in showMe function def showMe(self,param): showresult = param() print(f'this a result : {showresult[0]} and another result {showresult[1]}')nbr1 = 5nbr2 = 7result = Myclass(nbr1,nbr2)result.showMe(result.operation2)but I have an error nbr3 is not definedthank for helps
|
You need to actually call operation1() somewhere (and note operation1 itself also needs self. to reach the attributes):

def operation1(self):
    return self.nbr1 + self.nbr2   # self. is required here too

def operation2(self):
    nbr3 = self.operation1()
    nbr4 = nbr3 * 2
    return nbr4, nbr3

Or set the instance variable:

def operation1(self):
    self.nbr3 = self.nbr1 + self.nbr2

def operation2(self):
    nbr4 = self.nbr3 * 2
    return nbr4, self.nbr3

...
result = Myclass(nbr1, nbr2)
result.operation1()
result.showMe(result.operation2)

Or remove operation1 and set the instance variable in the constructor:

class Myclass:
    def __init__(self, nbr1, nbr2):
        self.nbr1 = nbr1
        self.nbr2 = nbr2
        self.nbr3 = self.nbr1 + self.nbr2

    def operation2(self):
        nbr4 = self.nbr3 * 2
        return nbr4, self.nbr3
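Putting the last variant together into a runnable whole, with the question's showMe and sample values:

```python
class Myclass:
    def __init__(self, nbr1, nbr2):
        self.nbr1 = nbr1
        self.nbr2 = nbr2
        self.nbr3 = nbr1 + nbr2              # computed once, shared via self

    def operation2(self):
        return self.nbr3 * 2, self.nbr3

    def showMe(self, param):
        showresult = param()
        print(f'this a result : {showresult[0]} and another result {showresult[1]}')

result = Myclass(5, 7)
result.showMe(result.operation2)
```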
|
Does Google Drive API search responds not only files/folder metadata but also the matched content w.r.t query in the search response? response = DRIVE.files().list(q="fullText contains 'what is python?',spaces='drive',fields='*',pageToken=page_token).execute()from the above sample Python code,what extra param that I can pass or extract to get the files with the matched content as well with them?Example response(current){'kind': 'drive#file', 'id': '1acVspMMcliVE8M6WzNL14sdvXYT-dScw', 'name': '4590764611082297754.txt', 'mimeType': 'text/plain',.....}So can this json response can also include the matched content from the query and also the score in any form.Please let me know if this feature is available or can be coded/extracted somehowThanks
|
The q parameter of the files.list method allows you to search for things like files with a specific title or file type. The Google Drive API is just a file storage system; it does not have the power to open a file and return the matched passages. There is no method for returning the matched content of a file in the search response.
|
Python - find items with multiple occurences and replace with mean For df:sample type countsample1 red 5sample1 red 7sample1 green 3sample2 red 2sample2 green 8sample2 green 8sample2 green 2sample3 red 4sample3 blue 5I would like to find items in "type" with multiple occurences and replace the "count" for each of those with the mean count. So expected output: sample type countsample1 red 6sample1 green 3sample2 red 2sample2 green 6sample3 red 4sample3 blue 5Sonon_uniq = df.groupby("sample")["type"].value_counts()non_uniq = non_uniq.where(non_uniq > 1).dropna()finds the "type" with multiple occurences but I don't know how to match it in df
|
I believe you can simplify the solution to a plain mean over all groups, because the mean of a single value is just that value:

df = df.groupby(["sample","type"], as_index=False, sort=False)["count"].mean()
print (df)
    sample   type  count
0  sample1    red      6
1  sample1  green      3
2  sample2    red      2
3  sample2  green      6
4  sample3    red      4
5  sample3   blue      5

Your solution is possible to adapt like this:

m = df.groupby(["sample", "type"])['type'].transform('size') > 1
df1 = df[m].groupby(["sample","type"], as_index=False, sort=False)["count"].mean()
df = pd.concat([df1, df[~m]], ignore_index=True)
print (df)
    sample   type  count
0  sample1    red      6
1  sample2  green      6
2  sample1  green      3
3  sample2    red      2
4  sample3    red      4
5  sample3   blue      5
|
Iterating to produce a unique list This is the initial code:word_list = ['cat','dog','rabbit']letter_list = [ ]for a_word in word_list: for a_letter in a_word: letter_list.append(a_letter)print(letter_list)I need to modify it to produce a list of unique letters.Could somebody please advise how to do this without using set()The result should be like this> ['c', 'a', 't', 'd', 'o', 'g', 'r', 'b', 'i']
|
The only problem that I can see is that you have not checked if the letter is already present in the list or not. Try this:

>>> word_list = ['cat', 'dog', 'rabbit']
>>> letter_list = []
>>> for a_word in word_list:
        for a_letter in a_word:
            if a_letter not in letter_list:
                letter_list.append(a_letter)

>>> print letter_list
['c', 'a', 't', 'd', 'o', 'g', 'r', 'b', 'i']
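On Python 3.7+, a one-liner that avoids the repeated `not in` scan is dict.fromkeys, since dict keys are unique and preserve insertion order (so it behaves like an ordered set, unlike set()):

```python
word_list = ['cat', 'dog', 'rabbit']
# dict keys are unique and keep first-seen order (Python 3.7+)
letter_list = list(dict.fromkeys(letter for word in word_list for letter in word))
print(letter_list)
```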
|
useEffect fires and print statements run but no actual axios.post call runs reactjs

I have a useEffect function that is firing because yearsBackSettings changed, and the console.log statements inside useEffect fire too:

    useEffect(() => {
      console.log("something changed")
      console.log(yearsBackSettings)
      if (userId) {
        const user_profile_api_url = BASE_URL + '/users/' + userId
        const request_data = {
          searches: recentSearches,
          display_settings: displaySettings,
          years_back_settings: yearsBackSettings
        }
        console.log("running user POST")
        console.log(request_data)
        axios.post(user_profile_api_url, request_data)
          .then(response => {
            console.log("user POST response")
            console.log(response)
          })
      }
    }, [recentSearches, displaySettings, yearsBackSettings])

As the image shows, changing yearsBackSettings causes this to run, which SHOULD make a POST request with all the new settings. However, for some reason nothing happens on the server except the stock search running:

    the last updated time for stock ibm before save: 08/25/2022 08:13:30
    stock was updated within the last 5 minutes...no need to make an api call
    the last updated time for stock ibm after save: 08/25/2022 08:13:30
    [25/Aug/2022 08:17:25] "POST /users/114260670592402026255 HTTP/1.1" 200 9
    [25/Aug/2022 08:17:25] "GET /dividends/ibm/3/5 HTTP/1.1" 200 4055
    the last updated time for stock ibm before save: 08/25/2022 08:13:30
    stock was updated within the last 5 minutes...no need to make an api call
    the last updated time for stock ibm after save: 08/25/2022 08:13:30
    [25/Aug/2022 08:17:26] "GET /dividends/ibm/27/5 HTTP/1.1" 200 8271
    the last updated time for stock ibm before save: 08/25/2022 08:13:30
    stock was updated within the last 5 minutes...no need to make an api call
    the last updated time for stock ibm after save: 08/25/2022 08:13:30
    [25/Aug/2022 08:18:11] "GET /dividends/ibm/27/70 HTTP/1.1" 200 14734

The POST to /users there was the initial one that runs when the user loads. If I sign out and sign back in, I lose the 70 years in the second component: it shows 27 years and 5 years, and the 70 is lost because the /users POST didn't run.

I have the following React main component:

    import React, {useState, useEffect} from 'react';
    import { connect } from 'react-redux';
    import axios from 'axios';

    const SearchPage = ({userId}) => {
      const [recentSearches, setRecentSearches] = useState([DEFAULT_STOCK]);
      const [dividendsYearsBack, setDividendsYearsBack] = useState('3');
      const [debouncedDividendYearsBack, setDebouncedDividendYearsBack] = useState('3');
      const [earningsYearsBack, setEarningsYearsBack] = useState('5');
      const [debouncedEarningsYearsBack, setDebouncedEarningsYearsBack] = useState('5');
      const [errorMessage, setErrorMessage] = useState('');
      const [displaySettings, setDisplaySettings] = useState([
        {setting_name: 'showYieldChange', visible: true},
        {setting_name: 'showAllDividends', visible: true},
        {setting_name: 'showAllEarnings', visible: true},
      ])
      const [yearsBackSettings, setYearsBackSettings] = useState([
        {section: 'dividendsYearsBack', years_back: 3},
        {section: 'earningsYearsBack', years_back: 5}
      ])

      const onTermUpdate = (term) => {
        const trimmed = term.trim()
        setTerm(trimmed);
      }

      debounceTerm(setDebouncedTerm, term, 1500);
      debounceTerm(setDebouncedDividendYearsBack, dividendsYearsBack, 1500);
      debounceTerm(setDebouncedEarningsYearsBack, earningsYearsBack, 1500);

      useEffect(() => {runSearch()}, [debouncedTerm]);

      useEffect(() => {
        // alert(dividendsYearsBack)
        if (dividendsYearsBack !== '' && earningsYearsBack !== '') {
          runSearch();
        }
      }, [debouncedDividendYearsBack, debouncedEarningsYearsBack])

      useEffect(() => {
        const yearsSettingsCopy = Object.assign(yearsBackSettings);
        const dividendsYearsBackSetting = yearsSettingsCopy.find((dict) => dict.section == 'dividendsYearsBack');
        dividendsYearsBackSetting.years_back = dividendsYearsBack;
        const earningsYearsBackSetting = yearsSettingsCopy.find((dict) => dict.section == 'earningsYearsBack');
        earningsYearsBackSetting.years_back = earningsYearsBack;
        setYearsBackSettings(yearsSettingsCopy);
      }, [dividendsYearsBack, earningsYearsBack])

      useEffect(() => {
        const dividendsYearsBackSetting = yearsBackSettings.find((dict) => dict.section == 'dividendsYearsBack');
        setDividendsYearsBack(dividendsYearsBackSetting.years_back);
        const earningsYearsBackSetting = yearsBackSettings.find((dict) => dict.section == 'earningsYearsBack');
        setEarningsYearsBack(earningsYearsBackSetting.years_back);
      }, [yearsBackSettings])

      useEffect(() => {
        console.log("user id changed")
        if (userId) {
          const user_profile_api_url = BASE_URL + '/users/' + userId
          axios.get(user_profile_api_url, {})
            .then(response => {
              // console.log(response)
              const recent_searches_response = response.data.searches;
              const new_recent_searches = [];
              recent_searches_response.map(dict => {
                new_recent_searches.push(dict.search_term)
              })
              setRecentSearches(new_recent_searches);
              setDisplaySettings(response.data.display_settings);
              setYearsBackSettings(response.data.years_back_settings);
            })
            .catch((error) => {
              console.log("error in getting user profile: ", error.message)
            })
        }
      }, [userId])

      useEffect(() => {
        console.log("something changed")
        console.log(yearsBackSettings)
        if (userId) {
          const user_profile_api_url = BASE_URL + '/users/' + userId
          const request_data = {
            searches: recentSearches,
            display_settings: displaySettings,
            years_back_settings: yearsBackSettings
          }
          console.log("running user POST")
          console.log(request_data)
          axios.post(user_profile_api_url, request_data)
            .then(response => {
              console.log("user POST response")
              console.log(response)
            })
        }
      }, [recentSearches, displaySettings, yearsBackSettings])

      return (
        <div className="ui segment">
          {renderMainContent()}
        </div>
      )
    }

    const mapStateToProps = state => {
      return { userId: state.auth.userId };
    };

    export default connect(mapStateToProps)(SearchPage);
    // export default SearchPage;

The yearsBackSettings is showing up changed to 27 and 70 (from the picture), but the POST request doesn't fire. How can I get these settings to save when the settings change? The issue is that the POST doesn't run when I update the years-back settings:

    useEffect(() => {
      console.log("running user profile post");
      const user_profile_api_url = BASE_URL + '/users/' + userId
      const request_data = {
        searches: recentSearches,
        display_settings: displaySettings,
        years_back_settings: yearsBackSettings
      }
      axios.post(user_profile_api_url, request_data)
        .then(response => {
          console.log(response)
        })
    }, [recentSearches, displaySettings, yearsBackSettings])

This doesn't run when yearsBackSettings changes. I am logging yearsBackSettings to the console and it has certainly changed, but the POST request to the user profile doesn't fire. I think the issue is here:

    useEffect(() => {
      const dividendsYearsBackSetting = yearsBackSettings.find((dict) => dict.section == 'dividendsYearsBack');
      dividendsYearsBackSetting.years_back = dividendsYearsBack;
      const earningsYearsBackSetting = yearsBackSettings.find((dict) => dict.section == 'earningsYearsBack');
      earningsYearsBackSetting.years_back = earningsYearsBack;
      setYearsBackSettings(yearsBackSettings);
    }, [dividendsYearsBack, earningsYearsBack])

As an example, I tried any useEffect with yearsBackSettings as a dependency, and it never works. I have changed the settings a few times and the alert does not fire:

    useEffect(() => {
      alert("years back settings changed")
    }, [yearsBackSettings])
|
The issue is somewhere in your Python server code. In your console you can see that you are actually logging a response object with a 200 response code, meaning your server doesn't crash during the actual request. There might be a problem in your server-side logging causing the request to not show up; I would look at that first.
|
Optimization variables of a neural network model with simulated annealing

I implement an MLP neural network model on the data. To optimize 4 variables, a function based on the MLP model is defined, and simulated annealing is run on this function. I don't know why I get this error (attached below).

Neural network code:

    # mlp for regression
    from numpy import sqrt
    from pandas import read_csv
    from sklearn.model_selection import train_test_split
    from tensorflow.keras import Sequential
    from tensorflow.keras.layers import Dense
    import tensorflow
    from tensorflow import keras
    from matplotlib import pyplot
    from keras.layers import Dropout
    from tensorflow.keras import regularizers

    # determine the number of input features
    n_features = X_train.shape[1]
    # define model
    model = Sequential()
    model.add(Dense(150, activation='tanh', kernel_initializer='zero',
                    kernel_regularizer=regularizers.l2(0.001), input_shape=(n_features,)))  # relu/softmax/tanh
    model.add(Dense(100, activation='tanh', kernel_initializer='zero',
                    kernel_regularizer=regularizers.l2(0.001)))
    model.add(Dense(50, activation='tanh', kernel_initializer='zero',
                    kernel_regularizer=regularizers.l2(0.001)))
    model.add(Dropout(0.0))
    model.add(Dense(1))
    # compile the model
    opt = keras.optimizers.Adam(learning_rate=0.001)
    # opt = tensorflow.keras.optimizers.RMSprop(learning_rate=0.001, rho=0.9, momentum=0.0, epsilon=1e-07, centered=False, name="RMSprop")
    model.compile(optimizer=opt, loss='mse')
    # fit the model
    history = model.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=100, batch_size=10, verbose=0, validation_split=0.3)
    # evaluate the model
    error = model.evaluate(X_test, y_test, verbose=0)
    print('MSE: %.3f, RMSE: %.3f' % (error, sqrt(error)))
    # plot learning curves
    pyplot.title('Learning Curves')
    pyplot.xlabel('Epoch')
    pyplot.ylabel('Cross Entropy')
    pyplot.plot(history.history['loss'], label='train')
    pyplot.plot(history.history['val_loss'], label='val')
    pyplot.legend()
    pyplot.show()

Function code:

    def objective_function(X):
        wob = X[0]
        torque = X[1]
        RPM = X[2]
        pump = X[3]
        input = [wob, torque, RPM, 0.00017, 0.027, pump, 0, 0.5, 0.386, 0.026, 0.0119, 0.33, 0.83, 0.48]
        input = pd.DataFrame(input)
        obj = model.predict(input)
        return obj

Simulated annealing for optimization:

    import time
    import random
    import math
    import numpy as np

    ## custom section
    initial_temperature = 100
    cooling = 0.8  # cooling coef.
    number_variables = 4
    upper_bounds = [1, 1, 1, 1]
    lower_bounds = [0, 0, 0, 0]
    computing_time = 1  # seconds

    ## simulated annealing algorithm
    ## 1. generate an initial solution randomly
    initial_solution = np.zeros((number_variables))
    for v in range(number_variables):
        initial_solution[v] = random.uniform(lower_bounds[v], upper_bounds[v])
    current_solution = initial_solution
    best_solution = initial_solution
    n = 1  # no of solutions accepted
    best_fitness = objective_function(best_solution)
    current_temperature = initial_temperature  # current temperature
    start = time.time()
    no_attemps = 100  # number of attempts at each level of temperature
    record_best_fitness = []

    for i in range(9999999):
        for j in range(no_attemps):
            for k in range(number_variables):
                ## 2. generate a candidate solution y randomly based on solution x
                current_solution[k] = best_solution[k] + 0.1*(random.uniform(lower_bounds[k], upper_bounds[k]))
                current_solution[k] = max(min(current_solution[k], upper_bounds[k]), lower_bounds[k])  # repair the solution respecting the bounds
            ## 3. check if y is better than x
            current_fitness = objective_function(current_solution)
            E = abs(current_fitness - best_solution)
            if i == 0 and j == 0:
                EA = E
            if current_fitness < best_fitness:
                p = math.exp(-E/(EA*current_temperature))  # make a decision to accept the worse solution or not
                ## 4. make a decision whether r < p
                if random.random() < p:
                    accept = True   # this worse solution is accepted
                else:
                    accept = False  # this worse solution is not accepted
            else:
                accept = True  # accept better solution
            ## 5. make a decision whether the stop condition of the inner loop is met
            if accept == True:
                best_solution = current_solution  # update the best solution
                best_fitness = objective_function(best_solution)
                n = n + 1  # count the solutions accepted
                EA = (EA*(n-1)+E)/n  # accept EA
        print('iteration : {}, best_solution:{}, best_fitness:{}'.format(i, best_solution, best_fitness))
        record_best_fitness.append(best_fitness)
        ## 6. decrease the temperature
        current_temperature = current_temperature * cooling
        ## 7. stop condition of outer loop is met
        end = time.time()
        if end-start >= computing_time:
            break

The error picture:
|
It's your input shape: in the MLP neural network the model's input shape is [None, 14], but your function's input is [14, 1], so you need to transpose it.

    def objective_function(X):
        wob = X[0]
        torque = X[1]
        RPM = X[2]
        pump = X[3]
        input = [wob, torque, RPM, 0.00017, 0.027, pump, 0, 0.5, 0.386, 0.026, 0.0119, 0.33, 0.83, 0.48]
        input = pd.DataFrame(input)
        input = input.T
        obj = model.predict(input)
        return obj
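The mismatch can be seen without TensorFlow or pandas at all; a plain-Python sketch with placeholder feature values:

```python
# Plain-Python illustration of the shape fix (feature values are
# placeholders): wrapping 14 scalars one per row gives a (14, 1) table,
# which is what pd.DataFrame(input) builds; transposing yields (1, 14),
# the single sample an input_shape=(14,) model expects.
features = [0.1, 0.2, 0.3, 0.00017, 0.027, 0.4, 0, 0.5,
            0.386, 0.026, 0.0119, 0.33, 0.83, 0.48]

as_column = [[x] for x in features]           # 14 rows x 1 column
as_row = [list(t) for t in zip(*as_column)]   # 1 row x 14 columns

print(len(as_column), len(as_column[0]))  # 14 1
print(len(as_row), len(as_row[0]))        # 1 14
```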
|
Issue with load_img - Error - FileNotFoundError: [Errno 2] No such file or directory

    for i in os.listdir("D:/Deep Learning/vgg16_images"):
        print(i)

    image = []
    for i in os.listdir(r'D:\Deep Learning\vgg16_images'):
        img = load_img(i, target_size=(224, 224))
        img = img_to_array(img)
        img = img.reshape((1, img.shape[0], img.shape[1], img.shape[2]))
        # prepare the image for the VGG model
        img = preprocess_input(img)
        image.append(img)

The for loop at the top prints 4 images: 1) bus.jpg 2) mug.jpg 3) schoolbus.jpg 4) traffic.jpg

The next section of the code, at load_img, throws the error:

    FileNotFoundError: [Errno 2] No such file or directory: 'bus.jpg'

The path and image name extension are all correct. The same code works if I remove the image "bus", and the issue is not with that particular image; if I add any other image it throws the error. The pattern I saw was that once I run the code on x number of images and then rerun the code after adding new images, it throws the error. I tried resolving it by restarting the kernel and closing and refreshing the folders as well.
|
My apologies, this question should not have been there in the first place; I realized it later. Those days when the brain stops working completely... The mentioned directory and the load_img paths are different. load_img was working for all images other than bus.jpg because those images were present in both folder paths. Not deleting the question though; someday someone might have the same bad day as well.
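As a general guard against this class of error: os.listdir returns bare filenames, so the directory has to be joined back on before calling load_img. A small self-contained sketch, using a temporary folder in place of the real image directory:

```python
import os
import tempfile

# os.listdir returns bare filenames with no directory component, so
# opening them only works if the current working directory happens to be
# that folder. Joining the directory back on makes the path unambiguous.
folder = tempfile.mkdtemp()  # stand-in for the real image directory
open(os.path.join(folder, "bus.jpg"), "w").close()

names = os.listdir(folder)
print(names)  # ['bus.jpg'] -- no directory component

full_paths = [os.path.join(folder, name) for name in names]
print(all(os.path.exists(p) for p in full_paths))  # True
```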
|
pandas pivot data Cols to rows and rows to cols

I am using python and pandas and have tried a variety of attempts to pivot the following (switch the rows and columns).

Example (column A is unique):

         A      B  C   D    E ... (and so on)
    [0]  apple  2  22  222
    [1]  peach  3  33  333
    [N]  ... and so on

And I would like to see:

    ?  ?      ?      ... and so on
    A  apple  peach
    B  2      3
    C  22     33
    D  222    333
    E  ... and so on

I am OK if the columns are named after the values in column "A", and if the first column needs a name, let's call it "name":

    name  apple  peach ...
    B     2      3
    C     22     33
    D     222    333
    E     ... and so on
|
Think you're wanting transpose here.

    df = pd.DataFrame({'A': {0: 'apple', 1: 'peach'}, 'B': {0: 2, 1: 3}, 'C': {0: 22, 1: 33}})
    df = df.T
    print(df)

           0      1
    A  apple  peach
    B      2      3
    C     22     33

Edit for comment: I would probably reset the index and then use df.columns to update the column names with a list. You may want to reset the index again at the end as needed.

    df.reset_index(inplace=True)
    df.columns = ['name', 'apple', 'peach']
    df = df.iloc[1:, :]
    print(df)

      name apple peach
    1    B     2     3
    2    C    22    33
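Since column A already holds the intended row labels, another option is to promote it to the index before transposing, so the transposed columns are named after the fruit directly (same data as above):

```python
import pandas as pd

# Promote column A to the index before transposing, so the transposed
# frame's columns are named after the values of A ('apple', 'peach').
df = pd.DataFrame({'A': ['apple', 'peach'], 'B': [2, 3], 'C': [22, 33]})

t = df.set_index('A').T
t.columns.name = None                       # drop the leftover 'A' axis label
out = t.rename_axis('name').reset_index()   # index B/C becomes a 'name' column
print(out)
#   name  apple  peach
# 0    B      2      3
# 1    C     22     33
```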
|
how to convert a pandas dataframe to a list of dictionaries in python?

I have a dataframe like this:

    data = {'id': [1,1,2,2,2,3], 'value': ['a','b','c','d','e','f']}
    df = pd.DataFrame(data, columns=['id', 'value'])

I want to convert it to a list of dictionaries like:

    df_dict = [{'id': 1, 'value': ['a','b']}, {'id': 2, 'value': ['c','d','e']}, {'id': 3, 'value': ['f']}]

And then eventually insert this list df_dict into another dictionary:

    {
      "products": [
        {
          "productID": 1234,
          "tag": df_dict
        }
      ]
    }

We don't need to worry about how the other dictionary looks; we can simply use the example I gave above. How do I do that? Many thanks!
|
You can groupby and then use to_dict to convert it to a dictionary.

    >>> df.groupby(df['id'], as_index=False).agg(list).to_dict(orient="records")
    [{'id': 1, 'value': ['a', 'b']}, {'id': 2, 'value': ['c', 'd', 'e']}, {'id': 3, 'value': ['f']}]
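The grouping, and the final nesting into the products structure, can also be done with the standard library alone; this sketch assumes the rows are already sorted by id, as they are in the question's data:

```python
from itertools import groupby

data = {'id': [1, 1, 2, 2, 2, 3], 'value': ['a', 'b', 'c', 'd', 'e', 'f']}

# groupby only merges *consecutive* equal keys, so this relies on the
# rows already being sorted by id.
pairs = zip(data['id'], data['value'])
df_dict = [{'id': k, 'value': [v for _, v in grp]}
           for k, grp in groupby(pairs, key=lambda p: p[0])]
print(df_dict)
# [{'id': 1, 'value': ['a', 'b']}, {'id': 2, 'value': ['c', 'd', 'e']}, {'id': 3, 'value': ['f']}]

# Nesting the result into the surrounding dictionary from the question:
payload = {"products": [{"productID": 1234, "tag": df_dict}]}
print(payload["products"][0]["tag"][0])  # {'id': 1, 'value': ['a', 'b']}
```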
|
How do I extract the information from the website?

I am trying to gather information on all the vessels from this website: https://www.marinetraffic.com/en/data/?asset_type=vessels&columns=flag,shipname,photo,recognized_next_port,reported_eta,reported_destination,current_port,imo,ship_type,show_on_live_map,time_of_latest_position,lat_of_latest_position,lon_of_latest_position&ship_type_in|in|Cargo%20Vessels|ship_type_in=7

This is my code right now:

    import selenium.webdriver as webdriver

    url = "https://www.marinetraffic.com/en/data/?asset_type=vessels&columns=flag,shipname,photo,recognized_next_port,reported_eta,reported_destination,current_port,imo,ship_type,show_on_live_map,time_of_latest_position,lat_of_latest_position,lon_of_latest_position&ship_type_in|in|Cargo%20Vessels|ship_type_in=7"
    browser = webdriver.Chrome(executable_path=r"C:\Users\CSA\OneDrive - College Sainte-Anne\Programming\PYTHON\Learning\WS\chromedriver_win32 (1)\chromedriver.exe")
    browser.get(url)
    browser.implicitly_wait(100)

    Vessel_link = browser.find_element_by_class_name("ag-cell-content-link")
    Vessel_link.click()
    browser.implicitly_wait(30)
    imo = browser.find_element_by_xpath('//*[@id="imo"]')
    print(imo)

My output

I am using Selenium, which isn't going to work because I have several thousand ships to extract data from, and it just isn't going to be efficient. (Also, I only need to extract information from cargo vessels; you can find them using the filter or by looking at the green signs in the vessel-type column.) I need to extract the country name (flag), the IMO, and the vessel's name.

What should I use? Selenium, bs4 + requests, or other libraries? And how? I just started web scraping... I can't get the IMO or anything! The HTML structure is very weird.

I would appreciate any help. Thank you! :)
|
Instead of clicking each vessel to open up the details, you can get the information you're searching for from the results page. This will get each vessel, pull the info you wanted, and click to the next page if there are more vessels:

    import selenium.webdriver as webdriver

    url = "https://www.marinetraffic.com/en/data/?asset_type=vessels&columns=flag,shipname,photo,recognized_next_port,reported_eta,reported_destination,current_port,imo,ship_type,show_on_live_map,time_of_latest_position,lat_of_latest_position,lon_of_latest_position&ship_type_in|in|Cargo%20Vessels|ship_type_in=7"
    browser = webdriver.Chrome(executable_path=r"C:\Users\CSA\OneDrive - College Sainte-Anne\Programming\PYTHON\Learning\WS\chromedriver_win32 (1)\chromedriver.exe")
    browser.get(url)
    browser.implicitly_wait(5)

    checking_for_vessels = True
    vessel_count = 0
    while checking_for_vessels:
        vessel_left_container = browser.find_element_by_class_name('ag-pinned-left-cols-container')
        vessels_left = vessel_left_container.find_elements_by_css_selector('div[role="row"]')
        vessel_right_container = browser.find_element_by_class_name("ag-body-container")
        vessels_right = vessel_right_container.find_elements_by_css_selector('div[role="row"]')
        for i in range(len(vessels_left)):
            vessel_count += 1
            vessel_country_list = vessels_left[i].find_elements_by_class_name('flag-icon')
            if len(vessel_country_list) == 0:
                vessel_country = 'Unknown'
            else:
                vessel_country = vessel_country_list[0].get_attribute('title')
            vessel_name = vessels_left[i].find_element_by_class_name('ag-cell-content-link').text
            vessel_imo = vessels_right[i].find_element_by_css_selector('[col-id="imo"] .ag-cell-content div').text
            print('Vessel #' + str(vessel_count) + ': ' + vessel_name + ', ' + vessel_country + ', ' + vessel_imo)
        pagination_container = browser.find_element_by_class_name('MuiTablePagination-actions')
        page_number = pagination_container.find_element_by_css_selector('input').get_attribute('value')
        max_page_number = pagination_container.find_element_by_class_name('MuiFormControl-root').get_attribute('max')
        if page_number == max_page_number:
            checking_for_vessels = False
        else:
            next_page_button = pagination_container.find_element_by_css_selector('button[title="Next page"]')
            next_page_button.click()

There was one vessel that did not display a flag, so there's a check for that, and the country is replaced with 'Unknown' if no flag is found. The same kind of check can be done for the vessel name and IMO.

The implicit wait was reduced to 5 because of the known issue of missing a flag on one vessel, and waiting 100 seconds for this to be figured out was excessive. This number can be adjusted higher if you find there are issues waiting long enough to find elements.

It appears you are using a Windows machine. You can place the path of your chromedriver in the PATH variable on your machine, and then you don't have to use the path when you instantiate your browser driver. Obviously, your path to your chromedriver is different than mine, so hopefully what you provided is correct or else this won't work.
|
How to make Django work with MySQL Connector/Python?

Has anyone made Django work with myconnpy? I've checked out http://github.com/rtyler/connector-django-mysql but the author said it's very outdated and not supported. If you've managed to make Django work with myconnpy, please share your experience. Thanks.
|
I needed something similar, so I forked the project you linked to and updated it to work (for small values of "work") with Django 1.2's newer database backend API. It should be noted that my use case is very simple (read access to a single table on a single database) and I have not tested it with anything more than that. You can find it at http://github.com/jerith/connector-django-mysql
|
Writing a string to the last line in a file, Python

I'm attempting to make a function that writes a string to the last line in a file. However, what I currently have (below) only writes to the first line, so if I call the function more than once, it simply overwrites the first line. I'd like it to instead write the string to a new line; how would I go about this? Snippet:

    def write(self, string):
        # self.Dir is a txt file
        self.File = file(self.Dir, 'w')
        self.File.write(string)
|
Open the file in append mode ('a' instead of 'w'). Opening in 'w' mode truncates your file (you're now writing into an empty file)
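A minimal illustration of the difference, using a temporary file so nothing real gets overwritten:

```python
import os
import tempfile

# Writing in 'a' (append) mode: each write lands after the existing
# content instead of truncating the file the way 'w' mode does.
# A trailing '\n' is what actually puts each string on its own line.
path = os.path.join(tempfile.mkdtemp(), "log.txt")

for line in ("first", "second", "third"):
    with open(path, "a") as f:
        f.write(line + "\n")

with open(path) as f:
    lines = f.read().splitlines()
print(lines)  # ['first', 'second', 'third']
```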
|
Google AppEngine tells me that my int is not an int

The relevant part of the code:

    pk = int(pk)
    logging.info('pk: %r :: %s', pk, type(pk))
    instance = models.Model.get_by_id(int(pk))

The output from the log message above:

    pk: 757347 :: <type 'int'>

The stacktrace:

    Traceback (most recent call last):
      File "/base/python_runtime/python_lib/versions/1/google/appengine/ext/webapp/__init__.py", line 634, in __call__
        handler.get(*groups)
      File "/base/data/home/apps/<myapp>/<version>/scrape.py", line 61, in get
        instance = models.Model.get_by_id(int(pk))
      File "/base/python_runtime/python_lib/versions/1/google/appengine/ext/db/__init__.py", line 1212, in get_by_id
        return get(keys[0], config=config)
      File "/base/python_runtime/python_lib/versions/1/google/appengine/ext/db/__init__.py", line 1434, in get
        model = cls1.from_entity(entity)
      File "/base/python_runtime/python_lib/versions/1/google/appengine/ext/db/__init__.py", line 1350, in from_entity
        instance = cls(None, _from_entity=True, **entity_values)
      File "/base/python_runtime/python_lib/versions/1/google/appengine/ext/db/__init__.py", line 890, in __init__
        prop.__set__(self, value)
      File "/base/python_runtime/python_lib/versions/1/google/appengine/ext/db/__init__.py", line 593, in __set__
        value = self.validate(value)
      File "/base/python_runtime/python_lib/versions/1/google/appengine/ext/db/__init__.py", line 2967, in validate
        % (self.name, type(value).__name__))
    BadValueError: Property pk must be an int or long, not a unicode

Does anyone have an idea if I'm doing something wrong here?

Note: removing the int from the last line of the code makes no difference (that was the first version). Also, the code works without a problem on dev_appserver.py.
|
Does your model have a property 'pk', which is now an IntegerProperty(), but was previously a StringProperty(), and the entity with id 757347 was saved with the old version of the model?
|
How to edit "view on site" url on django admin?

How, in a modern version of Django, can I edit the "view on site" URL in the Django admin?
|
In your model, implement the get_absolute_url method like this:

    def get_absolute_url(self):
        return reverse('model_record_view', args=[self.id])

where model_record_view is the name of the view and the record's id is passed as a parameter.
|
Create Pandas DataFrame from a list and list of lists

I have two python lists:

    messages = ['message1', 'message2', 'message3']
    labels = [[1,0,1,3,1], [1,1,2,0,3], [0,0,2,1,0]]

I am creating a DataFrame which will take messages as the first column and labels as cat_1, cat_2, cat_3, cat_4, cat_5, i.e. 6 columns in total. I tried

    msgs_labels = pd.DataFrame({'message': messages, 'cat': labels,})

but it returns two columns: message and cat.
|
If no problem with starting by 0 for new columns names, use the DataFrame constructor with join:

    df = pd.DataFrame({'message': messages}).join(pd.DataFrame(labels).add_prefix('cat_'))
    print (df)

        message  cat_0  cat_1  cat_2  cat_3  cat_4
    0  message1      1      0      1      3      1
    1  message2      1      1      2      0      3
    2  message3      0      0      2      1      0

    f = lambda x: f'cat_{x + 1}'
    df = pd.DataFrame({'message': messages}).join(pd.DataFrame(labels).rename(columns=f))
    print (df)

        message  cat_1  cat_2  cat_3  cat_4  cat_5
    0  message1      1      0      1      3      1
    1  message2      1      1      2      0      3
    2  message3      0      0      2      1      0

Some other ideas:

    f = lambda x: f'cat_{x + 1}'
    df = (pd.DataFrame(labels, index=messages)
            .rename(columns=f)
            .rename_axis('messages')
            .reset_index())
    print (df)

       messages  cat_1  cat_2  cat_3  cat_4  cat_5
    0  message1      1      0      1      3      1
    1  message2      1      1      2      0      3
    2  message3      0      0      2      1      0

Or a bit crazy:

    f = lambda x: f'cat_{x + 1}'
    df = (pd.DataFrame(labels, index=pd.Series(messages, name='messages'))
            .rename(columns=f)
            .reset_index())

Or a solution processing the nested lists first:

    d = {f'cat_{i + 1}': x for i, x in enumerate(map(list, zip(*labels)))}
    d = {**{'message': messages}, **d}
    df = pd.DataFrame(d)
    print (df)

        message  cat_1  cat_2  cat_3  cat_4  cat_5
    0  message1      1      0      1      3      1
    1  message2      1      1      2      0      3
    2  message3      0      0      2      1      0
|
RedisCluster MGET with pipeline

I am trying to perform an MGET operation on my Redis with a pipeline to increase performance. I have tried doing the MGET in one go as well as in batches:

    from rediscluster import RedisCluster

    ru = RedisCluster(startup_nodes=[{"host": "somecache.aws.com", "port": "7845"}], decode_responses=True, skip_full_coverage_check=True)
    pipe = ru.pipeline()
    # pipe.mget(keys)
    for i in range(0, len(keys), batch_size):
        temp_list = keys[i:i + batch_size]
        pipe.mget(temp_list)
    resp = pipe.execute()

So far I am getting the error:

    raise RedisClusterException("ERROR: Calling pipelined function {0} is blocked when running redis in cluster mode...".format(func.__name__))
    rediscluster.exceptions.RedisClusterException: ERROR: Calling pipelined function mget is blocked when running redis in cluster mode...

What I want to know is:

1. Does RedisCluster support pipelined MGET?
2. If not, is there any other library I can use to achieve this?
|
Turns out we cannot use MGET with the pipeline. Below is my final solution:

    from rediscluster import RedisCluster

    def redis_multi_get(rc: RedisCluster, keys: list):
        pipe = rc.pipeline()
        [pipe.get(k) for k in keys]
        return pipe.execute()

    if __name__ == '__main__':
        rc = RedisCluster(startup_nodes=[{"host": host, "port": port}], decode_responses=True, skip_full_coverage_check=True)
        keys = rc.keys(PREFIX + '*')
        cache_hit = redis_multi_get(rc, keys)
|
Combine two rows in csv using python

I need to combine two rows, removing the space between them. My csv with a single column:

    "2021-05-13"|"test"|"perfect line"
    "2021-05-13"|"test"|
    "imperfect line"
    "2021-05-13"|"test"|"perfect line"

My output needs to be:

    "2021-05-13"|"test"|"perfect line"
    "2021-05-13"|"test"|"perfect line"
    "2021-05-13"|"test"|"perfect line"

But what I got is:

    "2021-05-13"|"test"|"perfect line","2021-05-13"|"test"|"perfect line","2021-05-13"|"test"|"perfect line"

My code is:

    fIn = open("01new.csv", "r")
    fOut = open("output.csv", "w")
    fOut.write(",".join([line for line in fIn]).replace("\n", ""))
    fIn.close()
    fOut.close()

How can I get the output I need?

When I run the code from Pranav's answer, I get this output:

    "2021-05-13"|"test"|"perfect line"
    "2021-05-13"|"test"|"imperfect line"
    "2021-05-13"|"test"|"perfect line"

In addition, I had an empty delimited field that vanished too. For example, my actual file is:

    "2021-05-13"|"test"|""|"perfect line"
    "2021-05-13"|"test"|""|
    "imperfect line"
    "2021-05-13"|"test"|""|"perfect line"

I need it like:

    "2021-05-13"|"test"|""|"perfect line"
    "2021-05-13"|"test"|""|"imperfect line"
    "2021-05-13"|"test"|""|"perfect line"
|
You can use a regex to reconstruct the line in the proper format:

    import re

    with open(your_file, 'r') as f:
        s = re.sub(r'^([^|]*\|)([^|]*\|)\n\s*([^|\n]*\n)', r'\1\2\3', f.read(), flags=re.M)
    print(s)

Prints:

    "2021-05-13"|"test"|"perfect line"
    "2021-05-13"|"test"|"imperfect line"
    "2021-05-13"|"test"|"perfect line"

To use the string with csv, feed the string to the StringIO library:

    import csv
    import re
    from io import StringIO

    with open(fn, 'r') as f:
        s = re.sub(r'^([^|]*\|)([^|]*\|)\n\s*([^|\n]*\n)', r'\1\2\3', f.read(), flags=re.M)
    for row in csv.reader(StringIO(s), delimiter='|'):
        print(row)

Prints:

    ['2021-05-13', 'test', 'perfect line']
    ['2021-05-13', 'test', 'imperfect line']
    ['2021-05-13', 'test', 'perfect line']

Another way is to recognize that, with a \n inserted unquoted into the CSV file, you have a broken csv file. You can reconstruct the record structure by reading the csv one field at a time and then recombining into 3 fields per record (4 fields if you insert the blank field) like so:

    import csv

    def next_field(f):
        for line in f:
            for field in line.strip().split('|'):
                if field:
                    yield field.strip('"')

    with open(fn, 'r') as f, open(fn_out, 'w') as fo:
        w = csv.writer(fo, delimiter='|', quotechar='"', quoting=csv.QUOTE_ALL)
        for r in (t[:2] + ('',) + t[2:] for t in zip(*[iter(next_field(f))]*3)):
            w.writerow(r)

Your fn_out file is now:

    "2021-05-13"|"test"|""|"perfect line"
    "2021-05-13"|"test"|""|"imperfect line"
    "2021-05-13"|"test"|""|"perfect line"
|
osm file, parsing, memory error even with clearing elements

I want to take an osm file, clean it, and then save it as a json file. The xml file is about 1 GB big.

    def audit():
        osm_file = open('c:\Users\Stephan\Downloads\los-angeles_california.osm', "r")
        with open('lala.txt', 'w') as outfile:
            for event, elem in ET.iterparse(osm_file, events=("start",)):
                if elem.tag == "node" or elem.tag == "way":
                    json.dump(shape_element(elem), outfile)
                    elem.clear()

    audit()

Even though I use elem.clear() I still get a memory error. Does anyone know why?
|
    osm_file = open('c:\Users\Stephan\Downloads\los-angeles_california.osm', "r+")

If you want to clean it in place, it should be opened writable as well as readable (note that "wr" is not a valid mode; "r+" opens for both reading and writing).
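Mode aside, the memory growth in the question usually comes from iterparse itself: even with elem.clear(), every parsed element stays referenced by the root, and clearing on "start" events removes children before they are even built. A common pattern (hedged sketch; the tiny inline XML here stands in for the 1 GB OSM file) is to iterate on "end" events, keep a handle on the root, and clear it as top-level elements finish:

```python
import io
import xml.etree.ElementTree as ET

# Tiny stand-in for the (much larger) OSM file; real code would pass a
# file object opened in binary mode instead of this BytesIO.
xml_bytes = io.BytesIO(
    b"<osm><node id='1'/><way id='2'/><relation id='3'/><node id='4'/></osm>"
)

def tags_seen(source):
    """Stream-parse `source`, processing node/way elements while freeing memory."""
    seen = []
    context = ET.iterparse(source, events=("start", "end"))
    _, root = next(context)  # grab the root from its "start" event
    for event, elem in context:
        if event == "end" and elem.tag in ("node", "way"):
            seen.append(elem.tag)  # a real script would shape/dump elem here
            root.clear()           # detach finished children so they can be freed
    return seen

result = tags_seen(xml_bytes)
print(result)  # ['node', 'way', 'node']
```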
|
Ordering a string by its substring numerical value in python

I have a list of strings that need to be sorted in numerical order using, as int keys, two substrings. Obviously, using the sort() function orders my strings alphabetically, so I get 1, 10, 2, ... which is not what I'm looking for.

Searching around I found that a key parameter can be passed to the sort() function, and sort(key=int) should do the trick, but my key being a substring and not the whole string, that would lead to a cast error.

Suppose my strings are something like:

    test1txtfgf10
    test1txtfgg2
    test2txffdt3
    test2txtsdsd1

I want my list ordered numerically on the basis of the first integer and then the second, so I would have:

    test1txtfgg2
    test1txtfgf10
    test2txtsdsd1
    test2txffdt3

I think I could extract the integer values, sort only them while keeping track of which string they belong to, and then order the strings, but I was wondering if there's a way to do this in a more efficient and elegant way.

Thanks in advance
|
Try the following:

    In [26]: import re

    In [27]: f = lambda x: [int(x) for x in re.findall(r'\d+', x)]

    In [28]: sorted(strings, key=f)
    Out[28]: ['test1txtfgg2', 'test1txtfgf10', 'test2txtsdsd1', 'test2txffdt3']

This uses regex (the re module) to find all integers in each string, then compares the resulting lists. For example, f('test1txtfgg2') returns [1, 2], which is then compared against other lists.
|
How to move specific data from one column to a new column on Pandas?

I have a set of data with 2 columns: Column1 = Hex Code and Column2 = Current (A). The data in Column1 is hex code, 27 different codes which repeat, and for each hex code there is a Current (A) value in Column2.

I want to pick a set of 27 data points from Column1 & Column2 and place them into Column3 & Column4. Can someone help me to achieve this?

This is how the initial data looks.

This is how I would like the data to be arranged.
|
I am going to show you my code. But I want to point out that you cannot have repeated column names. We suppose data is the name of your original dataset:

    import pandas as pd

    col_name1 = data.columns.values[0]
    col_name2 = data.columns.values[1]
    two_columns = data[[col_name1, col_name2]][0:27].values
    two_columns = pd.DataFrame(two_columns, columns=[col_name1 + '_1', col_name2 + '_2'])
    df = data.iloc[0:27, :]
    df = df.join(two_columns)
    print(df)
|
Unable to click element on page Selenium python

I am trying to move to page 2 and beyond of this page (pagination) with Python Selenium and have spent a few hours on this. I am getting this error from chromedriver and would be thankful for any help:

    is not clickable at point(). Other element would receive the click

My code so far:

    class Chezacash:
        t1 = time.time()
        driver = webdriver.Chrome(chromedriver)

        def controller(self):
            self.driver.get("https://www.chezacash.com/#/home/")
            element = WebDriverWait(self.driver, 10).until(
                EC.presence_of_element_located((By.CSS_SELECTOR, "div.panel-heading")))
            soup = BeautifulSoup(self.driver.page_source.encode('utf-8'), "html.parser")
            self.parser(soup)
            self.driver.find_element(By.XPATH, "//li[@class='paginate_button active']/following-sibling::li").click()
            time.sleep(2)
            soup = BeautifulSoup(self.driver.page_source.encode('utf-8'), "html.parser")
            self.parser(soup)

        def parser(self, soup):
            for i in soup.find("table", {"id": "DataTables_Table_1"}).tbody.contents:
                date = i.findAll("td")[0].get_text().strip()
                time = i.findAll("td")[1].get_text().strip()
                home = i.findAll("td")[4].div.span.get_text().strip().encode("utf-8")
                home_odds = i.findAll("td")[4].div.findAll("span")[1].get_text().strip()
                draw_odds = i.findAll("td")[5].div.findAll("span")[1].get_text().strip()
                away = i.findAll("td")[6].div.span.get_text().strip().encode("utf-8")
                away_odds = i.findAll("td")[6].div.findAll("span")[1].get_text().strip()
                print home

    cheza = Chezacash()
    try:
        cheza.controller()
    except:
        cheza.driver.service.process.send_signal(signal.SIGTERM)  # kill the specific phantomjs child proc
        # quit the node proc
        cheza.driver.quit()
        traceback.print_exc()
|
What if instead you would locate the "Next" button by link text, scroll it into view and then click:

```python
next_button = self.driver.find_element_by_link_text("Next")
self.driver.execute_script("arguments[0].scrollIntoView();", next_button)
next_button.click()
```

I would also maximize the browser window before navigating to the page:

```python
self.driver.maximize_window()
self.driver.get("https://www.chezacash.com/#/home/")
```
|
How can I track all SQL query timings and counts in Django?

I'd like to have a Django application record how much time each SQL query took. The first problem is that SQL queries differ, even when they originate from the same code. That can be solved by normalizing them, so that

```sql
SELECT first_name, last_name FROM people WHERE NOW() - birth_date < interval '20' years;
```

would become something like

```sql
SELECT $ FROM people WHERE $ - birth_date < $;
```

After getting that done, we could just log the normalized query and the query timing to a file, syslog or statsd (for statsd, I'd probably also use a hash of the query as a key, and keep an index of hash->query relations elsewhere).

The bigger problem, however, is figuring out where that action can be performed. The best place for that I could find is this: https://github.com/django/django/blob/b5bacdea00c8ca980ff5885e15f7cd7b26b4dbb9/django/db/backends/util.py#L46 (note: we do use that ancient version of Django, but I'm fine with suggestions that are relevant only to newer versions).

Ideally, I'd like to make this a Django extension rather than modifying Django source code. Sounds like I can make another backend, inheriting from the one we currently use, and make its CursorWrapper class's execute method record the timing and counter. Is that the right approach, or should I be using some other primitives, like QuerySet or something?
|
Django debug toolbar has a panel that shows "SQL queries including time to execute and links to EXPLAIN each query"http://django-debug-toolbar.readthedocs.io/en/stable/panels.html#sql
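On newer Django versions (2.0+) there is also a documented hook for exactly this kind of instrumentation, `connection.execute_wrapper`, which avoids subclassing a backend. A minimal sketch; the wrapper class itself is plain Python, and the commented lines show the assumed Django usage (`MyModel` is hypothetical):

```python
import time

class QueryTimer:
    # Callable usable as a Django 2.0+ execute_wrapper: it receives the real
    # execute function plus the SQL and parameters, and logs the duration.
    def __call__(self, execute, sql, params, many, context):
        start = time.monotonic()
        try:
            return execute(sql, params, many, context)
        finally:
            elapsed_ms = (time.monotonic() - start) * 1000
            print("%.3f ms  %s" % (elapsed_ms, sql))

# Assumed Django usage:
# from django.db import connection
# with connection.execute_wrapper(QueryTimer()):
#     MyModel.objects.count()
```

Inside the wrapper you could also normalize the SQL and increment a statsd counter instead of printing.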
|
None value in python numerical integration function

I'm trying to write a code that calculates integrals using the rectangular rule and also allows the user to input the integral limits and number of divisions (rectangles). I've written the function, but for certain values it just returns "None". Any idea why? Here's my code so far:

```python
def integral(f, a, b, N):
    h = int((b-a)/N)
    result = 0
    result += h * f(a)
    for i in range(1, N-1):
        result += h * f(a + i*h)
        return result

def f(x):
    return x**3

string_input1 = input("Please enter value for a: ")
a = int(string_input1)
string_input2 = input("Please enter value for b: ")
b = int(string_input2)
while True:
    string_input3 = input("Please enter integer positive value for N: ")
    N = int(string_input3)
    if N > 0:
        break

print(integral(f, a, b, N))
```

An example of values that return "None" is a=0, b=1, N=2.
|
```python
for i in range(1, N-1):
    result += h * f(a + i*h)
    return result
```

If N = 2 then `for i in range(1, 1)` is not going to execute, thus `integral` returns None. But even if N > 2, having `return` inside the for loop doesn't make any sense, since it will only run the first iteration and then return whatever `result` is.
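A corrected sketch of the left-endpoint rectangular rule. Besides moving the `return`, it also keeps `h` as a float: the original `h = int((b-a)/N)` truncates to 0 for a=0, b=1, N=2, which would make every result 0 even with the loop fixed.

```python
def integral(f, a, b, N):
    h = (b - a) / N  # keep h as a float; int() would truncate it to 0 here
    result = 0
    for i in range(N):  # left edge of each of the N rectangles
        result += h * f(a + i * h)
    return result  # return *after* the loop has finished

def f(x):
    return x ** 3

print(integral(f, 0, 1, 2))  # 0.0625
```

For a=0, b=1, N=2 this sums `0.5*f(0) + 0.5*f(0.5)`, giving 0.0625.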
|
How to pass a variable as a column name with pyodbc?

I have a list that has two phone numbers and I'd like to put each phone number into its own column in an Access database. The column names are Phone_Number1 and Phone_Number2. How do I pass that to the INSERT statement? This is what I have so far:

```python
phone_numbers = ['###.218.####', '###.746.####']

driver = '{Microsoft Access Driver (*.mdb, *.accdb)}'
filepath = 'C:/Users/Notebook/Documents/master.accdb'
myDataSources = pyodbc.dataSources()
access_driver = myDataSources['MS Access Database']
conn = pyodbc.connect(driver=driver, dbq=filepath)
cursor = conn.cursor()

phone_number_count = 1
for phone_number in phone_numbers:
    column_name = "Phone_Number" + str(phone_number_count)
    cursor.execute("INSERT INTO Business_Cards (column_name) VALUES (?)", (phone_number))

conn.commit()
print("Your database has been updated.")
```

```
Traceback (most recent call last):
  File "C:/Users/Notebook/PycharmProjects/Jarvis/BusinessCard.py", line 55, in <module>
    database_entry(phone_numbers, emails, name, title)
  File "C:/Users/Notebook/PycharmProjects/Jarvis/BusinessCard.py", line 47, in database_entry
    cursor.execute("INSERT INTO Business_Cards (column_name) VALUES (?)", (phone_number))
pyodbc.Error: ('HYS22', "[HYS22] [Microsoft][ODBC Microsoft Access Driver] The INSERT INTO statement contains the following unknown field name: 'column_name'. Make sure you have typed the name correctly, and try the operation again. (-1507) (SQLExecDirectW)")
```
|
If you want to insert both numbers in the same row, remove the for loop and adjust the INSERT to consider the two columns:

```python
phone_numbers = ['###.218.####', '###.746.####']
# ...
column_names = [f"Phone_Number{i}" for i in range(1, len(phone_numbers) + 1)]
placeholders = ['?'] * len(phone_numbers)
cursor.execute(f"INSERT INTO Business_Cards ({', '.join(column_names)}) VALUES ({', '.join(placeholders)})",
               tuple(phone_numbers))
conn.commit()
# ...
```
|
How to blit from the x and y coordinates of an image in Pygame?

I'm trying to lessen the number of files I need for my pygame project: instead of having a folder with, for example, 8 boot files, I can make one bigger image that has all 8 pictures put next to each other, and depending on the animation tick, that specific part of the image gets blitted. Currently, I utilise lists:

```python
right = ["playerdesigns/playerright0.png",
         "playerdesigns/playerright1.png",
         "playerdesigns/playerright2.png",
         "playerdesigns/playerright3.png"]
```

My code then, depending on the animation tick, takes one of those files and blits it. But I wish to make it into one playerright.png image file where the 0-100 x-pixels of the picture hold playerright1.png, the 101-200 x-pixels hold playerright2.png, etc., and then, depending on need, I can blit a 100-wide image from any point.
|
You can define a subsurface that is directly linked to the source surface with the method `subsurface`:

> subsurface(Rect) -> Surface
> Returns a new Surface that shares its pixels with its new parent. The new Surface is considered a child of the original. Modifications to either Surface pixels will effect each other.

The Rect argument of `subsurface` specifies the rectangular area for the sub-image. It can either be a `pygame.Rect` object or a tuple with 4 components (x, y, width, height). For example, if you have an image that contains 3 100x100 size sub-images:

```python
right_surf = pygame.image.load("playerdesigns/playerright.png")
right_surf_list = [right_surf.subsurface((i*100, 0, 100, 100)) for i in range(3)]
```
|
Why I am getting None in place of int from if block inside a method called by another method in the class

Here is the code. I am trying to get a binary search result using a method inside the class. The class has more functions, but only this function is giving the wrong output (None in place of an integer). The if part from line number 10 to 15 is causing the problem.

```python
class Solution:
    def getNum(self, nums, x):
        L_nums = len(nums)
        j = self.binary_search(nums, 0, L_nums-1, x)
        print("j=", j)

    def binary_search(self, nums, start, end, x):
        print("BS called start=>", start, "end=>", end, "x=>", x)
        if end == start:
            if nums[end] == x:
                return end
            else:
                print("called else and returing -1")
                return -1
        i = (end-start)//2 + start
        print("i=", i)
        if x == nums[i]:
            return i
        elif x > nums[i]:
            self.binary_search(nums, i+1, end, x)
        else:
            self.binary_search(nums, start, i-1, x)

sol = Solution()
sol.getNum([1,3,5,7,9], 1)
```

Output (here j should be 0; in place of that it is returning None):

```
BS called start=> 0 end=> 4 x=> 1
i= 2
BS called start=> 0 end=> 1 x=> 1
i= 0
j= None
```
|
Short answer to your question: include the `return` in rows 21 and 23:

```python
elif x > nums[i]:
    return self.binary_search(nums, i+1, end, x)
else:
    return self.binary_search(nums, start, i-1, x)
```
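Putting it together, a sketch of the same class with only the missing `return`s added, the debug prints removed, and `getNum` returning the index so the result can be checked (it keeps the original structure, which assumes x lies within the list's range):

```python
class Solution:
    def getNum(self, nums, x):
        return self.binary_search(nums, 0, len(nums) - 1, x)

    def binary_search(self, nums, start, end, x):
        if end == start:
            return end if nums[end] == x else -1
        i = (end - start) // 2 + start
        if x == nums[i]:
            return i
        elif x > nums[i]:
            # Propagate the recursive result back up the call stack.
            return self.binary_search(nums, i + 1, end, x)
        else:
            return self.binary_search(nums, start, i - 1, x)

sol = Solution()
print(sol.getNum([1, 3, 5, 7, 9], 1))  # 0
```

Without the `return` in front of the recursive calls, the recursion finds the right index but the value is discarded, so the outermost call falls off the end of the function and Python returns None.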
|
Using Rasa NLU model with python API instead of HTTP server Is there a way to use https://nlu.rasa.com model without the HTTP server ? I want to use it as a python library/module.
|
Yes, and this is documented in their docs at nlu.rasa.com, specifically this section. As of version 0.12.3:

Training:

```python
from rasa_nlu.training_data import load_data
from rasa_nlu.config import RasaNLUModelConfig
from rasa_nlu.model import Trainer
from rasa_nlu import config

training_data = load_data('data/examples/rasa/demo-rasa.json')
trainer = Trainer(config.load("sample_configs/config_spacy.yml"))
trainer.train(training_data)
model_directory = trainer.persist('./projects/default/')  # Returns the directory the model is stored in
```

Parsing:

```python
from rasa_nlu.model import Metadata, Interpreter

# where `model_directory` points to the folder the model is persisted in
interpreter = Interpreter.load(model_directory)
interpreter.parse(u"The text I want to understand")
```
|
Python - MySQL "Column count doesn't match value count at row 1"

```python
name = form.name.data
email = form.email.data
username = form.username.data
password = sha256_crypt.encrypt(form.password.data)

cursor = mysql.connection.cursor()
cursor.execute("Insert into users(name,email.username,password) values(%s,%s,%s,%s)",
               (name, email, username, password))
mysql.connection.commit()
cursor.close()
```

I am using Python with MySQL to insert the data entered in a form into a table in the database, but I am getting this error. Can you help me?
|
```python
cursor.execute("Insert into users(name,email.username,password)
```

You have a "." instead of a "," between email and username. It should be:

```python
cursor.execute("Insert into users(name,email,username,password)
```
|
Update an Excel sheet in real time using Python

Is there a way to update a spreadsheet in real time while it is open in Excel? I have a workbook called Example.xlsx which is open in Excel, and I have the following Python code which tries to update cell B1 with the string 'ID':

```python
import openpyxl

wb = openpyxl.load_workbook('Example.xlsx')
sheet = wb['Sheet']
sheet['B1'] = 'ID'
wb.save('Example.xlsx')
```

On running the script I get this error:

```
PermissionError: [Errno 13] Permission denied: 'Example.xlsx'
```

I know it's because the file is currently open in Excel, but I was wondering if there is another way or module I can use to update a sheet while it's open.
|
I have actually figured this out, and it's quite simple using xlwings. The following code opens an existing Excel file called Example.xlsx and updates it in real time; in this case it puts the value 45 in cell B2 instantly, as soon as you run the script:

```python
import xlwings as xw

wb = xw.Book('Example.xlsx')
sht1 = wb.sheets['Sheet']
sht1.range('B2').value = 45
```
|
In Python how to strip dollar signs and commas from dollar related fields only

I'm reading in a large text file with lots of columns, dollar related and not, and I'm trying to figure out how to strip the dollar fields ONLY of $ and , characters. So say I have:

```
a|b|c
$1,000|hi,you|$45.43
$300.03|$MS2|$55,000
```

where a and c are dollar fields and b is not. The output needs to be:

```
a|b|c
1000|hi,you|45.43
300.03|$MS2|55000
```

I was thinking that regex would be the way to go, but I can't figure out how to express the replacement:

```python
f = open('sample1_fixed.txt', 'wb')
for line in open('sample1.txt', 'rb'):
    new_line = re.sub(r'(\$\d+([,\.]\d+)?k?)', ????, line)
    f.write(new_line)
f.close()
```

Anyone have an idea? Thanks in advance.
|
Unless you are really tied to the idea of using a regex, I would suggest doing something simple, straight-forward, and generally easy to read:

```python
def convert_money(inval):
    if inval[0] == '$':
        test_val = inval[1:].replace(",", "")
        try:
            _ = float(test_val)
        except:
            pass
        else:
            inval = test_val
    return inval

def convert_string(s):
    return "|".join(map(convert_money, s.split("|")))

a = '$1,000|hi,you|$45.43'
b = '$300.03|$MS2|$55,000'
print convert_string(a)
print convert_string(b)
```

OUTPUT

```
1000|hi,you|45.43
300.03|$MS2|55000
```
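If a regex is still preferred, here is a Python 3 sketch (not part of the answer above) that rewrites only tokens that look like whole dollar amounts, leaving values such as `$MS2` alone; the lookahead restricts matches to values that run to the end of a `|`-separated field:

```python
import re

def strip_dollars(line):
    # Replace amounts like $1,000 or $45.43 with the bare number;
    # $ followed by a non-digit (e.g. $MS2) is left untouched.
    return re.sub(r"\$(\d{1,3}(?:,\d{3})*(?:\.\d+)?)(?=\||$)",
                  lambda m: m.group(1).replace(",", ""), line)

print(strip_dollars("$1,000|hi,you|$45.43"))   # 1000|hi,you|45.43
print(strip_dollars("$300.03|$MS2|$55,000"))   # 300.03|$MS2|55000
```

The `\d{1,3}(?:,\d{3})*` part only accepts properly grouped thousands separators, so stray commas in text fields like `hi,you` are never touched.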
|
Removing the rows that columns don't match with the same values

I have a data frame that looks like this:

```
      V1  V2  V3
   hello   0   0
    nice   0   1
 meeting   1   1
     you   1   0
```

I want to make it look like this:

```
      V1  V2  V3
   hello   0   0
 meeting   1   1
```

So basically I want to remove the rows where the V2 and V3 columns do not match. I only want to keep the rows where the V2 and V3 columns share the same value, either 0 or 1. How can I do this? Please help me. Thank you very much in advance.
|
Use boolean indexing with the logic inverted: instead of removing the rows that differ, select the rows with the same values in both columns:

```python
df = df[df.V2 == df.V3]
```

Alternative with `Series.eq` for the comparison:

```python
df = df[df.V2.eq(df.V3)]
```

Next alternative with `DataFrame.query`:

```python
df = df.query("V2 == V3")
```
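A runnable version with the question's data, to show the selection end to end:

```python
import pandas as pd

df = pd.DataFrame({
    "V1": ["hello", "nice", "meeting", "you"],
    "V2": [0, 0, 1, 1],
    "V3": [0, 1, 1, 0],
})

# Keep only the rows where V2 and V3 hold the same value.
matched = df[df.V2 == df.V3]
print(matched)
#         V1  V2  V3
# 0    hello   0   0
# 2  meeting   1   1
```

`df.V2 == df.V3` builds a boolean Series (True where the row matches), and indexing the frame with it keeps exactly those rows.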
|
Azure Blob Bindings with Azure Function (Python)

I currently have a process of reading from SQL, using pandas and pd.ExcelWriter to format the data, and emailing it out. I want my function to read from SQL (no problem) and write to a blob, then from that blob (using the SendGrid binding) attach that file and send it out. My question is: do I need both an in binding (attaching for email) and an out binding (archiving to the blob) for that blob? Additionally, is this the simplest way to do this? It would be nice to send it and write to the blob as two unconnected operations instead of sequentially. It also appears that with the binding, I have to hard code the name of the file in the blob path? That seems a little ridiculous; does anyone know a workaround, or perhaps I have misunderstood?
|
> do I need both an in (attaching for email) and an out (archiving to the blob) binding for that blob?

Firstly, I don't think you could bind the blob in and out simultaneously if it does not exist. If you have tried, you will find it returns an error. And I suppose you could send the mail directly with the content from SQL and write to the blob; you don't need to read the content from the blob again.

> I have to hard code the name of the file in the blob-path?

If you can accept a GUID or datetime blob name, you can bind the path with {rand-guid} or {DateTime} (you can format the time). If you can't accept this binding, you can pass the blob path from the trigger body with JSON data, like in the pic below. If you use another trigger, such as a queue trigger, you can also pass the JSON data with the path value.
|
Python: Mysql Escape function generates corrupted query

Python's MySQL default escape function corrupts the query. The original query string is the following. It works fine and adds records to the database as desired:

```sql
INSERT IGNORE INTO state (`name`, `search_query`, `business_status`, `business_type`, `name_type`, `link`) VALUES ("test_name1", "test", "test_status", "test_b_typ", "test_n_typ", "test_link"), ("test_name2", "test", "test_status", "test_b_typ", "test_n_typ", "test_link")
```

But after escaping it to make it SQL-injection secure using the function

```python
safe_sql = self.conn.escape_string(original_sql)
```

the safe_sql being generated is the following:

```
b'INSERT IGNORE INTO state (`name`, `search_query`, `business_status`, `business_type`, `name_type`, `link`) VALUES (\\"test_name1\\", \\"test\\", \\"test_status\\", \\"test_b_typ\\", \\"test_n_typ\\", \\"test_link\\"), (\\"test_name2\\", \\"test\\", \\"test_status\\", \\"test_b_typ\\", \\"test_n_typ\\", \\"test_link\\")'
```

Now if I try to execute the safe_sql, I get the syntax error below:

```
MySQLdb._exceptions.ProgrammingError: (1064, 'You have an error in your SQL syntax; check the manual that corresponds to your MariaDB server version for the right syntax to use near \'\\"test_name1\\", \\"test\\", \\"test_status\\", \\"test_b_typ\\", \\"test_n_typ\\", \\"tes\' at line 1')
```

This makes me wonder whether the escape function I am using is broken/incompatible, or whether I am not using it the right way. Also, I am entering hundreds of records at one time, and due to the faster processing (which I purely assume) of a single query as compared to a prepared statement running hundreds of times, I am creating one large query.
|
You can't escape the entire query! You can't construct a query by randomly concatenating strings and then wave a magic wand over it and make it "injection secure". You need to escape every individual value before you put it into the query. E.g.:

```python
"INSERT ... VALUES ('%s', ...)" % self.conn.escape_string(foo)
```

But really, your MySQL API probably offers prepared statements, which are much easier to use and less error prone. Something like:

```python
self.conn.execute('INSERT ... VALUES (%s, %s, %s, ...)', (foo, bar, baz))
```
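For the bulk-insert case from the question, DB-API drivers such as MySQLdb and mysql.connector also expose `cursor.executemany`, which takes one parameterized statement plus a sequence of row tuples and escapes each value itself. A sketch; the table and values mirror the question, and the actual calls are commented out since they need a live connection:

```python
rows = [
    ("test_name1", "test", "test_status", "test_b_typ", "test_n_typ", "test_link"),
    ("test_name2", "test", "test_status", "test_b_typ", "test_n_typ", "test_link"),
]
sql = ("INSERT IGNORE INTO state "
       "(`name`, `search_query`, `business_status`, `business_type`, `name_type`, `link`) "
       "VALUES (%s, %s, %s, %s, %s, %s)")
# cursor.executemany(sql, rows)   # the driver escapes every value in every row
# conn.commit()
print(sql.count("%s"), len(rows))  # 6 placeholders, 2 rows
```

This keeps the hundreds-of-records case as a single call without ever building the values into the SQL string by hand.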
|
error attributing items from scrapy into a database

I am trying to insert items scraped through scrapy into a MySQL database (and create a new database if none is present). I followed an online tutorial, since I have no idea how to do this, but an error keeps happening. I am trying to store an item that contains 5 text fields into a database. Here's my pipeline:

```python
# -*- coding: utf-8 -*-

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://doc.scrapy.org/en/latest/topics/item-pipeline.html
import mysql.connector

class LinkPipeline(object):
    def _init_(self):
        self.create_connection()
        self.create_table()

    def create_connection(self):
        self.conn = mysql.connector.connect(
            host='localhost',
            user='root',
            passwd='facebook123',
            database='link'
        )
        self.curr = self.conn.cursor()

    def create_table(self):
        self.curr.execute("""DROP TABLE IF EXISTS link_tb""")
        self.curr.execute("""create table link_tb(
            profile text,
            post_url text,
            action text,
            url text,
            date text
        )""")

    def process_item(self, item, spider):
        self.store_db(item)
        return item

    def store_db(self, item):
        self.curr.execute("""insert into link_tb values (%s,%s,%s,%s,%s)""", (
            item['profile'][0],
            item['post_url'][0],
            item['action'][0],
            item['url'][0],
            item['date'][0]
        ))
        self.conn.commit()
```

Here's a part of my spider:

```python
if response.meta['flag'] == 'init':
    # parse root comment
    for root in response.xpath('//div[contains(@id,"root")]/div/div/div[count(@id)!=1 and contains("0123456789", substring(@id,1,1))]'):
        new = ItemLoader(item=LinkItem(), selector=root)
        new.context['lang'] = self.lang
        new.add_xpath('profile', "substring-before(.//h3/a/@href, concat(substring('&', 1 div contains(.//h3/a/@href, 'profile.php')), substring('?', 1 div not(contains(.//h3/a/@href, 'profile.php')))))")
        new.add_xpath('action', './/div[1]//text()')
        new.add_xpath('date', './/abbr/text()')
        new.add_value('post_url', response.meta['link_url'])
        new.add_value('url', response.url)
        yield new.load_item()
```

I expect the item to be stored in my "link" database, but I keep running into this error:

```
self.cursor.execute("""insert into link_tb values (%s,%s,%s,%s,%s)""", (
AttributeError: 'LinkPipeline' object has no attribute 'cursor'
```
|
You defined the constructor as `_init_` instead of `__init__`.
|
How to plot lines from a dataframe with column headers as the x-axis

I figure I need to do some sort of data sorting, or to display it differently, in order to plot the graph, but I'm not sure how. I have tried transposing the data set, but that doesn't seem to do the trick either. This is my data after slicing, and I need to plot the W values as the x-axis vs the R values as y1, y2, y3, y4 and y5:

```python
import pandas as pd

data = {'observations': [15, 28, 10, 6, 25],
        'biomass': [94.67, 56.56, 81.33, 26.00, 65.78],
        380: [0.013918, 0.012229, 0.013622, 0.015602, 0.011784],
        390: [0.015578, 0.012762, 0.014548, 0.017856, 0.013304],
        400: [0.016338, 0.014434, 0.014872, 0.019132, 0.014054]}

data1 = pd.DataFrame(data, index=[14, 17, 9, 5, 24])
data1.plot()
```
|
For each graph you need two arrays or lists, x and y. Since the x values are the same for every graph, you can reuse them. You could get them from the keys of your DataFrame (assuming they are integers) like this:

```python
x = [key for key in df.keys() if type(key) == int]
```

Next you need the y values for each graph. You can iterate the rows of a DataFrame with `df.iterrows()`:

```python
fig, ax = plt.subplots()  # create figure and axes
for index, row in data1[x].iterrows():
    ax.plot(x, row)
plt.show()
```

`data1[x]` returns the columns that are in x. `iterrows()` returns a tuple of index and row; row is of type `pandas.Series`.
|
Autostart a python program in RaspberryPi

I am making a project related to RaspberryPi and Xbee, where it is essential that a Python program should start when I give power to the RaspberryPi. I saw a technique in a Udemy lecture, where it was said:

```
sudo crontab -e
```

A file will open. Go to the end of the file and then type:

```
@reboot sudo python3 /home/pi/mycode.py
```

Reboot the RaspberryPi. Even by doing this, I am not getting any success. Please suggest where I am going wrong. This is an easy problem, but I am stuck here. Please help.
|
```
sudo nano /home/pi/.bashrc
```

Go to the last line of the script and add:

```
echo Running at boot
sudo python /home/pi/sample.py
```

There are various other ways in this blog: https://www.dexterindustries.com/howto/run-a-program-on-your-raspberry-pi-at-startup/
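Note that .bashrc runs on every interactive login, not only at boot. Another common option covered in that blog is a systemd service. A sketch of a unit file (paths and names are assumptions), saved e.g. as /etc/systemd/system/mycode.service and enabled with `sudo systemctl enable mycode`:

```ini
[Unit]
Description=Run mycode.py at boot
After=network.target

[Service]
ExecStart=/usr/bin/python3 /home/pi/mycode.py
Restart=on-failure
User=pi

[Install]
WantedBy=multi-user.target
```

Unlike the crontab approach, systemd gives you restart-on-failure and `journalctl -u mycode` for logs, which helps debug why a boot-time script is failing.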
|
Optionally passing parameters onto another function with jit

I am attempting to jit compile a python function, and use an optional argument to change the arguments of another function call. I think where jit might be tripping up is that the default value of the optional argument is None, and jit doesn't know how to handle that, or at least doesn't know how to handle it when it changes to a numpy array. See below for a rough overview:

```python
@jit(nopython=True)
def foo(otherFunc, arg1, optionalArg=None):
    if optionalArg is not None:
        out = otherFunc(arg1, optionalArg)
    else:
        out = otherFunc(arg1)
    return out
```

where optionalArg is either None or a numpy array.

One solution would be to turn this into three functions as shown below, but this feels kinda janky and I don't like it, especially because speed is very important for this task:

```python
def foo(otherFunc, arg1, optionalArg=None):
    if optionalArg is not None:
        out = func1(otherFunc, arg1, optionalArg)
    else:
        out = func2(otherFunc, arg1)
    return out

@jit(nopython=True)
def func1(otherFunc, arg1, optionalArg):
    out = otherFunc(arg1, optionalArg)
    return out

@jit(nopython=True)
def func2(otherFunc, arg1):
    out = otherFunc(arg1)
    return out
```

Note that other stuff is happening besides just calling otherFunc that makes using jit worth it, but I'm almost certain that is not where the problem is, since this was working before without the optionalArg portion, so I have decided not to include it. For those of you who are curious, it's a Runge-Kutta order 4 implementation with optional extra parameters to pass to the differential equation. If you want to see the whole thing, just ask.

The traceback is rather long, but here is some of it:

```
inte.rk4(de2, y0, 0.001, 200, vals=np.ones(4))
Traceback (most recent call last):
  File "<ipython-input-38-478197aa6a1a>", line 1, in <module>
    inte.rk4(de2, y0, 0.001, 200, vals=np.ones(4))
  File "C:\Users\Alex\Anaconda3\lib\site-packages\numba\dispatcher.py", line 350, in _compile_for_args
    error_rewrite(e, 'typing')
  File "C:\Users\Alex\Anaconda3\lib\site-packages\numba\dispatcher.py", line 317, in error_rewrite
    reraise(type(e), e, None)
  File "C:\Users\Alex\Anaconda3\lib\site-packages\numba\six.py", line 658, in reraise
    raise value.with_traceback(tb)
TypingError: Internal error at <numba.typeinfer.CallConstraint object at 0x00000258E168C358>:
```

This continues... `inte.rk4` is the equivalent of foo, de2 is otherFunc, y0, 0.001 and 200 are just values that I swapped out for arg1 in my problem description above, and vals is optionalArg. A similar thing happens when I try to run this with the vals parameter omitted:

```
ysExp = inte.rk4(deExp, y0, 0.001, 200)
Traceback (most recent call last):
  File "<ipython-input-39-7dde4bcbdc2f>", line 1, in <module>
    ysExp = inte.rk4(deExp, y0, 0.001, 200)
  File "C:\Users\Alex\Anaconda3\lib\site-packages\numba\dispatcher.py", line 350, in _compile_for_args
    error_rewrite(e, 'typing')
  File "C:\Users\Alex\Anaconda3\lib\site-packages\numba\dispatcher.py", line 317, in error_rewrite
    reraise(type(e), e, None)
  File "C:\Users\Alex\Anaconda3\lib\site-packages\numba\six.py", line 658, in reraise
    raise value.with_traceback(tb)
TypingError: Internal error at <numba.typeinfer.CallConstraint object at 0x00000258E048EA90>:
```

This continues...
|
If you see the documentation here, you can specify the optional type arguments explicitly in Numba. For example (this is the same example from the documentation):

```python
>>> @jit((optional(intp),))
... def f(x):
...     return x is not None
...
>>> f(0)
True
>>> f(None)
False
```

Additionally, based on the conversation going on in this GitHub issue, you can use the following workaround to implement an optional keyword. I have modified the code from the solution provided in the GitHub issue to suit your example:

```python
from numba import jitclass, int32, njit
from collections import OrderedDict
import numpy as np

np_arr = np.asarray([1, 2])

spec = OrderedDict()
spec['x'] = int32

@jitclass(spec)
class Foo(object):
    def __init__(self, x):
        self.x = x

    def otherFunc(self, optionalArg):
        if optionalArg is None:
            return self.x + 10
        else:
            return len(optionalArg)

@njit
def useOtherFunc(arg1, optArg):
    foo = Foo(arg1)
    print(foo.otherFunc(optArg))

arg1 = 5
useOtherFunc(arg1, np_arr)  # Output: 2
useOtherFunc(arg1, None)    # Output: 15
```

See this colab notebook for the example shown above.
|
Why flask session didn't store the user info when making different posts to it from react?

I wrote several APIs using Flask-RESTful and several React modules for testing purposes. Ideally, if I stored some info in the session through a request, Python should be able to detect whether there is such a session even in other API entries, with code like:

```python
if session:
    return jsonify({'user': session['username'], 'status': 2000})
return jsonify({'user': None, 'status': 3000})
```

However, the problem I met was that only within a single request, say the login request, was the session properly used and the username stored in it. For example:

```python
from flask import session
...

# login API
class UserLoginResource(Resource):
    @staticmethod
    def post():
        ...
        # a user object (model) is defined
        session['username'] = user.username
        return jsonify({'status': 2000, 'user': session['username']})
```

With this code, it returned the exact username from the session, which meant the info was stored. However, when I made another GET request from the React side to the index API, like:

```python
from flask import session
...

# index API (without any practical use)
class IndexResource(Resource):
    @staticmethod
    def get():
        if session:
            return jsonify({'username': session['username']})
```

In this case, the response was None, because the API didn't detect any session.

```javascript
// makePostRequest function
makePostRequest = (e: any) => {
    e.preventDefault()
    const payload = {
        'email': this.state.email,
        'password': this.state.password
    }
    fetch('http://127.0.0.1:5000/api/login', {
        method: 'POST',
        headers: {
            'Access-Control-Allow-Origin': '*',
            'Content-Type': 'application/json'
        },
        body: JSON.stringify(payload)
    }).then(res => res.json())
      .then(res => {this.setState({ status: res['status'], username: res['user'] })})
      .catch(err => console.log(err))
}
```

This is the way I make the login POST request. If login is successful, it returns status code 2000; and if the status code is 2000, it means the program has gone through the code `session['username'] = _the_username_`, and I should be able to extract the username data from session storage when accessing the index page.

```javascript
componentDidMount = () => {
    fetch('http://127.0.0.1:5000/api')
        .then(res => res.json())
        .then(res => this.setState({ user: res['user'], status: res['status'] }))
}
```

This is how I make the GET request on the homepage module. However, the user is always None and the status is always 3000.

Update:

So, I added a GET request within `class UserLoginResource(Resource)` like this:

```python
class UserLoginResource(Resource):
    @staticmethod
    def post():
        ...  # identical to the previous code

    @staticmethod
    def get():
        # url: http://127.0.0.1:5000/api/login
        session['username'] = 'user_a'
        return jsonify({'message': 'session set'})
```

I made a GET request from the React side to http://127.0.0.1:5000/api/login and got the message "session set". However, when React then accessed http://127.0.0.1:5000/api, the result remained status 3000 and a None username. Then I directly accessed the URL http://127.0.0.1:5000/api/login in the browser, then accessed http://127.0.0.1:5000/api, and there we go: we had the username user_a and status 2000.

So, I think the problem here might be that the backend didn't recognize that the browser accessing it was the same client, or it might be something else. Also, I checked whether it was something wrong with componentDidMount, but unfortunately componentDidMount wasn't the source of the error: after I turned it into a normal function triggered by onClick, it still didn't work. How to fix this?
|
fetch does not support cookies by default; you need to enable them using `credentials: 'include'`:

```javascript
makePostRequest = (e: any) => {
    e.preventDefault()
    const payload = {
        'email': this.state.email,
        'password': this.state.password
    }
    fetch('http://127.0.0.1:5000/api/login', {
        method: 'POST',
        credentials: 'include',
        headers: {
            'Content-Type': 'application/json'
        },
        body: JSON.stringify(payload)
    }).then(res => res.json())
      .then(res => {this.setState({ status: res['status'], username: res['user'] })})
      .catch(err => console.log(err))
}
```

(Note that `Access-Control-Allow-Origin` is a response header set by the server, not a request header, so it has been dropped from the fetch call.)

Enable CORS on the server using `pip install flask-cors`, then add this to app.py, where you initialize your app:

```python
from flask_cors import CORS

app = Flask(__name__)
CORS(app, supports_credentials=True)
```

`supports_credentials=True` makes flask-cors emit the `Access-Control-Allow-Credentials` header; without it, the browser refuses to send the session cookie cross-origin even with `credentials: 'include'` on the client.
|
How to store .format() print output to a var for reuse

I have a list of dictionaries which I want to display using Tkinter. So far I only managed to print the desired result. Example code:

```python
for x in list:
    for key, value in x.items():
        print("{}: {}".format(key, value))
```

```
key: value
key: value
key: value
```

The way it's printed is the exact way I want to display it in the application. How do I store this output as text?
|
Looks like you need:

```python
out = ""
for x in list:
    for key, value in x.items():
        out += "{}: {}\n".format(key, value)
print(out)
```
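The same idea with `str.join`, which avoids repeated string concatenation; the sample data here is hypothetical. The resulting string can then be shown in the app, e.g. via `tkinter.Label(root, text=out, justify="left")`:

```python
data = [{"name": "Alice", "age": 30}, {"city": "Paris"}]

lines = []
for d in data:
    for key, value in d.items():
        # Same "key: value" formatting as the print loop.
        lines.append("{}: {}".format(key, value))
out = "\n".join(lines)
print(out)
# name: Alice
# age: 30
# city: Paris
```

Collecting the lines in a list and joining once keeps the code linear-time even for long lists of dictionaries.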
|