| Unnamed: 0 (int64, 0 to 378k) | id (int64, 49.9k to 73.8M) | title (string, 15 to 150 chars) | question (string, 37 to 64.2k chars) | answer (string, 37 to 44.1k chars) | tags (string, 5 to 106 chars) | score (int64, -10 to 5.87k) |
|---|---|---|---|---|---|---|
375,600
| 69,861,033
|
Tensorflow object detection "Use fn_output_signature instead" warning
|
<p>I am following this tutorial to train my own models.
<a href="https://tensorflow-object-detection-api-tutorial.readthedocs.io/en/latest/" rel="nofollow noreferrer">https://tensorflow-object-detection-api-tutorial.readthedocs.io/en/latest/</a></p>
<p>I followed all the steps exactly as described in the tutorial, but when I start training it throws several warnings and then gets stuck after the last one. I have no GPU, so I skipped the "<a href="https://tensorflow-object-detection-api-tutorial.readthedocs.io/en/latest/install.html#gpu-support-optional" rel="nofollow noreferrer">GPU Support (Optional)</a>" section.</p>
<pre><code>(venv) C:\Users\femik\Desktop\productDetection\TensorFlow\workspace\training_demo>python model_main_tf2.py --model_dir=models/my_ssd_resnet50_v1_fpn --pipeline_config_path=models/my_ssd_resnet50_v1_fpn/pipeline.config
2021-11-06 07:58:33.841767: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library cudart64_110.dll
2021-11-06 07:58:41.135367: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'nvcuda.dll'; dlerror: nvcuda.dll not found
2021-11-06 07:58:41.135532: W tensorflow/stream_executor/cuda/cuda_driver.cc:326] failed call to cuInit: UNKNOWN ERROR (303)
2021-11-06 07:58:41.138487: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:169] retrieving CUDA diagnostic information for host: DESKTOP-Q5DQGU9
2021-11-06 07:58:41.138708: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:176] hostname: DESKTOP-Q5DQGU9
2021-11-06 07:58:41.139431: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operati
ons: AVX AVX2
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
WARNING:tensorflow:There are non-GPU devices in `tf.distribute.Strategy`, not using nccl allreduce.
W1106 07:58:41.130341 4076 cross_device_ops.py:1387] There are non-GPU devices in `tf.distribute.Strategy`, not using nccl allreduce.
WARNING:tensorflow:Collective ops is not configured at program startup. Some performance features may not be enabled.
W1106 07:58:41.130341 4076 mirrored_strategy.py:379] Collective ops is not configured at program startup. Some performance features may not be enabled.
INFO:tensorflow:Using MirroredStrategy with devices ('/job:localhost/replica:0/task:0/device:CPU:0',)
I1106 07:58:41.130341 4076 mirrored_strategy.py:369] Using MirroredStrategy with devices ('/job:localhost/replica:0/task:0/device:CPU:0',)
INFO:tensorflow:Maybe overwriting train_steps: None
I1106 07:58:41.145963 4076 config_util.py:552] Maybe overwriting train_steps: None
INFO:tensorflow:Maybe overwriting use_bfloat16: False
I1106 07:58:41.145963 4076 config_util.py:552] Maybe overwriting use_bfloat16: False
WARNING:tensorflow:From C:\Users\femik\Desktop\productDetection\venv\lib\site-packages\object_detection\model_lib_v2.py:557: StrategyBase.experimental_distribute_datasets_from_function (from tensorflow.python.distribute.distribute_lib)
is deprecated and will be removed in a future version.
Instructions for updating:
rename to distribute_datasets_from_function
W1106 07:58:41.224069 4076 deprecation.py:330] From C:\Users\femik\Desktop\productDetection\venv\lib\site-packages\object_detection\model_lib_v2.py:557: StrategyBase.experimental_distribute_datasets_from_function (from tensorflow.pyth
on.distribute.distribute_lib) is deprecated and will be removed in a future version.
Instructions for updating:
rename to distribute_datasets_from_function
INFO:tensorflow:Reading unweighted datasets: ['annotations/train.record']
I1106 07:58:41.224069 4076 dataset_builder.py:163] Reading unweighted datasets: ['annotations/train.record']
INFO:tensorflow:Reading record datasets for input file: ['annotations/train.record']
I1106 07:58:41.224069 4076 dataset_builder.py:80] Reading record datasets for input file: ['annotations/train.record']
INFO:tensorflow:Number of filenames to read: 1
I1106 07:58:41.224069 4076 dataset_builder.py:81] Number of filenames to read: 1
WARNING:tensorflow:num_readers has been reduced to 1 to match input file shards.
W1106 07:58:41.224069 4076 dataset_builder.py:87] num_readers has been reduced to 1 to match input file shards.
WARNING:tensorflow:From C:\Users\femik\Desktop\productDetection\venv\lib\site-packages\object_detection\builders\dataset_builder.py:101: parallel_interleave (from tensorflow.python.data.experimental.ops.interleave_ops) is deprecated an
d will be removed in a future version.
Instructions for updating:
Use `tf.data.Dataset.interleave(map_func, cycle_length, block_length, num_parallel_calls=tf.data.AUTOTUNE)` instead. If sloppy execution is desired, use `tf.data.Options.experimental_deterministic`.
W1106 07:58:41.224069 4076 deprecation.py:330] From C:\Users\femik\Desktop\productDetection\venv\lib\site-packages\object_detection\builders\dataset_builder.py:101: parallel_interleave (from tensorflow.python.data.experimental.ops.int
erleave_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.data.Dataset.interleave(map_func, cycle_length, block_length, num_parallel_calls=tf.data.AUTOTUNE)` instead. If sloppy execution is desired, use `tf.data.Options.experimental_deterministic`.
WARNING:tensorflow:From C:\Users\femik\Desktop\productDetection\venv\lib\site-packages\object_detection\builders\dataset_builder.py:236: DatasetV1.map_with_legacy_function (from tensorflow.python.data.ops.dataset_ops) is deprecated and
will be removed in a future version.
Instructions for updating:
Use `tf.data.Dataset.map()
W1106 07:58:41.255315 4076 deprecation.py:330] From C:\Users\femik\Desktop\productDetection\venv\lib\site-packages\object_detection\builders\dataset_builder.py:236: DatasetV1.map_with_legacy_function (from tensorflow.python.data.ops.d
ataset_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.data.Dataset.map()
WARNING:tensorflow:From C:\Users\femik\Desktop\productDetection\venv\lib\site-packages\tensorflow\python\util\dispatch.py:206: sparse_to_dense (from tensorflow.python.ops.sparse_ops) is deprecated and will be removed in a future versio
n.
Instructions for updating:
Create a `tf.sparse.SparseTensor` and use `tf.sparse.to_dense` instead.
W1106 07:58:47.147608 4076 deprecation.py:330] From C:\Users\femik\Desktop\productDetection\venv\lib\site-packages\tensorflow\python\util\dispatch.py:206: sparse_to_dense (from tensorflow.python.ops.sparse_ops) is deprecated and will
be removed in a future version.
Instructions for updating:
Create a `tf.sparse.SparseTensor` and use `tf.sparse.to_dense` instead.
WARNING:tensorflow:From C:\Users\femik\Desktop\productDetection\venv\lib\site-packages\tensorflow\python\util\dispatch.py:206: sample_distorted_bounding_box (from tensorflow.python.ops.image_ops_impl) is deprecated and will be removed
in a future version.
Instructions for updating:
`seed2` arg is deprecated.Use sample_distorted_bounding_box_v2 instead.
W1106 07:58:49.922046 4076 deprecation.py:330] From C:\Users\femik\Desktop\productDetection\venv\lib\site-packages\tensorflow\python\util\dispatch.py:206: sample_distorted_bounding_box (from tensorflow.python.ops.image_ops_impl) is de
precated and will be removed in a future version.
Instructions for updating:
`seed2` arg is deprecated.Use sample_distorted_bounding_box_v2 instead.
WARNING:tensorflow:From C:\Users\femik\Desktop\productDetection\venv\lib\site-packages\tensorflow\python\autograph\impl\api.py:464: to_float (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.cast` instead.
W1106 07:58:51.524998 4076 deprecation.py:330] From C:\Users\femik\Desktop\productDetection\venv\lib\site-packages\tensorflow\python\autograph\impl\api.py:464: to_float (from tensorflow.python.ops.math_ops) is deprecated and will be r
emoved in a future version.
Instructions for updating:
Use `tf.cast` instead.
2021-11-06 07:58:53.478807: I tensorflow/compiler/mlir/mlir_graph_optimization_pass.cc:176] None of the MLIR Optimization Passes are enabled (registered 2)
C:\Users\femik\Desktop\productDetection\venv\lib\site-packages\tensorflow\python\keras\backend.py:435: UserWarning: `tf.keras.backend.set_learning_phase` is deprecated and will be removed after 2020-10-11. To update it, simply pass a T
rue/False value to the `training` argument of the `__call__` method of your layer or model.
warnings.warn('`tf.keras.backend.set_learning_phase` is deprecated and '
WARNING:tensorflow:From C:\Users\femik\Desktop\productDetection\venv\lib\site-packages\tensorflow\python\util\deprecation.py:602: calling map_fn_v2 (from tensorflow.python.ops.map_fn) with dtype is deprecated and will be removed in a f
uture version.
Instructions for updating:
Use fn_output_signature instead
W1106 07:59:18.849742 14976 deprecation.py:528] From C:\Users\femik\Desktop\productDetection\venv\lib\site-packages\tensorflow\python\util\deprecation.py:602: calling map_fn_v2 (from tensorflow.python.ops.map_fn) with dtype is deprecat
ed and will be removed in a future version.
Instructions for updating:
Use fn_output_signature instead
</code></pre>
<p>When running <code>pip install --ignore-installed --upgrade tensorflow==2.5.0</code>, I got the following errors:</p>
<pre><code>ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
tensorflow-text 2.6.0 requires tensorflow<2.7,>=2.6.0, but you have tensorflow 2.5.0 which is incompatible.
tensorflow-metadata 1.4.0 requires absl-py<0.13,>=0.9, but you have absl-py 0.15.0 which is incompatible.
tensorflow-io 0.21.0 requires tensorflow<2.7.0,>=2.6.0, but you have tensorflow 2.5.0 which is incompatible.
</code></pre>
<p>So I downgraded the modules accordingly, but I am still getting the same issue.</p>
|
<ul>
<li>For TensorFlow 2.6.0,</li>
<li>in the file "pipeline.config",</li>
<li>reduce <code>batch_size</code> from 8 to 1-4 (so training fits in RAM).</li>
</ul>
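<p>A minimal sketch of the relevant block in <code>pipeline.config</code> (the surrounding fields depend on the model; only <code>batch_size</code> needs to change):</p>
<pre><code>train_config {
  batch_size: 2   # was 8; use 1-4 so training fits in CPU RAM
  ...
}
</code></pre>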
|
python|tensorflow|object-detection|tensorflow-datasets|object-detection-api
| 0
|
375,601
| 69,886,693
|
From multiple values per row of a pandas dataframe: get two columns with every relation of the values (to analyse the network with Networkx)
|
<p>I have a dataframe with names of persons in it. The persons work together on the same item.</p>
<pre><code>item names
a moriz, jon, cate
b jon, lenard
c cate, martin, leo, jil
</code></pre>
<ul>
<li>I would like to prepare the names for a network visualisation. I need to split the name cells into two columns so that every relation is shown, like this:</li>
</ul>
<pre><code>item person 1 person 2
a moriz jon
a moriz cate
a jon cate
b jon lenard
c cate martin
c cate leo
c cate jil
c jil martin
c jil leo
c martin leo
</code></pre>
<ul>
<li>I know how to split the name cell into multiple name cells for each item, but I don't know how to list them in pairs covering every relation per item.</li>
</ul>
|
<p>You could do something like this (<code>df</code> your dataframe):</p>
<pre><code>import pandas as pd
from itertools import combinations
df = pd.DataFrame(
{
'item': ['a', 'b', 'c'],
'names': ['moriz, jon, cate', 'jon, lenard', 'cate, martin, leo, jil']
}
)
df.names = df.names.str.split(", ").map(lambda l: list(combinations(l, 2)))
df = df.explode("names")
df[["person 1", "person 2"]] = df.names.str.join(",").str.split(",", expand=True)
df = df.drop(columns="names")
</code></pre>
<p>Result for the sample:</p>
<pre><code> item person 1 person 2
0 a moriz jon
0 a moriz cate
0 a jon cate
1 b jon lenard
2 c cate martin
2 c cate leo
2 c cate jil
2 c martin leo
2 c martin jil
2 c leo jil
</code></pre>
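<p>Since the goal is a network visualisation, the two resulting columns can be fed straight into Networkx as an edge list (a small sketch, assuming the <code>df</code> built above):</p>
<pre><code>import networkx as nx
import matplotlib.pyplot as plt

# every row becomes an edge between "person 1" and "person 2"
G = nx.from_pandas_edgelist(df, source="person 1", target="person 2")
nx.draw(G, with_labels=True)
plt.show()
</code></pre>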
|
python|pandas|dataframe|networkx
| 1
|
375,602
| 69,844,273
|
Broadcast across pandas MultiIndex level even if index values happen to agree
|
<p>This one has me stumped. I have two <code>pd.Series</code> <code>s</code> and <code>t</code> as follows:</p>
<pre><code>Common Level s
Foo a 1
b 2
Name: s, dtype: int64
</code></pre>
<pre><code>Common Level t
Foo A 10
B 20
Name: t, dtype: int64
</code></pre>
<p><code>pandas</code> lets me add these and broadcasts across the common level <code>'Common'</code></p>
<p>Input:</p>
<pre><code>s + t
</code></pre>
<p>Output:</p>
<pre><code>Common Level s Level t
Foo a A 11
B 21
b A 12
B 22
dtype: int64
</code></pre>
<p>Consider now another <code>pd.Series</code> <code>u</code> where the index <em>labels</em> happen to agree with those of <code>s</code></p>
<pre><code>Common Level u
Foo a 100
b 200
Name: u, dtype: int64
</code></pre>
<p>In other words, <code>(s.index.values == u.index.values).all()</code> returns <code>True</code>. Because of this, <code>pandas</code> no longer broadcasts.</p>
<p>Input:</p>
<pre><code>s + u
</code></pre>
<p>Output:</p>
<pre><code>Common Level s
Foo a 101
b 202
dtype: int64
</code></pre>
<p>even though <code>s.index.names</code> and <code>u.index.names</code> disagree.</p>
<p>Lastly, if the order is changed but not the labels, such as for <code>v</code>:</p>
<pre><code>Common Level v
Foo b 1000
a 2000
Name: v, dtype: int64
</code></pre>
<p>so that <code>s.index.values</code> and <code>v.index.values</code> no longer agree element-wise, broadcasting happens again.</p>
<p>Input:</p>
<pre><code>s + v
</code></pre>
<p>Output:</p>
<pre><code>Common Level s Level v
Foo a b 1001
a 2001
b b 1002
a 2002
dtype: int64
</code></pre>
<p><strong>My question</strong>: How can I add <code>s</code> and <code>u</code> such that <code>pandas</code> still broadcasts? (For my particular application, I am actually interested in elementwise-and <code>s & u</code>, not the sum <code>s + u</code>.)</p>
<hr />
<p><strong>Code</strong></p>
<pre><code>s = pd.Series([1, 2],
index=pd.MultiIndex.from_tuples(
[('Foo', 'a'), ('Foo', 'b')],
names=['Common', 'Level s']), name='s')
t = pd.Series([10, 20],
index=pd.MultiIndex.from_tuples(
[('Foo', 'A'), ('Foo', 'B')],
names=['Common', 'Level t']), name='t')
u = pd.Series([100, 200],
index=pd.MultiIndex.from_tuples(
[('Foo', 'a'), ('Foo', 'b')],
names=['Common', 'Level u']), name='u')
v = pd.Series([1000, 2000],
index=pd.MultiIndex.from_tuples(
[('Foo', 'b'), ('Foo', 'a')],
names=['Common', 'Level v']), name='v')
</code></pre>
|
<p>The method that is called to normalise the indexes of Series and DataFrames for broadcasting is <a href="https://pandas.pydata.org/docs/reference/api/pandas.Series.align.html" rel="nofollow noreferrer"><code>align</code></a>. This method is called internally to get the new indexes; we can see this with:</p>
<pre><code>left, right = s.align(t, join='outer', level=None, copy=False)
# left
Common Level s Level t
Foo a A 1
B 1
b A 2
B 2
Name: s, dtype: int64
# right
Common Level s Level t
Foo a A 10
B 20
b A 10
B 20
Name: t, dtype: int64
</code></pre>
<p>Notice that this call will produce the "non-broadcasted" values when the indexes are equal since outer join produces a single level:</p>
<pre><code>left, right = s.align(u, join='outer', level=None, copy=False)
# left
Common Level s
Foo a 1
b 2
Name: s, dtype: int64
# right
Common Level u
Foo a 100
b 200
Name: u, dtype: int64
</code></pre>
<hr />
<p>If we want to force the levels to generate we can use the branch from <a href="https://github.com/pandas-dev/pandas/blob/v1.3.4/pandas/core/generic.py#L8694-L8699" rel="nofollow noreferrer"><code>_align_series</code></a> for when indexes are non-equal:</p>
<blockquote>
<pre><code>join_index, lidx, ridx = self.index.join(
other.index, how=join, level=level, return_indexers=True
)
left = self._reindex_indexer(join_index, lidx, copy)
right = other._reindex_indexer(join_index, ridx, copy)
</code></pre>
</blockquote>
<p>We can use <a href="https://pandas.pydata.org/docs/reference/api/pandas.Index.join.html" rel="nofollow noreferrer"><code>Index.join</code></a> and <code>_reindex_indexer</code> to create the aligned Series:</p>
<pre><code>join_index, lidx, ridx = s.index.join(
u.index, how='outer', level=None, return_indexers=True
)
left = s._reindex_indexer(join_index, lidx, copy=False)
right = u._reindex_indexer(join_index, ridx, copy=False)
</code></pre>
<p>*Note: we are using a private method as there is no equivalent reindexer in the public API.</p>
<p>Now that we have aligned Series we can just do:</p>
<pre><code>left + right
</code></pre>
<p>To get the results:</p>
<pre><code>Common Level s Level u
Foo a a 101
b 201
b a 102
b 202
dtype: int64
</code></pre>
<hr />
<p>If we wanted to avoid using the private method, we could also <a href="https://pandas.pydata.org/docs/reference/api/pandas.Series.iloc.html" rel="nofollow noreferrer"><code>iloc</code></a> to select values using index locations then <a href="https://pandas.pydata.org/docs/reference/api/pandas.Series.set_axis.html" rel="nofollow noreferrer"><code>set_axis</code></a> to overwrite with the join index with appropriate labels:</p>
<pre><code>join_index, lidx, ridx = s.index.join(
u.index, how='outer', level=None, return_indexers=True
)
left = s.iloc[lidx].set_axis(join_index)
right = u.iloc[ridx].set_axis(join_index)
</code></pre>
<pre><code>left + right
</code></pre>
<pre><code>Common Level s Level u
Foo a a 101
b 201
b a 102
b 202
dtype: int64
</code></pre>
|
pandas|multi-index|array-broadcasting
| 1
|
375,603
| 69,989,219
|
Store values into keys with scrapy
|
<p>I want to extract information from a website like price and store that as values in a dictionary. However, I'm trying to learn scrapy so I'd like to know how to achieve this with it.</p>
<p>Here's how it would look with <code>requests</code> and <code>BeautifulSoup</code>:</p>
<pre><code>import numpy as np
import requests
import pandas as pd
from collections import defaultdict
from bs4 import BeautifulSoup

html = ['https://www.ebay.co.uk/b/Collectable-Card-Games-Accessories/2536/bn_2316999?LH_PrefLoc=2&mag=1&rt=nc&_pgn=1&_sop=16',
        'https://www.ebay.co.uk/b/Collectable-Card-Games-Accessories/2536/bn_2316999?LH_PrefLoc=2&mag=1&rt=nc&_pgn=2&_sop=16',
        'https://www.ebay.co.uk/b/Collectable-Card-Games-Accessories/2536/bn_2316999?LH_PrefLoc=2&mag=1&rt=nc&_pgn=3&_sop=16',
        'https://www.ebay.co.uk/b/Collectable-Card-Games-Accessories/2536/bn_2316999?LH_PrefLoc=2&mag=1&rt=nc&_pgn=4&_sop=16',
        'https://www.ebay.co.uk/b/Collectable-Card-Games-Accessories/2536/bn_2316999?LH_PrefLoc=2&mag=1&rt=nc&_pgn=5&_sop=16']

data = defaultdict(list)
for i in range(0, len(html)):
    r = requests.get(html[i])
    soup = BeautifulSoup(r.content, 'lxml')
    name = soup.select(".s-item__title")
    value = soup.select(".ITALIC")
    for n, v in zip(name, value):
        data["card"].append(n.text.strip())
        data["price"].append(v.text.strip())
</code></pre>
<p>Here's what I have tried with scrapy, but looking at the JSON output I do not get any values, just the links. How do I get output like the code above?</p>
<pre><code>import numpy as np
import pandas as pd
import scrapy
from scrapy.loader import ItemLoader
from scrapy.item import Field
from itemloaders.processors import TakeFirst
from scrapy.crawler import CrawlerProcess

html = np.array(['https://www.ebay.co.uk/b/Collectable-Card-Games-Accessories/2536/bn_2316999?LH_PrefLoc=2&mag=1&rt=nc&_pgn=1&_sop=16',
                 'https://www.ebay.co.uk/b/Collectable-Card-Games-Accessories/2536/bn_2316999?LH_PrefLoc=2&mag=1&rt=nc&_pgn=2&_sop=16',
                 'https://www.ebay.co.uk/b/Collectable-Card-Games-Accessories/2536/bn_2316999?LH_PrefLoc=2&mag=1&rt=nc&_pgn=3&_sop=16',
                 'https://www.ebay.co.uk/b/Collectable-Card-Games-Accessories/2536/bn_2316999?LH_PrefLoc=2&mag=1&rt=nc&_pgn=4&_sop=16',
                 'https://www.ebay.co.uk/b/Collectable-Card-Games-Accessories/2536/bn_2316999?LH_PrefLoc=2&mag=1&rt=nc&_pgn=5&_sop=16'],
                dtype=object)
url = pd.DataFrame(html, columns=['data'])

class StatisticsItem(scrapy.Item):
    statistics_div = Field(output_processor=TakeFirst())
    url = Field(output_processor=TakeFirst())

class StatisticsSpider(scrapy.Spider):
    name = 'statistics'
    start_urls = url.data.values

    def start_requests(self):
        for url in self.start_urls:
            yield scrapy.Request(
                url
            )

    def parse(self, response):
        table = response.xpath("//div[@class='s-item__price']").get()
        loader = ItemLoader(StatisticsItem())
        loader.add_value('values', table)
        loader.add_value('url', response.url)
        yield loader.load_item()

process = CrawlerProcess(
    settings={
        'FEED_URI': 'ebay_data.json',
        'FEED_FORMAT': 'jsonlines'
    }
)
process.crawl(StatisticsSpider)
process.start()
</code></pre>
|
<p>I set <code>custom_settings</code> to write to 'cards_info.json' in JSON format.</p>
<p>Inside <code>parse</code> I go through each card on the page (see the xpath), get the card's title and price, and yield them. Scrapy writes them into 'cards_info.json'.</p>
<pre class="lang-py prettyprint-override"><code>import scrapy
from scrapy.item import Field
from itemloaders.processors import TakeFirst

class StatisticsItem(scrapy.Item):
    statistics_div = Field(output_processor=TakeFirst())
    url = Field(output_processor=TakeFirst())

class StatisticsSpider(scrapy.Spider):
    name = 'statistics'
    start_urls = ['https://www.ebay.co.uk/b/Collectable-Card-Games-Accessories/2536/bn_2316999?LH_PrefLoc=2&mag=1&rt=nc&_pgn=1&_sop=16',
                  'https://www.ebay.co.uk/b/Collectable-Card-Games-Accessories/2536/bn_2316999?LH_PrefLoc=2&mag=1&rt=nc&_pgn=2&_sop=16',
                  'https://www.ebay.co.uk/b/Collectable-Card-Games-Accessories/2536/bn_2316999?LH_PrefLoc=2&mag=1&rt=nc&_pgn=3&_sop=16',
                  'https://www.ebay.co.uk/b/Collectable-Card-Games-Accessories/2536/bn_2316999?LH_PrefLoc=2&mag=1&rt=nc&_pgn=4&_sop=16',
                  'https://www.ebay.co.uk/b/Collectable-Card-Games-Accessories/2536/bn_2316999?LH_PrefLoc=2&mag=1&rt=nc&_pgn=5&_sop=16']
    custom_settings = {
        'FEED_FORMAT': 'json',
        'FEED_URI': 'cards_info.json'
    }

    def start_requests(self):
        for url in self.start_urls:
            yield scrapy.Request(
                url
            )

    def parse(self, response):
        all_cards = response.xpath('//div[@class="s-item__wrapper clearfix"]')
        for card in all_cards:
            name = card.xpath('.//h3/text()').get()
            price = card.xpath('.//span[@class="s-item__price"]//text()').get()
            # now do whatever you want, append to dictionary, yield as item.
            # example with yield:
            yield {
                'card': name,
                'price': price
            }
</code></pre>
<p>Output:</p>
<pre class="lang-py prettyprint-override"><code>[scrapy.core.scraper] DEBUG: Scraped from <200 https://www.ebay.co.uk/b/Collectable-Card-Games-Accessories/2536/bn_2316999?LH_PrefLoc=2&mag=1&rt=nc&_pgn=1&_sop=16>
{'card': 'Pokemon 1st Edition Shadowless Base Set 11 Blister Booster Pack Lot - DM To Buy!', 'price': '£93,805.84'}
[scrapy.core.scraper] DEBUG: Scraped from <200 https://www.ebay.co.uk/b/Collectable-Card-Games-Accessories/2536/bn_2316999?LH_PrefLoc=2&mag=1&rt=nc&_pgn=1&_sop=16>
{'card': 'Pokemon Team Rocket Complete Complete 83/82, German, 1. Edition', 'price': '£102,026.04'}
[scrapy.core.scraper] DEBUG: Scraped from <200 https://www.ebay.co.uk/b/Collectable-Card-Games-Accessories/2536/bn_2316999?LH_PrefLoc=2&mag=1&rt=nc&_pgn=1&_sop=16>
{'card': 'Yugioh E Hero Pit Boss 2013 World Championship Prize Card BGS 9.5 Gem Mint', 'price': '£100,000.00'}
...
...
</code></pre>
<p>cards_info.json:</p>
<pre class="lang-json prettyprint-override"><code>[
{"card": "1999 Pokemon Base Set Booster Box GREEN WING", "price": "\u00a340,000.00"},
{"card": "1996 MEDIA FACTORY POKEMON NO RARITY BASE SET CHARIZARD 006 BECKETT BGS MINT 9 ", "price": "\u00a339,999.99"},
{"card": "Yugioh - BGS8.5 Jump Festa Blue Eyes White Dragon -1999 - Limited - PSA", "price": "\u00a340,000.00"},
{"card": "PSA 8 CHARIZARD 1999 POKEMON 1ST EDITION THICK STAMP SHADOWLESS #4 HOLO NM-MINT", "price": "\u00a337,224.53"},
{"card": "PSA 9 MINT Pok\u00e9mon Play Promo 50000 PTS Gold Star Japanese Pokemon", "price": "\u00a338,261.06"},
...
...
]
</code></pre>
|
python|pandas|web-scraping|scrapy
| 1
|
375,604
| 69,965,208
|
Numpy where condition when met must modify the original value, if not original value must remain
|
<p>I have the following dataframe:</p>
<pre><code>Policy_id Value
A xyz
B abc
A pqr
C lmn
</code></pre>
<p>And I want to use <code>np.where()</code> such that whenever the <code>policy_id</code> is equal to <code>A</code> the corresponding value must be appended with a <code>*</code>.</p>
<pre><code>Policy_id Value
A xyz*
B abc
A pqr*
C lmn
</code></pre>
<p>How do I achieve this?</p>
|
<p>Try</p>
<pre><code>df['new'] = np.where(df['Policy_id'].eq('A'),df['Value']+'*',df['Value'])
</code></pre>
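<p>If the goal is to overwrite the existing column rather than add a new one, the same expression can be assigned back to <code>Value</code> (a short, self-contained sketch using the sample frame from the question):</p>
<pre><code>import numpy as np
import pandas as pd

df = pd.DataFrame({'Policy_id': ['A', 'B', 'A', 'C'],
                   'Value': ['xyz', 'abc', 'pqr', 'lmn']})
df['Value'] = np.where(df['Policy_id'].eq('A'), df['Value'] + '*', df['Value'])
print(df)
#   Policy_id Value
# 0         A  xyz*
# 1         B   abc
# 2         A  pqr*
# 3         C   lmn
</code></pre>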
|
python|pandas|dataframe|numpy
| 0
|
375,605
| 69,759,141
|
How to reshape 3d numpy table?
|
<p>I have a 3d numpy table with shape=(2,3,4) like below:</p>
<pre><code>a = np.array([[[1., 2., 3., 4.],
[1., 2., 3., 4.],
[1., 2., 3., 4.]],
[[5., 6., 7., 8.],
[5., 6., 7., 8.],
[5., 6., 7., 8.]]])
</code></pre>
<p>I want to reshape this so that the columns in each dimension are stacked into a new column of a 2D matrix:</p>
<pre><code>1 5
1 5
1 5
2 6
2 6
2 6
3 7
3 7
3 7
4 8
4 8
4 8
</code></pre>
|
<p>Here you go:</p>
<pre><code>res = a.T.reshape((-1,2))
</code></pre>
<p>Output:</p>
<pre><code>array([[1., 5.],
[1., 5.],
[1., 5.],
[2., 6.],
[2., 6.],
[2., 6.],
[3., 7.],
[3., 7.],
[3., 7.],
[4., 8.],
[4., 8.],
[4., 8.]])
</code></pre>
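<p>Why this works (a short check on the array from the question): the transpose reverses the axes, so the shape goes from <code>(2, 3, 4)</code> to <code>(4, 3, 2)</code>, and reading that array row by row in C order gives exactly the stacked columns, which <code>reshape((-1, 2))</code> lays out as a 2-column matrix.</p>
<pre><code>a.T.shape                   # (4, 3, 2): axes reversed
res = a.T.reshape((-1, 2))  # 12 rows, 2 columns
res.shape                   # (12, 2)
</code></pre>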
|
python|numpy
| 4
|
375,606
| 69,848,969
|
How to build NumPy from source linked to Apple Accelerate framework?
|
<p>It is my understanding that NumPy dropped support for using the Accelerate BLAS and LAPACK at version 1.20.0. According to the release notes for NumPy 1.21.1, these bugs have been resolved and building NumPy from source using the Accelerate framework on MacOS >= 11.3 is now possible again: <a href="https://numpy.org/doc/stable/release/1.21.0-notes.html" rel="nofollow noreferrer">https://numpy.org/doc/stable/release/1.21.0-notes.html</a>, but I cannot find any documentation on how to do so. This seems like it would be an interesting thing to try and do because the Accelerate framework is supposed to be highly-optimized for M-series processors. I imagine the process is something like this:</p>
<ol>
<li>Download numpy source code folder and navigate to this folder.</li>
<li>Make a <code>site.cfg</code> file that looks something like:</li>
</ol>
<pre><code>[DEFAULT]
library_dirs = /some/directory/
include_dirs = /some/other/directory/
[accelerate]
libraries = Accelerate, vecLib
</code></pre>
<ol start="3">
<li>Run <code>python setup.py build</code></li>
</ol>
<p>The problem is I do not know 1. what the variables <code>library_dirs</code> and <code>include_dirs</code> should be so that NumPy knows to use Accelerate BLAS and LAPACK and 2. if there are any other additional steps that need to be taken. If anyone knows how to do this or can provide any insight, it would be greatly appreciated.</p>
|
<p>No, it doesn't have to be that complicated. I used these two commands and was able to install NumPy with Apple Accelerate on an M1 Mac:</p>
<pre><code>pip install cython pybind11
pip install --no-binary :all: --no-use-pep517 numpy
</code></pre>
<p>Reference: <a href="https://stackoverflow.com/questions/65745683/how-to-install-scipy-on-apple-silicon-arm-m1/66536896#66536896">How to install SciPy on Apple Silicon (ARM / M1)</a></p>
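<p>To confirm that the resulting build is actually linked against Accelerate, one quick check (not part of the original answer) is to inspect NumPy's build configuration:</p>
<pre><code>import numpy as np

np.show_config()  # the BLAS/LAPACK sections should mention accelerate / vecLib
</code></pre>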
|
python|numpy|build|apple-m1|accelerate-framework
| 2
|
375,607
| 69,779,690
|
Unable to install TensorFlow with miniconda on macOS Monterey
|
<p>I tried to install tensorflow following this issue: <a href="https://github.com/apple/tensorflow_macos/issues/153" rel="nofollow noreferrer">https://github.com/apple/tensorflow_macos/issues/153</a></p>
<p>However, on M1 Monterey the wheel does not work and shows the following error.</p>
<pre><code>pip install --upgrade --force --no-dependencies https://github.com/apple/tensorflow_macos/releases/download/v0.1alpha3/tensorflow_macos-0.1a3-cp38-cp38-macosx_11_0_arm64.whl https://github.com/apple/tensorflow_macos/releases/download/v0.1alpha3/tensorflow_addons_macos-0.1a3-cp38-cp38-macosx_11_0_arm64.whl
ERROR: tensorflow_macos-0.1a3-cp38-cp38-macosx_11_0_arm64.whl is not a supported wheel on this platform.
</code></pre>
<p>I am looking for an updated wheel.</p>
|
<p>Please follow the steps below to install TensorFlow successfully:</p>
<ol>
<li><p>Download and install <a href="https://www.anaconda.com/products/individual" rel="nofollow noreferrer">Anaconda</a> or the smaller <a href="https://docs.conda.io/en/latest/miniconda.html" rel="nofollow noreferrer">Miniconda</a>.</p>
</li>
<li><p>On macOS or Linux, open a terminal window and use the default bash shell.</p>
</li>
<li><p>Choose a name for your TensorFlow environment, such as “tf”.</p>
</li>
</ol>
<p>To install the current release of CPU-only TensorFlow, recommended for beginners:</p>
<pre><code>conda create -n tf tensorflow
conda activate tf
</code></pre>
<p>Or, to install the current GPU release of TensorFlow:</p>
<pre><code>conda create -n tf-gpu tensorflow-gpu
conda activate tf-gpu
</code></pre>
<p>For more details, please check <a href="https://docs.anaconda.com/anaconda/user-guide/tasks/tensorflow/" rel="nofollow noreferrer">this</a> link.</p>
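<p>Once the environment is active, a quick sanity check (a minimal sketch) confirms the install and shows which devices TensorFlow can see:</p>
<pre><code>import tensorflow as tf

print(tf.__version__)
print(tf.config.list_physical_devices())
</code></pre>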
|
macos|tensorflow|miniconda|macos-monterey
| 0
|
375,608
| 69,880,250
|
Get first list element in apply function Pandas
|
<p>I have a function <code>preprocess_names</code> that splits an input variable:</p>
<pre><code>def preprocess_names(name):
    #SOME PREPROCESSING
    return name.split('')
</code></pre>
<p>Also, I have a Pandas DataFrame <code>df</code> to which I want to apply this function.
In my case, the output of <code>preprocess_names</code> has length 3 and I want to use each value in a new column of <code>df</code>.</p>
<p>So, for the first new column, I want to use the first element from the output list. If I used this function on a single string, I would use <code>preprocess_names(name)[0]</code> to get the correct value. How can I do something similar while applying the method to <code>df</code>?</p>
<p>The code below raises <code>ValueError</code>, but is closest to what I would like to do.</p>
<pre><code>df['runs'] = df['name'].apply(preprocess_names)[0]
</code></pre>
|
<p>Do you want:</p>
<pre><code>df['runs'] = df['name'].apply(preprocess_names).str[0]
</code></pre>
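<p>If all three parts are needed at once, one option (a sketch, assuming every name splits into exactly three elements; the extra column names are placeholders) is to expand the lists into separate columns:</p>
<pre><code>import pandas as pd

parts = df['name'].apply(preprocess_names)          # Series of 3-element lists
df[['runs', 'part2', 'part3']] = pd.DataFrame(parts.tolist(), index=df.index)
</code></pre>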
|
python|pandas|dataframe|apply
| 1
|
375,609
| 69,823,422
|
Create 1 row dataframe from dictionary, with three columns and attach variable to first column
|
<p>I have a dictionary with a set of data as follows:</p>
<pre><code>fruit_dict = {'apples': 12.0, 'pears': 14.0, 'oranges': 5.0, 'lemons': 2.0}
</code></pre>
<p>I would like to create a dataframe of these items with the title of each fruit as a column as well as having one DATE column, like so (capitalised column titles needed):</p>
<pre><code> DATE APPLES PEARS ORANGES LEMONS
2017-09-01 12.0 14.0 5.0 2.0
</code></pre>
<p>I have created the necessary variable for the date as follows:</p>
<pre><code>now = datetime.datetime.now()
today = now.strftime("%d/%m/%Y")
print(today)
</code></pre>
<p>However when I try and create a dataframe from the dictionary, I end up with:</p>
<pre><code>apples 0 12.0
pears 0 14.0
oranges 0 5.0
lemons 0 2.0
</code></pre>
<p>The code I try to use to create said frame is:</p>
<pre><code>fruit_price = pd.DataFrame.from_dict(fruit_dict, orient='index').stack()
fruit_price.columns = ['DATE','APPLES', 'PEARS', 'ORANGES', 'LEMONS']
print(fruit_price)
</code></pre>
<p>If you have any advice I would greatly appreciate it.</p>
|
<p>First merge the dictionaries to include the <code>DATE</code> column, then pass the result as a one-element <code>list</code> to the <code>DataFrame</code> constructor:</p>
<pre><code>d = {**{'date': today}, **fruit_dict}
fruit_price = pd.DataFrame([d]).rename(columns=lambda x: x.upper())
</code></pre>
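<p>A minimal end-to-end sketch (reusing <code>fruit_dict</code> and <code>today</code> from the question; <code>{'date': today, **fruit_dict}</code> is just a shorter spelling of the same merge):</p>
<pre><code>import datetime
import pandas as pd

fruit_dict = {'apples': 12.0, 'pears': 14.0, 'oranges': 5.0, 'lemons': 2.0}
today = datetime.datetime.now().strftime("%d/%m/%Y")

d = {'date': today, **fruit_dict}
fruit_price = pd.DataFrame([d]).rename(columns=str.upper)
print(fruit_price)  # one row with columns DATE, APPLES, PEARS, ORANGES, LEMONS
</code></pre>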
|
python|pandas|dataframe|dictionary
| 2
|
375,610
| 69,990,800
|
Why is TensorFlow model reporting incorrect high confidence level for predictions?
|
<p>I wrote this function that takes in an image and generates a prediction. The level of confidence for the prediction that the function reports is greater than 100% many times. Sometimes the prediction is correct and reports a high level of confidence. Sometimes it is incorrect and still reports a high level of confidence. Can you please help me identify the error in my level of confidence code?</p>
<p>Final line of model structure:</p>
<pre><code>outputs = tf.keras.layers.Dense(3)(x)
</code></pre>
|
<p>If you want your output to be between 0 and 1, you should use either a <code>'sigmoid'</code> or <code>'softmax'</code> activation in your last layer:</p>
<pre><code>outputs = tf.keras.layers.Dense(3, activation='softmax')(x)
</code></pre>
<p>Careful, however, because softmax output can't really be interpreted as probability.</p>
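<p>Alternatively, the last layer can stay linear and the logits can be normalised at prediction time (a minimal sketch; <code>images</code> stands for whatever batch you pass to the model):</p>
<pre><code>import tensorflow as tf

logits = model(images, training=False)
probs = tf.nn.softmax(logits, axis=-1)      # each row now sums to 1
confidence = tf.reduce_max(probs, axis=-1)  # per-image "confidence" score
</code></pre>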
|
python|tensorflow|machine-learning|keras
| 1
|
375,611
| 69,972,526
|
Fastest way to load_model for inference in Tensorflow Keras
|
<p>I’m trying to quickly load a model from disk to make predictions in a REST API. The <em>tf.keras.models.load_model</em> method takes ~1s to load so it’s too slow for what I’m trying to do. Compile flag is set to false.</p>
<p>What is the fastest way to load a model from disk for inference only in Tensorflow/Keras?</p>
<p>Is there any way to persist the model in memory between requests?</p>
<p>I tried caching but pickle deserialisation is very expensive and adds ~1.2s. I suspect the built-in Keras load model does some sort of serialisation too, which seems to be the killer.</p>
<p>PD: I'm aware of TFX but feels like an overkill as I've already set up a REST API. Predictions are fast, just need to quickly load the model from disk or persist in memory between requests.</p>
<p>Thanks in advance,
Joan</p>
|
<p>Doink! I had a bit of a brain fart moment just there so in case you have it too, here is a solution that does the job.</p>
<p>Just load the model once when you start the server so all requests can use it.</p>
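<p>A minimal sketch of that idea with Flask (the question does not say which framework serves the REST API, so Flask, the model path and the request format here are assumptions):</p>
<pre><code>import numpy as np
import tensorflow as tf
from flask import Flask, jsonify, request

app = Flask(__name__)
# Loaded once when the server process starts, then reused by every request.
model = tf.keras.models.load_model("path/to/model", compile=False)

@app.route("/predict", methods=["POST"])
def predict():
    features = np.array(request.get_json()["features"])
    preds = model.predict(features).tolist()
    return jsonify(predictions=preds)

if __name__ == "__main__":
    app.run()
</code></pre>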
|
tensorflow|keras|tensorflow2.0
| 0
|
375,612
| 69,784,678
|
Loading model from saved checkpoints in keras
|
<p>I am using the original DCGAN MNIST code (Keras) for my project. My task is to generate an array and then calculate some observables from it. I am saving the model after each epoch so that I can find which epoch gives the best observables. I trained for 50 epochs, so I have 50 saved checkpoints. Now I want to generate an array (with the generator) from some intermediate saved checkpoint; how should I load it?
The code that I used for saving checkpoints is as follows:</p>
<pre><code>
checkpoint_dir = "./training_checkpoints"
checkpoint_prefix = os.path.join(checkpoint_dir, "ckpt")
checkpoint = tf.train.Checkpoint(generator_optimizer=generator_optimizer,
discriminator_optimizer=discriminator_optimizer,
generator=generator,
discriminator=discriminator)
</code></pre>
<p>It saves two types of files: ckpt-1.data-00000-of-00001 and ckpt-1.index.</p>
<p>How do I generate that array from it? (Note: the 'array' I want is analogous to the pixel array generated in the MNIST case.)</p>
|
<p>You can use <code>tf.train.CheckpointManager</code> to load your latest checkpoint or whatever checkpoint you like and then generate some images with your <code>generator</code> model based on random noise:</p>
<pre class="lang-py prettyprint-override"><code>import os
import tensorflow as tf

checkpoint_dir = "./training_checkpoints"
checkpoint_prefix = os.path.join(checkpoint_dir, "ckpt")
checkpoint = tf.train.Checkpoint(generator_optimizer=generator_optimizer,
                                 discriminator_optimizer=discriminator_optimizer,
                                 generator=generator,
                                 discriminator=discriminator)
ckpt_manager = tf.train.CheckpointManager(checkpoint, checkpoint_dir, max_to_keep=5)
if ckpt_manager.latest_checkpoint:
    checkpoint.restore(ckpt_manager.latest_checkpoint)
    # You can also access previous checkpoints like this: ckpt_manager.checkpoints[3]
    print('Latest checkpoint restored!!')

batch_size = 8
latent_dim = 32
noise = tf.random.normal([batch_size, latent_dim])
generated_images = generator(noise, training=False)
# Plot and/or save your images.
</code></pre>
|
python|tensorflow|keras|generative-adversarial-network|dcgan
| 1
|
375,613
| 69,975,400
|
Pytorch CNN: Expected input to have 1 channel but got 60000 channels instead
|
<p>While implementing a NN for Fashion MNIST dataset, I'm getting the following error:</p>
<pre><code>RuntimeError: Given groups=1, weight of size [6, 1, 5, 5], expected input[1, 60000, 28, 28] to have 1 channels, but got 60000 channels instead
</code></pre>
<p>I'm inferring that 60000 is the length of my entire dataset, but I am not sure why the algorithm is giving this error. Can someone help me fix this, please?</p>
<p>My dataset:</p>
<pre><code>(X_train, y_train), (X_test, y_test) = fashion_mnist.load_data()
train_data = []
test_data = []
train_data.append([X_train, y_train])
test_data.append([X_test, y_test])
trainloader = torch.utils.data.DataLoader(train_data, shuffle=True, batch_size=100)
testloader = torch.utils.data.DataLoader(test_data, shuffle=True, batch_size=100)
</code></pre>
<p>I'm getting the error in the following order (as per the stack trace):</p>
<pre><code> 8 #making predictions
----> 9 y_pred = model(images)
32 #first hidden layer
---> 33 x = self.conv1(x)
</code></pre>
<p><strong>Update 1</strong></p>
<p>Added the line:</p>
<pre><code>images = images.transpose(0, 1)
</code></pre>
<p>to transpose the images, as pointed out by Ivan, but now I am getting the error:</p>
<pre><code>RuntimeError: expected scalar type Byte but found Float
</code></pre>
|
<p>Your input is shaped <code>(1, 60000, 28, 28)</code>, while it should be shaped <code>(60000, 1, 28, 28)</code>. You can fix this by transposing the first two axes:</p>
<pre><code>>>> x.transpose(0, 1)
</code></pre>
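<p>Regarding the follow-up error in the update: the arrays returned by <code>fashion_mnist.load_data()</code> are <code>uint8</code>, so after transposing they usually also need to be cast to float before the convolution (an additional assumption, not part of the original answer):</p>
<pre><code>images = images.transpose(0, 1).float()  # shape (60000, 1, 28, 28), cast uint8 -> float32
</code></pre>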
|
python|neural-network|pytorch|conv-neural-network|mnist
| 2
|
375,614
| 69,774,038
|
Error: index 9 is out of bounds for axis 0 with size 9
|
<p>I am attempting to write a Lagrange interpolation function; however, when I run it I get the error <em>index 9 is out of bounds for axis 0 with size 9</em>. Why am I receiving this error and how can I fix it so the interpolation works?</p>
<pre><code>import numpy as np
b = np.arange(3, 12)
y = np.arange(9)
from sympy import Symbol
t = Symbol('t')
d = len(b)

def interpolation(x, z):
    if len(x) != len(z):
        print("Error: the length of x and z is different")
    else:
        L = 0
        for i in range(d+1):
            p = 1
            for j in range(d+1):
                if j != i:
                    p *= (t-x[j]/(x[i] - x[j]))
            L += z[i]*p

print(interpolation(b, y))
</code></pre>
|
<p>Because the first index is zero, you can only go up to index 8; index 9 is then out of bounds. Your 9 indices are 0, 1, 2, 3, 4, 5, 6, 7, 8.<br />
So you should not loop through <code>range(d + 1)</code>. Use only <code>range(d)</code>.</p>
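<p>For reference, a corrected sketch of the whole function. Besides looping over <code>range(d)</code>, it also parenthesises the basis term as <code>(t - x[j]) / (x[i] - x[j])</code> and returns <code>L</code>, two further issues in the posted code that this answer does not otherwise address:</p>
<pre><code>import numpy as np
from sympy import Symbol, expand

def interpolation(x, z):
    if len(x) != len(z):
        print("Error: the length of x and z is different")
        return None
    t = Symbol('t')
    L = 0
    for i in range(len(x)):                      # range(d), not range(d + 1)
        p = 1
        for j in range(len(x)):
            if j != i:
                p *= (t - x[j]) / (x[i] - x[j])  # full basis term in parentheses
        L += z[i] * p
    return expand(L)

b = np.arange(3, 12)
y = np.arange(9)
print(interpolation(b, y))
</code></pre>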
|
python|function|numpy|interpolation
| 2
|
375,615
| 69,982,638
|
Count cumulative true Value
|
<p>I have the following column with <code>True</code> and <code>False</code> boolean values. I want to create a new column that performs a cumulative count of the <code>True</code> values and resets the count whenever the value is <code>False</code>, like this:</p>
<pre><code> bool count
0 False 0
1 True 1
2 True 2
3 True 3
4 False 0
5 True 1
6 True 2
7 False 0
8 False 0
</code></pre>
|
<p>Yes, this can be done, using a series of steps:</p>
<pre><code>df['count'] = df.groupby(df['bool'].astype(int).diff().ne(0).cumsum())['bool'].cumsum()
</code></pre>
<p>Output:</p>
<pre><code>>>> df
bool count
0 False 0
1 True 1
2 True 2
3 True 3
4 False 0
5 True 1
6 True 2
7 False 0
8 False 0
</code></pre>
<p>Explanation:</p>
<p>This code creates separate groups for all consecutive true values (1's) coming before a false value (0), then, treating the trues as 1's and the falses as 0's, computes the cumulative sum for each group, then concatenates the results together.</p>
<ol>
<li><code>df.groupby</code> -
<ol>
<li><code>df['bool'].astype(int)</code> - Takes each value of <code>bool</code>, converts it to an int (true -> 1, false -> 0),</li>
<li><code>.diff()</code> - For each integer value, computes the difference between it and the previous value (so if the prev val was False and this is True, 1 (<code>1 - 0</code>); if prev was True and this True, 0 (<code>1 - 1</code>); etc.)</li>
<li><code>.ne(0)</code> - Converts all values that are <em>not equal to 0</em> to true, and zeros to false (because <code>(0 != 0) == False</code>)</li>
<li><code>.cumsum()</code> - Calculates cumulative sum for true (1) values. This way, all the trues before any false (0) get their own unique number, which is returned to the <code>groupby()</code> call, thus grouping separately each group of trues before a false</li>
</ol>
</li>
<li><code>['bool'].cumsum()</code> - From each group of consecutive true values (1), get the cumulative sum those 1s.</li>
</ol>
|
python|pandas|dataframe|boolean|pandas-groupby
| 1
|
375,616
| 69,747,667
|
Subset df on specific timestamps and previous seconds - python
|
<p>I've got a df containing timestamps and separate values. The timestamps are recorded in ms (10 rows per second). I want to subset specific timepoints <em>plus</em> the previous rows within that second.</p>
<p>Using the code below, the timestamps are returned. I then subtract a second from each and concat back to the original df. However, I'm hoping to include <em>all</em> timepoints within that one second only, then skip to the next timestamp and all the timepoints within its second.</p>
<pre><code>df = pd.DataFrame({
'Time' : ['2021-03-20 09:27:28.400','2021-03-20 09:29:15.200','2021-03-20 09:30:38.200'],
'Label' : ['A','B','A'],
})
df['Time'] = pd.to_datetime(df['Time'])
df_prev = df.copy()
df_prev['Time'] = df_prev['Time'] - pd.Timedelta('0.9sec')
df_prev = df_prev[['Time']]
df_out = pd.concat([df, df_prev]).sort_values(by = 'Time').reset_index(drop = True)
df_out = (df_out.set_index(['Time', df_out.groupby('Time').cumcount()])
.unstack()
.asfreq('0.1S', method = 'pad')
.stack(dropna = False)
.reset_index(level = 1, drop = True)
.reset_index()
)
</code></pre>
<p>Intended output:</p>
<pre><code> Time Label
1 2021-03-20 09:27:27.500 NaN
2 2021-03-20 09:27:27.600 NaN
3 2021-03-20 09:27:27.700 NaN
4 2021-03-20 09:27:27.800 NaN
5 2021-03-20 09:27:27.900 NaN
6 2021-03-20 09:27:28.000 NaN
7 2021-03-20 09:27:28.100 NaN
8 2021-03-20 09:27:28.200 NaN
9 2021-03-20 09:27:28.300 NaN
10 2021-03-20 09:27:28.400 A
11 2021-03-20 09:29:14.300 NaN
12 2021-03-20 09:29:14.400 NaN
13 2021-03-20 09:29:14.500 NaN
14 2021-03-20 09:29:14.600 NaN
15 2021-03-20 09:29:14.700 NaN
16 2021-03-20 09:29:14.800 NaN
17 2021-03-20 09:29:14.900 NaN
18 2021-03-20 09:29:15.000 NaN
19 2021-03-20 09:29:15.100 NaN
20 2021-03-20 09:29:15.200 B
21 2021-03-20 09:30:37.300 NaN
22 2021-03-20 09:30:37.400 NaN
23 2021-03-20 09:30:37.500 NaN
24 2021-03-20 09:30:37.600 NaN
25 2021-03-20 09:30:37.700 NaN
26 2021-03-20 09:30:37.800 NaN
27 2021-03-20 09:30:37.900 NaN
28 2021-03-20 09:30:38.000 NaN
29 2021-03-20 09:30:38.100 NaN
30 2021-03-20 09:30:38.200 A
</code></pre>
|
<p>One way is to build a list of dates and do an outer merge with the original <code>df</code>:</p>
<pre class="lang-py prettyprint-override"><code>prev = df.Time - pd.Timedelta('900ms')
# build new dates
new_values = pd.concat(pd.date_range(start, end,
periods=10,
name = 'Time').to_series(index=None)
for start, end in zip(prev, df.Time))
new_values.index = range(len(new_values))
df.merge(new_values, on='Time', how='outer', sort = True)
Out[286]:
Time Label
0 2021-03-20 09:27:27.500 NaN
1 2021-03-20 09:27:27.600 NaN
2 2021-03-20 09:27:27.700 NaN
3 2021-03-20 09:27:27.800 NaN
4 2021-03-20 09:27:27.900 NaN
5 2021-03-20 09:27:28.000 NaN
6 2021-03-20 09:27:28.100 NaN
7 2021-03-20 09:27:28.200 NaN
8 2021-03-20 09:27:28.300 NaN
9 2021-03-20 09:27:28.400 A
10 2021-03-20 09:29:14.300 NaN
11 2021-03-20 09:29:14.400 NaN
12 2021-03-20 09:29:14.500 NaN
13 2021-03-20 09:29:14.600 NaN
14 2021-03-20 09:29:14.700 NaN
15 2021-03-20 09:29:14.800 NaN
16 2021-03-20 09:29:14.900 NaN
17 2021-03-20 09:29:15.000 NaN
18 2021-03-20 09:29:15.100 NaN
19 2021-03-20 09:29:15.200 B
20 2021-03-20 09:30:37.300 NaN
21 2021-03-20 09:30:37.400 NaN
22 2021-03-20 09:30:37.500 NaN
23 2021-03-20 09:30:37.600 NaN
24 2021-03-20 09:30:37.700 NaN
25 2021-03-20 09:30:37.800 NaN
26 2021-03-20 09:30:37.900 NaN
27 2021-03-20 09:30:38.000 NaN
28 2021-03-20 09:30:38.100 NaN
29 2021-03-20 09:30:38.200 A
</code></pre>
|
python|pandas|datetime|timedelta
| 2
|
375,617
| 69,816,464
|
Pandas pivoting/stacking/reshaping from string in rows
|
<p>I accepted the answer from Sammy as it did solve the original post. I am editing to include further complexity encountered when using the solution: some of the values themselves have spaces in them, so the regex breaks on these as well. I have included an example of this change in key1=value 11.</p>
<p>This data seems designed to be analytics-unfriendly.</p>
<p>I would like to convert a dataset in the form of:</p>
<pre><code>pd.Series(["key1=value1 key2=value2 key3=value3", "key1=value 11 key2=value22 key3=value33", "key1=value111,key2=value222,key3=value333"])
#0 key1=value1 key2=value2 key3=value3
#1 key1=value 11 key2=value22 key3=value33
#2 key1=value111,key2=value222,key3=value333
#dtype: object
</code></pre>
<p>With the expected output:</p>
<pre><code>pd.DataFrame.from_dict({"key1":["value1", "value 11", "value111"], "key2":["value2", "value22", "value222"], "key3":["value3", "value33", "value333"]})
# key1 key2 key3
#0 value1 value2 value3
#1 value 11 value22 value33
#2 value111 value222 value333
</code></pre>
<p>The challenge of course is that both the variable names and values have to be parsed from the string. I would also like to keep the index unchanged.</p>
|
<p>You could do the entire transformation with python, which should be faster and easier. Given an input Series <code>s</code>:</p>
<pre class="lang-py prettyprint-override"><code>import re
pd.DataFrame([dict(e.split('=') for e in re.split("[\s,]", ent)) for ent in s])
key1 key2 key3
0 value1 value2 value3
1 value11 value22 value33
2 value111 value222 value333
</code></pre>
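<p>For the edited case where a value itself contains a space (<code>key1=value 11</code>), one option (assuming keys never contain spaces and are always followed by <code>=</code>) is to split only on separators that are followed by another <code>key=</code> token:</p>
<pre><code>import re
import pandas as pd

s = pd.Series(["key1=value1 key2=value2 key3=value3",
               "key1=value 11 key2=value22 key3=value33",
               "key1=value111,key2=value222,key3=value333"])

df = pd.DataFrame([dict(e.split("=", 1) for e in re.split(r"[\s,]+(?=\w+=)", ent))
                   for ent in s])
#        key1      key2      key3
# 0    value1    value2    value3
# 1  value 11   value22   value33
# 2  value111  value222  value333
</code></pre>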
|
pandas
| 1
|
375,618
| 69,888,535
|
How to move files from a dataframe into separate folders?
|
<p>I am working with a COVID dataset right now and I have loaded all the images into a dataframe. I have labelled COVID-positive images as 1 and normal images as 0. I want to separate the data into two folders, namely 1 (<em>containing the COVID-positive images</em>) and 0 (<em>containing the normal images</em>). <a href="https://i.stack.imgur.com/zLa5O.png" rel="nofollow noreferrer">Picture of the dataframe</a></p>
<p>How can I do the same in google colab?</p>
|
<p>Just go through the dataframe element by element and save to a folder based on the flag:</p>
<pre><code>for file, flag in zip(df['filename'], df['categories']):
    if flag == 0:
        pass  # save file to first directory
    elif flag == 1:
        pass  # save file to second directory
</code></pre>
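<p>A more complete sketch with <code>shutil</code> (the folder names <code>0</code> and <code>1</code> follow the question and the column names follow the loop above; adjust the paths to your Colab/Drive layout):</p>
<pre><code>import os
import shutil

os.makedirs('0', exist_ok=True)
os.makedirs('1', exist_ok=True)

for file, flag in zip(df['filename'], df['categories']):
    # copy each image into the folder named after its label (0 or 1)
    shutil.copy(file, os.path.join(str(flag), os.path.basename(file)))
</code></pre>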
|
python|pandas|dataframe
| 0
|
375,619
| 69,706,273
|
replace substrings in pandas columns while skipping rows with value: None
|
<p>I stole this from here <a href="https://stackoverflow.com/questions/57200908/remove-replace-columns-values-based-on-another-columns-using-pandas?noredirect=1&lq=1">Remove/replace columns values based on another columns using pandas</a></p>
<pre><code>[a.replace(b,'') for a,b in zip(df1['asker'], df1['party']) if a != None]
</code></pre>
<p>I added <code>if a != None</code> because it always threw error: <strong>AttributeError: 'NoneType' object has no attribute 'replace'</strong></p>
<p>Here are other solutions to the same problem of replacing the substring <code>df1['party']</code> within the column <code>asker</code>:</p>
<pre><code>df1['new_column'] = df1['asker'].replace(to_replace=r'\b'+df1['party']+r'\b', value='',regex=True)
df1['asker'] = df1.apply(lambda x: x['asker'].replace(x['party'], ''), axis = 1)
</code></pre>
<p>None of them worked once I added the exception for <code>None</code> values.</p>
<p>Example of the <code>df1</code> column <code>party</code>:</p>
<pre><code>[QQQ,
None,
RRR-Fraktion]
</code></pre>
<p>Example of the <code>df1</code> column <code>asker</code>:</p>
<pre><code>[Konrad Munch QQQ,
None,
Heiko Baer RRR-Fraktion]
</code></pre>
|
<p>Use:</p>
<pre><code>[a.replace(b,'') if (a != None) and (b != None)
else a
for a,b in zip(df1['asker'], df1['party'])]
</code></pre>
<p>If need test <code>NaN</code>s or <code>None</code>s use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.notna.html" rel="nofollow noreferrer"><code>notna</code></a>:</p>
<pre><code>df1 = pd.DataFrame({"asker": ["Heiko Baer RRR-Fraktion", "a", "b",
np.nan, None, None, np.nan],
"party": ['RRR-Fraktion', None, np.nan, 'a', 's', None, np.nan]})
df1['asker'] = [a.replace(b,'') if pd.notna(a) and pd.notna(b)
else a
for a,b in zip(df1['asker'], df1['party'])]
print (df1)
asker party
0 Heiko Baer RRR-Fraktion
1 a None
2 b NaN
3 NaN a
4 None s
5 None None
6 NaN NaN
</code></pre>
|
python|pandas|list-comprehension|nonetype
| 1
|
375,620
| 69,893,417
|
Group by in Pandas based on a condition
|
<p>I have a dataframe</p>
<pre><code>|phone_number|call_date|answered| attempt|
|123 | 13thJune| 1 | 1 |
|234 | 15thJune| 0 | 1 |
|234 | 15thJune| 0 | 2 |
</code></pre>
<p>I want to perform a groupby and take the max call date, but only over answered calls, i.e. if a number's calls were never answered (answered is 0), the max date should be blank.</p>
<pre><code>df.groupby(['phone_number'])['Call_Date'].max().reset_index()
</code></pre>
<p>but only when <code>answered > 0</code>; otherwise this groupby should give me a <code>blank</code>.</p>
<p>How do I achieve this?</p>
<p>expected df</p>
<pre><code>phone_number | max_call_date
123 | 13th June
234 | Nan
</code></pre>
|
<p>Use:</p>
<pre><code>df = df.groupby('phone_number').apply(lambda x: x[x['answered']!=0]['call_date'].max()).reset_index().rename(columns={0: 'max_call_date'})
print(df)
</code></pre>
<p><code>Output:</code></p>
<pre><code> phone_number max_call_date
0 123 13thJune
1 234 NaN
</code></pre>
|
python|pandas
| 2
|
375,621
| 69,859,586
|
How to get mean of selected rows with another column's values in pandas
|
<p>I have a dataframe like this:</p>
<p><a href="https://i.stack.imgur.com/MFNJw.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/MFNJw.png" alt="enter image description here" /></a></p>
<p>I want to take the mean of WFR between 2009 and 2015 for each NAME and assign it to all the years of that NAME. Any ideas? Thanks.</p>
<p><strong>How to set up</strong></p>
<pre><code>data = {'NAME': ['A', 'B', 'B', 'B', 'B', 'C', 'C', 'D', 'D', 'D'],
'YEAR': [2017, 2009, 2011, 2017, 2018, 2010, 2018, 2014, 2015, 2016],
'WFR': [20, 50, 80, 60, 90, 10, 30, 40, 55, 45]}
df = pd.DataFrame(data)
</code></pre>
|
<p>Use <code>groupby</code> + <code>mean</code> after filtering the years, then map the mean for each name:</p>
<pre><code>tmp = df.loc[df['YEAR'].between(2009, 2015)].groupby('NAME')['WFR'].mean()
df['MEAN'] = df['NAME'].map(tmp)
</code></pre>
<p>Output:</p>
<pre><code>>>> df
NAME YEAR WFR MEAN
0 A 2017 20 NaN
1 B 2009 50 65.0
2 B 2011 80 65.0
3 B 2017 60 65.0
4 B 2018 90 65.0
5 C 2010 10 10.0
6 C 2018 30 10.0
7 D 2014 40 47.5
8 D 2015 55 47.5
9 D 2016 45 47.5
>>> tmp
NAME
B 65.0
C 10.0
D 47.5
Name: WFR, dtype: float64
</code></pre>
<p>Setup:</p>
<pre><code>data = {'NAME': ['A', 'B', 'B', 'B', 'B', 'C', 'C', 'D', 'D', 'D'],
'YEAR': [2017, 2009, 2011, 2017, 2018, 2010, 2018, 2014, 2015, 2016],
'WFR': [20, 50, 80, 60, 90, 10, 30, 40, 55, 45]}
df = pd.DataFrame(data)
</code></pre>
|
python|pandas|dataframe
| 4
|
375,622
| 70,001,458
|
I want to filter column with a character
|
<p>I want to filter the column NAME with just one letter "o".<br />
DataFrame:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: left;">NAME</th>
<th style="text-align: left;">HOBBY</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">John</td>
<td style="text-align: left;">football</td>
</tr>
<tr>
<td style="text-align: left;">Kelly</td>
<td style="text-align: left;">chess</td>
</tr>
</tbody>
</table>
</div>
<p><code>df["NAME"].filter(like ="o")</code></p>
<p>I am looking for an output:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: left;">NAME</th>
<th style="text-align: left;">HOBBY</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">John</td>
<td style="text-align: left;">football</td>
</tr>
</tbody>
</table>
</div>
|
<p>You can use <code>.str.contains</code> to filter only the rows where NAME contains a lower-case 'o'.</p>
<p>Using the below df:</p>
<pre><code>df = pd.DataFrame({'NAME' : ['John', 'Kelly'],
'HOBBY' : ['football', 'chess']})
</code></pre>
<p>Then you can use <code>.loc</code> and <code>.str.contains</code> to filter the desired rows:</p>
<pre><code> df.loc[df['NAME'].str.contains('o')]
</code></pre>
<p>Output:</p>
<pre><code> NAME HOBBY
0 John football
</code></pre>
|
python|pandas
| 1
|
375,623
| 70,012,098
|
Tensorflow Object Detection API taking forever to install in a Google Colab and failing
|
<p>I am trying to install the Tensorflow Object Detection API on a Google Colab and the part that installs the API, shown below, takes a very long time to execute (in excess of one hour) and eventually fails to install.</p>
<pre><code># Install the Object Detection API
%%bash
cd models/research/
protoc object_detection/protos/*.proto --python_out=.
cp object_detection/packages/tf2/setup.py .
python -m pip install
</code></pre>
<p>To discover what I was doing wrong, I reverted to the "Eager Few Shot Object Detection Colab" example available at <a href="https://github.com/tensorflow/models/blob/master/research/object_detection/colab_tutorials/eager_few_shot_od_training_tf2_colab.ipynb" rel="nofollow noreferrer">https://github.com/tensorflow/models/blob/master/research/object_detection/colab_tutorials/eager_few_shot_od_training_tf2_colab.ipynb</a> in a Google Colab Pro notebook, and the "python -m pip install" part hangs there as well. Normally, this Colab runs in under 10 minutes, but in Colab Pro it is not running at all.</p>
<p>I can't seem to pinpoint what is causing this installation to fail. Does anyone have any idea why the Object Detection API is no longer installing on Google Colab notebooks?</p>
<p>Update: yesterday the installation took over two hours and failed; this is the output:</p>
<pre><code>Processing /content/models/research
Collecting avro-python3
Using cached avro-python3-1.10.2.tar.gz (38 kB)
Collecting apache-beam
Using cached apache_beam-2.34.0-cp37-cp37m-manylinux2010_x86_64.whl (9.8 MB)
Requirement already satisfied: pillow in /usr/local/lib/python3.7/dist-packages (from object-detection==0.1) (7.1.2)
Requirement already satisfied: lxml in /usr/local/lib/python3.7/dist-packages (from object-detection==0.1) (4.2.6)
Requirement already satisfied: matplotlib in /usr/local/lib/python3.7/dist-packages (from object-detection==0.1) (3.2.2)
Requirement already satisfied: Cython in /usr/local/lib/python3.7/dist-packages (from object-detection==0.1) (0.29.24)
Requirement already satisfied: contextlib2 in /usr/local/lib/python3.7/dist-packages (from object-detection==0.1) (0.5.5)
Collecting tf-slim
Using cached tf_slim-1.1.0-py2.py3-none-any.whl (352 kB)
Requirement already satisfied: six in /usr/local/lib/python3.7/dist-packages (from object-detection==0.1) (1.15.0)
Requirement already satisfied: pycocotools in /usr/local/lib/python3.7/dist-packages (from object-detection==0.1) (2.0.2)
Collecting lvis
Using cached lvis-0.5.3-py3-none-any.whl (14 kB)
Requirement already satisfied: scipy in /usr/local/lib/python3.7/dist-packages (from object-detection==0.1) (1.4.1)
Requirement already satisfied: pandas in /usr/local/lib/python3.7/dist-packages (from object-detection==0.1) (1.1.5)
Collecting tf-models-official>=2.5.1
Using cached tf_models_official-2.7.0-py2.py3-none-any.whl (1.8 MB)
Collecting tensorflow_io
Using cached tensorflow_io-0.22.0-cp37-cp37m-manylinux_2_12_x86_64.manylinux2010_x86_64.whl (22.7 MB)
Collecting keras==2.6.0
Using cached keras-2.6.0-py2.py3-none-any.whl (1.3 MB)
Collecting tensorflow-addons
Using cached tensorflow_addons-0.15.0-cp37-cp37m-manylinux_2_12_x86_64.manylinux2010_x86_64.whl (1.1 MB)
Requirement already satisfied: kaggle>=1.3.9 in /usr/local/lib/python3.7/dist-packages (from tf-models-official>=2.5.1->object-detection==0.1) (1.5.12)
Requirement already satisfied: gin-config in /usr/local/lib/python3.7/dist-packages (from tf-models-official>=2.5.1->object-detection==0.1) (0.5.0)
Collecting sacrebleu
Using cached sacrebleu-2.0.0-py3-none-any.whl (90 kB)
Requirement already satisfied: psutil>=5.4.3 in /usr/local/lib/python3.7/dist-packages (from tf-models-official>=2.5.1->object-detection==0.1) (5.4.8)
Collecting py-cpuinfo>=3.3.0
Using cached py-cpuinfo-8.0.0.tar.gz (99 kB)
Collecting tensorflow-text>=2.7.0
Using cached tensorflow_text-2.7.0-cp37-cp37m-manylinux2010_x86_64.whl (4.9 MB)
Requirement already satisfied: oauth2client in /usr/local/lib/python3.7/dist-packages (from tf-models-official>=2.5.1->object-detection==0.1) (4.1.3)
Collecting seqeval
Using cached seqeval-1.2.2.tar.gz (43 kB)
Requirement already satisfied: numpy>=1.15.4 in /usr/local/lib/python3.7/dist-packages (from tf-models-official>=2.5.1->object-detection==0.1) (1.19.5)
Collecting sentencepiece
Using cached sentencepiece-0.1.96-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (1.2 MB)
Requirement already satisfied: tensorflow-datasets in /usr/local/lib/python3.7/dist-packages (from tf-models-official>=2.5.1->object-detection==0.1) (4.0.1)
Collecting tensorflow-model-optimization>=0.4.1
Using cached tensorflow_model_optimization-0.7.0-py2.py3-none-any.whl (213 kB)
Requirement already satisfied: tensorflow-hub>=0.6.0 in /usr/local/lib/python3.7/dist-packages (from tf-models-official>=2.5.1->object-detection==0.1) (0.12.0)
Collecting opencv-python-headless
Using cached opencv_python_headless-4.5.4.58-cp37-cp37m-manylinux2014_x86_64.whl (47.6 MB)
Collecting tensorflow>=2.7.0
Using cached tensorflow-2.7.0-cp37-cp37m-manylinux2010_x86_64.whl (489.6 MB)
Requirement already satisfied: google-api-python-client>=1.6.7 in /usr/local/lib/python3.7/dist-packages (from tf-models-official>=2.5.1->object-detection==0.1) (1.12.8)
Collecting pyyaml>=5.1
Using cached PyYAML-6.0-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl (596 kB)
Requirement already satisfied: google-auth>=1.16.0 in /usr/local/lib/python3.7/dist-packages (from google-api-python-client>=1.6.7->tf-models-official>=2.5.1->object-detection==0.1) (1.35.0)
Requirement already satisfied: uritemplate<4dev,>=3.0.0 in /usr/local/lib/python3.7/dist-packages (from google-api-python-client>=1.6.7->tf-models-official>=2.5.1->object-detection==0.1) (3.0.1)
Requirement already satisfied: google-api-core<2dev,>=1.21.0 in /usr/local/lib/python3.7/dist-packages (from google-api-python-client>=1.6.7->tf-models-official>=2.5.1->object-detection==0.1) (1.26.3)
Requirement already satisfied: httplib2<1dev,>=0.15.0 in /usr/local/lib/python3.7/dist-packages (from google-api-python-client>=1.6.7->tf-models-official>=2.5.1->object-detection==0.1) (0.17.4)
Requirement already satisfied: google-auth-httplib2>=0.0.3 in /usr/local/lib/python3.7/dist-packages (from google-api-python-client>=1.6.7->tf-models-official>=2.5.1->object-detection==0.1) (0.0.4)
Requirement already satisfied: packaging>=14.3 in /usr/local/lib/python3.7/dist-packages (from google-api-core<2dev,>=1.21.0->google-api-python-client>=1.6.7->tf-models-official>=2.5.1->object-detection==0.1) (21.2)
Requirement already satisfied: requests<3.0.0dev,>=2.18.0 in /usr/local/lib/python3.7/dist-packages (from google-api-core<2dev,>=1.21.0->google-api-python-client>=1.6.7->tf-models-official>=2.5.1->object-detection==0.1) (2.23.0)
Requirement already satisfied: googleapis-common-protos<2.0dev,>=1.6.0 in /usr/local/lib/python3.7/dist-packages (from google-api-core<2dev,>=1.21.0->google-api-python-client>=1.6.7->tf-models-official>=2.5.1->object-detection==0.1) (1.53.0)
Requirement already satisfied: protobuf>=3.12.0 in /usr/local/lib/python3.7/dist-packages (from google-api-core<2dev,>=1.21.0->google-api-python-client>=1.6.7->tf-models-official>=2.5.1->object-detection==0.1) (3.17.3)
Requirement already satisfied: pytz in /usr/local/lib/python3.7/dist-packages (from google-api-core<2dev,>=1.21.0->google-api-python-client>=1.6.7->tf-models-official>=2.5.1->object-detection==0.1) (2018.9)
Requirement already satisfied: setuptools>=40.3.0 in /usr/local/lib/python3.7/dist-packages (from google-api-core<2dev,>=1.21.0->google-api-python-client>=1.6.7->tf-models-official>=2.5.1->object-detection==0.1) (57.4.0)
Requirement already satisfied: rsa<5,>=3.1.4 in /usr/local/lib/python3.7/dist-packages (from google-auth>=1.16.0->google-api-python-client>=1.6.7->tf-models-official>=2.5.1->object-detection==0.1) (4.7.2)
Requirement already satisfied: pyasn1-modules>=0.2.1 in /usr/local/lib/python3.7/dist-packages (from google-auth>=1.16.0->google-api-python-client>=1.6.7->tf-models-official>=2.5.1->object-detection==0.1) (0.2.8)
Requirement already satisfied: cachetools<5.0,>=2.0.0 in /usr/local/lib/python3.7/dist-packages (from google-auth>=1.16.0->google-api-python-client>=1.6.7->tf-models-official>=2.5.1->object-detection==0.1) (4.2.4)
Requirement already satisfied: urllib3 in /usr/local/lib/python3.7/dist-packages (from kaggle>=1.3.9->tf-models-official>=2.5.1->object-detection==0.1) (1.24.3)
Requirement already satisfied: python-slugify in /usr/local/lib/python3.7/dist-packages (from kaggle>=1.3.9->tf-models-official>=2.5.1->object-detection==0.1) (5.0.2)
Requirement already satisfied: tqdm in /usr/local/lib/python3.7/dist-packages (from kaggle>=1.3.9->tf-models-official>=2.5.1->object-detection==0.1) (4.62.3)
Requirement already satisfied: certifi in /usr/local/lib/python3.7/dist-packages (from kaggle>=1.3.9->tf-models-official>=2.5.1->object-detection==0.1) (2021.10.8)
Requirement already satisfied: python-dateutil in /usr/local/lib/python3.7/dist-packages (from kaggle>=1.3.9->tf-models-official>=2.5.1->object-detection==0.1) (2.8.2)
Requirement already satisfied: pyparsing<3,>=2.0.2 in /usr/local/lib/python3.7/dist-packages (from packaging>=14.3->google-api-core<2dev,>=1.21.0->google-api-python-client>=1.6.7->tf-models-official>=2.5.1->object-detection==0.1) (2.4.7)
Requirement already satisfied: pyasn1<0.5.0,>=0.4.6 in /usr/local/lib/python3.7/dist-packages (from pyasn1-modules>=0.2.1->google-auth>=1.16.0->google-api-python-client>=1.6.7->tf-models-official>=2.5.1->object-detection==0.1) (0.4.8)
Requirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.7/dist-packages (from requests<3.0.0dev,>=2.18.0->google-api-core<2dev,>=1.21.0->google-api-python-client>=1.6.7->tf-models-official>=2.5.1->object-detection==0.1) (2.10)
Requirement already satisfied: chardet<4,>=3.0.2 in /usr/local/lib/python3.7/dist-packages (from requests<3.0.0dev,>=2.18.0->google-api-core<2dev,>=1.21.0->google-api-python-client>=1.6.7->tf-models-official>=2.5.1->object-detection==0.1) (3.0.4)
INFO: pip is looking at multiple versions of six to determine which version is compatible with other requirements. This could take a while.
Collecting six
Using cached six-1.16.0-py2.py3-none-any.whl (11 kB)
Using cached six-1.15.0-py2.py3-none-any.whl (10 kB)
Using cached six-1.14.0-py2.py3-none-any.whl (10 kB)
Using cached six-1.13.0-py2.py3-none-any.whl (10 kB)
INFO: pip is looking at multiple versions of scipy to determine which version is compatible with other requirements. This could take a while.
Collecting scipy
Using cached scipy-1.7.2-cp37-cp37m-manylinux_2_12_x86_64.manylinux2010_x86_64.whl (38.2 MB)
INFO: pip is looking at multiple versions of six to determine which version is compatible with other requirements. This could take a while.
Using cached scipy-1.7.1-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.whl (28.5 MB)
INFO: This is taking longer than usual. You might need to provide the dependency resolver with stricter constraints to reduce runtime. If you want to abort this run, you can press Ctrl + C to do so. To improve how pip performs, tell us what happened here: https://pip.pypa.io/surveys/backtracking
Using cached scipy-1.7.0-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.whl (28.5 MB)
Using cached scipy-1.6.3-cp37-cp37m-manylinux1_x86_64.whl (27.4 MB)
Using cached scipy-1.6.2-cp37-cp37m-manylinux1_x86_64.whl (27.4 MB)
Using cached scipy-1.6.1-cp37-cp37m-manylinux1_x86_64.whl (27.4 MB)
Using cached scipy-1.6.0-cp37-cp37m-manylinux1_x86_64.whl (27.4 MB)
INFO: pip is looking at multiple versions of scipy to determine which version is compatible with other requirements. This could take a while.
Using cached scipy-1.5.4-cp37-cp37m-manylinux1_x86_64.whl (25.9 MB)
Using cached scipy-1.5.3-cp37-cp37m-manylinux1_x86_64.whl (25.9 MB)
Using cached scipy-1.5.2-cp37-cp37m-manylinux1_x86_64.whl (25.9 MB)
Using cached scipy-1.5.1-cp37-cp37m-manylinux1_x86_64.whl (25.9 MB)
Using cached scipy-1.5.0-cp37-cp37m-manylinux1_x86_64.whl (25.9 MB)
INFO: This is taking longer than usual. You might need to provide the dependency resolver with stricter constraints to reduce runtime. If you want to abort this run, you can press Ctrl + C to do so. To improve how pip performs, tell us what happened here: https://pip.pypa.io/surveys/backtracking
Using cached scipy-1.4.1-cp37-cp37m-manylinux1_x86_64.whl (26.1 MB)
Using cached scipy-1.4.0-cp37-cp37m-manylinux1_x86_64.whl (26.1 MB)
Using cached scipy-1.3.3-cp37-cp37m-manylinux1_x86_64.whl (25.2 MB)
Using cached scipy-1.3.2-cp37-cp37m-manylinux1_x86_64.whl (25.2 MB)
Using cached scipy-1.3.1-cp37-cp37m-manylinux1_x86_64.whl (25.2 MB)
Using cached scipy-1.3.0-cp37-cp37m-manylinux1_x86_64.whl (25.2 MB)
Using cached scipy-1.2.3-cp37-cp37m-manylinux1_x86_64.whl (24.8 MB)
Using cached scipy-1.2.2-cp37-cp37m-manylinux1_x86_64.whl (24.8 MB)
Using cached scipy-1.2.1-cp37-cp37m-manylinux1_x86_64.whl (24.8 MB)
Using cached scipy-1.2.0-cp37-cp37m-manylinux1_x86_64.whl (26.6 MB)
Using cached scipy-1.1.0-cp37-cp37m-manylinux1_x86_64.whl (31.2 MB)
Using cached scipy-1.0.1.tar.gz (15.5 MB)
Using cached scipy-1.0.0.tar.gz (15.2 MB)
Using cached scipy-0.19.1.tar.gz (14.1 MB)
INFO: pip is looking at multiple versions of rsa to determine which version is compatible with other requirements. This could take a while.
Collecting rsa<5,>=3.1.4
Using cached rsa-4.7.2-py3-none-any.whl (34 kB)
Using cached rsa-4.7.1-py3-none-any.whl (36 kB)
Using cached rsa-4.7-py3-none-any.whl (34 kB)
Using cached rsa-4.6-py3-none-any.whl (47 kB)
Using cached rsa-4.5-py2.py3-none-any.whl (36 kB)
Using cached rsa-4.4.1-py2.py3-none-any.whl (33 kB)
Using cached rsa-4.3-py2.py3-none-any.whl (36 kB)
INFO: pip is looking at multiple versions of rsa to determine which version is compatible with other requirements. This could take a while.
Using cached rsa-4.2.tar.gz (46 kB)
Using cached rsa-4.1-py3-none-any.whl (32 kB)
Using cached rsa-4.0-py2.py3-none-any.whl (38 kB)
Using cached rsa-3.4.2-py2.py3-none-any.whl (46 kB)
Using cached rsa-3.4.1-py2.py3-none-any.whl (46 kB)
INFO: This is taking longer than usual. You might need to provide the dependency resolver with stricter constraints to reduce runtime. If you want to abort this run, you can press Ctrl + C to do so. To improve how pip performs, tell us what happened here: https://pip.pypa.io/surveys/backtracking
Using cached rsa-3.4-py2.py3-none-any.whl (46 kB)
Using cached rsa-3.3-py2.py3-none-any.whl (44 kB)
Using cached rsa-3.2.3-py2.py3-none-any.whl (44 kB)
Using cached rsa-3.2.2-py2.py3-none-any.whl (44 kB)
Using cached rsa-3.2-py2.py3-none-any.whl (43 kB)
Using cached rsa-3.1.4.tar.gz (36 kB)
INFO: pip is looking at multiple versions of idna to determine which version is compatible with other requirements. This could take a while.
Collecting idna<3,>=2.5
Using cached idna-2.10-py2.py3-none-any.whl (58 kB)
Using cached idna-2.9-py2.py3-none-any.whl (58 kB)
Using cached idna-2.8-py2.py3-none-any.whl (58 kB)
Using cached idna-2.7-py2.py3-none-any.whl (58 kB)
Using cached idna-2.6-py2.py3-none-any.whl (56 kB)
Using cached idna-2.5-py2.py3-none-any.whl (55 kB)
INFO: pip is looking at multiple versions of chardet to determine which version is compatible with other requirements. This could take a while.
Collecting chardet<4,>=3.0.2
Using cached chardet-3.0.4-py2.py3-none-any.whl (133 kB)
INFO: pip is looking at multiple versions of idna to determine which version is compatible with other requirements. This could take a while.
INFO: This is taking longer than usual. You might need to provide the dependency resolver with stricter constraints to reduce runtime. If you want to abort this run, you can press Ctrl + C to do so. To improve how pip performs, tell us what happened here: https://pip.pypa.io/surveys/backtracking
Downloading chardet-3.0.3-py2.py3-none-any.whl (133 kB)
Downloading chardet-3.0.2-py2.py3-none-any.whl (133 kB)
INFO: pip is looking at multiple versions of certifi to determine which version is compatible with other requirements. This could take a while.
Collecting certifi
Downloading certifi-2021.10.8-py2.py3-none-any.whl (149 kB)
INFO: pip is looking at multiple versions of chardet to determine which version is compatible with other requirements. This could take a while.
Downloading certifi-2021.5.30-py2.py3-none-any.whl (145 kB)
Downloading certifi-2020.12.5-py2.py3-none-any.whl (147 kB)
INFO: This is taking longer than usual. You might need to provide the dependency resolver with stricter constraints to reduce runtime. If you want to abort this run, you can press Ctrl + C to do so. To improve how pip performs, tell us what happened here: https://pip.pypa.io/surveys/backtracking
Downloading certifi-2020.11.8-py2.py3-none-any.whl (155 kB)
Downloading certifi-2020.6.20-py2.py3-none-any.whl (156 kB)
Downloading certifi-2020.4.5.2-py2.py3-none-any.whl (157 kB)
Downloading certifi-2020.4.5.1-py2.py3-none-any.whl (157 kB)
INFO: pip is looking at multiple versions of certifi to determine which version is compatible with other requirements. This could take a while.
Downloading certifi-2020.4.5-py2.py3-none-any.whl (156 kB)
Downloading certifi-2019.11.28-py2.py3-none-any.whl (156 kB)
Downloading certifi-2019.9.11-py2.py3-none-any.whl (154 kB)
Downloading certifi-2019.6.16-py2.py3-none-any.whl (157 kB)
Downloading certifi-2019.3.9-py2.py3-none-any.whl (158 kB)
DEPRECATION: A future pip version will change local packages to be built in-place without first copying to a temporary directory. We recommend you use --use-feature=in-tree-build to test your packages with this new behavior before it becomes the default.
pip 21.3 will remove support for this functionality. You can find discussion regarding this at https://github.com/pypa/pip/issues/7555.
ERROR: Exception:
Traceback (most recent call last):
File "/usr/local/lib/python3.7/dist-packages/pip/_internal/cli/base_command.py", line 180, in _main
status = self.run(options, args)
File "/usr/local/lib/python3.7/dist-packages/pip/_internal/cli/req_command.py", line 199, in wrapper
return func(self, options, args)
File "/usr/local/lib/python3.7/dist-packages/pip/_internal/commands/install.py", line 319, in run
reqs, check_supported_wheels=not options.target_dir
File "/usr/local/lib/python3.7/dist-packages/pip/_internal/resolution/resolvelib/resolver.py", line 128, in resolve
requirements, max_rounds=try_to_avoid_resolution_too_deep
File "/usr/local/lib/python3.7/dist-packages/pip/_vendor/resolvelib/resolvers.py", line 473, in resolve
state = resolution.resolve(requirements, max_rounds=max_rounds)
File "/usr/local/lib/python3.7/dist-packages/pip/_vendor/resolvelib/resolvers.py", line 384, in resolve
raise ResolutionTooDeep(max_rounds)
pip._vendor.resolvelib.resolvers.ResolutionTooDeep: 2000000
</code></pre>
<p>Ivan</p>
|
<p>I have solved this problem with</p>
<pre><code>pip install --upgrade pip
</code></pre>
<p>Please refer to <a href="https://github.com/tensorflow/models/issues/10375" rel="nofollow noreferrer">this issue</a>.</p>
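<p>For completeness, a sketch of the full reinstall sequence in a Colab cell, assuming the models/research layout from the tutorial (the trailing "." that installs the package from the current directory is an assumption on my part):</p>
<pre><code># run from inside models/research
python -m pip install --upgrade pip
protoc object_detection/protos/*.proto --python_out=.
cp object_detection/packages/tf2/setup.py .
python -m pip install .
</code></pre>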
|
tensorflow|google-colaboratory|object-detection|object-detection-api
| 3
|
375,624
| 69,976,510
|
Pandas: Assign value based on value in another row
|
<p>I have a data frame with 3 columns: page_number, line_number, text.</p>
<p>I want to create another column, prev_text, which contains the text from the previous page at the same line number, e.g.:</p>
<p>For page 3, line 10 I want the value from page 2, line 10, if such a page and line exist.</p>
<p>At the moment I build a two-dimensional text array indexed by page and line. Next I use apply with a function which refers to this two-dimensional array.</p>
<p>Is there any better and faster way of making such an assignment in Pandas?</p>
<p>I hope that sorting by line_number and page_number can be combined with some other pandas function here.</p>
|
<p>Here's one approach:</p>
<pre><code>import pandas as pd
df = pd.DataFrame({'page_num': [1, 1, 1, 1, 2, 2, 3, 3, 3, 4],
'line_num': [1, 2, 3, 4, 1, 2, 1, 2, 3, 1],
'text': ['apple', 'banana', 'car',
'dog', 'egg', 'fire',
'goat', 'house', 'ink',
'jet']})
</code></pre>
<p>So <code>df</code> is</p>
<pre><code> page_num line_num text
0 1 1 apple
1 1 2 banana
2 1 3 car
3 1 4 dog
4 2 1 egg
5 2 2 fire
6 3 1 goat
7 3 2 house
8 3 3 ink
9 4 1 jet
</code></pre>
<p>Then:</p>
<pre><code>df['prev_page_num'] = df.groupby('line_num')['page_num'].shift(1, fill_value=0)
df_consecutive = df[df['page_num'] == df['prev_page_num'] + 1]
df['prev_text'] = df_consecutive.groupby('line_num').apply(lambda x: x['text'].shift(1)).droplevel(0)
df.sort_values(by=['page_num', 'line_num'], inplace=True)
df.reset_index(drop=True, inplace=True)
</code></pre>
<p>which leaves us with <code>df</code></p>
<pre><code> page_num line_num text prev_page_num prev_text
0 1 1 apple 0 NaN
1 1 2 banana 0 NaN
2 1 3 car 0 NaN
3 1 4 dog 0 NaN
4 2 1 egg 1 apple
5 2 2 fire 1 banana
6 3 1 goat 2 egg
7 3 2 house 2 fire
8 3 3 ink 1 NaN
9 4 1 jet 3 goat
</code></pre>
<p>There's probably a very clever way to do this by listing <code>connected_components</code> in a graph, but I'll leave that to someone else!</p>
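<p>If a faster alternative is acceptable, a self-join also works: starting from the original three-column frame, shift every row one page forward and merge it back on page and line number (a sketch using the column names from the example above):</p>
<pre><code>prev = (df.assign(page_num=df['page_num'] + 1)
          .rename(columns={'text': 'prev_text'})[['page_num', 'line_num', 'prev_text']])
out = df.merge(prev, on=['page_num', 'line_num'], how='left')
</code></pre>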
|
python|pandas
| 0
|
375,625
| 69,917,766
|
Can't Plot Loss and Accuracy After training datasets
|
<p>I already trained my dataset with this code:</p>
<pre><code>def train_model(model, criterion, optimizer, scheduler, num_epochs):
since = time.time()
#device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
best_model_wts = copy.deepcopy(model.state_dict())
best_acc = 0.0
for epoch in range(num_epochs):
print('Epoch {}/{} LR {:.6f}'.format(epoch, num_epochs - 1, scheduler.get_last_lr()[0]))
print('-' * 10)
# Each epoch has a training and validation phase
for phase in ['train', 'validation']:
if phase == 'train':
model.train() # Set model to training mode
else:
model.eval() # Set model to evaluate mode
#print(model.train())
running_loss = 0.0
running_corrects = 0
# Iterate over data.
for inputs, labels in dataloaders[phase]:
inputs = inputs.to(device)
labels = labels.to(device)
#print(inputs)
#print(labels)
# zero the parameter gradients
optimizer.zero_grad()
# forward
# track history if only in train
with torch.set_grad_enabled(phase == 'train'):
outputs = model(inputs)
_, preds = torch.max(outputs, 1)
loss = criterion(outputs, labels)
# backward + optimize only if in training phase
if phase == 'train':
loss.backward()
optimizer.step()
# statistics
running_loss += loss.item() * inputs.size(0)
running_corrects += torch.sum(preds == labels.data)
if phase == 'train':
scheduler.step()
epoch_loss = running_loss / dataset_sizes[phase]
epoch_acc = running_corrects.double() / dataset_sizes[phase]
print('{} Loss: {:.4f} Acc: {:.4f}'.format(
phase, epoch_loss, epoch_acc))
# deep copy the model
if phase == 'validation' and epoch_acc > best_acc:
best_acc = epoch_acc
best_model_wts = copy.deepcopy(model.state_dict())
print()
time_elapsed = time.time() - since
print('Training complete in {:.0f}m {:.0f}s'.format(
time_elapsed // 60, time_elapsed % 60))
print('Best val Acc: {:4f}'.format(best_acc))
# load best model weights
model.load_state_dict(best_model_wts)
return model
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters(), lr=0.0001)
#decay LR(Learning Rate) by a factor of 0.1 every 7 epochs
exp_lr_scheduler = lr_scheduler.StepLR(optimizer, step_size=7, gamma=0.1)
model = train_model(model,criterion, optimizer, exp_lr_scheduler, num_epochs=5).to(device)
</code></pre>
<p>After training, I want to plot the loss and accuracy from the run. I trained with the VGG16 architecture on the Caltech101 dataset, which contains thousands of images.</p>
<pre><code>loss_values = []
loss_values.append(model['train','validation'])
plt.plot(loss_values)
plt.legend(['train', 'validation'], loc='upper left')
plt.show()
</code></pre>
<p>But it can't plot my loss and accuracy results, and I don't know why.</p>
|
<p>I finally fixed the problem. I couldn't plot the results before because I was not logging them to TensorBoard. Now I can plot them.</p>
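<p>For reference, a minimal sketch of this kind of logging with <code>torch.utils.tensorboard.SummaryWriter</code> (tag names and log directory are illustrative); the same scalars could instead be appended to plain Python lists during training and plotted with matplotlib afterwards:</p>
<pre><code>from torch.utils.tensorboard import SummaryWriter

writer = SummaryWriter(log_dir='runs/vgg16_caltech101')   # illustrative log directory

# inside train_model, right after epoch_loss and epoch_acc are computed for a phase:
writer.add_scalar('Loss/' + phase, epoch_loss, epoch)
writer.add_scalar('Accuracy/' + phase, epoch_acc, epoch)

# after training, close the writer and inspect with: tensorboard --logdir runs
writer.close()
</code></pre>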
|
python|pytorch
| 0
|
375,626
| 69,848,362
|
Degree Centrality and Clustering Coefficient in Adjacent matrix
|
<p>Based on a dataset extracted from this link: <a href="https://cosmosimfrazza.myfreesites.net/cosmic-web-and-brain-network-datasets" rel="nofollow noreferrer">Brain and Cosmic Web samples</a>, I'm trying to do some Complex Network analysis.</p>
<hr />
<p>The paper <a href="https://www.frontiersin.org/articles/10.3389/fphy.2020.525731/full" rel="nofollow noreferrer">The Quantitative Comparison Between the Neuronal Network and the Cosmic Web</a> claims to have used this dataset, as well as its adjacency matrices</p>
<p>"<em><code>Mij</code>, i.e., a matrix with rows/columns equal to the number of detected nodes, with value <code>Mij</code> = 1 if the nodes are separated by a distance <code>≤ llink</code> , or <code>Mij = 0</code> otherwise</em>".</p>
<p>I then probed into the matrix, like so:</p>
<pre><code>from astropy.io import fits
with fits.open('mind_dataset/matrix_CEREBELLUM_large.fits') as data:
matrix_cerebellum = pd.DataFrame(data[0].data)
</code></pre>
<p>which does not print a sparse 0/1 matrix, but rather a matrix of distances between nodes expressed in pixels.</p>
<hr />
<p>I've learned that the correspondence between 1 pixel and scale is:</p>
<pre><code>neuronal_web_pixel = 0.32 # micrometers
</code></pre>
<p>And came up with a method in order to convert pixels to microns:</p>
<pre><code>def pixels_to_scale(df, mind=False, cosmos=False):
one_pixel_equals_parsec = cosmic_web_pixel
one_pixel_equals_micron = neuronal_web_pixel
if mind:
df = df/one_pixel_equals_micron
if cosmos:
df = df/one_pixel_equals_parsec
return df
</code></pre>
<p>Then, another method to binaryze the matrix after the conversion:</p>
<pre><code>def binarize_matrix(df, mind=False, cosmos=False):
if mind:
brain_Llink = 16.0 # microns
# distances less than 16 microns
brain_mask = (df<=brain_Llink)
# convert to 1
df = df.where(brain_mask, 1.0)
if cosmos:
cosmos_Llink = 1.2 # 1.2 mpc
brain_mask = (df<=cosmos_Llink)
df = df.where(brain_mask, 1.0)
return df
</code></pre>
<p>Finally, with:</p>
<pre><code>matrix_cerebellum = pixels_to_scale(matrix_cerebellum, mind=True)
matrix_cerebellum = binarize_matrix(matrix_cerebellum, mind=True)
</code></pre>
<p><code>matrix_cerebellum.head(5)</code> prints my sparse matrix of (mostly) 0.0s and 1.0s:</p>
<pre><code>0 1 2 3 4 5 6 7 8 9 ... 1848 1849 1850 1851 1852 1853 1854 1855 1856 1857
0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 ... 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
1 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 ... 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
2 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 ... 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
3 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 ... 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
4 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.0 0.0 0.0 ... 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
5 rows × 1858 columns
</code></pre>
<hr />
<p>Now I would like to calculate:</p>
<ol>
<li><p><em><strong>Degree Centrality</strong></em> of the network, given by the formula:</p>
<p>Cd(j) = Kj / (n - 1)</p>
</li>
</ol>
<p>Where <code>Kj</code> is the number of (undirected) connections to/from each <code>j-node</code> and <code>n</code> is the total number of nodes in the entire network.</p>
<ol start="2">
<li><p><em><strong>Clustering Coefficient</strong></em>, which quantifies the existence of infrastructure within the local vicinity of nodes, given by the formula:</p>
<p>C(j) = 2yj / (Kj(Kj - 1))</p>
</li>
</ol>
<p>in which <code>yj</code> is the number of links between neighbouring nodes of the <code>j-node</code>.</p>
<hr />
<p>For finding Degree Centrality, I have tried:</p>
<pre><code># find connections by adding matrix row values
matrix_cerebellum['K'] = matrix_cerebellum.sum(axis=1)
# applying formula
matrix_cerebellum['centrality'] = matrix_cerebellum['K']/matrix_cerebellum.shape[0]-1
</code></pre>
<p>Generates:</p>
<pre><code>... K centrality
9.0 -0.995156
6.0 -0.996771
7.0 -0.996771
11.0 -0.996233
11.0 -0.994080
</code></pre>
<p>According to the paper, I should be finding:</p>
<p><em>"For the cerebellum slices we measured 〈k〉 ∼ 1.9 − 3.7"</em>,</p>
<p>For the average numbers of connections per node.</p>
<p>Also I'm finding negative centralities.</p>
<hr />
<p>Does anyone know how to apply any of these formulas based on the dataframe above?</p>
|
<p>This is not really a programming question, but I will try to answer it. The webpage with the data sources states that the adjacent matrix files for brain samples give distances between connected nodes expressed in pixels of the images used to reconstruct the networks. The paper then explains that to get the real adjacency matrix Mij (with 0 and 1 values only) the authors consider as connected nodes where the distance is at most 16 micrometers. I don't see the information on how many pixels in the image corresponds to one micrometer. This would be needed to compute the same matrix Mij that the authors used in their calculations.</p>
<p>Furthermore, the value〈k〉is not the degree centrality or the clustering coefficient (that depend on a node), but rather the average number of connections per node in the network, computed using the matrix Mij. The paper then compares the observed distributions of degree centralities and clustering coefficients in the brain and cosmic networks to the distribution one would see in a random network with the same number of nodes and the same value of〈k〉. The conclusion is that brain and cosmic networks are highly non-random.</p>
<p><strong>Edits:</strong></p>
<p><strong>1.</strong> The conversion of 0.32 micrometers per pixel seems to be right. In the files with data on brain samples (both for cortex and cerebellum) the largest value is 50 pixels, which with this conversion corresponds to 16 micrometers. This suggests that the authors of the paper already thresholded the matrices, listing in them only distances not exceeding 16 micrometers. In view of this, to obtain the matrix Mij with 0 and 1 values only, one simply needs to replace all non-zero values with 1. An issue is that using the matrices obtained in this way one gets 〈k〉 = 9.22 for cerebellum and 〈k〉 = 7.13 for cortex, which is somewhat outside the ranges given in the paper. I don't know how to account for this discrepancy.</p>
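<p>A minimal sketch of that computation (file name as in the question; the matrix is assumed to be the symmetric, already-thresholded distance matrix):</p>
<pre><code>from astropy.io import fits

with fits.open('mind_dataset/matrix_CEREBELLUM_large.fits') as data:
    M = data[0].data
M[M > 0] = 1                      # binarize: connected wherever a (thresholded) distance is stored
mean_k = M.sum(axis=1).mean()     # average number of connections per node, i.e. the paper's 〈k〉
</code></pre>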
<p><strong>2.</strong> Negative centrality values are due to a mistake (missing parentheses) in the code. It should be:</p>
<pre><code>matrix_cerebellum['centrality'] = matrix_cerebellum['K']/(matrix_cerebellum.shape[0] - 1)
</code></pre>
<p><strong>3.</strong> Clustering coefficient and degree centrality of each node can be computed using tools provided by the <code>networkx</code> library:</p>
<pre><code>from astropy.io import fits
import networkx as nx
# get the adjacency matrix for cortex
with fits.open('matrix_CORTEX_large.fits') as data:
M = data[0].data
M[M > 0] = 1
# create a graph object
G_cortex = nx.from_numpy_matrix(M)
# compute degree centrality of all nodes
centrality = nx.degree_centrality(G_cortex)
# compute clustering coefficient of all nodes
clustering = nx.clustering(G_cortex)
</code></pre>
|
python|pandas|cluster-analysis|adjacency-matrix|node-centrality
| 6
|
375,627
| 69,935,174
|
Iterate over duplicate partitions/groups of a Pandas DataFrame
|
<p>I have a df like this</p>
<pre><code>id val1 val2 val3
0 1 1 2
1 1 NaN 2
2 1 4 2
3 1 4 2
4 2 1 1
5 3 NaN 3
6 3 7 3
7 3 7 3
</code></pre>
<p>then</p>
<pre><code>temp_df = df.loc[df.duplicated(subset=['val1','val3'], keep=False)]
</code></pre>
<p>gives me this</p>
<pre><code>id val1 val2 val3
0 1 1 2
1 1 NaN 2
2 1 4 2
3 1 4 2
5 3 NaN 3
6 3 7 3
7 3 7 3
</code></pre>
<p>How can I iterate over each partition/group containing the duplicate values?</p>
<pre><code>for partition in temp_df......:
print(partition)
id val1 val2 val3
0 1 1 2
1 1 NaN 2
2 1 4 2
3 1 4 2
id val1 val2 val3
5 3 NaN 3
6 3 7 3
7 3 7 3
</code></pre>
<p>The goal is to impute each NaN value with the mode of the val2 column within its partition. E.g. <code>mode(1, 4, 4) = 4</code>, so I want to fill in the NaN value of the first partition with 4. Similarly, I want to fill in the NaN value of the second partition with 7.</p>
|
<p><strong>Update</strong></p>
<p>Use <code>groupby_apply</code>:</p>
<pre><code>df['val2'] = df.groupby(['val1', 'val3'])['val2'] \
.apply(lambda x: x.fillna(x.mode().squeeze()))
print(df)
# Output:
id val1 val2 val3
0 0 1 1.0 2
1 1 1 4.0 2
2 2 1 4.0 2
3 3 1 4.0 2
4 4 2 1.0 1
5 5 3 7.0 3
6 6 3 7.0 3
7 7 3 7.0 3
</code></pre>
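<p>If you do need to iterate over each partition explicitly, as asked in the question, <code>groupby</code> on the same subset yields one sub-frame per group:</p>
<pre><code>for (v1, v3), partition in temp_df.groupby(['val1', 'val3']):
    print(partition)
</code></pre>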
<hr />
<p><strong>Old answer</strong></p>
<p>IIUC, use <code>groupby</code> after sorting dataframe by <code>val2</code> then fill forward:</p>
<pre><code>df['val2'] = df.sort_values('val2').groupby(['val1', 'val3'])['val2'].ffill()
print(df)
# Output:
id val1 val2 val3
0 0 1 1.1 2.2
1 1 1 1.1 2.2
2 3 2 1.3 1.0
3 4 3 1.5 6.2
4 5 3 1.5 6.2
</code></pre>
|
pandas|dataframe|duplicates|pandas-groupby
| 1
|
375,628
| 69,979,709
|
Read in .dat file with headers throughout
|
<p>I'm trying to read in a .dat file, but it's composed of chunks of non-columnar data with headers throughout.</p>
<p><a href="https://i.stack.imgur.com/QkOSz.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/QkOSz.png" alt="data opened in Excel" /></a></p>
<p>I've tried reading it in in pandas:</p>
<pre><code>new_df = pd.read_csv(os.path.join(pathname, item), delimiter='\t', skiprows = 2)
</code></pre>
<p>And it helpfully comes out like this:</p>
<pre><code> Cyclic Acquisition Unnamed: 1 Unnamed: 2 24290-15 Y Unnamed: 4 \
0 Stored at: 100 cycle NaN NaN
1 Points: 2 NaN NaN NaN
2 Ch 2 Displacement Ch 2 Force Time Ch 2 Count NaN
3 in lbf s segments NaN
4 -0.036677472 -149.27879 19.976563 198 NaN
5 0.031659406 149.65636 20.077148 199 NaN
6 Cyclic Acquisition NaN NaN 24290-15 Y NaN
7 Stored at: 200 cycle NaN NaN
8 Points: 2 NaN NaN NaN
9 Ch 2 Displacement Ch 2 Force Time Ch 2 Count NaN
10 in lbf s segments NaN
11 -0.036623772 -149.73801 39.975586 398 NaN
12 0.031438459 149.48193 40.078125 399 NaN
13 Cyclic Acquisition NaN NaN 24290-15 Y NaN
14 Stored at: 300 cycle NaN NaN
</code></pre>
<p>Do I need to resort to <code>np.genfromtxt()</code> or is there a panda-riffic way to accomplish this?</p>
|
<p>I developed a work-around. I needed the Displacement data pairs, as well as some data that was all divisible evenly by 100.</p>
<p>To get to the Displacement data, I first treated 'Cyclic Acquisition' as a valid column name, coerced its values to numeric (with errors coerced to NaN), and kept only the entries that turned out to be numbers:</p>
<pre><code>displacement = new_df['Cyclic Acquisition'][pd.to_numeric(new_df['Cyclic Acquisition'], errors='coerce').notnull()]
4 -0.036677472
5 0.031659406
11 -0.036623772
12 0.031438459
</code></pre>
<p>Then, because the chunks remaining were paired low and high values that needed to be operated on together, I selected every other value for the "low" values starting with the 0th value, and the same logic for the "high" values. I reset the index because my plan was to create a different DataFrame with the necessary info in it and I wanted it to keep values in appropriate relationship to each other.</p>
<pre><code>displacement_low = displacement[::2].reset_index(drop = True)
0 -0.036677472
1 -0.036623772
displacement_high = displacement[1::2].reset_index(drop = True)
0 0.031659406
1 0.031438459
</code></pre>
<p>Then, to get the cycles, I followed the same basic principle to get that column down to just numbers, then I put the values into a list and used a list comprehension to require the divisibility, and switched it back to a Series.</p>
<pre><code>cycles = new_df['Unnamed: 1'][pd.to_numeric(new_df['Unnamed: 1'], errors='coerce').notnull()].astype('float').tolist()
[100.0, 2.0, -149.27879, 149.65636, 200.0, 2.0, -149.73801, 149.48193...]
cycles = pd.Series([val for val in cycles if val%100 == 0])
0 100.0
1 200.0
...
</code></pre>
<p>I then created a new df with that data and named the columns as desired:</p>
<pre><code>df = pd.concat([displacement_low, displacement_high, cycles], axis = 1)
df.columns = ['low', 'high', 'cycles']
low high cycles
0 -0.036677 0.031659 100.0
1 -0.036624 0.031438 200.0
</code></pre>
|
python|pandas
| 0
|
375,629
| 69,881,484
|
Python Pandas : find n in a sorted multindex dataframe and return [0:n+1] for each level 1 index
|
<p>I am having some issues with what I thought was a simple filtering task.
We have data that has roughly this shape:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>name</th>
<th>item</th>
<th>cumsum</th>
</tr>
</thead>
<tbody>
<tr>
<td>name1</td>
<td>item 1</td>
<td>0.05</td>
</tr>
<tr>
<td></td>
<td>item 2</td>
<td>0.10</td>
</tr>
<tr>
<td></td>
<td>item 3</td>
<td>0.31</td>
</tr>
<tr>
<td>name2</td>
<td>item 1</td>
<td>0.02</td>
</tr>
<tr>
<td></td>
<td>item 2</td>
<td>0.07</td>
</tr>
<tr>
<td>name3</td>
<td>item 1</td>
<td>0.01</td>
</tr>
<tr>
<td></td>
<td>item 2</td>
<td>0.07</td>
</tr>
<tr>
<td></td>
<td>item 3</td>
<td>0.21</td>
</tr>
<tr>
<td>name4</td>
<td>item 1</td>
<td>0.03</td>
</tr>
<tr>
<td></td>
<td>item 2</td>
<td>0.12</td>
</tr>
<tr>
<td></td>
<td>item 3</td>
<td>0.21</td>
</tr>
<tr>
<td></td>
<td>item 4</td>
<td>0.35</td>
</tr>
</tbody>
</table>
</div>
<p>What I would like is to return the dataframe with the items whose cumsum is smaller than 0.2, plus the item directly above that threshold. This is the table I would like as output:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>name</th>
<th>item</th>
<th>cumsum</th>
</tr>
</thead>
<tbody>
<tr>
<td>name1</td>
<td>item 1</td>
<td>0.05</td>
</tr>
<tr>
<td></td>
<td>item 2</td>
<td>0.10</td>
</tr>
<tr>
<td></td>
<td>item 3</td>
<td>0.31</td>
</tr>
<tr>
<td>name2</td>
<td>item 1</td>
<td>0.02</td>
</tr>
<tr>
<td></td>
<td>item 2</td>
<td>0.07</td>
</tr>
<tr>
<td>name3</td>
<td>item 1</td>
<td>0.01</td>
</tr>
<tr>
<td></td>
<td>item 2</td>
<td>0.07</td>
</tr>
<tr>
<td></td>
<td>item 3</td>
<td>0.21</td>
</tr>
<tr>
<td>name4</td>
<td>item 1</td>
<td>0.03</td>
</tr>
<tr>
<td></td>
<td>item 2</td>
<td>0.12</td>
</tr>
<tr>
<td></td>
<td>item 3</td>
<td>0.21</td>
</tr>
</tbody>
</table>
</div>
<p>For each 'name' I tried to find the first 'item' that has a cumsum greater than 0.2 and then return the whole range up to and including that index:</p>
<pre class="lang-py prettyprint-override"><code> df = df.loc['name1']
idx = df.loc[df['cumsum'] > 0.2].index[0]
iidx = df.index.get_loc(idx) + 1
df = df.iloc[:iidx]
</code></pre>
<p>and to do this for each 'name'. However, this fails for name2, where no item exceeds 0.2, so the <code>index[0]</code> lookup raises an IndexError.</p>
<p>Can anybody help with this please ?</p>
|
<p>Use <code>|</code> for bitwise <code>OR</code> by mask shifted per groups by <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.core.groupby.DataFrameGroupBy.shift.html" rel="nofollow noreferrer"><code>DataFrameGroupBy.shift</code></a>:</p>
<pre><code>m = (df['cumsum'] < 0.2)
df = df[m | m.groupby(level=0).shift(fill_value=False)]
print (df)
cumsum
name item
name1 item 1 0.05
item 2 0.10
item 3 0.31
name2 item 1 0.02
item 2 0.07
name3 item 1 0.01
item 2 0.07
item 3 0.21
name4 item 1 0.03
item 2 0.12
item 3 0.21
</code></pre>
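<p>For reference, the example frame can be reconstructed like this (a sketch built from the values in the question):</p>
<pre><code>import pandas as pd

index = pd.MultiIndex.from_tuples(
    [('name1', 'item 1'), ('name1', 'item 2'), ('name1', 'item 3'),
     ('name2', 'item 1'), ('name2', 'item 2'),
     ('name3', 'item 1'), ('name3', 'item 2'), ('name3', 'item 3'),
     ('name4', 'item 1'), ('name4', 'item 2'), ('name4', 'item 3'), ('name4', 'item 4')],
    names=['name', 'item'])
df = pd.DataFrame({'cumsum': [0.05, 0.10, 0.31, 0.02, 0.07,
                              0.01, 0.07, 0.21, 0.03, 0.12, 0.21, 0.35]},
                  index=index)
</code></pre>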
|
python|pandas|filtering|multi-index
| 1
|
375,630
| 69,822,282
|
add a new column in multi index from the others
|
<pre><code>In [151]: df
Out[151]:
first bar baz
second one two one two
A 1 2 3 4
B 5 6 7 8
</code></pre>
<p>My question is simple: from this df, how do I create a new column three which is the sum of the one and two columns within each group, i.e. get:</p>
<pre><code>In [151]: df
Out[151]:
first bar baz
second one two three one two three
A 1 2 3. 3. 4. 7.
B 5 6 11. 7. 8. 15.
</code></pre>
<p>Thx.</p>
|
<p>One way to do this is:</p>
<pre><code>df1 = df.join(df.sum(level=0, axis=1))
</code></pre>
<p>which returns:</p>
<pre><code>(bar, one) (bar, two) (baz, one) (baz, two) bar baz
0 1 2 3 4 3 7
1 5 6 7 8 11 15
</code></pre>
<p>You can add this as well:</p>
<pre><code>columns = [('bar', 'one'), ('bar', 'two'), ('bar', 'three'), ('baz', 'one'), ('baz', 'two'), ('baz', 'three')]
df1.columns = pd.MultiIndex.from_tuples(columns)
print(df1.sort_index(level=0))
</code></pre>
<p>to have it in the format you asked for:</p>
<pre><code>bar baz
one two three one two three
0 1 2 3 4 3 7
1 5 6 7 8 11 15
</code></pre>
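<p>An alternative sketch that writes the sums back into the original frame under each top-level group (the new columns are appended on the right and can be reordered afterwards if needed):</p>
<pre><code>import pandas as pd

cols = pd.MultiIndex.from_product([['bar', 'baz'], ['one', 'two']],
                                  names=['first', 'second'])
df = pd.DataFrame([[1, 2, 3, 4], [5, 6, 7, 8]], index=['A', 'B'], columns=cols)

sums = df.sum(level=0, axis=1)        # per-group sum of 'one' and 'two'
for top in sums.columns:
    df[(top, 'three')] = sums[top]    # add a 'three' column under each group
print(df)
</code></pre>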
|
python|pandas|multi-index
| 0
|
375,631
| 69,805,233
|
I have a DataFrame and need to perform calculations between columns. Can my function do_something be vectorised?
|
<p>I have a DataFrame and need to perform calculations between columns. Can my function <code>do_something</code> be vectorised?</p>
<p>The columns <code>['1min', '2min', '5min', '15min', '30min', '1hour', '2hour', '4hour', '1day', '2day', '7day']</code> need to be compared with <code>price</code> and with the value of the previous column in turn. If the column's value is less than <code>price</code> and smaller than the previous column's value, <code>min_sig</code> is assigned the value of the column and <code>min_bar</code> is assigned the name of the column. If the condition does not match on the column '1min', then <code>min_sig</code> and <code>min_bar</code> are assigned the value <code>False</code>; on any other column the loop is simply interrupted.</p>
<p>My code can achieve the effect I want, can the function <code>generate_data()</code> be optimized by vectorization?</p>
<p>My code is as follows:</p>
<pre><code>import pandas as pd
import numpy as np
def generate_data():
code = ['a', 'b', 'c', 'd']
price = [72, 50.8, 77.8, 54.6]
min1 = [69.78, 49.21, 79.75, 56.21]
min2 = [69.9, 49.22, 79.4, 55.85]
min5 = [73.36, 51.81, 74.78, 52]
min15 = [79.07, 56.25, 67.86, 46.9]
min30 = [77.1, 54.86, 70.38, 48.91]
hour1 = [75.12, 53.49, 72.84, 51.29]
hour2 = [74.1, 52.75, 73.51, 51.79]
hour4 = [72.18, 51.69, 77.83, 55.96]
day1 = [78.13, 56.76, 73.47, 52.37]
day2 = [80.42, 58.72, 71.88, 51.78]
day7 = [110.79, 84.6, 83.73, 65.48]
dict1 = {'code': code, 'price': price, '1min': min1, '2min': min2, '5min': min5, '15min': min15, '30min': min30,
'1hour': hour1, '2hour': hour2, '4hour': hour4, '1day': day1, '2day': day2, '7day': day7, }
df = pd.DataFrame(dict1)
df['min_bar'] = np.NAN
df['min_sig'] = np.NAN
col = ['code', 'price', 'min_bar', 'min_sig', '1min', '2min', '5min', '15min', '30min', '1hour', '2hour', '4hour',
'1day', '2day', '7day', ]
df = df[col]
return df
def do_something(a):
list1 = ['1min', '2min', '5min', '15min', '30min', '1hour', '2hour', '4hour',
'1day', '2day', '7day', ]
for i in range(len(list1)):
bar = list1[i]
if i == 0:
if a['price'] >= a[bar]:
a['min_sig'] = a[bar]
a['min_bar'] = bar
else:
a['min_sig'] = False
a['min_bar'] = False
break
else:
if a['min_sig'] >= a[bar]:
a['min_sig'] = a[bar]
a['min_bar'] = bar
else:
break
return a
def main():
df = generate_data()
print('Dataframe before running generate_data():')
print(df)
df = df.apply(do_something, axis=1)
print('The result after running is the result I want:')
print(df)
if __name__ == '__main__':
main()
</code></pre>
<pre><code>Dataframe before running generate_data():
code price min_bar min_sig 1min ... 2hour 4hour 1day 2day 7day
0 a 72.0 NaN NaN 69.78 ... 74.10 72.18 78.13 80.42 110.79
1 b 50.8 NaN NaN 49.21 ... 52.75 51.69 56.76 58.72 84.60
2 c 77.8 NaN NaN 79.75 ... 73.51 77.83 73.47 71.88 83.73
3 d 54.6 NaN NaN 56.21 ... 51.79 55.96 52.37 51.78 65.48
[4 rows x 15 columns]
The result after running is the result I want:
code price min_bar min_sig 1min ... 2hour 4hour 1day 2day 7day
0 a 72.0 1min 69.78 69.78 ... 74.10 72.18 78.13 80.42 110.79
1 b 50.8 1min 49.21 49.21 ... 52.75 51.69 56.76 58.72 84.60
2 c 77.8 False False 79.75 ... 73.51 77.83 73.47 71.88 83.73
3 d 54.6 False False 56.21 ... 51.79 55.96 52.37 51.78 65.48
[4 rows x 15 columns]
</code></pre>
<pre><code>%timeit df.apply(do_something,axis=1)
4.88 ms ± 50.1 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
</code></pre>
|
<p>IIUC, it looks like you want to get the <code>idxmin</code> and <code>min</code>, masked with False if the first column's value is greater than price.</p>
<p>You can use numpy to get both operations at once:</p>
<pre><code>m = np.argmin(df[list1].values, axis=1)
(pd.DataFrame({'min_bar': np.take(list1, m),
               'min_sig': df[list1].values[np.arange(len(df)), m]})
   .mask(df['price'].lt(df[list1[0]]), False)
)
)
</code></pre>
<p>(Then join or assign to the original df)</p>
<p>Output:</p>
<pre><code> min_bar min_sig
0 1min 69.78
1    1min   49.21
2 False False
3 False False
</code></pre>
<h4>using pandas</h4>
<p>This requires searching for the min twice, though</p>
<pre><code>m = df['price'].lt(df[list1[0]])
df['min_bar'] = df[list1].idxmin(axis=1).mask(m, False)
df['min_sig'] = df[list1].min(axis=1).mask(m, False)
</code></pre>
|
python|pandas|dataframe|vectorization
| 2
|
375,632
| 69,829,423
|
How do I save a N x M array/list using Pandas?
|
<p>I have an <code>N x M</code> numpy array / list. I want to save this matrix into a <code>.csv</code> file using Pandas. Unfortunately I don't know <strong>a priori</strong> the values of M and N, which can be <em>large</em>. I am interested in Pandas because I find it manageable in terms of data column access.</p>
<p>Let's start with this MWE:</p>
<pre><code>import numpy as np
import pandas as pd
N,M = np.random.randint(10,100, size = 2)
A = np.random.randint(10, size = (N,M))
columns = []
for i in range(len(A[0,:])):
columns.append( "column_{} ".format(i) )
</code></pre>
<p>I cannot do something like <code>pd.append()</code>, i.e. append columns with new additional indices via a for loop.</p>
<p>Is there a way to save A into a .csv file?</p>
|
<p>Following the comment of Quang Hoang, there are 2 possibilities:</p>
<ol>
<li><code>pd.DataFrame(A).to_csv('yourfile.csv')</code>.</li>
<li><code>np.save("yourfile.npy",A)</code> and then <code>A = np.load("yourfile.npy")</code>.</li>
</ol>
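<p>A short sketch combining this with the generated column names (file name is illustrative):</p>
<pre><code>import numpy as np
import pandas as pd

N, M = np.random.randint(10, 100, size=2)
A = np.random.randint(10, size=(N, M))
columns = ["column_{}".format(i) for i in range(M)]

df = pd.DataFrame(A, columns=columns)
df.to_csv("matrix.csv", index=False)     # save

df_back = pd.read_csv("matrix.csv")      # load
A_back = df_back.to_numpy()              # back to an N x M array
</code></pre>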
|
python|pandas|dataframe|csv|save
| 0
|
375,633
| 69,733,845
|
how can i display json array to python dataframe
|
<p>I have a json file.</p>
<pre><code>[
{
'orderId': 1811,
'deliveryId': '000001811-1634732661563000',
'shippingBook': '[{"qtyOrdered":1,"bookNoList":["B8303-V05","B8304-V05","B8305-V05","B8306-V05","B8307-V05"],"courseCode":"A8399-S26"},{"courseCode":"A1399-S70","qtyOrdered":1,"bookNoList":["B1301-V06","B1302-V06","B1303-V06","B1304-V06","B1305-1-V06","B1305-2-V06","B1306-V06","B1307-V06"]}]',
}
]
</code></pre>
<p>But how can I display it in a dataframe in this format?
<a href="https://i.stack.imgur.com/Bmy9q.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Bmy9q.png" alt="enter image description here" /></a></p>
<p>thank you</p>
|
<p>You have a string in <code>'shippingBook'</code>, which needs <code>json.loads()</code> to convert it to a Python list of dictionaries.</p>
<p>You could then use normal <code>for</code>-loops to collect the expected data into a flat list of rows, and later convert that list to a <code>DataFrame</code>.</p>
<pre><code>import json
import pandas as pd
data = [
{
'orderId': 1811,
'deliveryId': '000001811-1634732661563000',
'shippingBook': '[{"qtyOrdered":1,"bookNoList":["B8303-V05","B8304-V05","B8305-V05","B8306-V05","B8307-V05"],"courseCode":"A8399-S26"},{"courseCode":"A1399-S70","qtyOrdered":1,"bookNoList":["B1301-V06","B1302-V06","B1303-V06","B1304-V06","B1305-1-V06","B1305-2-V06","B1306-V06","B1307-V06"]}]',
}
]
# --- organize data ---
all_rows = []
for order in data:
order_id = order['orderId']
delivery_id = order['deliveryId']
for book in json.loads(order['shippingBook']):
row = [order_id, delivery_id, book['courseCode'], book['bookNoList']]
#print(row)
all_rows.append(row)
# --- convert to DataFrame ---
df = pd.DataFrame(all_rows, columns=['orderId', 'deliveryId', 'courseCode', 'bookNoList'])
print(df.to_string()) # `to_string()` to display all data without `...`
</code></pre>
<p>Result:</p>
<pre><code> orderId deliveryId courseCode bookNoList
0 1811 000001811-1634732661563000 A8399-S26 [B8303-V05, B8304-V05, B8305-V05, B8306-V05, B8307-V05]
1 1811 000001811-1634732661563000 A1399-S70 [B1301-V06, B1302-V06, B1303-V06, B1304-V06, B1305-1-V06, B1305-2-V06, B1306-V06, B1307-V06]
</code></pre>
<hr />
<p><strong>EDIT:</strong></p>
<p>You may also try to do the same directly in a <code>DataFrame</code>.</p>
<p>It needs <code>explode</code> to split the list into rows.</p>
<pre><code>import json
import pandas as pd
data = [
{
'orderId': 1811,
'deliveryId': '000001811-1634732661563000',
'shippingBook': '[{"qtyOrdered":1,"bookNoList":["B8303-V05","B8304-V05","B8305-V05","B8306-V05","B8307-V05"],"courseCode":"A8399-S26"},{"courseCode":"A1399-S70","qtyOrdered":1,"bookNoList":["B1301-V06","B1302-V06","B1303-V06","B1304-V06","B1305-1-V06","B1305-2-V06","B1306-V06","B1307-V06"]}]',
}
]
#df = pd.DataFrame.from_records(data)
df = pd.DataFrame(data)
# convert string to list with dictionares
df['shippingBook'] = df['shippingBook'].apply(json.loads)
# split list `'shippingBook'` into rows
df = df.explode('shippingBook')
df = df.reset_index()
del df['index']
# split elements into columns
#df['courseCode'] = df['shippingBook'].apply(lambda item:item['courseCode'])
#df['bookNoList'] = df['shippingBook'].apply(lambda item:item['bookNoList'])
df['courseCode'] = df['shippingBook'].str['courseCode'] # unexpected behaviour for string functions `.str`
df['bookNoList'] = df['shippingBook'].str['bookNoList'] # unexpected behaviour for string functions `.str`
# remove `'shippingBook'`
del df['shippingBook']
print(df.to_string())
</code></pre>
<p>And the same with <code>apply(pd.Series)</code> to convert the dictionaries into columns.</p>
<pre><code>import json
import pandas as pd
data = [
{
'orderId': 1811,
'deliveryId': '000001811-1634732661563000',
'shippingBook': '[{"qtyOrdered":1,"bookNoList":["B8303-V05","B8304-V05","B8305-V05","B8306-V05","B8307-V05"],"courseCode":"A8399-S26"},{"courseCode":"A1399-S70","qtyOrdered":1,"bookNoList":["B1301-V06","B1302-V06","B1303-V06","B1304-V06","B1305-1-V06","B1305-2-V06","B1306-V06","B1307-V06"]}]',
}
]
#df = pd.DataFrame.from_records(data)
df = pd.DataFrame(data)
# convert string to list with dictionares
df['shippingBook'] = df['shippingBook'].apply(json.loads)
# split list `'shippingBook'` into rows
df = df.explode('shippingBook')
df = df.reset_index()
del df['index']
# split elements into columns
new_columns = df['shippingBook'].apply(pd.Series)
#df[['qtyOrdered', 'bookNoList', 'courseCode']] = new_columns
#del df['qtyOrdered']
#df[['bookNoList', 'courseCode']] = new_columns[['bookNoList', 'courseCode']]
df = df.join(new_columns[['bookNoList', 'courseCode']])
# remove `'shippingBook'`
del df['shippingBook']
print(df.to_string())
</code></pre>
|
python|json|pandas
| 1
|
375,634
| 69,759,004
|
could not broadcast input array from shape (3,1) into shape (3,)
|
<p>I have the following Python code for QR factorization. At the line <code>Q[:,i] = u / norm</code>, I get the error mentioned in the title. Can anyone help, please?</p>
<ul>
<li><code>Q</code> is has the shape (3,3),</li>
<li><code>u</code> is expected to have the shape (3,1)</li>
<li><code>norm</code> is a scalar</li>
</ul>
<p>The error message is:</p>
<pre><code>could not broadcast input array from shape (3,1) into shape (3,)
</code></pre>
<p>The expected output is the matrix Q with the shape (3,3).</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
def QRfactorization(A):
# getting the size of the matrix
N = len(A)
# preallocating Q
Q = np.zeros((N, N))
for i in range(N):
u = A[:,i]
for j in range(i):
# calculating the projection
h = np.vdot(u,Q[:,j])
u = u - h*Q[:,j]
norm = np.linalg.norm(u)
Q[:,i] = u / norm
#getting R matrix
R = Q.T*A
return Q , R
def main():
# input is an square matrix A
A = np.matrix([[1,2,3],[4,5,6],[1,2,3]])
Q = QRfactorization(A)
print(Q)
main()
</code></pre>
|
<p>The use of <a href="https://numpy.org/doc/stable/reference/generated/numpy.matrix.html" rel="nofollow noreferrer"><code>numpy.matrix</code> is discouraged</a>. With <code>numpy.matrix</code>, the column slice <code>A[:,i]</code> is a 2-D <code>(3, 1)</code> matrix, which cannot be broadcast into the 1-D slice <code>Q[:,i]</code> of shape <code>(3,)</code>; switching to <code>numpy.array</code>, whose column slice is 1-D, fixes this issue:</p>
<pre><code>def main():
# input is an square matrix A
A = np.array([[1,2,3],[4,5,6],[1,2,3]])
Q = QRfactorization(A)
print(Q)
</code></pre>
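<p>One related detail, not required for the broadcasting error itself: with <code>numpy.array</code> the <code>*</code> operator is element-wise, so the <code>R</code> computation inside <code>QRfactorization</code> should switch to the matrix-multiplication operator:</p>
<pre><code># inside QRfactorization, replacing R = Q.T*A
R = Q.T @ A   # with ndarrays, Q.T * A would multiply element-wise
</code></pre>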
|
python|arrays|numpy
| 0
|
375,635
| 69,866,910
|
How to convert a pandas dataframe to NumPy array
|
<p>How can I convert a pandas dataframe (21 x 31) into a numpy array?</p>
<p>For example:</p>
<p>array_1 (n_1, n_2, n_3, ... , n31) <br>
array_2 (n_1, n_2, n_3, ... , n31)<br>
...<br>
array_21(n_1, n_2, n_3, ... , n31)<br></p>
<p>I tried the following code snippet:</p>
<pre><code>np.array(df)
</code></pre>
<p>.. and get the following result:</p>
<pre><code>array([[0.00290135, 0.00274017, 0.00531915, 0.00967118, 0.00676983,
0.0082205 , 0.01096067, 0.01821406, 0.01450677, 0.02401676,
0.0235332 , 0.03787879, 0.04239201, 0.04190845, 0.04819471,
0.04932302, 0.06399097, 0.07865893, 0.06995487, 0.06914894,
0.08107672, 0.06141199, 0.05157963, 0.05141844, 0.03852353,
0.03546099, 0.02611219, 0.01595745, 0.00435203, 0.00322373,
0.00257898],
[0. , 0.00392927, 0.00638507, 0.01866405, 0.00785855,
0.01915521, 0.00491159, 0.02308448, 0.01178782, 0.01915521,
0.03339882, 0.02996071, 0.03192534, 0.05451866, 0.03732809,
0.04125737, 0.05304519, 0.05599214, 0.0589391 , 0.09528487,
0.13752456, 0.05108055, 0.02603143, 0.05500982, 0.02799607,
0.01424361, 0.05157171, 0.02799607, 0. , 0.00049116,
0.00147348],
[0. , 0. , 0.01376462, 0. , 0.00825877,
0.01238816, 0.00757054, 0.00275292, 0.01307639, 0.01927047,
0.03234687, 0.04129387, 0.02959394, 0.02615279, 0.05161734,
0.03991741, 0.05574673, 0.12801101, 0.04335857, 0.07983482,
0.05918789, 0.12319339, 0.02546456, 0.08878183, 0.01169993,
0.04542326, 0.02064694, 0.01789401, 0. , 0.00275292,
0. ],
[...]])
</code></pre>
<p>The problem is that there is an extra (second) level of square brackets. How can I solve this problem?</p>
|
<p>It seems that you want to convert the DataFrame into a 1D array (<strong>this should be clear in the post</strong>).</p>
<p>First, convert the DataFrame to a 2D numpy array using <code>DataFrame.to_numpy</code> (using <code>DataFrame.values</code> is discouraged) and then use <code>ndarray.ravel</code> or <code>ndarray.flatten</code> to flatten the array.</p>
<pre><code>arr = df.to_numpy().ravel()
</code></pre>
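<p>A small usage sketch showing the shapes involved for a 21 x 31 frame:</p>
<pre><code>import numpy as np
import pandas as pd

df = pd.DataFrame(np.arange(21 * 31).reshape(21, 31))
arr2d = df.to_numpy()     # shape (21, 31) -- one row per array_i
arr1d = arr2d.ravel()     # shape (651,)  -- flattened 1D array
</code></pre>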
|
python|arrays|pandas|numpy
| 2
|
375,636
| 69,872,543
|
How to plot a stacked bar with annotations for multiple groups
|
<p>In the histogram a gap appears between 2 bars... does anyone know why?</p>
<p>I get this error:</p>
<p>The number of FixedLocator locations (11), usually from a call to set_ticks, does not match the number of ticklabels (10).</p>
<p>The CSV file has just 2 columns: one with the name of the country and the other with the type of medal achieved; each line is one medal with its type and country.</p>
<p>The link to the file is: <a href="https://github.com/jpiedehierroa/files/blob/main/Libro1.csv" rel="nofollow noreferrer">https://github.com/jpiedehierroa/files/blob/main/Libro1.csv</a></p>
<pre><code>import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from pathlib import Path
my_csv = Path("C:/Usersjosep/Desktop/Libro1.csv")
df = pd.read_csv("Libro1.csv", sep=',')
# or load from github repo link
url = 'https://raw.githubusercontent.com/jpiedehierroa/files/main/Libro1.csv'
df = pd.read_csv(url)
# Prepare data
x_var = 'countries'
groupby_var = 'type'
df_agg = df.loc[:,[x_var, groupby_var]].groupby(groupby_var)
vals = [df[x_var].values.tolist() for i, df in df_agg]
# Draw
plt.figure(figsize=(10,10), dpi= 100)
colors= ("#CD7F32","silver","gold")
n, bins, patches = plt.hist(vals, df[x_var].unique().__len__(), stacked=True, density=False, color=colors[:len(vals)])
# Decoration
plt.legend(["bronze", "silver","gold"], loc="upper right")
plt.title(f"Histogram of medals achieved by ${x_var}$ colored by ${groupby_var}$ in Tokyo 2020", fontsize=18)
plt.text(2,80,"138")
plt.xlabel(x_var)
plt.ylabel("amount of medals by type")
plt.ylim(0, 130)
plt.xticks(ticks=bins, labels=np.unique(df[x_var]).tolist(), rotation=90, horizontalalignment='left')
plt.show()
</code></pre>
<p><img src="https://i.stack.imgur.com/8BOM1.png" alt="enter image description here" /></p>
<h2>Test Data</h2>
<ul>
<li>In case the link dies</li>
</ul>
<pre class="lang-py prettyprint-override"><code>countries,type
USA,gold
USA,gold
USA,gold
USA,gold
USA,gold
USA,gold
USA,gold
USA,gold
USA,gold
USA,gold
USA,gold
USA,gold
USA,gold
USA,gold
USA,gold
USA,gold
USA,gold
USA,gold
USA,gold
USA,gold
USA,gold
USA,gold
USA,gold
USA,gold
USA,gold
USA,gold
USA,gold
USA,gold
USA,gold
USA,gold
USA,gold
USA,gold
USA,gold
USA,gold
USA,gold
USA,gold
USA,gold
USA,gold
USA,gold
USA,silver
USA,silver
USA,silver
USA,silver
USA,silver
USA,silver
USA,silver
USA,silver
USA,silver
USA,silver
USA,silver
USA,silver
USA,silver
USA,silver
USA,silver
USA,silver
USA,silver
USA,silver
USA,silver
USA,silver
USA,silver
USA,silver
USA,silver
USA,silver
USA,silver
USA,silver
USA,silver
USA,silver
USA,silver
USA,silver
USA,silver
USA,silver
USA,silver
USA,silver
USA,silver
USA,silver
USA,silver
USA,silver
USA,silver
USA,silver
USA,silver
USA,bronze
USA,bronze
USA,bronze
USA,bronze
USA,bronze
USA,bronze
USA,bronze
USA,bronze
USA,bronze
USA,bronze
USA,bronze
USA,bronze
USA,bronze
USA,bronze
USA,bronze
USA,bronze
USA,bronze
USA,bronze
USA,bronze
USA,bronze
USA,bronze
USA,bronze
USA,bronze
USA,bronze
USA,bronze
USA,bronze
USA,bronze
USA,bronze
USA,bronze
USA,bronze
USA,bronze
USA,bronze
USA,bronze
China,gold
China,gold
China,gold
China,gold
China,gold
China,gold
China,gold
China,gold
China,gold
China,gold
China,gold
China,gold
China,gold
China,gold
China,gold
China,gold
China,gold
China,gold
China,gold
China,gold
China,gold
China,gold
China,gold
China,gold
China,gold
China,gold
China,gold
China,gold
China,gold
China,gold
China,gold
China,gold
China,gold
China,gold
China,gold
China,gold
China,gold
China,gold
China,silver
China,silver
China,silver
China,silver
China,silver
China,silver
China,silver
China,silver
China,silver
China,silver
China,silver
China,silver
China,silver
China,silver
China,silver
China,silver
China,silver
China,silver
China,silver
China,silver
China,silver
China,silver
China,silver
China,silver
China,silver
China,silver
China,silver
China,silver
China,silver
China,silver
China,silver
China,silver
China,bronze
China,bronze
China,bronze
China,bronze
China,bronze
China,bronze
China,bronze
China,bronze
China,bronze
China,bronze
China,bronze
China,bronze
China,bronze
China,bronze
China,bronze
China,bronze
China,bronze
China,bronze
Japan,gold
Japan,gold
Japan,gold
Japan,gold
Japan,gold
Japan,gold
Japan,gold
Japan,gold
Japan,gold
Japan,gold
Japan,gold
Japan,gold
Japan,gold
Japan,gold
Japan,gold
Japan,gold
Japan,gold
Japan,gold
Japan,gold
Japan,gold
Japan,gold
Japan,gold
Japan,gold
Japan,gold
Japan,gold
Japan,gold
Japan,gold
Japan,silver
Japan,silver
Japan,silver
Japan,silver
Japan,silver
Japan,silver
Japan,silver
Japan,silver
Japan,silver
Japan,silver
Japan,silver
Japan,silver
Japan,silver
Japan,silver
Japan,bronze
Japan,bronze
Japan,bronze
Japan,bronze
Japan,bronze
Japan,bronze
Japan,bronze
Japan,bronze
Japan,bronze
Japan,bronze
Japan,bronze
Japan,bronze
Japan,bronze
Japan,bronze
Japan,bronze
Japan,bronze
Japan,bronze
GB,gold
GB,gold
GB,gold
GB,gold
GB,gold
GB,gold
GB,gold
GB,gold
GB,gold
GB,gold
GB,gold
GB,gold
GB,gold
GB,gold
GB,gold
GB,gold
GB,gold
GB,gold
GB,gold
GB,gold
GB,gold
GB,gold
GB,silver
GB,silver
GB,silver
GB,silver
GB,silver
GB,silver
GB,silver
GB,silver
GB,silver
GB,silver
GB,silver
GB,silver
GB,silver
GB,silver
GB,silver
GB,silver
GB,silver
GB,silver
GB,silver
GB,silver
GB,silver
GB,bronze
GB,bronze
GB,bronze
GB,bronze
GB,bronze
GB,bronze
GB,bronze
GB,bronze
GB,bronze
GB,bronze
GB,bronze
GB,bronze
GB,bronze
GB,bronze
GB,bronze
GB,bronze
GB,bronze
GB,bronze
GB,bronze
GB,bronze
GB,bronze
GB,bronze
ROC,gold
ROC,gold
ROC,gold
ROC,gold
ROC,gold
ROC,gold
ROC,gold
ROC,gold
ROC,gold
ROC,gold
ROC,gold
ROC,gold
ROC,gold
ROC,gold
ROC,gold
ROC,gold
ROC,gold
ROC,gold
ROC,gold
ROC,gold
ROC,silver
ROC,silver
ROC,silver
ROC,silver
ROC,silver
ROC,silver
ROC,silver
ROC,silver
ROC,silver
ROC,silver
ROC,silver
ROC,silver
ROC,silver
ROC,silver
ROC,silver
ROC,silver
ROC,silver
ROC,silver
ROC,silver
ROC,silver
ROC,silver
ROC,silver
ROC,silver
ROC,silver
ROC,silver
ROC,silver
ROC,silver
ROC,silver
ROC,bronze
ROC,bronze
ROC,bronze
ROC,bronze
ROC,bronze
ROC,bronze
ROC,bronze
ROC,bronze
ROC,bronze
ROC,bronze
ROC,bronze
ROC,bronze
ROC,bronze
ROC,bronze
ROC,bronze
ROC,bronze
ROC,bronze
ROC,bronze
ROC,bronze
ROC,bronze
ROC,bronze
ROC,bronze
ROC,bronze
Australia,gold
Australia,gold
Australia,gold
Australia,gold
Australia,gold
Australia,gold
Australia,gold
Australia,gold
Australia,gold
Australia,gold
Australia,gold
Australia,gold
Australia,gold
Australia,gold
Australia,gold
Australia,gold
Australia,gold
Australia,silver
Australia,silver
Australia,silver
Australia,silver
Australia,silver
Australia,silver
Australia,silver
Australia,bronze
Australia,bronze
Australia,bronze
Australia,bronze
Australia,bronze
Australia,bronze
Australia,bronze
Australia,bronze
Australia,bronze
Australia,bronze
Australia,bronze
Australia,bronze
Australia,bronze
Australia,bronze
Australia,bronze
Australia,bronze
Australia,bronze
Australia,bronze
Australia,bronze
Australia,bronze
Australia,bronze
Australia,bronze
Netherlands,gold
Netherlands,gold
Netherlands,gold
Netherlands,gold
Netherlands,gold
Netherlands,gold
Netherlands,gold
Netherlands,gold
Netherlands,gold
Netherlands,gold
Netherlands,silver
Netherlands,silver
Netherlands,silver
Netherlands,silver
Netherlands,silver
Netherlands,silver
Netherlands,silver
Netherlands,silver
Netherlands,silver
Netherlands,silver
Netherlands,silver
Netherlands,silver
Netherlands,bronze
Netherlands,bronze
Netherlands,bronze
Netherlands,bronze
Netherlands,bronze
Netherlands,bronze
Netherlands,bronze
Netherlands,bronze
Netherlands,bronze
Netherlands,bronze
Netherlands,bronze
Netherlands,bronze
Netherlands,bronze
Netherlands,bronze
France,gold
France,gold
France,gold
France,gold
France,gold
France,gold
France,gold
France,gold
France,gold
France,gold
France,silver
France,silver
France,silver
France,silver
France,silver
France,silver
France,silver
France,silver
France,silver
France,silver
France,silver
France,silver
France,bronze
France,bronze
France,bronze
France,bronze
France,bronze
France,bronze
France,bronze
France,bronze
France,bronze
France,bronze
France,bronze
Germany,gold
Germany,gold
Germany,gold
Germany,gold
Germany,gold
Germany,gold
Germany,gold
Germany,gold
Germany,gold
Germany,gold
Germany,silver
Germany,silver
Germany,silver
Germany,silver
Germany,silver
Germany,silver
Germany,silver
Germany,silver
Germany,silver
Germany,silver
Germany,silver
Germany,bronze
Germany,bronze
Germany,bronze
Germany,bronze
Germany,bronze
Germany,bronze
Germany,bronze
Germany,bronze
Germany,bronze
Germany,bronze
Germany,bronze
Germany,bronze
Germany,bronze
Germany,bronze
Germany,bronze
Germany,bronze
Italy,gold
Italy,gold
Italy,gold
Italy,gold
Italy,gold
Italy,gold
Italy,gold
Italy,gold
Italy,gold
Italy,gold
Italy,silver
Italy,silver
Italy,silver
Italy,silver
Italy,silver
Italy,silver
Italy,silver
Italy,silver
Italy,silver
Italy,silver
Italy,bronze
Italy,bronze
Italy,bronze
Italy,bronze
Italy,bronze
Italy,bronze
Italy,bronze
Italy,bronze
Italy,bronze
Italy,bronze
Italy,bronze
Italy,bronze
Italy,bronze
Italy,bronze
Italy,bronze
Italy,bronze
Italy,bronze
Italy,bronze
Italy,bronze
Italy,bronze
</code></pre>
|
<ul>
<li>This is easier to implement as a stacked bar plot; as such, reshape the dataframe with <a href="https://pandas.pydata.org/docs/reference/api/pandas.crosstab.html" rel="nofollow noreferrer"><code>pandas.crosstab</code></a> and plot it using <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.plot.html" rel="nofollow noreferrer"><code>pandas.DataFrame.plot</code></a> with <code>kind='bar'</code> and <code>stacked=True</code>
<ul>
<li>This should not be implemented with <code>plt.hist</code> because that would be more convoluted; it's easier to use the pandas plot method directly.</li>
<li>Also a histogram is more appropriate when the x values are a continuous range of numbers, not discrete categorical values.</li>
</ul>
</li>
<li><code>ct.iloc[:, :-1]</code> selects all but the last column, <code>'tot'</code>, so only the medal columns are plotted as bars.</li>
<li>Use <a href="https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.bar_label.html" rel="nofollow noreferrer"><code>matplotlib.pyplot.bar_label</code></a> to add annotations
<ul>
<li><code>ax.bar_label(ax.containers[2], padding=3)</code> uses <code>label_type='edge'</code> by default, which results in annotating the edge with the cumulative sum (<code>'center'</code> annotates with the patch value), as shown in this <a href="https://stackoverflow.com/a/70280423/7758804">answer</a>.
<ul>
<li>The <code>[2]</code> in <code>ax.containers[2]</code> selects only the top containers to annotate with the cumulative sum. The <code>containers</code> are 0 indexed from the bottom.</li>
</ul>
</li>
<li>See this <a href="https://stackoverflow.com/a/67561982/7758804">answer</a> for additional details and examples</li>
<li>This <a href="https://stackoverflow.com/a/64202669/7758804">answer</a> shows how to do annotations the old way, without <code>.bar_label</code>. I do not recommend it.</li>
<li>This <a href="https://stackoverflow.com/a/68785457/7758804">answer</a> shows how to customize labels to prevent annotations for values under a given size.</li>
</ul>
</li>
<li><strong>Tested in <code>python 3.10</code>, <code>pandas 1.3.5</code>, <code>matplotlib 3.5.1</code></strong></li>
</ul>
<h2>Load and Shape the DataFrame</h2>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
# load from github repo link
url = 'https://raw.githubusercontent.com/jpiedehierroa/files/main/Libro1.csv'
df = pd.read_csv(url)
# reshape the dataframe
ct = pd.crosstab(df.countries, df.type)
# total medals per country, which is necessary to sort the bars
ct['tot'] = ct.sum(axis=1)
# sort
ct = ct.sort_values(by='tot', ascending=False)
# display(ct)
type bronze gold silver tot
countries
USA 33 39 41 113
China 18 38 32 88
ROC 23 20 28 71
GB 22 22 21 65
Japan 17 27 14 58
Australia 22 17 7 46
Italy 20 10 10 40
Germany 16 10 11 37
Netherlands 14 10 12 36
France 11 10 12 33
</code></pre>
<h2>Plot</h2>
<pre class="lang-py prettyprint-override"><code>colors = ("#CD7F32", "silver", "gold")
cd = dict(zip(ct.columns, colors))
# plot the medals columns
title = 'Country Medal Count for Tokyo 2020'
ax = ct.iloc[:, :-1].plot(kind='bar', stacked=True, color=cd, title=title,
figsize=(12, 5), rot=0, width=1, ec='k' )
# annotate each container with individual values
for c in ax.containers:
ax.bar_label(c, label_type='center')
# annotate the top containers with the cumulative sum
ax.bar_label(ax.containers[2], padding=3)
# pad the spacing between the number and the edge of the figure
ax.margins(y=0.1)
</code></pre>
<p><a href="https://i.stack.imgur.com/lBI7E.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/lBI7E.png" alt="enter image description here" /></a></p>
<ul>
<li>An alternative way to annotate the top with the sum is to use the <code>'tot'</code> column for custom labels, but as shown, this is not necessary.</li>
</ul>
<pre class="lang-py prettyprint-override"><code>labels = ct.tot.tolist()
ax.bar_label(ax.containers[2], labels=labels, padding=3)
</code></pre>
|
python|pandas|matplotlib|bar-chart
| 1
|
375,637
| 69,806,603
|
NumPy converting mixed data types in 2D arrays via astype
|
<p>I have been working on the <a href="https://www.ncdc.noaa.gov/ibtracs/index.php?name=ib-v4-access" rel="nofollow noreferrer">IBTrACS dataset</a> lately and would like to convert it to a 2D numpy array with the correct data types. I went through some filtering and selected the subset of data that I need, which is a 2D array with the following columns:</p>
<pre><code>Column number - Data type
0 - integer (season)
1 - string (name)
2 - timestamp
3-4 - float-typed columns
5-20 - other integer-typed columns
</code></pre>
<p>I have also subsequently filled in empty values with placeholders, such as <code>None</code> (NaN) for floats and <code>-99999</code> for integers. When I used <code>astype</code> to make numpy recognize the data types in the array, it apparently failed to process them column by column and tried to cast strings to integers even if there was no need.</p>
<p>The following is an MCVE.</p>
<p><strong>Code:</strong></p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import csv
from datetime import datetime
import pytz
# reading the dataset
with open('ibtracs.WP.list.v04r00.csv', 'r') as file:
data = list(csv.reader(file, delimiter=','))
# remove CSV headers
ds = np.array(data[2:])
# selecting subsets of the data
mask_jtwc = ds[:,17] == 'jtwc_wp'
ds_jtwc = ds[mask_jtwc,:]
# remove unnecessary columns
columns_to_drop = [3,4] + list(range(8,13)) + [14,15,17,18,21,22, 25] + list(range(38,161))
ds_jtwc = np.delete(ds_jtwc, columns_to_drop, 1)
# further filtering
mask_nature = ds_jtwc[:,5] == 'TS'
ds_jtwc = ds_jtwc[mask_nature,:]
mask_tracktype = ds_jtwc[:,6] == 'main'
ds_jtwc = ds_jtwc[mask_tracktype,:]
mask_iflag = [True if item[0] != '_' else False for item in ds_jtwc[:,7]]
ds_jtwc = ds_jtwc[mask_iflag,:]
# remove columns that helped us perform the last step but not needed any more
columns_to_drop = [0,2,5,6,7]
ds_jtwc = np.delete(ds_jtwc, columns_to_drop, 1)
columns_to_drop = list(range(8, ds_jtwc.shape[1])) # representative columns only
ds_jtwc = np.delete(ds_jtwc, columns_to_drop, 1)
# manual processing to handle empty data and timestamps
dataset = ds_jtwc.tolist()
converted_set = []
for row in dataset:
converted_row = []
for i in range(len(row)):
if i == 1: # string type
converted_row.append(str(row[i]))
elif i == 2: # timestamp
timestamp = datetime.strptime(row[i], '%Y-%m-%d %H:%M:%S')
# timestamp = timestamp.replace(tzinfo=pytz.UTC) # no need timezones for modern numpy
converted_row.append(timestamp)
elif i == 3 or i == 4: # float type
if row[i] == " ":
converted_row.append(None) # NaN
else: converted_row.append(float(row[i]))
else: # default to integers
if row[i] == " ":
converted_row.append(-99999) # placeholder
else: converted_row.append(int(row[i]))
converted_set.append(converted_row)
dataset = np.array(converted_set)
# get sample data for reference
random_index = np.random.choice(dataset.shape[0], size=1, replace=False)
print("Sample data (row {0}):".format(random_index))
print(dataset[random_index, :])
print("Sample data (row 1):")
print(dataset[0])
### Code in question ###
print(dataset)
print(dataset.dtype)
dataset = dataset.astype([
('SEASON', 'i'),
('NAME', 'S'),
('ISO_TIME', 'datetime64[s]'),
('USA_LAT', 'f'),('USA_LON', 'f'),
('USA_WIND', 'i'),('USA_PRES', 'i'),
('USA_R34_NE', 'i')
])
print(dataset)
print(dataset.dtype)
</code></pre>
<p><strong>Output:</strong></p>
<pre><code>Sample data (row [38692]):
[[1999 'MAGGIE' datetime.datetime(1999, 6, 8, 6, 0) 23.6 111.0 20 -99999
-99999]]
Sample data (row 1):
[1945 'ANN' datetime.datetime(1945, 4, 19, 12, 0) 9.5 160.3 25 -99999
-99999]
[[1945 'ANN' datetime.datetime(1945, 4, 19, 12, 0) ... 25 -99999 -99999]
[1945 'ANN' datetime.datetime(1945, 4, 19, 18, 0) ... 30 -99999 -99999]
[1945 'ANN' datetime.datetime(1945, 4, 20, 0, 0) ... 35 -99999 -99999]
...
[2019 'PHANFONE' datetime.datetime(2019, 12, 28, 12, 0) ... 25 1009
-99999]
[2019 'PHANFONE' datetime.datetime(2019, 12, 28, 18, 0) ... 20 1011
-99999]
[2019 'PHANFONE' datetime.datetime(2019, 12, 29, 0, 0) ... 20 1010
-99999]]
object
Traceback (most recent call last):
File "D:\path\Documents\Programming\path\Dataset\_forstackoverflow.py", line 64, in <module>
dataset = dataset.astype([
ValueError: invalid literal for int() with base 10: 'ANN'
</code></pre>
<p>If I do not perform the <code>astype</code> step, the data type will turn out to be <code>object</code>, but I believe it will be more convenient in the future if the data are already of the right types. I have also tried to specify sizes, but it gave me an identical error.</p>
<p><strong>Code:</strong></p>
<pre><code>dataset = dataset.astype([
('SEASON', 'i4'),
('NAME', 'U16'),
('ISO_TIME', 'datetime64[s]'),
('USA_LAT', 'f'),('USA_LON', 'f'),
('USA_WIND', 'i4'),('USA_PRES', 'i4'),
('USA_R34_NE', 'i4')
])
</code></pre>
<p>I wonder what is wrong with or missing in my <code>astype</code> call. Thanks in advance!</p>
|
<p>To illustrate my last comment: <code>astype</code> does not map the columns of a 2D array onto the fields of a structured dtype; it tries to cast the individual elements, which is why the name strings end up being fed to <code>int()</code>. Build the structured array from a list of tuples (or use <code>unstructured_to_structured</code>) instead.</p>
<pre><code>In [9]: arr = np.array([[1,2,'word'],[3,4,'other']])
In [10]: arr
Out[10]:
array([['1', '2', 'word'],
['3', '4', 'other']], dtype='<U21')
In [11]: arr.astype('i,i,U10')
Traceback (most recent call last):
File "<ipython-input-11-3800d012c681>", line 1, in <module>
arr.astype('i,i,U10')
ValueError: invalid literal for int() with base 10: 'word'
</code></pre>
<p>But if I make a list of tuples:</p>
<pre><code>In [14]: alist = [tuple(row) for row in arr]
In [15]: alist
Out[15]: [('1', '2', 'word'), ('3', '4', 'other')]
In [16]: np.array(alist, dtype='i,i,U10')
Out[16]:
array([(1, 2, 'word'), (3, 4, 'other')],
dtype=[('f0', '<i4'), ('f1', '<i4'), ('f2', '<U10')])
</code></pre>
<p>or</p>
<pre><code>In [17]: import numpy.lib.recfunctions as rf
In [19]: rf.unstructured_to_structured(arr, np.dtype('i,i,U10'))
Out[19]:
array([(1, 2, 'word'), (3, 4, 'other')],
dtype=[('f0', '<i4'), ('f1', '<i4'), ('f2', '<U10')])
</code></pre>
|
python|numpy
| 1
|
375,638
| 43,098,706
|
TFLearn/Tensorflow: Proper way to save an encoder extracted from an autoencoder
|
<p>This issue was originally posted on the tflearn github repo, but I haven't had any luck there:
<a href="https://github.com/tflearn/tflearn/issues/682" rel="nofollow noreferrer">https://github.com/tflearn/tflearn/issues/682</a></p>
<p>I'm trying to save an encoder model that represents the middle layer from an autoencoder. Using the MNIST example, when I run the script found here:</p>
<p><a href="https://github.com/tflearn/tflearn/blob/master/examples/images/autoencoder.py" rel="nofollow noreferrer">https://github.com/tflearn/tflearn/blob/master/examples/images/autoencoder.py</a></p>
<p>and then attempt to save the encoding_model using</p>
<pre><code>encoding_model = tflearn.DNN(encoder, session=model.session)
encoding_model.save('encoder.tfl')
</code></pre>
<p>I get the following error message:</p>
<blockquote>
<p>Traceback (most recent call last): File "", line 1, in File
"/usr/local/lib/python2.7/dist-packages/tflearn/models/dnn.py", line
260, in save self.trainer.save(model_file) File
"/usr/local/lib/python2.7/dist-packages/tflearn/helpers/trainer.py",
line 376, in save self.saver.save(self.session, model_file,
global_step=global_step) File
"/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/saver.py",
line 1363, in save {self.saver_def.filename_tensor_name:
checkpoint_file}) File
"/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py",
line 767, in run run_metadata_ptr) File
"/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py",
line 965, in _run feed_dict_string, options, run_metadata) File
"/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py",
line 1015, in _do_run target_list, options, run_metadata) File
"/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py",
line 1035, in _do_call raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.FailedPreconditionError:
Attempting to use uninitialized value Global_Step_1 [[Node:
Global_Step_1/_96 = _SendT=DT_FLOAT, client_terminated=false,
recv_device="/job:localhost/replica:0/task:0/cpu:0",
send_device="/job:localhost/replica:0/task:0/gpu:0",
send_device_incarnation=1, tensor_name="edge_31_Global_Step_1",
_device="/job:localhost/replica:0/task:0/gpu:0"]]</p>
</blockquote>
<p>I think the ADAM optimizer variables are not initialized. What's the proper way to save a model like this?</p>
|
<p>In plain TensorFlow you don't save into a <code>.tfl</code> file. Instead, create a saver:</p>
<pre><code>saver = tf.train.Saver()
</code></pre>
<p>and then save the session into a <code>.ckpt</code> checkpoint file.</p>
<p>Check this tutorial on saving:
<a href="https://www.tensorflow.org/programmers_guide/saved_model" rel="nofollow noreferrer">https://www.tensorflow.org/programmers_guide/saved_model</a></p>
|
python|tensorflow|deep-learning|tflearn
| 0
|
375,639
| 43,380,484
|
TensorFlow: Linear Regression with multiple inputs returns NaNs
|
<p>This is my first attempt at TensorFlow: I am building a <strong>Linear Regression</strong> model with <strong>multiple inputs</strong>.</p>
<p>The problem is that <strong>the result is always NaN</strong>, and I suspect that it is because I am a complete noob with matrix operations using numpy and tensorflow (matlab background hehe).</p>
<p>Here is the code:</p>
<pre><code>import numpy as np
import tensorflow as tf
N_INP = 2
N_OUT = 1
# Model params
w = tf.Variable(tf.zeros([1, N_INP]), name='w')
b = tf.Variable(tf.zeros([1, N_INP]), name='b')
# Model input and output
x = tf.placeholder(tf.float32, [None, N_INP], name='x')
y = tf.placeholder(tf.float32, [None, N_OUT], name='y')
linear_model = tf.reduce_sum(x * w + b, axis=1, name='out')
# Loss as sum(error^2)
loss = tf.reduce_sum(tf.square(linear_model - y), name='loss')
# Create optimizer
optimizer = tf.train.GradientDescentOptimizer(0.01)
train = optimizer.minimize(loss, name='train')
# Define training data
w_real = np.array([-1, 4])
b_real = np.array([1, -5])
x_train = np.array([[1, 2, 3, 4], [0, 0.5, 1, 1.5]]).T
y_train = np.sum(x_train * w_real + b_real, 1)[np.newaxis].T
print('Real X:\n', x_train)
print('Real Y:\n', y_train)
# Create session and init parameters
sess = tf.Session()
sess.run(tf.global_variables_initializer())
# Training loop
train_data = {x: x_train, y: y_train}
for i in range(1000):
sess.run(train, train_data)
# Eval solution
w_est, b_est, curr_loss, y_pred = sess.run([w, b, loss, linear_model], train_data)
print("w: %s b: %s loss: %s" % (w_est, b_est, curr_loss))
print("y_pred: %s" % (y_pred,))
</code></pre>
<p>And here is the output:</p>
<pre><code>Real X:
[[ 1. 0. ]
[ 2. 0.5]
[ 3. 1. ]
[ 4. 1.5]]
Real Y:
[[-5.]
[-4.]
[-3.]
[-2.]]
w: [[ nan nan]] b: [[ nan nan]] loss: nan
y_pred: [ nan nan nan nan]
</code></pre>
|
<p>You need to add <code>keep_dims=True</code> inside your definition of <code>linear_model</code>. That is,</p>
<pre><code>linear_model = tf.reduce_sum(x * w + b, axis=1, name='out',keep_dims=True)
</code></pre>
<p>The reason is that otherwise the result is "flattened", and you cannot subtract <code>y</code> from it. </p>
<p>For example,</p>
<pre><code>'x' is [[1,2,3],
[4,5,6]]
tf.reduce_sum(x, axis=1) is [6, 15]
tf.reduce_sum(x, axis=1, keep_dims=True) is [[6], [15]]
</code></pre>
|
python|numpy|tensorflow
| 1
|
375,640
| 43,232,352
|
Create simple bar chart from data frame with many columns
|
<p>I would like to create a simple bar chart from a pandas data frame that looks like this:</p>
<p>A a1 a2 a3 a4 a5 a6...</p>
<p>B b1 b2 b3 b4 b5 b6...</p>
<p>C c1 c2 c3 c4 c5 c6...</p>
<p>The values are float numbers and there are over 2000 columns.
The chart should have 3 bars in total, with A, B, C on the x axis.</p>
<p>Thanks for your help!</p>
|
<p>You can <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.sum.html" rel="nofollow noreferrer"><code>sum</code></a> first and then plot by <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.plot.bar.html" rel="nofollow noreferrer"><code>Series.plot.bar</code></a>:</p>
<pre><code>df.sum(axis=1).plot.bar()
</code></pre>
<p>Sample:</p>
<pre><code>df = pd.DataFrame({1:[1,2,3],
2:[4,5,6],
3:[7,8,9]}, index=list('ABC')) * 0.8
print (df)
1 2 3
A 0.8 3.2 5.6
B 1.6 4.0 6.4
C 2.4 4.8 7.2
print (df.sum(axis=1))
A 9.6
B 12.0
C 14.4
dtype: float64
df.sum(axis=1).plot.bar()
</code></pre>
<p><a href="https://i.stack.imgur.com/igfLd.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/igfLd.png" alt="graph"></a></p>
|
python|pandas|matplotlib|bar-chart
| 3
|
375,641
| 43,125,962
|
Pandas column contents as index
|
<p>I am trying to set a new pandas column using an existing pandas column as a set of indices into a separate list, but I keep getting the following error:</p>
<pre><code>TypeError: list indices must be integers or slices, not Series
</code></pre>
<p>See code below:</p>
<pre><code>import pandas as pd
monthvalues=[1,4,9,16,25,36,49,64,81,100,121,144]
df=pd.DataFrame([1,4,7,9], columns=['month'])
df['value']=monthvalues[df['month']-1]
</code></pre>
<p>Expected result</p>
<pre><code> "month" "value"
0 1 1
1 4 16
2 7 49
3 9 81
</code></pre>
<p>Note that this is just an example, I'm not actually trying to square the month digit in reality.</p>
|
<p>It's because <code>df['month']</code> returns a Pandas Series, not a list as I suspect you expect (although it still wouldn't work if it were a list).</p>
<p>I'm not sure if there's a neater way, but try this:</p>
<pre><code>df['value'] = [monthvalues[x] for x in df['month']-1]
</code></pre>
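<p>A vectorized alternative (my own sketch, not part of the answer above): convert the lookup list to a NumPy array, which accepts the whole Series of positions at once:</p>
<pre><code>import numpy as np
import pandas as pd

monthvalues = [1, 4, 9, 16, 25, 36, 49, 64, 81, 100, 121, 144]
df = pd.DataFrame([1, 4, 7, 9], columns=['month'])

# fancy indexing with (month - 1) pulls all matching values in one step
df['value'] = np.array(monthvalues)[df['month'] - 1]
print(df)
</code></pre>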
|
python|pandas
| 1
|
375,642
| 43,211,668
|
Check if a row in a pandas dataframe exists in other dataframes and assign points depending on which dataframes it also belongs to
|
<p>In <a href="https://stackoverflow.com/questions/38855204/check-if-a-row-in-one-data-frame-exist-in-another-data-frame">this</a> question this problem is solved partially to check if a row in a dataframe exists in another one.</p>
<p>What I have is many dataframes df1, df2, df3, df4 etc.
which are subsets of a larger dataframe df.</p>
<p>Now, for each row in df, I want to create a new column "RATING", and I want to assign a value.</p>
<p>For example if row1 in df is contained in df1 add 50 points, if it is also contained in df2 add another 30 points, in df3 add 40 points, in df4 subtract 10 points, etc.</p>
<p>row1 then will have a new column "RATING" with the total.
Then do the same for row2, etc.</p>
<p>How can I accomplish this?</p>
|
<p>Apply the exact methodology of the other question you are pointing at to get one additional boolean column per dataframe. You will end up with n extra columns being Exist_in_df1, Exist_in_df2, ..., Exist_in_dfn</p>
<p>Now you have a simple boolean matrix to work with, against which you can apply your rating logic; see the sketch below.</p>
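<p>A minimal sketch of that idea (the frames <code>df1</code>/<code>df2</code> and the point values here are hypothetical stand-ins for the real subsets; the left merge with <code>indicator=True</code> is just one way to build the boolean columns):</p>
<pre><code>import pandas as pd

# hypothetical data: df is the full frame, df1/df2 stand in for the subsets
df = pd.DataFrame({'a': [1, 2, 3, 4], 'b': ['w', 'x', 'y', 'z']})
df1 = df.iloc[[0, 2]]   # membership here is worth +50
df2 = df.iloc[[2, 3]]   # membership here is worth +30

points = {'in_df1': 50, 'in_df2': 30}
subsets = {'in_df1': df1, 'in_df2': df2}

df['RATING'] = 0
for name, sub in subsets.items():
    # how='left' keeps df's row order; '_merge' == 'both' flags rows also present in sub
    flag = df.merge(sub.drop_duplicates(), how='left', indicator=True)['_merge'].eq('both')
    df[name] = flag.values                      # the boolean "Exist_in_..." column
    df['RATING'] += flag.values * points[name]

print(df)
</code></pre>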
|
python|pandas|dataframe
| 0
|
375,643
| 43,302,821
|
Python: splitting trajectories into steps
|
<p>I have trajectories created from moves between clusters such as these:</p>
<pre><code>user_id,trajectory
11011,[[[86], [110], [110]]
2139671,[[89], [125]]
3945641,[[36], [73], [110], [110]]
10024312,[[123], [27], [97], [97], [97], [110]]
14270422,[[0], [110], [174]]
14283758,[[110], [184]]
14317445,[[50], [88]]
14331818,[[0], [22], [36], [131], [131]]
14334591,[[107], [19]]
14373703,[[35], [97], [97], [97], [17], [58]]
</code></pre>
<p>I would like to split the trajectories with multiple moves into individual segments, but I am unsure how.</p>
<p><strong>Example:</strong> </p>
<pre><code>14373703,[[35], [97], [97], [97], [17], [58]]
</code></pre>
<p><strong>into</strong> </p>
<pre><code>14373703,[[35,97], [97,97], [97,17], [17,58]]
</code></pre>
<p>The purpose is to then use these as edges in NetworkX to analyse them as a graph and identify dense movements (edges) between the individual clusters (nodes).</p>
<p>This is the code I've used to create the trajectories initially: </p>
<pre><code># Import Data
data = pd.read_csv('G:\Programming Projects\GGS 681\dmv_tweets_20170309_20170314_cluster_outputs.csv', delimiter=',', engine='python')
#print len(data),"rows"
# Create Data Fame
df = pd.DataFrame(data, columns=['user_id','timestamp','latitude','longitude','cluster_labels'])
# Filter Data Frame by count of user_id
filtered = df.groupby('user_id').filter(lambda x: x['user_id'].count()>1)
#filtered.to_csv('G:\Programming Projects\GGS 681\dmv_tweets_20170309_20170314_final_filtered.csv', index=False, header=True)
# Get a list of unique user_id values
uniqueIds = np.unique(filtered['user_id'].values)
# Get the ordered (by timestamp) coordinates for each user_id
output = [[id,filtered.loc[filtered['user_id']==id].sort_values(by='timestamp')[['cluster_labels']].values.tolist()] for id in uniqueIds]
# Save outputs as csv
outputs = pd.DataFrame(output)
#print outputs
headers = ['user_id','trajectory']
outputs.to_csv('G:\Programming Projects\GGS 681\dmv_tweets_20170309_20170314_cluster_moves.csv', index=False, header=headers)
</code></pre>
<p>If splitting this way is possible, can it be completed during the processing, as opposed to after the fact? I'd like to perform it while creating, to eliminate any postprocessing.</p>
|
<p>My solution uses the magic of pandas' <code>.apply()</code> function. I believe this should work (I tested it on your sample data). Notice that I also added extra data points at the end, for the case when there is only a single move and when there is no move.</p>
<pre><code># Python3.5
import pandas as pd
# Sample data from post
ids = [11011,2139671,3945641,10024312,14270422,14283758,14317445,14331818,14334591,14373703,10000,100001]
traj = [[[86], [110], [110]],[[89], [125]],[[36], [73], [110], [110]],[[123], [27], [97], [97], [97], [110]],[[0], [110], [174]],[[110], [184]],[[50], [88]],[[0], [22], [36], [131], [131]],[[107], [19]],[[35], [97], [97], [97], [17], [58]],[10],[]]
# Sample frame
df = pd.DataFrame({'user_ids':ids, 'trajectory':traj})
def f(x):
# Creates edges given list of moves
if len(x) <= 1: return x
s = [x[i]+x[i+1] for i in range(len(x)-1)]
return s
df['edges'] = df['trajectory'].apply(lambda x: f(x))
</code></pre>
<p>Output:</p>
<pre><code>print(df['edges'])
edges
0 [[86, 110], [110, 110]]
1 [[89, 125]]
2 [[36, 73], [73, 110], [110, 110]]
3 [[123, 27], [27, 97], [97, 97], [97, 97], [97,...
4 [[0, 110], [110, 174]]
5 [[110, 184]]
6 [[50, 88]]
7 [[0, 22], [22, 36], [36, 131], [131, 131]]
8 [[107, 19]]
9 [[35, 97], [97, 97], [97, 97], [97, 17], [17, ...
10 [10]
11 []
</code></pre>
<p>As far as where you can put this in your pipeline - just put it right after you get your <code>trajectory</code> column (whether that's after you load the data, or after you do whatever filtering you require).</p>
|
python|pandas|graph|networkx
| 2
|
375,644
| 43,370,069
|
Converting xy co ords of Geotiff to numpy array positions in python
|
<p>I have geotiff file which I have read into an numpy array as described in the link below:</p>
<p><a href="https://stackoverflow.com/questions/7569553/working-with-tiffs-import-export-in-python-using-numpy">Working with TIFFs (import, export) in Python using numpy</a></p>
<p>The size of the Geotiff array that I have is (465,465). I have acquired the metadata of the file using gdalinfo, and it uses WGS84 as its CRS.</p>
<p>What I wish to do with the file is to translate the x/y lat/lon co-ordinates that I see in QGIS and gdalinfo into actual positions of points in the imported numpy array. How would I go about doing this?</p>
|
<p>You need to use the geotransform, from an opened GDAL dataset you can get it with:</p>
<p><code>gt = ds.GetGeoTransform()</code></p>
<p>From the GDAL documentation:</p>
<blockquote>
<p>The affine transform consists of six coefficients returned by
GDALDataset::GetGeoTransform() which map pixel/line coordinates into
georeferenced space using the following relationship:</p>
<pre><code>Xgeo = GT(0) + Xpixel*GT(1) + Yline*GT(2)
Ygeo = GT(3) + Xpixel*GT(4) + Yline*GT(5)
</code></pre>
</blockquote>
<p><a href="https://gdal.org/user/raster_data_model.html" rel="nofollow noreferrer">https://gdal.org/user/raster_data_model.html</a></p>
<p>If your raster is not rotated, it's fairly simple: just subtract the origin and divide by the resolution (see the sketch below).</p>
<p>There are several libraries which can perform such an affine transformation, for example the aptly named 'affine' library. A detailed explanation on its usage can be found at:</p>
<p><a href="http://www.perrygeo.com/python-affine-transforms.html" rel="nofollow noreferrer">http://www.perrygeo.com/python-affine-transforms.html</a></p>
|
python|arrays|numpy|gdal|geotiff
| 0
|
375,645
| 43,196,046
|
Plotting dictionary values into multi_line / timeseries Bokeh chart
|
<p><em>Note from maintainers: This question is about the obsolete <code>bokeh.charts</code> API removed years ago. For information on plotting with modern Bokeh, including timeseries, see:</em></p>
<p><a href="https://docs.bokeh.org/en/latest/docs/user_guide/plotting.html" rel="nofollow noreferrer">https://docs.bokeh.org/en/latest/docs/user_guide/plotting.html</a></p>
<hr>
<hr>
<p>I have a defined dictionary of values where key is a date in a form of string and values are an array of floats. </p>
<p>Dictionary looks like this:</p>
<pre><code>dict =
{'2017-03-23': [1.07874, 1.07930, 1.07917, 1.07864,],
'2017-03-27': [1.08382, 1.08392, 1.08410, 1.08454],
'2017-03-24': [1.07772, 1.07721, 1.07722, 1.07668]}
</code></pre>
<p>I want to display each date as a separate line on a Bokeh line_chart. Since the dates interval will change over time, I do not want to simply define p1.line, p2.line, p3.line (a static set) for each date because the amount of plotted dates will vary over time.</p>
<p>I have tried to follow tutorials here: <a href="http://docs.bokeh.org/en/0.9.3/docs/user_guide/charts.html" rel="nofollow noreferrer">http://docs.bokeh.org/en/0.9.3/docs/user_guide/charts.html</a> but I keep struggling and getting errors.</p>
<p>Here is my code:</p>
<pre><code>#input dates at this occasion
dates = ['2017-03-27','2017-03-24', '2017-03-23']
#dataframe is taken from input and contains columns date,time,close and other columns that I am not using
df
#I create a dictionary of dataframe in the structure described above
dict = {k: list(v) for k, v in df.groupby("date")["close"]}
#i want to plot chart
output_file("chart2.html")
p = figure(title="Dates line charts", x_axis_label='Index', y_axis_label='Price')
p = TimeSeries(dict, index='Index', legend=True, title="FX", ylabel='Price Prices')
show(p)
</code></pre>
<p>I am getting this error:</p>
<p><em>AttributeError: unexpected attribute 'index' to Chart, possible attributes are above, background_fill_alpha, background_fill_color, below, border_fill_alpha, border_fill_color, css_classes, disabled, extra_x_ranges, extra_y_ranges, h_symmetry, height, hidpi, inner_height, inner_width, js_callbacks, left, lod_factor, lod_interval, lod_threshold, lod_timeout, min_border, min_border_bottom, min_border_left, min_border_right, min_border_top, name, outline_line_alpha, outline_line_cap, outline_line_color, outline_line_dash, outline_line_dash_offset, outline_line_join, outline_line_width, plot_height, plot_width, renderers, right, sizing_mode, tags, title, title_location, tool_events, toolbar, toolbar_location, toolbar_sticky, v_symmetry, webgl, width, x_mapper_type, x_range, xlabel, xscale, y_mapper_type, y_range, ylabel or yscale</em></p>
<p>Thank you for the help.</p>
|
<p><em>Note from maintainers: This question is about the obsolete <code>bokeh.charts</code> API removed years ago. For information on plotting with modern Bokeh, including timeseries, see:</em></p>
<p><a href="https://docs.bokeh.org/en/latest/docs/user_guide/plotting.html" rel="nofollow noreferrer">https://docs.bokeh.org/en/latest/docs/user_guide/plotting.html</a></p>
<hr />
<hr />
<p>You are looking at very old documentation (0.9.3). The latest documentation (0.12.4) for bokeh Timeseries <a href="http://docs.bokeh.org/en/0.12.14/docs/reference/charts.html#timeseries" rel="nofollow noreferrer">can be found here</a>.</p>
<p>As you can see, Timeseries no longer accepts an <code>index</code> parameter. The available parameters are</p>
<blockquote>
<p><strong>data</strong> (list(list), numpy.ndarray, pandas.DataFrame, list(pd.Series)) –
a 2d data source with columns of data for each stepped line.</p>
<p><strong>x</strong> (str or
list(str), optional) – specifies variable(s) to use for x axis</p>
<p><strong>y</strong> (str
or list(str), optional) – specifies variable(s) to use for y axis</p>
<p><strong>builder_type</strong> (str or Builder, optional) – the type of builder to use
to produce the renderers. Supported options are ‘line’, ‘step’, or
‘point’.</p>
</blockquote>
<p>Just follow the example given in the most recent documentation and you should not run into the same problem.</p>
|
python|python-3.x|pandas|dictionary|bokeh
| 0
|
375,646
| 43,257,217
|
Why does wrong indentation cause different behavior?
|
<p>I cannot understand why this happens.
First, I wrote:</p>
<pre><code>import urllib.request
from bs4 import BeautifulSoup
import time
import os
def download_image(url,name):
path = "./scrape_image/"
imagename = str(name) + ".jpg"
if not os.path.exists(path):
os.makedirs(path)
print(path)
urllib.request.urlretrieve(url,path+imagename)
url = "https://api.XXXkeyword=YYY&limit=1000"
response = urllib.request.urlopen(url)
rss = response.read().decode("utf-8")
soup = BeautifulSoup(rss, "xml")
name=0
for s in soup.find_all("photo"):
url = s.find_all("image_url")[0].string
name+=1
download_image(url, name)
</code></pre>
<p>By running this code, I can get only 1 image from the API, but the correct code should get 1000 images. I fixed the indentation in the first code, so my code became:</p>
<pre><code>import urllib.request
from bs4 import BeautifulSoup
import time
import os
def download_image(url,name):
path = "./image/"
imagename = str(name) + ".jpg"
if not os.path.exists(path):
os.makedirs(path)
print(path)
urllib.request.urlretrieve(url, path+imagename)
time.sleep(1)
url = "https://api.XXXkeyword=YYY&limit=1000"
response = urllib.request.urlopen(url)
rss = response.read().decode("utf-8")
soup = BeautifulSoup(rss, "xml")
name = 0
for s in soup.find_all("photo"):
url = s.find_all("image_url")[0].string
name+=1
download_image(url,name)
</code></pre>
<p>Now I can get 1000 images from the API, but I cannot understand why fixing the indentation made the difference. Please give me an explanation.</p>
|
<p>Because in the first example you're only getting the image if your condition passes:</p>
<pre><code>if not os.path.exists(path):
</code></pre>
<p>And that condition will only pass <em>once</em> because you immediately create the path:</p>
<pre><code>os.makedirs(path)
</code></pre>
<p>For every other iteration of the loop, the condition is false. So the code <em>within</em> the conditional block doesn't execute.</p>
<p>Basically, an <code>if</code> block only executes if the condition is true. When you move your code <em>out</em> of the <code>if</code> block, it always executes regardless of the condition.</p>
|
python-3.x|tensorflow
| 0
|
375,647
| 43,130,439
|
pandas: replace NaN with the last non-NaN value in column
|
<p>I have an excel file which lists basketball teams and the players on each team. The first row for a new team states the team name in column 0 and a player on that team in column 1. The next row simply has a player on that team in column 1 (nothing in column 0 as the team is implied from the last stated team). This is repeated for every team.</p>
<pre><code>Warriors Stephen Curry
- Klay Thompson
- Kevin Durant
Clippers Chris Paul
- Blake Griffen
- JJ Redick
Raptors Kyle Lowry
- Demar Derozan
</code></pre>
<p>I'm importing the data into a pandas dataframe and counting the number of players on each team.</p>
<pre><code>import pandas as pd
df = read_excel('data.xlsx')
print(df)
Team Player
0 Warriors Stephen Curry
1 NaN Klay Thompson
2 NaN Kevin Durant
3 Clippers Chris Paul
4 NaN Blake Griffen
5 NaN JJ Redick
6 Raptors Kyle Lowry
7 NaN Demar Derozan
</code></pre>
<p>Is there any way I can replace <code>NaN</code> with the appropriate team name (I know I just need to fill in the empty spots in the excel file, but it looks much cleaner if I handle this on the import or via pandas)? I imagine I need to iterate through the dataframe, store the team name if it's not <code>NaN</code>, and replace <code>NaN</code> with the currently stored team name until a new team arises.</p>
<p>If you don't know basketball, my dataframe should look like this when all is said and done:</p>
<pre><code> Team Player
0 Warriors Stephen Curry
1 Warriors Klay Thompson
2 Warriors Kevin Durant
3 Clippers Chris Paul
4 Clippers Blake Griffen
5 Clippers JJ Redick
6 Raptors Kyle Lowry
7 Raptors Demar Derozan
</code></pre>
|
<p>You can do this using the <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.fillna.html" rel="noreferrer"><code>fillna()</code></a> method on the dataframe. The <code>method='ffill'</code> tells it to fill forward with the last valid value.</p>
<pre><code>df.fillna(method='ffill')
</code></pre>
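<p>A small follow-up sketch for the frame in the question: restrict the fill to the <code>Team</code> column and assign the result back, since <code>fillna</code> is not in-place by default.</p>
<pre><code># only the Team column needs forward-filling; Player stays untouched
df['Team'] = df['Team'].fillna(method='ffill')
</code></pre>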
|
python|excel|pandas|missing-data
| 16
|
375,648
| 43,147,267
|
Python Pandas: selecting rows based on criteria
|
<p>I have a pandas DataFrame in the following format:</p>
<pre><code>df.head()
y y_pred
599 0 0
787 9 9
47 2 2
1237 1 1
1069 6 6
</code></pre>
<p>I want to find the rows / index numbers where y != y_pred.</p>
<p>I am trying to do it through <code>Select</code> but am not able to do so. Please help.</p>
<p>TIA</p>
|
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.query.html" rel="nofollow noreferrer"><code>query</code></a>:</p>
<pre><code>df = df.query('y != y_pred').index
</code></pre>
<p>Sample:</p>
<pre><code>print (df)
y y_pred
599 0 1 <-values changed for match
787 9 9
47 2 2
1237 1 1
1069 6 3 <-values changed for match
df = df.query('y != y_pred').index
print (df)
Int64Index([599, 1069], dtype='int64')
</code></pre>
<p>Solutions with <a href="http://pandas.pydata.org/pandas-docs/stable/indexing.html#boolean-indexing" rel="nofollow noreferrer"><code>boolean indexing</code></a> are:</p>
<pre><code>df1 = df[df.y != df.y_pred].index
print (df1)
Int64Index([599, 1069], dtype='int64')
</code></pre>
<p>Or another <a href="https://stackoverflow.com/a/43147280/2901002">answer</a>.</p>
<p>For check different values:</p>
<pre><code>print (df.query('y != y_pred'))
y y_pred
599 0 1
1069 6 3
print (df[df.y != df.y_pred])
y y_pred
599 0 1
1069 6 3
</code></pre>
|
python|pandas
| 5
|
375,649
| 43,254,828
|
Extrapolating with a single data point
|
<p>Is there a function for extrapolating in numpy?</p>
<p>I tried using the interp but of course that interpolates between the range of my values and not outside the range of values.</p>
<p>So, for example, I have my x-values between 1 and 8, inclusive, and for each x-value I have its corresponding y-value. I want to find the y-value when my x-value is 0.</p>
<pre><code>import numpy as np
x = np.arange(1,8,1)
y = np.array((10,20,30,40,50,60,70))
np.interp(0,x,y)
</code></pre>
<p>Is there a function like <code>interp</code> that can do this?</p>
|
<p><code>scipy.interpolate.interp1d</code> allows extrapolation when <code>fill_value='extrapolate'</code> is passed (SciPy 0.17+):</p>
<pre><code>import numpy as np
from scipy import interpolate

x = np.arange(1, 8, 1)
y = np.array((10, 20, 30, 40, 50, 60, 70))

f = interpolate.interp1d(x, y, fill_value='extrapolate')
print(f(0))  # 0.0, linearly extrapolated below x = 1
</code></pre>
<p>Hope this answers your question.</p>
|
python|numpy|extrapolation
| 0
|
375,650
| 43,432,466
|
Run TensorFlow on Macbook Pro 2016 with OpenCL?
|
<p>My laptop is a Macbook Pro 2016 with a Radeon Pro 460 graphics card.
Is there any way I can leverage my GPU to speed up tensorflow?</p>
<p>I understand that CUDA is for NVIDIA cards, so the CUDA version of TF doesn't work. Is there any other tool, such as OpenCL, that would let me run TF on the GPU?</p>
<blockquote>
<p>Update: One potential answer is that: "You can run Tensorflow on a
Macbook Pro 2016 using tf-coriander . Disclosure: I'm the author. –
Hugh Perkins 2 days ago"</p>
</blockquote>
|
<p>Presumably, you've read: <a href="https://www.tensorflow.org/install/install_mac" rel="nofollow noreferrer">https://www.tensorflow.org/install/install_mac</a></p>
<p>You cannot "trick your computer" and the suggestion that you might is perhaps contributing to your downvotes. </p>
<p>If you want to learn more about the <em>potential</em> to run TF on your current hardware, have a look at the in-progress work being done on TF OpenCL (which supports AMD cards): <a href="https://www.google.com/search?q=tensorflow%20for%20opencl&rct=j" rel="nofollow noreferrer">https://www.google.com/search?q=tensorflow%20for%20opencl&rct=j</a>. That said, I think the OpenCL work for Linux is much further along than for macOS. Good luck!</p>
|
cuda|tensorflow|opencl
| 3
|
375,651
| 43,415,876
|
How to write a custom aggregation function for strings?
|
<p>I have a dataframe of millions of records. I'm grouping the whole dataframe by one column, 'npaciente'; that part is done. But there are 63 columns which I need to aggregate as strings based on a specific match: for example, if the Series contains "SI" among other strings, I want to return that "SI" as the result of the aggregation.</p>
<p><a href="https://i.stack.imgur.com/YjPXS.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/YjPXS.png" alt="enter image description here"></a></p>
<p>So I need to define my own aggregation that finds the string in the series and returns it. Here I'm only posting data for one group and truncated columns.</p>
<pre><code>data.groupby('npaciente')['asistencia'].apply(lambda x: if x.str.find("SI"): return "SI")
</code></pre>
<p>The above is invalid syntax. Any suggestions?</p>
|
<p>You can use <code>apply</code> directly on the <code>groupby</code> object, then in the custom function, just return <code>pd.Series</code> in order for pandas to refer to it as columns:</p>
<pre><code>import pandas as pd

def agg_func(group):
    """group is a dataframe containing only the rows of one npaciente"""
    result = {}
    # .str.contains returns a boolean Series; .any() is True if any row matches
    if group["asistencia"].str.contains("SI").any():
        result["asistencia"] = "SI"
    return pd.Series(result)

data.groupby('npaciente').apply(agg_func)
</code></pre>
<p>Of course, you need to add more logic to <code>agg_func</code> in order for it to do what you want it to do.</p>
|
python|string|python-3.x|pandas|anaconda
| 1
|
375,652
| 43,237,656
|
Extract Pattern in Pandas Dataframe
|
<p>I am extracting a pattern from a column of the dataframe. Some rows have the word 'Oscar' and some have the word 'Oscars'. How do I extract this in the pandas dataframe? Below is the extraction line of code; it gives an error.</p>
<pre><code> df['Oscar_Awards_Won'] = df['Awards'].str.extract('Won (\d+) (Oscar[s]?)', expand=True).fillna(0)
</code></pre>
<p>I am sorry for not posting sample data earlier. Below is sample data with the column Awards. I am trying to extract the number of Oscars won.</p>
<pre><code>Awards
Won 3 Oscars. Another 234 wins & 312 nominations.
Won 7 Oscars. Another 215 wins & 169 nominations.
Won 11 Oscars. Another 174 wins & 113 nominations.
Won 4 Oscars. Another 122 wins & 213 nominations.
Won 3 Oscars. Another 92 wins & 150 nominations.
Won 1 Oscar. Another 91 wins & 95 nominations.
</code></pre>
|
<p>Is this what is needed? Note that the pattern keeps only one capture group (the digits); with two groups, <code>str.extract</code> returns two columns, which cannot be assigned to the single <code>Oscar_Awards_Won</code> column.</p>
<pre><code>import pandas as pd
df = pd.DataFrame({'a': [1,2,3,4], 'b': ['is Oscar','asd','Oscars','not an Oscars q']})
df['c'] = ['Won 3 Oscars. Another 234 wins & 312 nominations.',
'Won 7 Oscars. Another 215 wins & 169 nominations.',
'Won 11 Oscar. Another 174 wins & 113 nominations.',
'Won 4 Oscars. Another 122 wins & 213 nominations.']
</code></pre>
<p>This line:</p>
<pre><code>df['c'].str.extract('Won (\d+) Oscar[s]?', expand=True).fillna(0)
</code></pre>
<p>Gives:</p>
<pre><code> 0
0 3
1 7
2 11
3 4
</code></pre>
|
python|pandas|numpy|dataframe
| 0
|
375,653
| 43,190,850
|
Python Seaborn Plot ValueError
|
<p>I have a pandas dataframe <code>df</code> and am trying to use the seaborn library to create a violin plot.</p>
<pre><code> rank sentiment category
0 1 0.657413 m
1 2 0.895769 m
2 3 -0.435457 m
3 4 -0.717959 m
4 5 0.869688 m
</code></pre>
<p>This is the seaborn line:</p>
<pre><code>sns.violinplot(x="rank", y="senitment", hue="category", data=df)
</code></pre>
<p>I keep getting this <code>ValueError</code></p>
<pre><code>ValueError: Could not interpret input 'senitment'
</code></pre>
<p>Full traceback:</p>
<pre><code>/Users/jrs/anaconda/lib/python3.5/site-packages/seaborn/categorical.py in violinplot(x, y, hue, data, order, hue_order, bw, cut, scale, scale_hue, gridsize, width, inner, split, orient, linewidth, color, palette, saturation, ax, **kwargs)
   2299                                  bw, cut, scale, scale_hue, gridsize,
   2300                                  width, inner, split, orient, linewidth,
-> 2301                                  color, palette, saturation)
   2302
   2303     if ax is None:

/Users/jrs/anaconda/lib/python3.5/site-packages/seaborn/categorical.py in __init__(self, x, y, hue, data, order, hue_order, bw, cut, scale, scale_hue, gridsize, width, inner, split, orient, linewidth, color, palette, saturation)
535 color, palette, saturation):
536
--> 537 self.establish_variables(x, y, hue, data, orient, order, hue_order)
538 self.establish_colors(color, palette, saturation)
539 self.estimate_densities(bw, cut, scale, scale_hue, gridsize)
/Users/jrs/anaconda/lib/python3.5/site-packages/seaborn/categorical.py in establish_variables(self, x, y, hue, data, orient, order, hue_order, units)
145 if isinstance(input, string_types):
146 err = "Could not interpret input '{}'".format(input)
--> 147 raise ValueError(err)
148
149 # Figure out the plotting orientation
ValueError: Could not interpret input 'senitment'
</code></pre>
<p>I've tried using .reset_index() on the df and changing data types, but no luck. Thoughts?</p>
|
<p>The error comes from a typo in the column name: the dataframe has <code>sentiment</code>, not <code>senitment</code>.</p>
<pre><code>sns.violinplot(x="rank", y="sentiment", hue="category", data=df)
</code></pre>
|
python|pandas|seaborn
| 4
|
375,654
| 43,388,387
|
Searching for String Value in Pandas
|
<p>I am trying to search values in a Pandas dataframe.</p>
<p>This is how my DF looks like:</p>
<pre><code> 0 1 2 \
0 NaN NaN NaN
1 CITI Pass-T... NaN NaN
2 NaN NaN NaN
3 Certificateholder Distribution Summary NaN NaN
4 Class CUSIP Record Date
5 A-1 25151EAA1 12/30/2016
6 A-2 25151EAB9 12/30/2016
7 A-3 25151EAC7 12/30/2016
8 A-4 25151EAD5 12/30/2016
9 A-5A 25151EAE3 12/30/2016
10 A-5B 25151EAF0 12/30/2016
11 A-6 25151EAG8 12/30/2016
12 A-7 25151EAH6 12/30/2016
13 A-8 25151EAJ2 01/24/2017
14 M-1 25151EAK9 12/30/2016
15 M-2 25151EAL7 12/30/2016
16 M-3 25151EAM5 12/30/2016
17 M-4 25151EAN3 12/30/2016
18 M-5 25151EAP8 12/30/2016
19 M-6 25151EAQ6 12/30/2016
20 M-7 25151EAR4 12/30/2016
21 M-8 25151EAS2 12/30/2016
22 M-9 25151EAT0 12/30/2016
23 M-10 25151EAU7 12/30/2016
24 M-11 25151EAV5 12/30/2016
25 P 25151EAX1 12/30/2016
26 CE 25151EAW3 12/30/2016
27 R 25151EAY9 12/30/2016
28 Totals NaN NaN
29 This report is compiled by me, N... NaN NaN
30 All Record Dates are based upon the governing ... NaN NaN
31 NaN NaN NaN
</code></pre>
<p>So you see, there are no real column headers.
Now I want, for example, to search for the value A-1.</p>
<p>This is what I did:</p>
<pre><code>for col in df:
print col
print df[df[col].str.contains("A-1", na=False)]
</code></pre>
<p>This actually gives me my desired result:</p>
<pre><code> 0 1 2 3 4 5 6 7 \
5 A-1 25151EAA1 12/30/2016 6.25 7218381.58 25379.0 143237.93 71982.98
8 9 10 11 12 13 14
5 7003160.66 168616.93 6169381.87 NaN NaN NaN NaN
</code></pre>
<p>But then I get the following error:</p>
<pre><code>AttributeError: Can only use .str accessor with string values, which use np.object_ dtype in pandas
</code></pre>
<p>Does someone have any idea what I am doing wrong?</p>
|
<p>I'll give it a go: you can check that the column is not empty before searching it, like this:</p>
<pre><code>for col in df:
if not df[col].empty:
print col
print df[df[col].str.contains("A-1", na=False)]
</code></pre>
|
python|pandas
| 1
|
375,655
| 43,085,422
|
Retrieve file from database using python
|
<p>I have a column <code>FileContent</code> (datatype <code>image</code>) in the database which store pdf, zip and docx file. </p>
<p>The <code>FileContent</code> column has the following value in database: <code>0x2550444...</code></p>
<p>I read the SQL table into DF using python and the values in column <code>FileContent</code> contains weird text instead of the <code>0x2550444...</code> :</p>
<blockquote>
<p>%PDF-1.7\n\n4 0 obj\n(Identity)\nendobj\n5 0 obj(Adobe)endobj8 0 obj> stream xœì½x\ÅÕ7>sïÝÞ«¶hµ»ZíJòªKV³,Õb[’eK²eKVqaÝmlÜ0Íу˜NB Á$ÙÆ¢›¼¦…’˜4JpH€ " éæÎcxóþŸïý¾G#Ÿ=¿™;3wæÌ™3gæÞ]#Œ²Ã‡€:Ê›fWÕþ°ã ’ý~+Bž£¥åó_{óÒÕ¿™€õ®ŠÒº²‹U3¯ý!E¤ª¼¢rÁ«|ˆ{w!..................................</p>
</blockquote>
<p>Is there a way to retrieve the file or convert the text above into a file (e.g. PDF) using <code>python</code>?</p>
<p>Will appreciate your input. Thank you.</p>
<p>My ultimate goal is to retrieve the file from the column <code>FileContent</code> and extract the text within the files later on.</p>
<p>Code:</p>
<pre><code>import pymssql
conn = pymssql.connect(server="",user="",password="",database="")
stmt = "SELECT FileContent FROM [tablename]"
df = pd.read_sql(stmt,conn)
df.head()
print(df)
</code></pre>
|
<p>Everything is fine. What you see is different representations of the same content.</p>
<p>0x255044... is the hexadecimal representation of the first bytes. If you look up in an ASCII table,</p>
<ul>
<li>0x25 = '%'</li>
<li>0x50 = 'P'</li>
<li>0x44 = 'D'</li>
</ul>
<p>and so on. The other text is what the .pdf looks like in a text editor. The garbled mess after "stream" is zip-compressed content within the pdf.</p>
<p>Just write out the whole stream into a .pdf file (use binary mode!) and try to open in Acrobat Reader.</p>
<pre><code>with open('temp.pdf', 'wb') as outfile:
outfile.write(pdf_content_from_database)
</code></pre>
<p>should do.</p>
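<p>Tying it back to the dataframe in the question, a sketch that assumes pymssql returns the <code>image</code> column as a <code>bytes</code> object:</p>
<pre><code>import pandas as pd
import pymssql

conn = pymssql.connect(server="", user="", password="", database="")
df = pd.read_sql("SELECT FileContent FROM [tablename]", conn)

# take the first row's file content and write it out in binary mode
pdf_content_from_database = df.loc[0, 'FileContent']
with open('temp.pdf', 'wb') as outfile:
    outfile.write(pdf_content_from_database)
</code></pre>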
|
python|python-2.7|pandas|pdf|dataframe
| 3
|
375,656
| 43,310,962
|
Tensorflow installation problems in Windows
|
<p>I went through the official documentation to install tensorflow from <a href="https://www.tensorflow.org/install/install_windows" rel="nofollow noreferrer">https://www.tensorflow.org/install/install_windows</a> but I always get this error.</p>
<pre><code>tensorflow-1.0.1-cp35-cp35m-win_amd64.whl is not a supported wheel on this platform.
You are using pip version 8.1.1, however version 9.0.1 is available.
You should consider upgrading via the 'python -m pip install --upgrade pip' command.
</code></pre>
<p>for executing </p>
<p>"<code>pip3 install tensorflow-1.0.1-cp35-cp35m-win_amd64.whl</code>"</p>
<p>or </p>
<p>"<code>pip install tensorflow-1.0.1-cp35-cp35m-win_amd64.whl</code>" </p>
<p>I did update pip to 9.1 but I still get </p>
<p><code>"You are using pip version 8.1.1, however version 9.0.1 is available."</code> warning. </p>
<p>I have windows 7 64 bit with Python 2.7 & 3.5 both at 64 bit.</p>
<p>P.S. I have referred <a href="https://stackoverflow.com/questions/43069519/installing-tensorflow-on-windows">Installing tensorflow on windows</a> , <a href="https://stackoverflow.com/questions/35252888/tensorflow-installation-problems">tensorflow installation problems</a> </p>
<p>But they were of no help.</p>
<p>P.P.S.: I executed the pep425 tags command and it reported these tags:</p>
<pre><code>[('cp35', 'none', 'win_amd64'), ('py3', 'none', 'win_amd64'), ('cp35', 'none',
any'), ('cp3', 'none', 'any'), ('cp34', 'none', 'any'), ('cp33', 'none', 'any')
('cp32', 'none', 'any'), ('cp31', 'none', 'any'), ('cp30', 'none', 'any'), ('py
35', 'none', 'any'), ('py3', 'none', 'any'), ('py34', 'none', 'any'), ('py33',
none', 'any'), ('py32', 'none', 'any'), ('py31', 'none', 'any'), ('py30', 'none
, 'any')]
</code></pre>
|
<p>As you are using </p>
<blockquote>
<blockquote>
<p>pip install tensorflow-1.0.1-cp35-cp35m-win_amd64.whl</p>
</blockquote>
</blockquote>
<p>make sure of the following </p>
<p>You have python version 3.5 64 bit installed.</p>
<blockquote>
<blockquote>
<p>python --version</p>
</blockquote>
</blockquote>
<p>Pip must support all the tags in the wheel file name. To check this, run a python shell and then import pip:</p>
<blockquote>
<blockquote>
<p>import pip</p>
</blockquote>
</blockquote>
<p>Then run the following command</p>
<blockquote>
<blockquote>
<p>pip.pep425tags.get_supported()</p>
</blockquote>
</blockquote>
<p>Now the output should contain <strong>cp35, cp35m, win_amd64</strong>, as these are the tags in the wheel file name.</p>
<p>Your pip version does not support <strong>cp35m</strong>. You can either upgrade pip using</p>
<blockquote>
<blockquote>
<p>pip install --upgrade pip</p>
</blockquote>
</blockquote>
<p>If this does not work. Uninstall python and reinstall 3.5.2 (this worked for me).</p>
|
python|tensorflow|pip
| 0
|
375,657
| 43,142,475
|
How to convert n rows of xlsx to csv in Python while preserving date values
|
<p>I am trying to convert an xlsx file to one CSV file containing the header and another CSV file containing the actual data.
I have the following requirements:</p>
<ol>
<li>Header does not start at first row but at row <code>start_line</code>.</li>
<li>Dates should not be considered as floats but in some string format.</li>
<li>I don't know the total number of rows or columns of the file beforehand. I also don't want to specify which column is a date.</li>
</ol>
<p>Using <code>pandas</code> I get stuck at number 1.
I wanted to achieve this in two separate reads where I read from start_line to <code>start_line+1</code> and from <code>start_line+1</code> to the end.
However it seems like it is <a href="https://stackoverflow.com/questions/35747476/in-pandas-whats-the-equivalent-of-nrows-from-read-csv-to-be-used-in-read-ex">not possible</a> to read n lines from an offset. Below is the code I use to just get one file including the header.</p>
<pre><code>import pandas as pd
def parse_excel(file,start_line,sheet,table):
sh = pd.read_excel(file,sheet,skiprows=start_line)
sh.to_csv("output.csv",sep='\t',encoding='utf-8',index=False)
</code></pre>
<p>Next I have tried this using <code>xlrd</code> but this library treats all dates as floats like in Excel. The only workaround here seems to <a href="https://stackoverflow.com/questions/3727916/how-to-use-xlrd-xldate-as-tuple">go through all individual cells</a> which does not seem very efficient or well coded. What I have now:</p>
<pre><code>import xlrd
def parse_excel(file,start_line,sheet,table):
with xlrd.open_workbook(file) as wb:
sh = wb.sheet_by_name(sheet)
header_written = False
with open('{0}.csv'.format(table),'wb') as csv_file:
wr = csv.writer(csv_file,delimiter='\t')
for rownum in range(sh.nrows):
if not header_written and start_line == rownum:
with open('{0}_header.csv'.format(table),'wb') as header:
hwr = csv.writer(header,delimiter='\t')
hwr.writerow(sh.row_values(rownum))
header_written = True
elif header_written:
wr.writerow(sh.row_values(rownum))
</code></pre>
<p>Please point me to other solutions/libraries, show a workaround for either one of the above or explain why I should go for the <code>xlrd</code> workaround checking each individual cell.</p>
|
<p>As long as all of your data is below your header row, the following should work, assuming the header row is at row <code>n</code> (indexing beginning at 0, not 1 like Excel).</p>
<pre><code>df = pd.read_excel('filepath', header=n)
df.head(0).to_csv('header.csv', index=False)
df.to_csv('output.csv', header=None, index=False)
</code></pre>
|
excel|python-2.7|csv|pandas|xlrd
| 1
|
375,658
| 43,241,733
|
Fastest way to go through a long list of arrays
|
<p>I have a data set from electrophysiological recordings in an hdf5 file, stored in what is, as far as I understand, very close to numpy arrays. What I am trying to do is access it in the most efficient and fastest way.</p>
<p><strong>Let me explain:</strong> The dataset is a list of arrays (2D-array?); each array contains x number of channels (recording sites), usually around 32-64.</p>
<p><strong>The problem is the following:</strong> There are millions of arrays and it's taking forever to loop through every individual array. Moreover, I have to loop through each channel in each array in order to retrieve the values.</p>
<p>Here is my code:</p>
<pre><code>import h5py
f_kwd = h5py.File("experiment1_100.raw.kwd", "r") # reads hdf5 file
dset_data = f_kwd['recordings/0/data']
print (len(dset_data)) # prints 31646700
print (dset_data[0]) # prints the following
[ 94 1377 208 202 246 387 1532 1003 460 665
810 638 223 363 990 78 -139 191 63 630
763 60 682 1025 472 1113 -137 360 1216 297
-71 -35 -477 -498 -541 -557 27776 2281 -11370 32767
-28849 -30243]
list_value = []
for t_stamp in (dset_data):
for value in t_stamp:
if value > 400:
list_value.append(value)
</code></pre>
<p>Is there a way to make this a lot more efficient and quick?
Do I have to use numpy and if so, how can I make this happen? I feel like I am doing something wrong here.</p>
<p><strong>EDIT :</strong>
Here is some additional info about the first array in the dataset, for the following attributes: </p>
<blockquote>
<p>.shape -> (42,)<br>
.itemsize -> 2<br>
.dtype -> int16<br>
.size -> 42<br>
.ndim -> 1</p>
</blockquote>
<p><strong>EDIT2 :</strong>
..and the dataset itself: </p>
<blockquote>
<p>.shape -> (31646700, 42)<br>
.dtype -> int16<br>
.size -> 1329161400 </p>
</blockquote>
|
<p>If my guess that <code>t_stamp</code> is a 1d array of varying length is correct, you could collect all elements >400 with:</p>
<pre><code>list_value = []
for t_stamp in (dset_data):
list_value.append(t_stamp[t_stamp>400])
# list_value.extend()
</code></pre>
<p>Use <code>append</code> if you want to collect the values in sublists. Use extend if you want one flat list.</p>
<p>It still iterates on the 'rows' of <code>dset_data</code>, but selection from each row will be much faster.</p>
<p>If all rows are 42 long, you can read the whole dataset into memory as a 2d numpy array and apply a single boolean mask:</p>
<pre><code>arr = dset_data[:]   # load the full dataset into a numpy array
arr[arr > 400]
</code></pre>
<p>The result is a flat array of the selected values.</p>
|
python|numpy|h5py
| 1
|
375,659
| 43,045,426
|
Linear Regression overfitting
|
<p>I'm taking course 2 of this Coursera specialization, which covers linear regression (<a href="https://www.coursera.org/specializations/machine-learning" rel="nofollow noreferrer">https://www.coursera.org/specializations/machine-learning</a>)</p>
<p>I've solved the training using graphlab but wanted to try out sklearn for the experience and learning. I'm using sklearn and pandas for this.</p>
<p>The model overfits on the data. How can I fix this? This is the code.</p>
<p>These are the coefficients i'm getting.</p>
<p>[ -3.33628603e-13 1.00000000e+00]</p>
<pre><code>poly1_data = polynomial_dataframe(sales["sqft_living"], 1)
poly1_data["price"] = sales["price"]
model1 = LinearRegression()
model1.fit(poly1_data, sales["price"])
print(model1.coef_)
plt.plot(poly1_data['power_1'], poly1_data['price'], '.',poly1_data['power_1'], model1.predict(poly1_data),'-')
plt.show()
</code></pre>
<p>The plotted line is like this. As you see it connects every data point.
<a href="https://i.stack.imgur.com/uuKfY.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/uuKfY.png" alt="enter image description here"></a>
and this is the plot of the input data
<a href="https://i.stack.imgur.com/7LLPp.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/7LLPp.png" alt="enter image description here"></a></p>
|
<p>I wouldn't even call this overfitting. I'd say you aren't doing what you think you should be doing. In particular, you forgot to add a column of 1's to your design matrix, X. For example:</p>
<pre><code># generate some univariate data
x = np.arange(100)
y = 2*x + x*np.random.normal(0,1,100)
df = pd.DataFrame([x,y]).T
df.columns = ['x','y']
</code></pre>
<p>You're doing the following:</p>
<pre><code>model1 = LinearRegression()
X = df["x"].values.reshape(1,-1)[0] # reshaping data
y = df["y"].values.reshape(1,-1)[0]
model1.fit(X,y)
</code></pre>
<p>Which leads to:</p>
<pre><code>plt.plot(df['x'].values, df['y'].values,'.')
plt.plot(X[0], model1.predict(X)[0],'-')
plt.show()
</code></pre>
<p><a href="https://i.stack.imgur.com/oJAL9.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/oJAL9.png" alt="enter image description here"></a></p>
<p>Instead, you want to add a column of 1's to your design matrix (X):</p>
<pre><code>X = np.column_stack([np.ones(len(df['x'])),df["x"].values.reshape(1,-1)[0]])
y = df["y"].values.reshape(1,-1)
model1.fit(X,y)
</code></pre>
<p>And (after some reshaping) you get:</p>
<pre><code>plt.plot(df['x'].values, df['y'].values,'.')
plt.plot(df['x'].values, model1.predict(X),'-')
plt.show()
</code></pre>
<p><a href="https://i.stack.imgur.com/DMn1A.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/DMn1A.png" alt="enter image description here"></a></p>
|
pandas|scikit-learn
| 3
|
375,660
| 72,476,599
|
How do I use MLRun to train a model with Hyperparameters?
|
<p>I am interested in Hyperparamter Tuning with MLRun. Does MLRun include functionality similar to Tensorflow Hparams.</p>
|
<p>MLRun supports iterative tasks for automatic and distributed execution with variable parameters (hyperparams). Iterative tasks can be distributed across multiple containers for parallel execution.</p>
<p>MLRun iterations can be viewed as child runs under the main task/run. Each child run gets a set of parameters that are computed/selected from the input hyperparameters based on the chosen strategy (Grid, List, Random, Custom).</p>
<p>You can read more <a href="https://docs.mlrun.org/en/latest/hyper-params.html?highlight=with_hyper_params" rel="nofollow noreferrer">here</a></p>
<p><a href="https://i.stack.imgur.com/9nZbh.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/9nZbh.png" alt="enter image description here" /></a></p>
|
python|tensorflow|mlops
| 0
|
375,661
| 72,332,332
|
Maximum Value in pandas dataframe in specific column from index to index
|
<p>I have a pandas dataframe pd_data and want to find the maximum value between one index and another in one specific column.</p>
<p>The dataframe looks like this:</p>
<pre><code> Open High Low Close Volume
0 1.21223 1.21246 1.21215 1.21227 132.32
1 1.21223 1.21226 1.21206 1.21215 200.66
2 1.21215 1.21222 1.21201 1.21204 165.59
3 1.21204 1.21217 1.21204 1.21207 32.89
4 1.21208 1.21208 1.21197 1.21201 135.67
</code></pre>
<p>I want to find the maximum value from line 1 to 3 in column 'High'.</p>
<p>How can I do that?</p>
|
<p>You can try</p>
<pre class="lang-py prettyprint-override"><code>out = df.loc[1:4, 'High'].max()
# or
out = df['High'].iloc[1:4].max()
</code></pre>
<pre><code>print(out)
1.21226
</code></pre>
|
pandas|max
| 0
|
375,662
| 72,353,629
|
How better perform Pearson R from 2 arrays of dimensions (m, n) and (n), returning an array of (m) size? [Python, NumPy, SciPy]
|
<p>I'm trying to improve a simple algorithm to obtaining the Pearson correlation coefficient from two arrays, <strong>X(m, n)</strong> and <strong>Y(n)</strong>, returning me another array <strong>R</strong> of dimension <strong>(m)</strong>.<br />
In the case, I want to know the behavior each row of <strong>X</strong> regarding the values of <strong>Y</strong>. A sample (working) code is presented below:</p>
<pre><code>import numpy as np
from scipy.stats import pearsonr
np.random.seed(1)
m, n = 10, 5
x = 100*np.random.rand(m, n)
y = 2 + 2*x.mean(0)
r = np.empty(m)
for i in range(m):
r[i] = pearsonr(x[i], y)[0]
</code></pre>
<p>For this particular case, I get: <code>r = array([0.95272843, -0.69134753, 0.36419159, 0.27467137, 0.76887201, 0.08823868, -0.72608421, -0.01224453, 0.58375626, 0.87442889])</code></p>
<p>For small values of <strong>m</strong> (near 10k) this runs pretty fast, but I'm starting to work with <strong>m ~ 30k</strong>, and so this is taking much longer than I expected. I'm aware I could implement multiprocessing/multi-threading but I believe there's a (better) <em>pythonic</em> way of doing this.</p>
<p>I tried to use <code>pearsonr(x, np.ones((m, n))*y)</code>, but it returns only <code>(nan, nan)</code>.</p>
|
<p><code>pearsonr</code> only supports 1D arrays internally. Moreover, it computes the p-value, which is not used here, so it would be more efficient not to compute it. Additionally, the code recomputes the statistics of the <code>y</code> vector on every call and does not efficiently make use of vectorized Numpy operations. This is why the computation is a bit slow. You can check this in the code <a href="https://github.com/scipy/scipy/blob/4cf21e753cf937d1c6c2d2a0e372fbc1dbbeea81/scipy/stats/_stats_py.py#L3900-L4117" rel="nofollow noreferrer">here</a>.</p>
<p>One way to compute this is by writing your own custom implementation based on the one of Scipy:</p>
<pre class="lang-py prettyprint-override"><code>def multi_pearsonr(x, y):
xmean = x.mean(axis=1)
ymean = y.mean()
xm = x - xmean[:,None]
ym = y - ymean
normxm = np.linalg.norm(xm, axis=1)
normym = np.linalg.norm(ym)
return np.clip(np.dot(xm/normxm[:,None], ym/normym), -1.0, 1.0)
</code></pre>
<p>It is <strong>450 times faster</strong> on my machine for m = 10_000.</p>
<p>Note that I did not keep the checks of the Scipy code, but it may be a good idea to keep them if your input is not guaranteed to be statistically safe (ie. well formatted for the computation of the Pearson test).</p>
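<p>A quick usage check against the loop from the question (reusing the variables defined there):</p>
<pre><code>r_fast = multi_pearsonr(x, y)
print(np.allclose(r, r_fast))  # True: same coefficients as the pearsonr loop
</code></pre>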
|
python|numpy|scipy
| 1
|
375,663
| 72,482,224
|
Apply if else condition in specific pandas column by location
|
<p>I am trying to apply a condition to a pandas column by location and am not quite sure how. Here is some sample data:</p>
<pre><code> data = {'Pop': [728375, 733355, 695395, 734658, 732811, 789396, 727761, 751967],
'Pop2': [728375, 733355, 695395, 734658, 732811, 789396, 727761, 751967]}
PopDF = pd.DataFrame(data)
remainder = 6
#I would like to subtract 1 from PopDF['Pop2'] column cells 0-remainder.
#The remaining cells in the column I would like to stay as is (retain original pop values).
</code></pre>
<ol>
<li>
<pre><code>PopDF['Pop2']= PopDF['Pop2'].iloc[:(remainder)]-1
</code></pre>
</li>
<li>
<pre><code>PopDF['Pop2'].iloc[(remainder):] = PopDF['Pop'].iloc[(remainder):]
</code></pre>
</li>
</ol>
<p>The first line works to subtract 1 in the correct locations, however, the remaining cells become NaN. The second line of code does not work – the error is:</p>
<pre><code>ValueError: Length of values (1) does not match length of index (8)
</code></pre>
|
<p>Instead of selecting the first N rows and subtracting them, subtract the entire column and assign only the rows up to <code>remainder</code> (note that <code>.loc</code> slicing includes the end label):</p>
<pre><code>df.loc[:remainder, 'Pop2'] = df['Pop2'] - 1
</code></pre>
<p>Output:</p>
<pre><code>>>> df
Pop Pop2
0 728375 728374
1 733355 733354
2 695395 695394
3 734658 734657
4 732811 732810
5 789396 789395
6 727761 727760
7 751967 751967
</code></pre>
|
python|pandas
| 1
|
375,664
| 72,233,338
|
Stack the same row in each layer of a 3D numpy array
|
<p>Hi, is there a way to efficiently stack the same row from each layer of a 3D numpy array? I have an array like this:</p>
<pre><code> a = np.array([[["a111","a112","a113"],
["b","b","b"],
["c","c","c"],
["d","d","d"]],
[["a211","a212","a213"],
["b","b","b"],
["c","c","c"],
["d","d","d"]],
[["a311","a312","a313"],
["b","b","b"],
["c","c","c"],
["d","d","d"]],
[["a411","a412","a413"],
["b","b","b"],
["c","c","c"],
["d","d","d"]]])
</code></pre>
<p><strong>and i want to get something like this:</strong></p>
<pre><code>np.array([[["a111","a112","a113"],
["a211","a212","a213"],
["a311","a312","a313"],
["a411","a412","a413"]],
[["b","b","b"],
["b","b","b"],
["b","b","b"],
["b","b","b"]],
[["c","c","c"],
["c","c","c"],
["c","c","c"],
["c","c","c"]],
[["d","d","d"],
["d","d","d"],
["d","d","d"],
["d","d","d"]]])
</code></pre>
<p>Right now I'm looping through the whole array and stacking it manually.</p>
|
<p>Use <a href="https://numpy.org/doc/stable/reference/generated/numpy.swapaxes.html" rel="nofollow noreferrer"><code>swapaxes</code></a>:</p>
<pre><code>a.swapaxes(0,1)
</code></pre>
<p>output:</p>
<pre><code>array([[['a111', 'a112', 'a113'],
['a211', 'a212', 'a213'],
['a311', 'a312', 'a313'],
['a411', 'a412', 'a413']],
[['b', 'b', 'b'],
['b', 'b', 'b'],
['b', 'b', 'b'],
['b', 'b', 'b']],
[['c', 'c', 'c'],
['c', 'c', 'c'],
['c', 'c', 'c'],
['c', 'c', 'c']],
[['d', 'd', 'd'],
['d', 'd', 'd'],
['d', 'd', 'd'],
['d', 'd', 'd']]], dtype='<U4')
</code></pre>
|
arrays|python-3.x|numpy
| 1
|
375,665
| 72,401,746
|
Drop the duplicate where boolean column = False
|
<p>In the following example, I have been trying to drop the duplicate whose Boolean value is false, i.e. when id and Year match for more than one row (subset = [id, Year]).</p>
<pre><code>df = pd.DataFrame({'id': ['1', '1', '1', '2', '2', '3', '4', '4'],
'Year': [2000, 2000, 2003, 2004, 2004, 2002, 2001, 2003], 'Boolean':['false', 'true', 'true', 'true', 'false', 'true', 'true', 'true']})
print(df)
# Output
id Year Boolean
0 1 2000 false
1 1 2000 true
2 1 2003 true
3 2 2004 true
4 2 2004 false
5 3 2002 true
6 4 2001 true
7 4 2003 true
</code></pre>
<pre><code># Attempted code
df2 = df.loc[df['Year'].eq('false').groupby([df.id]).idxmax()]
print(df2)
id Year Boolean
0 1 2000 false
3 2 2004 true
5 3 2002 true
6 4 2001 true
</code></pre>
<p>This code is incorrect as I am trying to keep the Boolean == false observations when there is a duplicate. It is also dropping other observations that are not duplicates. Not sure if it is easier to use the duplicate function for this.</p>
|
<p>"<em>I have been trying to drop the duplicate that is == false. I mean when id and year match with more than 1 row (subset =[id, year])</em>" -> so you need to group by <strong>both ID and Year</strong> (and use the correct column as boolean source):</p>
<pre><code>df.loc[df['Boolean'].eq('false').groupby([df['id'], df['Year']]).idxmax()]
</code></pre>
<p>output:</p>
<pre><code> id Year Boolean
0 1 2000 false
2 1 2003 true
4 2 2004 false
5 3 2002 true
6 4 2001 true
7 4 2003 true
</code></pre>
<h4>booleans</h4>
<p>Note that it would be much better to use real booleans (faster, less memory, shorter syntax…)</p>
<pre><code># convert to booleans
df['Boolean'] = df['Boolean'].eq('true')
df.loc[df.groupby(['id', 'Year'])['Boolean'].idxmin()]
id Year Boolean
0 1 2000 False
2 1 2003 True
4 2 2004 False
5 3 2002 True
6 4 2001 True
7 4 2003 True
</code></pre>
|
python|pandas|dataframe
| 2
|
375,666
| 72,460,821
|
Python Pandas - Find and replace across two dataframes
|
<p>I searched a lot but I can't solve my problem - maybe someone can give me a hint:</p>
<p>I have two Panda Dataframes, df1 and df2, with these columns:</p>
<pre><code>EAN PRODUCT-NAME OLD-DESCRIPTION PURCHASE-PRICE SALES-PRICE
EAN NEW-PRODUCT-NAME NEW-DESCRIPTION
</code></pre>
<p>I want to locate a product in df1 by the EAN number, and replace its name and description with the new name & description of the corresponding product in df2.
It is important to know that the first dataset is bigger and not every product will get a new name and description, so I can't just replace the whole columns.</p>
<p>I have to take the EAN numbers in df2, search for them in df1 to find the right product, and replace the name and description in df1.</p>
<p>I tried various methods like replace and isin, and even thought about nesting two loops to solve the problem. But I might be thinking too complicated...</p>
|
<p>EAN column is not common between both tables. df1 is way bigger.</p>
<p>I thought about something like this, but this is not elegant at all...</p>
<pre><code>for x in df1.index:
    for y in df2.index:
        if df1.loc[x, "EAN"] == df2.loc[y, "EAN"]:
            print("Found!")
            # assign only the matching row, not the whole column
            df1.loc[x, "OLD-DESCRIPTION"] = df2.loc[y, "NEW-DESCRIPTION"]
</code></pre>
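<p>A vectorized sketch using <code>Series.map</code> that avoids the nested loops (column names taken from the question; this assumes EAN is unique in df2):</p>
<pre><code>name_map = df2.set_index("EAN")["NEW-PRODUCT-NAME"]
desc_map = df2.set_index("EAN")["NEW-DESCRIPTION"]

# rows whose EAN appears in df2 get the new values, the rest keep the old ones
df1["PRODUCT-NAME"] = df1["EAN"].map(name_map).fillna(df1["PRODUCT-NAME"])
df1["OLD-DESCRIPTION"] = df1["EAN"].map(desc_map).fillna(df1["OLD-DESCRIPTION"])
</code></pre>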
|
python|pandas|dataframe
| 1
|
375,667
| 72,462,160
|
How to aggregate DataFrame to stay rows with the highest date and add new column in Python Pandas?
|
<p>I have DataFrame in Python Pandas like below ("date_col" is in "datetime64" format):</p>
<pre><code>ID | date_col | purchase
----|------------|-------
111 | 2019-01-05 | apple
111 | 2019-05-22 | onion
222 | 2020-11-04 | banana
333 | 2020-04-19 | orange
</code></pre>
<p>I need to aggregate above table in the following way:</p>
<ol>
<li>add column "col1" with number of purchases which was made by client ("ID")</li>
<li>If some client ("ID") is duplicated - stay only one row with the highest date</li>
</ol>
<p>So as a result I need something like below:</p>
<pre><code>ID | date_col | purchase | col1
----|------------|----------|-----
111 | 2019-05-22 | onion | 2
222 | 2020-11-04 | banana | 1
333 | 2020-04-19 | orange | 1
</code></pre>
|
<p>Assuming the dataframe is sorted on <code>date_col</code> column, you can use <code>groupby</code>:</p>
<pre><code>g = df.groupby('ID', as_index=False)
g.last().merge(g.size())
</code></pre>
<hr />
<pre><code> ID date_col purchase size
0 111 2019-05-22 onion 2
1 222 2020-11-04 banana 1
2 333 2020-04-19 orange 1
</code></pre>
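<p>If the dataframe is not guaranteed to be sorted by date, a sketch that sorts first and names the count column as in the question:</p>
<pre><code>out = (df.sort_values('date_col')
         .assign(col1=lambda d: d.groupby('ID')['ID'].transform('size'))
         .drop_duplicates('ID', keep='last'))
</code></pre>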
|
python|pandas|aggregate|aggregate-functions
| 1
|
375,668
| 72,284,278
|
use python variable to read specific rows from access table using sqlalchemy
|
<p>I have an access table called "Cell_list" with a key column called "Cell_#". I want to read the table into a dataframe, but only the rows that match indices which are specified in a python list "cell_numbers".
I tried several variations on:</p>
<pre><code> import pyodbc
import pandas as pd
cell_numbers = [1,3,7]
cnn_str = r'Driver={Microsoft Access Driver (*.mdb,*.accdb)};DBQ=C:\folder\myfile.accdb;'
conn = pyodbc.connect(cnn_str)
query = ('SELECT * FROM Cell_list WHERE Cell_# in '+tuple(cell_numbers))
df = pd.read_sql(query, conn)
</code></pre>
<p>But no matter what I try I get a syntax error.
How do I do this?</p>
|
<p>Consider best practice of parameterization which is supported in <a href="https://pandas.pydata.org/docs/reference/api/pandas.read_sql.html" rel="nofollow noreferrer"><strong><code>pandas.read_sql</code></strong></a>:</p>
<pre class="lang-py prettyprint-override"><code># PREPARED STATEMENT, NO DATA
query = (
'SELECT * FROM Cell_list '
'WHERE [Cell_#] IN (?, ?, ?)'
)
# RUN SQL WITH BINDED PARAMS
df = pd.read_sql(query, conn, params=cell_numbers)
</code></pre>
<p>Consider even dynamic qmark placeholders dependent on length of <code>cell_numbers</code>:</p>
<pre class="lang-py prettyprint-override"><code>qmarks = [', '.join('?' for _ in cell_numbers)]
query = (
'SELECT * FROM Cell_list '
f'WHERE [Cell_#] IN ({qmarks})'
)
</code></pre>
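<p>The parameters are then passed the same way as above, e.g. <code>pd.read_sql(query, conn, params=cell_numbers)</code>.</p>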
|
python|pandas|ms-access|pyodbc
| 1
|
375,669
| 72,321,672
|
Jupyter notebook crashes when trying to create a df
|
<p>I'm trying to create a data frame where the rows are the result of vectorization of a list of stories and the columns are the words in those stories.</p>
<p>The end goal is to predict the gender of the writer of each story.</p>
<pre><code>vec = CountVectorizer()
X_train = vec.fit_transform(df_train["story"].tolist())
</code></pre>
<p>the problem is - every time I try to run the following line the notebook crashes, no error or anything...</p>
<pre><code>pd.DataFrame(X_train.toarray(), columns=vec.get_feature_names())
</code></pre>
<p>this code worked with different data on a different exercise...</p>
|
<p>The <code>X_train.toarray()</code> method converts the sparse matrix (which stores only the non-zero entries) output by your <strong>CountVectorizer</strong> into a dense one (full of zeros), which can be hundreds of times bigger. Your crash is most likely a memory error.</p>
<p>I would suggest you only print the top vocabulary (for instance the top 100 words).</p>
<pre class="lang-py prettyprint-override"><code>n_words=100
print(
pd.DataFrame(
data=X_train[:, :n_words].toarray(),
columns=vec.get_feature_names()[:n_words]
)
)
</code></pre>
|
python|pandas|dataframe|jupyter-notebook
| 0
|
375,670
| 72,376,168
|
Converting date to Monday date of each week
|
<p>I have a <code>df</code>:</p>
<pre><code>date
2021-06-28
2021-06-29
2021-06-30
2021-07-02
2021-07-04
2021-07-07
2021-08-06
2021-08-07
</code></pre>
<p>I am trying to convert this to week, I know I can use <code>df.date.dt.isocalendar().week</code> but this returns the week number with the default start date, whereas my start date is <code>2021-06-28</code>.
I can achieve the dates in <code>df</code> using <code>pd.Grouper(key = 'date', freq = 'W-MON', label = 'left', closed = 'left')</code> in a <code>groupby</code> but I would like to have the same output without using <code>groupby</code> - simply applying a function to <code>date</code> column.</p>
<p>My goal is to have:</p>
<pre><code>date
2021-06-28
2021-06-28
2021-06-28
2021-06-28
2021-06-28
2021-07-05
2021-08-02
2021-08-02
</code></pre>
<p>Because these dates represent the Monday of each week that the date falls in.</p>
|
<p>You can convert to 'W-SUN' <a href="https://pandas.pydata.org/docs/reference/api/pandas.Series.dt.to_period.html" rel="nofollow noreferrer">period</a> and get the first day:</p>
<pre><code>df['date'] = pd.to_datetime(df['date'])
df['Monday'] = df['date'].dt.to_period('W-SUN').dt.start_time
</code></pre>
<p><em>NB. why 'W-SUN'? This indicates the weeks <strong>ending</strong> in SUN(day), so starting on Mondays. See <a href="https://pandas.pydata.org/docs/user_guide/timeseries.html#timeseries-offset-aliases" rel="nofollow noreferrer">anchored offsets</a>.</em></p>
<p>output:</p>
<pre><code> date Monday
0 2021-06-28 2021-06-28
1 2021-06-29 2021-06-28
2 2021-06-30 2021-06-28
3 2021-07-02 2021-06-28
4 2021-07-04 2021-06-28
5 2021-07-07 2021-07-05
6 2021-08-06 2021-08-02
7 2021-08-07 2021-08-02
</code></pre>
|
python|pandas|date
| 1
|
375,671
| 72,142,699
|
Get records that are a time interval away from a given date and specific conditions on a pandas DataFrame
|
<p>Let it be the following Python Panda DataFrame:</p>
<pre><code>| ID | date | direction | country_ID |
|-----------|-------------------------|---------------|------------|
| 0 | 2022-04-01 10:00:01 | IN | UK |
| unknown | 2022-04-01 10:00:03 | IN | UK |
| 0 | 2022-04-01 12:00:01 | OUT | UK |
| 0 | 2022-04-01 12:30:11 | IN | GER |
| 1 | 2022-04-01 10:00:00 | IN | GER |
| 1 | 2022-04-01 08:04:03 | OUT | GER |
| unknown | 2022-04-01 10:20:02 | OUT | USA |
| unknown | 2022-04-01 09:59:58 | IN | GER |
| unknown | 2022-04-01 05:04:03 | OUT | ITL |
| unknown | 2022-04-01 05:04:01 | OUT | ITL |
| 2 | 2022-04-01 05:03:59 | OUT | ITL |
</code></pre>
<p>I need to create a DataFrame containing the rows with ID value unknown that have a matching record (same direction and country_ID, at most 2 seconds apart in time; this tolerance can be changed) whose ID is different from unknown.</p>
<p>All rows unknown:</p>
<pre><code>| ID | date | direction | country_ID |
|-----------|-------------------------|---------------|------------|
| unknown | 2022-04-01 10:00:03 | IN | UK |
| unknown | 2022-04-01 10:20:02 | OUT | USA |
| unknown | 2022-04-01 09:59:58 | IN | GER |
| unknown | 2022-04-01 05:04:03 | OUT | ITL |
| unknown | 2022-04-01 05:04:01 | OUT | ITL |
</code></pre>
<p>Matching examples for each row specified above:</p>
<pre><code>| ID | date | direction | country_ID |
|-----------|-------------------------|---------------|------------|
| unknown | 2022-04-01 10:00:03 | IN | UK |
| 0 | 2022-04-01 10:00:01 | IN | UK |
</code></pre>
<pre><code>| ID | date | direction | country_ID |
|-----------|-------------------------|---------------|------------|
| unknown | 2022-04-01 10:20:02 | OUT | USA |
</code></pre>
<pre><code>| ID | date | direction | country_ID |
|-----------|-------------------------|---------------|------------|
| unknown | 2022-04-01 09:59:59 | IN | GER |
| 1 | 2022-04-01 10:00:00 | IN | GER |
</code></pre>
<pre><code>| ID | date | direction | country_ID |
|-----------|-------------------------|---------------|------------|
| unknown | 2022-04-01 05:04:03 | OUT | ITL |
</code></pre>
<pre><code>| ID | date | direction | country_ID |
|-----------|-------------------------|---------------|------------|
| unknown | 2022-04-01 05:04:01 | OUT | ITL |
| 2 | 2022-04-01 05:03:59 | OUT | ITL |
</code></pre>
<p>We remove those that have not any match. We get the resulting DataFrame:</p>
<pre><code>| ID | date | direction | country_ID | date_match | ID_match |
|-----------|-------------------------|---------------|------------|----------------------|---------------|
| unknown | 2022-04-01 10:00:03 | IN | UK | 2022-04-01 10:00:01 | 0 |
| unknown | 2022-04-01 09:59:58 | IN | GER | 2022-04-01 10:00:00 | 1 |
| unknown | 2022-04-01 05:04:01 | OUT | ITL | 2022-04-01 05:03:59 | 2 |
</code></pre>
<p>Thank you in advance for your help.</p>
|
<p>You can use a mask to split the dataframe in two and <a href="https://pandas.pydata.org/docs/reference/api/pandas.merge_asof.html" rel="nofollow noreferrer"><code>pandas.merge_asof</code></a> to find the matches by group and within 2 seconds:</p>
<pre><code>df['date'] = pd.to_datetime(df['date'])
mask = df['ID'].eq('unknown')
idx = (pd
.merge_asof(df[mask].sort_values(by='date').reset_index(),
df[~mask].sort_values(by='date'),
by=['direction', 'country_ID'],
on='date',
direction='nearest', tolerance=pd.Timedelta('2s'),
)
.loc[lambda d: d['ID_y'].notna(), 'index']
)
df.loc[sorted(idx)]
</code></pre>
<p>output:</p>
<pre><code> ID date direction country_ID
1 unknown 2022-04-01 10:00:03 IN UK
7 unknown 2022-04-01 09:59:58 IN GER
9 unknown 2022-04-01 05:04:01 OUT ITL
</code></pre>
<h5>with merged data</h5>
<pre><code>df2 = (pd
.merge_asof(df[mask].sort_values(by='date').reset_index(),
df[~mask].sort_values(by='date').rename(columns={'date': 'date_match'}),
by=['direction', 'country_ID'],
left_on='date', right_on='date_match',
direction='nearest', tolerance=pd.Timedelta('2s'),
suffixes=('', '_match')
)
.loc[lambda d: d['ID_match'].notna()]
.set_index('index').sort_index()
)
</code></pre>
<p>output:</p>
<pre><code> ID date direction country_ID ID_match date_match
index
1 unknown 2022-04-01 10:00:03 IN UK 0 2022-04-01 10:00:01
7 unknown 2022-04-01 09:59:58 IN GER 1 2022-04-01 10:00:00
9 unknown 2022-04-01 05:04:01 OUT ITL 2 2022-04-01 05:03:59
</code></pre>
|
python|pandas|dataframe|datetime
| 1
|
375,672
| 72,327,467
|
Python to list vs explode
|
<pre><code>Set.Categories.str.split(',').tolist()
Set.Categories.str.split(',').explode('Categories')
</code></pre>
<p>what is the difference between tolist and explode in pandas?</p>
|
<p>Those are completely different functions.</p>
<p>The first one returns (here, nested) python lists:</p>
<pre><code>df = pd.DataFrame({'Categories': ['a,b','c,d','e']})
df['Categories'].str.split(',').tolist()
[['a', 'b'], ['c', 'd'], ['e']]
</code></pre>
<p>The second one expands the rows of the Series to have one row per initial list item:</p>
<pre><code>df = pd.DataFrame({'Categories': ['a,b','c,d','e']})
df['Categories'].str.split(',').explode('Categories')
0 a
1 b
2 c
3 d
4 e
Name: Categories, dtype: object
</code></pre>
|
python|pandas|matplotlib
| 1
|
375,673
| 72,140,543
|
How to create a dummy only if a column has non-zero values for certain dates but zero for other dates
|
<p>Let's say I want to identify traders who only traded during bull runs but did not trade (zero values) during downturns or stable periods. Let's say we have two bull runs, <code>2018Q4</code> and <code>2021Q4</code>. Below, <code>D</code> starts trading only from <code>2021Q4</code> (the second bull run period), but I want to include this as =1 as well.</p>
<p>This is my df.</p>
<pre><code>id date value other variables..
A 2019Q4 2
A 2020Q4 2
A 2021Q4 3
B 2018Q4 2
B 2019Q4 0
B 2020Q4 0
B 2021Q4 4
C 2020Q4 3
C 2021Q4 4
D 2021Q4 4
E 2018Q4 3
E 2019Q4 0
E 2020Q4 0
E 2021Q4 2
. .
</code></pre>
<p>desired output would be</p>
<pre><code>id dummy
A 0
B 1
C 0
D 1
E 1
. .
</code></pre>
|
<p>You can build one mask testing whether the value is non-zero and another testing whether the quarter is a bull-run quarter, compare the two masks (thanks mozway for the improvement) and finally aggregate with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.core.groupby.GroupBy.all.html" rel="nofollow noreferrer"><code>GroupBy.all</code></a> to check that all rows agree per group:</p>
<pre><code>m1 = df['value'].ne(0)
m2 = df['date'].isin(['2018Q4','2021Q4'])
df1 = (m1 == m2).groupby(df['id']).all().astype(int).reset_index(name='dummy')
print (df1)
id dummy
0 A 0
1 B 1
2 C 0
3 D 1
4 E 1
</code></pre>
|
python|pandas|dataframe
| 2
|
375,674
| 72,314,802
|
Tensor split to dynamic length tensors based on continuous mask values in tensorflow?
|
<p>I'm trying to figure out how to split my tensor of sequential data into multiple parts, grouping together consecutive elements that share the same value in a binary mask.</p>
<p>I've read the official documentation. However, I can't find any function that handles this easily. Are there any helpful ways to do this in Python?</p>
<p>I have tried with 'tf.ragged.boolean_mask' but it doesn't seem to fit in my case.</p>
<p>The visualized example of my explanation is:</p>
<p>inputs:</p>
<pre><code># both are tensors, NOT data.
data_tensor = ([3,5,6,2,6,1,3,9,5])
mask_tensor = ([0,1,1,1,0,0,1,1,0])
</code></pre>
<p>expected output:</p>
<pre><code>output_tensor = ([[3],[5,6,2],[6,1],[3,9],[5]])
</code></pre>
<p>Thank you.</p>
|
<p>I recently discovered a method to do it in a very clean way in <a href="https://stackoverflow.com/a/72258918/2246849">this answer</a> by @AloneTogether:</p>
<pre><code>import tensorflow as tf
data_tensor = tf.constant([3,5,6,2,6,1,3,9,5])
mask_tensor = tf.constant([0,1,1,1,0,0,1,1,0])
# Index where the mask changes.
change_idx = tf.concat([tf.where(mask_tensor[:-1] != mask_tensor[1:])[:, 0], [tf.shape(mask_tensor)[0]-1]], axis=0)
# Ranges of indices to gather.
ragged_idx = tf.ragged.range(tf.concat([[0], change_idx[:-1] + 1], axis=0), change_idx + 1)
# Gather ranges into ragged tensor.
output_tensor = tf.gather(data_tensor, ragged_idx)
print(output_tensor)
</code></pre>
<pre><code><tf.RaggedTensor [[3], [5, 6, 2], [6, 1], [3, 9], [5]]>
</code></pre>
|
python|tensorflow
| 2
|
375,675
| 72,268,307
|
How to replace string on pandas dataframe before certain characters
|
<p>Here's my dataset</p>
<pre><code>Id Text
1 Animation_and_Cartoon - Comics and Anime/Cartoon_and_anime
2 Animation_and_Cartoon - Comics and Anime/Manga_and_anime
</code></pre>
<p>The expected output is that every <code>_</code> before the <code>-</code> is replaced by ' ', but those after the <code>-</code> are not:</p>
<pre><code>Id Text
1 Animation and Cartoon - Comics and Anime/Cartoon_and_anime
2 Animation and Cartoon - Comics and Anime/Manga_and_anime
</code></pre>
|
<p>You can use:</p>
<pre><code>df['Text'] = df['Text'].str.replace(
r'^([^-]+)',
lambda m: m.group().replace('_and_',' and '),
regex=True)
</code></pre>
<p>Output:</p>
<pre><code> Id Text
0 1 Animation and Cartoon - Comics and Anime/Cartoon_and_anime
1 2 Animation and Cartoon - Comics and Anime/Manga_and_anime
</code></pre>
|
python|python-3.x|regex|pandas
| -1
|
375,676
| 72,476,560
|
How can I find out maximum percentage change for n years?
|
<p>Problem statement:
display the <strong>maximum percentage change</strong> and the <strong>year</strong> that it occurred.
The array looks like this:</p>
<pre><code>import numpy as np
array = np.array([[2010,2011,2012,2013,2014,2015,2016,2017,2018,2019],
[1996,2165,2342,2511,2829,3052,3299,3523,3741,3864]])
</code></pre>
<p>So this is what I have tried so far:</p>
<pre><code>compare = np.roll(array, 1)
pcdiff = (array - compare) / compare
</code></pre>
<p>Can anyone help with this? Thank you!</p>
|
<p>Is that what you're looking for ?</p>
<pre><code>import numpy as np
array = np.array([[2010,2011,2012,2013,2014,2015,2016,2017,2018,2019],
[1996,2165,2342,2511,2829,3052,3299,3523,3741,3864]])
values = array[1]
perc_change = [round((values[i]-values[i-1])*100/values[i-1], 2) for i in range(1, len(values))]
max_perc = max(perc_change)
print(max_perc, array[0][perc_change.index(max_perc)+1])
</code></pre>
<p>output:</p>
<pre><code>12.66 2014
</code></pre>
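<p>Since the data is already a NumPy array, a fully vectorized sketch of the same computation (no Python loop):</p>
<pre><code>years, values = array
pct_change = np.diff(values) / values[:-1] * 100   # year-over-year percentage change
i = pct_change.argmax()
print(round(pct_change[i], 2), years[i + 1])       # 12.66 2014
</code></pre>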
|
python|arrays|numpy|compare|shift
| 1
|
375,677
| 72,297,267
|
RNN network: ValueError: Expected input batch_size (96) to match target batch_size (32)
|
<p>I am implementing a simple recurrent neural network architecture for CIFAR10 image classification. I have also tried changing the batch size from 512 to 1536, but it didn't help. The input_size and sequence_length are both 32.</p>
<pre><code>import torch
import torch.nn as nn
import torchvision
import torchvision.transforms as transforms
from torch.autograd import Variable
from torch.utils.data import DataLoader
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
print(device)
all_transforms = transforms.Compose([
transforms.Resize((32, 32)),
transforms.ToTensor(),
transforms.Normalize(mean=[0.4914, 0.4822, 0.4465],
std=[0.2023, 0.1994, 0.2010])]
)
train_dataset = torchvision.datasets.CIFAR10(root='./data', train=True, transform=all_transforms, download=True)
test_dataset = torchvision.datasets.CIFAR10(root='./data', train=False, transform=all_transforms, download=True)
train_loader = torch.utils.data.DataLoader(dataset=train_dataset, batch_size=32, shuffle=True)
test_loader = torch.utils.data.DataLoader(dataset=test_dataset, batch_size=32, shuffle=True)
print(train_dataset)
input_size = 32
hidden_size = 64
num_layers = 2
num_classes = 10
sequence_length = 32
# Recurrent Neural Network (many-to-one)
# Fully connected neural network with one hidden layer
class SimpleRNN(nn.Module):
def __init__(self, input_dim):
super(SimpleRNN, self).__init__()
self.rnn = nn.RNN(input_size=input_dim, hidden_size=64, num_layers=1, batch_first=True)
self.fc = nn.Linear(64, 10) # hidden dimension is output dimension
def forward(self, x):
h0 = torch.zeros(1, x.size(0), 64).to(device)
out, _ = self.rnn(x, h0)
out = out[:, -1, :]
out = self.fc(out)
return out
model = SimpleRNN(input_dim=input_size)
model.cuda()
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.001, weight_decay=0.005, momentum=0.9)
total_step = len(train_loader)
epochs = 20
for epoch in range(epochs):
for i, (images, labels) in enumerate(train_loader):
images = images.view(-1, 32, 32).cuda()
labels = labels.cuda()
outputs = model(images)
loss = criterion(outputs, labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
print("Epochs [{}/{}], Loss: {:4f}".format(epoch + 1, epochs, loss.item()))
# with torch.no_grad():
# correct = 0
# total = 0
# for images, labels in train_loader:
# images = images.reshape(-1, sequence_length, input_size).cuda()
# labels = images.cuda()
#
# outputs = model(images)
# _, predicted = torch.max(outputs.data, 1)
# total += labels.size(0)
# correct += (predicted == labels).sum().item()
#
# print('Accuracy of the network on the {} train images: {} %'.format(50000, 100 * correct / total))
</code></pre>
<p><strong>Traceback</strong></p>
<pre><code>Traceback (most recent call last):
File "/snap/pycharm-community/278/plugins/python-ce/helpers/pydev/pydevd.py", line 1491, in _exec
pydev_imports.execfile(file, globals, locals) # execute the script
File "/snap/pycharm-community/278/plugins/python-ce/helpers/pydev/_pydev_imps/_pydev_execfile.py", line 18, in execfile
exec(compile(contents+"\n", file, 'exec'), glob, loc)
File "/media/cvpr/CM_1/tutorials/pytorch_simple_rrn.py", line 63, in <module>
loss = criterion(outputs, labels)
File "/home/cvpr/anaconda3/envs/tutorials/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/cvpr/anaconda3/envs/tutorials/lib/python3.8/site-packages/torch/nn/modules/loss.py", line 961, in forward
return F.cross_entropy(input, target, weight=self.weight,
File "/home/cvpr/anaconda3/envs/tutorials/lib/python3.8/site-packages/torch/nn/functional.py", line 2468, in cross_entropy
return nll_loss(log_softmax(input, 1), target, weight, None, ignore_index, None, reduction)
File "/home/cvpr/anaconda3/envs/tutorials/lib/python3.8/site-packages/torch/nn/functional.py", line 2261, in nll_loss
raise ValueError('Expected input batch_size ({}) to match target batch_size ({}).'
ValueError: Expected input batch_size (96) to match target batch_size (32).
</code></pre>
|
<p>You've resized your images incorrectly, they are RGB images and so you need to include the channel information:</p>
<pre><code>images = images.view(-1, 32, 3*32).cuda()
</code></pre>
<p>This is why your effective batch size is 3x what you expect: each color channel was being treated as a separate element of the batch.</p>
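<p>Note (not part of the answer above, but implied by the reshape): if the input becomes <code>(batch, 32, 3*32)</code>, the RNN's input dimension has to match as well, e.g.:</p>
<pre><code>input_size = 3 * 32                      # 96 features per time step (3 channels x 32 pixels)
model = SimpleRNN(input_dim=input_size)
</code></pre>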
|
pytorch|recurrent-neural-network
| 0
|
375,678
| 72,410,286
|
How to clean data which has columns of 'Object' Data type
|
<p>So recently I was again playing with an NFL dataset and trying to do a data-cleaning assignment. While doing this, I came across columns of 'Object' datatype. After some data exploration I found that these columns have a lot of unique values.
What I am looking for is to fill the NaN values in this dataset and continue working with it.
I want to know some techniques which can be used to clean this type of data.</p>
<pre><code>missing_columns1 = [var for var in nfl2.columns if nfl2[var].isnull().mean()>0]
for data1 in range(len(missing_columns1)):
category1 = missing_columns1[data1]
print(category1)
print(nfl2[category1].dtype)
print(nfl2[category1].isnull().sum())
print(nfl2[category1].unique())
print("---------------")
</code></pre>
<p>When I ran the above python code on the dataset, here the output that came.</p>
<pre><code>Tackler1
object
147183
['M.Griffin' 'C.Hope' 'S.Tulloch' ... 'T.Gurley' 'K.Peko' 'C.Kaepernick']
---------------
Tackler2
object
318045
[nan 'J.Farrior' 'S.Tulloch' ... 'D.Carrier' 'L.Trail' 'D.Lowry']
---------------
FieldGoalResult
object
354431
[nan 'No Good' 'Blocked' 'Good']
---------------
RecFumbTeam
object
358513
[nan 'PIT' 'TEN' 'MIN' 'DET' 'TB' 'NYJ' 'HOU' 'JAC' 'PHI' 'CAR' 'BAL' 'KC'
'ATL' 'ARI' 'SEA' 'STL' 'NYG' 'WAS' 'BUF' 'NE' 'OAK' 'SD' 'NO' 'GB' 'CIN'
'SF' 'CLE' 'DEN' 'CHI' 'MIA' 'IND' 'DAL' 'LA']
---------------
RecFumbPlayer
object
358513
[nan 'K.Fox' 'S.Tulloch' ... 'M.Paradis' 'A.Gotsis' 'F.Clark']
---------------
ChalReplayResult
object
359476
[nan 'Upheld' 'Reversed']
---------------
PenalizedTeam
object
336362
[nan 'PIT' 'TEN' 'CLE' 'MIN' 'NO' 'DET' 'TB' 'DAL' 'HOU' 'NYJ' 'JAC' 'IND'
'CIN' 'DEN' 'PHI' 'CAR' 'BAL' 'KC' 'MIA' 'ATL' 'SF' 'ARI' 'STL' 'SEA'
'WAS' 'NYG' 'GB' 'CHI' 'BUF' 'NE' 'SD' 'OAK' 'LA']
---------------
PenalizedPlayer
object
337483
[nan 'T.Polamalu' 'D.Stewart' ... 'D.Latham' 'D.Phillips' 'S.Rankins']
---------------
</code></pre>
<p>You can see that the columns have a lot of unique values. How can I fill the NaN values in these columns based on this data? I have attached the dataset for your reference; you can find it in the release section since it is greater than 25 MB.</p>
|
<p>Null-like values can be replaced with <code>pd.DataFrame.fillna()</code> or <code>pd.Series.fillna()</code>.</p>
<p><a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.fillna.html" rel="nofollow noreferrer">https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.fillna.html</a></p>
<p><a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.fillna.html" rel="nofollow noreferrer">https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.fillna.html</a></p>
<p>An example:</p>
<pre><code>import pandas as pd
sample_data = {"FieldGoalResult": [pd.NA, 'No Good', 'Blocked', 'Good']}
df = pd.DataFrame(sample_data)
df
FieldGoalResult
0 <NA>
1 No Good
2 Blocked
3 Good
filled_df = df.fillna(value="filler_of_choice")
filled_df
FieldGoalResult
0 filler_of_choice
1 No Good
2 Blocked
3 Good
</code></pre>
|
python|pandas|numpy|machine-learning|data-cleaning
| 2
|
375,679
| 72,363,556
|
Pandas Pivot Table - Adding Subtotals to Multiindex Table
|
<p>I have a table of data structured as it follows:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Name</th>
<th>Card</th>
<th>Payment ID</th>
<th>Amount</th>
</tr>
</thead>
<tbody>
<tr>
<td>John Doe</td>
<td>t077</td>
<td>7312637</td>
<td>54</td>
</tr>
<tr>
<td>John Doe</td>
<td>t077</td>
<td>1323131</td>
<td>34</td>
</tr>
<tr>
<td>Jane Doe</td>
<td>s044</td>
<td>1231321</td>
<td>13</td>
</tr>
<tr>
<td>John Doe</td>
<td>j544</td>
<td>4634564</td>
<td>53</td>
</tr>
</tbody>
</table>
</div>
<p>The output I want to achieve is to have a pivot table with a similar format:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Name</th>
<th>Number of Transactions</th>
<th>Sum</th>
</tr>
</thead>
<tbody>
<tr>
<td>John Doe</td>
<td>3</td>
<td>141</td>
</tr>
<tr>
<td>--- t077</td>
<td>2</td>
<td>88</td>
</tr>
<tr>
<td>--- j544</td>
<td>1</td>
<td>53</td>
</tr>
<tr>
<td>Jane Doe</td>
<td>1</td>
<td>13</td>
</tr>
<tr>
<td>--- s044</td>
<td>1</td>
<td>13</td>
</tr>
</tbody>
</table>
</div>
<p>Please keep in mind that:</p>
<ul>
<li>Payment ID uniquely identifies the transaction (every line in the table)</li>
<li>Every Name can have one or multiple transactions with one or multiple cards</li>
</ul>
<p>I tried using pandas pivot_table; however, I cannot find a way to structure the data as I want (including subtotals per Name). I can only group by Name and Card using:</p>
<pre><code>pd.pivot_table(df, values='Amount', index=['Name','Card'], aggfunc=(np.sum, len))
</code></pre>
<p>Sorry for the poor formatting on the table, my markdown skills are quite limited.</p>
<p>Any help on this?</p>
|
<p>Pivot table is a good approach, try:</p>
<pre><code>table = pd.pivot_table(
df,
values=['Amount'],
index=['Name', 'Card'],
aggfunc=['count', 'sum'],
)
# Adds subtotals, and sorts:
pd.concat([
d.append(d.sum().rename((k, 'Total')))
for k, d in table.groupby(level=0)
]).sort_index(ascending=[False, True])
</code></pre>
<p>Output:</p>
<pre><code> count sum
Amount Amount
Name Card
John Doe Total 3 141
j544 1 53
t077 2 88
Jane Doe Total 1 13
s044 1 13
</code></pre>
<p>Subtotal reference: <a href="https://stackoverflow.com/questions/41383302/pivot-table-subtotals-in-pandas">link.</a></p>
|
python|pandas|pivot-table
| 0
|
375,680
| 72,338,772
|
How to fix Value error "Length of Values (1) doesn match length of index (15)"
|
<p>Workflow =></p>
<ul>
<li>Read CSV file and get <em>Unit Price</em> column data</li>
<li>Convert column data price and create a new column as name 'Fabric'</li>
<li>save the output as xlsx</li>
</ul>
<p>Sample:</p>
<pre><code>Unit Price
----------
330
350
380
</code></pre>
<p>I want to convert this data to:</p>
<pre><code>Fabric
------
Card
Combed
Viscos
</code></pre>
<p>My code:</p>
<pre><code>##Fabric Data
getFabric = df_new['Unit Price']
result = []
for fabric in getFabric:
if fabric == 310:
result.append("Card")
elif fabric == 330:
result.append("Combed Dawah")
elif fabric == 350:
result.append("Combed Regular")
elif fabric == 490:
result.append("Viscos")
elif fabric == 550:
result.append("Pleated")
else:
result.append(fabric)
df_new['Fabric'] = result
</code></pre>
<p>Error :</p>
|
<p>That's easy dude...</p>
<pre><code>your_df["Fabric"] = your_df["Unit Price"].apply(lambda x: str(x).replace("330", "Card"))
# do this for every conversion
your_df.to_csv("filename.csv")
</code></pre>
<p>The dataframe can then be saved as a CSV file that can be opened in MS Excel; for an actual xlsx file (as in the workflow) use <code>your_df.to_excel("filename.xlsx")</code>.</p>
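<p>Alternatively, a sketch using <code>Series.map</code> with the full price-to-fabric mapping from the question (prices without a mapping keep their original value; this assumes the Unit Price column holds integers):</p>
<pre><code>mapping = {310: "Card", 330: "Combed Dawah", 350: "Combed Regular",
           490: "Viscos", 550: "Pleated"}
df_new["Fabric"] = df_new["Unit Price"].map(mapping).fillna(df_new["Unit Price"])
df_new.to_excel("filename.xlsx", index=False)   # save as xlsx, as described in the workflow
</code></pre>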
|
python|pandas|valueerror
| 1
|
375,681
| 72,200,942
|
how do you fill row values of a column groupby with the max value of the grouped data
|
<p>I am trying to fill the values of a column in grouped data with the maximum value of the grouped data.</p>
<p>The following is a sample of the data</p>
<pre><code>
df1 = [[52, '1', '0'], [52, '1', '1'],
[52, '1', '0'], [52, '2', '0'],
[53, '2', '0'], [52, '2', '0']]
df = pd.DataFrame(df1, columns =['Cow','Lact', 'fail'])
Producing the following dataframe
Cow Lact fail
0 52 1 0
1 52 1 1
2 52 1 0
3 52 2 0
4 53 2 0
5 52 2 0
</code></pre>
<p>In this example I would like to replace the 0 values with 1 (max value) for cow = 52 lact = 1</p>
<pre><code>
Cow Lact fail
0 52 1 1
1 52 1 1
2 52 1 1
3 52 2 0
4 53 2 0
5 52 2 0
</code></pre>
<p>I have unsuccessfully modified code that appeared in <a href="https://stackoverflow.com/questions/53074734/pandas-groupby-change-values-in-one-column-based-on-values-in-another-column">Pandas groupby: change values in one column based on values in another column</a></p>
<pre><code>grouped = df.groupby(["Cow", "Lact"], as_index=False).max()['fail']
for i in grouped:
if i == 1:
df['fail'] = 1
</code></pre>
<p>Solutions and clarification re failure of my approach appreciated.
Thanks</p>
|
<p>You can use a group by in combination with a transform "max." I'm not sure if you would simply want to replace the 'fail' column or if you would want to make a new column but this should get you the expected results.</p>
<pre><code>df['fail'] = df.groupby(['Cow', 'Lact'])['fail'].transform(max)
</code></pre>
|
python|pandas|pandas-groupby
| 2
|
375,682
| 72,441,972
|
How to convert Pandas DataFrame to Julia DataFrame.jl
|
<p>I have not been able to find a way to convert my 30,000 x 1,000 Pandas.jl String DataFrame into a DataFrames.jl DataFrame. I have attempted previous stackoverflow solutions but they have not worked. I would like to know what the best way is to convert the dataframe. Thanks for your help.</p>
|
<p>Preparing data:</p>
<pre><code>julia> import Pandas
julia> import DataFrames
julia> df_df1 = DataFrames.DataFrame(string.(rand(1:10, 10, 5)), :auto)
10×5 DataFrame
Row │ x1 x2 x3 x4 x5
│ String String String String String
─────┼────────────────────────────────────────
1 │ 6 1 2 5 4
2 │ 9 5 1 1 9
3 │ 9 1 5 2 9
4 │ 6 7 9 1 5
5 │ 1 10 8 5 1
6 │ 8 5 9 9 6
7 │ 9 8 9 8 4
8 │ 2 6 10 5 4
9 │ 5 4 8 9 8
10 │ 5 4 10 5 8
julia> pd_df = Pandas.DataFrame(df_df1)
x1 x2 x3 x4 x5
0 6 1 2 5 4
1 9 5 1 1 9
2 9 1 5 2 9
3 6 7 9 1 5
4 1 10 8 5 1
5 8 5 9 9 6
6 9 8 9 8 4
7 2 6 10 5 4
8 5 4 8 9 8
9 5 4 10 5 8
</code></pre>
<p>and now the task you want to do:</p>
<pre><code>julia> DataFrames.DataFrame([col => collect(pd_df[col]) for col in pd_df.pyo.columns])
10×5 DataFrame
Row │ x1 x2 x3 x4 x5
│ String String String String String
─────┼────────────────────────────────────────
1 │ 6 1 2 5 4
2 │ 9 5 1 1 9
3 │ 9 1 5 2 9
4 │ 6 7 9 1 5
5 │ 1 10 8 5 1
6 │ 8 5 9 9 6
7 │ 9 8 9 8 4
8 │ 2 6 10 5 4
9 │ 5 4 8 9 8
10 │ 5 4 10 5 8
</code></pre>
<p>(Unfortunately Pandas.jl does not correctly support the Tables.jl interface, so such a work-around seems to be needed; I also decided to drop the Pandas <code>Series</code> and convert them to standard Julia <code>Vector</code>s.)</p>
|
pandas|dataframe|julia|pycall|pycall.jl
| 4
|
375,683
| 72,360,428
|
tf.GradientTape giving None gradient while writing custom training loop
|
<p>I'm trying to write a custom training loop. Here is a sample of the code I'm trying to run. I have two training parameters, and one parameter updates the other. See the code below:</p>
<pre><code>x1 = tf.Variable(1.0, dtype=float)
x2 = tf.Variable(1.0, dtype=float)
with tf.GradientTape() as tape:
n = x2 + 4
x1.assign(n)
x = x1 + 1
y = x**2
val = tape.gradient(y, [x1, x2])
for v in val:
print(v)
</code></pre>
<p>and the output is</p>
<pre><code>tf.Tensor(12.0, shape=(), dtype=float32)
None
</code></pre>
<p>It seems like GradientTape is not watching the <code>x2</code> parameter. Both parameters are of <code>tf.Variable</code> type, so GradientTape should watch both of them. I also tried <code>tape.watch(x2)</code>, which did not work either. Am I missing something?</p>
|
<p>Check the <a href="https://www.tensorflow.org/guide/autodiff#getting_a_gradient_of_none" rel="nofollow noreferrer">docs</a> regarding a gradient of <code>None</code>. To get the gradients for <code>x1</code>, you have to track <code>x</code> with <code>tape.watch(x)</code>:</p>
<pre><code>x1 = tf.Variable(1.0, dtype=float)
x2 = tf.Variable(1.0, dtype=float)
with tf.GradientTape() as tape:
n = x2 + 4
x1.assign(n)
x = x1 + 1
tape.watch(x)
y = x**2
dv0, dv1 = tape.gradient(y, [x1, x2])
print(dv0)
print(dv1)
</code></pre>
<p>However, regarding <code>x2</code>, the output <code>y</code> is not connected to <code>x2</code> at all, since <code>x1.assign(n)</code> does not seem to be tracked and that is why the gradient is None. This is consistent with the <a href="https://www.tensorflow.org/guide/autodiff#4_took_gradients_through_a_stateful_object" rel="nofollow noreferrer">docs</a>:</p>
<blockquote>
<p>State stops gradients. When you read from a stateful object, the tape can only observe the current state, not the history that lead to it.</p>
<p>A tf.Tensor is immutable. You can't change a tensor once it's created.
It has a value, but no state. All the operations discussed so far are
also stateless: the output of a tf.matmul only depends on its inputs.</p>
<p>A tf.Variable has internal state—its value. When you use the variable,
the state is read. It's normal to calculate a gradient with respect to
a variable, but the variable's state blocks gradient calculations from
going farther back</p>
</blockquote>
<p>If, for example, you do something like this:</p>
<pre><code>x1 = tf.Variable(1.0, dtype=float)
x2 = tf.Variable(1.0, dtype=float)
with tf.GradientTape() as tape:
n = x2 + 4
x1 = n
x = x1 + 1
tape.watch(x)
y = x**2
dv0, dv1 = tape.gradient(y, [x1, x2])
</code></pre>
<p>It should work.</p>
|
python|tensorflow|gradient-descent|gradienttape
| 2
|
375,684
| 72,363,741
|
pytorch dataloader - RuntimeError: stack expects each tensor to be equal size, but got [157] at entry 0 and [154] at entry 1
|
<p>I am a beginner with pytorch. I am trying to do an aspect based sentiment analysis. I am facing the error mentioned in the subject. My code is as follows: I request help to resolve this error. Thanks in advance. I will share the entire code and the error stack.
<code>!pip install transformers</code></p>
<pre><code>import transformers
from transformers import BertModel, BertTokenizer, AdamW, get_linear_schedule_with_warmup
import torch
import numpy as np
import pandas as pd
import seaborn as sns
from pylab import rcParams
import matplotlib.pyplot as plt
from matplotlib import rc
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix, classification_report
from collections import defaultdict
from textwrap import wrap
from torch import nn, optim
from torch.utils.data import Dataset, DataLoader
%matplotlib inline
%config InlineBackend.figure_format='retina'
sns.set(style='whitegrid', palette='muted', font_scale=1.2)
HAPPY_COLORS_PALETTE = ["#01BEFE", "#FFDD00", "#FF7D00", "#FF006D", "#ADFF02", "#8F00FF"]
sns.set_palette(sns.color_palette(HAPPY_COLORS_PALETTE))
rcParams['figure.figsize'] = 12, 8
RANDOM_SEED = 42
np.random.seed(RANDOM_SEED)
torch.manual_seed(RANDOM_SEED)
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
</code></pre>
<p><code>df = pd.read_csv("/Users/user1/Downloads/auto_bio_copy.csv")</code></p>
<p>I am importing a csv file which has content and label as shown below:</p>
<p><code>df.head()</code></p>
<pre><code> content label
0 I told him I would leave the car and come back... O O O O O O O O O O O O O O O O O O O O O O O ...
1 I had the ignition interlock device installed ... O O O B-Negative I-Negative I-Negative O O O O...
2 Aug. 23 or 24 I went to Walmart auto service d... O O O O O O O B-Negative I-Negative I-Negative...
3 Side note This is the same reaction I 'd gotte... O O O O O O O O O O O O O O O O O O O O O O O ...
4 Locked out of my car . Called for help 215pm w... O O O O O O O O O O O O O O O O O B-Negative O...
</code></pre>
<p><code>df.shape</code></p>
<p><code>(1999, 2)</code></p>
<p>I am converting the label values into integers as follows:
O=zero(0), B-Positive=1, I-Positive=2, B-Negative=3, I-Negative=4, B-Neutral=5, I-Neutral=6, B-Mixed=7, I-Mixed=8</p>
<pre><code>df['label'] = df.label.str.replace('O', '0')
df['label'] = df.label.str.replace('B-Positive', '1')
df['label'] = df.label.str.replace('I-Positive', '2')
df['label'] = df.label.str.replace('B-Negative', '3')
df['label'] = df.label.str.replace('I-Negative', '4')
df['label'] = df.label.str.replace('B-Neutral', '5')
df['label'] = df.label.str.replace('I-Neutral', '6')
df['label'] = df.label.str.replace('B-Mixed', '7')
df['label'] = df.label.str.replace('I-Mixed', '8')
</code></pre>
<p>Next, converting the string to integer list as follows:</p>
<pre><code>df['label'] = df['label'].str.split(' ').apply(lambda s: list(map(int, s)))
</code></pre>
<pre><code>df.head()
</code></pre>
<pre><code> content label
0 I told him I would leave the car and come back... [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, ...
1 I had the ignition interlock device installed ... [0, 0, 0, 3, 4, 4, 0, 0, 0, 0, 0, 0, 0, 0, 0, ...
2 Aug. 23 or 24 I went to Walmart auto service d... [0, 0, 0, 0, 0, 0, 0, 3, 4, 4, 4, 0, 0, 0, 0, ...
3 Side note This is the same reaction I 'd gotte... [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, ...
4 Locked out of my car . Called for help 215pm w... [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, ...
</code></pre>
<pre><code>PRE_TRAINED_MODEL_NAME = 'bert-base-cased'
</code></pre>
<pre><code>tokenizer = BertTokenizer.from_pretrained(PRE_TRAINED_MODEL_NAME)
</code></pre>
<pre><code>token_lens = []
for txt in df.content:
tokens = tokenizer.encode_plus(txt, max_length=512, add_special_tokens=True, truncation=True, return_attention_mask=True)
token_lens.append(len(tokens))
MAX_LEN = 512
</code></pre>
<pre><code>class Auto_Bio_Dataset(Dataset):
def __init__(self, contents, labels, tokenizer, max_len):
self.contents = contents
self.labels = labels
self.tokenizer = tokenizer
self.max_len = max_len
def __len__(self):
return len(self.contents)
def __getitem__(self, item):
content = str(self.contents[item])
label = self.labels[item]
encoding = self.tokenizer.encode_plus(
content,
add_special_tokens=True,
max_length=self.max_len,
return_token_type_ids=False,
#padding='max_length',
pad_to_max_length=True,
truncation=True,
return_attention_mask=True,
return_tensors='pt'
)
return {
'content_text': content,
'input_ids': encoding['input_ids'].flatten(),
'attention_mask': encoding['attention_mask'].flatten(),
'labels': torch.tensor(label)
}
</code></pre>
<pre><code>df_train, df_test = train_test_split(
df,
test_size=0.1,
random_state=RANDOM_SEED
)
df_val, df_test = train_test_split(
df_test,
test_size=0.5,
random_state=RANDOM_SEED
)
</code></pre>
<pre><code>df_train.shape, df_val.shape, df_test.shape
</code></pre>
<pre><code>((1799, 2), (100, 2), (100, 2))
</code></pre>
<pre><code>def create_data_loader(df, tokenizer, max_len, batch_size):
ds = Auto_Bio_Dataset(
contents=df.content.to_numpy(),
labels=df.label.to_numpy(),
tokenizer=tokenizer,
max_len=max_len
)
return DataLoader(
ds,
batch_size=batch_size,
num_workers=2
)
</code></pre>
<pre><code>BATCH_SIZE = 16
train_data_loader = create_data_loader(df_train, tokenizer, MAX_LEN, BATCH_SIZE)
val_data_loader = create_data_loader(df_val, tokenizer, MAX_LEN, BATCH_SIZE)
test_data_loader = create_data_loader(df_test, tokenizer, MAX_LEN, BATCH_SIZE)
</code></pre>
<pre><code>data = next(iter(train_data_loader))
data.keys()
</code></pre>
<p>Error is as follows:</p>
<pre><code>---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-71-e0a71018e473> in <module>
----> 1 data = next(iter(train_data_loader))
2 data.keys()
~/opt/anaconda3/lib/python3.7/site-packages/torch/utils/data/dataloader.py in __next__(self)
528 if self._sampler_iter is None:
529 self._reset()
--> 530 data = self._next_data()
531 self._num_yielded += 1
532 if self._dataset_kind == _DatasetKind.Iterable and \
~/opt/anaconda3/lib/python3.7/site-packages/torch/utils/data/dataloader.py in _next_data(self)
1222 else:
1223 del self._task_info[idx]
-> 1224 return self._process_data(data)
1225
1226 def _try_put_index(self):
~/opt/anaconda3/lib/python3.7/site-packages/torch/utils/data/dataloader.py in _process_data(self, data)
1248 self._try_put_index()
1249 if isinstance(data, ExceptionWrapper):
-> 1250 data.reraise()
1251 return data
1252
~/opt/anaconda3/lib/python3.7/site-packages/torch/_utils.py in reraise(self)
455 # instantiate since we don't know how to
456 raise RuntimeError(msg) from None
--> 457 raise exception
458
459
RuntimeError: Caught RuntimeError in DataLoader worker process 0.
Original Traceback (most recent call last):
File "/Users/namrathabhandarkar/opt/anaconda3/lib/python3.7/site-packages/torch/utils/data/_utils/worker.py", line 287, in _worker_loop
data = fetcher.fetch(index)
File "/Users/namrathabhandarkar/opt/anaconda3/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 52, in fetch
return self.collate_fn(data)
File "/Users/namrathabhandarkar/opt/anaconda3/lib/python3.7/site-packages/torch/utils/data/_utils/collate.py", line 157, in default_collate
return elem_type({key: default_collate([d[key] for d in batch]) for key in elem})
File "/Users/namrathabhandarkar/opt/anaconda3/lib/python3.7/site-packages/torch/utils/data/_utils/collate.py", line 157, in <dictcomp>
return elem_type({key: default_collate([d[key] for d in batch]) for key in elem})
File "/Users/namrathabhandarkar/opt/anaconda3/lib/python3.7/site-packages/torch/utils/data/_utils/collate.py", line 138, in default_collate
return torch.stack(batch, 0, out=out)
RuntimeError: stack expects each tensor to be equal size, but got [157] at entry 0 and [154] at entry 1
</code></pre>
<p>I found in a GitHub post that this error can be caused by the batch size, so I changed the batch size to 8, and then the error was as follows:</p>
<pre><code>BATCH_SIZE = 8
train_data_loader = create_data_loader(df_train, tokenizer, MAX_LEN, BATCH_SIZE)
val_data_loader = create_data_loader(df_val, tokenizer, MAX_LEN, BATCH_SIZE)
test_data_loader = create_data_loader(df_test, tokenizer, MAX_LEN, BATCH_SIZE)
</code></pre>
<pre><code>data = next(iter(train_data_loader))
data.keys()
</code></pre>
<pre><code>RuntimeError Traceback (most recent call last)
<ipython-input-73-e0a71018e473> in <module>
----> 1 data = next(iter(train_data_loader))
2 data.keys()
~/opt/anaconda3/lib/python3.7/site-packages/torch/utils/data/dataloader.py in __next__(self)
528 if self._sampler_iter is None:
529 self._reset()
--> 530 data = self._next_data()
531 self._num_yielded += 1
532 if self._dataset_kind == _DatasetKind.Iterable and \
~/opt/anaconda3/lib/python3.7/site-packages/torch/utils/data/dataloader.py in _next_data(self)
1222 else:
1223 del self._task_info[idx]
-> 1224 return self._process_data(data)
1225
1226 def _try_put_index(self):
~/opt/anaconda3/lib/python3.7/site-packages/torch/utils/data/dataloader.py in _process_data(self, data)
1248 self._try_put_index()
1249 if isinstance(data, ExceptionWrapper):
-> 1250 data.reraise()
1251 return data
1252
~/opt/anaconda3/lib/python3.7/site-packages/torch/_utils.py in reraise(self)
455 # instantiate since we don't know how to
456 raise RuntimeError(msg) from None
--> 457 raise exception
458
459
RuntimeError: Caught RuntimeError in DataLoader worker process 0.
Original Traceback (most recent call last):
File "/Users/namrathabhandarkar/opt/anaconda3/lib/python3.7/site-packages/torch/utils/data/_utils/worker.py", line 287, in _worker_loop
data = fetcher.fetch(index)
File "/Users/namrathabhandarkar/opt/anaconda3/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 52, in fetch
return self.collate_fn(data)
File "/Users/namrathabhandarkar/opt/anaconda3/lib/python3.7/site-packages/torch/utils/data/_utils/collate.py", line 157, in default_collate
return elem_type({key: default_collate([d[key] for d in batch]) for key in elem})
File "/Users/namrathabhandarkar/opt/anaconda3/lib/python3.7/site-packages/torch/utils/data/_utils/collate.py", line 157, in <dictcomp>
return elem_type({key: default_collate([d[key] for d in batch]) for key in elem})
File "/Users/namrathabhandarkar/opt/anaconda3/lib/python3.7/site-packages/torch/utils/data/_utils/collate.py", line 137, in default_collate
out = elem.new(storage).resize_(len(batch), *list(elem.size()))
RuntimeError: Trying to resize storage that is not resizable
</code></pre>
<p>I am not sure what is causing the first error (the one mentioned in the subject). I am using padding and truncation in my code, yet the error persists.</p>
<p>Any help to resolve this issue is highly appreciated.</p>
<p>Thanks in advance.</p>
|
<p>Quick answer: you need to implement your own <code>collate_fn</code> function when creating a <code>DataLoader</code>. See <a href="https://discuss.pytorch.org/t/dataloader-gives-stack-expects-each-tensor-to-be-equal-size-due-to-different-image-has-different-objects-number/91941/7" rel="nofollow noreferrer">the discussion from PyTorch forum</a>.</p>
<p>You should be able to pass the function object to <code>DataLoader</code> instantiation:</p>
<pre class="lang-py prettyprint-override"><code>def my_collate_fn(data):
# TODO: Implement your function
# But I guess in your case it should be:
return tuple(data)
# then, inside your create_data_loader function:
return DataLoader(
ds,
batch_size=batch_size,
num_workers=2,
collate_fn=my_collate_fn
)
</code></pre>
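<p>Since the stack error comes from tokenized sequences of unequal length, a padding collate function is one concrete way to fill in that TODO (a minimal sketch, assuming each dataset item is a dict of 1-D tensors with keys <code>input_ids</code>, <code>attention_mask</code> and <code>targets</code> — adjust the keys to whatever your <code>__getitem__</code> actually returns):</p>
<pre class="lang-py prettyprint-override"><code>import torch
from torch.nn.utils.rnn import pad_sequence

def pad_collate_fn(batch):
    # batch is a list of the dicts produced by the Dataset's __getitem__
    input_ids = pad_sequence([item["input_ids"] for item in batch],
                             batch_first=True, padding_value=0)
    attention_mask = pad_sequence([item["attention_mask"] for item in batch],
                                  batch_first=True, padding_value=0)
    targets = torch.stack([item["targets"] for item in batch])
    return {"input_ids": input_ids, "attention_mask": attention_mask, "targets": targets}
</code></pre>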
<p>This should be the way to solve this, but as a temporary remedy, in case anything is urgent or you just want a quick test, simply change <code>batch_size</code> to <code>1</code> to prevent torch from trying to stack tensors with different shapes.</p>
|
pytorch|sentiment-analysis|pytorch-dataloader
| 1
|
375,685
| 72,263,932
|
pandas how to explode from two cells element-wise
|
<p>I have a dataframe:</p>
<pre><code>df =
A B C
1 [2,3] [4,5]
</code></pre>
<p>And I want to explode it element-wise based on [B,C] to get:</p>
<pre><code>df =
A B C
1 2 4
1 3 5
</code></pre>
<p>What is the best way to do so?
B and C always have the same length.</p>
<p>Thanks</p>
|
<p>Try, in pandas <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.explode.html?highlight=explode" rel="nofollow noreferrer">1.3.2</a>:</p>
<pre><code>df.explode(['B', 'C'])
</code></pre>
<p>Output:</p>
<pre><code> A B C
0 1 2 4
0 1 3 5
</code></pre>
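<p>On pandas versions older than 1.3, where <code>explode</code> does not accept a list of columns, a common workaround is to explode column-wise via <code>apply</code> (a sketch, relying on <code>B</code> and <code>C</code> having equal lengths as stated):</p>
<pre><code>df.set_index('A').apply(pd.Series.explode).reset_index()
</code></pre>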
|
pandas|dataframe|data-science|data-munging|pandas-explode
| 1
|
375,686
| 72,370,869
|
Issues with generating the following block matrix using horizontal stacking and vertical stacking
|
<p>I am trying to generate the following block matrix consisting of submatrices <code>A</code> and <code>B</code>, where <code>N</code> is a positive integer. So far, my code is as follows:</p>
<pre><code>C_lower = B
for j in range(0,N):
for i in range(0,N-j):
col = np.linalg.matrix_power(A,i) @ B
C = np.hstack(np.vstack((C_lower,col)))
</code></pre>
<p><a href="https://i.stack.imgur.com/82yX8.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/82yX8.png" alt="enter image description here" /></a></p>
<p>However, it seems like my code is not working because the loop continues forever. Any suggestions?</p>
<p>Similarly, I'm also having issues with constructing the following block diagonal matrices:
<a href="https://i.stack.imgur.com/1Gigc.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/1Gigc.png" alt="enter image description here" /></a></p>
<p>I tried using <code>block_diag</code> from <code>scipy</code>, but I could not find a way to repeat <code>Q</code> N times (N = 50 in my case); I had to write <code>block_diag(Q,Q,Q,Q,Q,Q,Q.......)</code> in order to get the block diagonal matrix I want.</p>
|
<p>Here's the answer to your first question. There are a number of issues in your code. This is a better way of achieving what you want:</p>
<pre><code>C = np.zeros((N, N, A.shape[0], B.shape[1]))
for i in range(N):
for j in range(i + 1):
C[i, j] = np.linalg.matrix_power(A, i - j) @ B
</code></pre>
<p>Similarly for your second question:</p>
<pre><code>Q_ = np.zeros((N, N, *Q.shape))
for i in range(N):
Q_[i, i] = Q
</code></pre>
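<p>If you need these as ordinary 2D arrays rather than 4D block arrays, the blocks can be collapsed, and <code>scipy</code> can repeat <code>Q</code> without writing it out N times (a sketch, assuming <code>A</code> is p x p and <code>B</code> is p x q):</p>
<pre><code>from scipy.linalg import block_diag

p, q = A.shape[0], B.shape[1]
C_2d = C.transpose(0, 2, 1, 3).reshape(N * p, N * q)  # stitch the (i, j) blocks into one matrix
Q_2d = block_diag(*[Q] * N)                           # Q repeated N times on the diagonal
</code></pre>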
|
python|numpy
| 1
|
375,687
| 72,218,279
|
How to get the min and the max index where a pandas column has the same value
|
<p>I have the following pandas dataframe</p>
<pre><code>foo = pd.DataFrame({'step': [1,2,3,4,5,6,7,8], 'val': [1,1,1,0,0,1,0,1]})
</code></pre>
<p>I would like to get the first and last <code>step</code> for each sequence of <code>1</code>s in the <code>val</code> column.
Explanation:</p>
<ul>
<li><p>The first sequence of ones happens at steps <code>1,2,3</code> -> first <code>step</code> is <code>1</code> last <code>step</code> is 3</p>
</li>
<li><p>The second sequence of ones happens at step 6 -> first <code>step</code> is <code>6</code> last <code>step</code> is 6</p>
</li>
<li><p>The last sequence of ones happens at step 8 -> first <code>step</code> is <code>8</code> last <code>step</code> is 8</p>
</li>
</ul>
<p>So the output is the list <code>[1,3,6,6,8,8]</code></p>
<p>Any ideas how to do that ?</p>
|
<p>IIUC, you can use a <code>groupby</code> aggregation, flatten using numpy and convert to list:</p>
<pre><code># compute groups of consecutive numbers
group = foo['val'].ne(foo['val'].shift()).cumsum()
out = (foo
    .loc[foo['val'].eq(1), 'step'] # keep step only where val is 1
.groupby(group).agg(['first', 'last']) # get first and last
.to_numpy().ravel().tolist() # reshape
)
</code></pre>
<p>output: <code>[1, 3, 6, 6, 8, 8]</code></p>
|
python|pandas
| 2
|
375,688
| 72,357,955
|
reuse function with multiple string values pandas
|
<p>I'm hoping to streamline a function that only returns columns based on a single string value. In the example below, I have two distinct colours in a df. I want to pass each colour to a function, but I only want the output to include columns relating to that colour.</p>
<p>If I have numerous colours and multiple outputs within the function, the returned df gets too large.</p>
<pre><code>import pandas as pd
import numpy as np
d = ({
'Date' : ['1/1/18','1/1/18','2/1/18','3/1/18','1/2/18','1/3/18','2/1/19','3/1/19'],
'Val' : ['A','B','C','D','A','B','C','D'],
'Blue' : ['Blue', 'Blue', 'Blue', np.NaN, np.NaN, 'Blue', np.NaN, np.NaN],
'Red' : [np.NaN, np.NaN, np.NaN, 'Red', 'Red', np.NaN, 'Red', 'Red']
})
df = pd.DataFrame(data = d)
df['Date'] = pd.to_datetime(df['Date'], format = '%d/%m/%y')
df['Count'] = df.Date.map(df.groupby('Date').size())
def func(df, val):
df['%s_cat' % val] = df['Count'] * 2
return df
blue = func(df, 'Blue')
red = func(df, 'Red')
</code></pre>
<p>Intended output (Blue):</p>
<pre><code> Date Val Blue Count Blue_cat
0 2018-01-01 A Blue 2 4
1 2018-01-01 B Blue 2 4
2 2018-01-02 C Blue 1 2
5 2018-03-01 B Blue 1 2
</code></pre>
<p>Intended output (Red):</p>
<pre><code> Date Val Blue Red Count Red_cat
3 2018-01-03 D NaN Red 1 2
4 2018-02-01 A NaN Red 1 2
6 2019-01-02 C NaN Red 1 2
7 2019-01-03 D NaN Red 1 2
</code></pre>
|
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#boolean-indexing" rel="nofollow noreferrer"><code>boolean indexing</code></a> with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.copy.html" rel="nofollow noreferrer"><code>DataFrame.copy</code></a> to avoid <code>SettingWithCopyWarning</code>: if you modify values in the filtered DataFrame later, you will find that the modifications do not propagate back to the original data, and pandas raises a warning:</p>
<pre><code>def func(df, val):
df = df[df[val].eq(val)].copy()
df[f'{val}_cat'] = df['Count'] * 2
return df
blue = func(df, 'Blue')
print (blue)
Date Val Blue Red Count Blue_cat
0 2018-01-01 A Blue NaN 2 4
1 2018-01-01 B Blue NaN 2 4
2 2018-01-02 C Blue NaN 1 2
5 2018-03-01 B Blue NaN 1 2
red = func(df, 'Red')
print (red)
Date Val Blue Red Count Red_cat
3 2018-01-03 D NaN Red 1 2
4 2018-02-01 A NaN Red 1 2
6 2019-01-02 C NaN Red 1 2
7 2019-01-03 D NaN Red 1 2
</code></pre>
|
python|pandas
| 1
|
375,689
| 72,352,546
|
separating values between rows with pandas
|
<p>I want to separate values in "alpha" column like this</p>
<p>Start:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>alpha</th>
<th>beta</th>
<th>gamma</th>
</tr>
</thead>
<tbody>
<tr>
<td>A</td>
<td>1</td>
<td>0</td>
</tr>
<tr>
<td>A</td>
<td>1</td>
<td>1</td>
</tr>
<tr>
<td>B</td>
<td>1</td>
<td>0</td>
</tr>
<tr>
<td>B</td>
<td>1</td>
<td>1</td>
</tr>
<tr>
<td>B</td>
<td>1</td>
<td>0</td>
</tr>
<tr>
<td>C</td>
<td>1</td>
<td>1</td>
</tr>
</tbody>
</table>
</div>
<p>End:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>alpha</th>
<th>beta</th>
<th>gamma</th>
</tr>
</thead>
<tbody>
<tr>
<td>A</td>
<td>1</td>
<td>0</td>
</tr>
<tr>
<td>A</td>
<td>1</td>
<td>1</td>
</tr>
<tr>
<td>X</td>
<td>X</td>
<td>X</td>
</tr>
<tr>
<td>B</td>
<td>1</td>
<td>0</td>
</tr>
<tr>
<td>B</td>
<td>1</td>
<td>1</td>
</tr>
<tr>
<td>B</td>
<td>1</td>
<td>0</td>
</tr>
<tr>
<td>X</td>
<td>X</td>
<td>X</td>
</tr>
<tr>
<td>C</td>
<td>1</td>
<td>1</td>
</tr>
</tbody>
</table>
</div>
<p>Thanks for help <3</p>
|
<p>You can try</p>
<pre class="lang-py prettyprint-override"><code>out = (df.groupby('alpha')
.apply(lambda g: pd.concat([g, pd.DataFrame([['X', 'X', 'X']], columns=df.columns)]))
.reset_index(drop=True)[:-1])
</code></pre>
<pre><code>print(out)
alpha beta gamma
0 A 1 0
1 A 1 1
2 X X X
3 B 1 0
4 B 1 1
5 B 1 0
6 X X X
7 C 1 1
</code></pre>
|
python|pandas|dataframe
| 3
|
375,690
| 72,469,363
|
Replace only "," by "." in a String Python
|
<p>I'm using a dataset that contains a column "Streams" (dtype: object) and I just need to replace "," with "." so I can later use pandas.to_numeric() and convert the string to float64. Is there a way to replace only those
characters and keep the numbers?</p>
<p>Example: 48,633,449 to 48.633.449</p>
<p>Code:</p>
<pre><code>import pandas as pd
import numpy as np
dados = pd.read_csv("spotify_dataset.csv")
dados.dropna()
dados['Streams'].replace(",", ".")
dados['Streams'] = pd.to_numeric(dados['Streams'])
dados.head()
</code></pre>
<p>and got this:</p>
<blockquote>
<p><strong>ValueError: Unable to parse string "48,633,449" at position 0</strong></p>
</blockquote>
<p>[Error]</p>
<p><img src="https://i.stack.imgur.com/UGltH.png" alt="1" /></p>
|
<p>You are throwing away your <code>replace</code> since you are not assigning it to anything. Unless you explicitly use <code>inplace=True</code> arguments, pandas methods do not change the current instance of an object (Series, DataFrames).</p>
<p>You can provide the result of the replacement as the argument to the <code>to_numeric</code> function. Note that <code>Series.replace</code> matches whole values by default, so use the <code>.str</code> accessor for substring replacement; and since the commas here are thousands separators, stripping them out (rather than turning them into dots) is what lets <code>to_numeric</code> parse the values:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
import numpy as np

dados = pd.read_csv("spotify_dataset.csv")
dados = dados.dropna()
dados['Streams'] = pd.to_numeric(dados['Streams'].str.replace(",", "", regex=False))
dados.head()
</code></pre>
|
python|pandas|replace|dataset
| 1
|
375,691
| 72,305,984
|
Do LSTMs remember previous windows or is the hidden state reset?
|
<p>I am training an LSTM to forecast the next value of a timeseries. Let's say I have training data with the shape (2345, 95) and a total of 15 files with this data; this means that I have 2345 windows with 50% overlap between them (the timeseries was divided into windows). Each window has 95 timesteps. If I use the following model:</p>
<pre><code>input1 = Input(shape=(95, 1))
lstm1 = LSTM(units=100, return_sequences=False,
activation="tanh")(input1)
outputs = Dense(1, activation="sigmoid")(lstm1)
model = Model(inputs=input1, outputs=outputs)
model.compile(loss=keras.losses.BinaryCrossentropy(),
optimizer=tf.keras.optimizers.Adam(learning_rate=0.01))
</code></pre>
<p>I am feeding this data using a generator where it passes a whole file each time, therefore one epoch will have 15 steps. Now my question is, in a given epoch, for a given step, does the LSTM remember the previous window that it saw or is the memory of the LSTM reset after seeing each window? If it remembers the previous windows, then is the memory reset only at the end of an epoch?</p>
<p>I have seen similar questions like this <a href="https://stackoverflow.com/questions/38241410/tensorflow-remember-lstm-state-for-next-batch-stateful-lstm">TensorFlow: Remember LSTM state for next batch (stateful LSTM)</a> or <a href="https://datascience.stackexchange.com/questions/27628/sliding-window-leads-to-overfitting-in-lstm">https://datascience.stackexchange.com/questions/27628/sliding-window-leads-to-overfitting-in-lstm</a> but I either did not quite understand the explanation or I was unsure if what I wanted was explained. I'm looking for more of a technical explanation as to where in the LSTM architecture is the whole memory/hidden state reset.</p>
<p><strong>EDIT:</strong></p>
<ol>
<li>So from my understanding there are two concepts we can call "memory"
here: the weights that are updated through BPTT, and the hidden state
of the LSTM cell. For a given window of timesteps the LSTM can
remember what the previous timestep was; this is what the hidden
state is for, I think. Now the weight update does not directly
reflect memory, if I'm understanding this correctly.</li>
<li>The size of the hidden state, in other words how much the LSTM
remembers is determined by the batch size, which in this case is one
whole file, but other question/answers
(<a href="https://datascience.stackexchange.com/questions/27628/sliding-window-leads-to-overfitting-in-lstm">https://datascience.stackexchange.com/questions/27628/sliding-window-leads-to-overfitting-in-lstm</a> and <a href="https://stackoverflow.com/a/50235563/13469674">https://stackoverflow.com/a/50235563/13469674</a>) state that if we
have two windows, for instance [1,2,3] and [4,5,6], the LSTM does not
know that 4 comes after 3 because they are in different windows,
even though they belong to the same batch. So I'm still unsure how
exactly memory is maintained in the LSTM</li>
<li>It makes some sense that the hidden state is reset between windows when we look at the LSTM cell diagram. But then the weights are only updated after each step, so where does the hidden state come into play?</li>
</ol>
|
<p>What you are describing is called "Back Propagation Through Time", you can google that for tutorials that describe the process.</p>
<p>Your concern is justified in one respect and unjustified in another respect.</p>
<p>The LSTM is capable of learning across multiple training iterations (e.g. multiple 15 step intervals). This is because the LSTM state is being passed forward from one iteration (e.g. multiple 15 step intervals) to the next iteration. This is feeding information forward across multiple training iterations.</p>
<p>Your concern is justified in that the model's weights are only updated with respect to the 15 steps (plus any batch size you have). As long as 15 steps is long enough for the model to catch valuable patterns, it will generally learn a good set of weights that generalize well beyond 15 steps. A good example of this is the Shakespeare character-level language model described in Karpathy's "<a href="https://karpathy.github.io/2015/05/21/rnn-effectiveness/" rel="nofollow noreferrer">The unreasonable effectiveness of RNNs</a>".</p>
<p>In summary, the model is learning to create a good hidden state for the next step averaged over sets of 15 steps as you have defined. It is common that an LSTM will produce a good generalized solution by looking at data in these limited segments. Akin to batch training, but sequentially over time.</p>
<p>I might note that 100 is a more typical upper limit for the number of steps in an LSTM. At ~100 steps you start to see a <a href="https://imgur.com/gallery/vaNahKE" rel="nofollow noreferrer">vanishing gradient problem in which the earlier steps contribute nearly nothing to the gradient</a>.</p>
<p>Note that it is important to ensure you are passing the LSTM state forward from training step to training step over the course of an episode (any contiguous sequence). If this step was missed the model would certainly suffer.</p>
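<p>For reference, here is a minimal sketch of how state can be carried across batches in Keras, assuming you want the stateful behaviour described above; <code>stateful=True</code> requires a fixed batch size, and you are responsible for resetting the state at episode boundaries (the batch size below is only illustrative):</p>
<pre class="lang-py prettyprint-override"><code>import tensorflow as tf
from tensorflow.keras.layers import Input, LSTM, Dense
from tensorflow.keras.models import Model

batch_size = 1  # illustrative; must match the batch size you actually feed
input1 = Input(shape=(95, 1), batch_size=batch_size)  # fixed batch size is required for stateful LSTMs
lstm1 = LSTM(units=100, stateful=True, return_sequences=False, activation="tanh")(input1)
outputs = Dense(1, activation="sigmoid")(lstm1)
model = Model(inputs=input1, outputs=outputs)
model.compile(loss=tf.keras.losses.BinaryCrossentropy(),
              optimizer=tf.keras.optimizers.Adam(learning_rate=0.01))

# ...train on the consecutive windows of one file/episode, then:
model.reset_states()  # reset the carried hidden/cell state before the next episode
</code></pre>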
|
python|tensorflow|keras|lstm
| 1
|
375,692
| 72,348,873
|
Reading all the XML files to make dataframe
|
<p>I asked the question about reading the xml data to pandas dataframe</p>
<p><a href="https://stackoverflow.com/questions/72252937/nlp-using-xlm-dataset/72336520#72336520">NLP using XLM dataset</a></p>
<p>I got the following answer</p>
<pre><code>medlinecitation = pd.read_xml("Taxonomy_NLP/public_dat/trainset/17846141.xml", xpath=".//medlinecitation").dropna(axis=1)
abstract = pd.read_xml("Taxonomy_NLP/public_dat/trainset/17846141.xml", xpath=".//abstract")
dfa = pd.merge(
left=medlinecitation,
right=abstract,
how="outer",
left_index=True,
right_index=True,
).fillna(method="ffill")
</code></pre>
<p>The following output</p>
<pre><code> owner status pmid citationsubset otherid abstracttext
0 NLM MEDLINE 17846141 IM PMC2151464 Empirical use of beta-lactam antibiotics, the ...
</code></pre>
<p>I have the following files in my folder and want to read each file as above and build a data frame in which each row represents one file.
I can list all the files with the following loop, but I don't know how to build a data frame from all the files.</p>
<pre><code>for dirname, _, filenames in os.walk(path):
for filename in filenames:
print(os.path.join(dirname, filename))
</code></pre>
<pre><code>Taxonomy_NLP/public_dat/trainset/17846141.xml
Taxonomy_NLP/public_dat/trainset/10649814.xml
Taxonomy_NLP/public_dat/trainset/20091541.xml
Taxonomy_NLP/public_dat/trainset/11493721.xml
Taxonomy_NLP/public_dat/trainset/11505031.xml
Taxonomy_NLP/public_dat/trainset/14557142.xml
Taxonomy_NLP/public_dat/trainset/15174889.xml
Taxonomy_NLP/public_dat/trainset/1565551.xml
Taxonomy_NLP/public_dat/trainset/15159270.xml
Taxonomy_NLP/public_dat/trainset/12837416.xml
Taxonomy_NLP/public_dat/trainset/10629474.xml
</code></pre>
|
<p>Your code seems to be loading two different elements from the same XML file. You can create a function to do this which returns the new dataframe:</p>
<pre><code>def read_gct(path):
    medlinecitation = pd.read_xml(path, xpath=".//medlinecitation").dropna(axis=1)
abstract = pd.read_xml(path, xpath=".//abstract")
dfa = pd.merge(
left=medlinecitation,
right=abstract,
how="outer",
left_index=True,
right_index=True,
).fillna(method="ffill")
return dfa
</code></pre>
<p>You can use <a href="https://docs.python.org/3/library/pathlib.html#pathlib.Path.glob" rel="nofollow noreferrer">pathlib.Path.glob</a> to find all XML files under a path and load a dataframe for each one. Finally, you can concatenate all dataframes into a single one with <a href="https://pandas.pydata.org/docs/reference/api/pandas.concat.html" rel="nofollow noreferrer">pd.concat</a>:</p>
<pre><code>import pathlib

fileDfs = (read_gct(path) for path in pathlib.Path('root').glob('**/*.xml'))
final_df = pd.concat(fileDfs)
</code></pre>
|
python|pandas|xml|dataframe
| 1
|
375,693
| 72,486,821
|
Summarization with Huggingface: How to generate one word at a time?
|
<p>I am using a DistilBART for abstractive summarization. The method <a href="https://huggingface.co/docs/transformers/v4.19.2/en/main_classes/text_generation#transformers.generation_utils.GenerationMixin.generate" rel="nofollow noreferrer"><code>generate()</code></a> is very straightforward to use. However, it returns complete, finished summaries. <strong>What I want is, at each step, access the logits to then get the list of next-word candidates and choose based on my own criteria.</strong> Once chosen, continue with the next word and so on until the EOS token is produced.</p>
<p>I am aware that I can access the logits by doing <code>model(**input).logits[:, -1, :]</code>, but here the input would be the whole (encoded) text, so what exactly would these logits correspond to? The first generated token? The last?</p>
<p>Thank you for your answers!</p>
|
<p>For future reference, <strong>here is how it can be done</strong> (<em>note:</em> this is specific to encoder-decoder models, like BART):</p>
<p><strong>1. Initialization</strong></p>
<pre class="lang-py prettyprint-override"><code>import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
# Load model
tokenizer = AutoTokenizer.from_pretrained("sshleifer/distilbart-xsum-1-1")
model = AutoModelForSeq2SeqLM.from_pretrained("sshleifer/distilbart-xsum-1-1")
text = "..."
# Tokenize text
batch = tokenizer(text, return_tensors="pt")
</code></pre>
<br/>
<p><strong>2. Example 1: Summary generation with <em>greedy decoding</em> (no cache)</strong></p>
<pre class="lang-py prettyprint-override"><code>generated_sequence = torch.tensor([[tokenizer.sep_token_id]]) # initial token
# Generation loop
while True:
with torch.no_grad():
output = model(input_ids=batch["input_ids"], decoder_input_ids=generated_sequence)
next_token_logits = output.logits[:, -1, :]
next_token_scores = next_token_logits.softmax(dim=-1)
# Take token with highest probability
next_token = next_token_scores.argmax().unsqueeze(0).unsqueeze(0)
# Append token to generated sequence
generated_sequence = torch.cat((generated_sequence, next_token), dim=1)
# Stop if EOS token generated
if (generated_sequence.squeeze()[-1] == tokenizer.eos_token_id):
break
summary = tokenizer.batch_decode(generated_sequence, skip_special_tokens=True)
</code></pre>
<br/>
<p><strong>3. Example 2: Summary generation with <em>top-k, top-p sampling & temperature</em> (no cache)</strong></p>
<pre class="lang-py prettyprint-override"><code>from transformers.generation_utils import top_k_top_p_filtering

temperature = 0.7  # assumed value; set the sampling temperature you want
generated_sequence = torch.tensor([[tokenizer.sep_token_id]]) # initial token
# Generation loop
while True:
with torch.no_grad():
output = model(input_ids=batch["input_ids"], decoder_input_ids=generated_sequence)
logits = output.logits[:, -1, :] / temperature # apply temperature
filtered_logits = top_k_top_p_filtering(logits=logits, top_k=4, top_p=0.7)
probabilities = filtered_logits.softmax(dim=-1)
# Sample next token
next_token = torch.multinomial(probabilities, 1)
# Append token to generated sequence
generated_sequence = torch.cat((generated_sequence, next_token), dim=1)
# Stop if EOS token generated
if (generated_sequence.squeeze()[-1] == tokenizer.eos_token_id):
break
summary = tokenizer.batch_decode(generated_sequence, skip_special_tokens=True)
</code></pre>
<p>(Other <a href="https://huggingface.co/blog/how-to-generate" rel="nofollow noreferrer">generating strategies</a> would be analogous).</p>
<br/>
<p><strong>4. Using cache</strong></p>
<p>Since the input to the encoder (i.e., the text to be summarized) is always the same, we can cache it to greatly speed up the generation.</p>
<pre class="lang-py prettyprint-override"><code>generated_sequence = torch.tensor([[tokenizer.sep_token_id]]) # initial token
input_ids = batch["input_ids"]
past_key_values = None
with torch.no_grad():
output = model(
input_ids=input_ids,
decoder_input_ids=generated_sequence,
past_key_values=past_key_values
)
encoder_outputs=output.encoder_last_hidden_state
# Generation loop
while True:
# From here on, use cached attention
past_key_values = output.past_key_values
next_token_logits = output.logits[:, -1, :]
next_token_scores = next_token_logits.softmax(dim=-1)
next_token = next_token_scores.argmax().unsqueeze(0).unsqueeze(0) # greedy decoding
generated_sequence = torch.cat((generated_sequence, next_token), dim=1)
# Stop if EOS token generated
if (generated_sequence.squeeze()[-1] == tokenizer.eos_token_id):
break
with torch.no_grad():
output = model(
decoder_input_ids=torch.tensor([[generated_sequence.squeeze()[-1]]]),
past_key_values=past_key_values,
encoder_outputs=encoder_outputs
)
summary = tokenizer.batch_decode(generated_sequence, skip_special_tokens=True)
</code></pre>
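<p>As a side note, if you only need to inspect the per-step logits rather than intervene in the decoding loop, <code>generate()</code> can also return them directly (a sketch; exact argument availability depends on your transformers version):</p>
<pre class="lang-py prettyprint-override"><code>outputs = model.generate(
    batch["input_ids"],
    return_dict_in_generate=True,
    output_scores=True,
)
# outputs.sequences holds the generated token ids,
# outputs.scores is a tuple with one logits tensor per generated step
</code></pre>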
|
huggingface-transformers|summarization|huggingface
| 0
|
375,694
| 72,295,381
|
Pandas dataframe Group by Time Interval and then ID with sum of Counts
|
<p>I'm trying to group a dataset by time first and then group by ID using pandas, while summing the counts. My data looks something along the lines of this:</p>
<pre><code>id,selected time,count
1,5/16/2022 3:58:06 PM,1
1,5/16/2022 3:55:10 PM,1
2,5/16/2022 3:52:01 PM,2
3,5/16/2022 3:19:33 PM,1
3,5/16/2022 3:15:04 PM,1
4,5/16/2022 3:12:38 PM,1
1,5/16/2022 2:42:58 PM,1
1,5/16/2022 2:26:13 PM,1
2,5/16/2022 2:21:02 PM,1
5,5/16/2022 2:18:21 PM,1
4,5/16/2022 2:15:18 PM,1
</code></pre>
<p>I'm trying to get my data to look something along the lines of this:</p>
<pre><code>id,5/16/2022 2:00:00 PM,5/16/2022 3:00:00 PM
1,2,2
2,2,1
3,2,0
4,1,1
5,1,0
</code></pre>
<p>Of course, this is a subset of the data, and the whole dataset encompasses many more ids over a 24 hour time period.</p>
<p>Among many other methods, I've tried this:</p>
<pre><code>df = df.groupby('id') \
    .resample('60min', on='selected time')['count'] \
.sum() \
.unstack(1, fill_value=0) \
.reset_index(level=0)
</code></pre>
<p>But this method does not return the correct values (as it is grouping by id first and then time interval), and other methods I've tried either throw an error or also have the same problem.</p>
<p>I am new to pandas, so I am still learning. Any help would be greatly appreciated, Thanks!</p>
|
<p>First, a new column is created where the minutes and seconds are set to zero by flooring to the hour. Then <code>pivot_table</code> gives the required result:</p>
<pre><code>df['selected_time_2'] = df['selected time'].astype('datetime64').dt.floor('h').dt.strftime('%m/%d/%YY %I:%M:%S %p')
df.pivot_table(index='id',columns='selected_time_2', values='count', aggfunc='sum').fillna('').reset_index()
</code></pre>
<pre><code>selected_time_2 id 05/16/2022Y 02:00:00 PM 05/16/2022Y 03:00:00 PM
0 1 2 2
1 2 1 2
2 3 2
3 4 1 1
4 5 1
</code></pre>
<p>EDIT:
To show the columns in 12-hour format:</p>
<pre><code>import datetime
df['selected_time_2'] = df['selected time'].astype('datetime64').dt.floor('h') #.dt.strftime('%m/%d/%Y %I:%M:%S %p').astype('datetime64')
df2 = df.pivot_table(index='id',columns='selected_time_2', values='count', aggfunc='sum').fillna('').reset_index()
df2.columns = [col.strftime('%m/%d/%Y %I:%M:%S %p')
if (isinstance(col, datetime.date) )
else col
for col in df2.columns]
df2
</code></pre>
<p>RESULT:</p>
<pre><code> id 05/16/2022 11:00:00 AM 05/16/2022 12:00:00 PM 05/16/2022 02:00:00 PM 05/16/2022 03:00:00 PM
0 1 1.0 1.0 2.0
1 2 1.0 2.0
2 3 2.0
3 4 1.0 1.0
4 5 1.0
</code></pre>
<p><code>aggfunc</code> was added to get the sum; it was missed the first time around in the response.
PS: I'm not sure how to format the result properly and am open to advice on it.</p>
|
python|pandas
| 0
|
375,695
| 72,276,265
|
'numpy.float64' object is not callable with numpy and pandas with custom function
|
<p>I have code of the form:</p>
<pre><code>import pandas as pd
import numpy as np
def StrdErr(vec):
return np.std(vec)/np.sqrt(len(vec))
df2 = pd.DataFrame(np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]]), columns=['a', 'b', 'c'])
for idx_q in range(0, df2.shape[0]):
StrdErr = StrdErr(np.array(df2.loc[idx_q, :]))
</code></pre>
<p>with the following error message:</p>
<pre><code>Traceback (most recent call last):
File "debug.py", line 11, in <module>
StrdErr = StrdErr(np.array(df2.loc[idx_q, :]))
TypeError: 'numpy.float64' object is not callable
</code></pre>
<p>I saw a <a href="https://stackoverflow.com/questions/19828212/typeerror-numpy-float64-object-is-not-callable">similar question with an answer</a> but could not solve the problem.
What am I doing wrong?</p>
|
<p>This looks like a very complicated way to compute:</p>
<pre><code>df2.std(1, ddof=0).div(np.sqrt(df2.shape[1]))
</code></pre>
<p>output:</p>
<pre><code>0 0.471405
1 0.471405
2 0.471405
dtype: float64
</code></pre>
<p>The error itself occurs because <code>StrdErr = StrdErr(...)</code> rebinds the function name to its float result, so on the next iteration <code>StrdErr</code> is no longer callable; assign the result to a different name.</p>
<h5>even if it is inefficient, to fix your loop use:</h5>
<pre><code>out = []
for idx_q in range(0, df2.shape[0]):
out.append(StrdErr(np.array(df2.loc[idx_q, :])))
print(out)
# [0.47140452079103173, 0.47140452079103173, 0.47140452079103173]
</code></pre>
|
python|pandas|numpy|vector
| 2
|
375,696
| 72,304,677
|
Cleaning data using pandas and excel
|
<p>I have a massive data frame which I exported as an excel file to fix up spelling by removing duplicates and creating a column with all words corrected. Now I want to reimport the corrected data and replace the old values with the new ones so in the data frame every instance of 'Ne York' would become 'New York'. Here Location is the value in the data frame and Final Location is the edited one in excel.</p>
<pre><code>Location Final Location
New Yo New York
Austin Austin
Londn London
Pais Paris
Berlin Berlin
Mosscow Moscow
Varsaw Warsaw
</code></pre>
<p>Any help would be greatly appreciated.</p>
|
<p>You can load the new Excel file as a dataframe as follows.</p>
<pre><code>import pandas as pd
df = pd.read_excel(r'Path/Filename.xlsx')
print(df)
</code></pre>
<p>If you want to replace the Location column in the old dataframe with the Final Location column, you can do: <code>df_old['Location'] = df['Final Location']</code></p>
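<p>If the rows of the corrected sheet do not line up one-to-one with the original dataframe, mapping the old spellings to the corrected ones is safer (a sketch, assuming the sheet has the columns <code>Location</code> and <code>Final Location</code> from the question; the filename is only a placeholder):</p>
<pre><code>import pandas as pd

corrections = pd.read_excel(r'Path/Filename.xlsx')
mapping = dict(zip(corrections['Location'], corrections['Final Location']))

# replace every known misspelling; values without a correction are kept unchanged
df_old['Location'] = df_old['Location'].map(mapping).fillna(df_old['Location'])
</code></pre>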
|
pandas|dataframe|data-cleaning|data-wrangling
| 0
|
375,697
| 72,231,902
|
Reversing row values in a panda
|
<p>I'm having a mind wipe; I cannot for the life of me figure out a simple way of reversing this input to the output. Any help would be appreciated.</p>
<p>input:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>level1</th>
<th>level2</th>
<th>level3</th>
<th>level4</th>
</tr>
</thead>
<tbody>
<tr>
<td>4</td>
<td>2</td>
<td>1</td>
<td>NaN</td>
</tr>
<tr>
<td>2</td>
<td>1</td>
<td>NaN</td>
<td>NaN</td>
</tr>
</tbody>
</table>
</div>
<p>output:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>level1</th>
<th>level2</th>
<th>level3</th>
<th>level4</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>2</td>
<td>4</td>
<td>NaN</td>
</tr>
<tr>
<td>1</td>
<td>2</td>
<td>NaN</td>
<td>NaN</td>
</tr>
</tbody>
</table>
</div>
<p>Thanks</p>
|
<p>Here is one way, using reindexing:</p>
<pre><code>(df
.apply(lambda s: s.dropna()[::-1].reset_index(drop=True), axis=1)
.reindex(columns=range(df.shape[1]))
.set_axis(df.columns, axis=1)
)
</code></pre>
<p>output:</p>
<pre><code> level1 level2 level3 level4
0 1.0 2.0 4.0 NaN
1 1.0 2.0 NaN NaN
</code></pre>
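<p>If performance matters on a large frame, a vectorized alternative with numpy is possible (a sketch, assuming all values are numeric so that NaN can serve as the filler):</p>
<pre><code>import numpy as np

a = df.to_numpy(dtype=float)[:, ::-1]          # reverse every row
mask = ~np.isnan(a)
out = np.full(a.shape, np.nan)
out[np.sort(mask, axis=1)[:, ::-1]] = a[mask]  # left-justify the remaining values
result = pd.DataFrame(out, columns=df.columns)
</code></pre>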
|
python|pandas|dataframe
| 1
|
375,698
| 72,375,060
|
ValueError: Expected 2D array, got 1D array instead/ Signal Processing
|
<p>Can someone help fix this error? I am a beginner and am finding it difficult to figure out how to fix it.</p>
<p>This is the error I am getting:
ValueError: Expected 2D array, got 1D array instead:
array=[ 282 561 837 ... 649442 649701 649957].
Reshape your data either using array.reshape(-1, 1) if your data has a single feature or array.reshape(1, -1) if it contains a single sample.</p>
<p><a href="https://i.stack.imgur.com/dHL92.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/dHL92.png" alt="enter image description here" /></a></p>
<p><a href="https://i.stack.imgur.com/GF23T.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/GF23T.png" alt="enter image description here" /></a></p>
<pre><code>class MyDataset(Dataset):
def __init__(self, patient_ids,bih2aami=True):
self.patient_ids = patient_ids # list of patients ID
#self.directory=""
self.nb_qrs = 99 #number of beats
self.idx_tuples = flatten([[(patient_idx, rpeak_idx) for rpeak_idx in range(self.nb_qrs)]
for patient_idx in range(len(patient_ids))])
self.bih2aami=bih2aami
def __len__(self):#returns the size of the data set.
return len(self.idx_tuples) # length of the dataset
def __getitem__(self, idx): # get one sample from the dataset
patient_idx, rpeak_idx = self.idx_tuples[idx]
patient_id = self.patient_ids[patient_idx]
file = self.directory + patient_id
signal, normal_qrs_pos = get_signal(file)
# Create a range of windows positions
if (idx//2 == idx/2):
qrs_pos = normal_qrs_pos[rpeak_idx]
else:
qrs_pos = normal_qrs_pos[rpeak_idx] + randint(-round(.25*fs),round(.25*fs))
#win_pos = normal_qrs_pos # FIND CORRECT WIN_POS FOR THIS patient
beat, label = extract_beat(signal,qrs_pos,normal_qrs_pos)
if (label == 1):
print("==== FOUND ONE MATCHING QRS === pos = ", qrs_pos)
else:
print("==== NO MATCH === pos = ", qrs_pos)
X, y = torch.tensor(beat).float(), torch.tensor(label).float()
print(y.size())
return X,y
</code></pre>
<p>The code for beat extraction</p>
<pre><code>def extract_beat(signal, win_pos, qrs_positions, win_msec=40, fs=360, start_beat=36, end_beat=108):
"""
win_pos position at which you place the window of your beat
qrs_positions (list) the qrs indices from the annotations (read them from the atr file)-->obtained from annotation.sample
win_msec in milliseconds
"""
#extract signal
signal = np.array(signal)
#print(signal.shape)
#beat_array = np.zeros(start_beat+end_beat)#number of channels
start = int(max(win_pos-start_beat,0))
stop = start+start_beat+end_beat+1
#print(beat_array.shape,signal.shape)
beat = signal[start:stop]
#print(" =========== BEAT = ",len(beat))
#compute the nearest neighbor of win_pos among qrs_positions
tolerance = (fs*win_msec)//1000 #samples at a distance <tolerance are matched
nbr = NearestNeighbors(n_neighbors=1).fit(qrs_positions)
distances, indices = nbr.kneighbors(np.array([[win_pos]]).reshape(-1,1))
#label
if distances[0][0] <= tolerance:
label = 1
else:
label = 0
print(distances[0],tolerance,label)
return beat, label
</code></pre>
|
<p>As the sklearn docs say in <a href="https://scikit-learn.org/stable/modules/generated/sklearn.neighbors.NearestNeighbors.html#sklearn.neighbors.NearestNeighbors.fit" rel="nofollow noreferrer">https://scikit-learn.org/stable/modules/generated/sklearn.neighbors.NearestNeighbors.html#sklearn.neighbors.NearestNeighbors.fit</a>,</p>
<p>you should send a 2D array (of shape (n_samples, n_features)) to the fit method.</p>
<p>And, as the error message suggests, you can just reshape the array:</p>
<pre><code>#compute the nearest neighbor of win_pos among qrs_positions
tolerance = (fs*win_msec)//1000 #samples at a distance <tolerance are matched
nbr = NearestNeighbors(n_neighbors=1).fit(qrs_positions.reshape(-1,1))
distances, indices = nbr.kneighbors(np.array([[win_pos]]).reshape(-1,1))
</code></pre>
|
python|arrays|numpy|pytorch|valueerror
| 0
|
375,699
| 72,328,963
|
Combining two indexes in a Pandas Dataframe
|
<p><a href="https://i.stack.imgur.com/1bEtb.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/1bEtb.png" alt="enter image description here" /></a></p>
<p>I have the above dataframe and I would like to combine the two indexes so that only one remains and it is the addition of the indexes.</p>
|
<p>Take the two rows of the dataset and apply the logic below:</p>
<pre class="lang-py prettyprint-override"><code>list1 = [1,0,1,0,1]
list2 = [0,2,0,2,0]
list3 = [x or y for x, y in zip(list1, list2)]
print(list3)
</code></pre>
<p>Output:
<code>[1, 2, 1, 2, 1]</code></p>
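<p>If the two rows already sit in a single DataFrame, the same combination can be done in pandas directly (a sketch, assuming absent values are stored as 0 so that adding the rows is equivalent to the <code>or</code> above):</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd

df = pd.DataFrame([[1, 0, 1, 0, 1],
                   [0, 2, 0, 2, 0]])

combined = df.sum(axis=0)   # element-wise sum collapses the two rows into one
print(combined.tolist())    # [1, 2, 1, 2, 1]
</code></pre>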
|
python|pandas
| 0
|