Unnamed: 0
| id
| title
| question
| answer
| tags
| score
|
|---|---|---|---|---|---|---|
3,200
| 52,935,668
|
What will be the appropriate separator?
|
<p>I have a text file which has below structure:</p>
<pre><code>>hsa:9934 K04299 purinergic receptor P2Y, G protein-coupled
MINSTSTQPPDESCSQNLLITQQIIPVLYCMVFIAGILLNGVSGWIFFYVPSSKSFIIYL
KNIVIADFVMSLTFPFKILGDSGLGPWQLNVFVCRVSAVLFYVNMYVSIVFFGLISFDRY
>hsa:9934 K04299 purinergic receptor P2Y, G protein-coupled
MINSTSTQPPDESCSQNLLITQQIIPVLYCMVFIAGILLNGVSGWIFFYVPSSKSFIIYL
KNIVIADFVMSLTFPFKILGDSGLGPWQLNVFVCRVSAVLFYVNMYVSIVFFGLISFDRY
</code></pre>
<p>I need to load and convert this file into the tabular structure below:</p>
<pre><code>--------------------------------------------------------------
|>hsa:9934 K04299 purinergic receptor P2Y, G protein-coupled |
|MINSTSTQPPDESCSQNLLITQQIIPVLYCMVFIAGILLNGVSGWIFFYVPSSKSFIIYL|
|KNIVIADFVMSLTFPFKILGDSGLGPWQLNVFVCRVSAVLFYVNMYVSIVFFGLISFDRY|
--------------------------------------------------------------
|>hsa:9934 K04299 purinergic receptor P2Y, G protein-coupled |
|MINSTSTQPPDESCSQNLLITQQIIPVLYCMVFIAGILLNGVSGWIFFYVPSSKSFIIYL|
|KNIVIADFVMSLTFPFKILGDSGLGPWQLNVFVCRVSAVLFYVNMYVSIVFFGLISFDRY|
--------------------------------------------------------------
</code></pre>
<p>I tried the code below:</p>
<pre><code>dataset = pd.read_csv(path, sep = ">")
</code></pre>
<p>But it didn't work as I expected!</p>
<p>How can I get the exact format?</p>
|
<p>You could use <code>str.split('>')</code> so that you end up with a list entry for each record,
unless <code>'>'</code> might appear in the sequences themselves.</p>
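<p>A minimal sketch of that idea (the helper name and sample text are illustrative; it assumes records are separated by lines starting with '>'):</p>

```python
import pandas as pd

def records_to_frame(text):
    """Split FASTA-like text into one row per record: each '>' header line
    plus its following sequence lines become a single multi-line cell."""
    records = []
    for chunk in text.split("\n>"):
        chunk = chunk.strip()
        if not chunk:
            continue
        if not chunk.startswith(">"):
            chunk = ">" + chunk  # re-add the marker consumed by the split
        records.append(chunk)
    return pd.DataFrame({"record": records})

sample = (">hsa:9934 K04299 purinergic receptor\n"
          "MINSTSTQPPDE\n"
          ">hsa:9934 K04299 purinergic receptor\n"
          "KNIVIADFVMSL\n")
df = records_to_frame(sample)
```

<p>Each row then holds one full record; you can split a cell on <code>'\n'</code> afterwards if you need the header and sequence lines separately.</p>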
|
python|pandas
| 2
|
3,201
| 53,286,009
|
Filling in missing data Python
|
<p>I have a lot of missing data in between years and months of my dataframe that looks like:</p>
<pre><code>Year  Month  State  Value
1969     12     NJ   5500
1969     12     NY   6418
1970      8     IL  10093
1970     12     WI   6430
1970      7     NY   6140
1971     10     IL  10093
1971      6     MN   6850
1971      3     SC   7686
1972     12     FL   8772
2016      1     NJ   9000
</code></pre>
<p>For each state I need to fill in all the missing months from the year its values began until 2018, but the data that exists is mostly between 1969 and 1990, so I just need to fill in the blanks.</p>
<p>The desired output (for NJ but needed for all states) would be:</p>
<pre><code>Year  Month  State  Value
1969     12     NJ   5500
1970      1     NJ   5500
1970      2     NJ   5500
1970      3     NJ   5500
1970      4     NJ   5500
1970      5     NJ   5500
1970      6     NJ   5500
.
.
1970     12     NJ   5500
.
.
2010      1     NJ   5500
2010      2     NJ   5500
2010      3     NJ   5500
.
.
2018      1     NJ   9000
</code></pre>
<p>I've tried turning the months into categorical values ranging from 1 to 12, regrouping and resetting the index, and then using <code>ffill</code> to propagate the values into those newly made index levels, like:</p>
<pre><code>df['Month'] = pd.Categorical(df['Month'], categories=range(1, 13))
df = df.groupby(['State', 'Year', 'Month']).first().reset_index()
df['Value'] = df.groupby('Region')['Value'].ffill()
</code></pre>
<p>But this method gives me NaN values like:</p>
<pre><code>State  Year  Month   Value
NJ     1969     12  5500.0
NJ     1970      1     nan
NJ     1970      2     nan
NJ     1970      3     nan
.
.
NJ     2016      1  9000.0
</code></pre>
<p>I can't understand why, since this method has worked before when I've tested it on other data with actual results.</p>
|
<p>Sorry to all those who took time to correct this. It was a simple matter of accidentally grouping by a false column.</p>
<p>I had previously created a <code>'Region'</code> column based on a collection of the State variables, and that column was being referenced rather than the States themselves.</p>
<p>So to clarify:</p>
<pre><code>df['Value'] = df.groupby('Region')['Value'].ffill()
</code></pre>
<p>Needs to be changed into:</p>
<pre><code>df['Value'] = df.groupby('State')['Value'].ffill()
</code></pre>
<p>This method works correctly.</p>
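<p>For reference, here is a minimal end-to-end sketch of the corrected pipeline on toy data (the values are made up; <code>observed=False</code> is passed explicitly so the categorical months expand to all 12):</p>

```python
import pandas as pd

df = pd.DataFrame({
    "Year":  [1969, 1970, 1970],
    "Month": [12, 1, 3],
    "State": ["NJ", "NJ", "NY"],
    "Value": [5500, None, 6418],
})

# make Month categorical so every (State, Year) group expands to months 1-12
df["Month"] = pd.Categorical(df["Month"], categories=range(1, 13))
df = df.groupby(["State", "Year", "Month"], observed=False).first().reset_index()

# forward-fill per state (not per 'Region')
df["Value"] = df.groupby("State")["Value"].ffill()
```

<p>After this, every (State, Year) pair carries a full 1-12 month range, and <code>ffill</code> propagates the last known value within each state.</p>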
|
python|python-3.x|pandas|dataframe|missing-data
| 1
|
3,202
| 65,905,177
|
Retrieving and printing a value from an Access database
|
<p>I'm trying to retrieve and print a row from an Access database. I want the user to input an ID and a field then a value to be printed.</p>
<p>This is my code so far...</p>
<pre><code>import pypyodbc
import pandas
conn = pypyodbc.connect(r'Driver={Microsoft Access Driver (*.mdb, *.accdb)};DBQ=C:\Users\Operator\Documents\T31_DB.accdb;')
cursor = conn.cursor()
cursor.execute('SELECT * FROM [TABLE_SIGNAL_ID]')
data = cursor.fetchall()
df = pandas.DataFrame(data)
dfi = df.set_index([0], drop=False)
ID = raw_input("enter signal ID:")
ID = int(ID)
res = dfi.iloc[[ID]]
print res
</code></pre>
<p>Is there a way I can keep the headings from Access so I can pull specific items from the row? I don't want to rename the columns by hand because there are too many.</p>
<p>Any help appreciated.</p>
|
<p>I managed to do it, but using SQL!</p>
<pre><code>import pandas as pd
import pypyodbc
import sqlalchemy
conn = pypyodbc.connect(r'Driver={Microsoft Access Driver (*.mdb, *.accdb)};DBQ=C:\Users\Operator\Documents\T31_DB.accdb;')
df = pd.read_sql('SELECT * FROM [TABLE_SIGNAL_ID]', conn)
dfi = df.set_index("signal_id", drop=False)
ID = raw_input("enter signal ID:")
res = dfi.loc[ID, ["signal_id", "abname", "system_id"]]
print res
</code></pre>
|
pandas|python-2.7|ms-access|pypyodbc
| 0
|
3,203
| 65,803,087
|
How to add "OTHER" class in Neural Network?
|
<p>I have to classify between Real, Fake and Other images, but I only have a dataset of Real and Fake faces. How do I add an 'other' class, i.e. neither a real nor a fake face?</p>
<p>This is how I loaded my dataset</p>
<pre><code>TRAINING_DIR = "Dataset\Training Data"
train_datagen = ImageDataGenerator()
train_generator = train_datagen.flow_from_directory(TRAINING_DIR,
                                                    batch_size=16,
                                                    target_size=(300, 300))
</code></pre>
<p>and this is my output</p>
<pre><code>Found 1944 images belonging to 2 classes.
</code></pre>
|
<blockquote>
<ol>
<li>Real Face</li>
<li>Fake Face</li>
<li>Other Object</li>
</ol>
</blockquote>
<blockquote>
<p>There is this machine learning competition and they told us to add "other" class. and they didn't provide data, so that's why I was asking</p>
</blockquote>
<p>Does this mean you are not allowed to use any additional data? If you can, take some other images that are not faces. Learn a second, separate model <code>M2</code> that has two classes: <code>FACE</code> and <code>OTHER</code>. For this model, label all of your face images (all real and fake ones together) as <code>FACE</code>.</p>
<p>Train your original model <code>M1</code> the way you are doing already, with the two classes <code>REAL</code> and <code>FAKE</code>.</p>
<p>After training those two models, follow a decision process such as this one:</p>
<pre><code>For an input image `I`:
    Does `M2` predict that the input is a `FACE`?
    |-- Yes: Does `M1` predict the image is `REAL`?
             |-- Yes: Output "real image".
             |-- No:  Output "fake image".
    |-- No: Output "other".
</code></pre>
<hr />
<p>If you cannot use any additional data, try Andrey's answer or look into methods that can detect out-of-distribution inputs.</p>
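<p>The decision process above can be sketched as a small function combining the two models; the predict callables and label strings here are placeholders, not actual Keras API:</p>

```python
def classify(image, m1_predict, m2_predict):
    """m2_predict returns 'FACE' or 'OTHER'; m1_predict returns 'REAL' or 'FAKE'."""
    if m2_predict(image) != "FACE":
        return "other"
    if m1_predict(image) == "REAL":
        return "real image"
    return "fake image"

# toy stand-ins for the two trained models
m1 = lambda img: "REAL" if img == "real_face" else "FAKE"
m2 = lambda img: "FACE" if img.endswith("face") else "OTHER"
```

<p>In practice <code>m1_predict</code> and <code>m2_predict</code> would wrap <code>model.predict</code> plus an argmax over the class probabilities.</p>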
|
tensorflow|keras|deep-learning|neural-network|conv-neural-network
| 1
|
3,204
| 3,069,820
|
Ironpython call numpy problem
|
<p>Ironpython 2.6,
python 2.6.5,
numpy,
SciPy</p>
<pre>
import sys
sys.path.append(r'D:\Python26\dll')
sys.path.append(r'D:\Python26\Lib')
sys.path.append(r'D:\Python26\Lib\site-packages')

>>> import numpy
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "D:\Python26\Lib\site-packages\numpy\__init__.py", line 132, in <module>
  File "D:\Python26\Lib\site-packages\numpy\add_newdocs.py", line 9, in <module>
  File "D:\Python26\Lib\site-packages\numpy\lib\__init__.py", line 4, in <module>
  File "D:\Python26\Lib\site-packages\numpy\lib\type_check.py", line 8, in <module>
  File "D:\Python26\Lib\site-packages\numpy\core\__init__.py", line 5, in <module>
ImportError: No module named multiarray</pre>
<p>What's wrong?
Thanks.</p>
|
<p>From the comments, it looks like <a href="https://stackoverflow.com/questions/3069820/ironpython-call-numpy-problem#comment3161202_3069820">Giles' answer</a> did the trick:</p>
<blockquote>
<p>From looking at the IronPython source, it looks like you'll need to set LanguageSetup.Options["Frames"] = ScriptingRuntimeHelpers.True when you're setting up hosting.</p>
</blockquote>
|
python|ironpython|numpy
| 0
|
3,205
| 2,674,437
|
Adding a numpy array to a scipy.sparse.dok_matrix
|
<p>I have a <code>scipy.sparse.dok_matrix</code> (dimensions m x n) and want to assign a flat numpy array of length m to a column.</p>
<pre><code>for col in xrange(n):
    dense_array = ...
    dok_matrix[:, col] = dense_array
</code></pre>
<p>However, this code raises an exception in <code>dok_matrix.__setitem__</code> when it tries to delete a non-existent key (<code>del self[(i,j)]</code>).</p>
<p>So, for now I am doing this the inelegant way:</p>
<pre><code>for col in xrange(n):
    dense_array = ...
    for row in dense_array.nonzero():
        dok_matrix[row, col] = dense_array[row]
</code></pre>
<p>This <em>feels</em> very inefficient.
So, what is the most efficient way of doing this?</p>
<p>Thanks!</p>
|
<p>I'm surprised that your inelegant way doesn't have the same problems as the slice way. This looks like a bug to me upon looking at the SciPy code. When you try to set a certain row and column in a <code>dok_matrix</code> to zero when it is already zero, there will be an error because it tries to delete the value at that row and column without checking whether it exists.</p>
<p>In answer to your question, what you are doing in your inelegant way is exactly what the <code>__setitem__</code> method does currently with your elegant method (after a couple of isinstance checks and what not). If you want to use the elegant way, you can fix the bug I mentioned in your own Scipy package by opening up dok.py in <code>Lib/site-packages/scipy/sparse/</code> and changing line 222 from</p>
<pre><code>if value==0:
</code></pre>
<p>to</p>
<pre><code>if value==0 and self.has_key((i,j)):
</code></pre>
<p>Then you can use the elegant way and it should work just fine. I went to submit a bug fix, but it is already fixed for the next version, and this is the way it was fixed.</p>
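<p>For illustration, the nonzero-only workaround from the question can be written as follows (a small sketch; on later SciPy versions the direct column assignment also works without patching):</p>

```python
import numpy as np
from scipy.sparse import dok_matrix

m, n = 4, 3
mat = dok_matrix((m, n))
dense_array = np.array([0.0, 2.5, 0.0, 7.0])
col = 1

# store only the nonzero entries, so __setitem__ never deletes a missing key
for row in np.flatnonzero(dense_array):
    mat[row, col] = dense_array[row]
```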
|
python|numpy|scipy|sparse-matrix
| 2
|
3,206
| 63,421,750
|
Pytorch Custom data loading with HTTPS is very slow
|
<p>I tried implementing a custom data loader that makes a web request and returns a sample. The purpose of the program is to see whether this idea would be faster than the original data loader. My web server code is run with</p>
<pre><code>srun -n24 --mem = 12g python web.py
</code></pre>
<p>Which will then create 24 "workers" that run in the cluster. Then each worker will write its portname to a file to make itself known to the data loader that he exist. So, when the dataloader is called in the training loop. The data loader selects a random server from the files and send them a web request with an index. The web server will then load the sample and do augmentation and return via http response. From my view, i thought it would be faster than the original data loader as, each data loader worker would send a request to the webserver and get a sample. Thus, distributing data to different server so they load the images faster.</p>
<p>However, when i do a comparison with original data using COCO dataset. The original data loader takes 743.820 sec to complete loading an epoch while my custom data loader takes 1503.26 sec to complete. I couldn't figure out which part of my code is taking a long time, so i would like to ask for assistance. Please if my explaination is bad/not great please let me know. Any help is appreciated. Thankyou.</p>
<p>The following the code for starting webserver:</p>
<pre><code>class PytorchDataHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        get_param = self.path
        get_param = parse_qs(urlparse(get_param).query)
        batch_list = [[], []]
        c_batches = []
        index = get_param['index']
        if index:
            for data in index:
                result = imagenet_data[int(data)]
                batch_list[0].append(result[0])
                batch_list[1].append(result[1])
            c_batches.append(batch_list)
            torch.save(batch_list, self.wfile)
        else:
            write_log('Empty Parameter')

def main():
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    hostname = socket.gethostname()
    n_hostname = hostname.split(".")
    # Bind to random port
    sock.bind(('0.0.0.0', 0))
    # Get Port Number
    PORT = int(sock.getsockname()[1])
    current_dir = os.getcwd()
    create_dir = os.path.join(current_dir, r'worker_file')
    #filename = create_dir + '/' + str(n_hostname[0]) + '.cvl-tengig:' + str(PORT)
    filename = create_dir + '/' + str(n_hostname[0]) + ':' + str(PORT)
    os.makedirs(os.path.dirname(filename), exist_ok=True)
    open_file = open(filename, 'w')
    open_file.write(str(n_hostname[0]) + ':' + str(PORT))
    open_file.close()
    try:
        SERVER = HTTPServer(('', PORT), PytorchDataHandler)
        SERVER.serve_forever()
    except KeyboardInterrupt:
        print('Shutting down server, ^C')
        os.remove(filename)
        SERVER.socket.close()

if __name__ == '__main__':
    main()
</code></pre>
<p>The code for custom data loader:</p>
<pre><code>class DistData(Dataset):
    def __init__(self, data, transform=None):
        self.data = data
        # Get file path
        current_dir = os.getcwd()
        create_dir = os.path.join(current_dir, r'worker_file')
        # Get all items in the folder
        self.arr = os.listdir(create_dir)
        self.selected = []

    def __getitem__(self, index):
        # Select a random server
        random_server = random.choice(self.arr)
        # Remove selected server from the server list
        self.arr.remove(random_server)
        # Append selected server to the selected list
        self.selected.append(random_server)
        return self.post_request(index, random_server)

    def __len__(self):
        return len(self.data)

    def post_request(self, index, random_server):
        params = {'index': index}
        url = 'http://' + random_server + '/get'
        r = requests.get(url, params=params)
        print("Response Time : {:<10} , worker : {:<10} ".format(
            r.elapsed.total_seconds(), torch.utils.data.get_worker_info().id))
        # Remove server from selected once there's a response
        self.selected.remove(random_server)
        # Add back to main server list after response
        self.arr.append(random_server)
        buffer = io.BytesIO(r.content)
        response = torch.load(buffer)
        return response

def train(net, device, trainloader, criterion, optimizer):
    for epoch in range(2):  # loop over the dataset multiple times
        running_loss = 0.0
        print('Epoch : {}'.format(epoch + 1))
        print('----------------------------')
        start_time = time.time()
        total_time = 0
        for i, data in enumerate(trainloader, 0):
            inputs, labels = data
            print("Train: Time taken to load batch {} is {}".format(i + 1, time.time() - start_time))
            total_time += time.time() - start_time
            start_time = time.time()
        print('Epoch : {} , Total Time Taken : {}'.format(epoch + 1, total_time))
    print('Finished Training')

imagenet_data = torchvision.datasets.CocoCaptions('/db/shared/detection+classification/coco/train2017/',
                                                  '/db/shared/detection+classification/coco/annotations/captions_train2017.json')
training_set = DistData(imagenet_data)
trainloader = DataLoader(training_set,
                         sampler=BatchSampler(RandomSampler(training_set), batch_size=24, drop_last=False),
                         num_workers=4)
train(trainloader)
</code></pre>
|
<h2>PyTorch way of distribution</h2>
<p>First of all, you should get yourself acquainted with <a href="https://pytorch.org/docs/master/generated/torch.nn.parallel.DistributedDataParallel.html#distributeddataparallel" rel="nofollow noreferrer">torch.nn.parallel.DistributedDataParallel</a> to see example how to distribute data in an efficient manner.</p>
<p>You can check <a href="https://stackoverflow.com/a/62864931/10886420">this answer of mine</a> and accompanying <a href="https://stackoverflow.com/a/62831823/10886420">Daniel's answer</a> to get some intuition about possible strategies. Also <a href="https://pytorch.org/tutorials/intermediate/ddp_tutorial.html" rel="nofollow noreferrer">PyTorch introduction</a> is a great resource.</p>
<p>In short, what it does:</p>
<ul>
<li>main worker loads the data (large batch)</li>
<li>this batch is distributed across other workers evenly (together with neural network)</li>
<li><code>forward</code> & backward pass is done on each worker with part of data</li>
<li><code>gradient</code> from each worker is sent to the <code>main worker</code>, where it's averaged
and the optimizer improves the <code>model</code></li>
<li><code>model</code> is distributed across workers (together with new batch)</li>
</ul>
<p>In this case there are three things sent across the network (or devices):</p>
<ul>
<li>batch of data (preferably large)</li>
<li>model</li>
<li>gradients of model</li>
</ul>
<p>This leads us to the main caveat with your solution; data transfers across the web should be minimized as those are <strong>really slow</strong>.</p>
<h2>Your way of distribution</h2>
<p>You are making a request for <strong>every sample</strong>. It is really inefficient (due to network transfers); what you should be after is requesting data for the whole batch. Furthermore, each server should have this data preloaded (say, four batches ahead), so it can be sent any time it is needed.</p>
<p>As you are using <code>14</code> workers on the <code>host</code> and each is sending requests for data to the other servers, it is possible that a few of them will hit the same server. In this case each has to wait for the other. It would be better to point each worker at its own server.</p>
<p>Still, this approach isn't really efficient as the model on <code>host</code> has to wait for <code>data</code> from servers.</p>
<p>If possible, you could split whole <code>COCO</code> dataset across workers. Also, on each worker, there should be a module doing <code>forward</code> and <code>backward</code>. This would work similarly to <strong>PyTorch way of distribution</strong> described above, except for <code>batch</code> transfers across devices. On the downside, training would be less randomized.</p>
<h2>Questions</h2>
<blockquote>
<p>However, I am a bit unclear on requesting a batch instead of a sample.
Do I send a list of indices to the server, and it returns a batch as a
response?</p>
</blockquote>
<p>Yeah, check out <a href="https://pytorch.org/docs/stable/data.html#torch.utils.data.DataLoader" rel="nofollow noreferrer"><code>torch.utils.data.DataLoader</code>'s <code>batch_sampler</code></a> argument.</p>
<blockquote>
<p>and how do you preload data when you are not receiving any request in
the first place?</p>
</blockquote>
<p>You can send multiple indices in a single request (e.g. indices for three batches). You prepare the first <code>batch</code> on a worker and send it to the host, preparing another one (the second batch) during the network communication. That way, while you are communicating, there is always one batch being prepared.</p>
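<p>As a sketch of the "request a whole batch" idea, grouping the indices needs nothing from PyTorch itself (the function name is illustrative):</p>

```python
import random

def batch_indices(n_samples, batch_size, shuffle=True, seed=None):
    """Group dataset indices into batches; each batch can then be fetched
    from a worker with a single HTTP request instead of one per sample."""
    idx = list(range(n_samples))
    if shuffle:
        random.Random(seed).shuffle(idx)
    return [idx[i:i + batch_size] for i in range(0, n_samples, batch_size)]

batches = batch_indices(10, 4, seed=0)
```

<p>Each inner list can then be sent as one HTTP request, mirroring what <code>batch_sampler</code> gives you inside a <code>DataLoader</code>.</p>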
|
python|https|pytorch|basehttpserver
| 0
|
3,207
| 63,609,704
|
Tabular String data convert to python data
|
<p>I have a string like this</p>
<pre><code>"""PID TTY TIME CMD
1 ? 00:00:01 systemd
2 ? 00:00:00 kthreadd
3 ? 00:00:00 rcu_gp
4 ? 00:00:00 rcu_par_gp"""
</code></pre>
<p>Now I want the data structured so that I can access it like <code>data["PID"]</code>, which would give me <code>1, 2, 3, 4</code>, and likewise for the other headers.</p>
<p>I have used <code>pandas</code> and <code>StringIO</code> to convert it to a <code>dataframe</code>, but the output of <code>df.columns</code> gives <code>['PID TTY', 'TIME CMD']</code>, which is not what I want.</p>
<p>It would be better if the logic were plain Python rather than pandas.</p>
|
<p>Use <code>sep=r"\s+"</code> to split on any whitespace:</p>
<pre><code>import pandas as pd
from io import StringIO

temp = """PID TTY          TIME CMD
    1 ?        00:00:01 systemd
    2 ?        00:00:00 kthreadd
    3 ?        00:00:00 rcu_gp
    4 ?        00:00:00 rcu_par_gp"""
df = pd.read_csv(StringIO(temp), sep=r"\s+")
print (df)
   PID TTY      TIME         CMD
0    1   ?  00:00:01     systemd
1    2   ?  00:00:00    kthreadd
2    3   ?  00:00:00      rcu_gp
3    4   ?  00:00:00  rcu_par_gp
</code></pre>
<hr />
<pre><code>print (df.columns)
Index(['PID', 'TTY', 'TIME', 'CMD'], dtype='object')
</code></pre>
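<p>Since the question prefers plain Python over pandas, here is a small sketch that splits the header and rows on whitespace and builds a dict of columns (it assumes no field contains spaces):</p>

```python
text = """PID TTY          TIME CMD
    1 ?        00:00:01 systemd
    2 ?        00:00:00 kthreadd
    3 ?        00:00:00 rcu_gp
    4 ?        00:00:00 rcu_par_gp"""

lines = text.splitlines()
headers = lines[0].split()
rows = [line.split() for line in lines[1:]]
# one list per header, so data["PID"] gives all PIDs in order
data = {h: [row[i] for row in rows] for i, h in enumerate(headers)}
```

<p>The values stay strings; convert e.g. <code>data["PID"]</code> with <code>int</code> if you need numbers.</p>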
|
python|python-3.x|pandas|list
| 2
|
3,208
| 63,406,138
|
ValueError: Failed to find data adapter that can handle input: <class 'NoneType'>, <class 'NoneType'> in keras model.predict
|
<p>I have made a CNN model in Keras and saved it as 'model.h5'. It takes an input shape of 128x128. Now, I am in a new file and am trying to make predictions with this model. Here is what I have done so far:</p>
<pre><code>import keras
from keras.preprocessing.image import load_img, img_to_array
from keras.models import load_model
import PIL
img = load_img("img.jpg")
img = img_to_array(img)
img = img.resize((128, 128))
model = load_model('model.h5')
model.summary()
abc = model.predict(img)
print(abc)
</code></pre>
<p>Here is my error:</p>
<pre><code>---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-3-e23dbdb3fe22> in <module>()
14 model.summary()
15
---> 16 abc = model.predict(img)
17
18 print(abc)
3 frames
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/data_adapter.py in select_data_adapter(x, y)
969 "Failed to find data adapter that can handle "
970 "input: {}, {}".format(
--> 971 _type_name(x), _type_name(y)))
972 elif len(adapter_cls) > 1:
973 raise RuntimeError(
ValueError: Failed to find data adapter that can handle input: <class 'NoneType'>, <class 'NoneType'>
</code></pre>
<p>Any help would be appreciated.</p>
<p>Thanks in advance</p>
|
<p>You are trying to resize the img array after this line:</p>
<p><code>img = img_to_array(img)</code></p>
<p>You might be trying to reshape the array instead of resizing the image. If you want to resize the loaded image, do it before converting it to an array, i.e. before this line:</p>
<pre><code>img = img_to_array(img)
</code></pre>
<p><strong>UPDATE:</strong></p>
<p>You are trying to use the <code>resize</code> function on an array, but that function is meant for an image object. Hence it returns <code>NoneType</code>, which in turn is causing the issue.</p>
<p>Another thing: your model expects a 4-dimensional input (inspected using the file you provided) and you are passing it <code>NoneType</code>. Also, even if PIL's <code>resize</code> had reshaped your array to 128 * 128 as you expected, it would still not have the right number of dimensions, hence the need for <code>reshape</code> rather than <code>resize</code> alone.</p>
<p>You can make your code work with the following change:</p>
<pre><code>img = load_img("img.jpg")
img = img.resize((128, 128))
img = img_to_array(img)
img = img.reshape(-1, 128, 128, 3)
print(img.shape)
model = load_model('model.h5')
model.summary()
abc = model.predict(img)
print(abc)
</code></pre>
<p>Here, using reshape to convert the input array to a 4-dimensional array that is expected by your model.</p>
<p>I hope this helps. I am a newbie on StackOverflow. It would be motivating if you could give me an upvote if you find this answer helpful.</p>
|
python|tensorflow|keras
| 4
|
3,209
| 21,648,654
|
Does Scipy Sparse use the (sparse) BLAS library?
|
<p>Numpy can use one of a number of BLAS libraries (eg. ATLAS, MKL, OpenBLAS etc.).</p>
<p>Does the scipy.sparse matrix module support the sparse BLAS library?</p>
|
<p>Searching the <code>scipy</code> <code>github</code> for <code>sparse BLAS</code> produces a few files, like</p>
<p><a href="https://github.com/scipy/scipy/blob/6a4460f68315f0669604054be91ceeacd606f0b6/scipy/sparse/linalg/dsolve/SuperLU/SRC/zsp_blas3.c" rel="nofollow">https://github.com/scipy/scipy/blob/6a4460f68315f0669604054be91ceeacd606f0b6/scipy/sparse/linalg/dsolve/SuperLU/SRC/zsp_blas3.c</a></p>
<p><a href="https://github.com/scipy/scipy/blob/6a4460f68315f0669604054be91ceeacd606f0b6/scipy/sparse/linalg/dsolve/SuperLU/README" rel="nofollow">https://github.com/scipy/scipy/blob/6a4460f68315f0669604054be91ceeacd606f0b6/scipy/sparse/linalg/dsolve/SuperLU/README</a></p>
<p>This <code>SuperLU</code> solver is the only place where the two are mentioned together.</p>
|
numpy|scipy|sparse-matrix|blas
| 2
|
3,210
| 30,224,143
|
How to speed up the code - searching through a dataframe takes hours
|
<p>I've got a CSV file containing the distances between centroids in a GIS model, in the following format:</p>
<pre><code>InputID,TargetID,Distance
1,2,3050.01327866
1,7,3334.99565217
1,5,3390.99115304
1,3,3613.77046864
1,4,4182.29900892
...
...
3330,3322,955927.582933
</code></pre>
<p>It is sorted on origin (<code>InputID</code>) and then on the nearest destination (<code>TargetID</code>).</p>
<p>For a specific modelling tool I need this data in a CSV file, formatted as follows (the numbers are the centroid numbers):</p>
<pre><code>distance1->1, distance1->2, distance1->3,.....distance1->3330
distance2->1, distance2->2,.....
.....
distance3330->1,distance3330->2....distance3330->3330
</code></pre>
<p>So no InputIDs or TargetIDs, just the distances, with the origins on the rows and the destinations on the columns
(example for the first 5 origins/destinations):</p>
<pre><code> 0,3050.01327866,3613.77046864,4182.29900892,3390.99115304
3050.01327866,0,1326.94611797,1175.10254872,1814.45584129
3613.77046864,1326.94611797,0,1832.209595,3132.78725738
4182.29900892,1175.10254872,1832.209595,0,1935.55056767
3390.99115304,1814.45584129,3132.78725738,1935.55056767,0
</code></pre>
<p>I've built the following code, and it works. But it is so slow that running it would take days to produce the 3330x3330 file. As I am a beginner in Python, I think I am overlooking something...</p>
<pre><code>import pandas as pd
import numpy as np

file = pd.read_csv('c:\\users\\Niels\\Dropbox\\Python\\centroid_distances.csv')
df = file.sort_index(by=['InputID', 'TargetID'], ascending=[True, True])
number_of_zones = 3330
text_file = open("c:\\users\\Niels\\Dropbox\\Python\\Output.csv", "w")
for origin in range(1, number_of_zones):
    output_string = ''
    print(origin)
    for destination in range(1, number_of_zones):
        if origin == destination:
            distance = 0
        else:
            distance_row = df[(df['InputID'] == origin) & (df['TargetID'] == destination)]
            # I guess this is the time-consuming part
            distance = distance_row.iloc[0]['Distance']
        output_string = output_string + str(distance) + ','
    text_file.write(output_string[:-1] + '\n')  # strip last ',' of line
text_file.close()
</code></pre>
<p>Could you give me some hints to speed up this code?</p>
|
<p>IIUC, all you need is <code>pivot</code>. If you start from a frame like this:</p>
<pre><code>df = pd.DataFrame(columns="InputID,TargetID,Distance".split(","))
df["InputID"] = np.arange(36)//6 + 1
df["TargetID"] = np.arange(36) % 6 + 1
df["Distance"] = np.random.uniform(0, 100, len(df))
df = df[df.InputID != df.TargetID]
df = df.sort(["InputID", "Distance"])
>>> df.head()
   InputID  TargetID   Distance
2        1         3   6.407198
3        1         4  43.037829
1        1         2  52.121284
4        1         5  86.769620
5        1         6  96.703294
</code></pre>
<p>and we know the InputID and TargetID are unique, we can simply <code>pivot</code>:</p>
<pre><code>>>> pv = df.pivot(index="InputID", columns="TargetID", values="Distance").fillna(0)
>>> pv
TargetID          1          2          3          4          5          6
InputID
1          0.000000  52.121284   6.407198  43.037829  86.769620  96.703294
2         53.741611   0.000000  27.555296  85.328607  59.561345   8.895407
3         96.142920  62.532984   0.000000   6.320273  37.809105  69.896308
4         57.835249  49.350647  38.660269   0.000000   7.151053  45.017780
5         72.758342  48.947788   4.212775  98.183169   0.000000  15.702280
6         32.468329  83.979431  23.578347  30.212883  82.580496   0.000000
>>> pv.to_csv("out_dist.csv", index=False, header=False)
>>> !cat out_dist.csv
0.0,52.1212839519,6.40719759732,43.0378290605,86.769620064,96.7032941473
53.7416111725,0.0,27.5552964592,85.3286070586,59.5613449796,8.89540736892
96.1429198049,62.5329836475,0.0,6.32027280686,37.8091052942,69.8963084944
57.8352492462,49.3506467609,38.6602692461,0.0,7.15105257546,45.0177800391
72.7583417281,48.9477878574,4.21277494476,98.183168992,0.0,15.7022798801
32.4683285321,83.9794307564,23.578346756,30.2128827937,82.5804959193,0.0
</code></pre>
<p>The <a href="http://pandas.pydata.org/pandas-docs/stable/reshaping.html" rel="noreferrer">reshaping</a> section of the tutorial might be useful.</p>
|
python|csv|python-3.x|pandas
| 7
|
3,211
| 53,734,405
|
panda convert column to list and add new column
|
<p>New to Python. I am trying to convert column C to a list and add it as an extra column D in df.
I tried <code>list()</code>; it works for an individual row, but it doesn't work for the whole column C in df.
I need a hint/help to move forward.</p>
<p>Input </p>
<pre><code>A   B   C
----------------
1   21  12457643
2   32  34576543
3   41  23456789
</code></pre>
<p>Output</p>
<pre><code>A   B   C          D
------------------------------------
1   21  12457643   [1,2,4,5,7,6,4,3]
2   32  34576543   [3,4,5,7,6,5,4,3]
3   41  23456789   [2,3,4,5,6,7,8,9]
</code></pre>
|
<p>In general, storing lists in pandas columns is not a very good idea. But if you insist, convert the numbers to strings and then to lists of characters:</p>
<pre><code>df['D'] = df['C'].astype(str).apply(list)
#0 [1, 2, 4, 5, 7, 6, 4, 3]
#1 [3, 4, 5, 7, 6, 5, 4, 3]
#2 [2, 3, 4, 5, 6, 7, 8, 9]
</code></pre>
<p>The result is a list of characters. If you want a list of single-digit numbers, you need to apply a custom function via <code>lambda</code>:</p>
<pre><code>df['D'] = df['C'].astype(str).apply(lambda x: list(map(int, x)))
#0 [1, 2, 4, 5, 7, 6, 4, 3]
#1 [3, 4, 5, 7, 6, 5, 4, 3]
#2 [2, 3, 4, 5, 6, 7, 8, 9]
</code></pre>
|
python|pandas|multiple-columns
| 0
|
3,212
| 53,591,919
|
Python Scikit - Learn: Cross Validation with multi-index
|
<p>Hi, I want to use one of scikit-learn's functions for cross-validation. What I want is for the splitting of the folds to be determined by one of the indexes. For example, let's say I have this data with "month" and "day" being the indexes:</p>
<pre><code>Month     Day  Feature_1
January     1         10
            2         20
February    1         30
            2         40
March       1         50
            2         60
            3         70
April       1         80
            2         90
</code></pre>
<p>Let's say I want to have 1/4 of the data as the test set for each validation. I want this fold separation to be done by the first index, which is the month. In this case the test set will be one of the months and the remaining 3 months will be the training set. As an example, one of the train and test splits will look like this:</p>
<pre><code>TEST SET:
Month     Day  Feature_1
January     1         10
            2         20

TRAINING SET:
Month     Day  Feature_1
February    1         30
            2         40
March       1         50
            2         60
            3         70
April       1         80
            2         90
</code></pre>
<p>How can I do this? Thank you.</p>
|
<p>This is called splitting by a group. Check out the <a href="https://scikit-learn.org/stable/modules/cross_validation.html#cross-validation-iterators-for-grouped-data" rel="nofollow noreferrer">user-guide in scikit-learn here to understand more about it</a>:</p>
<blockquote>
<p>...</p>
<p>To measure this, we need to ensure that all the samples in the
validation fold come from groups that are not represented at all in
the paired training fold.</p>
<p>...</p>
</blockquote>
<p>You can use the <a href="https://scikit-learn.org/stable/modules/cross_validation.html#group-k-fold" rel="nofollow noreferrer"><code>GroupKFold</code></a> or other strategies that have Group in the name. A sample can be</p>
<pre><code># I am not sure about this exact command,
# but after this, you should have individual columns for each index
df = df.reset_index()
print(df)

      Month  Day  Feature_1
    January    1         10
    January    2         20
   February    1         30
   February    2         40
      March    1         50
      March    2         60
      March    3         70

groups = df['Month']

from sklearn.model_selection import GroupKFold

gkf = GroupKFold(n_splits=3)
for train, test in gkf.split(X, y, groups=groups):
    # Here "train", "test" are indices of location,
    # you need to use "iloc" to get actual values
    print("%s %s" % (train, test))
    print(df.iloc[train, :])
    print(df.iloc[test, :])
</code></pre>
<p><strong>Update</strong>: For passing this into cross-validation methods, just pass the months data to <code>groups</code> param in those. Like below:</p>
<pre><code>gkf = GroupKFold(n_splits=3)
y_pred = cross_val_predict(estimator, X_train, y_train, cv=gkf, groups=df['Month'])
</code></pre>
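<p>A self-contained sketch of the group-wise guarantee on synthetic data (the arrays here are made up, not from the question):</p>

```python
import numpy as np
from sklearn.model_selection import GroupKFold

X = np.arange(18).reshape(9, 2)  # 9 samples, 2 features
y = np.arange(9)
groups = np.array(["Jan", "Jan", "Feb", "Feb", "Mar", "Mar", "Mar", "Apr", "Apr"])

gkf = GroupKFold(n_splits=4)
folds = list(gkf.split(X, y, groups=groups))

# every month is either fully in train or fully in test, never both
for train_idx, test_idx in folds:
    assert set(groups[train_idx]).isdisjoint(set(groups[test_idx]))
```

<p>With months as groups, each month ends up entirely in the training fold or entirely in the test fold, which is exactly the split you described.</p>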
|
python|pandas|scikit-learn
| 2
|
3,213
| 20,277,982
|
Fastest pairwise distance metric in python
|
<p>I have an 1D array of numbers, and want to calculate all pairwise euclidean distances. I have a method (thanks to SO) of doing this with broadcasting, but it's inefficient because it calculates each distance twice. And it doesn't scale well.</p>
<p>Here's an example that gives me what I want with an array of 1000 numbers. </p>
<pre><code>import numpy as np
import random
r = np.array([random.randrange(1, 1000) for _ in range(0, 1000)])
dists = np.abs(r - r[:, None])
</code></pre>
<p>What's the fastest implementation in scipy/numpy/scikit-learn that I can use to do this, given that it has to scale to situations where the 1D array has >10k values. </p>
<p>Note: the matrix is symmetric, so I'm guessing that it's possible to get at least a 2x speedup by addressing that, I just don't know how.</p>
|
<p>Neither of the other answers quite answered the question - one was in Cython, one was slower. But both provided very useful hints. Following up on them suggests that <code>scipy.spatial.distance.pdist</code> is the way to go.</p>
<p>Here's some code:</p>
<pre><code>import numpy as np
import random
import sklearn.metrics.pairwise
import scipy.spatial.distance
r = np.array([random.randrange(1, 1000) for _ in range(0, 1000)])
c = r[:, None]
def option1(r):
    dists = np.abs(r - r[:, None])

def option2(r):
    dists = scipy.spatial.distance.pdist(r, 'cityblock')

def option3(r):
    dists = sklearn.metrics.pairwise.manhattan_distances(r)
</code></pre>
<p>Timing with IPython:</p>
<pre><code>In [36]: timeit option1(r)
100 loops, best of 3: 5.31 ms per loop
In [37]: timeit option2(c)
1000 loops, best of 3: 1.84 ms per loop
In [38]: timeit option3(c)
100 loops, best of 3: 11.5 ms per loop
</code></pre>
<p>I didn't try the Cython implementation (I can't use it for this project), but comparing my results to the other answer that did, it looks like <code>scipy.spatial.distance.pdist</code> is roughly a third slower than the Cython implementation (taking into account the different machines by benchmarking on the np.abs solution).</p>
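<p>Note that <code>pdist</code> already exploits the symmetry mentioned in the question: it returns a condensed vector of length n(n-1)/2 holding only one triangle of the matrix. If you need the full square matrix afterwards, <code>scipy.spatial.distance.squareform</code> converts between the two forms. A small sketch:</p>

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

r = np.arange(5.0).reshape(-1, 1)   # 5 values as a column vector (pdist wants 2D)
condensed = pdist(r, 'cityblock')   # length n*(n-1)/2 == 10, one triangle only
full = squareform(condensed)        # expand to the symmetric 5x5 matrix
print(full.shape)
```

<p>So you get the 2x memory saving for free by simply staying in the condensed form.</p>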
|
python|arrays|numpy|scipy|scikit-learn
| 32
|
3,214
| 16,026,181
|
Downloading treasury data with Pandas read_csv
|
<p>I'm trying to download treasury data from <a href="http://www.federalreserve.gov/releases/h15/data.htm" rel="nofollow">this page</a> using Pandas read_csv.</p>
<pre><code>url = "http://www.federalreserve.gov/datadownload/Output.aspx?rel=H15&series=bcb44e57fb57efbe90002369321bfb3f&lastObs=&from=&to=&filetype=csv&label=include&layout=seriescolumn"
res = requests.get(url)
csvio = StringIO(res.content)
dataframe = pd.read_csv(csvio, header=5, index_col=0, parse_dates=True)
columns_dic = {"RIFLGFCY10_N.B":'BC_10YEAR'}
dataframe = dataframe.rename(columns=columns_dic)
print (dataframe.head())
</code></pre>
<p>The output looks a little strange to me:</p>
<pre><code> BC_10YEAR
Time Period
1962-01-02 4.06
1962-01-03 4.03
1962-01-04 3.99
1962-01-05 4.02
1962-01-08 4.03
</code></pre>
<p>I don't understand why the header is split between two rows when I print it. Also, it's not clear to me that the dates are being properly parsed. Is there a way I can fix my call to read_csv?</p>
|
<p>The header is split because of your <code>index_col=0</code> argument. Try without an index column</p>
<pre><code>In [20]: dataframe = read_csv(csvio, header=5, index_col=None, parse_dates=True)
In [21]: dataframe
Out[21]:
<class 'pandas.core.frame.DataFrame'>
Int64Index: 13379 entries, 0 to 13378
Data columns:
Time Period 13379 non-null values
RIFLGFCY10_N.B 13379 non-null values
dtypes: object(2)
In [22]: dataframe.head()
Out[22]:
Time Period RIFLGFCY10_N.B
0 1962-01-02 4.06
1 1962-01-03 4.03
2 1962-01-04 3.99
3 1962-01-05 4.02
4 1962-01-08 4.03
</code></pre>
<p>and the first column of data from the StringIO object becomes a column in the DataFrame, instead of becoming the index.</p>
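<p>If you do want a date index, an alternative sketch is to let <code>read_csv</code> parse the date column and use it as the index in one call (shown here on a small inline sample with the same layout):</p>

```python
import io
import pandas as pd

csv_text = "Time Period,RIFLGFCY10_N.B\n1962-01-02,4.06\n1962-01-03,4.03\n"
df = pd.read_csv(io.StringIO(csv_text),
                 parse_dates=['Time Period'],   # parse this column as dates
                 index_col='Time Period')       # and make it the index
print(df.index.dtype)
```

<p>Naming the column in both arguments avoids the header-splitting confusion entirely.</p>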
|
python|csv|pandas
| 1
|
3,215
| 15,734,756
|
Is there a "freq" function in numpy/python?
|
<p>Suppose you have:</p>
<pre><code>arr = np.array([1,2,1,3,3,4])
</code></pre>
<p>Is there a built in function that returns the most frequent element?</p>
|
<p>Yes, Python's <a href="http://docs.python.org/2.7/library/collections.html#counter-objects" rel="nofollow noreferrer"><em>collections.Counter</em></a> has direct support for finding the most frequent elements:</p>
<pre><code>>>> from collections import Counter
>>> Counter('abracadbra').most_common(2)
[('a', 4), ('r', 2)]
>>> Counter([1,2,1,3,3,4]).most_common(2)
[(1, 2), (3, 2)]
</code></pre>
<p>With <em>numpy</em>, you might want to start with the <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.histogram.html#numpy-histogram" rel="nofollow noreferrer">histogram() function</a> or the <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.bincount.html" rel="nofollow noreferrer">bincount() function</a>.</p>
<p>With <em>scipy</em>, you can search for the modal element with <a href="http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.mstats.mode.html#scipy-stats-mstats-mode" rel="nofollow noreferrer">mstats.mode</a>.</p>
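<p>For arrays of small non-negative integers like the one above, a <code>bincount</code>-based sketch of finding the modal element (ties resolve to the smallest value, since <code>argmax</code> returns the first maximum):</p>

```python
import numpy as np

arr = np.array([1, 2, 1, 3, 3, 4])
counts = np.bincount(arr)   # counts[v] is how often value v occurs
print(counts.argmax())      # the most frequent value
```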
|
python|numpy
| 13
|
3,216
| 71,883,306
|
How to apply regex substitutions to Pandas Series containing string, lists?
|
<p>I'd like to find a way to apply a regular expression substitution to every element of a Pandas Series where the series may contain strings, lists, or dicts.</p>
<p>The objective would be to replace instances of text matching a specified pattern in a DataFrame column before that column is saved into Google BigQuery.</p>
<p>I can achieve this no problem where the Series contains only string values, e.g.:</p>
<p><code>['string 1', 'string 2', 'string 3']</code></p>
<p>However there may be instances in which the Series contains 'nested' data - in which each item in the series is a list containing <em>n</em> dicts:</p>
<pre><code>[
[
{ 'val1':'string 1', 'val2':'string 2' },
{ 'val1':'string 3', 'val2':'string 4' }
],
[
{ 'val1':'string 1', 'val2':'string 6' }
]
]
</code></pre>
<p>Ideally, I'd be able to create a function which can perform the substitution for each element in a series regardless of whether the series contains 'nested' values or not.</p>
<p>As an example, and using the sample values above, it may be that the function would be used to replace instances of 'string 1' with 'abcdef' - with the following outputs:</p>
<p><code>['abcdef', 'string 2', 'string 3']</code></p>
<pre><code>[
[
{ 'val1':'abcdef', 'val2':'string 2' },
{ 'val1':'string 3', 'val2':'string 4' }
],
[
{ 'val1':'abcdef', 'val2':'string 6' }
]
]
</code></pre>
<p>I had thought that it might be possible to do something similar to the below:</p>
<pre><code>import re

def regex_replace_in_series(series_item, regex_pattern, replacement):
    if isinstance(series_item, str):
        return re.sub(regex_pattern, replacement, series_item)
    elif isinstance(series_item, list):
        return [regex_replace_in_series(item, regex_pattern, replacement) for item in series_item]
    elif isinstance(series_item, dict):
        for key in series_item:
            series_item[key] = regex_replace_in_series(series_item[key], regex_pattern, replacement)
        return series_item
    else:
        return series_item

my_series.apply(regex_replace_in_series, args=[some_pattern, some_replacement])
</code></pre>
<p>But I imagine that this would be a relatively slow/error prone way of achieving the desired outcome.</p>
<p>Is there a better way?</p>
<p>Thanks</p>
|
<p>What you could do is:</p>
<ul>
<li>transform your dataframe to csv</li>
<li>replace the strings you need by exploiting string context</li>
<li>get back your dataframe</li>
</ul>
<p>Here's a quick snippet that does replace strings within dictionaries by taking into account keys as a context:</p>
<pre><code>import io
import re
# transforming dataframe to csv string
string_df = df.to_csv(index=False)[1:]
# applying replacements on dicts
strings_to_replace_in_dict = [('val1', 'string 1', 'abcdef')]
for key, value, replacement in strings_to_replace_in_dict:
    string_df = re.sub(r"('"+key+"': ')"+value, r"\1"+replacement, string_df)
# transforming csv string back to dataframe
df = pd.read_csv(io.StringIO(string_df))
</code></pre>
<p>You can handle list replacements in an analogous way.</p>
<p>Does this answer provide help for your problem?</p>
|
python|regex|pandas
| 0
|
3,217
| 71,866,069
|
Grouping a grouped dataframe in a nested loop
|
<p>I have a scenario where I have to group a dataframe by a column and then group the resulting dataframe by a different column. The reason I am grouping iteratively is that it's easier for me to stack the results into a different dataframe. The way I am trying to do it is:</p>
<pre><code>g = df.groupby(col1)
for a, b in g:
    for c, d in b.groupby(col2):
</code></pre>
<p>When I try to do the above I am getting a None value in c instead of value associated with col2. What am I doing incorrectly?</p>
|
<p>We can't tell from what you've given us, but your code should work fine... it does for me.</p>
<pre><code>df = pd.DataFrame({'EmployeeNo': [11111, 11112, 11113, 11115, 11116, 11128],
                   'OutletName': ['Outlet1', 'Outlet2', 'Outlet3', 'Outlet4', 'Outlet5', 'Outlet6'],
                   'EmployeeName': ['John', 'Tom', 'Bob', 'Sam', 'Sean', 'Zac'],
                   'TargetAmount': [1000, 500, 400, 500, 300, 800]})

g = df.groupby('OutletName')
for a, b in g:
    for c, d in b.groupby('EmployeeName'):
        print(c)
</code></pre>
</code></pre>
<p>Output:</p>
<pre><code>John
Tom
Bob
Sam
Sean
Zac
</code></pre>
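<p>As an aside: if the goal is just to iterate over both keys, grouping by both columns at once avoids the nesting entirely. A sketch (column names assumed from the example above):</p>

```python
import pandas as pd

df = pd.DataFrame({'OutletName': ['Outlet1', 'Outlet1', 'Outlet2'],
                   'EmployeeName': ['John', 'Tom', 'Bob'],
                   'TargetAmount': [1000, 500, 400]})

# one groupby over both columns; each key is an (outlet, employee) tuple
totals = {}
for (outlet, employee), grp in df.groupby(['OutletName', 'EmployeeName']):
    totals[(outlet, employee)] = grp['TargetAmount'].sum()
print(totals)
```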
|
pandas|pandas-groupby
| 2
|
3,218
| 18,922,604
|
Convolve an RGB image with a custom neighbour kernel using Python and Numpy
|
<p>I'm trying to implement an algorithm that checks the 4 neighbouring (up, down, left and right) pixels of an RGB image: if all pixel RGB values are equal, I mark the pixel in the output image as 1, otherwise it will be 0. The non-vectorized implementation is:</p>
<pre><code>def set_border_interior(img):
    img_rows = img.shape[0]
    img_cols = img.shape[1]
    res = np.zeros((img_rows, img_cols))
    for row in xrange(1, img_rows - 1):
        for col in xrange(1, img_cols - 1):
            data_b = set()
            data_g = set()
            data_r = set()
            up = row - 1
            down = row + 1
            left = col - 1
            right = col + 1
            data_b.add(img.item(row, col, 0))
            data_g.add(img.item(row, col, 1))
            data_r.add(img.item(row, col, 2))
            data_b.add(img.item(up, col, 0))
            data_g.add(img.item(up, col, 1))
            data_r.add(img.item(up, col, 2))
            data_b.add(img.item(down, col, 0))
            data_g.add(img.item(down, col, 1))
            data_r.add(img.item(down, col, 2))
            data_b.add(img.item(row, left, 0))
            data_g.add(img.item(row, left, 1))
            data_r.add(img.item(row, left, 2))
            data_b.add(img.item(row, right, 0))
            data_g.add(img.item(row, right, 1))
            data_r.add(img.item(row, right, 2))
            if (len(data_b) == 1) and (len(data_g) == 1) and (len(data_r) == 1):
                res.itemset(row, col, False)
            else:
                res.itemset(row, col, True)
    return res
</code></pre>
<p>This non-vectorized version works, but it is really slow (even using img.item to read data and img.itemset to set new values). Is there a better way to implement this in Numpy (or scipy)?</p>
|
<p>Leaving the border aside, where your function is not well defined anyway, you could do the following:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
rows, cols = 480, 640
rgb_img = np.zeros((rows, cols, 3), dtype=np.uint8)
rgb_img[:rows//2, :cols//2] = 255
center_slice = rgb_img[1:-1, 1:-1]
left_slice = rgb_img[1:-1, :-2]
right_slice = rgb_img[1:-1, 2:]
up_slice = rgb_img[:-2, 1:-1]
down_slice = rgb_img[2:, 1:-1]
all_equal = (np.all(center_slice == left_slice, axis=-1) &
             np.all(center_slice == right_slice, axis=-1) &
             np.all(center_slice == up_slice, axis=-1) &
             np.all(center_slice == down_slice, axis=-1))
plt.subplot(211)
plt.imshow(rgb_img, interpolation='nearest')
plt.subplot(212)
plt.imshow(all_equal, interpolation='nearest')
plt.show()
</code></pre>
<p><img src="https://i.stack.imgur.com/iLEPH.png" alt="enter image description here"></p>
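<p>If you also want values for the border pixels, one option is to pad the image with <code>np.pad</code> (edge mode repeats the border row/column) and apply the same slicing. A sketch, wrapped in a function; note it returns <code>True</code> where all four neighbours are equal, which is the opposite convention of the question's <code>res</code>:</p>

```python
import numpy as np

def four_neighbour_equal(img):
    """True where a pixel's RGB equals all 4 neighbours (edge-padded)."""
    padded = np.pad(img, ((1, 1), (1, 1), (0, 0)), mode='edge')
    center = padded[1:-1, 1:-1]
    same = np.ones(img.shape[:2], dtype=bool)
    for dy, dx in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
        neigh = padded[1 + dy:padded.shape[0] - 1 + dy,
                       1 + dx:padded.shape[1] - 1 + dx]
        same &= np.all(center == neigh, axis=-1)
    return same
```

<p>Inverting the result (<code>~same</code>) recovers the question's convention.</p>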
|
python|image|numpy|scipy
| 4
|
3,219
| 22,178,439
|
Variable changes value without changing it at all in the program
|
<p>The numpy array <code>pos_0</code> changes its value without anything happening to it in the program.</p>
<p>Relevant steps:</p>
<ol>
<li>I assign a value to <code>pos_0</code></li>
<li>I set <code>pos=pos_0</code></li>
<li>I change <code>pos</code> (in a while loop)</li>
<li>I print both <code>pos</code> and <code>pos_0</code> and <code>pos_0</code> is now equal to the value that <code>pos</code> has after the while loop.</li>
</ol>
<p>Nowhere after the assignment of <code>pos=pos_0</code> does the name of the variable even come up.</p>
<p>Also, the program takes FOREVER to run. I know it's a long loop so it's not a surprise, but any advice on how to speed it up would be so great.</p>
<p>Thanks a ton!</p>
<p>Here is the code:</p>
<pre><code>import math
import numpy as np
from mpl_toolkits.mplot3d import Axes3D
import matplotlib.pyplot as plt
import sys
import random
#starting conditions:
q_b=5.19230065e-39 #the magnetic charge of the particle in Ampere*kpc by dirac quantization cond.
m=1.78266184e-10 #mass (kg). An estimate from Wick (2002).
pos_0=np.array([-8.33937979, 0.00, 0.00]) #starting position (kpc)
vel_0=np.array([0, 0, 0])*(3.24077929e-20) #starting velocity (kpc/s), but enter the numbers in m, conversion is there.
dt=1e8 #the timestep (smaller = more accuracy, more computing time) in seconds
distance_to_track= .01 #how far to track the particle (kpc)
#disk parameters. B is in tesla.
b1=0.1e-10
b2=3.0e-10
b3=-0.9e-10
b4=-0.8e-10
b5=-2.0e-10
b6=-4.2e-10
b7=0.0e-10
b8=2.7e-10
b_ring=0.1e-10
h_disk=0.4
w_disk=0.27
#halo parameters
B_n=1.4e-10
B_s=-1.1e-10
r_n=9.22
r_s=16.7 #NOTE: parameter has a large error.
w_h=0.2
z_0=5.3
#X-field parameters
B_X=4.6e-10
Theta0_X=49.0
rc_X=4.8
r_X=2.9
#striation parameters (the striated field is currently unused)
#gamma= 2.92
#alpha= 2.65 #taken from page 9, top right paragraph
#beta= gamma/alpha-1
#other preliminary measures:
c=9.7156e-12 #speed of light in kpc/s
def mag(V):
    return math.sqrt(V[0]**2+V[1]**2+V[2]**2)
initialposition=pos_0
pos=pos_0
vel=vel_0
trailx=(pos[0],) #trailx,y,z is used to save the coordinates of the particle at each step, to plot the path afterwards
traily=(pos[1],)
trailz=(pos[2],)
gam=1/math.sqrt(1-(mag(vel)/c)**2)
KE=m*c**2*(gam-1)*5.942795e48 #KE, converted to GeV
KEhistory=(KE,)
distance_tracked=0 #set the distance travelled so far to 0
time=0
#boundary function (between disk and halo fields)
def L(Z,H,W):
    return (1+math.e**(-2*(abs(Z)-H)/W))**-1

#halo boundary spirals:
i=11.5 #this is the opening "pitch" angle of the logarithmic spiral boundaries.
r_negx=np.array([5.1, 6.3, 7.1, 8.3, 9.8, 11.4, 12.7, 15.5]) #these are the radii at which the spirals cross the x-axis

def r1(T):
    return r_negx[0]*math.e**(T/math.tan(math.radians(90-i)))
def r2(T):
    return r_negx[1]*math.e**(T/math.tan(math.radians(90-i)))
def r3(T):
    return r_negx[2]*math.e**(T/math.tan(math.radians(90-i)))
def r4(T):
    return r_negx[3]*math.e**(T/math.tan(math.radians(90-i)))
def r5(T):
    return r_negx[4]*math.e**(T/math.tan(math.radians(90-i)))
def r6(T):
    return r_negx[5]*math.e**(T/math.tan(math.radians(90-i)))
def r7(T):
    return r_negx[6]*math.e**(T/math.tan(math.radians(90-i)))
def r8(T):
    return r_negx[7]*math.e**(T/math.tan(math.radians(90-i)))

#X-field definitions:
def r_p(R,Z):
    if abs(Z)>=math.tan(Theta0_X)*math.sqrt(R)-math.tan(Theta0_X)*rc_X:
        return R-abs(Z)/math.tan(Theta0_X)
    else:
        return R*rc_X/(rc_X+abs(Z)/math.tan(Theta0_X))

def b_X(R_P):
    return B_X*math.e**(-R_P/r_X)

def Theta_X(R,Z):
    if abs(Z)>=math.tan(Theta0_X)*math.sqrt(R)-math.tan(Theta0_X)*rc_X:
        return math.atan(abs(Z)/(R-r_p(R,Z)))
    else:
        return Theta0_X
#preliminary check:
if mag(vel) >= c:
    print("Error: Velocity cannot exceed the speed of light. Currently, it is",mag(vel)/c,"times c.")
    sys.exit("Stopped program.")
#print initial conditions:
print()
print()
print("=========================PARTICLE TRAIL INFO=========================")
print("Your Initial Parameters: \nq_b =",q_b,"A*kpc m =",m,"kg dt =",dt,"s distance to track =",distance_to_track,"kpc","KE =",KE,"GeV")
print("initial position (kpc) =",pos_0,"\ninitial velocity (kpc/s) =",vel_0)
print()
#ok, let's start tracking the monopole. Most of the calculations in the loop are to find the bfield.
while distance_tracked<distance_to_track:
    #definitions:
    r=math.sqrt(pos[0]**2+pos[1]**2)
    theta=math.atan(pos[1]/pos[0])
    gam=1/math.sqrt(1-(mag(vel)/c)**2)
    #now for bfield calculation, component by component: halo, then X-field, then disk (striated not currently used)
    #halo component:
    if pos[2]>=0:
        bfield = math.e**(-abs(pos[2])/z_0)*L(pos[2],h_disk,w_disk)* B_n*(1-L(r,r_n,w_h)) * np.array([-1*pos[1]/r, pos[0]/r,0])
    else:
        bfield = math.e**(-abs(pos[2])/z_0)*L(pos[2],h_disk,w_disk)* B_s*(1-L(r,r_s,w_h)) * np.array([-1*pos[1]/r, pos[0]/r,0])
    #X-field component:
    if abs(pos[2])>=math.tan(Theta0_X)*math.sqrt(r)-math.tan(Theta0_X)*rc_X:
        bfield += b_X(r_p(r,pos[2]))*(r_p(r,pos[2])/r)**2*np.array([math.cos(Theta_X(r,pos[2]))**2,
                      math.sin(Theta_X(r,pos[2]))*math.cos(Theta_X(r,pos[2])),
                      math.sin(Theta_X(r,pos[2]))])
    else:
        bfield += b_X(r_p(r,pos[2]))*(r_p(r,pos[2])/r)*np.array([math.cos(Theta_X(r,pos[2]))**2,
                      math.sin(Theta_X(r,pos[2]))*math.cos(Theta_X(r,pos[2])),
                      math.sin(Theta_X(r,pos[2]))])
    #disk component:
    if r>=3.0 and r< 5.0 :
        bfield+=b_ring*(1-L(pos[2],h_disk,w_disk))
    elif r>=5.0 and r<=20.0:
        if r>=r1(theta) and r<r2(theta): #region 6
            bfield+=(b6/r)*(1-L(pos[2],h_disk,w_disk))*np.array([math.sin(math.radians(11.5))*pos[0]/r - math.cos(math.radians(11.5))*pos[1]/r,
                          math.sin(math.radians(11.5))*pos[1]/r + math.cos(math.radians(11.5))*pos[0]/r,
                          1])
        elif r>=r2(theta) and r<r3(theta): #region 7
            bfield+=(b7/r)*(1-L(pos[2],h_disk,w_disk))*np.array([math.sin(math.radians(11.5))*pos[0]/r - math.cos(math.radians(11.5))*pos[1]/r,
                          math.sin(math.radians(11.5))*pos[1]/r + math.cos(math.radians(11.5))*pos[0]/r,
                          1])
        elif r>=r3(theta) and r<r4(theta): #region 8
            bfield+=(b8/r)*(1-L(pos[2],h_disk,w_disk))*np.array([math.sin(math.radians(11.5))*pos[0]/r - math.cos(math.radians(11.5))*pos[1]/r,
                          math.sin(math.radians(11.5))*pos[1]/r + math.cos(math.radians(11.5))*pos[0]/r,
                          1])
        elif r>=r4(theta) and r<r5(theta): #region 1
            bfield+=(b1/r)*(1-L(pos[2],h_disk,w_disk))*np.array([math.sin(math.radians(11.5))*pos[0]/r - math.cos(math.radians(11.5))*pos[1]/r,
                          math.sin(math.radians(11.5))*pos[1]/r + math.cos(math.radians(11.5))*pos[0]/r,
                          1])
        elif r>=r5(theta) and r<r6(theta): #region 2
            bfield+=(b2/r)*(1-L(pos[2],h_disk,w_disk))*np.array([math.sin(math.radians(11.5))*pos[0]/r - math.cos(math.radians(11.5))*pos[1]/r,
                          math.sin(math.radians(11.5))*pos[1]/r + math.cos(math.radians(11.5))*pos[0]/r,
                          1])
        elif r>=r6(theta) and r<r7(theta): #region 3
            bfield+=(b3/r)*(1-L(pos[2],h_disk,w_disk))*np.array([math.sin(math.radians(11.5))*pos[0]/r - math.cos(math.radians(11.5))*pos[1]/r,
                          math.sin(math.radians(11.5))*pos[1]/r + math.cos(math.radians(11.5))*pos[0]/r,
                          1])
        elif r>=r7(theta) and r<r8(theta): #region 4
            bfield+=(b4/r)*(1-L(pos[2],h_disk,w_disk))*np.array([math.sin(math.radians(11.5))*pos[0]/r - math.cos(math.radians(11.5))*pos[1]/r,
                          math.sin(math.radians(11.5))*pos[1]/r + math.cos(math.radians(11.5))*pos[0]/r,
                          1])
        elif r>=r8(theta) and r<r1(theta): #region 5
            bfield+=(b5/r)*(1-L(pos[2],h_disk,w_disk))*np.array([math.sin(math.radians(11.5))*pos[0]/r - math.cos(math.radians(11.5))*pos[1]/r,
                          math.sin(math.radians(11.5))*pos[1]/r + math.cos(math.radians(11.5))*pos[0]/r,
                          1])
    #striated fields (the striated field is currently unused):
    #zeroorone=randrange(2)
    #if zeroorone==0:
    #    bfield-= math.sqrt(beta)*bfield
    #if zeroorone==1:
    #    bfield+= math.sqrt(beta)*bfield
    #CALCULATION OF THE CHANGE IN POSITION:
    #nonrelativistic:
    #acc=bfield*(q_b/m)
    #pos=np.array([pos[0]-vel[0]*dt-0.5*acc[0]*(dt**2),
    #              pos[1]-vel[1]*dt-0.5*acc[1]*(dt**2),
    #              pos[2]-vel[2]*dt-0.5*acc[2]*(dt**2)])
    #distance_tracked+=math.sqrt((vel[0]*dt+0.5*acc[0]*(dt**2))**2+(vel[1]*dt+0.5*acc[1]*(dt**2))**2+(vel[2]*dt+0.5*acc[2]*(dt**2))**2)
    #vel=np.array([vel[0]-acc[0]*dt,
    #              vel[1]-acc[1]*dt,
    #              vel[2]-acc[2]*dt])
    #trailx=trailx+(pos[0],)
    #traily=traily+(pos[1],)
    #trailz=trailz+(pos[2],)
    #KE=9.521406e38*6.24150934e15*gam*m*c**2 #calculate KE, and convert from kg*kpc^2/s^2 to J to MeV
    #KEhistory=KEhistory+(KE,)
    #RELATIVISTIC:
    force=q_b*bfield #In kg*kpc/s^2
    acc_prefactor=(gam*m*(c**2+gam**2*mag(vel)**2))**-1
    acc=np.array([acc_prefactor*(force[0]*(c**2+gam**2*vel[1]**2+gam**2*vel[2]**2) - gam**2*vel[0]*(force[1]*vel[1]+force[2]*vel[2])),
                  acc_prefactor*(force[1]*(c**2+gam**2*vel[0]**2+gam**2*vel[2]**2) - gam**2*vel[1]*(force[0]*vel[0]+force[2]*vel[2])),
                  acc_prefactor*(force[2]*(c**2+gam**2*vel[0]**2+gam**2*vel[1]**2) - gam**2*vel[2]*(force[0]*vel[0]+force[1]*vel[1]))])
    vel_i=vel
    vel+= -acc*dt
    pos+= -vel_i*dt-0.5*acc*dt**2
    KE=m*c**2*(gam-1)*5.942795e48 #converted to GeV from kg*kpc^2/s^2
    KEhistory=KEhistory+(KE,)
    time+=dt
    if random.randint(1,100000)==1:
        print("distance traveled:",distance_tracked)
        print("pos",pos)
        print("vel",vel)
        print("KE",KE)
        print("time",time)
    distance_tracked+=math.sqrt((vel_i[0]*dt+0.5*acc[0]*dt**2)**2+(vel_i[1]*dt+0.5*acc[1]*dt**2)**2+(vel_i[2]*dt+0.5*acc[2]*dt**2)**2)
    trailx=trailx+(pos[0],)
    traily=traily+(pos[1],)
    trailz=trailz+(pos[2],)
#print(trailx)
#print()
#print(traily)
#print()
#print(trailz)
print(pos_0)
distance_from_start=math.sqrt( (pos[0]-pos_0[0])**2 +(pos[1]-pos_0[1])**2 +(pos[2]-pos_0[2])**2)
print("The final position (kpc) is ( " + str(pos[0]) + ", " + str(pos[1]) + ", " + str(pos[2]) + ")." )
print("The final velocity (kpc/s) is ( " + str(vel[0]) + ", " + str(vel[1]) + ", " + str(vel[2]) + ")." )
print("The final Kinetic Energy (GeV) is",KE)
print()
print("Distance from initial position is", distance_from_start,"kpc")
print("The journey took",time,"seconds.")
print()
print("The galactic center is plotted as a blue dot, and the sun is plotted as a yellow dot.")
print()
print()
fig = plt.figure(figsize=(5,8))
fig.canvas.set_window_title('Monopole Trail')
ax = fig.add_subplot(211, projection='3d')
ax.plot(trailx,traily,trailz)
ax.plot([-8.33937979],[0],[0],'yo')
#ax.plot([0],[0],[0],'bo')
ax.set_xlabel('X')
ax.set_ylabel('Y')
ax.set_zlabel('Z')
plt.suptitle("Particle Trail (click and drag to change view)",fontsize=12,position=(0.5,0.93),weight='bold')
plt.title('Particle Kinetic Energy vs Time', position=(0.5,-0.2),fontsize=12,weight='bold')
t_array=np.arange(0,dt*len(KEhistory),dt)
ax2=fig.add_subplot(212)
ax2.plot(t_array,KEhistory)
ax2.set_xlabel("Time (s)")
ax2.set_ylabel("Particle Kinetic Energy (GeV)")
plt.grid(True)
plt.show()
</code></pre>
|
<p>Once you do <code>pos = pos_0</code>, changing <code>pos</code> changes <code>pos_0</code>, because numpy arrays, much like lists, are mutable: both names refer to the same underlying array.</p>
<p>Here's a simplified example:</p>
<pre><code>>>> a = [1, 2, 3]
>>> b = a
>>> b.append(4)
>>> b
[1, 2, 3, 4]
>>> a
[1, 2, 3, 4]
</code></pre>
<p>In python, when you create a list by saying <code>a = [1, 2, 3]</code>, what's happening is that a list object is created and an <code>a</code> tag is put on it. When you assign <code>b</code> to <code>a</code>, the same object now gets a <code>b</code> tag on it too. So, <code>a</code> and <code>b</code> refer to the same object and changing one changes the other.</p>
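<p>For numpy arrays the usual fix is an explicit copy at the assignment, so the in-place updates (<code>pos += ...</code>) in the loop no longer touch the original. A sketch:</p>

```python
import numpy as np

pos_0 = np.array([-8.33937979, 0.0, 0.0])
pos = pos_0.copy()          # independent array; pos_0 is no longer aliased
pos += np.array([1.0, 1.0, 1.0])
print(pos_0)                # unchanged
```

<p>The same applies to <code>vel = vel_0</code> and any other aliased arrays in the script.</p>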
|
python|variables|numpy|while-loop|physics
| 1
|
3,220
| 22,240,749
|
Make all columns (dates) index of data frame
|
<p>My data is organized like this: </p>
<p><img src="https://i.stack.imgur.com/CUTRI.png" alt="enter image description here"></p>
<p>Where country code is the index of the data frame and the columns are the years for the data. First, is it possible to plot line graphs (using matplotlib.pylot) over time for each country without transforming the data any further?</p>
<p>Second, if the above is not possible, how can I make the columns the index of the table so I can plot time series line graphs?</p>
<p>Trying <code>df.T</code> gives me this:</p>
<p><img src="https://i.stack.imgur.com/M6h5z.png" alt="enter image description here"></p>
<p>How can I make the dates the index now?</p>
|
<ol>
<li><p>Transpose using <code>df.T</code>.</p></li>
<li><p>Plot as usual.</p></li>
</ol>
<p>Sample:</p>
<pre><code>import pandas as pd
df = pd.DataFrame({1990:[344,23,43], 1991:[234,64,23], 1992:[43,2,43]}, index = ['AFG', 'ALB', 'DZA'])
df = df.T
df
AFG ALB DZA
1990 344 23 43
1991 234 64 23
1992 43 2 43
# transform index to dates
import datetime as dt
df.index = [dt.date(year, 1, 1) for year in df.index]
import matplotlib.pyplot as plt
df.plot()
plt.savefig('test.png')
</code></pre>
<p><img src="https://i.stack.imgur.com/9jagY.png" alt="enter image description here"></p>
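<p>As an alternative to the list comprehension, pandas can parse the year labels into the index in one vectorized call. A sketch:</p>

```python
import pandas as pd

df = pd.DataFrame({1990: [344], 1991: [234], 1992: [43]}, index=['AFG']).T
# year labels -> proper datetimes in one vectorized call
df.index = pd.to_datetime(df.index.astype(str), format='%Y')
print(df.index[0])
```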
|
python|pandas|matplotlib|dataframe
| 1
|
3,221
| 55,435,006
|
fit method in keras (shape of the array)
|
<p>While fitting my model, I get an error about the shape of the input array:</p>
<blockquote>
<p>ValueError: Error when checking input: expected dense_1_input to have shape (6,) but got array with shape (11,)</p>
</blockquote>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
dataset = pd.read_csv('Churn_Modelling.csv')
x = dataset.iloc[:, 3:13].values
y = dataset.iloc[:, 13].values
from sklearn.preprocessing import LabelEncoder, OneHotEncoder
labelencoder_x_1 = LabelEncoder()
x[:, 1] = labelencoder_x_1.fit_transform(x[:, 1])
labelencoder_x_2 = LabelEncoder()
x[:, 2] = labelencoder_x_2.fit_transform(x[:, 2])
onehotencoder = OneHotEncoder(categorical_features =[1])
x = onehotencoder.fit_transform(x).toarray()
x =x[:, 1:]
from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(x,y, test_size =0.2, random_state =0)
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
x_train = sc.fit_transform(x_train)
x_test = sc.transform(x_test)
import keras
from keras.models import Sequential
from keras.layers import Dense
classifier = Sequential()
classifier.add(Dense(output_dim =6, init = 'uniform', activation= 'relu', input_dim= 6))
classifier.add(Dense(output_dim =6, init = 'uniform', activation= 'relu' ))
classifier.add(Dense(output_dim =1, init = 'uniform', activation = 'sigmoid' ))
classifier.compile(optimizer ='adam', loss = 'binary_crossentropy', metrics =['accuracy'])
classifier.fit(x_train, y_train, batch_size = 10, nb_epoch = 100)
y_pred = classifier.predict(x_test)
y_pred = (y_pred > 0.5)
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_test, y_pred)
</code></pre>
|
<p>The discrepancy is in <code>x_train</code> and possibly <code>x_test</code>. If you look at <code>print(x_train.shape)</code> you'll probably get something like <code>(N, 11)</code> where N is the number of samples, each containing 11 features. But wait, your model is defined to take 6 <code>input_dim</code> features. So you can:</p>
<ul>
<li>either change the <code>input_dim=11</code> in the first layer</li>
<li>or look at the preprocessing to make sure you get back 6 features.</li>
</ul>
|
python|numpy|tensorflow|keras|neural-network
| 0
|
3,222
| 55,305,171
|
What does it mean to order the eigen vectors?
|
<p>I was working on a task that required me to compute the <a href="https://en.wikipedia.org/wiki/Eigenface" rel="nofollow noreferrer">Eigenfaces</a>. To compute the Eigenfaces, it is required to compute the <a href="https://en.wikipedia.org/wiki/Eigenvalues_and_eigenvectors" rel="nofollow noreferrer">Eigenvalues and Eigenvectors</a>.</p>
<p>I computed the eigenvalues and eigenvectors using <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.eigh.html" rel="nofollow noreferrer">numpy's eigh</a> function. I think I understand what eigenvectors are. They are the vectors that do not change position when an image is transformed from one geometry/plane to another. In that way they are able to uniquely identify an image. Each eigenvalue corresponds to an eigenvector and represents the scalar change that eigenvector has undergone.</p>
<p>What I do not understand is a statement from <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.eigh.html" rel="nofollow noreferrer">numpy's documentation</a> that says:</p>
<blockquote>
<p>The function returns the eigenvalues in ascending order, each repeated according to its multiplicity.</p>
</blockquote>
<p>What is this thing about ordering? What order does the documentation refer to? </p>
<p>For example:</p>
<pre><code>arr = np.random.uniform(size=(3,3,3))
eigen_val, eigen_vec = np.linalg.eigh(arr)
</code></pre>
<p>The eigen vectors returned from my above run looks like:</p>
<pre><code> array([[[ 0.73988841, 0.42234431, -0.52363195],
[ 0.00792645, -0.78378814, -0.62097771],
[-0.67268292, 0.45530367, -0.58326346]],
[[-0.57948585, 0.3848149 , -0.7184105 ],
[-0.32564468, -0.91740718, -0.22873479],
[ 0.74709551, -0.10139798, -0.6569374 ]],
[[-0.77375832, 0.50124139, -0.38736951],
[-0.12305613, -0.7187746 , -0.68426622],
[ 0.62141392, 0.48178849, -0.61783865]]])
</code></pre>
<p>What do I interpret from the ordering here?</p>
<p>The whole context is that Eigenvectors are computed during <a href="https://en.wikipedia.org/wiki/Principal_component_analysis" rel="nofollow noreferrer">PCA</a> and I read that top K eigenvectors explain the best variance. But I could not understand what it meant.</p>
|
<p>They are ordered by eigenvalue in ascending order, just as the documentation explains:</p>
<pre><code>print(eigen_val)
array([[-0.65484945, 0.53345853, 1.2783374 ],
[-0.54451155, 0.23566298, 1.32844171],
[-0.11539487, 0.49887717, 1.55005921]])
</code></pre>
<p>The smallest eigenvalues are listed first and the largest last (ascending by value, not by magnitude; note the negative values above). Each eigenvalue has a corresponding eigenvector, and so the order of the eigenvectors is fixed once the eigenvalues are ordered.</p>
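<p>For PCA you typically want the top-K eigenvectors, i.e. those with the <em>largest</em> eigenvalues, so you reverse the ascending order that <code>eigh</code> returns. A sketch on a small symmetric matrix (remember that <code>eigh</code> assumes a symmetric input):</p>

```python
import numpy as np

# small symmetric matrix, e.g. a covariance matrix
a = np.array([[2.0, 1.0],
              [1.0, 2.0]])
vals, vecs = np.linalg.eigh(a)     # vals in ascending order
order = np.argsort(vals)[::-1]     # indices for descending order
top_k = vecs[:, order[:1]]         # column(s) with the largest eigenvalue(s)
print(vals[order])
```

<p>The columns of <code>vecs</code> reordered with <code>order</code> are then the principal directions, largest variance first.</p>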
|
python|numpy|opencv|image-processing|computer-vision
| 0
|
3,223
| 55,233,243
|
Effective way to compare 1d and a 2d series in python
|
<p>I have a data frame with one column as an array of strings and the 2nd column as one string value.</p>
<pre><code>a = pd.Series([["a","b","c", "d"],["a","b","c", "d"],["a","b","c", "d"],["a","b","c", "d"],["a","b","c", "d"]])
b = pd.Series(["a","d","e", "c", "b"])
</code></pre>
<p>I wish to check whether each element of b is contained in the corresponding list in a, but I am receiving an error when running the isin function.</p>
<pre><code>b.isin(a)
</code></pre>
<p>Is there any solution to this? I was particularly trying to avoid loops here; I'm not sure whether that was a good strategy in terms of run time.</p>
<p>Edit: </p>
<pre><code>**a b**
["a","b","c", "d"] a
["a","b","c", "d"] d
["a","b","c", "d"] e
["a","b","c", "d"] c
["a","b","c", "d"] b
</code></pre>
<p>the intended output is a series making row wise comparison.</p>
<pre><code>[True True False True True]
</code></pre>
|
<p><code>pandas.Series</code> implements the <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.combine.html" rel="nofollow noreferrer">combine</a> method which you could use in the following way to find the elements in <code>b</code> that also appear in the <code>a</code> lists:</p>
<pre><code>import pandas as pd
a = pd.Series([["a","b","c", "d"],["a","b","c", "d"],["a","b","c", "d"],["a","b","c", "d"],["a","b","c", "d"]])
b = pd.Series(["a","d","e", "c", "b"])
a.combine(b, lambda a,b: b in a)
</code></pre>
<p>Output:</p>
<pre><code>0 True
1 True
2 False
3 True
4 True
dtype: object
</code></pre>
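<p>An equivalent sketch using a plain comprehension over <code>zip</code>, which is often at least as fast for element-wise checks like this:</p>

```python
import pandas as pd

a = pd.Series([["a", "b", "c", "d"]] * 5)
b = pd.Series(["a", "d", "e", "c", "b"])
result = pd.Series([item in lst for lst, item in zip(a, b)])
print(result.tolist())
```

<p>This also yields a proper boolean dtype rather than object.</p>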
|
python|string|pandas|numpy|series
| 1
|
3,224
| 7,445,769
|
Cython numpy array indexing
|
<p>I am trying to speed up some python code with cython, and I'm making use of cython's <code>-a</code> option to see where I can improve things. My understanding is that in the generated html file, the highlighted lines are ones where python functions are called - is that correct?</p>
<p>In the following trivial function, I have declared the numpy array argument <code>arr</code> using the buffer syntax. I thought that this allows indexing operations to take place purely in C without having to call python functions. However, <code>cython -a</code> (version 0.15) highlights the line where I set the value of an element of <code>arr</code>, though not the one where i read one of its elements. Why does this happen? Is there a more efficient way of accessing numpy array elements?</p>
<pre><code>import numpy
cimport numpy
def foo(numpy.ndarray[double, ndim=1] arr not None):
    cdef int i
    cdef double elem
    for i in xrange(10):
        elem = arr[i]        #not highlighted
        arr[i] = 1.0 + elem  #highlighted
</code></pre>
<p>EDIT: Also, how does the <code>mode</code> buffer argument interact with numpy? Assuming I haven't changed the <code>order</code> argument of <code>numpy.array</code> from the default, is it always safe to use <code>mode='c'</code>? Does this actually make a difference to performance?</p>
<p>EDIT after delnan's comment: <code>arr[i] += 1</code> also gets highlighted (that is why I split it up in the first place, to see which part of the operation was causing the issue). If I turn off bounds checking to simplify things (this makes no difference to what gets highlighted), the generated c code is:</p>
<pre><code> /* "ct.pyx":11
* cdef int i
* cdef double elem
* for i in xrange(10): # <<<<<<<<<<<<<<
* elem = arr[i]
* arr[i] = 1.0 + elem
*/
for (__pyx_t_1 = 0; __pyx_t_1 < 10; __pyx_t_1+=1) {
__pyx_v_i = __pyx_t_1;
/* "ct.pyx":12
* cdef double elem
* for i in xrange(10):
* elem = arr[i] # <<<<<<<<<<<<<<
* arr[i] = 1.0 + elem
*/
__pyx_t_2 = __pyx_v_i;
__pyx_v_elem = (*__Pyx_BufPtrStrided1d(double *, __pyx_bstruct_arr.buf, __pyx_t_2, __pyx_bstride_0_arr));
/* "ct.pyx":13
* for i in xrange(10):
* elem = arr[i]
* arr[i] = 1.0 + elem # <<<<<<<<<<<<<<
*/
__pyx_t_3 = __pyx_v_i;
*__Pyx_BufPtrStrided1d(double *, __pyx_bstruct_arr.buf, __pyx_t_3, __pyx_bstride_0_arr) = (1.0 + __pyx_v_elem);
}
</code></pre>
|
<p>The answer is that the highlighter fools the reader.
I compiled your code, and the instructions generated under the highlight are those needed
to handle the error cases and the return value; they are not related to the array assignment.</p>
<p>Indeed if you change the code to read :</p>
<pre><code>def foo(numpy.ndarray[double, ndim=1] arr not None):
cdef int i
cdef double elem
for i in xrange(10):
elem = arr[i]
arr[i] = 1.0 + elem
return # + add this
</code></pre>
<p>The highlight then falls on the last line and is no longer on the assignment.</p>
<p>You can further speed up your code by using the <code>@cython.boundscheck</code> decorator:</p>
<pre><code>import numpy
cimport numpy
cimport cython
@cython.boundscheck(False)
def foo(numpy.ndarray[double, ndim=1] arr not None):
cdef int i
cdef double elem
for i in xrange(10):
elem = arr[i]
arr[i] = 1.0 + elem
return
</code></pre>
|
python|arrays|indexing|numpy|cython
| 5
|
3,225
| 56,451,588
|
How to select Image URL by not including "/"?
|
<p>I'm trying to figure out how to Series.str.extract() the image Urls (image-image-image.jpg) to a new column, but i'm having issues with the Regex. What am I doing wrong ?</p>
<p><strong>Here's how my data looks</strong></p>
<pre><code><a href="https://website.com/wp-content/uploads/2018/09/image-image.image.jpg"><img class="alignnone size-medium wp-image-11275" src="https://website.com/wp-content/uploads/2018/09/image-image.image-300x200.jpg" alt="" width="300" height="200" /></a> <a href="https://kids-at-home.ch/wp-content/uploads/2018/09/image2-image2-image2.jpg"><img class="alignnone size-medium wp-image-11271" src="https://kids-at-home.ch/wp-content/uploads/2018/09/image2-image2-image2.jpg-300x200.jpg" alt="" width="300" height="200" />
</code></pre>
<p>I've tried excluding all the "/" from the matches, with a positive lookbehind for "/" so the match starts there and a positive lookahead for "">", but it doesn't seem to work. I'm using Regexr and my Jupyter Notebook, in case the problem comes from one of those.</p>
<p>Here's my Regex code: <code>r'^(?:(?!/).)*$(?<=/)(.*.jpg)(?=\">)'</code></p>
<p>I expected the regex match to be <strong>image-image.image.jpg</strong> and <strong>image2-image2.image2.jpg</strong> but it doesn't match anything.</p>
<p><strong>SOLVED REGEX CODE</strong></p>
<pre><code>r'''(?<=/)([^/"']*\.jpe?g)(?=\"\>)"'''
</code></pre>
|
<p>A little more exhaustive solution: </p>
<pre><code>https?:\/\/[A-z0-9-_.\/%]+\/([A-z0-9-_.%]+?\.(png|jpe?g|png))
</code></pre>
<p>It seems a bit scary but it is a little more verbose and supports encoded URLs too. You can find the name of your image in the first matched group($1).</p>
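<p>As a quick sanity check, the pattern can be applied with Python's <code>re</code> module; the sample string below is a shortened, hypothetical version of the data in the question:</p>

```python
import re

# The pattern from above; group 1 captures the image file name.
pattern = r'https?:\/\/[A-z0-9-_.\/%]+\/([A-z0-9-_.%]+?\.(png|jpe?g|png))'

sample = ('<a href="https://website.com/wp-content/uploads/2018/09/'
          'image-image.image.jpg"><img src="https://website.com/'
          'wp-content/uploads/2018/09/image-image.image-300x200.jpg" /></a>')

# findall returns one tuple per match: (file name, extension)
matches = re.findall(pattern, sample)
print([m[0] for m in matches])
# ['image-image.image.jpg', 'image-image.image-300x200.jpg']
```

With a pandas column, the same pattern could be used via <code>Series.str.extract</code> to pull the first file name per cell.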
|
regex|python-3.x|pandas|regex-negation
| 1
|
3,226
| 56,496,513
|
How to add values in one array according to repeated values in another array?
|
<p>Suppose I have an array:</p>
<pre><code>Values = np.array([0.221,0.35,25.9,54.212,0.0022])
Indices = np.array([22,10,11,22,10])
</code></pre>
<p>I would like to add elements of 'Values' together that share the same number in 'Indices'.</p>
<p>In other words, my desired outputs(s):</p>
<pre><code>Total = np.array([0.221+54.212,0.35+0.002,25.9])
Index = np.array([22,10,11])
</code></pre>
<p>I've been trying to use np.unique to no avail. Can't quite figure this out!</p>
|
<p>We can use <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.unique.html" rel="nofollow noreferrer"><code>np.unique</code></a> with its optional arg <code>return_inverse</code> to get IDs based on uniqueness within <code>Indices</code> and then use those with <code>bincount</code> to get binned (ID based) summations and hence solve it like so -</p>
<pre><code>Index,idx = np.unique(Indices, return_inverse=True)
Total = np.bincount(idx, Values)
</code></pre>
<p>Outputs for given sample -</p>
<pre><code>In [32]: Index
Out[32]: array([10, 11, 22])
In [33]: Total
Out[33]: array([ 0.3522, 25.9 , 54.433 ])
</code></pre>
<p>Alternatively, we can use <code>pandas.factorize</code> to get the unique IDs and then bincount as shown earlier. So, the first step could be replaced by something like this -</p>
<pre><code>import pandas as pd
idx,Index = pd.factorize(Indices)
</code></pre>
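<p>Putting the <code>pd.factorize</code> variant together end to end (same sample arrays as in the question); note that <code>factorize</code> assigns IDs in first-appearance order, which happens to match the desired <code>Index</code> ordering:</p>

```python
import numpy as np
import pandas as pd

Values = np.array([0.221, 0.35, 25.9, 54.212, 0.0022])
Indices = np.array([22, 10, 11, 22, 10])

# codes in first-appearance order, plus the unique labels themselves
idx, Index = pd.factorize(Indices)
Total = np.bincount(idx, Values)

print(Index.tolist())  # [22, 10, 11]
print(Total)           # sums per unique index: 54.433, 0.3522, 25.9
```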
|
python|arrays|numpy|sorting
| 4
|
3,227
| 56,591,069
|
Clean up this int64 variable in python
|
<p>This is the raw distribution of the var FREQUENCY</p>
<pre><code>NaN 22131161
1.0 4182626
7.0 218343
3.0 145863
1 59432
0.0 29906
2.0 28129
4.0 15237
5.0 4553
8.0 3617
3 2754
7 2635
9.0 633
2 584
4 276
0 112
8 51
5 42
6.0 19
A 9
I 7
9 6
Q 3
Y 2
X 2
Z 1
C 1
N 1
G 1
B 1
Name: FREQUENCY, dtype: int64
</code></pre>
<ol>
<li>group 1.0 should be the same as 1. I wrote df['x']=df['x].replace({'1.0:'1'}). it does not change anything. 9.0 vs 9, 3.0 vs.3 have same symptom</li>
<li>How could frequency be rendered as int64 where letters are present? </li>
<li>Desired outcome 1: group all letter groups +NaN into one group. Remaining numeric value groups consolidate (1.0 and 1 =1,for example). In SAS, I just run this : y=1*X. I just give a value of 10 to represent character groups + NaN. How to do it in Python, especially elegantly?</li>
<li>Outcome 2: extract a binary variable z=1 if x=NaN. Otherwise z=0</li>
</ol>
|
<p>The first issue ("group 1.0 should be the same as 1. I wrote df['x']=df['x'].replace({'1.0':'1'}) and it does not change anything; 9.0 vs 9 and 3.0 vs 3 have the same symptom")
was fixed once I added dtype={'FREQUENCY':'object'} while reading the csv file. Group 1.0 collapsed with group 1. After that, replace works just fine. </p>
<p>All other issues are pretty much resolved, except issue 2, in that the dtype is still reported as int64 even though character values are present. My guess is that pandas adopts a majority rule to vote on the data type, and numeric values do indeed dominate the count. </p>
|
python|pandas|replace|recode|int64
| 0
|
3,228
| 25,942,217
|
Cannot install Pandas on Python 3.4.1 on Windows 8
|
<p>I downloaded version 0.14.0 of pandas and try to import it, but it says that I am missing <code>dateutil</code>. I can't figure out what I'm doing wrong. I'm using Python version 3.4.1</p>
<p>Download: <a href="https://pypi.python.org/pypi/pandas/0.14.0/" rel="nofollow">https://pypi.python.org/pypi/pandas/0.14.0/</a></p>
<p>What I get when I try to import pandas</p>
<blockquote>
<p>No module named 'dateutil' Traceback (most recent call last): File
"C:/Users/Joshua/PycharmProjects/test2/test2.py", line 5, in <module>
import pandas File "C:\Python34\lib\site-packages\pandas\__init__.py", line 6, in <module>
from . import hashtable, tslib, lib File "tslib.pyx", line 37, in init pandas.tslib (pandas\tslib.c:60928) ImportError: No module
named 'dateutil'</p>
</blockquote>
|
<p>I am using Anaconda and still get this error after updating pandas to the latest version using <code>conda update pandas</code>.
Help?</p>
<pre><code>C:\Anaconda3>conda info
Current conda install:
platform : win-64
conda version : 3.8.3
conda-build version : 1.8.2
python version : 3.4.1.final.0
requests version : 2.5.1
root environment : C:\Anaconda3 (writable)
default environment : C:\Anaconda3
envs directories : C:\Anaconda3\envs
package cache : C:\Anaconda3\pkgs
channel URLs : http://repo.continuum.io/pkgs/free/win-64/
http://repo.continuum.io/pkgs/pro/win-64/
config file : None
is foreign system : False
C:\Anaconda3>conda list
# packages in environment at C:\Anaconda3:
#
anaconda 2.1.0 np19py34_0
argcomplete 0.8.1 py34_0
astropy 0.4.2 np19py34_0
beautiful-soup 4.3.2 py34_0
beautifulsoup4 4.3.2 <pip>
binstar 0.10.1 py34_1
bitarray 0.8.1 py34_1
blaze 0.6.3 np19py34_0
blz 0.6.2 np19py34_0
bokeh 0.6.1 np19py34_0
boto 2.32.1 py34_0
bottleneck 0.8.0 <pip>
cffi 0.8.6 py34_0
clyent 0.3.2 py34_0
colorama 0.3.1 py34_0
conda 3.8.3 py34_0
conda-build 1.8.2 py34_0
conda-env 2.0.1 py34_0
configobj 5.0.6 py34_0
cryptography 0.5.4 py34_0
cython 0.21 py34_0
cytoolz 0.7.0 py34_0
datashape 0.3.0 np19py34_1
dateutil 2.1 py34_2
decorator 3.4.0 py34_0
docutils 0.12 py34_0
dynd-python 0.6.5 np19py34_0
flask 0.10.1 py34_1
future 0.13.1 py34_0
greenlet 0.4.4 py34_0
h5py 2.3.1 np19py34_0
ipython 2.2.0 py34_0
ipython-notebook 2.2.0 py34_0
ipython-qtconsole 2.2.0 py34_0
itsdangerous 0.24 py34_0
jdcal 1.0 py34_0
jinja2 2.7.3 py34_1
launcher 1.0.0 1
libpython 1.0 py34_1
llvmpy 0.12.7 py34_0
lxml 3.4.0 py34_0
markupsafe 0.23 py34_0
matplotlib 1.4.0 np19py34_0
menuinst 1.0.4 py34_0
mingw 4.7 1
mock 1.0.1 py34_0
multipledispatch 0.4.7 py34_0
networkx 1.9.1 py34_0
nltk 3.0.0 np19py34_0
node-webkit 0.10.1 0
nose 1.3.4 py34_0
numba 0.14.0 np19py34_0
numexpr 2.3.1 np19py34_0
numpy 1.9.1 py34_0
openpyxl 1.8.5 py34_0
pandas 0.15.2 np19py34_0
patsy 0.3.0 np19py34_0
pip 1.5.6 py34_0
ply 3.4 py34_0
psutil 2.1.1 py34_0
py 1.4.25 py34_0
pycosat 0.6.1 py34_0
pycparser 2.10 py34_0
pycrypto 2.6.1 py34_2
pyflakes 0.8.1 py34_0
pygments 1.6 py34_0
pyopenssl 0.14 py34_0
pyparsing 2.0.1 py34_0
pyqt 4.10.4 py34_0
pyreadline 2.0 py34_0
pytables 3.1.1 np19py34_1
pytest 2.6.3 py34_0
python 3.4.1 2
python-dateutil 2.1 <pip>
pytz 2014.9 py34_0
pywin32 219 py34_0
pyyaml 3.11 py34_0
pyzmq 14.3.1 py34_0
requests 2.5.1 py34_0
rope 0.9.4 py34_1
rope-py3k-0.9.4 1 <pip>
runipy 0.1.1 py34_0
scikit-image 0.10.1 np19py34_0
scikit-learn 0.15.2 np19py34_0
scipy 0.15.1 np19py34_0
setuptools 12.0.5 py34_0
six 1.9.0 py34_0
sklearn-pandas 0.0.9 <pip>
sockjs-tornado 1.0.1 py34_0
sphinx 1.2.3 py34_0
spyder 2.3.1 py34_0
spyder-app 2.3.1 py34_0
sqlalchemy 0.9.7 py34_0
statsmodels 0.5.0 np19py34_2
sympy 0.7.5 py34_0
tables 3.1.1 <pip>
toolz 0.7.0 py34_0
tornado 4.0.2 py34_0
ujson 1.33 py34_0
werkzeug 0.9.6 py34_1
wheel 0.24.0 <pip>
winpdb 1.3.6 <pip>
xlrd 0.9.3 py34_0
xlsxwriter 0.5.7 py34_0
</code></pre>
|
pandas|python-3.4|python-dateutil
| 1
|
3,229
| 66,903,255
|
Retrieve values from Scipy gaussian_kde
|
<p>This is the first time I'm using Scipy, because I couldn't find many libraries that can generate KDE data directly without plotting first, the way <em>Pandas</em> does (data.plot(kind='kde')).
I'm trying to get the data in the KDE as a list or array, but all I get back is the scipy object <strong><scipy.stats.kde.gaussian_kde object at 0x000002C4A8D077F0></strong></p>
<p>Is there a function similar to np.array(density) <em>(Numpy)</em> or density.values <em>(Pandas)</em> that could retrieve the values?</p>
<pre><code>import matplotlib.pyplot as plt
import numpy as np
import scipy.stats as stats
data = [992.9832, 846.1371, 994.2491, ..., 0.0]
# generate histogram data
h, e = np.histogram(data, bins='auto')
width = 1 * (e[1] - e[0])
center = (e[:-1] + e[1:]) / 2
print(np.array(data).mean())
x = np.linspace(e.min(), e.max())
# plot the histogram
plt.figure(figsize=(8,6))
plt.bar(center, h, align='center', width=width, label='histogram')
plt.axvline(np.array(data).mean(), color='k', linestyle='dashed', linewidth=1)
plt.legend()
plt.show()
# Plot KDE
density = stats.gaussian_kde(data)
print('DENSITY TYPE:', type(density))
print('DENSITY:', density)
plt.plot(center, density(center))
</code></pre>
<p><a href="https://i.stack.imgur.com/DcjMz.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/DcjMz.png" alt="enter image description here" /></a></p>
|
<p>Once you have estimated the density</p>
<pre><code>kde = stats.gaussian_kde(data)
</code></pre>
<p>you need to evaluate the density in the range of data (or a wider range, you can choose)</p>
<pre><code>evaluated = kde.evaluate(np.linspace(data.min(), data.max(), 100))
</code></pre>
<p>Let's try</p>
<pre><code># generate random variates (using the same scipy.stats import as above)
np.random.seed(42)
data = stats.norm(loc=200, scale=5).rvs(100)
plt.hist(data, density=True);
</code></pre>
<p><a href="https://i.stack.imgur.com/fGo3Z.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/fGo3Z.png" alt="enter image description here" /></a></p>
<p>now let's estimate and evaluate density</p>
<pre><code>density = stats.gaussian_kde(data)
data_space = np.linspace(data.min(), data.max())
evaluated = density.evaluate(data_space)
plt.hist(data, density=True)
plt.plot(data_space, evaluated);
</code></pre>
<p><a href="https://i.stack.imgur.com/MLdjl.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/MLdjl.png" alt="enter image description here" /></a></p>
<p>and you have the array of the density in the range you've chosen (<code>data_space</code> in this case, but you can define the linspace you want)</p>
<pre><code>print(evaluated)
[0.00371907 0.00455801 0.00561179 0.00693696 0.0085618 0.01047394
0.01262245 0.01493411 0.01733577 0.01977291 0.02221777 0.02466837
0.0271459 0.02969787 0.03240771 0.03540247 0.03884495 0.04290023
0.04767829 0.05316985 0.05920179 0.06543718 0.07142939 0.07671788
0.08093304 0.08387167 0.08551486 0.08598526 0.08546673 0.08412519
0.0820653 0.07933652 0.07597516 0.07205101 0.06768935 0.06305743
0.05832807 0.05364531 0.04911138 0.04479586 0.04075165 0.03701897
0.03361161 0.03049646 0.02758628 0.02475911 0.02190045 0.01894903
0.01592369 0.01291898]
</code></pre>
<h2>note</h2>
<p>The evaluated PDF values are not probabilities and don't sum to 1 (for a continuous variable it is the integral, not the pointwise sum, that equals 1, and the raw sum also depends on the grid spacing):</p>
<pre><code>evaluated.sum()
2.147314809573033
</code></pre>
<p>If you need the array itself to sum to 1 (e.g. to treat the evaluated values as discrete probabilities), you can simply divide it by its sum:</p>
<pre><code>evaluated /= evaluated.sum()
evaluated.sum()
1.0
</code></pre>
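<p>To see that it is the integral rather than the pointwise sum that is (approximately) 1, one can integrate the evaluated density numerically; this sketch uses a synthetic normal sample, not the question's data:</p>

```python
import numpy as np
from scipy import stats

# synthetic sample, just for the demonstration
np.random.seed(0)
data = np.random.normal(loc=200, scale=5, size=500)

kde = stats.gaussian_kde(data)
grid = np.linspace(data.min() - 15, data.max() + 15, 400)
evaluated = kde.evaluate(grid)

spacing = grid[1] - grid[0]
print(evaluated.sum())            # depends on the grid resolution
print(evaluated.sum() * spacing)  # ~1: the (rectangle-rule) integral
```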
|
python|numpy|scipy
| 3
|
3,230
| 66,798,585
|
Find value from string using the characters from list Using Python
|
<p>I have been working on an Excel sheet using Python, where I have to extract only a specific value from a column, using a list of strings.</p>
<p>I need to check every value from the column against the list; if it matches, return the matched value into the dataframe so it can be used for further analysis.</p>
<p><strong>Input Data :</strong></p>
<pre><code> text-value
19 Freezeland Lane, United Kingdom BD23 0UN
44 Bishopthorpe Road, United States LL55 1EU
Worthy Lane Denmark, LN11 9LP
88 Carriers Road, Mexico , DG3 1LB
HongKong
</code></pre>
<p><strong>Expected Output:</strong></p>
<pre><code>text_value
United Kingdom
United States
Denmark
Mexico
HongKong
</code></pre>
<p><strong>Code Snippet:</strong></p>
<pre><code>import pandas as pd
import re
countries=['United Kingdom','Denmark','India','United States','Mexico','HongKong']
df['text_value'] = re.findall(countries, df.text_value)
</code></pre>
<p>But it didn't work.
I also tried:</p>
<pre><code>if re.compile('|'.join(countries),re.IGNORECASE).search(df['text_value']):
df['text_value']
</code></pre>
|
<p>You can use</p>
<pre class="lang-py prettyprint-override"><code>df['country_list'] = df['text_value'].str.findall(r'(?i)\b(?:{})\b'.format('|'.join(countries)))
</code></pre>
<p>Here, <code>Series.str.findall</code> returns all matches found in each cell in the <code>country_list</code> column, and the pattern, that looks like <code>(?i)\b(?:Country1|Country2|...)\b</code>, matches</p>
<ul>
<li><code>(?i)</code> - case insensitive inline modifier option</li>
<li><code>\b</code> - a word boundary</li>
<li><code>(?:Country1|Country2|...)</code> - a list of countries</li>
<li><code>\b</code> - a word boundary</li>
</ul>
|
python|regex|pandas
| 1
|
3,231
| 68,118,646
|
What is the purpose of optimizer's state_dict in PyToch Big Graph's embedding dataset?
|
<p>The documentation for PyTorch Big Graph (PBG) states that "An additional dataset may exist, optimizer/state_dict, which contains the binary blob (obtained through torch.save()) of the state dict of the model’s optimizer." When inspecting this dataset, it seems to be stored as an array of bytes. Could someone conceptually explain the point of state_dict and why it's stored as an array rather than a dictionary?</p>
|
<blockquote>
<p>Could someone conceptually explain the point of state_dict</p>
</blockquote>
<p>If you know about Adam or SGD's momentum, you probably know that there are parameters inside the optimizer (momentum buffers, running averages, etc.) that change at every step. When resuming training, loading these parameters on top of the model weights makes convergence faster.</p>
<p>You can get away without it, but sometimes it will be almost as if you were training from scratch.</p>
<blockquote>
<p>why it's stored as an array rather than a dictionary?</p>
</blockquote>
<p>If it's really obtained through <code>torch.save()</code>, then it is in fact stored as a dictionary (or at least a list of dictionaries); the raw bytes you saw are just the serialized blob. Your process of "inspecting" it is what's off. Try</p>
<pre><code>print(torch.load('path_to_the_file'))
</code></pre>
|
python|pytorch|bigdata|data-science|h5py
| 1
|
3,232
| 68,403,672
|
Turn MultiIndex Series into pivot table design by unique value counts
|
<pre><code>Sample Data:
Date,code
06/01/2021,405
06/01/2021,405
06/01/2021,400
06/02/2021,200
06/02/2021,300
06/03/2021,500
06/02/2021,500
06/03/2021,300
06/05/2021,500
06/04/2021,500
06/03/2021,400
06/02/2021,400
06/04/2021,400
06/03/2021,400
06/01/2021,400
06/04/2021,200
06/05/2021,200
06/02/2021,200
06/06/2021,300
06/04/2021,300
06/06/2021,300
06/05/2021,400
06/03/2021,400
06/04/2021,400
06/04/2021,500
06/01/2021,200
06/02/2021,300
import pandas as pd
df = pd.read_csv("testfile.csv")
code_total = df.groupby(by="Date",)['code'].value_counts()
print(code_total)
Date code
06/01/2021 400 2
405 2
200 1
06/02/2021 200 2
300 2
400 1
500 1
06/03/2021 400 3
300 1
500 1
06/04/2021 400 2
500 2
200 1
300 1
06/05/2021 200 1
400 1
500 1
06/06/2021 300 2
dates = set([x[0] for x in code_total.index])
codes = set([x[1] for x in code_total.index])
test = pd.DataFrame(code_total,columns=sorted(codes),index=sorted(dates))
print(test)
</code></pre>
<p>Is there a way to transpose the second index level into columns and retain the counts as values? Ultimately I'm trying to plot the count of each unique error code on a line graph. I've tried many different approaches but am always missing something; any help would be appreciated.</p>
|
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.unstack.html" rel="nofollow noreferrer"><code>Series.unstack</code></a>:</p>
<pre><code>df = df.groupby(by="Date",)['code'].value_counts().unstack(fill_value=0)
</code></pre>
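<p>A minimal sketch of what <code>unstack</code> produces here, using a few rows of the sample data:</p>

```python
import pandas as pd

df = pd.DataFrame({
    'Date': ['06/01/2021', '06/01/2021', '06/01/2021', '06/02/2021'],
    'code': [405, 405, 400, 200],
})

# value_counts gives a (Date, code) MultiIndex; unstack pivots the
# inner level (code) into columns, filling missing combinations with 0
counts = df.groupby('Date')['code'].value_counts().unstack(fill_value=0)
print(counts)
# columns are the codes, rows the dates:
#             200  400  405
# 06/01/2021    0    1    2
# 06/02/2021    1    0    0
```

The result is a date-indexed frame where each code is its own column, which plots directly as one line per code with <code>counts.plot()</code>.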
|
pandas|pandas-groupby
| 1
|
3,233
| 68,435,630
|
Python Numpy polyfit row index as x?
|
<p>I'm trying to calculate a best-fit first-order regression, because I would like to get the slope.</p>
<p>How can I use the data row index as x?</p>
<pre><code>import pandas as pd
import numpy as np
import json
data1 = [("Temp"),
(101),
(103),
(104),
(101)
]
df1 = pd.DataFrame(data1)
shape = df1.shape
N = (shape[0])
print(df1)
print(N)
T1 = pd.concat([df1.iloc[1: , 0]], axis=1)
print(T1)
T1V1 = np.polyfit(x, T1, 1)
print(T1V1)
</code></pre>
<p>I'm unsure if I'm even getting the sort of info I want (I'm after the slope). But this errors out, with the following...</p>
<p>Output:</p>
<pre><code> 0
0 Temp
1 101
2 103
3 104
4 101
5
0
1 101
2 103
3 104
4 101
Traceback (most recent call last):
File "getslope-tiny.py", line 24, in <module>
T1V1 = np.polyfit(x, T1, 1)
NameError: name 'x' is not defined
</code></pre>
|
<p>There are some problems with your dataframe. You should definitely preprocess your data first. For example, in this case, the 'Temp' in your <code>df1</code> are all <code>objects</code>, whereas we want them to be numeric (int or floats). Let's fix that first.</p>
<pre><code>data1 = ["Temp", 101, 103, 104, 101]
# first element is the column name, the rest are the (numeric) values
df1 = pd.DataFrame({data1[0]: data1[1:]})
# reset index to use the row index as x for fitting
df1 = df1.reset_index()
z = np.polyfit(df1['index'], df1['Temp'], 1)
# array([  0.1, 102.1])  -> slope 0.1, intercept 102.1
</code></pre>
</code></pre>
<p>You can also visualise the fit as follows:</p>
<pre><code>import matplotlib.pyplot as plt

z = np.polyfit(df1['index'], df1['Temp'], 1)
p = np.poly1d(z)
xp = np.linspace(0, 3, 10)
plt.plot(df1['index'], df1['Temp'], label='Data')
plt.plot(xp, p(xp), label='Fit')
plt.legend(loc="upper left")
plt.show()
</code></pre>
</code></pre>
<p><a href="https://i.stack.imgur.com/6iYro.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/6iYro.png" alt="Linear fit" /></a></p>
<p>As you can see, linear fit is probably not the best for your data.</p>
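<p>As a quick numeric check of the slope, using the same four values with the row index 0 to 3 as x:</p>

```python
import numpy as np

x = np.arange(4)                    # row index as x
y = np.array([101, 103, 104, 101])  # Temp values from the question

# degree-1 polyfit returns [slope, intercept]
slope, intercept = np.polyfit(x, y, 1)
print(round(slope, 3), round(intercept, 3))  # 0.1 102.1
```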
|
python|pandas|numpy
| 0
|
3,234
| 68,156,710
|
Row wise calculations in Python and add them to the dataframe in pandas
|
<p>I have a DataFrame:</p>
<pre><code>df_IJR
Out[40]:
Date Close
0 2015-01-02 56.610001
1 2015-01-05 55.744999
2 2015-01-06 54.814999
3 2015-01-07 55.384998
4 2015-01-08 56.355000
</code></pre>
<p>How do I perform a row-wise calculation in a loop?
For example:</p>
<pre><code>for i in df_IJR:
x = 1000/df_IJR.iloc[i,:]['Close']
df_IJR['Shares']=x
</code></pre>
<p>I would like to perform a series of calculations on every row, add the results to that row, and do the same for the rest of the rows, so that I can determine the capital available for my next purchase. For example, I may have $500 to make my purchase the next day.</p>
<p>Step 1: I make an initial purchase of 105.9 shares from my initial capital of $1000.</p>
<pre><code> Date Close Shares_IJR Capital
0 2015-01-02 56.610001 105.988340 1000
1 2015-01-05 55.744999
2 2015-01-06 54.814999
3 2015-01-07 55.384998
4 2015-01-08 56.355000
</code></pre>
<p>Step 2: After a series of calculations, I find that I have $500 capital and I make my next purchase of 8.9 shares on 2015-01-05. The dataframe now becomes</p>
<pre><code> Date Close Shares_IJR Capital
0 2015-01-02 56.610001 105.988340 1000
1 2015-01-05 55.744999 114.957754 500
2 2015-01-06 54.814999
3 2015-01-07 55.384998
4 2015-01-08 56.355000
</code></pre>
<p>Step 3: Again after a series of calculations, I fnd that I now have $1500 capital and I make my next purchase of 27.4 shares on 2015-01-06. The dataframe now becomes.</p>
<pre><code> Date Close Shares_IJR Capital
0 2015-01-02 56.610001 105.988340 1000
1 2015-01-05 55.744999 114.957754 500
2 2015-01-06 54.814999 142.322527 1500
3 2015-01-07 55.384998
4 2015-01-08 56.355000
</code></pre>
<p>Hopefully this clears my request.</p>
<p>Please help.</p>
|
<p>DataFrames have vectorized operations that are more performant and easier to read than iterative solutions.</p>
<pre class="lang-py prettyprint-override"><code>df_IJR['Shares'] = df_IJR['Capital'] / df_IJR['Close']
</code></pre>
<p>If I understand your code correctly, this will have the same effect. The value in the <code>Capital</code> column for each row will be divided by the value in the <code>Close</code> column for each row, creating a new column that is then inserted into the frame under the name <code>Shares</code>.</p>
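<p>A quick illustration with the first two rows (the Capital figures are taken from the example in the question):</p>

```python
import pandas as pd

df_IJR = pd.DataFrame({
    'Close': [56.610001, 55.744999],
    'Capital': [1000, 500],
})

# element-wise division over whole columns, no loop needed
df_IJR['Shares'] = df_IJR['Capital'] / df_IJR['Close']
print(df_IJR['Shares'].round(4).tolist())  # approximately [17.6647, 8.9694]
```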
|
python-3.x|pandas|dataframe
| 0
|
3,235
| 68,237,952
|
Problem with Tensorflow GPU-anyone knows how to solve it?
|
<p>I have a problem and I don't know what to do; has anyone experienced this or knows how to fix it? I want TensorFlow to use my GPU, but it is using my CPU. I have an RTX 2060, CUDA 11.4, and the matching cuDNN version. Everything is set up and installed.
Here's my "error" message:</p>
<pre><code>2021-07-03 18:04:08.192447: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library cudart64_110.dll
2021-07-03 18:04:14.322737: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library nvcuda.dll
2021-07-03 18:04:14.364105: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1733] Found device 0 with properties:
pciBusID: 0000:01:00.0 name: NVIDIA GeForce RTX 2060 computeCapability: 7.5
coreClock: 1.2GHz coreCount: 30 deviceMemorySize: 6.00GiB deviceMemoryBandwidth: 245.91GiB/s
2021-07-03 18:04:14.364218: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library cudart64_110.dll
2021-07-03 18:04:14.378476: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library cublas64_11.dll
2021-07-03 18:04:14.378575: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library cublasLt64_11.dll
2021-07-03 18:04:14.386958: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library cufft64_10.dll
2021-07-03 18:04:14.389748: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library curand64_10.dll
2021-07-03 18:04:14.395241: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library cusolver64_11.dll
2021-07-03 18:04:14.400934: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library cusparse64_11.dll
2021-07-03 18:04:14.401962: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library cudnn64_8.dll
2021-07-03 18:04:14.402129: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1871] Adding visible gpu devices: 0
2021-07-03 18:04:14.402506: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX AVX2
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2021-07-03 18:04:14.404017: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1733] Found device 0 with properties:
pciBusID: 0000:01:00.0 name: NVIDIA GeForce RTX 2060 computeCapability: 7.5
coreClock: 1.2GHz coreCount: 30 deviceMemorySize: 6.00GiB deviceMemoryBandwidth: 245.91GiB/s
2021-07-03 18:04:14.405038: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1871] Adding visible gpu devices: 0
2021-07-03 18:04:14.999077: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1258] Device interconnect StreamExecutor with strength 1 edge matrix:
2021-07-03 18:04:14.999188: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1264] 0
2021-07-03 18:04:14.999495: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1277] 0: N
2021-07-03 18:04:14.999834: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1418] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 3961 MB memory) -> physical GPU (device: 0, name: NVIDIA GeForce RTX 2060, pci bus id: 0000:01:00.0, compute capability: 7.5)
WARNING:tensorflow:Collective ops is not configured at program startup. Some performance features may not be enabled.
W0703 18:04:15.003447 19488 mirrored_strategy.py:379] Collective ops is not configured at program startup. Some performance features may not be enabled.
INFO:tensorflow:Using MirroredStrategy with devices ('/job:localhost/replica:0/task:0/device:GPU:0',)
I0703 18:04:15.198488 19488 mirrored_strategy.py:369] Using MirroredStrategy with devices ('/job:localhost/replica:0/task:0/device:GPU:0',)
INFO:tensorflow:Maybe overwriting train_steps: 2000
I0703 18:04:15.203375 19488 config_util.py:552] Maybe overwriting train_steps: 2000
INFO:tensorflow:Maybe overwriting use_bfloat16: False
I0703 18:04:15.203375 19488 config_util.py:552] Maybe overwriting use_bfloat16: False
WARNING:tensorflow:From C:\Users\vrabe\Dropbox\Programming\Python Thilo\Artificial Intelligence\Object Detection\TFODCourse\tfod\lib\site-packages\object_detection-0.1-py3.9.egg\object_detection\model_lib_v2.py:557: StrategyBase.experimental_distribute_datasets_from_function (from tensorflow.python.distribute.distribute_lib) is deprecated and will be removed in a future version.
Instructions for updating:
rename to distribute_datasets_from_function
W0703 18:04:15.220866 19488 deprecation.py:330] From C:\Users\vrabe\Dropbox\Programming\Python Thilo\Artificial Intelligence\Object Detection\TFODCourse\tfod\lib\site-packages\object_detection-0.1-py3.9.egg\object_detection\model_lib_v2.py:557: StrategyBase.experimental_distribute_datasets_from_function (from tensorflow.python.distribute.distribute_lib) is deprecated and will be removed in a future version.
Instructions for updating:
rename to distribute_datasets_from_function
INFO:tensorflow:Reading unweighted datasets: ['Tensorflow\\workspace\\annotations\\train.record']
I0703 18:04:15.227620 19488 dataset_builder.py:163] Reading unweighted datasets: ['Tensorflow\\workspace\\annotations\\train.record']
INFO:tensorflow:Reading record datasets for input file: ['Tensorflow\\workspace\\annotations\\train.record']
I0703 18:04:15.228616 19488 dataset_builder.py:80] Reading record datasets for input file: ['Tensorflow\\workspace\\annotations\\train.record']
INFO:tensorflow:Number of filenames to read: 1
I0703 18:04:15.228616 19488 dataset_builder.py:81] Number of filenames to read: 1
WARNING:tensorflow:num_readers has been reduced to 1 to match input file shards.
W0703 18:04:15.230612 19488 dataset_builder.py:87] num_readers has been reduced to 1 to match input file shards.
WARNING:tensorflow:From C:\Users\vrabe\Dropbox\Programming\Python Thilo\Artificial Intelligence\Object Detection\TFODCourse\tfod\lib\site-packages\object_detection-0.1-py3.9.egg\object_detection\builders\dataset_builder.py:101: parallel_interleave (from tensorflow.python.data.experimental.ops.interleave_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.data.Dataset.interleave(map_func, cycle_length, block_length, num_parallel_calls=tf.data.AUTOTUNE)` instead. If sloppy execution is desired, use `tf.data.Options.experimental_deterministic`.
W0703 18:04:15.240116 19488 deprecation.py:330] From C:\Users\vrabe\Dropbox\Programming\Python Thilo\Artificial Intelligence\Object Detection\TFODCourse\tfod\lib\site-packages\object_detection-0.1-py3.9.egg\object_detection\builders\dataset_builder.py:101: parallel_interleave (from tensorflow.python.data.experimental.ops.interleave_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.data.Dataset.interleave(map_func, cycle_length, block_length, num_parallel_calls=tf.data.AUTOTUNE)` instead. If sloppy execution is desired, use `tf.data.Options.experimental_deterministic`.
WARNING:tensorflow:From C:\Users\vrabe\Dropbox\Programming\Python Thilo\Artificial Intelligence\Object Detection\TFODCourse\tfod\lib\site-packages\object_detection-0.1-py3.9.egg\object_detection\builders\dataset_builder.py:236: DatasetV1.map_with_legacy_function (from tensorflow.python.data.ops.dataset_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.data.Dataset.map()
W0703 18:04:15.278014 19488 deprecation.py:330] From C:\Users\vrabe\Dropbox\Programming\Python Thilo\Artificial Intelligence\Object Detection\TFODCourse\tfod\lib\site-packages\object_detection-0.1-py3.9.egg\object_detection\builders\dataset_builder.py:236: DatasetV1.map_with_legacy_function (from tensorflow.python.data.ops.dataset_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.data.Dataset.map()
WARNING:tensorflow:From C:\Users\vrabe\Dropbox\Programming\Python Thilo\Artificial Intelligence\Object Detection\TFODCourse\tfod\lib\site-packages\tensorflow\python\util\dispatch.py:206: sparse_to_dense (from tensorflow.python.ops.sparse_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Create a `tf.sparse.SparseTensor` and use `tf.sparse.to_dense` instead.
W0703 18:04:20.026224 19488 deprecation.py:330] From C:\Users\vrabe\Dropbox\Programming\Python Thilo\Artificial Intelligence\Object Detection\TFODCourse\tfod\lib\site-packages\tensorflow\python\util\dispatch.py:206: sparse_to_dense (from tensorflow.python.ops.sparse_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Create a `tf.sparse.SparseTensor` and use `tf.sparse.to_dense` instead.
WARNING:tensorflow:From C:\Users\vrabe\Dropbox\Programming\Python Thilo\Artificial Intelligence\Object Detection\TFODCourse\tfod\lib\site-packages\tensorflow\python\util\dispatch.py:206: sample_distorted_bounding_box (from tensorflow.python.ops.image_ops_impl) is deprecated and will be removed in a future version.
Instructions for updating:
`seed2` arg is deprecated.Use sample_distorted_bounding_box_v2 instead.
W0703 18:04:22.222466 19488 deprecation.py:330] From C:\Users\vrabe\Dropbox\Programming\Python Thilo\Artificial Intelligence\Object Detection\TFODCourse\tfod\lib\site-packages\tensorflow\python\util\dispatch.py:206: sample_distorted_bounding_box (from tensorflow.python.ops.image_ops_impl) is deprecated and will be removed in a future version.
Instructions for updating:
`seed2` arg is deprecated.Use sample_distorted_bounding_box_v2 instead.
WARNING:tensorflow:From C:\Users\vrabe\Dropbox\Programming\Python Thilo\Artificial Intelligence\Object Detection\TFODCourse\tfod\lib\site-packages\tensorflow\python\autograph\impl\api.py:464: to_float (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.cast` instead.
W0703 18:04:23.631255 19488 deprecation.py:330] From C:\Users\vrabe\Dropbox\Programming\Python Thilo\Artificial Intelligence\Object Detection\TFODCourse\tfod\lib\site-packages\tensorflow\python\autograph\impl\api.py:464: to_float (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.cast` instead.
2021-07-03 18:04:25.670524: I tensorflow/compiler/mlir/mlir_graph_optimization_pass.cc:176] None of the MLIR Optimization Passes are enabled (registered 2)
</code></pre>
|
<p><a href="https://www.jquery-az.com/3-ways-convert-python-list-string-join-map-str/" rel="nofollow noreferrer">List</a> physical devices (and print the number of physical devices) using <a href="https://www.tensorflow.org/api_docs/python/tf/config/list_physical_devices" rel="nofollow noreferrer">https://www.tensorflow.org/api_docs/python/tf/config/list_physical_devices</a></p>
<pre><code>physical_devices = tf.config.list_physical_devices('GPU')
print("List of GPUs: " + ','.join(map(str, physical_devices)))
print("Num GPUs:", len(physical_devices))
</code></pre>
<p>Check if a GPU is used and name it with <a href="https://www.tensorflow.org/api_docs/python/tf/test/gpu_device_name" rel="nofollow noreferrer">https://www.tensorflow.org/api_docs/python/tf/test/gpu_device_name</a></p>
<pre><code>tf.test.gpu_device_name()
</code></pre>
<p>You can then work on the named device, as in this test example:</p>
<pre><code>class MyTest(tf.test.TestCase):
def test_add_on_gpu(self):
if not tf.test.is_built_with_gpu_support():
self.skipTest("test is only applicable on GPU")
with tf.device(tf.test.gpu_device_name()):
self.assertEqual(tf.math.add(1.0, 2.0), 3.0)
</code></pre>
|
python|tensorflow
| 0
|
3,236
| 68,200,045
|
How to compare two dataframes with different indices, and print out the duplicate rows?
|
<p>I am attempting to compare two dataframes by their respective UniqueID column. The code for the following dataframes can be seen below.</p>
<pre><code># Define first dataframe
list1 = {'UniqueID': [13579, 24680, 54678, 1169780, 1195847, 23572],
'Name': ['Joe', 'Pete', 'Jessica', 'Jackson', 'Griffin', 'Katie'],
'Level': ['Beginner', 'Beginner', 'Intermediate', 'Advanced', 'Intermediate', 'Advanced']}
df1 = pd.DataFrame(list1, columns=['UniqueID','Name','Level'])
# Define second dataframe
list2 = {'UniqueID': (88922,13579, 24680, 54678, 1169780, 1195847, 23572, 54895, 478952, 45921),
'Name': ('Zain','Joe', 'Pete', 'Jessica','Griffin','Jackson','Katie', 'Gaby', 'Haley', 'Caden'),
'Level': ('Beginner', 'Intermediate', 'Intermediate', 'Advanced', 'Intermediate','Advanced','Advanced',
'Beginner', 'Intermediate', 'Novice')}
df2 = pd.DataFrame(list2, columns=['UniqueID','Name','Level'])
</code></pre>
<p>It can be seen above that the dataframes have a differing length in regards to their index. This is what leads to my next problem. My process to find the duplicates goes as follows.</p>
<pre><code># Define new column which displays Match iff the UniqueID of the first dataframe is equal to that of the second
df1['UniqueMatch'] = np.where(df1.UniqueID == df2.UniqueID, 'Match','Ignore') #Create
# Simplify the list to only display rows that are duplicates
df_match = df1[df1['UniqueMatch'] =='Match']
</code></pre>
<p>I run into an error whenever I try to find where the dataframes' UniqueIDs are equal to each other. The error I receive is 'ValueError: Can only compare identically-labeled Series objects', which, from my understanding, means the comparison I am using can only be achieved if the indices of the two dataframes are equal to each other. I figure there has to be a way around this; if not, how could you compare dataframes of different sizes?</p>
|
<p><strong>Update</strong> according your comment:</p>
<blockquote>
<p>After I find the duplicated, I would then like to iterate through each cell of level, and update df1 from the updated level listed in df2. For example, Joe goes from beginner to intermediate from df1 to df2. I would like to auto update those instances.</p>
</blockquote>
<p>Concatenate your 2 dataframes and keep the last values (df2) from duplicates:</p>
<pre><code>df3 = pd.concat([df1, df2], ignore_index=True) \
.drop_duplicates(['UniqueID', 'Name'], keep='last')
</code></pre>
<pre><code>>>> df3
UniqueID Name Level
3 1169780 Jackson Advanced
4 1195847 Griffin Intermediate
6 88922 Zain Beginner
7 13579 Joe Intermediate # Joe is now Intermediate
8 24680 Pete Intermediate # Pete is now Intermediate
9 54678 Jessica Advanced # Jessica is now Advanced
10 1169780 Griffin Intermediate
11 1195847 Jackson Advanced
12 23572 Katie Advanced
13 54895 Gaby Beginner
14 478952 Haley Intermediate
15 45921 Caden Novice
</code></pre>
<hr />
<p><em>Old answer</em></p>
<p>Find duplicates with <code>merge</code> and <code>query</code>:</p>
<pre><code>dup = pd.merge(df1, df2, on='UniqueID') \
.query("(Name_x == Name_y) & (Level_x == Level_y)")
</code></pre>
<pre><code>>>> dup
UniqueID Name_x Level_x Name_y Level_y
5 23572 Katie Advanced Katie Advanced
</code></pre>
|
python|pandas|numpy
| 1
|
3,237
| 59,078,615
|
Python itertools groupby with aggregate
|
<p>I am trying to group on a column based on the sequence it appears (timestamp) and simultaneously finding aggregate (mean) on the other variables within the small group. I can successfully group it but unable to aggregate</p>
<p>Here is my sample input:</p>
<pre><code>Date T/F X1
12/02/19 T 10
12/02/19 T 20
12/02/19 F 15
12/02/19 T 12
12/03/19 F 10
12/03/19 F 20
12/03/19 T 30
12/04/19 T 40
</code></pre>
<p>Expected O/P</p>
<pre><code>Date T/F X1 Count
12/02/19 T 15 2
12/02/19 F 15 1
12/02/19 T 12 1
12/03/19 F 15 2
12/03/19 T 35 2
</code></pre>
<p>Here is the code I am using, which groups and give me the count for each group, how do I get the avg of X1 as well, within that group</p>
<pre><code>import itertools
for (key,group) in itertools.groupby(df['T/F']):
print (key, len(list(group)))
</code></pre>
<p>Thanks for the help!</p>
|
<p>You can use the function <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.groupby.html" rel="nofollow noreferrer"><code>groupby</code></a>:</p>
<pre><code>df1 = df.assign(Count=np.nan).\
groupby(df['T/F'].ne(df['T/F'].shift()).cumsum(), as_index=False).\
agg({'Date': 'first', 'T/F': 'first', 'X1': 'mean', 'Count': 'size'})
print(df1)
</code></pre>
<p>Output:</p>
<pre><code> Date T/F X1 Count
0 12/02/19 T 15 2
1 12/02/19 F 15 1
2 12/02/19 T 12 1
3 12/03/19 F 15 2
4 12/03/19 T 35 2
</code></pre>
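<p>If you prefer to stay with <code>itertools.groupby</code> as in the question, here is a minimal pure-Python sketch (assuming the rows are plain tuples rather than a DataFrame; the data below mirrors the sample input):</p>

```python
import itertools

rows = [
    ("12/02/19", "T", 10), ("12/02/19", "T", 20), ("12/02/19", "F", 15),
    ("12/02/19", "T", 12), ("12/03/19", "F", 10), ("12/03/19", "F", 20),
    ("12/03/19", "T", 30), ("12/04/19", "T", 40),
]

result = []
# group consecutive rows that share the same T/F flag
for key, group in itertools.groupby(rows, key=lambda r: r[1]):
    group = list(group)
    values = [r[2] for r in group]
    # keep the date of the first row in the run, the mean of X1, and the run length
    result.append((group[0][0], key, sum(values) / len(values), len(group)))

for row in result:
    print(row)
```

<p>This reproduces the expected output: five runs, with the 12/03 and 12/04 <code>T</code> rows merged into one group of mean 35 and count 2.</p>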
|
python|pandas|itertools
| 3
|
3,238
| 45,201,552
|
Keras does not show progress bar arrow while training
|
<p>At first, I was running Keras with the TensorFlow backend, and the progress bar was fine. Then I installed Theano and tried using it for a while before switching back to TensorFlow. Since installing Theano, the progress bar at each epoch only appears after the epoch is done, so I don't see its progress while it's training.</p>
<pre><code>Epoch 1/50
21/21 [=============================] 10s - loss:0.6928 - loss_val: 0.6912
</code></pre>
<p>I want it to show the progress while it is training, like this:</p>
<pre><code>Epoch 1/50
21/21 [=====>.......................] 10s - loss:0.6928 - loss_val: 0.6912
</code></pre>
<p>Why does it change the progress bar format after installing Theano, and how can I change it back to showing the progress?</p>
|
<p>Try using: </p>
<pre><code> model.fit(.....,.....,....,verbose=1)
</code></pre>
<p>The verbose variable is used for showing training progress. You can look at the Keras documentation:</p>
<blockquote>
<p>verbose: 0 for no logging to stdout, 1 for progress bar logging, 2 for one log line per epoch.</p>
</blockquote>
|
tensorflow|keras|theano
| 9
|
3,239
| 45,239,256
|
Data imputation with fancyimpute and pandas
|
<p>I have a large pandas data frame <code>df</code>. It has quite a few missings. Dropping rows or columns is not an option. Imputing medians, means or the most frequent values is not an option either (hence imputation with <code>pandas</code> and/or <code>scikit</code> unfortunately doesn't do the trick). </p>
<p>I came across what seems to be a neat package called <code>fancyimpute</code> (you can find it <a href="https://pypi.python.org/pypi/fancyimpute" rel="noreferrer">here</a>). But I have some problems with it.</p>
<p>Here is what I do: </p>
<pre><code>#the neccesary imports
import pandas as pd
import numpy as np
from fancyimpute import KNN
# df is my data frame with the missings. I keep only floats
df_numeric = df.select_dtypes(include=[np.float])
# I now run fancyimpute KNN,
# it returns a np.array which I store as a pandas dataframe
df_filled = pd.DataFrame(KNN(3).complete(df_numeric))
</code></pre>
<p>However, <code>df_filled</code> is a single vector somehow, instead of the filled data frame. How do I get a hold of the data frame with imputations?</p>
<h1>Update</h1>
<p>I realized <code>fancyimpute</code> needs a <code>numpy</code> array. I hence converted <code>df_numeric</code> to an array using <code>as_matrix()</code>. </p>
<pre><code># df is my data frame with the missings. I keep only floats
df_numeric = df.select_dtypes(include=[np.float]).as_matrix()
# I now run fancyimpute KNN,
# it returns a np.array which I store as a pandas dataframe
df_filled = pd.DataFrame(KNN(3).complete(df_numeric))
</code></pre>
<p>The output is a dataframe with the column labels gone missing. Any way to retrieve the labels?</p>
|
<p>Add the following lines after your code:</p>
<pre><code>df_filled.columns = df_numeric.columns
df_filled.index = df_numeric.index
</code></pre>
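<p>More generally, since the imputer just returns an ndarray of the same shape, you can wrap the label restoration in a small helper. A minimal sketch (using a plain numpy array as a stand-in for fancyimpute's output; <code>restore_labels</code> is an illustrative name, not part of any library):</p>

```python
import numpy as np
import pandas as pd

def restore_labels(filled_array, source_df):
    # rebuild a labeled DataFrame from the raw ndarray an imputer returns
    return pd.DataFrame(filled_array, columns=source_df.columns, index=source_df.index)

df_numeric = pd.DataFrame({"a": [1.0, 2.0], "b": [3.0, 4.0]}, index=["r1", "r2"])
filled = df_numeric.values  # stand-in for KNN(3).complete(df_numeric.values)
df_filled = restore_labels(filled, df_numeric)
print(list(df_filled.columns))  # ['a', 'b']
```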
|
python|python-3.x|pandas|imputation|fancyimpute
| 7
|
3,240
| 56,895,836
|
I am having a problem with a for loop that includes dataframes
|
<p>I have a dataframe with 8 columns. If two of those columns satisfy a condition, I have to fill two columns with the product of two others. After running the algorithm it is not working.</p>
<p>I have tried to use series, and I have tried <code>import warnings</code> with
<code>warnings.filterwarnings("ignore")</code>, but it is not working.</p>
<pre><code>for i in seq:
if dataframefinal['trade'][i] == 1 and dataframefinal['z'][i] > 0:
dataframefinal['CloseAdj2'][i]= dataframefinal['Close2'][i] *
dataframefinal['trancosshort'][i]
dataframefinal['CloseAdj1'][i]= dataframefinal['Close1'][i] *
dataframefinal['trancostlong'][i]
elif dataframefinal['trade'][i] == 1 and dataframefinal['z'][i] < 0:
dataframefinal['CloseAdj2'][i]= dataframefinal['Close1'][i] *
dataframefinal['trancosshort'][i]
dataframefinal['CloseAdj1'][i]= dataframefinal['Close2'][i] *
dataframefinal['trancostlong'][i]
else:
dataframefinal['CloseAdj1'][i]= dataframefinal['Close1'][i]
dataframefinal['CloseAdj2'][i]= dataframefinal['Close2'][i]
</code></pre>
|
<p>You can use vectorized condition function <code>numpy.select()</code> to do this quickly:</p>
<pre><code>import pandas as pd
from numpy.random import randn, randint
n = 10
df_data = pd.DataFrame(dict(trade=randint(0, 2, n),
z=randn(n),
Close1=randn(n),
Close2=randn(n),
trancosshort=randn(n),
trancostlong=randn(n)))
df_data["CloseAdj1"] = 0
df_data["CloseAdj2"] = 0
seq = [1, 3, 5, 7, 9]
df = df_data.loc[seq]
cond1 = df.eval("trade==1 and z > 0")
cond2 = df.eval("trade==1 and z < 0")
df["CloseAdj2"] = np.select([cond1, cond2],
[df.eval("Close2 * trancosshort"),
df.eval("Close1 * trancosshort")], df.Close2)
df["CloseAdj1"] = np.select([cond1, cond2],
[df.eval("Close1 * trancostlong"),
df.eval("Close2 * trancostlong")], df.Close1)
df_data.loc[seq, ["CloseAdj1", "CloseAdj2"]] = df[["CloseAdj1", "CloseAdj2"]]
</code></pre>
|
pandas|dataframe|for-loop|operation
| 0
|
3,241
| 57,257,230
|
Replace value with earliest date time record for dataframe
|
<p>The dataframe with monthly dates is shown below, and I would like to use the earliest Startdate to fill the Startdate column (including NA) for every month.</p>
<pre><code>ID Month Startdate
a 2019-05-01 NA
a 2019-06-01 2019-04-01
a 2019-07-01 2019-05-01
b 2019-05-01 2019-03-01
b 2019-06-01 2019-04-01
b 2019-07-01 2019-05-01
</code></pre>
<p>The expected output would be: </p>
<pre><code>ID Month Startdate
a 2019-05-01 *2019-04-01*
a 2019-06-01 2019-04-01
a 2019-07-01 *2019-04-01*
b 2019-05-01 2019-03-01
b 2019-06-01 *2019-03-01*
b 2019-07-01 *2019-03-01*
</code></pre>
|
<p>IIUC, you want <code>startdate</code> to be the earliest in the record:</p>
<pre><code># change to datetime if not already is
df['Month'] = pd.to_datetime(df['Month'])
df['Startdate'] = pd.to_datetime(df['Startdate'])
# update min
df['Startdate'] = df.groupby('ID').Startdate.transform('min')
</code></pre>
<p>output:</p>
<pre><code> ID Month Startdate
0 a 2019-05-01 2019-04-01
1 a 2019-06-01 2019-04-01
2 a 2019-07-01 2019-04-01
3 b 2019-05-01 2019-03-01
4 b 2019-06-01 2019-03-01
5 b 2019-07-01 2019-03-01
</code></pre>
|
python-3.x|pandas
| 2
|
3,242
| 57,059,709
|
Selection in dataframe with array as column value
|
<p>I have a dataframe filled with twitter data. The columns are:</p>
<ul>
<li>row_id : Int</li>
<li>content : String</li>
<li>mentions : [String]</li>
<li>value : Int</li>
</ul>
<p>So for every tweet I have it's row id in the dataframe, the content of the tweet, the mentions used in it (for example: '@foo') as an array of strings and a value that I calculated based on the content of the tweet.</p>
<p>An example of a row would be:</p>
<ul>
<li>row_id : 12</li>
<li>content : 'Game of Thrones was awful'</li>
<li>mentions : ['@hbo', '@tv', '@dissapointment', '@whatever']</li>
<li>value: -0.71</li>
</ul>
<p>So what I need is a way to do the following 3 things:</p>
<ul>
<li>find all rows that contain the mention '@foo' in the mentions-field</li>
<li>find all rows that ONLY contain the mention '@foo' in the mentions-field</li>
<li>above two but checking for an array of strings instead of checking for only one handle</li>
</ul>
<p>If anyone could help met with this, or even just point me in the right direction that'd be great.</p>
|
<p>Let's call your DataFrame df.</p>
<p>For the first task you use:</p>
<pre><code>result = df[(pd.DataFrame(df['mentions'].tolist()) == '@foo').any(1)]
</code></pre>
<p>Here, <code>pd.DataFrame(df['mentions'].tolist())</code> creates a new DataFrame where each column is a mention slot and each row a tweet.</p>
<p>Then <code>== '@foo'</code> generates a boolean dataframe containing True where the mentions are '@foo'.</p>
<p>Finally <code>.any(1)</code> returns a boolean index which elements are True if any element in the row is True.</p>
<p>I think with this help you can manage to solve the rest for yourself.</p>
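<p>For instance, for the second task ("only contains '@foo'"), one hedged option is to compare each mentions list as a set with <code>apply</code>; the same pattern handles a whole set of handles for the third task (the tiny DataFrame below is illustrative):</p>

```python
import pandas as pd

df = pd.DataFrame({
    "row_id": [1, 2, 3],
    "content": ["t1", "t2", "t3"],
    "mentions": [["@foo"], ["@foo", "@bar"], ["@baz"]],
})

handles = {"@foo"}  # works for one handle or a whole set of them

# rows whose mentions include at least one of the handles
contains_any = df["mentions"].apply(lambda m: bool(handles & set(m)))
# rows whose mentions are exactly these handles and nothing else
only_these = df["mentions"].apply(lambda m: set(m) == handles)

print(df[contains_any]["row_id"].tolist())  # rows mentioning any handle
print(df[only_these]["row_id"].tolist())    # rows mentioning exactly these
```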
|
python|pandas|data-science
| 1
|
3,243
| 57,269,160
|
Pythonic way to extract and replace text in Dataframe
|
<p>I have a dataframe containing user-submitted postcodes, many of which aren't in the desired format. I need to look them up with the Google Maps Geocoder API to get associated co-ordinates.</p>
<p>I have thus attempted to format them so they are returned like 'IG1 2BF', 'E6 2QA', 'RH10 4DG'. </p>
<p>This works but is slow and I imagine there is a more 'Pythonic' way to write this. Any suggestions? </p>
<pre><code>df['postcode'] = df['postcode'].str.replace(" ", "").str.upper()
for i in range(0, df['postcode'].size):
if len(df['postcode'].iloc[i]) == 5:
df['postcode'].iloc[i] = df['postcode'].iloc[i][:2] + " " + df['postcode'].iloc[i][2:]
if len(df['postcode'].iloc[i]) == 6:
df['postcode'].iloc[i] = df['postcode'].iloc[i][:3] + " " + df['postcode'].iloc[i][3:]
if len(df['postcode'].iloc[i]) == 7:
df['postcode'].iloc[i] = df['postcode'].iloc[i][:4] + " " + df['postcode'].iloc[i][4:]
</code></pre>
<p>Some sample data is provided of what is fed into the for loop: </p>
<pre><code>1 E176PA
2 S8 0ZW
3 DT29BU
4 S44 5TE
5 HP17 9TN
6 N12 0QF
7 S25 1YT
8 OX13 6AP
</code></pre>
<p>Only rows 1 and 3 are in an undesired format.</p>
|
<p>Not sure about this being "pythonic", but seeing as the second block of UK postcodes is always made up of 3 characters, you can just slice the string using that fact:</p>
<pre><code>def format_postcode(postcode):
postcode = postcode.replace(" ", "").upper()
return "{} {}".format(postcode[:-3], postcode[-3:])
</code></pre>
<p>Here, <code>postcode[:-3]</code> goes from the first to the 4th to last character, and <code>postcode[-3:]</code> goes from the 3rd to last to the last character.</p>
<p>You can then apply the function to the column of the DataFrame:</p>
<pre><code>df['postcode'].apply(format_postcode)
</code></pre>
|
python-3.x|pandas|postal-code
| 2
|
3,244
| 56,944,896
|
Finding all variations of a list of substrings in a pandas dataframe column
|
<p>I have a list of strings of movie names which I want to search in a pandas dataframe column <code>description</code> and make a new column <code>movie_name</code> if it is found in the description entered by a user.</p>
<p>Now, since the descriptions are not standardised, how can I search for all the possible variations of a particular name? For example, one of the movie names is <code>HARRY POTTER 4</code>. I need to search all possible inputs like <code>HARRYPOTTER 4</code>, <code>HARRY POTTER4</code>, <code>HARRYPOTTER4</code> etc. There may be cases where the user didn't leave a space after the <code>4</code> and typed other text, e.g. <code>HARRY POTTER 4is a good movie</code>. </p>
<p>I need to extract the movie names given in the list from the descriptions and add a new column of just <code>movie_name</code>. Is there any other way than adding all the possible variations in the list, using <code>.contains</code> and <code>.extract</code> and later mapping all of them to 1 final movie name using <code>.map</code> or <code>.replace</code>?</p>
|
<p>I suggest you take a look at the FuzzyWuzzy library. </p>
<p>Here is an easy to understand article: <a href="https://www.geeksforgeeks.org/fuzzywuzzy-python-library/" rel="nofollow noreferrer">https://www.geeksforgeeks.org/fuzzywuzzy-python-library/</a> </p>
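<p>If you'd rather avoid an extra dependency, the standard library's <code>difflib</code> gives a similar similarity score. A minimal sketch of matching messy user input against canonical movie names (the names, the <code>best_match</code> helper and the 0.6 cutoff are illustrative choices, not FuzzyWuzzy's API):</p>

```python
import difflib

movies = ["HARRY POTTER 4", "LORD OF THE RINGS"]

def best_match(text, choices, cutoff=0.6):
    # normalize case and spacing so 'HARRYPOTTER4' still compares well
    cleaned = text.upper().replace(" ", "")
    scored = [(difflib.SequenceMatcher(None, cleaned, c.replace(" ", "")).ratio(), c)
              for c in choices]
    score, name = max(scored)
    return name if score >= cutoff else None

print(best_match("HARRYPOTTER4 is a good movie", movies))  # HARRY POTTER 4
print(best_match("HARRY POTTER4", movies))                 # HARRY POTTER 4
```

<p>You could then <code>apply</code> this over the <code>description</code> column to build the <code>movie_name</code> column.</p>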
|
python|string|pandas|list
| 0
|
3,245
| 45,767,186
|
Pandas pivot table value loop
|
<p>I have a dataset with dates and data points for that specific date (d1, d2, d3, etc.) for every stock for each country. Some datapoints are missing for some stocks within each country and I want to replace them with average for those stocks in other countries</p>
<pre><code>date stock d1 d2 d3 country
12.94 xyz corp 12 3 4 US
12.95 xyz corp 13 NaN 1 US
12.95 123 corp 3 4 12 US
12.94 abc corp 1 3 5 CA
12.94 abc corp NaN 3 4 CA
</code></pre>
<p>So, in the above data, the d2 value for xyz on 12.95 needs to be replaced by the average of d2 within US for 12.95.</p>
<p>I would appreciate any insight on how to do that. I created an index of unique dates and planned on using pivot table where values iterate through various data points such as d1, d2, etc</p>
<pre><code>cnt_avgs = rawdt.pivot_table(values=["d1"], index=["country"], aggfunc=np.mean)
</code></pre>
|
<p>I'm not exactly sure if this is what you are looking for. But you can iterate over all the NaN columns and then the missing value rows and substitute the missing values using numpy.mean and conditional pandas slicing:</p>
<p>convert list into a pandas dataframe:</p>
<pre><code>df = pd.DataFrame(dt[1:], columns=dt[0])
</code></pre>
<p>Check and iterate over columns with NaN values. Then, for the columns that have NaN, iterate over the rows and change the data using numpy mean function and pandas conditional slicing: </p>
<pre><code>for col in df.columns[df.isnull().any()]:
for row in df[df[col].isnull()].iterrows():
df.loc[row[0], col] = np.mean(df[(df['date'] == row[1]['date']) & (df['country'] == row[1]['country'])][col])
</code></pre>
|
python|pandas
| 0
|
3,246
| 46,062,169
|
array = np.int(array) TypeError: only length-1 arrays can be converted to Python scalars
|
<pre><code>array = df.as_matrix()
array = np.int(array)
</code></pre>
<p>I tried:</p>
<pre><code>array = np.int(array)
</code></pre>
<p>for :</p>
<pre><code>array[i][10]/5
</code></pre>
<p>but got :</p>
<blockquote>
<p>array = np.int(array) TypeError: only length-1 arrays can be converted to Python scalars????</p>
</blockquote>
|
<p>Try:</p>
<pre><code>>>> a = np.random.normal(20, 5, size=(2,2))
array([[ 24.04255462, 24.24954137],
[ 14.64894245, 16.17946985]])
>>> a = a.astype(int)
>>> a
array([[24, 24],
[14, 16]])
</code></pre>
<p>Check the <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.ndarray.astype.html" rel="nofollow noreferrer">docs</a>.</p>
|
python-2.7|numpy|math
| 1
|
3,247
| 35,684,121
|
Getting Large Dataset Out of MySQL into Pandas Dataframe keeps Failing, Even With Chunksize
|
<p>I am trying to pull ~700k rows out of mysql into a Pandas dataframe.</p>
<p>I kept getting the same error over and over again:</p>
<pre><code>Traceback (most recent call last):
  File "C:\Anaconda3\lib\site-packages\mysql\connector\network.py", line 245, in recv_plain
    read = self.sock.recv_into(packet_view, rest)
ConnectionResetError: [WinError 10054] An existing connection was forcibly closed by the remote host
</code></pre>
<p>Searching on StackOverflow, I found <a href="https://stackoverflow.com/questions/18107953/how-to-create-a-large-pandas-dataframe-from-an-sql-query-without-running-out-of">a great suggestion from ThePhysicist</a> in another thread, so I modified my code as below. It won't run at all if the chunk size is over 200, and even if it is 200, the process will eventually throw up the "forcibly closed" error.</p>
<p>If anybody has any idea as to how to fix this, it would be very much appreciated. It is extremely frustrating as I never have similar issues when using R</p>
<p>Thanks</p>
<p>My current code (apologies for the formatting - can't figure out how to do it here):</p>
<pre><code>from pandas import DataFrame
import time
import mysql.connector
import pandas as pd
import pandas.io.sql as psql
chunk_size = 300
offset = 0
cnx = mysql.connector.connect(user='xxx', password='xxx', database='xxx', host='xxx')
dfs = []
while True:
print(offset)
sql = "select a,b,c,d,e,f,g,h from table where a='xyz' order by b,c,d limit %d offset %d " % (chunk_size,offset)
dfs.append(psql.read_sql(sql, cnx))
offset += chunk_size
if len(dfs[-1]) < chunk_size:
print("break")
break
full_df = pd.concat(dfs)
</code></pre>
<p>Explain Extended Returns:</p>
<pre><code>select_type table type possible_keys key key_len ref rows filtered Extra
SIMPLE table ref idx_clientid,idx_char idx_char 48 const 1173586 100 Using index condition
</code></pre>
<p>When I move the code to the AWS server where the database resides, it runs fine with no issues. The issue only appears when I run the code from a machine that is not on AWS...</p>
|
<p>It sounds like you have a short timeout period and are probably lacking appropriate indexes. I would suggest creating an index on <code>(a, b, c, d)</code>:</p>
<pre><code>create index idx_table_a_b_c_d on table(a, b, c, d);
</code></pre>
<p>This needs to be executed only once in the database (and that can be done through Python).</p>
<p>If this is not possible, then we might guess that the <code>order by</code> is the time consuming part. To handle that, remove the <code>order by</code>, load the data in Python, and then sort it in Python. This isn't my usual advice -- databases are better for data operations, but it might be necessary under some circumstances.</p>
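<p>As a hedged sketch of that last suggestion, the loop from the question could drop the <code>order by</code> from the SQL, concatenate the chunks, and sort once in pandas at the end (the tiny frames below stand in for the chunks <code>psql.read_sql</code> would return):</p>

```python
import pandas as pd

# stand-ins for the chunks read_sql would return without ORDER BY
dfs = [
    pd.DataFrame({"b": [3, 1], "c": [1, 1], "d": [2, 0]}),
    pd.DataFrame({"b": [2, 1], "c": [0, 0], "d": [5, 9]}),
]

# concatenate the unsorted chunks, then sort once in pandas
full_df = (pd.concat(dfs, ignore_index=True)
             .sort_values(["b", "c", "d"])
             .reset_index(drop=True))
print(full_df["b"].tolist())  # [1, 1, 2, 3]
```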
|
python|mysql|pandas
| 1
|
3,248
| 35,598,490
|
How to edit source csv file data using pandas
|
<p>I have a csv file that contains large amount of data,but the data that contained in csv file is not cleaned.The example of csv data is as follows</p>
<pre><code>country branch no_of_employee total_salary count_DOB count_email
x a 30 2500000 20 25
x b 20 350000 15 20
y c 30 4500000 30 30
z d 40 5500000 40 40
z e 10 1000000 10 10
z f 15 1500000 15 15
</code></pre>
<p>after applying the group by i am not getting the proper result .</p>
<pre><code>df = data_df.groupby(['country', 'customer_branch']).count()
</code></pre>
<p>the result is in the form of</p>
<pre><code>country branch no of employees
x 1 30
x 1 20
y 1 30
z 3 65
</code></pre>
<p>Country x is repeating twice. This is because of the source file data: in the source file the country field contains "X" and "X ". That is why it displays X twice. How can I fix this problem using pandas?</p>
|
<p>You can call the vectorised <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.str.strip.html" rel="nofollow"><code>str.strip</code></a> to trim leading and trailing whitespaces:</p>
<pre><code>df['country'] = df['country'].str.strip(' ')
</code></pre>
<p>So the above should work to clean your data and then you can call <code>groupby</code> to get the desired results or <code>set_index</code> so you can <code>sum</code> on an index level which looks like what you really want</p>
<p>Example:</p>
<pre><code>In [4]:
df = pd.DataFrame({'country':['x', 'x ','y','z','z','z'], 'branch':list('abcdef'), 'no_of_employee':[30,20,30,40,10,15]})
df
Out[4]:
branch country no_of_employee
0 a x 30
1 b x 20
2 c y 30
3 d z 40
4 e z 10
5 f z 15
In [9]:
df['country'] = df['country'].str.strip()
df.set_index(['country', 'branch']).sum(level=0)
Out[9]:
no_of_employee
country
x 50
y 30
z 65
</code></pre>
|
python|pandas
| 3
|
3,249
| 28,745,909
|
Neither builtin power function nor np.power works
|
<p>I've got the following simple two things given:</p>
<pre><code>n = 2.01
array = np.array([-0.3700708 , -0.41282227, -0.25959961])
</code></pre>
<p>Now I want each of the array elements to be raised to the power of <code>(n-1.)</code>. So I tried the following:</p>
<pre><code>>>> array**(n-1.)
array([ nan, nan, nan])
>>> np.power(array, (n-1.))
array([ nan, nan, nan])
</code></pre>
<p>If I take out each element and raise it to the power of <code>(n-1.)</code>, it works fine. Where's the error?</p>
|
<p>The reason is that the results are <em>complex values</em>, e.g. </p>
<pre><code> -0.3700708 ** 1.01 == -0.366229 - 0.011509i
</code></pre>
<p>Edit: When computing the value at <strong>Wolfram Alpha</strong> do (raise negative into power)</p>
<pre><code> (-0.3700708) ** 1.01
</code></pre>
<p>and not (first raise into power, then negate)</p>
<pre><code> -0.3700708 ** 1.01
</code></pre>
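<p>You can reproduce both behaviours in numpy directly: with a float dtype there is no real-valued result, so numpy returns <code>nan</code>, while a complex dtype yields the principal complex value (a minimal sketch):</p>

```python
import numpy as np

arr = np.array([-0.3700708, -0.41282227, -0.25959961])

# float dtype: negative base to a non-integer power has no real result -> nan
with np.errstate(invalid="ignore"):
    real_try = arr ** 1.01
print(np.isnan(real_try).all())  # True

# complex dtype: numpy returns the principal complex value instead
complex_res = arr.astype(complex) ** 1.01
print(complex_res[0])  # roughly -0.366229 - 0.011509j
```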
|
python|numpy
| 3
|
3,250
| 51,070,345
|
Sklearn vectoring a document by sentences for classification
|
<pre><code>temp = []
for i in chunks:
vectorizer2 = CountVectorizer()
vectorizer2.fit_transform(i).todense()
temp.append(vectorizer2)
print(vectorizer2.vocabulary_)
x = [LinearSVC_classifier.classify(y) for y in temp ]
</code></pre>
<p>I have a document that I am trying to put in the proper format to use my classifiers against. I have broken down the doc into individual lists. So the data looks like this..</p>
<pre><code>chunks = [[ 'sentence1'] , ['sentence2'], ['sentences']]
</code></pre>
<p>The function I have written gets me partially there, but then I get this error:
<code>ValueError: empty vocabulary; perhaps the documents only contain stop words</code>.
However, I am also getting this...</p>
<pre><code>{u'and': 4, u'www': 53, u'is': 25, u'some': 44, u'commitment': 10}
</code></pre>
<p>If I run each sentences manually and individually they each work with 0 errors and the classifier works. I am hoping my results at the end would look like this.</p>
<pre><code>['sentence1', 'no'] , ['sentence2', 'yes']
</code></pre>
<p>or any way I can see each sentence's classification, honestly. I am just unsure where the error lies, whether it is fixable, or whether I need a new approach. Any help would be greatly appreciated. </p>
<pre><code>ValueError Traceback (most recent call last)
<ipython-input-608-c2fb95ef6621> in <module>()
      4 for i in chunks:
      5     print (i)
----> 6     vectorizer2.fit_transform(i).todense()
      7     temp.append(vectorizer2)
      8     print(vectorizer2.vocabulary_)

C:\Program Files\Anaconda2\lib\site-packages\sklearn\feature_extraction\text.pyc in fit_transform(self, raw_documents, y)
    867
    868         vocabulary, X = self._count_vocab(raw_documents,
--> 869                                           self.fixed_vocabulary_)
    870
    871         if self.binary:

C:\Program Files\Anaconda2\lib\site-packages\sklearn\feature_extraction\text.pyc in _count_vocab(self, raw_documents, fixed_vocab)
    809             vocabulary = dict(vocabulary)
    810             if not vocabulary:
--> 811                 raise ValueError("empty vocabulary; perhaps the documents only"
    812                                  " contain stop words")
    813

ValueError: empty vocabulary; perhaps the documents only contain stop words
|
<p>Just put the initialization outside the loop like this, else it will be re-initilized again and again for each sentence seperately which is incorrect.</p>
<pre><code>temp = []
vectorizer2 = CountVectorizer() #<--- This needs to be initialized only once
for i in chunks:
vectorizer2.fit_transform(i).todense()
temp.append(vectorizer2)
print(vectorizer2.vocabulary_)
x = [LinearSVC_classifier.classify(y) for y in temp ]
</code></pre>
|
python|scikit-learn|nlp|classification|sklearn-pandas
| 1
|
3,251
| 50,809,257
|
Returning dataset from tf.data.Dataset.map() causes 'TensorSliceDataset' object has no attribute 'get_shape' error
|
<p>I'm using the Dataset API to create an input pipeline. I'm using the tf.data.Dataset.map() method in a pattern similar to the following:</p>
<pre><code>def mapped_fn(_):
X = tf.random_uniform([3,3])
y = tf.random_uniform([3,1])
dataset = tf.data.Dataset.from_tensor_slices((X,y))
return dataset
with tf.Session() as sess:
first = tf.random_uniform([1,2])
unimportant_dataset = tf.data.Dataset.from_tensors(first)
dataset = unimportant_dataset.map(mapped_fn)
sess.run(dataset)
</code></pre>
<p>I'm getting the following error: <code>AttributeError: 'TensorSliceDataset' object has no attribute 'get_shape'</code></p>
<p>The overall context is that <code>mapped_fn</code> deserializes an Example protobuf (represented by <code>unimportant_dataset</code> in this case) from a .tfrecords file, reshapes the feature vector (<code>X</code>), and needs to return a dataset with elements defined by slices from the new feature vector (of shape <code>(3,)</code> in this case). I've gotten a similar error when returning a <code>ZipDataset</code>. Thanks in advance!</p>
|
<p><a href="https://stackoverflow.com/a/50811745/3574081">DomJack's answer</a> is absolutely correct about the signature of <a href="https://www.tensorflow.org/api_docs/python/tf/data/Dataset#map" rel="nofollow noreferrer"><code>Dataset.map()</code></a>: it expects the return value of the passed <code>mapped_fn</code> to be one or more tensors (or sparse tensors).</p>
<p>If you do have a function that returns a <code>Dataset</code>, you can use <code>Dataset.flat_map()</code> to flatten and concatenate all of the returned datasets into a single dataset, as follows:</p>
<pre><code>def mapped_fn(_):
X = tf.random_uniform([3,3])
y = tf.random_uniform([3,1])
dataset = tf.data.Dataset.from_tensor_slices((X,y))
return dataset
# Generate 100 dummy elements.
unimportant_dataset = tf.data.Dataset.range(100)
# Convert each dummy element into a dataset of 3 nested elements, and concatenate them.
dataset = unimportant_dataset.flat_map(mapped_fn)
</code></pre>
|
tensorflow|machine-learning|deep-learning|tensorflow-datasets
| 3
|
3,252
| 51,012,920
|
Keras "return_sequences" option returns 2D array instead of 3D
|
<p>I'm trying to use a simple character-level Keras model to extract key text from a sentence. </p>
<p>I feed it <code>x_train</code>, a padded sequence of dim <code>(n_examples, 500)</code> representing the entire sentence, and <code>y_train</code>, a padded sequence of dim <code>(n_examples, 100)</code> representing the important text to extract. </p>
<p>I try a simple model like such: </p>
<pre><code>vocab_size = 1000
src_txt_length = 500
sum_txt_length = 100
inputs = Input(shape=(src_txt_length,))
encoder1 = Embedding(vocab_size, 128)(inputs)
encoder2 = LSTM(128)(encoder1)
encoder3 = RepeatVector(sum_txt_length)(encoder2)
decoder1 = LSTM(128, return_sequences=True)(encoder3)
outputs = TimeDistributed(Dense(100, activation='softmax'))(decoder1)
model = Model(inputs=inputs, outputs=outputs)
model.compile(loss='categorical_crossentropy', optimizer='adam')
</code></pre>
<p>When I try to train it with the following code: </p>
<pre><code>hist = model.fit(x_train, y_train, verbose=1, validation_data=(x_test, y_test), batch_size=batch_size, epochs=5)
</code></pre>
<p>I get the error: </p>
<pre><code>ValueError: Error when checking target: expected time_distributed_27 to have 3 dimensions, but got array with shape (28500, 100)
</code></pre>
<p>My question is: I have the return_sequences parameter set to <code>True</code> on the last LSTM layer, but the Dense fully-connected layer is telling me that the input is 2-dimensional. </p>
<p>What am I doing wrong here? Any help would be greatly appreciated! </p>
|
<p>It isn't complaining about the input to <code>TimeDistributed</code> but the target <code>y_train.shape == (n_examples, 100)</code> which isn't 3D. You have a mismatch between predicting a sequence and a single point. In other words, <code>outputs</code> is 3D but <code>y_train</code> is 2D.</p>
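<p>A minimal numpy sketch of the fix (the helper name is mine; Keras' <code>to_categorical</code> does essentially the same): integer targets of shape <code>(n, seq_len)</code> must be expanded to <code>(n, seq_len, vocab)</code> to match the 3D output of <code>TimeDistributed</code>:</p>

```python
import numpy as np

def one_hot_targets(y, vocab_size):
    """Expand integer targets of shape (n, seq_len) into one-hot
    targets of shape (n, seq_len, vocab_size), matching the 3D
    output of a TimeDistributed softmax layer."""
    return np.eye(vocab_size)[y]

y_train = np.array([[1, 0, 2]])            # one sequence of 3 token ids
y_3d = one_hot_targets(y_train, vocab_size=3)
print(y_3d.shape)                          # (1, 3, 3)
```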
|
tensorflow|neural-network|keras|nlp
| 1
|
3,253
| 50,921,288
|
Sort Dict by order in list (python)
|
<p>I'm trying to convert a <code>dict</code> into an ordered <code>df</code>. The <code>dict</code> represents a <code>scatter plot</code>, which displays the coordinates in various <code>bins</code>. </p>
<p>Lets say <code>x,y lists</code> are as follows:</p>
<pre><code>x = [10,40,33,44,66,77,33,44,55,2]
y = [1,4,53,34,56,47,83,44,25,12]
</code></pre>
<p>I sort the coordinates into appropriate bins with the output being:</p>
<pre><code>bins =({
1: [(10, 1), (2, 12)],
2: [(40, 4), (33, 53), (44, 34), (33, 83), (44, 44)],
3: [(66, 56), (55, 25)],
4: [(77, 47)]
})
</code></pre>
<p>If I convert to a df:</p>
<pre><code>df = pd.DataFrame.from_dict(bins, orient = 'index')
d = df.transpose()
</code></pre>
<p>Output:</p>
<pre><code> 1 2 3 4
0 (10, 1) (40, 4) (66, 56) (77, 47)
1 (2, 12) (33, 53) (55, 25) None
2 None (44, 34) None None
3 None (33, 83) None None
4 None (44, 44) None None
</code></pre>
<p>What I'm hoping to do is order the <code>df</code> by order in the <code>list</code>, so I'd like the output to be:</p>
<pre><code> 1 2 3 4
0 (10, 1)
1 (40, 4)
2 (33, 53)
3 (44, 34)
4 (66, 56)
5 (77, 47)
6 (33, 83)
7 (44, 44)
8 (55, 25)
9 (2, 12)
</code></pre>
<p>I'm not sure if this can be done. So instead, I'm thinking of returning the bin number for each scatter point at each index:</p>
<pre><code> Bin
0 1
1 2
2 2
3 2
4 4
5 4
6 2
7 2
8 3
9 1
</code></pre>
<p>I have tried <code>collections.OrderedDict</code> but I need to order by index not by <code>keys</code>.</p>
<p>I'm not sure if I could make the input a list of lists?</p>
|
<p><em>skipping over the pandas part</em></p>
<p><code>b = collections.OrderedDict(bins) #Resort by order in list</code></p>
<p><code>collections.OrderedDict</code> does not sort the values given in the constructor; it simply preserves the order in which the dict was created.</p>
<p>If you need to re-order the dict and preserve that order, you will need to sort the bins (based on your requirements) and then pass them on to an ordered dict.</p>
<p>As an example, if you would like to sort the <code>bins</code> on the key:</p>
<pre><code>b = collections.OrderedDict()
for k in sorted(bins):
b[k] = bins[k]
</code></pre>
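<p>For the fallback idea in the question (a bin number per point, in original order), inverting the <code>bins</code> dict is enough. A sketch using the question's data (the mapping below follows the <code>bins</code> dict, and it assumes each (x, y) pair is unique):</p>

```python
x = [10, 40, 33, 44, 66, 77, 33, 44, 55, 2]
y = [1, 4, 53, 34, 56, 47, 83, 44, 25, 12]
bins = {
    1: [(10, 1), (2, 12)],
    2: [(40, 4), (33, 53), (44, 34), (33, 83), (44, 44)],
    3: [(66, 56), (55, 25)],
    4: [(77, 47)],
}

# invert: coordinate pair -> bin number
lookup = {pt: b for b, pts in bins.items() for pt in pts}

# bin number for each point, preserving the original list order
bin_per_point = [lookup[(xi, yi)] for xi, yi in zip(x, y)]
print(bin_per_point)  # [1, 2, 2, 2, 3, 4, 2, 2, 3, 1]
```

Duplicate coordinates would collide in <code>lookup</code>; index-based matching would be needed in that case.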
|
python|pandas|dictionary|ordereddictionary
| 1
|
3,254
| 20,473,005
|
OpenCV detect if image file is colour prior to loading
|
<p>I know you can use imread to get an image into a numpy array.</p>
<pre><code>cv2.imread(path,0)
</code></pre>
<p>Loads a greyscale image and...</p>
<pre><code>cv2.imread(path,1)
</code></pre>
<p>Loads a colour one.</p>
<p>I can call the second one on a greyscale image and it returns a shape (y,x,3), hence it is still in the format of a colour image even though it carries no colour information.</p>
<p>Is there a function in opencv to detect what colour format the file is in so I can call the right imread function.</p>
|
<p>I assume you are using Python since you mentioned numpy.</p>
<p>You can't check which type an image has before loading it.</p>
<p>The easiest solution to guarantee that you have a grayscale image, would be to use something like this (if you want to always have a grayscale):</p>
<pre><code> if len(im.shape) > 2:
        im = cv2.cvtColor(im, cv2.COLOR_BGR2GRAY)
</code></pre>
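<p>If the goal is instead to detect whether a colour-loaded image is really grayscale, one pragmatic check after loading with flag 1 is whether all three channels agree, since loading a grayscale file as colour just duplicates the channel. A numpy-only sketch (function name is mine):</p>

```python
import numpy as np

def is_effectively_grayscale(img):
    """True for a 2-D array, or a 3-D array whose channels all match
    (i.e. a grayscale file that was loaded as 3-channel)."""
    if img.ndim == 2:
        return True
    # per-pixel range across the channel axis is 0 iff all channels agree
    return bool(np.all(np.ptp(img, axis=2) == 0))

gray = np.full((4, 4), 128, dtype=np.uint8)
gray_as_color = np.stack([gray, gray, gray], axis=2)
color = gray_as_color.copy()
color[..., 0] = 255
```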
|
python|opencv|numpy
| 2
|
3,255
| 20,726,493
|
Python Pandas qcut behavior with # of observations not divisible by # of bins
|
<p>Suppose I had a pandas series of dollar values and wanted to discretize into 9 groups using <code>qcut</code>. The # of observations is not divisible by 9. SQL Server's <code>ntile</code> function has a standard approach for this case: it makes the first <em>n</em> out of 9 groups 1 observation larger than the remaining (9-<em>n</em>) groups. </p>
<p>I noticed in pandas that the assignment of which groups had <em>x</em> observations vs <em>x</em> + 1 observations seemed random. I tried to decipher the code in algos to figure out how the quantile function deals with this issue but could not figure it out. </p>
<p>I have three related questions:</p>
<ol>
<li>Any pandas developers out there than can explain <code>qcut</code>'s behavior? Is it random which groups get the larger number of observations?</li>
<li>Is there a way to force <code>qcut</code> to behave similarly to <code>NTILE</code> (i.e., first groups get <em>x</em> + 1 observations)?</li>
<li>If the answer to #2 is no, any ideas on a function that would behave like <code>NTILE</code>? (If this is a complicated endeavor, just an outline to your approach would be helpful.)</li>
</ol>
<p>Here is an example of SQL Server's <code>NTILE</code> output.</p>
<pre><code>Bin |# Observations
1 26
2 26
3 26
4 26
5 26
6 26
7 26
8 25
9 25
</code></pre>
<p>Here is pandas:</p>
<pre><code>Bin |# Observations
1 26
2 26
3 26
4 25 (Why is this 25 vs others?)
5 26
6 26
7 25 (Why is this 25 vs others?)
8 26
9 26
</code></pre>
|
<p><code>qcut</code> behaves like this because it's more accurate. Here is an example:</p>
<p>For the <em>i</em>th level, it starts at quantile (<em>i</em>-1)*10%:</p>
<pre><code>import pandas as pd
import numpy as np
a = np.random.rand(26*10+3)
r = pd.qcut(a, 10)
np.bincount(r.labels)
</code></pre>
<p>the output is:</p>
<pre><code>array([27, 26, 26, 26, 27, 26, 26, 26, 26, 27])
</code></pre>
<p>If you want NTILE, you can calculate the quantiles yourself:</p>
<pre><code>n = len(a)
ngroup = 10
counts = np.ones(ngroup, int)*(n//ngroup)
counts[:n%ngroup] += 1
q = np.r_[0, np.cumsum(counts / float(n))]
q[-1] = 1.0
r2 = pd.qcut(a, q)
np.bincount(r2.labels)
</code></pre>
<p>the output is:</p>
<pre><code>array([27, 27, 27, 26, 26, 26, 26, 26, 26, 26])
</code></pre>
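<p>The NTILE group sizes themselves can also be computed directly instead of deriving quantile edges. A stdlib sketch (the SQL Server table in the question implies 232 rows):</p>

```python
def ntile_counts(n, ngroup):
    """Group sizes as SQL Server's NTILE assigns them:
    the first n % ngroup groups get one extra row."""
    base, extra = divmod(n, ngroup)
    return [base + 1] * extra + [base] * (ngroup - extra)

print(ntile_counts(232, 9))  # [26, 26, 26, 26, 26, 26, 26, 25, 25]
```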
|
python|pandas|ranking-functions
| 2
|
3,256
| 20,532,621
|
Pandas import error when debugging using PVTS
|
<p>I am dealing with a very silly error, and wondering if any of you have the same problem. When I try to import pandas using <code>import pandas as pd</code> I get an error in copy.py. I debugged into the pandas imports, and I found that the copy error is thrown when pandas tries to import this: <br> <code>from pandas.io.html import read_html</code>
<br>
The exception that is thrown is: <br><br>
<code>un(shallow)copyable object of type <type 'Element'></code></p>
<p>I do not get this error if I try to straight up run the code and not use the PVTS debugger. I am using the python 2.7 interpreter, pandas version 0.12 which came with the python xy 2.7.5.1 distro and MS Visual Studio 2012. </p>
<p>Any help would be appreciated. Thanks!</p>
|
<p>This is a limitation of the way PTVS detects unhandled exceptions - it can't see the except-block that's going to catch this exception because it is in the code that is eval'd from a string. See the <a href="https://pytools.codeplex.com/workitem/2077">bug in the tracker</a> for more details.</p>
<p>As a workaround, disable "Debug standard library" checked in Tools -> Options -> Python Tools -> Debugging - this should cause the exception to be ignored.</p>
|
python|python-2.7|visual-studio-2012|pandas|ptvs
| 5
|
3,257
| 20,435,432
|
ValueError and odepack.error using integrate.odeint()
|
<p>I'm trying to write an equation to model and then plot an integral control system (specifically regarding cruise control). However I'm receiving two errors whenever I run it:</p>
<p><code>ValueError: object too deep for desired array</code><br>
<code>odepack.error: Result from function call is not a proper array of floats</code></p>
<p>I've read these questions:</p>
<ul>
<li><a href="https://stackoverflow.com/questions/18184541/scipy-curve-fit-error-result-from-function-call-is-not-a-proper-array-of-floats">scipy curve_fit error: Result from function call is not a proper array of floats</a> </li>
<li><a href="https://stackoverflow.com/questions/20019427/how-to-solve-this-differential-equation-using-scipy-odeint">How to solve this differential equation using scipy odeint?</a></li>
<li><a href="https://stackoverflow.com/questions/15709933/object-too-deep-for-desired-array-scipy-integrate-odeint">Object Too Deep for Desired Array - scipy.integrate.odeint</a></li>
</ul>
<p>Which seem like they should be helpful, however I'm unsure how to apply those to my problem. I'm fairly new to python so please bear with me if I've missed something obvious or done something exceptionally silly. I have no problems with plotting it, so once I figure out how to actually get this working I think I'm set.</p>
<pre><code>import numpy as np
import scipy.integrate as integrate
##Parameters
kp=.5 #proportional gain
ki=.1 #integral gain
vr=30 #desired velocity in m/s
Tm=190 #Max Torque in Nm
wm=420 #engine speed
B=0.4 #Beta
an=12 #at gear 4
p=1.3 #air density
Cd=0.32 #Drag coefficient
Cr=.01 #Coefficient of rolling friction
A=2.4 #frontal area
##Variables
m=18000 #weight
v=20 #starting velocity
time=np.linspace(0,10,50) #time
theta=np.radians(4) #Theta
def vderivs(state,t):
v = state
vel=[]
ti=0
while ti < t:
v1 = an*controller(ti,vr,v)*torque(v)
v2 = m*Cr*np.sign(v)
v3 = 0.5*p*Cd*A*v**2
v4 = m*np.sin(theta)
if t < 10:
vtot = v1+v2+v3
vfin = np.divide(vtot,m)
else:
vtot = v1+v2+v3+v4
vfin = np.divide(vtot,m)
vel.append(vfin)
ti+=1
trueVel = np.array(vel, float)
return trueVel
def uderivs(state,t):
v = state
deltax = vr - v
return deltax
def controller(time,desired,currentV):
z = integrate.odeint(uderivs, currentV, time)
u = kp*(vr-currentV)+ki*z
return u.flatten()
def torque(v):
return Tm*(1-B*(np.divide(an*v,wm)-1)**2)
def velocity(mass,desired,theta,t):
v = integrate.odeint(vderivs, desired, t)
return v.flatten()
test = velocity(m,vr,theta,time)
print(test)
</code></pre>
<p>Please let me know if there is anything else you need from me!</p>
|
<p>Posting this as separate, because I got your code to work. Well, to run and produce output :P</p>
<p>Actually one big problem is some stealth broadcasting that I didn't notice, but I changed a lot of other things along the way.</p>
<p>First, the stealth broadcasting: if you integrate a 1d function with one parameter, <code>odeint</code> returns a column vector, and when you then combine that result with a row vector, you get a 2d array (matrix). For example:</p>
<pre><code>In [704]: a
Out[704]: array([0, 1, 2, 3, 4])
In [705]: b
Out[705]:
array([[0],
[1],
[2]])
In [706]: a+b
Out[706]:
array([[0, 1, 2, 3, 4],
[1, 2, 3, 4, 5],
[2, 3, 4, 5, 6]])
</code></pre>
<p>You were getting output for velocity that was a column vector like <code>b</code> and adding it to some other function of time, and getting a matrix.</p>
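<p>The same pitfall in miniature (shapes only, no <code>odeint</code> needed):</p>

```python
import numpy as np

col = np.arange(3).reshape(-1, 1)   # shape (3, 1), like odeint's output for a 1-d problem
row = np.arange(3)                  # shape (3,)

print((col + row).shape)            # (3, 3): silent broadcasting to a matrix
print((col.squeeze() + row).shape)  # (3,): flatten first to stay 1-d
```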
<hr>
<p>With regards to the recursion, I think I solved that issue. The two derivative functions should take a single scalar <code>t</code> at which point they calculate the derivative. To do that, <code>vderivs</code> needs to do the integral, which it should do over all time up to <code>t</code>. So I redefined them as such:</p>
<pre><code>dt = .1 # another global constant parameter
def vderivs(v, t):
ts = np.arange(0, t, dt)
v1 = an * controller(v, ts) * torque(v)
v2 = m*Cr*np.sign(v)
v3 = 0.5*p*Cd*A*v**2
v4 = m*np.sin(theta)
vtot = v1+v2+v3+v4*(ts>=10) # a vector of times includes incline only after ts = 10
return vtot/m
</code></pre>
<p>And of course <code>uderivs</code> is fine as is but can be written more simply:</p>
<pre><code>def uderivs(v, t):
return vr - v
</code></pre>
<p>Then, make sure that <code>velocity</code> and <code>controller</code> pass the right values (using <code>v0</code> instead of <code>v</code> for the starting velocity):</p>
<pre><code>def controller(currentV, time):
z = integrate.odeint(uderivs, currentV, time)
return kp*(vr-currentV) + ki*z.squeeze()
def velocity(desired, theta, time):
return integrate.odeint(vderivs, desired, time)
</code></pre>
<p>Who knows if the physics is correct, but this gives:</p>
<p><img src="https://i.stack.imgur.com/OTRcA.png" alt="short time"></p>
<p>Note that it hasn't reached the desired velocity, so I increased the time over which it was to be solved</p>
<pre><code>time = np.linspace(0,50,50) #time
</code></pre>
<p><img src="https://i.stack.imgur.com/0T96f.png" alt="long time"></p>
<p>Here is all the code that I ran:</p>
<pre><code>import matplotlib.pylab as plt
import numpy as np
import scipy.integrate as integrate
##Parameters
kp = .5 #proportional gain
ki = .1 #integral gain
vr = 30 #desired velocity in m/s
Tm = 190 #Max Torque in Nm
wm = 420 #engine speed
B = 0.4 #Beta
an = 12 #at gear 4
p = 1.3 #air density
Cd = 0.32 #Drag coefficient
Cr = .01 #Coefficient of rolling friction
A = 2.4 #frontal area
##Variables
m = 18000.0 #weight
v0 = 20. #starting velocity
t = np.linspace(0, 20, 50) #time
dt = .1
theta = np.radians(4) #Theta
def torque(v):
return Tm * (1 - B*(an*v/wm - 1)**2)
def vderivs(v, t):
ts = np.arange(0, t, dt)
v1 = an * controller(v, ts) * torque(v)
v2 = m*Cr*np.sign(v)
v3 = 0.5*p*Cd*A*v**2
v4 = m*np.sin(theta)
vtot = v1+v2+v3+v4*(ts>=10)
return vtot/m
def uderivs(v, t):
return vr - v
def controller(currentV, time):
z = integrate.odeint(uderivs, currentV, time)
return kp*(vr-currentV) + ki*z.squeeze()
def velocity(desired, theta, time):
return integrate.odeint(vderivs, desired, time)
plt.plot(t, velocity(v0, theta, t), 'k-', lw=2, label='velocity')
plt.plot(t, controller(v0, t), 'r', lw=2, label='controller')
plt.legend()
plt.show()
</code></pre>
|
python|arrays|numpy|odeint
| 1
|
3,258
| 33,451,800
|
Decimal to binary Half-Precision IEEE 754 in Python
|
<p>I was only able to convert a decimal into a binary single-precision IEEE754, using the <code>struct.pack</code> module, or do the opposite (float16 or float32) using <code>numpy.frombuffer</code></p>
<p>Is it possible to convert a decimal to a binary half precision floating point, using Numpy?</p>
<p>I need to print the result of the conversion, so if I type <code>"117.0"</code>, it should print <code>"0101011101010000"</code> </p>
|
<blockquote>
<p>if I type "117.0", it should print "0101011101010000"</p>
</blockquote>
<pre><code>>>> import numpy as np
>>> bin(np.float16(117.0).view('H'))[2:].zfill(16)
'0101011101010000'
</code></pre>
<p><a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.ndarray.view.html" rel="nofollow noreferrer"><code>.view('H')</code></a> reinterprets the memory occupied by the <code>float16</code> value as an unsigned integer.</p>
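<p>The same trick works in reverse for the round trip; a small sketch (helper names are mine):</p>

```python
import numpy as np

def float16_to_bits(x):
    """Half-precision bit pattern of x as a 16-character string."""
    return bin(np.float16(x).view('H'))[2:].zfill(16)

def bits_to_float16(bits):
    """Reinterpret a 16-bit pattern as a half-precision float."""
    return np.uint16(int(bits, 2)).view(np.float16)

bits = float16_to_bits(117.0)
print(bits)                   # 0101011101010000
print(bits_to_float16(bits))  # 117.0
```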
|
python|numpy|floating-point|precision|ieee-754
| 12
|
3,259
| 9,155,478
|
How to try-except an illegal matrix operation due to singularity in NumPy
|
<p>In NumPy, I'm trying to use <code>linalg</code> to compute matrix inverses at each step of a Newton-Raphson scheme (the problem size is small intentionally so that we can invert analytically computed Hessian matrices). However, after I get far along towards convergence, the Hessian gets close to singular.</p>
<p>Is there any method within NumPy that lets me test whether a matrix is considered singular (computing determinant is not robust enough)? Ideally, it would be nice if there's a way to use a <code>try</code> <code>except</code> block to catch NumPy's singular array error.</p>
<p>How would I do this? The NumPy error given at the terminal is:</p>
<pre><code>raise LinAlgError, 'Singular matrix'
numpy.linalg.linalg.LinAlgError: Singular matrix
</code></pre>
|
<p>The syntax would be like this:</p>
<pre><code>import numpy as np
try:
# your code that will (maybe) throw
except np.linalg.LinAlgError as err:
if 'Singular matrix' in str(err):
# your error handling block
else:
raise
</code></pre>
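<p>An alternative (or complement) to catching the exception is checking the condition number before inverting; a sketch, where the threshold is an arbitrary illustration:</p>

```python
import numpy as np

def safe_inverse(a, cond_limit=1e12):
    """Return the inverse of `a`, or None if `a` is (near-)singular."""
    if np.linalg.cond(a) > cond_limit:  # inf (or huge) for a singular matrix
        return None
    try:
        return np.linalg.inv(a)
    except np.linalg.LinAlgError:       # belt and braces
        return None

well_conditioned = safe_inverse(np.eye(2))
singular = safe_inverse(np.array([[1.0, 2.0], [2.0, 4.0]]))
print(singular)  # None
```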
|
python|numpy|linear-algebra
| 51
|
3,260
| 66,349,979
|
How to fillna limited by date in a groupby
|
<p>I am working with the following Dataframe that has some NaN values inside.</p>
<pre><code>df = pd.DataFrame({'day':[pd.datetime(2020,1,1),pd.datetime(2020,1,3),pd.datetime(2020,1,4),pd.datetime(2020,1,5),pd.datetime(2020,1,6),pd.datetime(2020,1,7),pd.datetime(2020,1,8),pd.datetime(2020,1,8),pd.datetime(2020,6,9)],
'TradeID':['01','02','03','04','05','06','07','08','09'],
'Security': ['GOOGLE', 'GOOGLE', 'APPLE', 'GOOGLE', 'GOOGLE','GOOGLE','GOOGLE','GOOGLE','GOOGLE'],
'ID': ['ID001', 'ID001', 'ID001', 'ID001', 'ID001','ID001','ID001','ID001','ID001'],
'BSType': ['B', 'S', 'B', 'B', 'B','S','S','S','B'],
'Price':[105.901,106.969,np.nan,107.037,107.038,107.136,np.nan,107.25,np.nan],
'Quantity':[1000000,-300000,np.nan,7500000,100000,-100000,np.nan,-7800000,np.nan]
})
Out[318]:
day TradeID Security ID BSType Price Quantity
0 2020-01-01 01 GOOGLE ID001 B 105.901 1000000.0
1 2020-01-03 02 GOOGLE ID001 S 106.969 -300000.0
2 2020-01-04 03 APPLE ID001 B NaN NaN
3 2020-01-05 04 GOOGLE ID001 B 107.037 7500000.0
4 2020-01-06 05 GOOGLE ID001 B 107.038 100000.0
5 2020-01-07 06 GOOGLE ID001 S 107.136 -100000.0
6 2020-01-08 07 GOOGLE ID001 S NaN NaN
7 2020-01-08 08 GOOGLE ID001 S 107.250 -7800000.0
8 2020-06-09 09 GOOGLE ID001 B NaN NaN
</code></pre>
<p>My goal is to fillna with the method ffill, only for the same Security and same ID, and limited to the next 60 days (not the next 60 observations, because there may be more than one observation per day).</p>
<p>Here is what I tried, but it is not working; it does not replace any of my NaN values:</p>
<pre><code>df=df.groupby(['day',"Security","ID"], as_index=False).fillna(method='ffill',limit=60)
</code></pre>
<p>The expected output should look like this: (Note that only the second pair of NaN values have been filled)</p>
<ul>
<li>The first pair of NaN values should not be filled because is not the same Security.</li>
<li>The second pair of NaN values should be filled with the previous observation.</li>
<li>The third pair on NaN should not be filled because they are out of the 60 days scope.</li>
</ul>
<pre><code>Out[320]:
day TradeID Security ID BSType Price Quantity
0 2020-01-01 01 GOOGLE ID001 B 105.901 1000000.0
1 2020-01-03 02 GOOGLE ID001 S 106.969 -300000.0
2 2020-01-04 03 APPLE ID001 B NaN NaN
3 2020-01-05 04 GOOGLE ID001 B 107.037 7500000.0
4 2020-01-06 05 GOOGLE ID001 B 107.038 100000.0
5 2020-01-07 06 GOOGLE ID001 S 107.136 -100000.0
6 2020-01-08 07 GOOGLE ID001 S 107.136 -100000.0
7 2020-01-08 08 GOOGLE ID001 S 107.250 -7800000.0
8 2020-06-09 09 GOOGLE ID001 B NaN NaN
</code></pre>
<p>So, my question is: is there a plausible way to fill NaN values while limiting the ffill method to a certain period?</p>
<p>Thank you very much for your time.</p>
|
<p>You can <code>group</code> the dataframe on columns <code>Security</code> and <code>ID</code> along with an additional <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Grouper.html" rel="nofollow noreferrer"><code>grouper</code></a> for column <code>day</code> with frequency set to <code>60 days</code> then use <code>ffill</code> to forward fill the values for the next <code>60 days</code>:</p>
<pre><code>g = pd.Grouper(key='day', freq='60d')
df.assign(**df.groupby(["Security","ID", g]).ffill())
</code></pre>
<hr />
<pre><code> day TradeID Security ID BSType Price Quantity
0 2020-01-01 01 GOOGLE ID001 B 105.901 1000000.0
1 2020-01-03 02 GOOGLE ID001 S 106.969 -300000.0
2 2020-01-04 03 APPLE ID001 B NaN NaN
3 2020-01-05 04 GOOGLE ID001 B 107.037 7500000.0
4 2020-01-06 05 GOOGLE ID001 B 107.038 100000.0
5 2020-01-07 06 GOOGLE ID001 S 107.136 -100000.0
6 2020-01-08 07 GOOGLE ID001 S 107.136 -100000.0
7 2020-01-08 08 GOOGLE ID001 S 107.250 -7800000.0
8 2020-06-09 09 GOOGLE ID001 B NaN NaN
</code></pre>
|
python|pandas|group-by|fillna
| 1
|
3,261
| 66,588,116
|
Why does the `title_text` argument of `fig.update_layout` appear not to be working for Plotly table in Jupyter?
|
<p>I'm showing a Pandas DataFrame in a Plotly Figure Factory table, and the arguments in <code>fig.update_layout</code> are all working as expected except for <code>title_text</code> (example <a href="https://plotly.com/python/figure-factory-table/" rel="nofollow noreferrer">here</a>).</p>
<pre><code>import pandas as pd
import plotly.figure_factory as ff
df = pd.DataFrame({
'member': 'A B C D E F G'.split(' '),
'amount': [515, 45, 315, 321, 43, 244, 433]
})
fig = ff.create_table(df)
fig.update_layout(
    title_text='Titletext',
autosize=False,
width=150*df.shape[1],
height=40*df.shape[0],
)
fig.show()
</code></pre>
<p>This generates the table with the right cell sizes, but no title <em>showing in the Jupyter Notebook output</em>.</p>
<p><a href="https://i.stack.imgur.com/wsf1Z.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/wsf1Z.png" alt="enter image description here" /></a></p>
|
<p>You may need to adjust the margins of the plot.
A similar issue was tagged on github: <a href="https://www.github.com/plotly/plotly.py/issues/2795" rel="nofollow noreferrer">https://www.github.com/plotly/plotly.py/issues/2795</a>.</p>
<p>The solution: <code>fig.update_layout({'margin': {'t': 50}})</code>. This creates a 50px margin at the top of the figure, so that the title isn't cut off from the plot.</p>
|
python|pandas|jupyter-notebook|plotly|jupyter-lab
| 1
|
3,262
| 66,481,456
|
Creating loop that pulls back every three months until 6 months are pulled - Python
|
<p>How can I create a loop that prints a date every 3 months going back, until it has produced 6 months? For example, I want the loop to pull back dates like 202012, 202009, 202006, 202003, 201912, 201909, essentially stepping back 3 months at a time until it has done so 6 times.</p>
|
<p>Assuming that when you say 'gives me back 6 months' you mean generates 6 different months, including the starting month:</p>
<pre><code>import datetime
import dateutil.relativedelta
date = datetime.datetime.strptime("202003", "%Y%m")
delta = dateutil.relativedelta.relativedelta(months=3)
print(date.strftime("%Y%m"))
for _ in range(5):
    date -= delta
    print(date.strftime("%Y%m"))
</code></pre>
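<p>A stdlib-only variant that carries the year/month arithmetic by hand (no dateutil needed), stepping back 3 months at a time; the function name is mine:</p>

```python
def months_back(year, month, step=3, count=6):
    """YYYYMM strings, starting at (year, month) and stepping back
    `step` months at a time, `count` values in total."""
    out = []
    for _ in range(count):
        out.append(f"{year}{month:02d}")
        month -= step
        while month <= 0:
            month += 12
            year -= 1
    return out

print(months_back(2020, 12))
# ['202012', '202009', '202006', '202003', '201912', '201909']
```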
|
python|pandas|dataframe|loops
| 2
|
3,263
| 66,628,140
|
Python pandas to_csv issue
|
<p>I am running a Python program which takes CSV data as a string; I need to sort its columns and return the output as a string.</p>
<p>input= "Bet,Charle,Daniell,Ada,Eri\n17945,10091,10088,3907,10132\n2,12,13,48,11"</p>
<p>desired output = "Ada,Bet,Charle,Daniell,Eri\n3907,17945,10091,10088,10132\n48,2,12,13,11"</p>
<pre><code>import pandas as pd
from io import StringIO
def sort_csv(input):
str_data = StringIO(input)
output_df = pd.read_csv(str_data, sep=",")
output_df = output_df.sort_index(axis=1)
output_df.to_csv(path_or_buf='temp.csv', index=False)
data = open('temp.csv').read()
return data
</code></pre>
<p>I am facing the error below:
<code>TypeError: slice indices must be integers or None or have an __index__ method</code></p>
<p>The pandas package is upgraded, and I am using Python 3.4.
Any help?</p>
|
<p>Here is an implementation that avoids using <code>pandas</code>, and allows for the ability to sort by different columns. Once you break your input string into a list of lists, your specified sorting column (i.e. the list you want to use to sort from your list of lists) is sorted whilst keeping note of the original indices, which are then passed as the sorting key to the other lists. Afterwards, you simply re-join the string together:</p>
<pre><code>x = "Bet,Charle,Daniell,Ada,Eri\n17945,10091,10088,3907,10132\n2,12,13,48,11"
col_to_sort = 0
sublists = [i.split(',') for i in x.split()]
sorted_list = [[l[i] for i in sorted(range(len(sublists[col_to_sort])), key=lambda k: sublists[col_to_sort][k])] for l in sublists]
print(repr('\n'.join(','.join(k) for k in sorted_list)))
</code></pre>
<p>Yields:</p>
<pre><code>'Ada,Bet,Charle,Daniell,Eri\n3907,17945,10091,10088,10132\n48,2,12,13,11'
</code></pre>
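<p>A variant of the same idea using the stdlib <code>csv</code> module, which also copes with quoted fields (the helper name is mine):</p>

```python
import csv
import io

def sort_csv_columns(data):
    """Sort CSV columns alphabetically by header, keeping rows aligned."""
    rows = list(csv.reader(io.StringIO(data)))
    order = sorted(range(len(rows[0])), key=lambda i: rows[0][i])
    return '\n'.join(','.join(row[i] for i in order) for row in rows)

src = "Bet,Charle,Daniell,Ada,Eri\n17945,10091,10088,3907,10132\n2,12,13,48,11"
print(sort_csv_columns(src))
# Ada,Bet,Charle,Daniell,Eri
# 3907,17945,10091,10088,10132
# 48,2,12,13,11
```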
|
python|pandas
| 2
|
3,264
| 16,189,386
|
Random bits array by given probabilities with numpy
|
<p>I have a deterministic neural network and I want to make it stochastic.</p>
<p>Two questions:</p>
<ol>
<li>I'm not sure if it means that I need to use the result of the sigmoid to determine the probabilities for the output, or if the probabilities are simply the neurons input, and a sigmoid function is now redundant.</li>
<li>How to do that efficiently with numpy? I know how to generate random bits, but how do you do that with given probabilities inside a large array? (My current sigmoid function is tanh if it matters)</li>
</ol>
|
<ol>
<li>The sigmoid function is still required, as the backpropagation works on computing the derivative of the sigmoid function, and not whether or not the neuron fired.</li>
<li><p>After computing the activation as before, I now run the result array x through this: </p>
<p><code>return numpy.random.ranf(x.shape) < x</code></p>
<p>My timing for this is <code>3.03323280772e-05</code></p>
<p>Also note that numpy treats boolean values as if they were 1 and 0, so there is no need to convert the result back to int/float. Because this is now 0 and 1, I had to change my code a bit - before, I used -1 and 1 for target values and inputs.</p></li>
</ol>
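<p>In modern numpy the same sampling is usually written with a <code>Generator</code> (<code>numpy.random.ranf</code> is deprecated); a sketch, assuming activations are already in [0, 1] (tanh outputs in [-1, 1] would need rescaling first):</p>

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def stochastic_fire(activations):
    """Each neuron fires (True) with probability equal to its activation."""
    return rng.random(activations.shape) < activations

acts = np.array([0.0, 1.0, 0.5])
fired = stochastic_fire(acts)
print(fired[0], fired[1])  # False True: p=0 never fires, p=1 always fires
```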
|
numpy|neural-network|stochastic-process
| 0
|
3,265
| 57,496,285
|
Why is the memory in GPU still in use after clearing the object?
|
<p>Starting with zero usage:</p>
<pre><code>>>> import gc
>>> import GPUtil
>>> import torch
>>> GPUtil.showUtilization()
| ID | GPU | MEM |
------------------
| 0 | 0% | 0% |
| 1 | 0% | 0% |
| 2 | 0% | 0% |
| 3 | 0% | 0% |
</code></pre>
<p>Then I create a big enough tensor and hog the memory:</p>
<pre><code>>>> x = torch.rand(10000,300,200).cuda()
>>> GPUtil.showUtilization()
| ID | GPU | MEM |
------------------
| 0 | 0% | 26% |
| 1 | 0% | 0% |
| 2 | 0% | 0% |
| 3 | 0% | 0% |
</code></pre>
<p>Then I tried several ways to see if the tensor disappears.</p>
<p><strong>Attempt 1:</strong> Detach, send to CPU and overwrite the variable</p>
<p><strong>No, doesn't work.</strong></p>
<pre><code>>>> x = x.detach().cpu()
>>> GPUtil.showUtilization()
| ID | GPU | MEM |
------------------
| 0 | 0% | 26% |
| 1 | 0% | 0% |
| 2 | 0% | 0% |
| 3 | 0% | 0% |
</code></pre>
<p><strong>Attempt 2:</strong> Delete the variable</p>
<p><strong>No, this doesn't work either</strong></p>
<pre><code>>>> del x
>>> GPUtil.showUtilization()
| ID | GPU | MEM |
------------------
| 0 | 0% | 26% |
| 1 | 0% | 0% |
| 2 | 0% | 0% |
| 3 | 0% | 0% |
</code></pre>
<p><strong>Attempt 3:</strong> Use the <code>torch.cuda.empty_cache()</code> function</p>
<p>Seems to work, but it seems that there are some lingering overheads...</p>
<pre><code>>>> torch.cuda.empty_cache()
>>> GPUtil.showUtilization()
| ID | GPU | MEM |
------------------
| 0 | 0% | 5% |
| 1 | 0% | 0% |
| 2 | 0% | 0% |
| 3 | 0% | 0% |
</code></pre>
<p><strong>Attempt 4:</strong> Maybe clear the garbage collector.</p>
<p><strong>No, 5% is still being hogged</strong></p>
<pre><code>>>> gc.collect()
0
>>> GPUtil.showUtilization()
| ID | GPU | MEM |
------------------
| 0 | 0% | 5% |
| 1 | 0% | 0% |
| 2 | 0% | 0% |
| 3 | 0% | 0% |
</code></pre>
<p><strong>Attempt 5:</strong> Try deleting <code>torch</code> altogether (as if that would work when <code>del x</code> didn't work -_- )</p>
<p><strong>No, it doesn't...*</strong></p>
<pre><code>>>> del torch
>>> GPUtil.showUtilization()
| ID | GPU | MEM |
------------------
| 0 | 0% | 5% |
| 1 | 0% | 0% |
| 2 | 0% | 0% |
| 3 | 0% | 0% |
</code></pre>
<p>And then I tried to check <code>gc.get_objects()</code> and it looks like there's still quite a lot of odd <code>THCTensor</code> stuff inside... </p>
<p><strong>Any idea why is the memory still in use after clearing the cache?</strong></p>
|
<p>It looks like PyTorch's caching allocator reserves some fixed amount of memory even if there are no tensors, and this allocation is triggered by the first CUDA memory access
(<code>torch.cuda.empty_cache()</code> deletes unused tensors from the cache, but the cache itself still uses some memory).</p>
<p>Even with a tiny 1-element tensor, after <code>del</code> and <code>torch.cuda.empty_cache()</code>, <code>GPUtil.showUtilization(all=True)</code> reports exactly the same amount of GPU memory used as for a huge tensor (and both <code>torch.cuda.memory_cached()</code> and <code>torch.cuda.memory_allocated()</code> return zero).</p>
|
python|memory-leaks|garbage-collection|gpu|pytorch
| 15
|
3,266
| 57,374,871
|
How to create a New column variable with the data from all other columns using Pandas
|
<p>I am working on a data set, and want to create a new variable column based on the columns which are present in the data set:</p>
<p><a href="https://i.stack.imgur.com/2tJwA.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/2tJwA.jpg" alt="enter image description here"></a></p>
<p>I want to make a new variable from <code>CODE</code>, mapping all the related rows to the same number (8480-6 and 8462-4).</p>
|
<pre class="lang-py prettyprint-override"><code> import pandas as pd
data = pd.DataFrame(
{'Date':['10/2/2011', '11/2/2011', '12/2/2011','13/2/2011'],
'Event':['Music', 'Poetry', 'Theater', 'Comedy'],
'Cost':[10000, 5000, 15000, 2000]})
data['Discounted_Price'] = data['Cost'] - (0.1 * data['Cost'])
print(data)
</code></pre>
<p>This adds a discount column to the data frame. I think this will be helpful.</p>
|
pandas
| 1
|
3,267
| 57,606,801
|
pandas style options to latex
|
<p>Pandas has two nice functionalities I use a lot - that's the <code>df.style...</code> option and the <code>df.to_latex()</code> call. But I do not know how to combine both.</p>
<p>The .style option makes looking at tables much more pleasant. It lets you grasp information rapidly because of visual enhancements. This works perfectly in a jupyter notebook, for example. Here is an arbitrary example I copied from the <a href="https://pandas.pydata.org/pandas-docs/stable/user_guide/style.html" rel="noreferrer">documentation</a>.</p>
<pre><code>df.style.bar(subset=['A', 'B'], align='mid', color=['#d65f5f', '#5fba7d'])
</code></pre>
<p>This yields:</p>
<p><a href="https://i.stack.imgur.com/VR69Q.png" rel="noreferrer"><img src="https://i.stack.imgur.com/VR69Q.png" alt="Dataframe with style columns"></a></p>
<p>However, as nice as this looks in a jupyter notebook, I can not put this to latex code. I get the following error message instead, if chaining a 'to_latex()' call at the end of my visual enhancements: <code>AttributeError: 'Styler' object has no attribute</code>. Does that mean it's simply not possible, because the displayed colorful table is not a DataFrame object any more, but a Styler object? </p>
<p>Is there any workaround? At least with easier tables, let's say where only cells have a single background color with respect to their value, instead of a 'complicated' bar graph.</p>
|
<p>Instead of trying to export this formatting to bulky LaTeX markup, I would go the route explored already over in TeX.SE: add the functionality as LaTeX code that draws similar formatting based on the same data.</p>
<ul>
<li>Red/green value bars:<br>
<a href="https://tex.stackexchange.com/questions/81994/partially-coloring-cell-background-with-histograms">Partially coloring cell background with histograms</a></li>
<li>Coloured cell backgrounds:<br>
<a href="https://tex.stackexchange.com/q/174998/30821">Are there an easy way to coloring tables depending on the value in each cell?
</a></li>
<li>Shaded backgrounds (similar, but points to excellent package <code>pgfplotstable</code>):<br>
<a href="https://tex.stackexchange.com/questions/42444/parametrize-shading-in-table-through-tikz">Parametrize shading in table through TikZ</a></li>
</ul>
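<p>For the simple coloured-cell case, one workaround sketch is to generate the <code>\cellcolor</code> commands yourself from the DataFrame values (this assumes <code>\usepackage[table]{xcolor}</code> in the LaTeX preamble; the colour choices and helper name are illustrative):</p>

```python
import pandas as pd

def colored_cell(value):
    # map sign to a background colour; requires \usepackage[table]{xcolor}
    color = 'green!30' if value >= 0 else 'red!30'
    return r'\cellcolor{%s}%.2f' % (color, value)

df = pd.DataFrame({'A': [0.82, -0.41]}, index=['x', 'y'])
rows = [r'%s & %s \\' % (idx, colored_cell(v)) for idx, v in df['A'].items()]
print(rows[0])  # x & \cellcolor{green!30}0.82 \\
```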
|
python|pandas|dataframe|latex
| 2
|
3,268
| 57,579,641
|
What changes should be made to this code to avoid running out of GPU memory
|
<h2>problem</h2>
<p>I am training a CNN model on my GPU with TensorFlow, but I am running out of memory.</p>
<h2>things I tried</h2>
<p>I have tried changing my batch_size; there was a positive change, but eventually it still ran out of memory.</p>
<h2>CODE</h2>
<pre><code>model = Sequential()
model.add(Conv2D(64, (3, 3), input_shape=X.shape[1:]))
model.add(Activation("relu"))
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Conv2D(64, (3,3)))
model.add(Activation("relu"))
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Flatten())
model.add(Dense(64))
model.add(Dense(1))
model.add(Activation("sigmoid"))
model.compile(loss="binary_crossentropy",optimizer="adam",metrics=
['accuracy'])
model.fit(X, Y, batch_size=32, validation_split=0.1)
</code></pre>
<h2>OUTPUT</h2>
<pre><code>C:\Anaconda3\envs\tutorial\pythonw.exe "C:/Users/roshaan zafar/PycharmProjects/InternshipRiseTech/main.py"
WARNING: Logging before flag parsing goes to stderr.
W0820 13:05:23.726494 24488 deprecation.py:506] From C:\Anaconda3\envs\tutorial\lib\site-packages\tensorflow\python\ops\init_ops.py:1251: calling VarianceScaling.__init__ (from tensorflow.python.ops.init_ops) with dtype is deprecated and will be removed in a future version.
Instructions for updating:
Call initializer instance with the dtype argument instead of passing it to the constructor
W0820 13:05:23.817250 24488 deprecation.py:323] From C:\Anaconda3\envs\tutorial\lib\site-packages\tensorflow\python\ops\nn_impl.py:180: add_dispatch_support.<locals>.wrapper (from tensorflow.python.ops.array_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.where in 2.0, which has the same broadcast rule as np.where
Train on 360 samples, validate on 40 samples
2019-08-20 13:05:24.028720: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2
2019-08-20 13:05:24.030744: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library nvcuda.dll
2019-08-20 13:05:24.976333: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1640] Found device 0 with properties:
name: GeForce GTX 1070 major: 6 minor: 1 memoryClockRate(GHz): 1.645
pciBusID: 0000:01:00.0
2019-08-20 13:05:24.976601: I tensorflow/stream_executor/platform/default/dlopen_checker_stub.cc:25] GPU libraries are statically linked, skip dlopen check.
2019-08-20 13:05:24.977484: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1763] Adding visible gpu devices: 0
2019-08-20 13:05:25.734584: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1181] Device interconnect StreamExecutor with strength 1 edge matrix:
2019-08-20 13:05:25.734785: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1187] 0
2019-08-20 13:05:25.734905: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1200] 0: N
2019-08-20 13:05:25.735694: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1326] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 6376 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1070, pci bus id: 0000:01:00.0, compute capability: 6.1)
2019-08-20 13:05:26.180767: W tensorflow/core/framework/allocator.cc:107] Allocation of 1240006656 exceeds 10% of system memory.
2019-08-20 13:05:26.834340: W tensorflow/core/framework/allocator.cc:107] Allocation of 1240006656 exceeds 10% of system memory.
2019-08-20 13:05:27.476075: W tensorflow/core/framework/allocator.cc:107] Allocation of 1240006656 exceeds 10% of system memory.
2019-08-20 13:05:28.102630: W tensorflow/core/framework/allocator.cc:107] Allocation of 1240006656 exceeds 10% of system memory.
2019-08-20 13:05:28.715843: W tensorflow/core/framework/allocator.cc:107] Allocation of 1240006656 exceeds 10% of system memory.
2019-08-20 13:05:47.982488: W tensorflow/core/common_runtime/bfc_allocator.cc:314] Allocator (GPU_0_bfc) ran out of memory trying to allocate 9.34GiB (rounded to 10029662208). Current allocation summary follows.
2019-08-20 13:05:47.983224: I tensorflow/core/common_runtime/bfc_allocator.cc:764] Bin (256): Total Chunks: 47, Chunks in use: 47. 11.8KiB allocated for chunks. 11.8KiB in use in bin. 1.5KiB client-requested in use in bin.
2019-08-20 13:05:47.983956: I tensorflow/core/common_runtime/bfc_allocator.cc:764] Bin (512): Total Chunks: 0, Chunks in use: 0. 0B allocated for chunks. 0B in use in bin. 0B client-requested in use in bin.
2019-08-20 13:05:47.984651: I tensorflow/core/common_runtime/bfc_allocator.cc:764] Bin (1024): Total Chunks: 1, Chunks in use: 1. 1.3KiB allocated for chunks. 1.3KiB in use in bin. 1.0KiB client-requested in use in bin.
2019-08-20 13:05:47.985413: I tensorflow/core/common_runtime/bfc_allocator.cc:764] Bin (2048): Total Chunks: 0, Chunks in use: 0. 0B allocated for chunks. 0B in use in bin. 0B client-requested in use in bin.
2019-08-20 13:05:47.986243: I tensorflow/core/common_runtime/bfc_allocator.cc:764] Bin (4096): Total Chunks: 2, Chunks in use: 2. 13.5KiB allocated for chunks. 13.5KiB in use in bin. 13.5KiB client-requested in use in bin.
2019-08-20 13:05:47.988224: I tensorflow/core/common_runtime/bfc_allocator.cc:764] Bin (8192): Total Chunks: 0, Chunks in use: 0. 0B allocated for chunks. 0B in use in bin. 0B client-requested in use in bin.
2019-08-20 13:05:47.988864: I tensorflow/core/common_runtime/bfc_allocator.cc:764] Bin (16384): Total Chunks: 0, Chunks in use: 0. 0B allocated for chunks. 0B in use in bin. 0B client-requested in use in bin.
2019-08-20 13:05:47.989820: I tensorflow/core/common_runtime/bfc_allocator.cc:764] Bin (32768): Total Chunks: 0, Chunks in use: 0. 0B allocated for chunks. 0B in use in bin. 0B client-requested in use in bin.
2019-08-20 13:05:47.990495: I tensorflow/core/common_runtime/bfc_allocator.cc:764] Bin (65536): Total Chunks: 0, Chunks in use: 0. 0B allocated for chunks. 0B in use in bin. 0B client-requested in use in bin.
2019-08-20 13:05:47.991146: I tensorflow/core/common_runtime/bfc_allocator.cc:764] Bin (131072): Total Chunks: 2, Chunks in use: 2. 288.0KiB allocated for chunks. 288.0KiB in use in bin. 288.0KiB client-requested in use in bin.
2019-08-20 13:05:47.992567: I tensorflow/core/common_runtime/bfc_allocator.cc:764] Bin (262144): Total Chunks: 0, Chunks in use: 0. 0B allocated for chunks. 0B in use in bin. 0B client-requested in use in bin.
2019-08-20 13:05:47.993545: I tensorflow/core/common_runtime/bfc_allocator.cc:764] Bin (524288): Total Chunks: 0, Chunks in use: 0. 0B allocated for chunks. 0B in use in bin. 0B client-requested in use in bin.
2019-08-20 13:05:47.994186: I tensorflow/core/common_runtime/bfc_allocator.cc:764] Bin (1048576): Total Chunks: 0, Chunks in use: 0. 0B allocated for chunks. 0B in use in bin. 0B client-requested in use in bin.
2019-08-20 13:05:47.994859: I tensorflow/core/common_runtime/bfc_allocator.cc:764] Bin (2097152): Total Chunks: 0, Chunks in use: 0. 0B allocated for chunks. 0B in use in bin. 0B client-requested in use in bin.
2019-08-20 13:05:47.995569: I tensorflow/core/common_runtime/bfc_allocator.cc:764] Bin (4194304): Total Chunks: 0, Chunks in use: 0. 0B allocated for chunks. 0B in use in bin. 0B client-requested in use in bin.
2019-08-20 13:05:47.996235: I tensorflow/core/common_runtime/bfc_allocator.cc:764] Bin (8388608): Total Chunks: 0, Chunks in use: 0. 0B allocated for chunks. 0B in use in bin. 0B client-requested in use in bin.
2019-08-20 13:05:47.996924: I tensorflow/core/common_runtime/bfc_allocator.cc:764] Bin (16777216): Total Chunks: 0, Chunks in use: 0. 0B allocated for chunks. 0B in use in bin. 0B client-requested in use in bin.
2019-08-20 13:05:47.997650: I tensorflow/core/common_runtime/bfc_allocator.cc:764] Bin (33554432): Total Chunks: 0, Chunks in use: 0. 0B allocated for chunks. 0B in use in bin. 0B client-requested in use in bin.
2019-08-20 13:05:47.998404: I tensorflow/core/common_runtime/bfc_allocator.cc:764] Bin (67108864): Total Chunks: 0, Chunks in use: 0. 0B allocated for chunks. 0B in use in bin. 0B client-requested in use in bin.
2019-08-20 13:05:47.999135: I tensorflow/core/common_runtime/bfc_allocator.cc:764] Bin (134217728): Total Chunks: 0, Chunks in use: 0. 0B allocated for chunks. 0B in use in bin. 0B client-requested in use in bin.
2019-08-20 13:05:47.999876: I tensorflow/core/common_runtime/bfc_allocator.cc:764] Bin (268435456): Total Chunks: 5, Chunks in use: 3. 6.23GiB allocated for chunks. 2.75GiB in use in bin. 2.75GiB client-requested in use in bin.
2019-08-20 13:05:48.000650: I tensorflow/core/common_runtime/bfc_allocator.cc:780] Bin for 9.34GiB was 256.00MiB, Chunk State:
2019-08-20 13:05:48.001093: I tensorflow/core/common_runtime/bfc_allocator.cc:786] Size: 450.00MiB | Requested Size: 450.00MiB | in_use: 0 | bin_num: 20, prev: Size: 256B | Requested Size: 8B | in_use: 1 | bin_num: -1, next: Size: 256B | Requested Size: 128B | in_use: 1 | bin_num: -1
2019-08-20 13:05:48.003835: I tensorflow/core/common_runtime/bfc_allocator.cc:786] Size: 3.04GiB | Requested Size: 0B | in_use: 0 | bin_num: 20, prev: Size: 256B | Requested Size: 4B | in_use: 1 | bin_num: -1
2019-08-20 13:05:48.004577: I tensorflow/core/common_runtime/bfc_allocator.cc:793] Next region of size 6686052608
2019-08-20 13:05:48.013828: I tensorflow/core/common_runtime/bfc_allocator.cc:800] InUse at 0000000705400000 next 1 of size 1280
2019-08-20 13:05:48.014294: I tensorflow/core/common_runtime/bfc_allocator.cc:800] InUse at 0000000705400500 next 2 of size 256
2019-08-20 13:05:48.014708: I tensorflow/core/common_runtime/bfc_allocator.cc:800] InUse at 0000000705400600 next 3 of size 256
2019-08-20 13:05:48.015131: I tensorflow/core/common_runtime/bfc_allocator.cc:800] InUse at 0000000705400700 next 4 of size 256
2019-08-20 13:05:48.015622: I tensorflow/core/common_runtime/bfc_allocator.cc:800] InUse at 0000000705400800 next 5 of size 256
2019-08-20 13:05:48.016053: I tensorflow/core/common_runtime/bfc_allocator.cc:800] InUse at 0000000705400900 next 6 of size 256
2019-08-20 13:05:48.016492: I tensorflow/core/common_runtime/bfc_allocator.cc:800] InUse at 0000000705400A00 next 7 of size 256
2019-08-20 13:05:48.016914: I tensorflow/core/common_runtime/bfc_allocator.cc:800] InUse at 0000000705400B00 next 8 of size 256
2019-08-20 13:05:48.017347: I tensorflow/core/common_runtime/bfc_allocator.cc:800] InUse at 0000000705400C00 next 9 of size 256
2019-08-20 13:05:48.017774: I tensorflow/core/common_runtime/bfc_allocator.cc:800] InUse at 0000000705400D00 next 10 of size 256
2019-08-20 13:05:48.018202: I tensorflow/core/common_runtime/bfc_allocator.cc:800] InUse at 0000000705400E00 next 11 of size 256
2019-08-20 13:05:48.019604: I tensorflow/core/common_runtime/bfc_allocator.cc:800] InUse at 0000000705400F00 next 12 of size 256
2019-08-20 13:05:48.020000: I tensorflow/core/common_runtime/bfc_allocator.cc:800] InUse at 0000000705401000 next 13 of size 256
2019-08-20 13:05:48.020407: I tensorflow/core/common_runtime/bfc_allocator.cc:800] InUse at 0000000705401100 next 14 of size 256
2019-08-20 13:05:48.020801: I tensorflow/core/common_runtime/bfc_allocator.cc:800] InUse at 0000000705401200 next 15 of size 256
2019-08-20 13:05:48.021203: I tensorflow/core/common_runtime/bfc_allocator.cc:800] InUse at 0000000705401300 next 16 of size 256
2019-08-20 13:05:48.022177: I tensorflow/core/common_runtime/bfc_allocator.cc:800] InUse at 0000000705401400 next 17 of size 256
2019-08-20 13:05:48.022845: I tensorflow/core/common_runtime/bfc_allocator.cc:800] InUse at 0000000705401500 next 18 of size 256
2019-08-20 13:05:48.023458: I tensorflow/core/common_runtime/bfc_allocator.cc:800] InUse at 0000000705401600 next 19 of size 1240006656
2019-08-20 13:05:48.024110: I tensorflow/core/common_runtime/bfc_allocator.cc:800] InUse at 000000074F291600 next 20 of size 256
2019-08-20 13:05:48.024721: I tensorflow/core/common_runtime/bfc_allocator.cc:800] InUse at 000000074F291700 next 21 of size 147456
2019-08-20 13:05:48.025371: I tensorflow/core/common_runtime/bfc_allocator.cc:800] InUse at 000000074F2B5700 next 22 of size 6912
2019-08-20 13:05:48.026024: I tensorflow/core/common_runtime/bfc_allocator.cc:800] InUse at 000000074F2B7200 next 23 of size 256
2019-08-20 13:05:48.026686: I tensorflow/core/common_runtime/bfc_allocator.cc:800] InUse at 000000074F2B7300 next 24 of size 256
2019-08-20 13:05:48.027396: I tensorflow/core/common_runtime/bfc_allocator.cc:800] InUse at 000000074F2B7400 next 25 of size 1240006656
2019-08-20 13:05:48.027798: I tensorflow/core/common_runtime/bfc_allocator.cc:800] InUse at 0000000799147400 next 26 of size 147456
2019-08-20 13:05:48.028202: I tensorflow/core/common_runtime/bfc_allocator.cc:800] InUse at 000000079916B400 next 27 of size 6912
2019-08-20 13:05:48.028598: I tensorflow/core/common_runtime/bfc_allocator.cc:800] InUse at 000000079916CF00 next 28 of size 256
2019-08-20 13:05:48.028990: I tensorflow/core/common_runtime/bfc_allocator.cc:800] InUse at 000000079916D000 next 29 of size 256
2019-08-20 13:05:48.029868: I tensorflow/core/common_runtime/bfc_allocator.cc:800] InUse at 000000079916D100 next 30 of size 256
2019-08-20 13:05:48.030492: I tensorflow/core/common_runtime/bfc_allocator.cc:800] InUse at 000000079916D200 next 31 of size 256
2019-08-20 13:05:48.030887: I tensorflow/core/common_runtime/bfc_allocator.cc:800] InUse at 000000079916D300 next 32 of size 256
2019-08-20 13:05:48.031538: I tensorflow/core/common_runtime/bfc_allocator.cc:800] InUse at 000000079916D400 next 33 of size 256
2019-08-20 13:05:48.031931: I tensorflow/core/common_runtime/bfc_allocator.cc:800] InUse at 000000079916D500 next 34 of size 256
2019-08-20 13:05:48.032327: I tensorflow/core/common_runtime/bfc_allocator.cc:800] InUse at 000000079916D600 next 35 of size 256
2019-08-20 13:05:48.032722: I tensorflow/core/common_runtime/bfc_allocator.cc:800] InUse at 000000079916D700 next 36 of size 256
2019-08-20 13:05:48.033116: I tensorflow/core/common_runtime/bfc_allocator.cc:800] Free at 000000079916D800 next 37 of size 471859200
2019-08-20 13:05:48.034291: I tensorflow/core/common_runtime/bfc_allocator.cc:800] InUse at 00000007B536D800 next 38 of size 256
2019-08-20 13:05:48.034879: I tensorflow/core/common_runtime/bfc_allocator.cc:800] InUse at 00000007B536D900 next 39 of size 256
2019-08-20 13:05:48.035434: I tensorflow/core/common_runtime/bfc_allocator.cc:800] InUse at 00000007B536DA00 next 40 of size 256
2019-08-20 13:05:48.035832: I tensorflow/core/common_runtime/bfc_allocator.cc:800] InUse at 00000007B536DB00 next 41 of size 471859200
2019-08-20 13:05:48.036554: I tensorflow/core/common_runtime/bfc_allocator.cc:800] InUse at 00000007D156DB00 next 42 of size 256
2019-08-20 13:05:48.037253: I tensorflow/core/common_runtime/bfc_allocator.cc:800] InUse at 00000007D156DC00 next 43 of size 256
2019-08-20 13:05:48.037949: I tensorflow/core/common_runtime/bfc_allocator.cc:800] InUse at 00000007D156DD00 next 44 of size 256
2019-08-20 13:05:48.038697: I tensorflow/core/common_runtime/bfc_allocator.cc:800] InUse at 00000007D156DE00 next 45 of size 256
2019-08-20 13:05:48.039204: I tensorflow/core/common_runtime/bfc_allocator.cc:800] InUse at 00000007D156DF00 next 46 of size 256
2019-08-20 13:05:48.039676: I tensorflow/core/common_runtime/bfc_allocator.cc:800] InUse at 00000007D156E000 next 47 of size 256
2019-08-20 13:05:48.040135: I tensorflow/core/common_runtime/bfc_allocator.cc:800] InUse at 00000007D156E100 next 48 of size 256
2019-08-20 13:05:48.041145: I tensorflow/core/common_runtime/bfc_allocator.cc:800] InUse at 00000007D156E200 next 49 of size 256
2019-08-20 13:05:48.041535: I tensorflow/core/common_runtime/bfc_allocator.cc:800] InUse at 00000007D156E300 next 50 of size 256
2019-08-20 13:05:48.041819: I tensorflow/core/common_runtime/bfc_allocator.cc:800] InUse at 00000007D156E400 next 51 of size 256
2019-08-20 13:05:48.042130: I tensorflow/core/common_runtime/bfc_allocator.cc:800] InUse at 00000007D156E500 next 52 of size 256
2019-08-20 13:05:48.042426: I tensorflow/core/common_runtime/bfc_allocator.cc:800] InUse at 00000007D156E600 next 53 of size 256
2019-08-20 13:05:48.042713: I tensorflow/core/common_runtime/bfc_allocator.cc:800] InUse at 00000007D156E700 next 54 of size 256
2019-08-20 13:05:48.043016: I tensorflow/core/common_runtime/bfc_allocator.cc:800] InUse at 00000007D156E800 next 55 of size 256
2019-08-20 13:05:48.043276: I tensorflow/core/common_runtime/bfc_allocator.cc:800] InUse at 00000007D156E900 next 56 of size 256
2019-08-20 13:05:48.043572: I tensorflow/core/common_runtime/bfc_allocator.cc:800] Free at 00000007D156EA00 next 18446744073709551615 of size 3261998848
2019-08-20 13:05:48.043902: I tensorflow/core/common_runtime/bfc_allocator.cc:809] Summary of in-use Chunks by size:
2019-08-20 13:05:48.044196: I tensorflow/core/common_runtime/bfc_allocator.cc:812] 47 Chunks of size 256 totalling 11.8KiB
2019-08-20 13:05:48.044466: I tensorflow/core/common_runtime/bfc_allocator.cc:812] 1 Chunks of size 1280 totalling 1.3KiB
2019-08-20 13:05:48.044760: I tensorflow/core/common_runtime/bfc_allocator.cc:812] 2 Chunks of size 6912 totalling 13.5KiB
2019-08-20 13:05:48.045032: I tensorflow/core/common_runtime/bfc_allocator.cc:812] 2 Chunks of size 147456 totalling 288.0KiB
2019-08-20 13:05:48.045250: I tensorflow/core/common_runtime/bfc_allocator.cc:812] 1 Chunks of size 471859200 totalling 450.00MiB
2019-08-20 13:05:48.045553: I tensorflow/core/common_runtime/bfc_allocator.cc:812] 2 Chunks of size 1240006656 totalling 2.31GiB
2019-08-20 13:05:48.045830: I tensorflow/core/common_runtime/bfc_allocator.cc:816] Sum Total of in-use chunks: 2.75GiB
2019-08-20 13:05:48.046120: I tensorflow/core/common_runtime/bfc_allocator.cc:818] total_region_allocated_bytes_: 6686052608 memory_limit_: 6686052843 available bytes: 235 curr_region_allocation_bytes_: 13372105728
2019-08-20 13:05:48.046453: I tensorflow/core/common_runtime/bfc_allocator.cc:824] Stats:
Limit: 6686052843
InUse: 2952194560
MaxInUse: 3424053760
NumAllocs: 64
MaxAllocSize: 1240006656
2019-08-20 13:05:48.046834: W tensorflow/core/common_runtime/bfc_allocator.cc:319] **************************************______********________________________________________________
2019-08-20 13:05:48.052167: W tensorflow/core/framework/op_kernel.cc:1502] OP_REQUIRES failed at conv_ops.cc:486 : Resource exhausted: OOM when allocating tensor with shape[32,64,1278,958] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
2019-08-20 13:05:48.052167: W tensorflow/core/framework/op_kernel.cc:1502] OP_REQUIRES failed at conv_ops.cc:486 : Resource exhausted: OOM when allocating tensor with shape[32,64,1278,958] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
Traceback (most recent call last):
File "C:/Users/roshaan zafar/PycharmProjects/InternshipRiseTech/main.py", line 109, in <module>
model.fit(X, Y, batch_size=32, validation_split=0.1)
</code></pre>
|
<p>Since your network is pretty small and you are taking only 32 images per batch, it might be the case that your images are of very high resolution. In that case you can try the following:</p>
<ul>
<li>Try reducing the size of the images, taking care to retain the original aspect ratio while doing this</li>
<li>Try extracting random patches of smaller size from the images, again at the same resolution</li>
<li>Lastly, you can reduce batch_size even further, to 8 or 4, if the above solutions don't work</li>
</ul>
<p>GPU memory generally fills up more quickly than CPU memory of the same size. Hope this helps.</p>
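<p>The image-resizing suggestion can be sketched as a pure size computation (the helper below is illustrative, not part of the question's code; the 1280x960 figure is read off the <code>[32,64,1278,958]</code> tensor shape in the OOM log):</p>

```python
def resized_shape(height, width, max_side):
    """Scale (height, width) so the longer side equals max_side,
    preserving the aspect ratio."""
    scale = max_side / max(height, width)
    return round(height * scale), round(width * scale)

# shrinking roughly 1280x960 inputs to a 320-pixel longer side
# cuts per-image memory by about 16x
print(resized_shape(1280, 960, 320))  # (320, 240)
```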
|
tensorflow|out-of-memory|conv-neural-network|tf.keras
| 0
|
3,269
| 57,704,331
|
Is there a way to create a numpy array full of functions in place of elements?
|
<p>I'm looking to build a numpy array as follows so that I don't have to hard code countless numpy arrays by hand:</p>
<pre class="lang-py prettyprint-override"><code>
def func_1(x):
    return x**2

def func_2(x):
    return x**3 + 1
</code></pre>
<p>So the array becomes:</p>
<pre class="lang-py prettyprint-override"><code> | func_1(x) func_1(x) func_2(x) |
| func_1(x) func_1(x) func_2(x) |
A = | func_1(x) func_1(x) func_2(x) |
| func_1(x) func_1(x) func_2(x) |
</code></pre>
<p>now with this array filled with functions for each element create many versions of A:</p>
<pre class="lang-py prettyprint-override"><code> | 1 1 2 |
| 1 1 2 |
A(x=1) = | 1 1 2 |
| 1 1 2 |
| 4 4 9 |
| 4 4 9 |
A(x=2) = | 4 4 9 |
| 4 4 9 |
</code></pre>
<hr>
<p><strong><em>Update</em></strong></p>
<p>I implemented this as follows:</p>
<pre class="lang-py prettyprint-override"><code>def h(x):
    return np.exp(-((x - 1)**2 / (2*(0.25**2))))

def l(x):
    return np.exp(-((x - 0)**2 / (2*(0.25**2))))

def m(x):
    return np.exp(-((x - 0.5)**2 / (2*(0.25**2))))

def fuzzy_patterns(x):
    return np.array([
        # pattern_1
        np.array([
            [l(x), l(x), h(x)],
            [l(x), l(x), h(x)],
            [l(x), l(x), h(x)]
        ]),
        # pattern_2
        np.array([
            [h(x), h(x), l(x)],
            [h(x), h(x), l(x)],
            [h(x), h(x), l(x)]
        ]),
        # pattern_3
        np.array([
            [h(x), h(x), h(x)],
            [l(x), l(x), l(x)],
            [l(x), l(x), l(x)]
        ]),
        .
        .
        .,
        # pattern_n
        np.array([
            [m(x), m(x), m(x)],
            [m(x), l(x), m(x)],
            [m(x), m(x), m(x)]
        ])
    ])
</code></pre>
<p>In the end, this seemed like the most straightforward way to go considering the readability of code. I'll accept hiro protagonist's answer as my implementation is most similar to their answer.</p>
|
<p>this reproduces what you want:</p>
<pre><code>def A(x):
    # four rows, matching the question's layout: two func_1 columns, one func_2 column
    a = np.full(shape=(4, 2), fill_value=func_1(x))
    b = np.full(shape=(4, 1), fill_value=func_2(x))
    return np.concatenate((a, b), axis=1)
</code></pre>
<p>I <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.concatenate.html#numpy.concatenate" rel="nofollow noreferrer"><code>concatenate</code></a> the two constant arrays (built with <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.full.html#numpy.full" rel="nofollow noreferrer"><code>np.full</code></a>) into the result.</p>
<p>you may want to add <code>dtype=int</code> to <code>np.full</code> if you want your arrays to be integer-valued.</p>
<hr>
<p>if your functions depend on the coordinates there is <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.fromfunction.html" rel="nofollow noreferrer"><code>numpy.fromfunction</code></a>.</p>
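<p>Evaluated at the sample points from the question, the idea can be checked with a quick self-contained sketch (the row count is adjusted to the four rows shown in the question's desired output):</p>

```python
import numpy as np

def func_1(x):
    return x**2

def func_2(x):
    return x**3 + 1

def A(x):
    # four rows: two columns of func_1(x) next to one column of func_2(x)
    a = np.full(shape=(4, 2), fill_value=func_1(x))
    b = np.full(shape=(4, 1), fill_value=func_2(x))
    return np.concatenate((a, b), axis=1)

print(A(1).tolist())  # [[1, 1, 2], [1, 1, 2], [1, 1, 2], [1, 1, 2]]
print(A(2).tolist())  # [[4, 4, 9], [4, 4, 9], [4, 4, 9], [4, 4, 9]]
```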
|
python|numpy
| 1
|
3,270
| 24,390,645
|
Python Pandas merge samed name columns in a dataframe
|
<p>So I have a few CSV files I'm trying to work with, but some of them have multiple columns with the same name.</p>
<p>For example I could have a csv like this:</p>
<pre><code>ID Name a a a b b
1 test1 1 NaN NaN "a" NaN
2 test2 NaN 2 NaN "a" NaN
3 test3 2 3 NaN NaN "b"
4 test4 NaN NaN 4 NaN "b"
</code></pre>
<p>loading into pandas gives me this:</p>
<pre><code>ID Name a a.1 a.2 b b.1
1 test1 1 NaN NaN "a" NaN
2 test2 NaN 2 NaN "a" NaN
3 test3 2 3 NaN NaN "b"
4 test4 NaN NaN 4 NaN "b"
</code></pre>
<p>What I would like to do is merge those same name columns into 1 column (if there are multiple values keeping those values separate) and my ideal output would be this</p>
<pre><code>ID Name a b
1 test1 "1" "a"
2 test2 "2" "a"
3 test3 "2;3" "b"
4 test4 "4" "b"
</code></pre>
<p>So I'm wondering if this is possible? </p>
|
<p>You could use <code>groupby</code> on <code>axis=1</code>, and experiment with something like</p>
<pre><code>>>> def sjoin(x): return ';'.join(x[x.notnull()].astype(str))
>>> df.groupby(level=0, axis=1).apply(lambda x: x.apply(sjoin, axis=1))
ID Name a b
0 1 test1 1.0 a
1 2 test2 2.0 a
2 3 test3 2.0;3.0 b
3 4 test4 4.0 b
</code></pre>
<p>where instead of using <code>.astype(str)</code>, you could use whatever formatting operator you wanted.</p>
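<p>A version-agnostic sketch of the same idea, grouping the duplicate column names manually instead of via <code>groupby</code> (the frame below re-creates the question's example data):</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame(
    [[1, 'test1', 1, np.nan, np.nan, 'a', np.nan],
     [2, 'test2', np.nan, 2, np.nan, 'a', np.nan],
     [3, 'test3', 2, 3, np.nan, np.nan, 'b'],
     [4, 'test4', np.nan, np.nan, 4, np.nan, 'b']],
    columns=['ID', 'Name', 'a', 'a', 'a', 'b', 'b'])

def sjoin(row):
    return ';'.join(row[row.notnull()].astype(str))

# collapse each group of identically named columns into one joined column
result = pd.DataFrame({
    name: df.loc[:, df.columns == name].apply(sjoin, axis=1)
    for name in pd.unique(df.columns)
})
print(result['a'].tolist())  # ['1.0', '2.0', '2.0;3.0', '4.0']
```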
|
python|pandas
| 14
|
3,271
| 43,698,263
|
Python Pandas hierarchical (tuple) row indexing -- how to select all of an intermediate row?
|
<p>Consider the following code:</p>
<pre><code>import pandas as pd  # needed for pd.MultiIndex / pd.DataFrame below

row1 = [(2,2), (4,4)]
row2 = [(5,5)]
row3 = [10, 20, 30, 40]
row_tuple_list = []
for r1 in row1:
    for r2 in row2:
        for r3 in row3:
            row_tuple_list.append((r1, r2, r3))
row_index = pd.MultiIndex.from_tuples(row_tuple_list, names=['row1', 'row2', 'row3'])

col1 = ['f', 'i']
col2 = ['g', 'h']
col_tuple_list = []
for c1 in col1:
    for c2 in col2:
        col_tuple_list.append((c1, c2))
col_index = pd.MultiIndex.from_tuples(col_tuple_list, names=['col1', 'col2'])

df = pd.DataFrame(index=row_index, columns=col_index)
</code></pre>
<p>which generates a dataframe:</p>
<pre><code>col1 f i
col2 g h g h
row1 row2 row3
(2, 2) (5, 5) 10 NaN NaN NaN NaN
20 NaN NaN NaN NaN
30 NaN NaN NaN NaN
40 NaN NaN NaN NaN
(4, 4) (5, 5) 10 NaN NaN NaN NaN
20 NaN NaN NaN NaN
30 NaN NaN NaN NaN
40 NaN NaN NaN NaN
</code></pre>
<p>Now, I'd like to set individual elements of this dataframe. For example,</p>
<pre><code>w=(2,2)
x=(5,5)
y=10
df.loc[(w,x,y),('f','g')] = 200
print(df)
</code></pre>
<p>which gives:</p>
<pre><code>col1 f i
col2 g h g h
row1 row2 row3
(2, 2) (5, 5) 10 200 NaN NaN NaN
20 NaN NaN NaN NaN
30 NaN NaN NaN NaN
40 NaN NaN NaN NaN
(4, 4) (5, 5) 10 NaN NaN NaN NaN
20 NaN NaN NaN NaN
30 NaN NaN NaN NaN
40 NaN NaN NaN NaN
</code></pre>
<p>Is there a way to do this without explicitly setting the second row value (since I know that row1 and row2 occur with the same frequency)?</p>
<p>I tried:</p>
<pre><code>df.loc[(w,slice(None),y),('f','g')] =100
</code></pre>
<p>which fails.</p>
|
<pre><code># you need to use slice for w as well. This should work:
df.loc[(slice(w), slice(None), y), ('f','g')] = 100
df
Out[208]:
col1 f i
col2 g h g h
row1 row2 row3
(2, 2) (5, 5) 10 100 NaN NaN NaN
20 NaN NaN NaN NaN
30 NaN NaN NaN NaN
40 NaN NaN NaN NaN
(4, 4) (5, 5) 10 NaN NaN NaN NaN
20 NaN NaN NaN NaN
30 NaN NaN NaN NaN
</code></pre>
|
python|pandas|dataframe|indexing|hierarchical
| 1
|
3,272
| 43,620,478
|
Keras predict not working for multiple GPU's
|
<p>I recently implemented this make_parallel code (<a href="https://github.com/kuza55/keras-extras/blob/master/utils/multi_gpu.py" rel="nofollow noreferrer">https://github.com/kuza55/keras-extras/blob/master/utils/multi_gpu.py</a>) for testing on multiple GPUs. After implementing it, the predict_classes() function did not work with the new model structure, so after some reading I switched to using the predict function instead. This function only works with certain batch sizes; for example, a batch size of 750 works, while 500, 100 and 350 fail with the following error: </p>
<pre><code>ValueError: could not broadcast input array from shape (348,15) into shape
(350,15)
</code></pre>
<p>The training was completed with a batch_size of 75. Any idea why this is happening or how I can fix?</p>
<pre><code>pointFeatures = np.zeros((batchSize,featureSize))
libfeatures.getBatchOfFeatures(i,batchSize,pointFeatures)
pointFeatures = pointFeatures.reshape(batchSize, FeatureShape.img_rows,
FeatureShape.img_cols, FeatureShape.img_width,
FeatureShape.img_channels)
pointFeatures = pointFeatures.astype('float32')
results = model.predict(pointFeatures, verbose=True,
batch_size=FeatureShape.mini_batch_size)
</code></pre>
|
<p>If you are using the make_parallel function, you need to make sure the number of samples is divisible by batch_size*N, where N is the number of GPUs you are using. For example:</p>
<pre><code>nb_samples = X.shape[0] - X.shape[0]%(batch_size*N)
X = X[:nb_samples]
</code></pre>
<p>You can use different batch_size for training and testing as long as the number of samples is divisible by batch_size*N.</p>
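<p>The trimming can be sketched numerically (the shapes and counts below are illustrative):</p>

```python
import numpy as np

X = np.zeros((1000, 5))   # 1000 samples
batch_size, N = 75, 2     # batch size and number of GPUs

# drop the trailing samples so the count divides evenly by batch_size * N
nb_samples = X.shape[0] - X.shape[0] % (batch_size * N)
X = X[:nb_samples]
print(X.shape[0])  # 900
```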
|
python|tensorflow|gpu|keras
| 0
|
3,273
| 43,614,102
|
pandas to_json returns a string not a json object
|
<p>I am using the following python code to return a json object:</p>
<pre><code>df_as_json = df.to_json(orient='split')
return jsonify({'status': 'ok', 'json_data': df_as_json})
</code></pre>
<p>When I read the object back in javascript:</p>
<pre><code>// response is xhr respose from server
const mydata = response.data
console.log(mydata.constructor.name)
// >Obj
const dfdata = mydata.json_data
console.log(dfdata.constructor.name)
// >String
</code></pre>
<p>Is there a way to send the df_as_json as a json object inside the parent response.data json object?</p>
|
<p>There's no such thing as a "json object" in Python; that's why <code>.to_json</code> returns a string representation of the json object. JSON in Python is essentially the same as a <code>dict</code>, so you can use the <code>to_dict</code> method instead. </p>
<pre><code>df_as_json = df.to_dict(orient='split')
return jsonify({'status': 'ok', 'json_data': df_as_json})
</code></pre>
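<p>A small sketch of what <code>to_dict(orient='split')</code> hands back — a plain nested dict that <code>jsonify</code> can serialize as a JSON object rather than a string:</p>

```python
import pandas as pd

df = pd.DataFrame({'x': [1, 2]})
d = df.to_dict(orient='split')
# a plain dict with 'index', 'columns' and 'data' keys
print(d['columns'], d['data'])  # ['x'] [[1], [2]]
```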
|
javascript|python|json|pandas|dataframe
| 29
|
3,274
| 43,624,502
|
Using Python to calculate the unique conic section using five points
|
<p>I'm trying to develop a Python program which can tell you the centre, directrix, foci and the equation of the unique conic section determined by a set of five point inputs from the user. </p>
<p>I'm currently running the code with Python2 on Sublime Text on a MacBook, on which I've installed Scipy, Numpy and Sympy. </p>
<h2>After the user has input the five points, there are five instances of the general conic-section equation, which has the standard form:</h2>
<h2>ax^2 + bxy + cy^2 + dx + ey + f = 0</h2>
<p>Then, after solving for a, b, c, d, e, f individually, we would have a formula looking something like x^2 - y = 0; this would be the equation of that unique conic section.</p>
<p>I'm using LUsolve; however, no matter what I input, the result it gives me is always [0,0,0,0,0,0]. Please look at my code and help me with this, thanks.</p>
<pre><code>import numpy as np
import scipy

class Point:
    def __init__(self, x, y):
        self.x = x
        self.y = y

    def __str__(self):
        rep = '(' + str(self.x) + ', ' + str(self.y) + ')'
        return rep

class Conic:
    def __init__(self, points):
        self.points = points

    def equation(self):  # ...
        from sympy import symbols
        from sympy import Matrix, ImmutableMatrix
        from sympy.matrices import zeros
        from sympy.solvers import solve, nsolve, solveset
        x, y, z, v = symbols("x y z v")
        # you need symbols for x and y
        a, b, c, d, e, f = symbols('a b c d e f')
        xs = Matrix([a, b, c, d, e, f])
        l = [[x ** 2, x*y, y ** 2, x, y, 1]]
        #l = []
        for point in self.points:
            l.append([point.x ** 2, point.x * point.y, point.y ** 2, point.x, point.y, 1])
        m = Matrix(l)
        v = m.det()
        soln = m.LUsolve(zeros(6, 1))  ###from scipy
        print(soln)
        return None

    def __str__(self):
        pass
        return None

from random import randint

points = [Point(-2, 4),
          Point(-1, 1),
          Point(0, 0),
          Point(1, 1),
          Point(2, 4)]

for point in points:
    print(point)

c = Conic(points)
c.equation()
</code></pre>
<p>And result is here:</p>
<pre><code>(-2, 4)
(-1, 1)
(0, 0)
(1, 1)
(2, 4)
Matrix([[0], [0], [0], [0], [0], [0]])
***Repl Closed***
</code></pre>
|
<p>I think the reason you’re getting zeros is that you’re solving the linear system with <code>zeros(6,1)</code> as the right-hand side, in which case the all-zero solution is trivially valid. Try setting the right-hand side to something non-zero.</p>
<p>The other issue is that, with five points and six unknowns, the solution is underdetermined. But you can fix the last coefficient to some arbitrary value and solve for the rest.</p>
<p>Here’s my Numpy purely-numerical solution:</p>
<pre><code>import numpy as np
def fivePointsToConic(points, f=1.0):
"""Solve for the coefficients of a conic given five points in Numpy array
`points` should have at least five rows.
`f` is the constant that you can specify. With the returned solution,
`(a, b, c, d, e, f)`, the full conic is specified as:
$a x^2 + b x y + c y^2 + d x + e y = -f$
If `points` has exactly five rows, the equation will be exact. If `points`
has *more* than five rows, the solution will be a least-squares one that
fits the data the best.
"""
from numpy.linalg import lstsq
x = points[:, 0]
y = points[:, 1]
if max(x.shape) < 5:
raise ValueError('Need >= 5 points to solve for conic section')
A = np.vstack([x**2, x * y, y**2, x, y]).T
fullSolution = lstsq(A, f * np.ones(x.size))
(a, b, c, d, e) = fullSolution[0]
return (a, b, c, d, e, f)
if __name__ == '__main__':
points = np.array([[-2, 4.], [-1., 1], [0., 0], [1., 1], [2., 4]])
print(fivePointsToConic(points))
</code></pre>
<p>Evaluating this in the Python REPL or running it as a script prints out the following solution to your problem, assuming <code>f = 1</code> (using the notation <code>a</code> through <code>f</code> on <a href="https://en.wikipedia.org/wiki/Five_points_determine_a_conic#Dimension_counting" rel="nofollow noreferrer">Wikipedia</a>):</p>
<pre><code>(0.62499999999999989, 9.7144514654701197e-17, -0.24999999999999983, -2.9598635688993528e-16, 0.62499999999999989, 1.0)
</code></pre>
<p>Or roughly <code>[0.625, 0, -0.25, 0, 0.625, 1]</code>. Does that match what you expected?</p>
|
python|numpy|geometry|linear-algebra
| 1
|
3,275
| 73,080,579
|
Comparing two pandas dataframe cells, and if equal ==, copy other content over - results in error
|
<p>I am importing excel files with products and product specific data. They look like this:</p>
<p>dfA</p>
<pre><code>EAN Code Product Name Color Price
12345 AAA xxx 9
45678 BBB zzz 10
</code></pre>
<p>and
dfB</p>
<pre><code>EAN Code Product Name New Price
12-345 AAA 10
45-678 BBB 11
</code></pre>
<p>I am importing these as always:</p>
<pre><code>dfA = pd.DataFrame (dfA, columns=["Season", "EAN Code", "Product Name", "..."] , dtype=str)
</code></pre>
<p>And I am merging them, since there are multiple different excel files:</p>
<pre><code>dfA = pd.concat([dfA1, dfA2, dfA3, dfA4, dfA5, dfA6, dfA7, dfA8, dfA9, dfA10, dfA11], ignore_index=False)
</code></pre>
<p>Then I am deleting the hyphen in the EAN column, because in excel file b there is an unnecessary hyphen in the EAN number.</p>
<pre><code>for col in dfB.columns:
dfB["EAN"] = dfB["EAN"].str.replace('-', '')
</code></pre>
<p>So far, so good.</p>
<p>Now I search through the EAN code column of dfA for the same product in dfB. When there is a match, I want to copy over the new price.
This worked in my old code, although it took about 15 minutes for the script to search through half a million rows.
Now I am trying to achieve the same thing, but I get an error message. This is my simplified code:</p>
<pre><code>for i in dfA.index:
for j in dfB.index:
if dfA.loc[i, "EAN"] == dfB.loc[j, "EAN"]:
print ("EAN", number,"times found!")
number=number+1
dfA.loc[i, "Price"] = dfB.loc[j, "New Price"]
</code></pre>
<p>The print statement is only a feedback for me, so that I see wether the script is still doing something.</p>
<pre><code>Traceback (most recent call last):
File "c:...", line 72, in <module>
if dfA.loc[i, "EAN"] == dfB.loc[j, "EAN"]:
File "C:...", line 1527, in __nonzero__
raise ValueError(
ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().
</code></pre>
<p>Why am I getting this now?</p>
|
<p>Use a <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.merge.html" rel="nofollow noreferrer"><code>merge</code></a>, after setting up a common EAN Code:</p>
<pre><code>out = dfA.merge(
dfB.assign(**{'EAN Code': dfB['EAN Code'].str.replace('-', '')
.astype(int) # only if dfA has an int
}),
on='EAN Code')
</code></pre>
<p>output:</p>
<pre><code> EAN Code Product Name_x Color Price Product Name_y New Price
0 12345 AAA xxx 9 AAA 10
1 45678 BBB zzz 10 BBB 11
</code></pre>
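<p>If the goal is to update <code>Price</code> in place instead of producing a merged frame, a vectorized lookup with <code>Series.map</code> avoids the original O(n²) double loop. A sketch using the example data:</p>

```python
import pandas as pd

dfA = pd.DataFrame({'EAN Code': [12345, 45678],
                    'Product Name': ['AAA', 'BBB'],
                    'Color': ['xxx', 'zzz'],
                    'Price': [9, 10]})
dfB = pd.DataFrame({'EAN Code': ['12-345', '45-678'],
                    'Product Name': ['AAA', 'BBB'],
                    'New Price': [10, 11]})

# Lookup Series: cleaned integer EAN -> New Price
lookup = dfB.set_index(dfB['EAN Code'].str.replace('-', '').astype(int))['New Price']

# O(n) vectorized update; EANs with no match keep their old price
dfA['Price'] = dfA['EAN Code'].map(lookup).fillna(dfA['Price'])
print(dfA[['EAN Code', 'Price']])
```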
|
python|pandas
| 2
|
3,276
| 73,133,044
|
How to delete the last line of a groupby
|
<p>I am trying to handle the following dataframe.</p>
<pre><code>import pandas as pd
import numpy as np
df = pd.DataFrame({'ID':[1,1,2,2,2,3,3,3,3],
'sum':[1,1,1,2,3,1,4,4,4],
'flg':[1,np.nan, 1, np.nan, np.nan, 1, 1, np.nan, np.nan],
'year':[2018, 2019, 2018, 2019, 2020, 2018, 2019, 2020, 2021]})
df['diff'] = df.groupby('ID')['sum'].apply(lambda x: x - x.iloc[-1])
</code></pre>
<p><a href="https://i.stack.imgur.com/2FfVe.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/2FfVe.jpg" alt="enter image description here" /></a></p>
<p>The 'diff' is the difference from the 'sum' of the final year of each ID.</p>
<p>So, I tried the following code to remove the final year row used for comparison.</p>
<pre><code>comp = df.groupby('ID').last().reset_index()
col = list(df.columns)
fin =pd.merge(df, comp, on=col, how='outer', indicator=True).query(f'_merge != "both"')
</code></pre>
<p>But here is where the problem arises.</p>
<p>The contents of 'comp' are as follows.</p>
<p><a href="https://i.stack.imgur.com/xhK1U.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/xhK1U.jpg" alt="enter image description here" /></a></p>
<p><strong>The 'comp' I originally wanted to get is below.</strong></p>
<pre><code>ID sum flg year diff
1 1 Nan 2019 0
2 3 Nan 2020 0
3 4 Nan 2021 0
</code></pre>
<p><strong>Why is the NaN in 'flg' being filled in with 1 by itself?</strong>
Please let me know if there is a better way to do this.</p>
|
<p>IIUC, use <code>head(-1)</code>:</p>
<pre><code>g = df.groupby('ID')
out = g.head(-1).assign(diff=g['sum'].apply(lambda x: x - x.iloc[-1]))
</code></pre>
<p>output:</p>
<pre><code> ID sum flg year diff
0 1 1 1.0 2018 0
2 2 1 1.0 2018 -2
3 2 2 NaN 2019 -1
5 3 1 1.0 2018 -3
6 3 4 1.0 2019 0
7 3 4 NaN 2020 0
</code></pre>
<p>Variant:</p>
<pre><code>g = df.groupby('ID')
out = g.head(-1).assign(diff=lambda d: d['sum'].sub(g['sum'].transform('last')))
</code></pre>
|
python|pandas|group-by
| 3
|
3,277
| 10,669,270
|
Python numpy memmap matrix multiplication
|
<p>I'm trying to produce an ordinary matrix multiplication between two huge matrices (10*25,000,000).
My memory runs out when I do so. How could I use numpy's memmap to be able to handle this?
Is this even a good idea? I'm not so worried about the speed of the operation, I just want the result even if it means waiting some time. Thank you in advance!</p>
<p>8 GB RAM, i7-2617M 1.5 GHz, Windows 7 64-bit. I'm using the 64-bit version of everything: Python (2.7), numpy, scipy.</p>
<p>Edit1:</p>
<p>Maybe h5py is a better option?</p>
|
<p>You might try <code>np.memmap</code>, and compute the 10x10 output matrix one element at a time.</p>
<p>For the first element, you just load the first row of the first matrix and the first column of the second, and then compute <code>np.sum(row1 * col1)</code>.</p>
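<p>A minimal sketch of that approach with <code>np.memmap</code> — the shapes here are scaled down (1,000 columns instead of 25,000,000) and the file names are illustrative:</p>

```python
import os
import tempfile

import numpy as np

n, k = 10, 1_000  # k stands in for the real 25,000,000

tmpdir = tempfile.mkdtemp()
a = np.memmap(os.path.join(tmpdir, 'a.dat'), dtype='float64', mode='w+', shape=(n, k))
b = np.memmap(os.path.join(tmpdir, 'b.dat'), dtype='float64', mode='w+', shape=(k, n))
a[:] = np.random.rand(n, k)  # in practice the data is already on disk
b[:] = np.random.rand(k, n)

# Fill the small 10x10 result one element at a time; only one row of A
# and one column of B are pulled into memory per step
out = np.empty((n, n))
for i in range(n):
    row = np.asarray(a[i, :])
    for j in range(n):
        out[i, j] = np.sum(row * b[:, j])
```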
|
python|numpy|memory-management|matrix-multiplication|large-data
| 2
|
3,278
| 70,699,908
|
Using custom StyleGAN2-ada network in GANSpace (.pkl to .pt conversion)
|
<p>I trained a network using <a href="https://github.com/NVlabs/stylegan2-ada-pytorch" rel="nofollow noreferrer">Nvdia's StyleGAN2-ada pytorch implementation</a>. I now have a .pkl file. I would like to use the <a href="https://github.com/harskish/ganspace" rel="nofollow noreferrer">GANSpace code</a> on my network. However, to use GANSpace with a custom model, you need to be able to give it a checkpoint to your model that should be uploaded somewhere (they suggest Google Drive)(checkpoint required in code <a href="https://github.com/harskish/ganspace/blob/master/models/wrappers.py#L138" rel="nofollow noreferrer">here</a>). I am not entirely sure how this works or why it works like this, but either way it seems I need a .pt file of my network, not a .pkl file, which is what I currently have.</p>
<p>I tried following this <a href="https://github.com/dvschultz/ai/blob/master/Ganspace_S2DD.ipynb" rel="nofollow noreferrer">tutorial</a>. It seems the GANSpace code actually provides a file (models/stylegan2/convert_weight.py) that can do this conversion. However, it seems the file convert_weight.py that was supposed to be there has been replaced by a link to a whole other <a href="https://github.com/harskish/stylegan2-pytorch/tree/91ea2a7a4320701535466cce942c9e099d65670e" rel="nofollow noreferrer">repo</a>. If I try run the convert_weight.py file as below, it gives me the following error</p>
<pre><code>python content/stylegan2-pytorch/convert_weight.py --repo="content/stylegan2-pytorch/" "content/fruits2_output/00000-fruits2-auto1/network-snapshot-025000.pkl"
ModuleNotFoundError: No module named 'dnnlib'
</code></pre>
<p>This makes sense because there is no such dnnlib module. If I instead change it to look for the dnnlib module somewhere that does have it (<a href="https://github.com/skyflynil/stylegan2" rel="nofollow noreferrer">here</a>) like this</p>
<pre><code>python content/stylegan2-pytorch/convert_weight.py --repo="content/stylegan2/" "content/fruits2_output/00000-fruits2-auto1/network-snapshot-025000.pkl"
</code></pre>
<p>it previously gave me an error saying TensorFlow had not been installed (which in all fairness it hadn't because I am using PyTorch), much like this error reported <a href="https://github.com/rosinality/stylegan2-pytorch/issues/250" rel="nofollow noreferrer">here</a>. I then installed TensorFlow, but then it gives me this error.</p>
<pre><code>ModuleNotFoundError: No module named 'torch_utils'
</code></pre>
<p>again the same as in the previous issue reported on github. After installed torch_utils I get the same error as SamTransformer (ModuleNotFoundError: No module named 'torch_utils.persistence'). The response was "convert_weight.py does not supports stylegan2-ada-pytorch".</p>
<p>There is a lot I am not sure about, like why I need to convert a .pkl file to .pt in the first place. A lot of the stuff seems to talk about converting Tensorflow models to Pytorch ones, but mine was done in Pytorch originally, so why do I need to convert it? I just need a way to upload my own network to use in GANSpace - I don't really mind how, so any suggestions would be much appreciated.</p>
|
<p>Long story short, the conversion script provided was to convert weights from the official Tensorflow implementation of StyleGAN2 into Pytorch. As you mentioned, you already have a model in Pytorch, so it's reasonable for the conversion script to not work.</p>
<p>Instead of StyleGAN2 you used StyleGAN2-Ada, which isn't mentioned in the GANspace repo. Most probably it didn't exist by the time the GANspace repo was created. As far as I know, StyleGAN2-Ada uses the same architecture as StyleGAN2, so as long as you manually modify your <code>pkl</code> file into the required <code>pt</code> format, you should be able to continue setup.</p>
<p>Looking at the <a href="https://github.com/harskish/stylegan2-pytorch/blob/91ea2a7a4320701535466cce942c9e099d65670e/convert_weight.py#L233" rel="nofollow noreferrer">source code for converting to Pytorch</a>, GANspace requires the <code>pt</code> file to be a <code>dict</code> with keys: <code>['g', 'g_ema', 'd', 'latent_avg']</code>. StyleGAN2-Ada <a href="https://github.com/NVlabs/stylegan2-ada-pytorch/blob/6f160b3d22b8b178ebe533a50d4d5e63aedba21d/training/training_loop.py#L357" rel="nofollow noreferrer">saves a <code>pkl</code> containing a <code>dict</code></a> with the following keys: <code>['G', 'G_ema', 'D', 'augment_pipe']</code>. You <em>might</em> be able to get things to work by loading the contents of your <code>pkl</code> file and resaving them in <code>pt</code> using these keys.</p>
|
tensorflow|pytorch|stylegan
| 0
|
3,279
| 70,553,630
|
how to count occurrences of specific string in previous x rows
|
<p>I have a list of activities and the approximate timestamp they occur in. I would like to count the occurrences of a string in the previous 'x' rows (walking or running etc.) and add it to the dataframe. Pandas DataFrame does not support rolling (for non-numeric data) and I'm not sure if I can use shift to check like the previous 30, 50 or even 70 rows of data. I haven't made any concrete progress yet as I have been looking for similar questions/solutions on the site.</p>
<pre><code> timestamp event
0 2021-12-18 18:20:25+08:00 running
1 2021-12-18 18:20:27+08:00 running
2 2021-12-18 18:20:29+08:00 walking
3 2021-12-18 18:20:31+08:00 walking
4 2021-12-18 18:20:33+08:00 walking
5 2021-12-18 18:20:35+08:00 walking
6 2021-12-18 18:20:37+08:00 walking
7 2021-12-18 18:20:39+08:00 walking
8 2021-12-18 18:20:41+08:00 stationary
9 2021-12-18 18:20:43+08:00 stationary
10 2021-12-18 18:20:45+08:00 stationary
11 2021-12-18 18:20:47+08:00 stationary
df.loc[:, 'Count previous K'] = 0 # new column to count previous row activities
</code></pre>
<p>expected output:</p>
<pre><code> timestamp event Count previous K
0 2021-12-18 18:20:25+08:00 running 0
1 2021-12-18 18:20:27+08:00 running 0
2 2021-12-18 18:20:29+08:00 walking 1
3 2021-12-18 18:20:31+08:00 walking 2
4 2021-12-18 18:20:33+08:00 walking 3
5 2021-12-18 18:20:35+08:00 walking 4
6 2021-12-18 18:20:37+08:00 walking 5
7 2021-12-18 18:20:39+08:00 walking 6
8 2021-12-18 18:20:41+08:00 stationary 6
9 2021-12-18 18:20:43+08:00 stationary 6
10 2021-12-18 18:20:45+08:00 stationary 6
11 2021-12-18 18:20:47+08:00 stationary 6
12 2021-12-18 18:20:49+08:00 stationary 5
</code></pre>
<p>for a window size of 10 (including the current index/row), counting occurrences of walking.</p>
|
<p>You can use a boolean to see when a particular event is occurring, then perform a rolling sum on the boolean series. As @mozway pointed out, the argument <code>min_periods=1</code> will avoid <code>NaN</code> appearing at the beginning of the resulting DataFrame:</p>
<pre><code>df['walking_count'] = (df['event'] == 'walking').rolling(5, min_periods=1).sum()
</code></pre>
<p>This sets a new column <code>'walking_count'</code> to the following series:</p>
<pre><code>0 0.0
1 0.0
2 1.0
3 2.0
4 3.0
5 4.0
6 5.0
7 5.0
8 4.0
9 3.0
10 2.0
11 1.0
</code></pre>
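<p>Using the question's window size of 10, the same pattern reproduces the expected <code>Count previous K</code> column (events reconstructed from the sample data):</p>

```python
import pandas as pd

# Events reconstructed from the sample data
events = ['running'] * 2 + ['walking'] * 6 + ['stationary'] * 4
df = pd.DataFrame({'event': events})

# Boolean series marking 'walking', summed over a trailing window of 10 rows
df['Count previous K'] = (df['event'] == 'walking').rolling(10, min_periods=1).sum().astype(int)
print(df['Count previous K'].tolist())
# [0, 0, 1, 2, 3, 4, 5, 6, 6, 6, 6, 6]
```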
|
python|pandas|rolling-computation
| 3
|
3,280
| 70,509,011
|
Error comparing dask date month with an integer
|
<p>The dask map_partitions function in the code below has a dask date field where its month is compared to an integer. This comparison fails with the following error:</p>
<blockquote>
<p>ValueError: The truth value of a Series is ambiguous. Use a.empty,
a.bool(), a.item(), a.any() or a.all().</p>
</blockquote>
<p>What is this error and how to fix it?</p>
<pre><code>import pandas as pd
import dask
import dask.dataframe as dd
import datetime
pdf = pd.DataFrame({
'id2': [1, 1, 1, 2, 2],
'balance': [150, 140, 130, 280, 260],
'date2' : [datetime.datetime(2021,3,1), datetime.datetime(2021,4,1),
datetime.datetime(2021,5,1), datetime.datetime(2021,1,1),
datetime.datetime(2021,2,1)]
})
ddf = dd.from_pandas(pdf, npartitions=1)
def func2(obj):
m = obj.date2.dt.month
if m > 10:
return 1
else:
return 2
ddf2 = ddf.map_partitions(func2, meta=int)
ddf2.compute() # <-- fails here
</code></pre>
|
<p>By using <code>.map_partitions</code>, each dask dataframe partition (which is a pandas dataframe) is passed to the function <code>func2</code>. As a result, <code>obj.date2.dt.month</code> refers to a Series, not a single value, so when the comparison with the integer runs, it's not clear to Python how to reduce the element-wise result to a single truth value.</p>
<p>As one option, below is a snippet that creates a new column, conditional on <code>dt.month</code> result:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
import dask
import dask.dataframe as dd
import datetime
pdf = pd.DataFrame({
'id2': [1, 1, 1, 2, 2],
'balance': [150, 140, 130, 280, 260],
'date2' : [datetime.datetime(2021,3,1), datetime.datetime(2021,4,1),
datetime.datetime(2021,5,1), datetime.datetime(2021,1,1),
datetime.datetime(2021,2,1)]
})
ddf = dd.from_pandas(pdf, npartitions=1)
def func2(obj):
m = obj.date2.dt.month
obj.loc[m>10, 'new_int']=1
obj.loc[m<=10, 'new_int']=2
return obj
ddf2 = ddf.map_partitions(func2)
ddf2.compute()
</code></pre>
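<p>An equivalent, more compact <code>func2</code> using <code>numpy.where</code> — shown here applied to a plain pandas frame, since <code>map_partitions</code> hands each partition to the function as one:</p>

```python
import datetime

import numpy as np
import pandas as pd

pdf = pd.DataFrame({'date2': [datetime.datetime(2021, 3, 1),
                              datetime.datetime(2021, 11, 1)]})

def func2(obj):
    # Vectorized: 1 where month > 10, else 2, for every row at once
    obj['new_int'] = np.where(obj['date2'].dt.month > 10, 1, 2)
    return obj

result = func2(pdf)
print(result['new_int'].tolist())  # [2, 1]
```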
|
python|pandas|dask
| 2
|
3,281
| 42,636,000
|
How to read a table and Sql query from Oracle in Pandas?
|
<p>I am completely new to Python and pandas. I want to load some tables and SQL queries from Oracle and Teradata into pandas DataFrames and analyse them.
I know we have to create some connection strings to Oracle and Teradata in pandas. Can you please suggest them, and also add sample code to read both a table and a SQL query?</p>
<p>Thanks in advance</p>
|
<blockquote>
<p>I don't have an Oracle server, so I'll take Teradata as an example</p>
<p>This is not the only way to to that, just one approach</p>
</blockquote>
<ul>
<li>Make sure you have installed Teradata ODBC Driver. Please refer to Teradata official website about the steps, I suppose you use Windows (since it is easy to use SQL Assistant to run query against Teradata, that is only on Windows). You can check it in <code>ODBC Data Source Administrator</code></li>
<li>Install <code>pyodbc</code> by the command <code>pip install pyodbc</code>. Here is the <a href="https://mkleehammer.github.io/pyodbc/" rel="nofollow noreferrer">official website</a> </li>
<li>The connection string is <code>db_conn_str = "DRIVER=Teradata;DBCNAME={url};UID={username};PWD={pwd}"</code></li>
<li>Get a connection object <code>conn = pyodbc.connect(db_conn_str)</code></li>
<li>Read data from a SQL query to a DataFrame <code>df = pd.read_sql(sql="select * from tb", con=conn)</code></li>
</ul>
<p>The similar for Oracle, you need to have the driver and the format of ODBC connection string. I know there is a python module from Teradata which supports the connection too, but I just prefer use odbc as it is more generic purpose.</p>
|
oracle|python-3.x|pandas|teradata
| 2
|
3,282
| 42,796,852
|
Python Pandas - Initialising and Populating a DataFrame
|
<p>Sorry, a real numpty question here:</p>
<p>Why does the following code not produce a dataframe with values in?</p>
<pre><code>df = pd.DataFrame()
df['First'] = 68
df['Second'] = 157
</code></pre>
<p>How should I modify the code? I'm looking for the column names to be <code>First</code> and <code>Second</code> and the corresponding values to be <code>68</code> and <code>157</code>.</p>
<p>This is only a simple example to illustrate the difficulty I'm having. In the actual code I go through a series of code to find the relevant values, so the dataframe cannot be initialised and completed in a single step using dictionaries. </p>
<p>Each column in the dataframe will only have one value. The dataframe will then be returned from the function, with the values appended to the "master" dataframe. If dataframes aren't the data structure I should be using to achieve this then please advise.</p>
<p>Thanks</p>
|
<p><code>df = pd.DataFrame()</code> produces an empty dataframe... But it's more empty than other empty dataframes. It has no index and no columns.</p>
<p><code>df['First'] = 68</code> assigns the value of 68 to a column named <code>'First'</code> for every index value. You'll note that the columns <code>['First', 'Second']</code> now exist. There were simply no index values for which to make the assignments of <code>68</code> and <code>157</code></p>
<p>This may be more obvious if I add an index to your <code>df</code> construction.</p>
<pre><code>df = pd.DataFrame(index=[1, 2, 3])
df['First'] = 68
df['Second'] = 157
print(df)
First Second
1 68 157
2 68 157
3 68 157
</code></pre>
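<p>To actually get the one-row frame described, either supply the values at construction time (as lists, so pandas has an index to work with) or assign a whole row by label:</p>

```python
import pandas as pd

# Option 1: build the one-row frame in a single step (values as lists,
# so pandas has an index to work with)
df = pd.DataFrame({'First': [68], 'Second': [157]})

# Option 2: start empty but with columns, then assign a whole row by label
df2 = pd.DataFrame(columns=['First', 'Second'])
df2.loc[0] = [68, 157]

print(df)
```

Option 1 is usually preferable when the values are known up front; Option 2 fits the described workflow where values are discovered one at a time.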
|
python|pandas
| 2
|
3,283
| 42,765,510
|
Extracting element/string from a dict list value
|
<p>I'm having some trouble extracting info from a Python object. Basically, using notation like this works to get down to values within a dict I am working with:</p>
<pre><code>clean_content['Al38zGKg6YC4']['image']
</code></pre>
<p>I was expecting to see another nested dict which contained the key/value that I wanted to extract. However, what's there is a list that looks like a dict:<img src="https://i.stack.imgur.com/pYSvR.png" alt="enter image description here"></p>
<p>I'm looking to extract the 'permalink' field from this list, and then tie it back to the page ID in the original dict. Any suggestions?</p>
|
<p>I came to a working solution as follows: converting the original dictionary into a pandas dataframe, I extracted the column containing the lists of image data. Each of these lists contained URLs for multiple images, so I extracted the one required using dicts:</p>
<pre><code>image_df_temp = {}
image_df_url = {}
for i in range(len(image_df_base.index)):
image_df_temp[i] = pd.DataFrame(image_df_base.image[i])
image_df_url[i] = image_df_temp[i].iloc[0]
</code></pre>
<p>Finally, I edited the dicts missing the required metadata using .get:</p>
<pre><code>for i in range(len(image_df_base.index)):
image_df_url[i] = image_df_url[i].get('permalink', "No image available")
</code></pre>
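<p>Without pandas, the same extraction can be done directly on the dict. A sketch, assuming each page maps to a list of image dicts whose first entry may carry <code>'permalink'</code> (the exact field layout here is an assumption inferred from the screenshot):</p>

```python
# Toy stand-in for clean_content; the field layout is an assumption
clean_content = {
    'Al38zGKg6YC4': {'image': [{'permalink': 'http://example.com/img1'}]},
    'Bx99aa': {'image': [{}]},  # hypothetical page with no permalink
}

# Tie each permalink back to its page ID, with a fallback for missing ones
permalinks = {
    page_id: (data['image'][0].get('permalink', 'No image available')
              if data.get('image') else 'No image available')
    for page_id, data in clean_content.items()
}
print(permalinks['Al38zGKg6YC4'])  # http://example.com/img1
```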
|
python-2.7|pandas
| 0
|
3,284
| 25,022,899
|
Reading complex data into numpy array
|
<p>I need to read complex numbers from a text file into a numpy array. My question is similar to this one <a href="https://stackoverflow.com/questions/23231698/writing-and-reading-complex-numbers-using-numpy-savetxt-and-numpy-loadtxt">Writing and reading complex numbers using numpy.savetxt and numpy.loadtxt</a> however, the solution here is to alter the format the data is saved in. I don't have this luxury as the text file is generate by other software which I cannot change. A sample of the text file is a follows:</p>
<pre><code>25 + 0i
8.43818 + -4.94194i
4.46817 + -5.08305i
4.55764 + -3.02201i
2.69138 + -5.43104i
-0.151334 + -4.83717i
1.98336 + -1.3339i
3.59002 + -0.932973i
1.42727 + -0.617317i
1.02005 + -1.14214i
-0.15564 + 2.74564i
</code></pre>
<p>I have tried the following:</p>
<pre><code>np.loadtxt('file.txt',delimiter='\n',dtype=np.complex128)
</code></pre>
<p>However I get the error:</p>
<pre><code>ValueError: complex() arg is a malformed string
</code></pre>
<p>The posts I have read suggest this is an issue with the <code>+ -</code> notation in some lines, however, I get the same error even if the extra <code>+</code> signs are removed.</p>
|
<blockquote>
<p>however, the solution here is to alter the format the data is saved in</p>
</blockquote>
<p>Good news, you don't have to!</p>
<p><code>numpy.loadtxt</code> can take any iterable of lines, not just a file object.</p>
<p>So, you can wrap your file object in a simple generator that transforms the lines on the fly, and feed that to <code>loadtxt</code>, and everyone will be happy.</p>
<p>Like this:</p>
<pre><code>def transform_complex(line):
# insert your code here
with open('file.txt', 'rb') as f:
lines = map(transform_complex, f)
arr = np.loadtxt(lines, dtype=np.complex128)
</code></pre>
<p>(If you're using Python 2.x, and the file is large, you probably want to use <code>itertools.imap</code> rather than <code>map</code>.)</p>
<p>The "insert your code here" part, you fill in from the answer that worked, but wasn't an acceptable solution because it required modifying the files. Since I don't see such an answer in your link, I'm not sure what that is, but for example, maybe it's this:</p>
<pre><code>def transform_complex(line):
return line.replace(b'+ -', b'- ')
</code></pre>
<hr>
<p>Testing things out locally, it looks like there are actually three things wrong with your input.</p>
<p>You can test what the output <em>should</em> look like using <code>savetxt</code>. For example:</p>
<pre><code>>>> arr = np.array([1-2j])
>>> f = io.BytesIO()
>>> np.savetxt(f, arr)
>>> f.getvalue()
b' (1.000000000000000000e+00-2.000000000000000000e+00j)\n'
</code></pre>
<p>(In Python 2.x, you won't see the <code>b</code> prefix.)</p>
<p>Not all of those differences turn out to be relevant—you don't have to use exponential notation, you don't need parens, etc.—but it looks like these three are:</p>
<ul>
<li>No spaces allowed around the <code>+</code> in complex numbers.</li>
<li>The imaginary unit has to be <code>j</code>, not <code>i</code>.</li>
<li>No <code>+-</code> allowed.</li>
</ul>
<p>So:</p>
<pre><code>def transform_complex(line):
return line.replace(b' ', b'').replace(b'+-', b'-').replace(b'i', b'j')
</code></pre>
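<p>Putting the pieces together, a self-contained sketch that feeds the transformed lines to <code>loadtxt</code>, using an in-memory text stream in place of <code>file.txt</code>:</p>

```python
import io

import numpy as np

def transform_complex(line):
    # Drop spaces, collapse '+ -' into '-', and switch i -> j
    return line.replace(' ', '').replace('+-', '-').replace('i', 'j')

raw = "25 + 0i\n8.43818 + -4.94194i\n-0.15564 + 2.74564i\n"
f = io.StringIO(raw)  # stands in for open('file.txt')
arr = np.loadtxt(map(transform_complex, f), dtype=np.complex128)
print(arr)
```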
|
python|numpy
| 5
|
3,285
| 25,265,961
|
How to plot different CSV file data into a single graph, day as reference at x-axis
|
<p>I have written code to plot several CSV files into a single graph, using day as the reference on the x-axis.</p>
<pre><code> CSV 1:
Value
time
2012-02-10 11:03:45 520429.598
2012-07-17 09:12:07 522155.535
... ...
2014-07-19 12:57:36 626192.186
2014-07-19 12:59:52 705789.899
</code></pre>
<p>3-years of data</p>
<pre><code> CSV 2:
Value
time
2014-02-10 11:03:45 520429.598
2014-02-17 12:12:07 522155.535
... ...
2014-07-19 12:57:36 626192.186
2014-07-19 12:59:52 705789.899
</code></pre>
<p>6-months of data</p>
<pre><code> CSV 3:
Value
time
2013-02-10 11:03:45 520429.598
2013-02-17 12:12:07 522155.535
... ...
2014-07-19 12:57:36 626192.186
2014-07-19 12:59:52 705789.899
</code></pre>
<p>18-months of data</p>
<p>...</p>
<pre><code> CSV n:
Value
time
2011-11-10 11:03:45 520429.598
2011-11-11 09:12:07 522155.535
... ...
2011-12-18 12:57:36 626192.186
2011-12-19 12:59:52 705789.899
</code></pre>
<p>2-months of data</p>
<p>I am having difficulty plotting all of them in one graph from (0,0), because every CSV file starts at a different date, so I was trying to convert the timestamps into numbers of days so that every series can be plotted starting from day 1, but I couldn't get it right. Any suggestions, please!</p>
<p>For example, CSV 2 (6 months) should fall short compared to CSV 1 (3 years). How should I proceed? Any hints would be highly appreciated.</p>
<pre><code> data1 = pd.read_csv(path1,names=['time','Value'],sep=',', index_col=0, parse_dates=True, dayfirst=False)
data2 = pd.read_csv(path2,names=['time','Value'],
sep=',', index_col=0, parse_dates=True, dayfirst=False)
...
datan = pd.read_csv(pathn,names=['time','Value'],
sep=',', index_col=0, parse_dates=True, dayfirst=False)
ax1 = pd.rolling_mean(data1['Value'],100).plot()
ax2 = pd.rolling_mean(data2['Value'],100).plot()
...
axn = pd.rolling_mean(datan['Value'],100).plot()
plt.show()
</code></pre>
|
<p>You might try letting matplotlib handle turning the dates into your x-axis coordinates. I had a similar problem a little while ago and was able to plot multiple time series with different date ranges.</p>
<p><strong>Edit:</strong> I think this should do what you want. It plots Value based on the time difference in days between each row and the time in the first row of the file.</p>
<pre><code>import pandas as pd
import io
import matplotlib.pyplot as plt
csv1 = io.StringIO(u"""time,Value\n2012-02-10 11:03:45,520429.598\n2013-07-19 12:57:36,626192.186\n2014-07-19 12:59:52,705789.899""")
csv2 = io.StringIO(u"""time,Value\n2013-02-11 11:03:45,420429.598\n2013-05-17 12:12:07,522155.535\n2014-07-19 12:57:36,626192.186\n2014-07-19 12:59:52,705789.899""")
dat1 = pd.read_csv(csv1, parse_dates=['time'])
dat2 = pd.read_csv(csv2, parse_dates=['time'])
dat1['timeDiff'] = (dat1['time'] - dat1['time'][0]).astype('timedelta64[D]')
dat2['timeDiff'] = (dat2['time'] - dat2['time'][0]).astype('timedelta64[D]')
fig,ax = plt.subplots()
ax.plot(dat1['timeDiff'],dat1['Value'])
ax.plot(dat2['timeDiff'],dat2['Value'])
plt.show()
</code></pre>
|
python|matplotlib|plot|pandas
| 0
|
3,286
| 25,247,276
|
ggplot area plot: order in which groups are rendered affects visibility
|
<p>I have this dataframe:</p>
<pre><code> Hour ENTRIES_hourly_rainy ENTRIES_hourly_not_rainy ENTRIES_hourly_total
0 0 3559751 7248389 10808140
1 1 1606880 3361780 4968660
2 2 145719 282413 428132
3 3 26804 54543 81347
4 4 766333 1672134 2438467
5 5 379272 800500 1179772
6 6 59030 123764 182794
7 7 140758 242930 383688
8 8 1950224 3544500 5494724
9 9 3806660 7234291 11040951
10 10 477959 837528 1315487
11 11 235289 410994 646283
12 12 7787028 15026342 22813370
13 13 3145361 6265131 9410492
14 14 388437 776277 1164714
15 15 149688 297624 447312
16 16 5735102 11601840 17336942
17 17 4250723 8442271 12692994
18 18 564774 1123973 1688747
19 19 290350 544482 834832
20 20 8302496 16203000 24505496
21 21 4452747 8668253 13121000
22 22 418217 784093 1202310
23 23 115005 230668 345673
</code></pre>
<p>I'm using ggplot to display the different columns on the y-axis, and the Hour as the x-axis.
The problem is that the values of the first column are hidden:</p>
<pre><code>print ggplot(aes(x='Hour',ymin=0,ymax='value',fill='variable'),data = entriesPerHourPerRain) +
geom_area()+theme_matplotlib()
</code></pre>
<p><img src="https://i.stack.imgur.com/DOCpd.png" alt="enter image description here"></p>
<p>I could use alpha in order to see them:</p>
<pre><code> print ggplot(aes(x='Hour',ymin=0,ymax='value',fill='variable'),data = entriesPerHourPerRain) +
geom_area(alpha=0.6)+theme_matplotlib()
</code></pre>
<p><img src="https://i.stack.imgur.com/MWtmt.png" alt="enter image description here"></p>
<p>But I would prefer not to use alpha, and having the three areas visible.
I tried to change the "order" of the columns (ENTRIES_hourly_total, ENTRIES_hourly_not_rainy, ENTRIES_hourly_rainy), but it doesn't seem to change.</p>
<p>Does anyone know how to solve this? Thank you!</p>
<p><strong>-------------------Update:-----------------------</strong></p>
<p>Following the suggestions below, I tried to invert the order of the variables (after melting).
I have grouped the hours in 6 groups (now you see them grouped in the column HourGroupIndex)</p>
<p>This is the standard case with [ENTRIES_hourly_rainy, ENTRIES_hourly_not_rainy, ENTRIES_hourly_total]:</p>
<pre><code>entriesPerHourPerRain = pandas.melt(entriesPerHourPerRain, id_vars=['HourGroupIndex'])
</code></pre>
<pre><code> HourGroupIndex variable value
0 0 ENTRIES_hourly_rainy 5339154
1 1 ENTRIES_hourly_rainy 1345393
2 2 ENTRIES_hourly_rainy 6470132
3 3 ENTRIES_hourly_rainy 11470514
4 4 ENTRIES_hourly_rainy 10840949
5 5 ENTRIES_hourly_rainy 13288465
6 0 ENTRIES_hourly_not_rainy 10947125
7 1 ENTRIES_hourly_not_rainy 2839328
8 2 ENTRIES_hourly_not_rainy 12027313
9 3 ENTRIES_hourly_not_rainy 22365374
10 4 ENTRIES_hourly_not_rainy 21712566
11 5 ENTRIES_hourly_not_rainy 25886014
12 0 ENTRIES_hourly_total 16286279
13 1 ENTRIES_hourly_total 4184721
14 2 ENTRIES_hourly_total 18497445
15 3 ENTRIES_hourly_total 33835888
16 4 ENTRIES_hourly_total 32553515
17 5 ENTRIES_hourly_total 39174479
</code></pre>
<p>This is the reversed case, with the order [ENTRIES_hourly_total, ENTRIES_hourly_not_rainy, ENTRIES_hourly_rainy]:</p>
<pre><code>custom_dict= {'ENTRIES_hourly_rainy':3, 'ENTRIES_hourly_not_rainy':2, 'ENTRIES_hourly_total':1}
entriesPerHourPerRain['rank'] = entriesPerHourPerRain['variable'].map(custom_dict)
entriesPerHourPerRain.sort(columns=['rank','HourGroupIndex'],inplace=True)
del entriesPerHourPerRain['rank']
entriesPerHourPerRain=entriesPerHourPerRain.reset_index()
del entriesPerHourPerRain['index']
HourGroupIndex variable value
0 0 ENTRIES_hourly_total 16286279
1 1 ENTRIES_hourly_total 4184721
2 2 ENTRIES_hourly_total 18497445
3 3 ENTRIES_hourly_total 33835888
4 4 ENTRIES_hourly_total 32553515
5 5 ENTRIES_hourly_total 39174479
6 0 ENTRIES_hourly_not_rainy 10947125
7 1 ENTRIES_hourly_not_rainy 2839328
8 2 ENTRIES_hourly_not_rainy 12027313
9 3 ENTRIES_hourly_not_rainy 22365374
10 4 ENTRIES_hourly_not_rainy 21712566
11 5 ENTRIES_hourly_not_rainy 25886014
12 0 ENTRIES_hourly_rainy 5339154
13 1 ENTRIES_hourly_rainy 1345393
14 2 ENTRIES_hourly_rainy 6470132
15 3 ENTRIES_hourly_rainy 11470514
16 4 ENTRIES_hourly_rainy 10840949
17 5 ENTRIES_hourly_rainy 13288465
</code></pre>
<p>But I still have the same plot in both cases</p>
<pre><code>print ggplot(aes(x='HourGroupIndex',ymin=0,ymax='value',fill='variable'),data = entriesPerHourPerRain) + geom_area(alpha=0.6,position='dodge')+scale_x_continuous(breaks = range(0,6), labels=['0-3','1-7','2-11','3-15','4-19','5-23'])+theme_matplotlib()
</code></pre>
<p>It seems that the order doesn't change much.</p>
<p><img src="https://i.stack.imgur.com/bmsDK.png" alt="enter image description here"></p>
|
<p>Your code looks like Python, not R, so this might not be helpful. I think the ggplot defaults are different in Python. (You did tag the question R though.)</p>
<p>In R:</p>
<pre><code>library(ggplot2)
library(reshape2)
gg <- melt(entriesPerHourPerRain, id="Hour")
ggplot(gg, aes(x=Hour,y=value,fill=variable)) +
geom_area(position="dodge")
</code></pre>
<p>produces this:</p>
<p><img src="https://i.stack.imgur.com/Tyd89.png" alt=""></p>
<p>Whereas this code:</p>
<pre><code>gg$variable <- factor(gg$variable, levels=rev(levels(gg$variable)))
ggplot(gg, aes(x=Hour,y=value,fill=variable)) +
geom_area(position="dodge")
</code></pre>
<p>produces this:
<img src="https://i.stack.imgur.com/Y9MMU.png" alt=""></p>
<p>So <code>variable</code> is a factor and the original levels are set in the order of the columns (_rainy, _not_rainy, _total). When plotted this way the last column's values obscure the other two. The line:</p>
<pre><code>gg$variable <- factor(gg$variable, levels=rev(levels(gg$variable)))
</code></pre>
<p>reverses the order of the factor levels, so now _rainy is plotted last.</p>
<p>Note the use of <code>position="dodge"</code>. The default in <code>geom_area(...)</code> [in R] is "stacked", wherein the y values will be additive (so the y-values for _total will be _rainy + _not_rainy + _total). You don't seem to want this.</p>
|
python|pandas|ggplot2
| 0
|
3,287
| 25,033,631
|
Multiple processes sharing a single Joblib cache
|
<p>I'm using Joblib to cache results of a computationally expensive function in my python script. The function's input arguments and return values are numpy arrays. The cache works fine for a single run of my python script. Now I want to spawn multiple runs of my python script in parallel for sweeping some parameter in an experiment. (The definition of the function remains same across all the runs).</p>
<p><strong>Is there a way to share the joblib cache among multiple python scripts running in parallel?</strong> This would save a lot of function evaluations which are repeated across different runs but do not repeat within a single run. I couldn't find if this is possible in <a href="https://pythonhosted.org/joblib/memory.html" rel="noreferrer">Joblib's documentation </a></p>
|
<p>Specify a common, fixed <code>cachedir</code> and decorate the function that you want to cache using</p>
<pre><code>from joblib import Memory
mem = Memory(cachedir=cachedir)

@mem.cache
def f(arguments):
    """do things"""
    pass
</code></pre>
<p>or simply</p>
<pre><code>def g(arguments):
    pass

cached_g = mem.cache(g)
</code></pre>
<p>Then, even if you are working across processes, across machines, if all instances of your program have access to <code>cachedir</code>, then common function calls can be cached there transparently.</p>
|
python|caching|numpy|joblib
| 13
|
3,288
| 39,174,694
|
label a point in graph using matplotlib for timeseries
|
<p>I have a pandas dataframe with 3 columns.
I plot col1 on the Y axis and a time_stamps series on the X axis.
Whenever col2 is -1, I want to highlight that point on the graph as an anomaly. I tried to get the coordinates and highlight them using ax.text, but I cannot get the correct coordinate since the X axis is a time series. In the example below I am trying to plot the third row's coordinates, since col2[2] == -1.</p>
<pre><code>import pandas
import matplotlib.pyplot as plt
df=df[["time_stamps","col1"]]
df.set_index("time_stamps",inplace=True)
ax=df.plot()
ticklabels = [l.get_text() for l in ax.xaxis.get_ticklabels()]
new_labels=[tick[-6:] for tick in ticklabels]
ax.xaxis.set_ticklabels(new_labels)
x1="16965 days 17:52:03"
y1=0.7
ax.text(x1, y1, "anomaly", fontsize=15)
plt.show()
</code></pre>
<p>Sample data looks like </p>
<pre><code>time_stamp=[16965 days 17:52:00,16965 days 17:52:02
16965 days 17:52:03,16965 days 17:52:05
16965 days 17:52:06,16965 days 17:52:08
16965 days 17:52:09,16965 days 17:52:11
16965 days 17:52:12,16965 days 17:52:14]
col1=[0.02,0.01,0.7,0.019,0.019,0.017,0.023,0.04,0.072,0.05]
col2=[1,1,-1,1,1,1,1,1,1,1]
</code></pre>
|
<p>I figured out that I can convert the timedeltas to days and then label the points as anomalies. This is what I did:</p>
<pre><code>import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

def changetotimedelta(row):
    return pd.to_timedelta(row["time_stamps"]) / np.timedelta64(1, 'D')

def main():
    df = pd.read_csv(inputFile)
    df["time"] = df.apply(changetotimedelta, axis=1)
    new_df = df[["time", "col1"]]
    new_df.set_index("time", inplace=True)
    ax = new_df.plot()
    x1 = pd.to_timedelta("16965 days 17:52:03") / np.timedelta64(1, 'D')
    y1 = 0.7
    ax.annotate('anomaly', xy=(x1, y1), xytext=(x1, 1),
                arrowprops=dict(facecolor='red', shrink=0.01))
    plt.show()
</code></pre>
|
python|pandas|matplotlib|dataframe|time-series
| 1
|
3,289
| 39,177,438
|
Python numpy reshape issues
|
<p>I am working with machine learning and numpy and having issues with the <code>np.reshape()</code> function. My data sizes show up in the variable console as Dataframe(22,5), x(21,4), x_lately(1,4), y(22,). I tried reshaping them with <code>np.reshape(22,5)</code> since that is the dataframe size, and it gives me this error:</p>
<blockquote>
<p>ValueError: total size of new array must be unchanged</p>
</blockquote>
<p>I presume I am either not understanding something or there is something wrong with my system.</p>
<p>Thank you in advance.</p>
|
<p>You really need to describe your data better.</p>
<p>I can imagine the pieces fitting together:</p>
<pre><code>Dataframe(22,5), x(21,4),x_lately(1,4), y(22,)
</code></pre>
<p>The dataframe has 22 rows, 5 columns. <code>y</code> could be one column (22 items). <code>x</code> could be most of the 4 other columns, and <code>x_lately</code> the rest of <code>x</code>.</p>
<p>You don't describe how <code>x</code>,<code>y</code>,<code>x_lately</code> are related to the dataframe, though it looks like they might be arrays extracted from it (either as copies or views). In any case, they cannot be <code>reshaped</code> to match the size of the dataframe. At best they are pieces of the dataframe.</p>
<p><code>reshape</code> isn't a resize or expand or pad or anything like that. You don't even explain why want to reshape anything.</p>
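<p>The size constraint is easy to demonstrate. A minimal sketch (the array names here are illustrative, not taken from the question):</p>

```python
import numpy as np

a = np.arange(12)          # 12 elements in total
b = a.reshape(3, 4)        # fine: 3 * 4 == 12 -- same data, new shape

# reshape never adds or drops elements, so a mismatched total size fails
try:
    a.reshape(22, 5)       # 22 * 5 == 110 != 12 -> ValueError
    ok = True
except ValueError:
    ok = False             # this branch runs: the total size must match
```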
|
python|numpy|machine-learning
| 0
|
3,290
| 19,804,241
|
Python Pandas check if a value occurs more than once in the same day
|
<p>I have a Pandas dataframe as below. What I am trying to do is check if a station has variable <code>yyy</code> and any other variable on the same day (as in the case of <code>station1</code>). If this is true I need to delete the whole row containing <code>yyy</code>. </p>
<p>Currently I am doing this using <code>iterrows()</code> and looping to search the days in which this variable appears, changing the variable to something like "delete me", building a new dataframe from this (because <a href="https://stackoverflow.com/questions/15972264/why-doesnt-this-function-take-after-i-iterrows-over-a-pandas-dataframe">pandas doesn't support replacing in place</a>) and filtering the new dataframe to get rid of the unwanted rows. This works now because my dataframes are small, but is not likely to scale.</p>
<p><strong>Question:</strong> This seems like a very "non-Pandas" way to do this, is there some other method of deleting out the unwanted variables?</p>
<pre><code> dateuse station variable1
0 2012-08-12 00:00:00 station1 xxx
1 2012-08-12 00:00:00 station1 yyy
2 2012-08-23 00:00:00 station2 aaa
3 2012-08-23 00:00:00 station3 bbb
4 2012-08-25 00:00:00 station4 ccc
5 2012-08-25 00:00:00 station4 ccc
6 2012-08-25 00:00:00 station4 ccc
</code></pre>
|
<p>I might index using a boolean array. We want to delete rows (if I understand what you're after, anyway!) which have <code>yyy</code> and more than one <code>dateuse</code>/<code>station</code> combination.</p>
<p>We can use <code>transform</code> to broadcast the size of each <code>dateuse</code>/<code>station</code> combination up to the length of the dataframe, and then select the rows in groups which have length > 1. Then we can <code>&</code> this with where the <code>yyy</code>s are.</p>
<pre><code>>>> multiple = df.groupby(["dateuse", "station"])["variable1"].transform(len) > 1
>>> must_be_isolated = df["variable1"] == "yyy"
>>> df[~(multiple & must_be_isolated)]
dateuse station variable1
0 2012-08-12 00:00:00 station1 xxx
2 2012-08-23 00:00:00 station2 aaa
3 2012-08-23 00:00:00 station3 bbb
4 2012-08-25 00:00:00 station4 ccc
5 2012-08-25 00:00:00 station4 ccc
6 2012-08-25 00:00:00 station4 ccc
</code></pre>
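<p>Put together as a self-contained sketch (using <code>transform('size')</code>, which is equivalent to <code>transform(len)</code> here):</p>

```python
import pandas as pd

df = pd.DataFrame({
    "dateuse": ["2012-08-12"] * 2 + ["2012-08-23"] * 2 + ["2012-08-25"] * 3,
    "station": ["station1", "station1", "station2", "station3",
                "station4", "station4", "station4"],
    "variable1": ["xxx", "yyy", "aaa", "bbb", "ccc", "ccc", "ccc"],
})

# rows whose dateuse/station combination appears more than once
multiple = df.groupby(["dateuse", "station"])["variable1"].transform("size") > 1
# rows holding the variable that must not share a day with others
must_be_isolated = df["variable1"] == "yyy"

result = df[~(multiple & must_be_isolated)]
```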
|
python|python-2.7|pandas
| 4
|
3,291
| 23,561,856
|
FFT vs least squares fitting of fourier components?
|
<p>So I've got a signal, and I've tried fitting a curve to it using two methods that I thought should have been numerically equivalent, but apparently are not.</p>
<p><strong>Method 1: Explicit fitting of sinusoids by least squares:</strong></p>
<pre><code>def curve(x, a0, a1, b1, a2, b2):
    return (a0 + a1*np.cos(x/720*2*math.pi) + b1*np.sin(x/720*2*math.pi)
            + a2*np.cos(x/720*2*math.pi*2) + b2*np.sin(x/720*2*math.pi*2))

def fit_curve(xdata, ydata):
    guess = [10, 0, 0, 0, 0]
    params, params_covariance = optimize.curve_fit(curve, xdata, ydata, guess)
    return params, params_covariance
</code></pre>
<p><strong>Method 2: Use of inbuilt FFT algorithm to do the same thing</strong>:</p>
<pre><code> f = np.fft.rfft(y,3)
curve = np.fft.irfft(f, width)
</code></pre>
<p>I have two problems. The first one is minor, the FFT is 'out of scale', so I apply a scaling factor <code>mean(y)/mean(curve)</code> to fix it, which is a bit of a hack. I'm not sure why this is the case.</p>
<p>The main problem I have is that I believe these should produce almost identical results, but they don't. The explicit fitting produces a tighter fit every time than the FFT results- my question is, <strong>should it?</strong></p>
|
<p>Take a look at the <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.fft.rfft.html" rel="nofollow">docstring for <code>np.fft.rfft</code></a>. In particular, this: "If n is smaller than the length of the input, the input is cropped." When you do this:</p>
<pre><code> f = np.fft.rfft(y,3)
</code></pre>
<p>you are computing the FFT of the first three data points in <code>y</code>, not the first three Fourier coefficients of <code>y</code>.</p>
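<p>A small sketch of the difference (the signal here is made up for illustration). The DC term of a DFT is the sum of its input, which makes the cropping easy to see; it also explains the scaling hack, since truncating the full spectrum and inverting with the original length keeps the mean intact:</p>

```python
import numpy as np

y = np.sin(np.linspace(0, 2 * np.pi, 100, endpoint=False)) + 3.0

# np.fft.rfft(y, 3) crops y to its first 3 samples before transforming,
# so its DC term is the sum of only those 3 samples...
crop_dc = np.fft.rfft(y, 3)[0].real          # ~ y[:3].sum()

# ...whereas the first coefficients of the full FFT describe all of y
full_dc = np.fft.rfft(y)[0].real             # ~ y.sum()

# a genuine low-order Fourier fit: truncate the full spectrum, then
# invert with the original length -- no rescaling hack needed
smooth = np.fft.irfft(np.fft.rfft(y)[:3], len(y))
```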
|
python|numpy|scipy|fft|curve-fitting
| 3
|
3,292
| 22,480,884
|
Pandas read_csv. Using '^A' as a delimeter
|
<p>I have a csv file where the field separators are <code>^A</code> characters. When I try</p>
<pre><code>df = pd.read_csv(p_file, sep='^A')
</code></pre>
<p>the data is not split into columns. The file looks as follows:</p>
<pre><code>0J0NrQDHHx^A989.0^A1
0J0NrQDHHx^A1204.0^A1
0U0NrQDHHx^A1654.0^A1
0N0NrQDHHx^A1679.0^A3
...
</code></pre>
<p>However, when I run the command above, I get everything in one column. Why?</p>
|
<p>Use <code>sep='\^A'</code>:</p>
<pre><code>pd.read_csv(p_file, sep='\^A')
</code></pre>
<p>Reason is that <code>sep</code> also accepts regular expressions, and <code>^</code> has a special meaning in regular expressions, so the <code>\</code> is used to escape this.</p>
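<p>A hedged sketch with an inline sample (if the file actually contains the control character that terminals display as <code>^A</code>, i.e. <code>\x01</code>, use <code>sep='\x01'</code> instead):</p>

```python
import io
import pandas as pd

data = ("0J0NrQDHHx^A989.0^A1\n"
        "0J0NrQDHHx^A1204.0^A1\n"
        "0U0NrQDHHx^A1654.0^A1\n")

# a raw string avoids Python eating the backslash; engine='python' is
# what pandas falls back to for regex separators anyway
df = pd.read_csv(io.StringIO(data), sep=r"\^A", header=None, engine="python")
# df now has 3 rows split into 3 columns
```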
|
python|pandas
| 4
|
3,293
| 62,432,442
|
Pandas - list with dict to dataframe
|
<p>I have the following output:</p>
<pre><code>output = [{'test1': [('No Data', '[Auto] Clock in sync with NTP')]},
          {'test2': [('No Data', '[Auto] Clock in sync with NTP'),
                     ('No Data', 'Lambda - Concurrent Execution Limit')]
          }]
</code></pre>
<p>Needed Dataframe:</p>
<pre><code> test1 test2
0 'No Data', '[Auto] Clock in sync with NTP') 'No Data', '[Auto] Clock in sync with NTP'
1 'No Data','Lambda - Concurrent Execution Limit'
</code></pre>
<pre><code>from pprint import pprint
import pandas as pd
df = pd.json_normalize(output)
pprint(df)
</code></pre>
<p>This does not produce the format I need.
Could you help me?</p>
|
<pre><code>output = [{'test1': [('No Data', '[Auto] Clock in sync with NTP')]},
          {'test2': [('No Data', '[Auto] Clock in sync with NTP'),
                     ('No Data', 'Lambda - Concurrent Execution Limit')]
          }]
</code></pre>
<p>You can do this, but it is not a great idea to have lists be cells in a pandas dataframe.</p>
<pre><code>pd.concat([pd.DataFrame(o) for o in output], axis=1)
</code></pre>
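<p>A self-contained run of the same idea (note the tuples stay whole inside the cells):</p>

```python
import pandas as pd

output = [{'test1': [('No Data', '[Auto] Clock in sync with NTP')]},
          {'test2': [('No Data', '[Auto] Clock in sync with NTP'),
                     ('No Data', 'Lambda - Concurrent Execution Limit')]}]

# each dict becomes a one-column frame; concat lines them up side by side
df = pd.concat([pd.DataFrame(o) for o in output], axis=1)
# df has columns ['test1', 'test2'] and a NaN where test1 has no second row
```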
|
python|pandas|dataframe
| 2
|
3,294
| 62,240,134
|
How to plot the frequency of a specific word through time
|
<p>I have a dataset</p>
<pre><code>Column1 Column2 Column3 ....
2020/05/02 She heard the gurgling water (not relevant)
2020/05/02 The water felt delightful
2020/05/03 Another instant and I shall never again see the sun, this water, that gorge!
2020/05/04 Fire would have been her choice.
2020/05/04 Everywhere you go in world are water fountains.
...
2020/05/31 She spelled "mother" several times.
</code></pre>
<p>I would like to plot the frequency of word 'water' through time. How could I do?</p>
<p>What I have tried is defining a pattern:</p>
<pre><code>pattern=['water']
</code></pre>
<p>and apply <code>re.search</code>: </p>
<pre><code>df['Column2'] = df['Column2'].apply(lambda x: re.search(pattern,x).group(1))
</code></pre>
<p>to select the word <code>water</code> in <code>Column2</code>.
To group by date and count them, I would use</p>
<pre><code>df.groupby(['Column1','Column2'])['Column1'].agg({'Frequency':'count'})
</code></pre>
<p>and to plot them I would use matplotlib (using a bar plot):</p>
<pre><code>df['Column1'].value_counts().plot.bar()
</code></pre>
<p>This is what I have tried, with a lot of mistakes.</p>
|
<p><strong>Setup</strong></p>
<pre><code>df = pd.DataFrame({
"Column1": ["2020/05/02", "2020/05/02", "2020/05/03", "2020/05/04", "2020/05/04", "2020/05/31"],
"Column2": ["She heard the gurgling water water", "The water felt delightful", "Another instant and I shall never again see the sun, this water, that gorge!", "Fire would have been her choice.", "Everywhere you go in world are water fountains.", "She spelled 'mother' several times."]
})
</code></pre>
<p><strong>Logic</strong></p>
<pre><code># for each string, get the number of times a phrase appears
df['phrase_count'] = df['Column2'].str.count('water')
# plot the results
df.groupby('Column1')['phrase_count'].sum().plot(kind='bar')
</code></pre>
<p><strong>Results</strong></p>
<p><a href="https://i.stack.imgur.com/Wkb1k.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Wkb1k.png" alt="enter image description here"></a></p>
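<p>One caveat worth knowing: <code>str.count('water')</code> matches substrings, so it also counts words like <em>waterfall</em>. Since <code>str.count</code> takes a regex, a word-boundary pattern fixes that (the sample strings below are illustrative):</p>

```python
import pandas as pd

s = pd.Series(["water waterfall", "The water felt delightful", "no match"])

substring_hits = s.str.count("water")        # counts 'waterfall' too
whole_word_hits = s.str.count(r"\bwater\b")  # whole words only
```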
|
python|regex|pandas|matplotlib
| 1
|
3,295
| 62,343,258
|
Custom Keras Layer with Trainable Scalars
|
<p>I'm trying to write a custom Keras layer which implements the following, componentwise:</p>
<p><code>x -> a*x + b*ReLU(x)</code></p>
<p>with a and b trainable weights. Here's what I've tried so far:</p>
<pre class="lang-py prettyprint-override"><code>class Custom_ReLU(tf.keras.layers.Layer):
    def __init__(self, units=d):
        super(Custom_ReLU, self).__init__()
        self.units = units

    def build(self, input_shape):
        self.a1 = self.add_weight(shape=[1],
                                  initializer='random_uniform',
                                  trainable=True)
        self.a2 = self.add_weight(shape=[1],
                                  initializer='random_uniform',
                                  trainable=True)

    def call(self, inputs):
        return self.a1*inputs + self.a2*(tf.nn.relu(inputs))
</code></pre>
<p>However, I get errors. I think the issue is that I have no clue how to define trainable "scalars"... Am I correct in thinking this, and how do I do that?</p>
<p><strong>Edit/Additions:</strong></p>
<p>Here is how I'm trying to build my plain-vanilla feed-forward architecture with ReLU replaced by "Custom_ReLU":</p>
<pre><code># Build Vanilla Network
inputs_ffNN = tf.keras.Input(shape=(d,))
x_ffNN = fullyConnected_Dense(d)(inputs_ffNN)
for i in range(Depth):
    x_HTC = Custom_ReLU(x_ffNN)
    x_ffNN = fullyConnected_Dense(d)(x_ffNN)
outputs_ffNN = fullyConnected_Dense(D)(x_ffNN)
ffNN = tf.keras.Model(inputs_ffNN, outputs_ffNN)
</code></pre>
<p>And here is a snippet of the errors:</p>
<pre><code>---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-27-8bf6fc4ae89d> in <module>
7 #x_HTC = tf.nn.relu(x_HTC)
8 x_HTC = BounceLU(x_HTC)
----> 9 x_HTC = HTC(d)(x_HTC)
10 outputs_HTC = HTC(D)(x_HTC)
11 ffNN_HTC = tf.keras.Model(inputs_HTC, outputs_HTC)
~/.local/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/base_layer.py in __call__(self, inputs, *args, **kwargs)
816 # Eager execution on data tensors.
817 with backend.name_scope(self._name_scope()):
--> 818 self._maybe_build(inputs)
819 cast_inputs = self._maybe_cast_inputs(inputs)
820 with base_layer_utils.autocast_context_manager(
~/.local/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/base_layer.py in _maybe_build(self, inputs)
2114 # operations.
2115 with tf_utils.maybe_init_scope(self):
-> 2116 self.build(input_shapes)
2117 # We must set self.built since user defined build functions are not
2118 # constrained to set self.built.
<ipython-input-5-21623825ed35> in build(self, input_shape)
5
6 def build(self, input_shape):
----> 7 self.w = self.add_weight(shape=(input_shape[-1], self.units),
8 initializer='random_normal',
9 trainable=False)
TypeError: 'NoneType' object is not subscriptable
</code></pre>
|
<p>I have no problem using your layer:</p>
<pre><code>class Custom_ReLU(tf.keras.layers.Layer):
    def __init__(self):
        super(Custom_ReLU, self).__init__()
        self.a1 = self.add_weight(shape=[1],
                                  initializer='random_uniform',
                                  trainable=True)
        self.a2 = self.add_weight(shape=[1],
                                  initializer='random_uniform',
                                  trainable=True)

    def call(self, inputs):
        return self.a1*inputs + self.a2*(tf.nn.relu(inputs))
</code></pre>
<p>usage:</p>
<pre><code>d = 5
inputs_ffNN = tf.keras.Input(shape=(d,))
x_ffNN = tf.keras.layers.Dense(10)(inputs_ffNN)
x_HTC = Custom_ReLU()(x_ffNN)
outputs_ffNN = tf.keras.layers.Dense(1)(x_HTC)
ffNN = tf.keras.Model(inputs_ffNN, outputs_ffNN)
ffNN.compile('adam', 'mse')
ffNN.fit(np.random.uniform(0,1, (10,5)), np.random.uniform(0,1, 10), epochs=10)
</code></pre>
<p>here the full example: <a href="https://colab.research.google.com/drive/1n4jIsY3qEDvtobofQaUPO3ysUW9bQWjs?usp=sharing" rel="nofollow noreferrer">https://colab.research.google.com/drive/1n4jIsY3qEDvtobofQaUPO3ysUW9bQWjs?usp=sharing</a></p>
|
python|tensorflow|keras|neural-network|layer
| 1
|
3,296
| 62,439,852
|
Sampling from gaussian distribution
|
<p>My question is very specific. Given a <code>k</code> dimensional Gaussian distribution with mean and standard deviation, say I wish to sample <code>10</code> points from this distribution. But the <code>10</code> samples should be very different from each other. For example, I do not wish to sample <code>5</code> of those very close to the mean (By very close, we may assume for this example within <code>1</code> sigma) which may happen if I do random sampling. Let us also add an additional constraint that all the drawn samples should be at least 1 sigma away from each other. Is there a known way to sample in this fashion methodically? Is there any such module in PyTorch which can do so? </p>
<p>Sorry if this question is ill-posed, but I am trying to understand whether such a thing is possible.</p>
|
<p>To my knowledge there is no such library. The problem you are trying to solve is straightforward with rejection sampling: just check whether each random draw is 'far enough' from the mean (and from the samples already accepted), and redraw otherwise. The complexity of each check is constant. The probability of a point falling <strong>more</strong> than one sigma from the mean is ~32% in one dimension, so rejection is not that unlikely to succeed.</p>
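<p>A minimal numpy sketch of that rejection loop, covering both constraints (distances are measured in units of sigma; the function name and defaults are mine, not from any library):</p>

```python
import numpy as np

def sample_spread(mean, sigma, n, min_dist=1.0, seed=0):
    """Draw n Gaussian samples, each at least min_dist sigmas away
    from the mean and from every previously accepted sample."""
    rng = np.random.default_rng(seed)
    mean = np.asarray(mean, dtype=float)
    sigma = np.asarray(sigma, dtype=float)

    def dist(p, q):
        # sigma-scaled Euclidean distance
        return np.linalg.norm((p - q) / sigma)

    accepted = []
    while len(accepted) < n:
        cand = rng.normal(mean, sigma)
        if dist(cand, mean) >= min_dist and all(
                dist(cand, a) >= min_dist for a in accepted):
            accepted.append(cand)
    return np.array(accepted)

pts = sample_spread(mean=[0.0, 0.0], sigma=[1.0, 1.0], n=5)
```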
|
python|pytorch|sampling|normal-distribution
| 0
|
3,297
| 51,223,859
|
Pandas: how to merge two dataframes for different years?
|
<p>I have two dataframes <code>df1</code> and <code>df2</code>.</p>
<p><code>df1</code> contains the information of people and how money they received and the ID code.</p>
<pre><code>df1 = pd.DataFrame({'Money' : [359,45,780,77,93,257],
'NAME' : ['A', 'B', 'C', 'D', 'E', 'F'],
'ID' : ['0', '1', '2', '3', '4','5']})
</code></pre>
<p>In <code>df2</code> we have the classification of each ID for different years, for example like the following:</p>
<pre><code> C ID Year
0 1 0 2015
1 2 0 2016
2 3 0 2017
3 1 1 2016
4 1 1 2017
5 3 2 2017
6 3 3 2015
7 1 3 2017
8 1 4 2015
9 3 5 2016
10 2 5 2017
</code></pre>
<p>where <code>C</code> is the classification. I would like to merge the two dataframe in order to have a dataframe like the following</p>
<pre><code>df3
ID Money NAME 2015 2016 2017
0 0 359 A 1 2 3
1 1 45 B NaN 1 1
2 2 780 C NaN NaN 2
3 3 77 D 2 NaN 1
4 4 93 E 1 NaN NaN
5 5 257 F NaN 3 2
</code></pre>
|
<p>First, create the year columns:</p>
<pre><code>c = df2.set_index(['ID', 'Year']).unstack('Year').C
</code></pre>
<p>That gives you:</p>
<pre><code>Year 2015 2016 2017
ID
0 1.0 2.0 3.0
1 NaN 1.0 1.0
2 NaN NaN 3.0
3 3.0 NaN 1.0
4 1.0 NaN NaN
5 NaN 3.0 2.0
</code></pre>
<p>Then <code>df1.join(c, 'ID')</code>:</p>
<pre><code> Money NAME ID 2015 2016 2017
0 359 A 0 1.0 2.0 3.0
1 45 B 1 NaN 1.0 1.0
2 780 C 2 NaN NaN 3.0
3 77 D 3 3.0 NaN 1.0
4 93 E 4 1.0 NaN NaN
5 257 F 5 NaN 3.0 2.0
</code></pre>
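<p>End to end, assuming <code>ID</code> carries the same integer dtype in both frames (in the question <code>df1</code> stores it as strings, which would need an <code>astype(int)</code> first or the join keys won't match):</p>

```python
import pandas as pd

df1 = pd.DataFrame({'Money': [359, 45, 780, 77, 93, 257],
                    'NAME': list('ABCDEF'),
                    'ID': [0, 1, 2, 3, 4, 5]})
df2 = pd.DataFrame({'C':    [1, 2, 3, 1, 1, 3, 3, 1, 1, 3, 2],
                    'ID':   [0, 0, 0, 1, 1, 2, 3, 3, 4, 5, 5],
                    'Year': [2015, 2016, 2017, 2016, 2017, 2017,
                             2015, 2017, 2015, 2016, 2017]})

# pivot the classification into one column per year, then attach to df1
c = df2.set_index(['ID', 'Year']).unstack('Year').C
df3 = df1.join(c, on='ID')
```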
|
python|pandas|dataframe|merge
| 2
|
3,298
| 51,470,082
|
how to remove trailing zeros showing in my data description in jupyter notebook
|
<p>How do I remove trailing zeros showing in my data description in Jupyter notebook?</p>
<p><img src="https://i.stack.imgur.com/IEFW4.png" alt="enter image description here"></p>
|
<p><code>pd.set_option('precision', 1)</code> might help.</p>
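<p>For reference, the full option name is <code>display.precision</code>; newer pandas versions reject the bare <code>'precision'</code> shorthand as ambiguous. A short sketch:</p>

```python
import pandas as pd

pd.set_option("display.precision", 1)   # affects display only, not the data
df = pd.DataFrame({"a": [0.123456, 1.987654]})
print(df)   # the column now renders with one decimal place
```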
|
python|pandas|jupyter-notebook
| 2
|
3,299
| 51,367,545
|
How to use numpy.where to change all pixels of an image?
|
<p>I have an image of shape (300,300,3) consisting of these pixels <code>[255, 194, 7],[224, 255, 8],[230, 230, 230],[11, 102, 255]</code>. I want to change this pixel <code>[230, 230, 230]</code> to <code>[255,255,255]</code>. And rest other pixels to <code>[0,0,0]</code>. So I'm applying numpy <code>where</code> function to switch the pixels. Below is the code:</p>
<pre><code>import numpy
im = numpy.array([[[255, 194, 7],[224, 255, 8],[230, 230, 230],[11, 102, 255]]])
im[np.where((im == [230, 230, 230]).all(axis = 2))] = [255,255,255]
im[np.where((im != [255,255,255]).all(axis = 2))] = [0,0,0]
</code></pre>
<p>The first line works fine, but all the pixels that contain <code>255</code>, like <code>[11, 102, 255]</code>, do not get flipped at all by the second line, and the image remains the same. Can anyone tell me what I'm doing wrong?</p>
|
<pre><code>import numpy as np
im = np.array([[[255, 194, 7],[224, 255, 8],[230, 230, 230],[11, 102, 255]]])
</code></pre>
<p>Like this?<br>
Make a mask and use it to change the values.</p>
<pre><code>>>> mask = im == 230
>>> im[mask] = 255
>>> im[np.logical_not(mask)] = 0
>>> im
=> array([[[ 0, 0, 0],
[ 0, 0, 0],
[255, 255, 255],
[ 0, 0, 0]]])
</code></pre>
<p>Or using numpy.where</p>
<pre><code>>>> np.where(im==230, 255, 0)
=> array([[[ 0, 0, 0],
[ 0, 0, 0],
[255, 255, 255],
[ 0, 0, 0]]])
</code></pre>
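<p>One caveat: the elementwise comparison above only works because the target pixel has all three channels equal to 230. For an arbitrary RGB value, build a per-pixel mask with <code>.all(axis=-1)</code> — a short sketch:</p>

```python
import numpy as np

im = np.array([[[255, 194, 7], [224, 255, 8],
                [230, 230, 230], [11, 102, 255]]])

# True only where ALL three channels match the target colour
mask = (im == [230, 230, 230]).all(axis=-1)

out = np.zeros_like(im)      # everything black...
out[mask] = [255, 255, 255]  # ...except the matched pixels, made white
```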
|
python|image|numpy|image-processing
| 4
|