How to change an NA value in a specific row in R?
I am very new to R and still learning. My data is the Titanic.csv, which has 891 observations and 13 variables. I would like to change the NA value in row 62 (PassengerId 62) of column 12 (column name "Embarked") from NA to "S", and the one in row 830 to "C".
I found similar postings, but they didn't give me what I need.
How to replace certain values in a specific rows and columns with NA in R?
How to change NA value in a specific row and column?
My assignment is asking to use the below function.
boat<-within(boat,Embarked[is.na(Embarked)]<-"your choice here")
If I do this
boat<-within(boat,Embarked[is.na(Embarked)]<- "S")
or "C" in where it says "your choice here" it replaces both observations with either "S" or "C".
Below is the example of the Titanic.csv file.
PassengerId Survived Pclass Name Sex Age SibSp Parch Ticket Fare Cabin Embarked
1 0 3 Braund, Owen male 22 1 0 A/5 1717.25 S
2 1 1 Cumings,John female 38 1 0 PC 9971.28 C85 C
17 0 3 Rice, Eugene male 2 4 1 382 29.125 Q
18 1 2 Williams,Charles male 0 0 2443 13 S
60 0 3 Goodwin, William male 11 5 2 CA 21 46.9 S
61 0 3 Sirayanian, Orsen male 22 0 0 2669 7.2292 C
62 1 1 Icard, Amelie female 38 0 0 11357 80 B28 NA
63 0 1 Harris, Henry male 45 1 0 36973 83.475 C83 S
My apologies if the sample dataframe is somewhat condensed.
I'm guessing the text about deleting git branches is included by accident?
Hi Camille, thanks for editing my dataframe. No, the text about deleting git branches is not included. Thank you!
# df is your data frame; the first index is the row (e.g. 62), the second is the column (e.g. 12)
df[62, 12]
# Now assign "S" with the `<-` operator
df[62, 12] <- "S"
# and check if NA is changed to S
df[62, 12]
#Embarked
#<chr>
# 1 S
# Same with
df[830, 12] <- "C"
Thank you TarJae for your help and speedy reply! It is working now.
To expand on this for a new R user, the brackets after a data object indicate where you want to locate things. The first part (before the comma) indexes rows, the second part (after the comma) indexes columns. So df[62, 12] is indexing the row and column. If you want to change everything in a column or row, you can leave one part blank (i.e. df[,12] calls all rows in column 12, and df[62,] calls all columns in row 62). You can also call multiple specific rows/columns, i.e. df[c(1,6,12), c(3,5,9)] would call rows 1, 6, and 12 and columns 3, 5, and 9. Good luck!
| common-pile/stackexchange_filtered |
Raychaudhuri equation for black holes
Since the Raychaudhuri equation is defined only for timelike and null geodesic congruences, is it valid to use this equation to describe the null generators of the event horizon of a black hole?
Assuming that it is valid: if the black hole is stationary, the expansion parameter, $\theta=k^a{}_{;a}-\kappa$, will be zero. Here $k$ is a Killing vector that is timelike outside the black hole (it must exist because the spacetime is stationary), while $\kappa$ is the surface gravity, defined by $k^a k^b{}_{;a}=\kappa k^b$. But since $k$ is a Killing vector, the trace of $\nabla_a k_b$ (i.e. the divergence $k^a{}_{;a}$) will be zero, and hence the surface gravity of all stationary black holes would also turn out to be zero, which cannot be true. Can someone point out my mistake?
The trace of $\nabla_ak_b$ is zero, but $\theta\ne\nabla_ak^a$.
In Poisson's book that's what he has said. For non-affine parameterizations, $\theta$ equals the divergence of $k$ minus $\kappa$, the surface gravity. It's the last problem of section 2.6.
Why is $\theta=0$, for a stationary black hole?
@DrakeMarquis I had the same question. I don't fully understand why, but I think if $\theta$ were non zero, the event horizon won't be stable as there will be accumulation of geodesics. Poisson has mentioned it after equation 5.91 in his book but unfortunately it's not clear at all.
The key here is that when Poisson stated $\theta=0$ for a stationary black hole, he referred to the affinely parameterized geodesic congruences. In that case, $\nabla_ak_b$ is no longer antisymmetric, which compensates $\kappa$.
@DrakeMarquis Your answer is correct but in problem 7 of page 220 he considers a Killing vector field that is not affinely parametrized but still has a zero expansion. How should the situation be resolved then?
Unfortunately, I only have the draft version, which has only 190 pages. I have no access to googlebook either.
Let us continue this discussion in chat.
There is a subtlety in applying the Raychaudhuri equation to the null generators of a null hypersurface. The relation $\theta=\nabla_ak^a-\kappa$ applies to a congruence defined all over (some open subset of) the spacetime. In the case of the null generators $\xi^a$ of the horizon, $\xi^a$ is defined only on the horizon. In order to use that relation, you have to extend $\xi^a$ to some null congruence near the horizon. Although the expansion is independent of the extension, $\nabla_ak^a$ depends on it. So $\theta=\nabla_ak^a-\kappa$ cannot be used in this case.
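To spell out the trace step that generates the puzzle: Killing's equation forces $\nabla_a k_b$ to be antisymmetric, so its trace (the divergence of $k$) vanishes identically,

```latex
% Killing's equation and its trace
\nabla_{(a} k_{b)} = 0 \;\Longrightarrow\; \nabla_a k_b = -\nabla_b k_a
\;\Longrightarrow\; \nabla_a k^a = g^{ab}\nabla_a k_b = 0 .
% Naively substituting into \theta = \nabla_a k^a - \kappa would then give
% \theta = -\kappa, the contradiction in the question. As the answer above
% notes, the identity \theta = \nabla_a k^a - \kappa presupposes a
% congruence defined on an open neighbourhood, not just on the horizon.
```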
Potentially related: https://physics.stackexchange.com/questions/465313/are-the-horizon-generators-radial-null-geodesics-also/465461#465461
Regex for a word followed by multiple numbers in a sequence
I am trying to create a general regular expression that returns a boolean if it sees one word followed by more than one set of numbers.
It should report TRUE for the following test cases:
"AB 40<PHONE_NUMBER> 1296 1496 1722 1847 1915 1979 2018 2056 2106 2240 2294 2394 2539 2587 2660"
"SB 466<PHONE_NUMBER> 1554 1761 1807 1828 1852 1875 1899 1922 1940 1968 2007 2046 2074 2075 2158"
"Assembly 772 1604 1932 2187 2543 2759 2777"
"Senate 241 1110 1342 1822 1865 1957"
And FALSE for the following cases:
"ACR 105"
"SJR 29"
"AB 2359 AB 2456 and AB 2823"
"CDFA Budget for Pierce's Disease"
"PERS, STRS, Regents"
If you can provide two answers: one regular expression looking for the letters and the numbers and another answer looking for multiple numbers back to back, I would greatly appreciate it.
Thank you so much for your help!
I don't understand "multiple numbers back to back".
Sorry: Multiple numbers in a sequence within a string.
My first thought on your requirement is like this; however, it's not clear to me whether words/numbers can only be separated by whitespace or also by other characters like commas, in which case I'd use that variant.
Try this.
grepl('\\D+\\d+\\s\\d+', x)
# [1] TRUE TRUE TRUE TRUE FALSE FALSE FALSE FALSE FALSE
Explanation
\D+ one or more non-digit
\d+ one or more digit
\s whitespace
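The same pattern behaves identically in Python's re module, which makes it easy to check against the test cases from the question (shown here in Python for illustration; the answer above uses R's grepl):

```python
import re

# \D+ one or more non-digits, \d+ one or more digits, \s one whitespace,
# \d+ digits again: i.e. some text followed by at least two numbers in a row
pattern = re.compile(r"\D+\d+\s\d+")

should_match = [
    "Assembly 772 1604 1932 2187 2543 2759 2777",
    "Senate 241 1110 1342 1822 1865 1957",
]
should_not_match = [
    "ACR 105",
    "SJR 29",
    "AB 2359 AB 2456 and AB 2823",
    "CDFA Budget for Pierce's Disease",
    "PERS, STRS, Regents",
]

results = [bool(pattern.search(s)) for s in should_match + should_not_match]
# -> [True, True, False, False, False, False, False]
```

Note that "AB 2359 AB 2456 and AB 2823" fails because no number is immediately followed by a second number, which is exactly the "multiple numbers in a sequence" condition.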
Yes, this works! Thank you so much. So I can learn for the future, could you please explain how this regular expression works?
How to pre-populate WTForm FileField with current image file name?
When a user wants to update a post using a WTForm, how can I pre-populate the FileField input on the form so that it displays the current image name?
Currently, the user has to upload the same image every time that they update other aspects of the post otherwise the current image will be removed as the post is being updated without an image.
How I am currently pre-populating the form:
if request.method == 'GET':
form.title.data = post.title
form.language.data = post.language
form.alt.data = post.alt
form.text.data = post.content
form.tag_1.data = post.tag_1
form.tag_2.data = post.tag_2
form.tag_3.data = post.tag_3
form.image.data = post.img # This is the line where I am trying to pre-populate the FileField
I was stuck on the same issue while working on a project. Though I did not solve the problem of pre-populating the FileField, I had a workaround. I did the following.
When request.method is 'GET', check whether the image attribute in the database is NULL. If it is not NULL, pass the location of the image file to the FileField. Using your variables, in your routes.py, it would be:
form.image.process_data(path_to_image)
Display an image thumbnail under the FileField in the HTML form using a Jinja2 if condition, to let the user know that an image is already there. (I was using a Bootstrap card.)
{% if form.image.data %}
<div class="card" style="width: 12rem;">
<img src="{{ form.image.data }}" class="card-img-top" alt="{{ form.alt.data }}">
<div class="card-body">
<p class="card-text">{{ form.alt.data }}</p>
</div>
</div>
{% endif %}
During POST, check whether form.image.data is empty. If it is not, write it to the database. In your routes.py, it would be:
if form.image.data.filename:
post.img = form.image.data.filename
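The POST-side decision in the last step is just a fallback: keep the stored filename unless the user actually uploaded a replacement. As a framework-free sketch (the function name is mine, not a WTForms API):

```python
def resolve_image(uploaded_filename, current_img):
    """Return the filename to store: the new upload if one was provided,
    otherwise the image already saved on the post."""
    return uploaded_filename if uploaded_filename else current_img

resolve_image("", "cat.png")         # no new upload -> keep "cat.png"
resolve_image("dog.png", "cat.png")  # new upload    -> "dog.png"
```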
How do you backup all your emails, by copying the 'Outlook' folder? (Outlook 2013)
[PC Advisor UK:] [...] All versions of Outlook use pst files, but the disk location has changed over the years. Right-click your email account on the left in Outlook and select Data File Properties. Click Advanced to see the filename. Only the pst file is essential, but copy the whole folder to back up Outlook’s settings and preferences too. Just copy it back to restore it, or click File, Open, Open Outlook Data File. Outlook can work with any pst file stored anywhere.
[Howtogeek.com:] Backing up your Outlook data file(s) is as easy as copying your .OST or .PST files over to another hard drive, cloud server, thumb drive, or some other storage medium. You remember how to locate your data files?
Open the Mail control panel and click the “Data Files” button. Click “Open File Location…” and you can then back up your data files.
[Picture not displayed here]
After following the instructions above, I arrive at:
C:\Users\[my username]\AppData\Local\Microsoft\Outlook
However, all Outlook Data Files here are .ost, and NOT .pst. In File > Open & Export > Open Outlook Data File, 'Open Outlook Data File' only works for .pst.
What is wrong here? Where's the folder with all the .pst files?
How can I backup all my Outlook email accounts simultaneously?
I'll assume you have Windows Explorer set to show extensions, since your question implies that. To find out where your .pst files are, in Outlook go to File > Account Settings > Account Settings and click on the Data Files tab. You should see a list of all your active Outlook data files and their full pathnames. Mine are in C:\Users\UserName\Documents\Outlook Files\, and I have "Archive.pst", "Outlook.pst", and "Personal Folders.pst". I have this folder set as part of my routine backup procedure and use Norton 360 for that purpose.
If you want, you can schedule a task to copy this folder to an external drive each time you start your computer. I do this to copy my user profile pictures and wallpaper to my user account folders, which then get picked up as part of my weekly backups. You can create a .cmd file in the root of your system drive (typically C:\) that looks something like this (the folder name contains a space, so the paths need quotes):
@ECHO OFF
echo y | del "C:\Users\UserName\Documents\Outlook Files\*.*" /s
xcopy "C:\Users\UserName\Documents\Outlook Files\*.*" X:\Yourpath /y
and then add this file in Task Scheduler.
Thank you. Alas, I haven't been able to try your answer because your answer revealed another problem with my Outlook 2013: http://superuser.com/q/934323/269574
Why not just export to a .pst from the menu. Should be file > export > pst.
It's actually File > Open & Export > Import/Export > Export to a File > Outlook Data File (.pst) ...
@fixer1234 It is an answer - finding a .pst file is not the same as exporting a .ost file even though the menu commands are similar to start with.
.ost files are the internal caches Outlook uses when working with an Exchange server. They are largely useless for backup purposes, as they hold only a cached subset of the mailbox.
If this doesn't apply, perhaps your .pst files are in a custom location. To look them up you can use Mail applet from the Control Panel. Look under Data Files and it should have your primary Inbox .pst listed there.
pod spec lint error: "unexpected '@' in program"
I am creating a podspec file for an open-source project I created, and I am utilizing Apple's UIImage+ImageEffects.h/.m for a blur effect, and inside there, they use the new @import Accelerate; syntax versus #import <Accelerate/Accelerate.h>. When I run pod spec lint SFSCollectionMenu.podspec, I receive the error:
ERROR | [xcodebuild] SFSCollectionMenu/UIImage+ImageEffects.h:96:1: error: unexpected '@' in program
Does the CocoaPods platform not like the new modules syntax? I'm relatively new to CocoaPods so there very well could be something I'm missing. I followed Nils Hayat's blog for creating a simple pod (which fit my scenario perfectly -- nothing outlandish), http://nilsou.com/blog/2013/07/21/how-to-open-source-objective-c-code/, and receive this error in his section about verifying the pod via lint.
Here's relevant lines from podspec file:
s.source_files = 'SFSCollectionMenuController.*{h,m}', 'SFSCircleLayout.*{h,m}', 'SFSMenuCell.*{h,m}', 'UIImage+ImageEffects.*{h,m}'
s.frameworks = 'Accelerate', 'QuartzCore', 'AVFoundation'
Thank you for any help!
I don't think Modules are turned on by default in Xcode, can you test whether adding spec.compiler_flags = "-fmodules" to turn on modules in your generated library fixes this?
Thanks for the quick reply, @orta. I added s.compiler_flags = '-fmodules' right after the s.frameworks... line in the podspec file, ran lint, and it seems to have fixed it. Now I just get a "WARN | Comments must be deleted", but I'm sure that can be handled separately. Thank you!
I think modules are the default in new projects with the new SDK
Yea, I believe you're right. Modules are ON in my project's Build Settings (started project using Xcode 5 betas where it's ON by default), that's why I was wondering why it failed.
Great, as modules are probably the future we should think about this, any chance you can write an issue on the cocoapods github?
Count and concatenate MySQL entries
Essentially, I have a table that is like this:
FirstName, LastName, Type
Mark, Jones, A
Jim, Smith, B
Joseph, Miller, A
Jim, Smith, A
Jim, Smith, C
Mark, Jones, C
What I need to do is be able to display these out in PHP/HTML, like:
Name | Total Count Per Name | All Type(s) Per Name
which would look like...
Mark Jones | 2 | A, C
Jim Smith | 3 | B, A, C
Joseph Miller | 1 | A
Jim Smith | 3 | B, A, C
Jim Smith | 3 | B, A, C
Mark Jones | 2 | A, C
I have spent time trying to create a new table based off the initial one, adding these fields, as well as looking at group_concat, array_count_values, COUNT, and DISTINCT, along with other loop/array options, and cannot figure this out.
I've found a number of answers that count and concatenate, but the problem here is I need to display each row with the total count/concatenation on each, instead of shortening it.
Please post some code with an example of what you have tried so far.
How about doing it like this?
SELECT aggregated.* FROM table_name t
LEFT JOIN (
SELECT
CONCAT(FirstName, ' ', LastName) AS Name,
COUNT(Type) AS `Total Count Per Name`,
GROUP_CONCAT(Type SEPARATOR ',') AS `All Type(s) Per Name`
FROM table_name
GROUP BY Name) AS aggregated
ON CONCAT(t.FirstName, ' ', t.LastName) = aggregated.Name
This works great. With some slight modification I got it to work:
$query = "SELECT aggregated.*
FROM `TABLE 1`
LEFT JOIN (
    SELECT CONCAT(`COL 5`, ' ', `COL 6`) AS Name,
           COUNT(`COL 20`) AS `Total Count Per Name`,
           GROUP_CONCAT(`COL 20` SEPARATOR ', ') AS `All Type(s) Per Name`,
           `COL 17` AS Image,
           CONCAT(`COL 7`, ', ', `COL 8`) AS Location,
           `COL 29` AS Profits
    FROM `TABLE 1`
    GROUP BY Name) AS aggregated
ON CONCAT(`TABLE 1`.`COL 5`, ' ', `TABLE 1`.`COL 6`) = aggregated.Name";
As you can see, I added some fields in the SELECT. I'm wondering if there is a way I can modify that to still show the unique entries for Profits and Image.
Sure, but you'd have to select them from TABLE 1 and not from aggregated. So the query becomes SELECT aggregated.*, 'TABLE 1'.'COL 17' AS Image, ... and so on.
Hmm, I still can't figure out where exactly this code is supposed to go, is it all supposed to go after ON CONCAT(t.FirstName, ' ', t.LastName) = aggregated.Name?
At the beginning of the main query. Remove them from the aggregated view. The aggregated view should only have the columns that were already there in my example.
I got it, I did SELECT 'TABLE 1'.'COL 17' AS Image, aggregated.* ... and it worked. Thanks for all your help in pointing me in the right direction and understanding!
Without an ORDER BY clause, the order the rows will be returned in is indeterminate. Nothing wrong with that, but my personal preference is to have the result be repeatable.
We can use an "inline view" (MySQL calls it a derived table) to get the count and the concatenation of the Type values for (FirstName,LastName).
And then perform a join operation to match the rows from the inline view to each row in the detail table.
SELECT CONCAT(d.FirstName,' ',d.LastName) AS name
, c.total_count_per_name
, c.all_types_per_name
FROM mytable d
JOIN ( SELECT b.FirstName
, b.LastName
, GROUP_CONCAT(DISTINCT b.Type ORDER BY b.Type) AS all_types_per_name
, COUNT(*) AS total_count_per_name
FROM mytable b
GROUP
BY b.FirstName
, b.LastName
) c
ON c.FirstName = d.FirstName
AND c.LastName = d.LastName
ORDER BY d.FirstName, d.LastName
If you have an id column or some other "sequence" column, you can use that to specify the order the rows are to be returned in; same thing in the GROUP_CONCAT function. You can omit the DISTINCT keyword from GROUP_CONCAT if you want repeated values, e.g. 'B,A,B,B,C'.
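The join-against-an-aggregated-derived-table shape above can be tried end to end with the sample data. Here is a self-contained sketch using SQLite through Python's sqlite3 (illustrative only: SQLite's GROUP_CONCAT takes the separator as a second argument where MySQL uses SEPARATOR, and SQLite does not guarantee the concatenation order):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE mytable (FirstName TEXT, LastName TEXT, Type TEXT);
    INSERT INTO mytable VALUES
        ('Mark','Jones','A'), ('Jim','Smith','B'), ('Joseph','Miller','A'),
        ('Jim','Smith','A'), ('Jim','Smith','C'), ('Mark','Jones','C');
""")

# join each detail row back to the per-name aggregate
rows = conn.execute("""
    SELECT d.FirstName || ' ' || d.LastName AS name,
           c.total_count_per_name,
           c.all_types_per_name
    FROM mytable d
    JOIN (SELECT FirstName, LastName,
                 COUNT(*) AS total_count_per_name,
                 GROUP_CONCAT(Type, ', ') AS all_types_per_name
          FROM mytable
          GROUP BY FirstName, LastName) c
      ON c.FirstName = d.FirstName AND c.LastName = d.LastName
    ORDER BY d.rowid
""").fetchall()

for name, count, types in rows:
    print(name, '|', count, '|', types)
```

Each of the six detail rows comes back with its name's total count and the concatenated types, matching the desired display.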
Same relation for two fields in one data type
can I write something like this:
type User {
primaryStory: Story! @relation(name: "userStory")
secondaryStories: [Story] @relation(name: "userStory")
}
type Story {
user: User! @relation(name: "userStory")
}
Basically what I want is to have a single relation name for both primary story and secondary stories.
This is not possible. With the name specified in an ambiguous way it is not clear what the userStory relates to.
You could either have 2 different relation names, or have a construct like the following and filter accordingly:
type User {
stories: [Story] @relation(name: "userStories")
}
type Story {
author: User! @relation(name: "userStories")
isPrimary: Boolean!
}
Yet another Classpath java
Hello, I have some trouble with the CLASSPATH and compiling in Java.
I have this test code:
package prova;
import cat.almata.daw.utils.Log;
public class ProvaUtils {
public static void main(String[] args) {
try{
System.out.println("------ Testing DAW Utils ---");
System.out.println("--- Testing Log class ---");
Log logTest = new Log("logTest.log");
System.out.println("Going to write an Info");
logTest.info("Some information to write in the log");
System.out.println("Going to write a Warning");
logTest.warning("Some warning to write in the log");
System.out.println("Going to write an Error");
logTest.error("Some error to write in the log");
System.out.println("Hola Mon");
}catch(Exception e){
//System.out.println("An Exception has been thrown, with message:" + e.getMessage());
e.printStackTrace();
}
}
}
So I have a DAWUtils.jar which contains the .class file of Log. If you uncompress that jar, you get the path cat -> almata -> daw -> utils -> Log.class.
So I need Log.class to run my test code.
My folder:
practicaJava/
├── DAWUtils.jar
├── MANIFEST.MF
├── prova
│ ├── ProvaUtils.java
When I compile and execute (using the jar):
eduardo@MiPcLinux:~/Descargas/practicaJava$ javac -classpath "/home/eduardo/Descargas/practicaJava/DAWUtils.jar" ./prova/ProvaUtils.java
eduardo@MiPcLinux:~/Descargas/practicaJava$ java prova.ProvaUtils
------ Testing DAW Utils ---
--- Testing Log class ---
Exception in thread "main" java.lang.NoClassDefFoundError: cat/almata/daw/utils/Log
at prova.ProvaUtils.main(ProvaUtils.java:13)
Caused by: java.lang.ClassNotFoundException: cat.almata.daw.utils.Log
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:335)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
... 1 more
It works OK if I uncompress the jar in that same folder:
$jar xvf DAWUtils.jar
And having:
eduardo@MiPcLinux:~/Descargas$ tree practicaJava/
practicaJava/
├── cat
│ └── almata
│ └── daw
│ └── utils
│ └── Log.class
├── DAWUtils.jar
├── MANIFEST.MF
├── META-INF
│ └── MANIFEST.MF
└── prova
└── ProvaUtils.java
I'm running all the commands from my practicaJava folder. It shouldn't have to work that way: I think that if I have the jar file, it should load Log.class from inside it and work without uncompressing it. So I have Log.class both uncompressed and inside DAWUtils.jar.
I also need to make a jar (including the .jar in the MANIFEST.MF), and it doesn't work for the same reason:
$jar cvfm ProvaUtils.jar MANIFEST.MF prova/ DAWUtils.jar
With Manifest.mf:
Manifest-Version: 1.0
Created-By: 1.5.0_06 (BEA Systems, Inc.)
Main-Class: prova.ProvaUtils
Class-Path: /home/eduardo/Descargas/practicaJava/DAWUtils.jar
It seems I need to have DAWUtils.jar uncompressed for it to work. As you can see, I'm on Ubuntu Linux.
Any help would be appreciated.
As requested, the exact command line and error:
eduardo@MiPcLinux:~/Descargas/practicaJava$ javac -classpath "/home/eduardo/Descargas/practicaJava/prova/DAWUtils.jar" ./prova/ProvaUtils.java
eduardo@MiPcLinux:~/Descargas/practicaJava$ java prova.ProvaUtils
------ Testing DAW Utils ---
--- Testing Log class ---
Exception in thread "main" java.lang.NoClassDefFoundError: cat/almata/daw/utils/Log
at prova.ProvaUtils.main(ProvaUtils.java:13)
Caused by: java.lang.ClassNotFoundException: cat.almata.daw.utils.Log
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:335)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
... 1 more
Thanks
You'll need to specify the same classpath at runtime. Are you using an IDE? It will sort all of this out for you.
You should use the same classpath for both the javac and the java.
@Michael Certainly you can use a JAR name in a CLASSPATH. For about 15 years that was the only way to get a JAR into a CLASSPATH.
No, I can't use an IDE. It's an exercise and I have to use the jar in the classpath on the command line. Thanks
@EJP Fair enough. Edited my comment then.
@RealSkeptic It doesn't work either... if I don't have the jar uncompressed, I get the error: --- Testing Log class ---
Exception in thread "main" java.lang.NoClassDefFoundError: cat/almata/daw/utils/Log
[edit] your question and add the exact command line you used for java with the classpath, and the error. Never put code or errors in comments.
Done, it's under "As requested" at the end of the question.
The solution was to refer to DAWUtils.jar with .:DAWUtils.jar, i.e. putting both the current directory and the jar on the classpath.
So:
$ javac -classpath .:DAWUtils.jar ./prova/ProvaUtils.java
Works and then:
$ java -classpath .:DAWUtils.jar prova.ProvaUtils
To execute.
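One detail about the Manifest-based attempt above: per the JAR specification, Class-Path entries are resolved as URLs relative to the location of the JAR itself, so an absolute filesystem path like the one in the manifest shown is the fragile form. If DAWUtils.jar sits next to ProvaUtils.jar, a relative entry is the portable choice (a sketch, not tested against this exact setup):

```
Manifest-Version: 1.0
Main-Class: prova.ProvaUtils
Class-Path: DAWUtils.jar
```

With that manifest, java -jar ProvaUtils.jar should find Log without any -classpath flag.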
An answer should not be used to ask follow-up questions. Create a new question.
Creating a complex data structure by parsing an output file
I'm looking for some advice on how to create a data structure by parsing a file.
This is the list I have in my file.
'01bpar( 2)= 0.23103878E-01 half_life= 0.3000133E+02 relax_time= 0.4328278E+02',
'01bpar( 3)= 0.00000000E+00',
'02epar( 1)= 0.49998963E+02',
'02epar( 2)= 0.23103878E-01 half_life= 0.3000133E+02 relax_time= 0.4328278E+02',
'02epar( 3)= 0.00000000E+00',
'02epar( 4)= 0.17862340E-01 half_life= 0.3880495E+02 relax_time= 0.5598371E+02',
'02bpar( 1)= 0.49998962E+02',
'02bpar( 2)= 0.23103878E-01 half_life= 0.3000133E+02 relax_time= 0.4328278E+02',
What I need to do is construct a data structure which chould look like this:
http://img11.imageshack.us/img11/7645/datastructure.gif
(couldn't post it inline because of the new-user restriction)
I've managed to get all the regexp filters to get what is needed, but i fail to construct the structure.
Ideas?
What exactly do you mean by "i fail to construct the structure"? Can we see what you tried?
What I have is a mess, without a clear understanding of what I did. If you think it would contribute even though clearer answers have already been provided, I will post it.
Consider using a dict of dicts.
#!/usr/bin/env python
import re
import pprint
raw = """'01bpar( 2)= 0.23103878E-01 half_life= 0.3000133E+02 relax_time= 0.4328278E+02',
'01bpar( 3)= 0.00000000E+00',
'02epar( 1)= 0.49998963E+02',
'02epar( 2)= 0.23103878E-01 half_life= 0.3000133E+02 relax_time= 0.4328278E+02',
'02epar( 3)= 0.00000000E+00',
'02epar( 4)= 0.17862340E-01 half_life= 0.3880495E+02 relax_time= 0.5598371E+02',
'02bpar( 1)= 0.49998962E+02',
'02bpar( 2)= 0.23103878E-01 half_life= 0.3000133E+02 relax_time= 0.4328278E+02',"""
datastruct = {}
pattern = re.compile(r"""\D(?P<digits>\d+)(?P<field>[eb]par)[^=]+=\D+(?P<number>\d+\.\d+E[+-]\d+)""")
for line in raw.splitlines():
result = pattern.search(line)
parts = result.groupdict()
if not parts['digits'] in datastruct:
datastruct[parts['digits']] = {'epar':[], 'bpar':[]}
datastruct[parts['digits']][parts['field']].append(parts['number'])
pprint.pprint(datastruct, depth=4)
Produces:
{'01': {'bpar': ['0.23103878E-01', '0.00000000E+00'], 'epar': []},
'02': {'bpar': ['0.49998962E+02', '0.23103878E-01'],
'epar': ['0.49998963E+02',
'0.23103878E-01',
'0.00000000E+00',
'0.17862340E-01']}}
Revised version in light of comments:
pattern = re.compile(r"""\D(?P<digits>\d+)(?P<field>[eb]par)[^=]+=\D+(?P<number>\d+\.\d+E[+-]\d+)""")
default = lambda : dict((('epar',[]), ('bpar',[])))
datastruct = defaultdict( default)
for line in raw.splitlines():
result = pattern.search(line)
parts = result.groupdict()
datastruct[parts['digits']][parts['field']].append(parts['number'])
pprint.pprint(datastruct.items())
which produces:
[('02',
{'bpar': ['0.49998962E+02', '0.23103878E-01'],
'epar': ['0.49998963E+02',
'0.23103878E-01',
'0.00000000E+00',
'0.17862340E-01']}),
('01', {'bpar': ['0.23103878E-01', '0.00000000E+00'], 'epar': []})]
dict.has_key(key) is deprecated now, in favor of key in dict. The pattern you use here is a prime candidate for using a defaultdict (from the collections module).
Is this giving the right solution? Values for ['02']['epar'] look incorrect.
Doh. I was more focussed on the data structure and muffed the re.
Thank you for your time and answer. I have to study it now, specially the regexp pattern.
It's theoretically possible to have pyparsing create the whole structure using parse actions, but if you just name the various fields as I have below, building up the structure is not too bad. And if you want to convert to using RE's, this example should give you a start on how things might look:
source = """\
'01bpar( 2)= 0.23103878E-01 half_life= 0.3000133E+02 relax_time= 0.4328278E+02',
'01bpar( 3)= 0.00000000E+00',
'02epar( 1)= 0.49998963E+02',
'02epar( 2)= 0.23103878E-01 half_life= 0.3000133E+02 relax_time= 0.4328278E+02',
'02epar( 3)= 0.00000000E+00',
'02epar( 4)= 0.17862340E-01 half_life= 0.3880495E+02 relax_time= 0.5598371E+02',
'02bpar( 1)= 0.49998962E+02',
'02bpar( 2)= 0.23103878E-01 half_life= 0.3000133E+02 relax_time= 0.4328278E+02', """
from pyparsing import Literal, Regex, Word, alphas, nums, oneOf, OneOrMore, quotedString, removeQuotes
EQ = Literal('=').suppress()
scinotationnum = Regex(r'\d\.\d+E[+-]\d+')
dataname = Word(alphas+'_')
key = Word(nums,exact=2) + oneOf("bpar epar")
index = '(' + Word(nums) + ')'
keyedValue = key + EQ + scinotationnum
# define an item in the source - suppress values with keys, just want the unkeyed ones
item = key('key') + index + EQ + OneOrMore(keyedValue.suppress() | scinotationnum)('data')
# initialize summary structure
from collections import defaultdict
results = defaultdict(lambda : {'epar':[], 'bpar':[]})
# extract quoted strings from list
quotedString.setParseAction(removeQuotes)
for raw in quotedString.searchString(source):
parts = item.parseString(raw[0])
num,par = parts.key
results[num][par].extend(parts.data)
# dump out results, or do whatever
from pprint import pprint
pprint(dict(results.iteritems()))
Prints:
{'01': {'bpar': ['0.23103878E-01', '0.00000000E+00'], 'epar': []},
'02': {'bpar': ['0.49998962E+02', '0.23103878E-01'],
'epar': ['0.49998963E+02',
'0.23103878E-01',
'0.00000000E+00',
'0.17862340E-01']}}
I have chosen to study the variant which doesn't use pyparsing (since I haven't had any contact with it until now), as it seems more familiar. But I am still grateful for your effort.
No problem, pyparsing is just one tool out there, one which I perhaps overuse just out of familiarity. People also use pyparsing sometimes as a way to easily crank out a prototype, and then for production (for higher speed, mostly, IF required), will convert to spark/ply/yapps.
Your top-level structure is positional, so it's a perfect choice for a list. Since lists can hold arbitrary items, a named tuple is perfect. Each item in the tuple can hold a list with its elements.
So, your code should look something like this pseudocode:
from collections import namedtuple
data = []
newTuple = namedtuple('stuff', ['epar','bpar'])
for line in theFile.readlines():
eparVals = regexToGetThemFromString()
bparVals = regexToGetThemFromString()
t = newTuple(eparVals, bparVals)
data.append(t)
You said you could already loop over the file, and had various regex to get the data, so I didn't bother adding all the details.
Web component Light DOM slot equivalent for injecting dynamic content into specific element
I'm struggling to do something that seems like it should be simple. I am creating my first web component, and it cannot use the shadow DOM. I need it to receive some dynamic content and inject it into the correct div, like so:
<component>
<div id="one"><!-- Dynamic Content ends up here --></div>
<div id="two">...</div>
</component>
This is simple using slots in the shadow DOM; why is it proving so difficult in the light DOM?
Any help is appreciated.
Receive from where? <slot> only works in the shadow DOM. Please add a minimal reproducible example; a Stack Overflow Snippet will let readers execute your code with one click, and help them create answers with one click. Thank you.
Your problem is essentially about using slots without the Shadow DOM. Unfortunately, there is no out-of-the-box solution for this. You can use a combination of the <template> tag and component props:
<component>
<template>
<div id="one"><!-- Dynamic Content ends up here --></div>
<div id="two">...</div>
</template>
</component>
Then, for each instance of <component />, you clone the template using the cloneNode method and append the resulting node as a child of your custom component. You can augment the DOM however you like.
The second part of the equation is adding the dynamic content to div#one and div#two. For this, you should declare two props on the component, and the caller will pass the content. The web component then appends that content via a DocumentFragment, innerHTML, or other DOM APIs. This is roughly similar to what vDOM-based frameworks like React and Vue do; the only difference is that instead of passing real DOM, they pass a virtual tree as children or a render prop.
So, without a proper framework, this is a lot of work. I would recommend you consider using Shadow DOM and keep the Shadow Root open.
How to generate multiple xml files based on rowcount
I am able to successfully generate a large XML file for a million records using the code below.
cmd.CommandText = "SELECT PARTNER_NO FROM T1 WHERE YEAR LIKE '%2011-2012%'";
XmlWriter myWriter = XmlWriter.Create("C:/Test/BookInfo.xml");
myWriter.WriteStartDocument(true);
using(OracleDataReader reader = cmd.ExecuteReader(CommandBehavior.CloseConnection))
{
reader.FetchSize = reader.RowSize * 5000;
myWriter.WriteStartElement("master_table");
while(reader.Read())
{
myWriter.WriteStartElement("partner");
myWriter.WriteElementString("partner_no", reader[0].ToString());
myWriter.WriteElementString("id","0008");
myWriter.WriteEndElement();
}
}
myWriter.WriteEndDocument();
myWriter.Flush();
myWriter.Close();
cmd.Dispose();
This is how my xml looks :
I now have a new requirement of generating multiple xml files of 10,000 records each based on the rowcount.
Any pointers on how I should proceed? This needs to be done in C#.
You can do it in a single SQL call:
SQL> SELECT XMLSERIALIZE(CONTENT xmlelement("MASTER_TABLE",
2 xmlagg(xmlelement("PARTNER",
3 XMLELEMENT("PARTNER_NO",
4 partner_no),
5 XMLELEMENT("ID", '0008')
6 )
7 ))
8 INDENT) xmltxt
9 FROM t1
10 GROUP BY CEIL(ROWNUM/2)-- replace 2 by 10000
11 ;
XMLTXT
-------------------------------------------------------------------------------
<MASTER_TABLE>
<PARTNER>
<PARTNER_NO>00001</PARTNER_NO>
<ID>0008</ID>
</PARTNER>
<PARTNER>
<PARTNER_NO>00034</PARTNER_NO>
<ID>0008</ID>
</PARTNER>
</MASTER_TABLE>
<MASTER_TABLE>
<PARTNER>
<PARTNER_NO>00046</PARTNER_NO>
<ID>0008</ID>
</PARTNER>
<PARTNER>
<PARTNER_NO>00052</PARTNER_NO>
<ID>0008</ID>
</PARTNER>
</MASTER_TABLE>
For further reading:
XMLELEMENT
XMLAGG
XMLSERIALIZE
Thanks Vincent. But this is the first time I am working on XML, and I need a bit of C# code for generating multiple XML files on click of a button.
I'm not familiar with C#, but in your original code you would close the file every 10k rows inside the loop, then open another one.
What should the OUT parameter be if this is to be written in a stored procedure? I am getting ERROR: line 11, col 24, ending_line 11, ending_col 29, Found 'INDENT', Expecting: ). And how should VERSION='1.0', encoding="UTF-8", STANDALONE=YES be shown in the XML file?
This one's not exactly the solution I was looking for, but even this works for me. Thanks.
Mapping an action name to a class type in C++
I want to link a bunch of 'action' strings to single 'parent' string but there could be multiple strings that own 'action' strings.
map<string, string> ctType;
ctType.insert(pair<string, string>("1", "default"));
ctType.insert(pair<string, string>("2", "register"));
ctType.insert(pair<string, string>("2", "addaddress"));
ctType.insert(pair<string, string>("3", "request"));
What is the best way to complete this?
Have a look at std::multimap...
"What is the best way to complete this?" -- This can't be answered in general, because it depends on your requirements (which you didn't state). Even then, there will be many different opinions.
You could either (1) use std::multimap, or (2) use a map with containers as its elements. Variant (1) is rather short, but has the drawback that it is harder to control how the "multiple entries" behave in terms of, for example, duplicates; and it's probably harder to implement a "nested loop" over the keys and each of their values. Decide on your own:
#include <iostream>
#include <map>
#include <set>
#include <string>

using std::cout;
using std::endl;

int main() {
std::multimap<int, std::string> m;
m.insert({1,"First0"});
m.insert({1,"First0"});
m.insert({1,"First1"});
m.insert({3,"Third"});
for (auto& p : m) {
auto key = p.first;
auto val = p.second;
cout << key << ":" << val << endl;
}
std::map<int,std::set<std::string>> m2;
m2[1].insert("First0");
m2[1].insert("First0");
m2[1].insert("First1");
m2[3].insert("Third");
for (auto& p : m2) {
auto key = p.first;
auto set = p.second;
cout << key << ":" << endl;
for (auto &val : set) {
cout << " " << val << endl;
}
}
}
Output:
1:First0
1:First0
1:First1
3:Third
1:
First0
First1
3:
Third
How would this work if std::multimap<std::string, std::string> m;
Why does time.Now().UnixNano() return the same result after an IO operation?
I use time.Now().UnixNano() to calculate the execution time for some part of my code, but I find an interesting thing. The elapsed time is sometimes zero after an IO operation! What's wrong with it?
The code is running in Go 1.11, and use the standard library "time". Redis library is "github.com/mediocregopher/radix.v2/redis". The redis server version is 3.2. I'm running this on Windows, with VSCode Editor.
isGatherTimeStat = false
if rand.Intn(100) < globalConfig.TimeStatProbability { // Here I set TimeStatProbability 100
isGatherTimeStat = true
}
if isGatherTimeStat {
timestampNano = time.Now()
}
globalLogger.Info("time %d", time.Now().UnixNano())
resp := t.redisConn.Cmd("llen", "log_system")
globalLogger.Info("time %d", time.Now().UnixNano())
if isGatherTimeStat {
currentTimeStat.time = time.Since(timestampNano).Nanoseconds()
currentTimeStat.name = "redis_llen"
globalLogger.Info("redis_llen time sub == %d", currentTimeStat.time)
select {
case t.chTimeStat <- currentTimeStat:
default:
}
}
Here are some logs:
[INFO ][2019-07-31][14:47:53] time<PHONE_NUMBER>269444200
[INFO ][2019-07-31][14:47:53] time<PHONE_NUMBER>269444200
[INFO ][2019-07-31][14:47:53] redis_llen time sub == 0
[INFO ][2019-07-31][14:47:58] time<PHONE_NUMBER>267691700
[INFO ][2019-07-31][14:47:58] time<PHONE_NUMBER>270689300
[INFO ][2019-07-31][14:47:58] redis_llen time sub == 2997600
[INFO ][2019-07-31][14:48:03] time<PHONE_NUMBER>268195600
[INFO ][2019-07-31][14:48:03] time<PHONE_NUMBER>268195600
[INFO ][2019-07-31][14:48:03] redis_llen time sub == 0
[INFO ][2019-07-31][14:48:08] time<PHONE_NUMBER>267631100
[INFO ][2019-07-31][14:48:08] time<PHONE_NUMBER>267631100
[INFO ][2019-07-31][14:48:08] redis_llen time sub == 0
Which hardware architecture are you running on?
Are you running this on Windows?
@icza Yes, it's running on Windows.
@icza In the VSCode Editor actually.
There's nothing wrong with your code. On Windows the resolution of the system time is often coarser than 1 ms, meaning that if you query the actual time twice within one update interval, you often get the same value. Your operation sometimes yields only t = 2997600 ns ≈ 3 ms, which could explain this. Blame it on Windows.
@Flimzy 64bit version of Windows 10.
Related: https://stackoverflow.com/q/21262821/13860 https://stackoverflow.com/q/3744032/13860
@icza You are right! I run it on Linux, and there is no zero result. Thank you!
There's nothing wrong with your code. On Windows, the system time is often only updated once every 10-15 ms or so, which means if you query the current time twice within this period, you get the same value.
Your operation sometimes yields t = 2997600ns = 3ms, which could explain this. Blame it on Windows.
Related questions:
How precise is Go's time, really?
How to determine the current Windows timer resolution?
Measuring time differences using System.currentTimeMillis()
The system time has 100 ns precision, but it's only updated by the system timer interrupt every 15 ms or so. The timer resolution can be lowered to a millisecond or less, but this affects the scheduler and causes a significant increase in power consumption. If we need to measure precise intervals, as opposed to logging a system timestamp, we use the system performance counter instead, i.e. whatever way Go exposes WINAPI QueryPerformanceCounter.
time.Now() resolution under Windows has been improved in Go 1.16, see #8687 and CL #248699.
The timer resolution should now be around ~500 nanoseconds.
Test program:
package main
import (
"fmt"
"time"
)
func timediff() int64 {
t0 := time.Now().UnixNano()
for {
t := time.Now().UnixNano()
if t != t0 {
return t - t0
}
}
}
func main() {
var ds []int64
for i := 0; i < 10; i++ {
ds = append(ds, timediff())
}
fmt.Printf("%v nanoseconds\n", ds)
}
Test output:
[527400 39200 8400<PHONE_NUMBER>0 16900 8300<PHONE_NUMBER> 34100] nanoseconds
Highlighting strings between [ and ] in lstlistings
I want to use lstlisting to present a file like this:
#comment
[keyword]
key1=value1
key2=value with RegEx
[keyword:NewKeyword]
key1 = value1
key2=value with RegEx
and this is my language definition:
\lstdefinelanguage{myLang}{
sensitive=false,
morestring=[b]",
keywords={=},
morecomment=[l]{\#},
}
How can I extract [keyword] and [keyword:NewKeyword]?
Note that "highlighting" and "extracting" can mean two completely different things. I'm assuming you mean the former - highlighting.
Welcome to TeX.SE.
If you want to highlight the words within the [...], you can use moredelim (as with myLangStyleA), which discards the delimiters and applies the given style. If you want to keep the brackets, you can use morecomment (as with myLangStyleB).
\documentclass[border=2pt]{standalone}
\usepackage{xcolor}
\usepackage{listings}
\usepackage{filecontents}% only need to provide the file foo.mylang
\begin{filecontents*}{foo.mylang}% provide file foo.mylang
#comment
[keyword]
key1=value1
key2=value with RegEx
[keyword:NewKeyword]
key1 = value1
key2=value with RegEx
\end{filecontents*}
\lstdefinelanguage{myLang}{
sensitive=false,
morestring=[b]",
keywords={=},
morecomment=[l]{\#},
}
\lstdefinestyle{myLangStyleA}{
language=myLang,
moredelim=[is][\color{blue}\ttfamily]{[}{]}
}
\lstdefinestyle{myLangStyleB}{
language=myLang,
morecomment=[s][\color{red}\ttfamily]{[}{]},
}
\begin{document}
Using \textbf{style=myLangStyleA}:
\lstinputlisting[style=myLangStyleA]{foo.mylang}
Using \textbf{style=myLangStyleB}:
\lstinputlisting[style=myLangStyleB]{foo.mylang}
\end{document}
I think the OP wants to keep the brackets: the settings seem to be those of configuration files.
@egreg: Ok. Have updated the solution that provides both options.
grok vs. django comparison
What are the smashing (pun intended) features of grok that makes it better than django?
how do I know when my project needs grok+zope, or it can just be developed with django ?
Zope was the first object publishing framework evah, and the Zope community has a long experience with Doing Things The Right Way. Zope 2 was the first attempt, Zope 3 was the next attempt, and we are now into the third generation of web frameworks, which includes Grok, BFG and Bobo.
Grok is massive, and has even more modules available that don't come when you install the base (and it's in the process of reducing the number of required modules as well, so the footprint gets smaller). BFG and Bobo go the other way around, and are minimalistic frameworks but with easy access to the Zope Toolkit and all the functionalities of Zope.
And although Django is making many of the same mistakes Zope2 did, they are also fixing them much faster, so I completely expect much of this discussion to be moot in five years, because I expect every single Python web framework to use WSGI+WebOb+Repoze+Deliverance+Buildout as a base by then. But even then I'd go for frameworks where I can use the Zope Component Architecture and ZODB, but that includes not only the ones made by the Zope community, but also for example Turbogears. And maybe it will include Django too by then, who knows... :-)
Depending on what the project requirements are I would today go with either Plone (if they need CMS), Grok or BFG (depending on the involved developers, and the complexity of the task and the budget). This is of course partly depending on my large experience with the Zope technologies and my small experience with Django, but mostly because I can use ZTK and ZODB in Grok and BFG.
YMMV, etc, blahblah.
Zope Toolkit
Brandons talk on the Zope Component Architecture (Video from PyCOn, Slides from PloneConf)
BFG
Bobo
Grok is basically all the power of zope in a way easier to use package. So you do get all the luxury of a real python object database (though you can use an sql backend). And I assume you know about the adapters/utilities/views of the so-called "zope component architecture". Those allow you to make a robust application. Especially handy if you later need to selectively customize it. And security is traditionally a zope (and thus grok) strong point. Development and deployment are handled fully with eggs (and buildout): in my experience this is a robust and reliable and repeatable and comfortable way.
If you have an application that can work with straight sql tables without needing much selective customizing afterwards: nothing wrong with django. You'll have to do much security yourself, so that needs a keen eye. There's much less of a framework behind it (an ORM and a url mapper), so your python will feel more "pure and simple". This also means you need to do more yourself.
There's nothing stopping you from selectively using parts of grok: http://pypi.python.org/pypi/grokcore.component for instance is very much the core. Pretty well isolated, so you can use it without buying into the whole zope stack. I'm pretty sure you can use it in django. grokcore / the zope component architecture is just python code. This gets you the adapters/interfaces/utilities. I don't know what you're building, so you'll have to experiment.
One thing hugely in favour of grok that I'd suggest trying out: zope's ZODB object database. A good ORM (and django's is pretty ok) helps a lot taking the pain out of SQL databases, but a real object database is just plain luxury :-)
I don't think any of the frameworks are intended to have any 'features' that make one 'better' than the other, or 'needed' in certain circumstances. Rather, the difference between Django and Grok - or Pylons, or Turbogears - is really one of approach. You may find the approach of Grok to your liking, or you may prefer one of the others. I doubt there is much you can achieve in one of them that you can't in any of the others.
What impresses me is the size of grok+zope compared to django. So I wonder: what if I need something that is in there, but at the moment I don't know about it?
Then write it for Django - it's open source.
IdentityServer4 FindByNameAsync always null
I am using IdentityServer4 v4.1.1 in a .Net Core 3.1 project and having issues with FindByNameAsync - it always returns null.
This works and returns the user:
var user = await userStore.Context.Users.FirstOrDefaultAsync(u => u.NormalizedUserName == userManager.NormalizeName(model.Username), default(CancellationToken));
This does not and returns null:
var user = await userStore.FindByNameAsync(userManager.NormalizeName(model.Username));
How is this possible?
Edit (in an effort to show my research):
The reason I am really confused about this is that I found the following in the .Net Core source: https://github.com/dotnet/aspnetcore/blob/main/src/Identity/EntityFrameworkCore/src/UserStore.cs#L255
It seems like both of these statements should do the exact same thing. I am hoping that someone can explain why they do not.
Can you confirm that the normalized name supplied in u.NormalizedUserName is the same as the username searched by FindByNameAsync?
Yes, I literally replaced one line with the other.
@Carson, I work with bbailes and I can confirm that the username is exactly the same. We've used the user found by the first query to test the FindByNameAsync method and it still fails to find the user.
Understood, my question is in relation to the values returned by the term "NormalizedName" u.NormalizedUserName would be, I presume, a field in your schema. Are you certain that the FindByNameAsync function is searching that column and not a different one. As I don't see your schema, I'm verifying with you that there isn't a "NormalizedUserName" column and a "Username" column, where the data searched could be the wrong field.
@Carson, there is both a UserName and NormalizedUserName. The UserName is all lowercase and the NormalizedUserName is populated with userManager.NormalizeName(model.Username). I've tested both columns, including Normalizing the model.UserName as well as not Normalizing it before sending it to FindByNameAsync. bbailes also looked at the source code for IdentityServer4 4.1.1 and it shows that the two queries above should be equivalent and that FindByNameAsync uses the NormalizedUserName column.
@Carson NormalizedUserName is part of the Identity standard for .Net Core, not some custom field of ours. See https://learn.microsoft.com/en-us/dotnet/api/microsoft.aspnetcore.identity.identityuser-1.normalizedusername?view=aspnetcore-5.0
@Carson FindByNameAsync takes a parameter called normalizedUsername. See https://learn.microsoft.com/en-us/dotnet/api/microsoft.aspnetcore.identity.iuserstore-1.findbynameasync?view=aspnetcore-5.0
I work with @bbales and I finally have figured it out.
We have a class called MultiTenantIdentityUserStore, which we use to override a few things to make the app multi-tenant. It wasn't obvious this was being used, unfortunately.
In it, we have a property:
public TTenantKey TenantId { get; set; }
This is used to override the Users public property:
public override IQueryable<TUser> Users => base.Users.Where(u => u.TenantId.Equals(TenantId));
Once I set the UserStore.TenantId, calling UserStore.Users.ToList() went from having 0 results to having 13, as expected. For comparison, UserStore.Context.Users.ToList() has 22 records, since we are migrating users from an old version of IdentityServer4 to the current version of 4.1.1.
Now using the 2nd query above returns a user:
var user = await userStore.FindByNameAsync(userManager.NormalizeName(model.Username));
As an aside, this also makes the UserManager and SignInManager.UserManager work as expected when using the same methods. (If you didn't know, UserManager and SignInManager.UserManager are the same object.) This is because the UserManager uses the UserStore to access the Users. When you have dependency injection supply those objects, it's all correctly referencing each other.
It may seem as if this Answer is only for our instance, but given how many unresolved Questions with the same stated problem are on this Stack, this might be a widespread problem people aren't catching, or at least aren't updating their Question after figuring it out. I've probably read at least a dozen that don't have Answers. I'm wondering how many of them have our same issue.
text on back of spinning image
I'm trying to code a gallery page which contains nine pictures in a three-by-three grid. When someone clicks on an image, I want the image to spin on the Y-axis and display white text on a black background. The problem is that I don't know how to spin the image. I think it has something to do with CSS?
html:
<div id="gallery" class="gallery-section section-container type3">
<div class="container">
<div class="indent-10">
<div class="row">
<div class="col-md-4">
<div class="img-wrapper">
<img src="img/gallery-img-1.jpg" alt="gallery-img-1"/>
</div>
<div class="text-wrapper">
<div class="header-wrapper">
<h3>
Rushed Flats
</h3>
</div>
<div class="para-wrapper">
<p>
architecto beatae vitae dicta sunt explicabo. Nam libero tempore, cum soluta nobis est
eligendi optio cumque nihil impedit quo minus id quod maxime placeat.
</p>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
Css:
.gallery-section{
.indent-10{
max-width: 85%;
margin: 0 auto;
}
.img-wrapper{
margin-bottom: 2px;
img{
}
}
.text-wrapper{
.header-wrapper{
h3{
}
}
.para-wrapper{
p{
}
}
}
}
https://davidwalsh.name/demo/css-flip.php
The question does not contain a specific problem that needs fixing, hence it is too broad. You have neither the JavaScript nor the CSS side implemented.
the problem is that I don't know how to spin the image, nothing seems to be working
Visit the link I've provided below. It's very easy to do it. ;)
Visit the link below and you will get to know about CSS 3D transitions, animations, etc.
https://desandro.github.io/3dtransforms/docs/card-flip.html
It will definitely help you with the flip transition you want, and there are a lot more transitions you may want to learn.
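For reference, the core of the linked card-flip technique is just a CSS transform plus a transition. A minimal sketch, assuming a hypothetical `.flipped` class that you toggle from a click handler (everything except `.img-wrapper`, `.text-wrapper`, and `.col-md-4` is an assumption, not from the original markup):

```css
/* Minimal Y-axis flip sketch. Assumes a .flipped class toggled on click. */
.col-md-4 {
  perspective: 800px;            /* gives the rotation visible depth */
  position: relative;
}
.img-wrapper,
.text-wrapper {
  transition: transform 0.6s;
  backface-visibility: hidden;   /* hide whichever face points away */
}
.text-wrapper {
  position: absolute;
  top: 0;
  background: #000;
  color: #fff;
  transform: rotateY(180deg);    /* text starts facing away */
}
.flipped .img-wrapper  { transform: rotateY(180deg); }
.flipped .text-wrapper { transform: rotateY(360deg); }
```

The toggle itself could be as simple as `el.classList.toggle('flipped')` in a click listener on each grid cell.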
Thanks for the help!
What is the maximum height of USAF fighter pilot?
In response to this comment:
Three days later I happened to meet a commercial pilot at school who has flown
F4s during Vietnam. He gave me the worst news of my life. My 6'3" ass was too
tall to ever fly a fighter.
Is this actually true? A cursory search seems to contradict what he was
told:
To become a Jet fighter Navigator, you have to have [...] a vertical standing
height of between 64 inches (5 foot 4 inches) and 77 inches (6 foot 5 inches)
tall.
It would appear his height is within range. Was he told wrong or am I missing
something?
Based on the answers below and the fact that the commenter was told by a Vietnam-era pilot, I'd say the rules have likely changed since the mid- to late '60s.
Ron's answer is the correct answer, but just to ease your fears, I literally fly with Navy guys that are at least 6'3.
To become an Air Force pilot, you have to be a commissioned officer, and there are a few ways you can do that. First, you can go through the Air Force Academy, which is probably the most common. Another way is to be in an ROTC program through your college of choice. The last way I know of is to graduate college and join the Air Force through Officer Candidate School (OCS).
Regardless of the method you choose, there are certain requirements you must meet:
Be a U.S. Citizen
Have any 4 year college degree or be within 365 days of attaining it
Minimum 2.5 GPA (although I'd be surprised if they took you that low)
Under the age of 28 by the board convening date
Have a standing height of 64-77 inches and a sitting height of 34-40 inches
Have no history of hay fever, asthma, or allergies after the age of 12
Meet USAF weight and physical conditioning requirements
Normal color vision
Meet refraction, accommodation, and astigmatism requirements
Distance vision cannot exceed 20/20 uncorrected or must be corrected to 20/20 or better
Near vision cannot exceed 20/40 uncorrected or must be corrected to 20/20 or better
If you've met those requirements you can then go through the Air Force Officer Qualifying Test (AFOQT) and through Military Entrance Processing Station (MEPS) for your intellectual and physical evaluation. If you pass those tests, you may be selected for the Officer Candidate School (if not joining through ROTC or Academy).
After that you go into undergraduate pilot training (UPT) where you spend a year learning to fly through academic and hands-on training. Depending on how you perform at UPT you will receive a seat assignment. Higher scores are selected for fighter training while lower scores move towards transport aircraft. You can voice your opinion on what you want to fly, but ultimately the decision is driven by the needs of the Air Force at the time you graduate UPT.
After UPT, you will move to Advanced Flight Training (AFT) which is between 6 months and a year long depending on the aircraft you've been assigned. After AFT you'll be assigned a squadron and location.
If you don't meet height requirements
There is a waiver process, although getting a waiver for height requirements is extremely rare (if not completely unheard of). The issue is that the ejection systems are designed for a specific height. This process starts by appealing a medical review board which you are allowed to write a letter to argue your position, however even with a waiver you would probably not be assigned a fighter aircraft. You would probably be assigned an aircraft that does not have an ejection system like transport or mid-air refueling aircraft. As RhinoDriver mentions in the comments, the T6 training aircraft is also equipped with an ejection seat and if you can't fly the trainer, you can't progress on.
The only other alternative to be a US Air Force pilot outside of height requirements is to be an unmanned aircraft pilot.
Source 1: U.S. Air Force ROTC Website
Source 2: Department of Defense Article
All USAF primary trainer aircraft are equipped with ejection seats. If you do not meet the physical requirements to fly in the T6 ejection seat then you'd never fly anything, including a transport.
Standing 64"-77"; sitting 34"-40"
More information: http://google.com/search?q=usaf+fighter+pilot+height+restrictions
Check with your recruiter (they are all skilled salesmen - as they get commission) - get it signed in writing that you are not too tall. The cockpits of some jets are just too tight for some - at 6'1-1/2 I was too tall for some. Helicopters - no problem - and those old Vietnam-era ones are fun. I don't mean to rag on recruiters - they can help you get into one of the service academies with some pull - they are usually upper-level ranking enlisted (like master, gunny, etc.). The two most enjoyable to ever have control over are the Harrier jet and the Huey helicopter.
Profiling / Code optimizing tool
Please let me know which tool (GNU or 3rd party tool) is the best we can make use for profiling and optimizing our code. Is gprof an effective tool? Do we have dtrace tool ported in Linux?
Usually profiling tools are language specific. What language are you targeting? What kind of application (web, desktop)?
Ours is a web app developed on Red Hat Linux OS using C++
Profiling code? What code? What language? What? Why? How? Who? Where?
You're not alone in conflating the terms "profiling" and "optimizing", but they're really very different. As different as weighing a book versus reading it.
As far as gprof goes, here are some issues.
Among profilers, the most useful are the ones that
Sample the whole call stack (not just the program counter) or at least as much of it as contains your code.
Samples on wall-clock time (not just CPU time). If you're losing time in I/O or other blocking calls, a CPU-only profiler will simply not see it.
Tells you by line-of-code (not just by function) the percent of stack samples containing that line. That's important because any such line of code that you could eliminate would save you that percent of overall time, and you don't have to eyeball functions to guess where it is.
A good profiler for linux that does this is Zoom. I'm sure there are others.
(Don't get confused about what matters. Efficiency and/or timing accuracy of a profiler is not important for helping you locate speed bugs. What's important is that it draws your attention to the right things.)
Personally, I use the random-pausing method, because I find it's the most effective.
Not only is it simple and requires no investment, but it finds speedup opportunities that are not localized to particular routines or lines of code, as in this example.
This is reflected in the speedup factors that can be achieved.
gprof is better than nothing. But not much. It estimates time spent not only in a function, but also in all of the functions called by the function - but beware it is an estimate, not a direct measurement. It does not make the distinction that some two callers of the same subroutine may have widely differing times spent inside it, per call. To do better than that you need a real call graph profiler, one that looks at several levels of the stack on a timer tick.
dtrace is good for what it does.
If you are doing low level performance optimization on x86, you should consider Intel's Vtune tool. Not only does it provide the best access I am aware of to low level performance measurement hardware on the chip, the so-called EMON (Event Monitoring) system (some of which I designed), but Vtune also has some pretty good high level tools. Such as call graph profiling that, I believe, is better than gprof. On the low level, I like doing things like generating profiles of the leading suspects, like branch mispredictions and cache misses, and looking at the code to see if there is something that can be done. Sometimes simple stuff, such as making an array size 255 rather than 256, helps a lot.
Generic Linux oprofile, http://oprofile.sourceforge.net/about/, is almost as good as Vtune, and better in some ways. And available for x86 and ARM. I haven't used it much, but I particularly like that you can use it in an almost completely non-intrusive manner, with no need to create the special -pg binary that gprof needs.
Thanks - please let me know if the dtrace tool is available for Linux too, and from where I can download it. Since our whole development is on Red Hat Linux / C++, please provide tools for the same platform.
There are many tools with which you can optimize your code.
For web applications there are different tools to compress/minify the code, e.g. YUI Compressor etc.
For desktop applications an optimizing compiler is good.
is this some kind of optimization? if (1) { ... } else (void)0
Possible Duplicate:
Why are there sometimes meaningless do/while and if/else statements in C/C++ macros?
In the source code of OpenJDK I found this macro:
#define PUTPROP(props, key, val) \
if (1) { \
    /* some code */ \
} else ((void) 0)
It is used as one would expect, e.g.:
PUTPROP(props, "os.arch", sprops->os_arch);
(If you are interested, it is in the file jdk/src/share/native/java/lang/System.c)
I suppose this is some kind of optimization thing. Can somebody explain or provide a link? This is hard to google.
And before someone asks: Yes, I'm just curious.
Duplicate of Why are there sometimes meaningless do/while and if/else statements in C/C++ macros?
JavaScript audio analyze phonetics
I'm able to analyze audio data using the AudioContext API in JavaScript and draw the waveform to a canvas.
The question is: after loading the audio data, I have about a 1024-long Uint8Array of data points representing the waveform (per frame). How do I guess what sounds this is making (from a choice of the phonetics mentioned here, namely:
Ⓐ
Closed mouth for the “P”, “B”, and “M” sounds. This is almost identical to the Ⓧ shape, but there is ever-so-slight pressure between the lips.
Ⓑ
Slightly open mouth with clenched teeth. This mouth shape is used for most consonants (“K”, “S”, “T”, etc.). It’s also used for some vowels such as the “EE” sound in bee.
Ⓒ
Open mouth. This mouth shape is used for vowels like “EH” as in men and “AE” as in bat. It’s also used for some consonants, depending on context.
This shape is also used as an in-between when animating from Ⓐ or Ⓑ to Ⓓ. So make sure the animations ⒶⒸⒹ and ⒷⒸⒹ look smooth!
Ⓓ
Wide open mouth. This mouth shape is used for vowels like “AA” as in father.
Ⓔ
Slightly rounded mouth. This mouth shape is used for vowels like “AO” as in off and “ER” as in bird.
This shape is also used as an in-between when animating from Ⓒ or Ⓓ to Ⓕ. Make sure the mouth isn’t wider open than for Ⓒ. Both ⒸⒺⒻ and ⒹⒺⒻ should result in smooth animation.
Ⓕ
Puckered lips. This mouth shape is used for “UW” as in you, “OW” as in show, and “W” as in way.
Ⓖ
Upper teeth touching the lower lip for “F” as in for and “V” as in very.
This extended mouth shape is optional. If your art style is detailed enough, it greatly improves the overall look of the animation. If you decide not to use it, you can specify so using the extendedShapes option.
Ⓗ
This shape is used for long “L” sounds, with the tongue raised behind the upper teeth. The mouth should be at least as far open as in Ⓒ, but not quite as far as in Ⓓ.
This extended mouth shape is optional. Depending on your art style and the angle of the head, the tongue may not be visible at all. In this case, there is no point in drawing this extra shape. If you decide not to use it, you can specify so using the extendedShapes option.
Ⓧ
Idle position. This mouth shape is used for pauses in speech. This should be the same mouth drawing you use when your character is walking around without talking. It is almost identical to Ⓐ, but with slightly less pressure between the lips: For Ⓧ, the lips should be closed but relaxed.
This extended mouth shape is optional. Whether there should be any visible difference between the rest position Ⓧ and the closed talking mouth Ⓐ depends on your art style and personal taste. If you decide not to use it, you can specify so using the extendedShapes option.
)?
I know there are many machine learning options like Meyda and TensorFlow and other machine learning methods, but I want an algorithm to detect the above phonetics in real time. It doesn't have to be 100% accurate, just slightly better than randomly picking certain values for the mouths... At this point, anything better than random would be fine.
I'm aware audio recognition can be done with PocketSphinx.js, and this is used in Rhubarb Lip Sync for its calculations, but all I'm looking for is a very simple algorithm that, given a 1024-sample data array of the waveform per frame, gets the phonetics. Again, it doesn't have to be 100% accurate, but it has to be real-time and better than random.
Basically, the problem with PocketSphinx is that its purpose is speech-to-word recognition, so it has a lot of extra code meant to translate the sounds to the exact words it has compiled in its dictionaries. I don't need that; I only need to extract the sounds themselves, without converting them to some dictionary, so theoretically there shouldn't be as much overhead.
I just want a simple algorithm that can take the already acquired data from the AudioContext and just guess, relatively, what sound from the above-mentioned list is being made.
Again, to be very clear:
I am not looking for a PocketSphinx solution, nor any other "ready to go" library. All I want is a mathematical formula for each one of the unique sounds mentioned above that can be adapted to any programming language.
I'm not sure why this is tagged tensorflow if you don't want a TensorFlow answer. If all you want is something better than random, you are almost certainly better off using a package like PocketSphinx and breaking the returned words down into their phonetics. What you're asking for is quite difficult: see threads discussing why here and here.
However, if you are absolutely attached to finding an algorithm for this...
Searching around, most items I came across used machine learning except for a few: this paper from 2008, this one from 1993, which was expanded into a full Ph.D dissertation, and this MIT research paper from 1997. Here is a sample of the algorithm the authors used in the last one, just for the /R/ sound:
The paper says they implemented their algorithm in C++, but unfortunately no code is included.
Bottom line, I would recommend sticking with PocketSphinx unless this is part of your own Ph.D research!
UPDATE:
Adding more detail here upon request. Pocketsphinx explains, if you scroll all the way down to section 8 in their readme, that they use a machine learning platform called Sphinxtrain, which is also available in French and Chinese. But at the top of the Sphinxtrain page, there is a link to their "new library" called Vosk.
Vosk supports 9 languages and is small enough to fit on a Raspberry Pi, so it may be closer to what you're looking for. It, in turn, uses an open source C++ speech recognition toolkit called Kaldi, which also uses machine learning.
Arduinos are significantly more limited than Raspberry Pis, as I'm sure you know, so you may seriously want to reach out to the authors of the MIT paper if you are going in that direction. The authors used a 200 MHz Pentium Pro processor with 32 MB of RAM, and that's about the power level of the best Arduinos: the Arduino Yun 2 includes a 400 MHz Linux microprocessor with 64 MB of RAM.
Hopefully that gives you enough to chew on. Good luck!
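If you do want to experiment with something purely algorithmic before reaching for any of these libraries, one classic non-ML baseline is to classify each analysis frame by short-time energy and zero-crossing rate: silence maps to the rest shape Ⓧ, loud low-ZCR frames are likely open vowels, and high-ZCR (noisy) frames suggest fricatives. The sketch below is my own illustration of that idea, not anything from the papers above, and every threshold in it is a made-up placeholder you would tune against your own audio:

```python
def classify_frame(samples, silence_rms=0.01, fricative_zcr=0.35):
    """Crude mouth-shape guess for one audio frame (floats in [-1, 1]).

    Thresholds are illustrative placeholders, not values from any paper.
    Returns one of the Rhubarb-style shape letters: 'X', 'D', 'C' or 'G'.
    """
    n = len(samples)
    # Short-time energy (RMS) separates silence from speech.
    rms = (sum(s * s for s in samples) / n) ** 0.5
    if rms < silence_rms:
        return "X"  # idle / rest position
    # Zero-crossing rate: noisy fricatives cross zero far more often
    # than voiced vowels do.
    crossings = sum(
        1 for a, b in zip(samples, samples[1:]) if (a < 0) != (b < 0)
    )
    zcr = crossings / (n - 1)
    if zcr > fricative_zcr:
        return "G"  # noisy consonant, e.g. /F/
    # Loud, low-ZCR frames are likely open vowels; quieter ones, mid vowels.
    return "D" if rms > 0.2 else "C"
```

This won't tell most vowels apart, but it is cheap enough to run per animation frame on a 1024-sample AudioContext buffer, and it is demonstrably better than picking shapes at random.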
OK, thanks for the research papers. Any idea what PocketSphinx is made from? Do they use machine learning or algorithms? And BTW, it may theoretically be for a Ph.D. paper, but I DO want an algorithm, because I might want to implement this on Arduino and other things where PocketSphinx is too much overhead. Also, in the browser, PocketSphinx is about 10 MB because it has to include the entire dictionaries, and it only works for English words, and a primary function of it is speech-to-text, which I don't care about, so I want to remove that overhead.
Also, how were you able to get that formula from the last link? I wasn't able to find any paper from 1997, and the ones I did find require a login.
There's a small link with "PDF" next to it. Here is the direct link.
I've updated my answer with more detail based on your questions.
Hi, thanks. Although really all I'm looking for is only algorithms, no machine learning. That first picture of the /R/ sound was really good and exactly what I'm looking for, but I looked in that paper and couldn't find anything for the other sounds. If you can provide some kind of reference to where I can find those, then this would be the accepted answer.
Arduino is one thing, but I'm also mainly trying to implement it in the browser, and it needs to be extremely lightweight. pocketsphinx.js is over 10 MB, and even the most minimalistic version is way overkill and causes a ton of delay; it needs to be real-time even on older devices.
Vosk doesn't accomplish what I need because I need to just use the AudioContext API with JavaScript on the client side of the browser, while Vosk is a binary dependency (and way too big (50 MB) anyway), and it's meant for language detection, which is not what I'm looking for.
Storing multiple arrays of strings in a command
I have modified the answer in Storing an array of strings in a command in order to allow multiple \storedata\mydata commands. Here is the MWE:
\documentclass[12pt]{article}
\newcount\tmpnum\tmpnum=0
\def\storedata#1#2{\edef\tmp{\string#1}\storedataA#2\end}
\def\storedataA#1{
\ifx\end#1\else
\advance\tmpnum by1
\expandafter\def\csname data:\tmp:\the\tmpnum\endcsname{#1}%
\expandafter\storedataA\fi
}
\def\getdata[#1]#2{\csname data:\string#2:#1\endcsname}
\usepackage{pgffor}
\begin{document}
\storedata\mydata{{A}{B}{C}}
\storedata\mydata{{D}}
\storedata\myotherdata{{E}{F}{G}}
\foreach \n in {1,...,7}{%
\noindent mydata\n:\ \if\relax\getdata[\n]\mydata\relax EMPTY\else \getdata[\n]\mydata\fi \
myotherdata\n:\ \if\relax\getdata[\n]\myotherdata\relax EMPTY\else \getdata[\n]\myotherdata\fi\\
}
\end{document}
However, when I declare another array \myotherdata, it continues the counting of the previously declared data structure.
How do I fix this?
Welcome. // I suggest studying the existing expl3 solutions more thoroughly. Have a look here https://tex.stackexchange.com/tags/expl3/info .
There's a string of recent posts about this, most answered by egreg. The updated expl3 answer to the question you linked has a bunch of very recent follow-ups, basically about implementing arrays in expl3. That will be much easier in the end than doing it all at the TeX level.
The problem with your code is that you are using a single count for all the arrays, whereas you would have to create a separate count for each new array. You could obviously do it programmatically, but the thing about expl3 is that it is already done for you ;).
As was suggested in comments, this is not the best approach. Arrays are much more easily implemented --- or, at least, facsimiles of arrays --- in expl3 and the commands thus created will be considerably more robust and more likely to provide useful error messages and logging.
However, since you provided a non-expl3 example selected from a question with a recently updated expl3 answer by egreg, nothing I could do in expl3 would have any point. So here's a way to fix the code you have.
The problem is you are using a single count register for all the arrays you create. Naturally, a single count can hold only a single value, so all your 'arrays' end up using that value. To do it this way, you need to understand the limitations of the code you're using. There are no arrays in TeX. We can construct the appearance of arrays for some purposes, but the functionality is not out-of-the-box.
So, if we want n arrays, we have to assign n count registers. Then each 'array' is created by using macros. The indexing is a side-effect of the naming. As far as TeX is concerned, the elements of each array are unrelated to one another.
Since we don't know how large n is, we need to create the counts on the fly. We can use the same \tmp for this which is used to create unique macro names for the 'elements'.
\def\storedataA#1{%
We test whether the count exists first. If it does, we don't do anything.
\ifcsname tmpnum\tmp\endcsname\relax
\else
Otherwise, we assign a new count register.
\expandafter\newcount\csname tmpnum\tmp\endcsname
\fi
Now we just need to adjust what follows.
\ifx\end#1\else
First, we need to step the counter we just created rather than \tmpnum.
\expandafter\advance\csname tmpnum\tmp\endcsname by1
To reduce the tangled knots of \expandafters, we pre-expand the value and just store it in a macro. This is fine for this bit because we're just using it to construct the name of the macro, so we don't need it to be treated as a number at all.
\edef\tempa{\expandafter\the\csname tmpnum\tmp\endcsname}%
Then we plug this into the name of the macro which is defined to expand to the element.
\expandafter\def\csname data:\tmp:\tempa\endcsname{#1}%
And finish as before
\expandafter\storedataA\fi
}
Complete code:
\documentclass[12pt]{article}
\def\storedata#1#2{%
\edef\tmp{\string#1}\storedataA#2\end
}
\def\storedataA#1{%
\ifcsname tmpnum\tmp\endcsname\relax
\else
\expandafter\newcount\csname tmpnum\tmp\endcsname
\fi
\ifx\end#1\else
\expandafter\advance\csname tmpnum\tmp\endcsname by1
\edef\tempa{\expandafter\the\csname tmpnum\tmp\endcsname}%
\expandafter\def\csname data:\tmp:\tempa\endcsname{#1}%
\expandafter\storedataA\fi
}
\def\getdata[#1]#2{\csname data:\string#2:#1\endcsname}
\usepackage{pgffor}
\begin{document}
\storedata\mydata{{A}{B}{C}}
\storedata\mydata{{D}}
\storedata\myotherdata{{E}{F}{G}}
\foreach \n in {1,...,7}{%
\noindent mydata\n:\ \if\relax\getdata[\n]\mydata\relax EMPTY\else \getdata[\n]\mydata\fi \
myotherdata\n:\ \if\relax\getdata[\n]\myotherdata\relax EMPTY\else \getdata[\n]\myotherdata\fi\\
}
\end{document}
expl3 sequences would be much nicer, though. then we could just say \seq_new:c { g_eder_#1_seq } and just define some wrappers for functions other people have written.
Excellent answer! Thank you!
In LaTeX it is common to have optional arguments nested in square brackets, but with your code the macro \getdata has a mandatory/non-optional argument in square brackets.
Instead of storing each array element in a macro on its own you can store the entire array in a macro and at the time of retrieving the data apply an expandable routine \UD@ExtractKthArg. This way you don't need \count-registers at all but can do with \romannumeral and \romannumeral-expansion.
A disadvantage of this approach is that changing the value of a single array element is more cumbersome and may consume more memory as the macro that needs to be redefined holds all values of the entire array, not just a single value of the array.
\makeatletter
%%
%% Code for \UD@ExtractKthArg
%%
%%=============================================================================
%% Paraphernalia:
%% \UD@firstoftwo, \UD@secondoftwo, \UD@PassFirstToSecond, \UD@Exchange,
%% \UD@stopromannumeral, \UD@CheckWhetherNull
%%=============================================================================
\newcommand\UD@firstoftwo[2]{#1}%
\newcommand\UD@secondoftwo[2]{#2}%
\newcommand\UD@PassFirstToSecond[2]{#2{#1}}%
\newcommand\UD@Exchange[2]{#2#1}%
\@ifdefinable\UD@stopromannumeral{\chardef\UD@stopromannumeral=`\^^00}%
%%-----------------------------------------------------------------------------
%% Check whether argument is empty:
%%.............................................................................
%% \UD@CheckWhetherNull{<Argument which is to be checked>}%
%% {<Tokens to be delivered in case that argument
%% which is to be checked is empty>}%
%% {<Tokens to be delivered in case that argument
%% which is to be checked is not empty>}%
%%
%% The gist of this macro comes from Robert R. Schneck's \ifempty-macro:
%% <https://groups.google.com/forum/#!original/comp.text.tex/kuOEIQIrElc/lUg37FmhA74J>
\newcommand\UD@CheckWhetherNull[1]{%
\romannumeral\expandafter\UD@secondoftwo\string{\expandafter
\UD@secondoftwo\expandafter{\expandafter{\string#1}\expandafter
\UD@secondoftwo\string}\expandafter\UD@firstoftwo\expandafter{\expandafter
\UD@secondoftwo\string}\expandafter\UD@stopromannumeral\UD@secondoftwo}{%
\expandafter\UD@stopromannumeral\UD@firstoftwo}%
}%
%%=============================================================================
%% Extract K-th inner undelimited argument:
%%
%% \UD@ExtractKthArg{<TeX <number> quantity denoting integer value K>}%
%% {<tokens in case list of undelimited args doesn't have a K-th argument>}%
%% {<list of undelimited args>}%
%%
%% In case there is no K-th argument in <list of undelimited args> :
%% Does deliver <tokens in case list of undelimited args doesn't have a K-th argument>.
%% In case there is a K-th argument in <list of undelimited args> :
%% Does deliver that K-th argument with one level of braces removed.
%%
%% Examples:
%%
%% \UD@ExtractKthArg{0}{not available}{ABCDE} yields: not available
%%
%% \UD@ExtractKthArg{3}{not available}{ABCDE} yields: C
%%
%% \UD@ExtractKthArg{3}{not available}{AB{CD}E} yields: CD
%%
%% \UD@ExtractKthArg{4}{not available}{{001}{002}{003}{004}{005}} yields: 004
%%
%% \UD@ExtractKthArg{6}{not available}{{001}{002}{003}} yields: not available
%%
%%=============================================================================
\newcommand\UD@ExtractKthArg[1]{%
\romannumeral%
% #1: <integer number K>
\expandafter\UD@ExtractKthArgCheck
\expandafter{\romannumeral\number\number#1 000}%
}%
\newcommand\UD@ExtractKthArgCheck[3]{%
\UD@CheckWhetherNull{#1}{\UD@stopromannumeral#2}{% empty
\expandafter\UD@ExtractKthArgLoop\expandafter{\UD@firstoftwo{}#1}{#2}{#3}%
}%
}%
\begingroup
\def\UD@ExtractFirstArgLoop#1{%
\endgroup
\@ifdefinable\UD@RemoveTillFrozenrelax{%
\long\def\UD@RemoveTillFrozenrelax##1##2#1{{##1}}%
}%
\newcommand\UD@ExtractKthArgLoop[3]{%
\expandafter\UD@CheckWhetherNull\expandafter{\UD@firstoftwo##3{}.}{\UD@stopromannumeral##2}{%
\UD@CheckWhetherNull{##1}{%
\UD@ExtractFirstArgLoop{##3#1}%
}{%
\expandafter\UD@PassFirstToSecond\expandafter{\UD@firstoftwo{}##3}%
{\expandafter\UD@ExtractKthArgLoop\expandafter{\UD@firstoftwo{}##1}{##2}}%
}%
}%
}%
}%
\expandafter\expandafter\expandafter\UD@ExtractFirstArgLoop
\expandafter\expandafter\expandafter{%
\expandafter\expandafter\ifnum0=0\fi}%
%% Usage of frozen-\relax as delimiter is for speeding things up by reducing the
%% amount of iterations needed. I chose frozen-\relax because David Carlisle
%% pointed out in <https://tex.stackexchange.com/a/578877>
%% that frozen-\relax cannot be (re)defined in terms of \outer and cannot be
%% affected by \uppercase/\lowercase.
%%
%% \UD@ExtractFirstArg's argument may contain frozen-\relax:
%% The only effect is that internally more iterations are needed for
%% obtaining the result.
\newcommand\UD@ExtractFirstArgLoop[1]{%
\expandafter\UD@CheckWhetherNull\expandafter{\UD@firstoftwo{}#1}%
{\expandafter\UD@stopromannumeral\UD@firstoftwo#1{}}%
{\expandafter\UD@ExtractFirstArgLoop\expandafter{\UD@RemoveTillFrozenrelax#1}}%
}%
%%
%% End of code for \UD@ExtractKthArg.
%%
%% Code for \UD@CheckWhetherOnlyDefinedMacro / \UD@CheckWhetherOnlyUndefinedToken
%%
%%-----------------------------------------------------------------------------
%% Check whether argument's first token is an explicit character token
%% of category 1 (begin group):
%%.............................................................................
%% \UD@CheckWhetherBrace{<Argument which is to be checked>}%
%% {<Tokens to be delivered in case that argument
%% which is to be checked has a leading
%% explicit catcode-1-character-token>}%
%% {<Tokens to be delivered in case that argument
%% which is to be checked does not have a
%% leading explicit catcode-1-character-token>}%
\newcommand\UD@CheckWhetherBrace[1]{%
\romannumeral\expandafter\UD@secondoftwo\expandafter{\expandafter{%
\string#1.}\expandafter\UD@firstoftwo\expandafter{\expandafter
\UD@secondoftwo\string}\expandafter\UD@stopromannumeral\UD@firstoftwo}{%
\expandafter\UD@stopromannumeral\UD@secondoftwo}%
}%
%%=============================================================================
%% Check whether token is a macro (only with macros does the \meaning contain
%% the sequence ->) - the argument of \UD@CheckWhetherMacro must be a single
%% token; the argument of \UD@CheckWhetherMacro must not be empty:
%%=============================================================================
\newcommand\UD@CheckWhetherMacro[1]{%
\expandafter\expandafter\expandafter\UD@CheckWhetherNull
\expandafter\expandafter\expandafter{%
\expandafter\UD@@CheckWhetherMacro\meaning#1->}%
{\UD@secondoftwo}{\UD@firstoftwo}%
}%
\@ifdefinable\UD@@CheckWhetherMacro{\long\def\UD@@CheckWhetherMacro#1->{}}%
%%=============================================================================
%% Check whether token is undefined control sequence (only with undefined
%% control sequences the \meaning has the leading phrase "undefined"; besides
%% this \meaning never delivers no tokens at all) - the argument of
%% \UD@CheckWhetherUndefined must be a single token; the argument of
%% \UD@CheckWhetherUndefined must not be empty:
%%=============================================================================
\begingroup
\def\UD@CheckWhetherUndefined#1#2{%
\endgroup
\newcommand\UD@CheckWhetherUndefined[1]{%
\expandafter\expandafter\expandafter\UD@CheckWhetherNull
\expandafter\expandafter\expandafter{%
\expandafter\UD@@CheckWhetherUndefined\meaning##1#1#2}%
}%
\@ifdefinable\UD@@CheckWhetherUndefined{%
\def\UD@@CheckWhetherUndefined##1#1##2#2{##1}%
}%
}%
\escapechar=-1\relax
\expandafter\expandafter\expandafter\UD@Exchange
\expandafter\expandafter\expandafter{%
\expandafter\expandafter\ifnum0=0\fi}{%
\expandafter\UD@CheckWhetherUndefined\expandafter{\string\undefined}%
}%
%%=============================================================================
%% Check whether the token list which forms the argument is a single non-space
%% token (as a single token it also isn't wrapped in braces):
%%=============================================================================
\newcommand\UD@CheckWhetherSingleNonBlankNonBraced[1]{%
\expandafter\UD@CheckWhetherNull\expandafter{\UD@firstoftwo#1{}{{}}}{%
%#1 is blank
\UD@secondoftwo
}{%
\expandafter\UD@CheckWhetherNull\expandafter{\UD@firstoftwo{}#1}{%
\UD@CheckWhetherBrace{#1}{%
%#1 has a leading brace token
\UD@secondoftwo
}{%
\UD@firstoftwo
}%
}{%
%#1 is not a single undelimited argument but several undelimited arguments
\UD@secondoftwo
}%
}%
}%
\newcommand\UD@CheckWhetherOnlyDefinedMacro[1]{%
\UD@CheckWhetherSingleNonBlankNonBraced{#1}{%
\UD@CheckWhetherMacro{#1}{%
\UD@firstoftwo
}{%
\UD@secondoftwo
}%
}{\UD@secondoftwo}%
}%
\newcommand\UD@CheckWhetherOnlyUndefinedToken[1]{%
\UD@CheckWhetherSingleNonBlankNonBraced{#1}{%
\UD@CheckWhetherUndefined{#1}{%
\UD@firstoftwo
}{%
\UD@secondoftwo
}%
}{\UD@secondoftwo}%
}%
%%
%% End of code for \UD@CheckWhetherOnlyDefinedMacro / \UD@CheckWhetherOnlyUndefinedToken
%%
\makeatother
\documentclass[12pt]{article}
\makeatletter
\newcommand\storedata[2]{%
\UD@CheckWhetherOnlyUndefinedToken{#1}{\def#1{#2}}{%
\UD@CheckWhetherOnlyDefinedMacro{#1}{%
% Here one should crank out whether #1 is a macro which processes arguments/parameter text,
% but that is a "hard" task. Because by parsing the result of \meaning you cannot reliably
% distinguish a <definition text>'s <parameter text> from the definition's <balanced text>.
% \globaldefs having a non-positive value is assumed; the assignment is
% restricted to the local scope.
\expandafter\UD@PassFirstToSecond\expandafter{\the\toks@}{%
\toks@\expandafter{#1#2}\edef#1{\the\toks@}\toks@
}%
}{%
Error: \detokenize{#1} is neither undefined nor a macro%
}%
}%
}%
\@ifdefinable\getdata{%
\long\def\getdata[#1]#2{%
\romannumeral
\UD@CheckWhetherOnlyDefinedMacro{#2}%
{%
% Here one should crank out whether #2 is a macro which processes arguments/parameter text,
% but that is a "hard" task. Because by parsing the result of \meaning you cannot reliably
% distinguish a <definition text>'s <parameter text> from the definition's <balanced text>.
\expandafter\UD@PassFirstToSecond
\expandafter{#2}{%
% \romannumeral is already running and driving expansion, so let's remove
% the \romannumeral coming from \UD@ExtractKthArg.
\expandafter\UD@secondoftwo\UD@ExtractKthArg{#1}{array element not available}%
}%
}%
{\UD@stopromannumeral Error: \detokenize{#2} is not a defined macro}%
}%
}%
\makeatother
\usepackage{pgffor}
\begin{document}
\storedata\mydata{{A}{B}{C}}
\storedata\mydata{{D}}
\storedata\myotherdata{{E}{F}{G}}
\foreach \n in {-1,...,7}{%
\ifnum\n>-1 \bigskip\fi
\par\noindent
\texttt{\string\n}=\n\\
\texttt{\string\getdata}[\n]\texttt{\string\mydata}: \getdata[\n]\mydata\\
\texttt{\string\getdata}[\n]\texttt{\string\myotherdata}: \getdata[\n]\myotherdata
}
\end{document}
Passing partial array to a function
This is really a question about efficiency I guess...
I have a device that gives me (image) data in an array that I then pass to a third party library as follows:
Private Sub NewData(TheData() as ushort)
LibraryFunction(TheData)
End Sub
This works just fine, EXCEPT that for the device to run efficiently it uses a ring buffer, for which it requires that the image data array is 'n' times the length of the image data, where 'n' is the depth of the ring buffer. This is fine as when I receive data from the device I also receive the index of the image frame.
What I want to write then is:
Private Const ImageDataLength = somenumber
Private Sub NewData(TheData() as ushort, bufferIndex as long)
LibraryFunction(TheData(bufferIndex * ImageDataLength))
End Sub
BUT the LibraryFunction will only accept an array (of ushort), so to make this work I have had to do the following:
Private Const ImageDataLength = somenumber
Private Sub NewData(TheData() as ushort, bufferIndex as long)
Dim tmp(ImageDataLength - 1) As ushort
Array.Copy(TheData, bufferIndex * ImageDataLength, tmp, 0, ImageDataLength)
LibraryFunction(tmp)
End Sub
Again, works OK BUT there is the obvious extra data copy going on. Given that each chunk of image data is actually 15MB and I'm trying to keep frame rates up, I'd love to lose the array.copy
Any ideas on how to make this as efficient as possible?
Thanks
When you pass an array to a function, only the reference of that array is passed. So if the LibraryFunction is under your control, you should optimize it to work with the passed in array. Most possibly an overload accepting the bufferIndex should do.
Are you pushing these chunks through a wire or wireless connection or this is all in-process?
Hi. It's in-process at this point in the code. "LibraryFunction" is part of a third-party system, so I only have the function as exposed by the library to work with - hence posing the question here! (The old version used to take a pointer to the data, but the new .NET version seems to just accept an array as a parameter.)
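For what it's worth, the general escape hatch here (when a callee can accept it) is a zero-copy view over the big ring buffer instead of a copied chunk; in .NET that would be something like ArraySegment(Of UShort), or Span(Of UShort) on newer frameworks, assuming the library exposed an overload taking one. Since the thread mixes VB.NET and C#, here is the idea sketched language-agnostically in Python, where memoryview plays the role of the view type; library_function, the sizes, and the data are all toy stand-ins:

```python
import array

IMAGE_DATA_LENGTH = 4  # toy frame size; the real frames are ~15 MB


def library_function(view):
    # Stand-in for the third-party call: just sums the frame it receives.
    return sum(view)


def new_data(ring_buffer, buffer_index):
    # A memoryview slice shares the ring buffer's storage, so unlike
    # Array.Copy into a temporary array, no element is copied here.
    start = buffer_index * IMAGE_DATA_LENGTH
    frame = memoryview(ring_buffer)[start:start + IMAGE_DATA_LENGTH]
    return library_function(frame)


# Three 4-sample "frames" of unsigned shorts in one ring buffer.
ring = array.array("H", range(12))
```

The slice shares the ring buffer's storage, so no per-frame copy happens; the catch is that the callee must be written (or overloaded) to consume a view rather than a whole array, which is exactly the constraint the third-party library imposes here.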
npx create-react-app myapp doesn't work. Aborting installation
When tried create new project
npx create-react-app my-app - got this Aborting installation.
I have node v14.17.1
And then I switched to the latest version, 16.4.0, and the problem is the same.
sudo npm cache clean --force did not resolve my problem.
I reinstalled nvm, but it did not help. My OS is macOS Big Sur Version 11.4.
This is the first time I have faced this problem.
Why do I get error 500 on the server https://nexus.poynt.com...?
But now I get another error
I tried on my old machine (Windows 7, node v13.14.0) and everything finished successfully; the problem is with this Mac.
"the problem is the same" - is it? One says @types/zipcodes can't be resolved (which indeed doesn't seem to exist: https://www.npmjs.com/package/@types/zipcodes, see Versions tab), the other that there was a server error trying to hit what is presumably your corporate package repo. And please post text as text.
This problem is resolved! I cleared all the data in the file ~/.npmrc left over from the previous project.
sudo nano ~/.npmrc
Implementing a brute force algorithm for detecting a self-intersecting polygon
I initially implemented the Hoey-Shamos algorithm, however it is too complex for future maintainability (I have no say in this), and it wasn't reporting correctly, so an optimized brute force algorithm is what I'm going to use.
My question is: How can I optimize this code to be usable?
As it stands, my code contains a nested for loop, iterating the same list twice.
EDIT: Turned lines into a HashSet and used two foreach loops... shaved about 45 seconds off scanning 10,000. It's still not enough.
foreach (Line2D g in lines)
{
foreach (Line2D h in lines)
{
if (g.intersectsLine(h))
{
return false;
}
}
} // end 'lines' for each loop
If I force my "intersectsLine()" method to return false (for testing purposes) it still takes 8 seconds to scan 10,000 records (I have 700,000 records). This is too long, so I need to optimize this piece of code.
I tried removing lines from the List after it's been compared to all the other lines, however there's an accuracy issue (no idea why) and the speed increase is barely noticeable.
Here is my intersectsLine method. I found an alternate approach here but it looks like it'd be slower with all the method calls and whatnot. Calculating the slope doesn't seem to me like it'd take too much computing (Correct me if I'm wrong?)
public bool intersectsLine(Line2D comparedLine)
{
//tweakLine(comparedLine);
if (this.Equals(comparedLine) ||
P2.Equals(comparedLine.P1) ||
P1.Equals(comparedLine.P2))
{
return false;
}
double firstLineSlopeX, firstLineSlopeY, secondLineSlopeX, secondLineSlopeY;
firstLineSlopeX = X2 - X1;
firstLineSlopeY = Y2 - Y1;
secondLineSlopeX = comparedLine.X2 - comparedLine.X1;
secondLineSlopeY = comparedLine.Y2 - comparedLine.Y1;
double s, t;
s = (-firstLineSlopeY * (X1 - comparedLine.X1) + firstLineSlopeX * (Y1 - comparedLine.Y1)) / (-secondLineSlopeX * firstLineSlopeY + firstLineSlopeX * secondLineSlopeY);
t = (secondLineSlopeX * (Y1 - comparedLine.Y1) - secondLineSlopeY * (X1 - comparedLine.X1)) / (-secondLineSlopeX * firstLineSlopeY + firstLineSlopeX * secondLineSlopeY);
if (s >= 0 && s <= 1 && t >= 0 && t <= 1)
{
//console.WriteLine("s = {0}, t = {1}", s, t);
//console.WriteLine("this: " + this);
//console.WriteLine("other: " + comparedLine);
return true;
}
return false; // No collision */
}
EDIT: The major bottleneck is the big polygons! I excluded running polygons with more than 100 lines, and it ran all 700,000+ polygons in 5:10. Which is in the acceptable range! Surely there's a way to see if a polygon is worth calculating before running this code? I have access to the XMin, Xmax, YMin, and YMax properties if that helps?
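On the XMin/XMax/YMin/YMax idea: two segments (or two polygons) can only intersect if their axis-aligned bounding boxes overlap, so a cheap rejection test before the s/t computation lets you skip most pairs outright. A quick sketch of that test, in plain Python rather than the thread's C# just to show the comparisons (the names are mine, not from the code above):

```python
def boxes_overlap(a_min_x, a_max_x, a_min_y, a_max_y,
                  b_min_x, b_max_x, b_min_y, b_max_y):
    """True if two axis-aligned bounding boxes overlap (touching counts).

    If this returns False, the enclosed segments cannot intersect, so the
    expensive s/t intersection test can be skipped entirely.
    """
    return (a_min_x <= b_max_x and b_min_x <= a_max_x and
            a_min_y <= b_max_y and b_min_y <= a_max_y)


def segment_box(p1, p2):
    """Bounding box (min_x, max_x, min_y, max_y) of one segment."""
    (x1, y1), (x2, y2) = p1, p2
    return (min(x1, x2), max(x1, x2), min(y1, y2), max(y1, y2))
```

In the nested loop, calling the overlap test first and only falling through to intersectsLine on overlap trades four comparisons for the full intersection arithmetic on the (usually large) majority of non-overlapping pairs.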
Ran another test, limiting polygons to under 1000 lines each. It took a little over an hour.
I removed all limiting of polygons, and it's been running for 36 hours now... still no results.
A couple ideas I have:
-When I generate my lines hashset, have another hashset/List that has candidates for intersection. I would only add lines to this list if there's a chance for intersection. But how do I eliminate/add possibilities? If there's only three lines that could possibly intersect with others, I'd be comparing 4000 lines against 3 instead of 4000. This alone could make a HUGE difference.
-If the same point occurs twice, except the first and last point, omit running the nested for loop.
Edit:
Information about the polygons:
700,000 total
There are over four thousand polygons with 1,000 or more points
There are 2 polygons with 70,000 or more points
I think it's possible to get this down to fifteen or so minutes with a bit of creativity.
As this is a programming site, there are some issues that relate to what data type you plan to use: FP or integer?
Not quite sure what you mean? I have a function within the object that returns true/false. The function will return false if the polygon has self-intersecting lines.
Are the points of your polygons an integer or floating-point type?
They are doubles and have several decimal places. It's ArcGIS data.
Not a complete answer, but you could use quadtrees (http://en.wikipedia.org/wiki/Quadtree). This nice article http://blogs.msdn.com/b/kaelr/archive/2009/05/21/priorityquadtree.aspx has a cool PriorityQuadTree<T> implementation.
Is there any particular part of that that could help me? From what I read, it looks like that's for drawing shapes on a screen. @SimonMourier
Quadtrees are mostly used to detect intersections. Check the source for methods such as HasItemsIntersecting, GetItemsIntersecting, GetItemsInside, etc.
I assume I'd have to make some modifications, I'll do some reading, thanks.
Can you supply a sample/test dataset?
Let me see if I understand the constraints correctly here. You cannot use the best algorithm (Hoey-Shamos) because it's too complicated. You want to use Brute-Force, but it's O(n^2) and realistically, no amount of mere code optimization is going to make 490,000,000,000 comparisons tractable. So what you really want is an algorithm+implementation that is faster than brute-force but simple enough to pass off as an "Optimized Brute-Force". Is that about right?
Yes, you are 100% correct. I have no control over the requirements unfortunately.
Have you considered using well-known, highly optimized implementations (IsSimple function) like the GDAL lib (it has good, fast SWIG C# bindings) or even PostGIS (without storage, just using the SQL/MM Spatial engine)?
Not sure about PostGIS since none of my processing is doing any SQL. Which one of those mentioned has an isSimple function? I don't see anything on the GDal Lib website.
Well I have something that I think might work, but I need some sample data that I can test against so that I can make sure that it works correctly.
I can share output for this dummy data I created in ArcMap, but that's about it unfortunately :(
http://sharetext.org/b0uL
OK, those are some strangely shaped polygons. If the bigger real polygons look like these, I am not sure that my optimization will help much. What do these polygons represent and why does it matter if they cross themselves?
The code at the top is what I use to separate the rings. (It's ArcGIS data). The polygons are important. I'm scanning property data. I'm about to update my post with some more info.
Well, for one thing, you appear to be doing every possible comparison twice, and are comparing each line to itself as well. Cleaning those problems up should cut the runtime roughly in half.
OK, good. Property plots are sufficiently well-behaved that my idea should perform well enough.
Your current Brute-Force algorithm is O(n^2). For just your two 70,000 line polygons that's some factor of almost 10 Billion operations, to say nothing of the 700,000 other polygons. Obviously, no amount of mere code optimization is going to be enough, so you need some kind algorithmic optimization that can lower that O(n^2) without being unduly complicated.
The O(n^2) comes from the nested loops in the brute-force algorithm that are each bounded by n, making it O(n*n). The simplest way to improve this would be to find some way to reduce the inner loop so that it is not bound by or dependent on n. So what we need to find is some way to order or re-order the inner list of lines to check the outer line against so that only a part of the full list needs to be scanned.
The approach that I am taking takes advantage of the fact that if two line segments intersect, then the range of their X values must overlap each other. Mind you, this doesn't mean that they do intersect, but if their X ranges don't overlap, then they cannot be intersecting so theres no need to check them against each other. (this is true of the Y value ranges also, but you can only leverage one dimension at a time).
The advantage of this approach is that these X ranges can be used to order the endpoints of the lines which can in turn be used as the starting and stopping points for which lines to check against for intersection.
So specifically what we do is to define a custom class (endpointEntry) that represents the High or Low X values of the line's two points. These endpoints are all put into the same List structure and then sorted based on their X values.
Then we implement an outer loop where we scan the entire list just as in the brute-force algorithm. However our inner loop is considerably different. Instead of re-scanning the entire list for lines to check for intersection, we rather start scanning down the sorted endpoint list from the high X value endpoint of the outer loop's line and end it when we pass below that same line's low X value endpoint. In this way, we only check the lines whose range of X values overlap the outer loop's line.
OK, here's a c# class demonstrating my approach:
class CheckPolygon2
{
// internal supporting classes
class endpointEntry
{
public double XValue;
public bool isHi;
public Line2D line;
public double hi;
public double lo;
public endpointEntry fLink;
public endpointEntry bLink;
}
class endpointSorter : IComparer<endpointEntry>
{
public int Compare(endpointEntry c1, endpointEntry c2)
{
// sort values on XValue, descending
if (c1.XValue > c2.XValue) { return -1; }
else if (c1.XValue < c2.XValue) { return 1; }
else // must be equal, make sure hi's sort before lo's
if (c1.isHi && !c2.isHi) { return -1; }
else if (!c1.isHi && c2.isHi) { return 1; }
else { return 0; }
}
}
public bool CheckForCrossing(List<Line2D> lines)
{
List<endpointEntry> pts = new List<endpointEntry>(2 * lines.Count);
// Make endpoint objects from the lines so that we can sort all of the
// lines endpoints.
foreach (Line2D lin in lines)
{
// make the endpoint objects for this line
endpointEntry hi, lo;
if (lin.P1.X < lin.P2.X)
{
hi = new endpointEntry() { XValue = lin.P2.X, isHi = true, line = lin, hi = lin.P2.X, lo = lin.P1.X };
lo = new endpointEntry() { XValue = lin.P1.X, isHi = false, line = lin, hi = lin.P2.X, lo = lin.P1.X };
}
else
{
hi = new endpointEntry() { XValue = lin.P1.X, isHi = true, line = lin, hi = lin.P1.X, lo = lin.P2.X };
lo = new endpointEntry() { XValue = lin.P2.X, isHi = false, line = lin, hi = lin.P1.X, lo = lin.P2.X };
}
// add them to the sort-list
pts.Add(hi);
pts.Add(lo);
}
// sort the list
pts.Sort(new endpointSorter());
// set the endpoints' forward and backward links
endpointEntry prev = null;
foreach (endpointEntry pt in pts)
{
if (prev != null)
{
pt.bLink = prev;
prev.fLink = pt;
}
prev = pt;
}
// NOW, we are ready to look for intersecting lines
foreach (endpointEntry pt in pts)
{
// for every Hi endpoint ...
if (pt.isHi)
{
// check every other line whose X-range is either wholly
// contained within our own, or that overlaps the high
// part of ours. The other two cases of overlap (overlapping
// our low end, or wholly containing us) are covered by hi
// points above that scan down to check us.
// scan down for each lo-endpoint below us, checking each one's
// line for intersection, until we pass our own lo-X value
for (endpointEntry pLo = pt.fLink; (pLo != null) && (pLo.XValue >= pt.lo); pLo = pLo.fLink)
{
// is this a lo-endpoint?
if (!pLo.isHi)
{
// check its line for intersection
if (pt.line.intersectsLine(pLo.line))
return true;
}
}
}
}
return false;
}
}
I am not certain what the true execution complexity of this algorithm is, but I suspect that for most non-pathological polygons it will be close to O(n*SQRT(n)) which should be fast enough.
Explanation of the Inner Loop logic:
The inner loop simply scans the endPoints list in the same sorted order as the outer loop. But it will start scanning from where the outer loop currently is in the list (which is the hiX point of some line), and will only scan until the endpoint values drop below the corresponding loX value of that same line.
What's tricky here is that this cannot be done with an Enumerator (the foreach(..in pts) of the outer loop) because there's no way to enumerate a sublist of a list, nor to start the enumeration based on another enumeration's current position. So instead what I did was to use the Forward and Backward Links (fLink and bLink) properties to make a doubly-linked list structure that retains the sorted order of the list, but that I can incrementally scan without enumerating the list:
for (endpointEntry pLo = pt.fLink; (pLo != null) && (pLo.XValue >= pt.lo); pLo = pLo.fLink)
Breaking this down, the old-style for loop specifier has three parts: initialization, condition, and increment-decrement. The initialization expression, endpointEntry pLo = pt.fLink; initializes pLo with the forward Link of the current point in the list. That is, the next point in the list, in descending sorted order.
Then the body of the inner loop gets executed. Then the increment-decrement pLo = pLo.fLink gets applied, which simply sets the inner loop's current point (pLo) to the next lower point using its forward-link (pLo.fLink), thus advancing the loop.
Finally, it tests the loop condition (pLo != null) && (pLo.XValue >= pt.lo), which keeps looping so long as the new point isn't null (which would mean that we were at the end of the list) and so long as the new point's XValue is still greater than or equal to the low X value of the outer loop's current point. This second condition ensures that the inner loop only looks at lines that overlap the line of the outer loop's endpoint.
What is clearer to me now, is that I probably could have gotten around this whole fLink-bLink clumsiness by treating the endPoints List as an array instead:
1. Fill up the list with endpoints
2. Sort the list by XValue
3. Outer loop through the list in descending order, using an index iterator (i) instead of a foreach enumerator
4. Inner loop through the list, using a second iterator (j), starting at i and ending when it passes below pt.lo
That I think would be much simpler. I can post a simplified version like that, if you want.
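For reference, that simplified array-index version can be sketched in Python. This is my own illustrative translation, not the original C# answer code: the helper names are made up, the intersection test only detects proper crossings (shared endpoints of neighbouring polygon edges are deliberately ignored), and the real code would reuse the existing intersectsLine logic instead.

```python
# Sketch of the simplified array-based sweep described above.
# Each segment is ((x1, y1), (x2, y2)). One entry per segment is sorted
# descending on its high X; for each entry we scan forward only while the
# later entries' high X can still overlap this segment's [lo, hi] X range.

def _orient(a, b, c):
    # Sign of the cross product (b - a) x (c - a).
    v = (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])
    return (v > 0) - (v < 0)

def segments_intersect(s, t):
    # Proper-crossing test via orientations; returns False for segments
    # that merely touch at a shared endpoint (neighbouring edges).
    a, b = s
    c, d = t
    return (_orient(a, b, c) * _orient(a, b, d) < 0 and
            _orient(c, d, a) * _orient(c, d, b) < 0)

def has_self_intersection(segments):
    # One entry per segment: (hiX, loX, segment), sorted descending on hiX.
    entries = sorted(
        ((max(p[0], q[0]), min(p[0], q[0]), (p, q)) for p, q in segments),
        reverse=True)
    for i, (hi, lo, seg) in enumerate(entries):
        j = i + 1
        # Later entries with hiX < our loX cannot overlap our X range,
        # so the forward scan stops there.
        while j < len(entries) and entries[j][0] >= lo:
            if segments_intersect(seg, entries[j][2]):
                return True
            j += 1
    return False

# A "bowtie" quadrilateral self-intersects; a plain square does not.
bowtie = [((0, 0), (2, 2)), ((2, 2), (2, 0)), ((2, 0), (0, 2)), ((0, 2), (0, 0))]
square = [((0, 0), (2, 0)), ((2, 0), (2, 2)), ((2, 2), (0, 2)), ((0, 2), (0, 0))]
print(has_self_intersection(bowtie), has_self_intersection(square))
```

As in the linked-list version, each overlapping pair is examined exactly once, and the scan cutoff is what keeps typical (non-pathological) inputs well below the brute-force O(n^2).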
It takes a total of 15 minutes to run all 700,000 polygons! That's extremely good. I just need to check that it doesn't miss any.
@EvanParsons How did the testing go? Did this algorithm work out for you?
I'm 99% sure it's doing what I want. The only issue is that I don't understand it completely yet (although I have a fairly good idea what it does), I'm going to dedicate some time tomorrow and make sure I understand it. I made some adjustments to my lineSegment intersection method and a few other areas and got the time down to 8:47. This is basically as fast as my original implementation of the Hoey-Shamos algorithm. Thanks a bunch!
Okay, so I think I generally understand what's going on. But what's going on in the inner loop? I've never seen a for loop like that before.
@EvanParsons I have added an explanation for this to the end of my Answer.
there are 2 things to check:
1. A closed polygon consists of a cyclic sequence of points.
If the same point appears in this sequence more than once, then it is a self-intersecting polygon.
Beware that the first and last point can be the same (this depends on your polygon representation); in that case that point must appear more than twice.
2. Check all non-neighbouring lines of your polygon for intersection.
Non-neighbouring lines do not share any point.
If there is an intersection, then the polygon is self-intersecting.
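The first check (repeated points) is a cheap first pass. Here is a minimal sketch; the helper name and the hashable-tuple point representation are my own assumptions:

```python
def has_repeated_point(points):
    # Drop the closing point if the polygon repeats its first point at
    # the end, then look for any point that occurs more than once.
    if len(points) > 1 and points[0] == points[-1]:
        points = points[:-1]
    return len(set(points)) != len(points)

print(has_repeated_point([(0, 0), (2, 0), (1, 1), (0, 2)]))          # distinct points
print(has_repeated_point([(0, 0), (2, 0), (1, 1), (2, 0), (0, 2)]))  # (2, 0) appears twice
```

The second check is then the usual pairwise intersection test restricted to non-neighbouring edges.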
No... I am using C++, and in most cases (of my work) third-party libs are not an option for many reasons. BTW, intersection detection in my programs is fast enough. I think your comment is better suited for the question, not for the answer. BTW, this is a follow-up answer to a duplicate question; for clarity take a look here http://stackoverflow.com/questions/18512815/implementing-hoey-shamos-algorithm-with-c-sharp?lq=1
Yep, haste. I've moved comment
Simple optimisation that should halve the time for polygons that do not intersect:
int count = lines.Count();
for (int l1idx = 0; l1idx < count-1; l1idx++)
for (int l2idx = l1idx+1; l2idx < count; l2idx++)
{
Line2D g = lines[l1idx];
Line2D h = lines[l2idx];
if (g.intersectsLine(h))
{
return false;
}
}
Do not use Count() on the List. It is a LINQ extension method that creates an Enumerator and calls MoveNext, incrementing a counter for each item. For 700,000 items it will execute counter++ 700,000 times. List<> has a Count member and everyone should use it. That is a simple value; the Add(T) method uses Count for inserting a new item right at the end of the List. It is also better to use an array for value types to avoid boxing/unboxing.
@GlebSevruk You're incorrect about count, the linq extension returns the property value if it is an ICollection. See http://stackoverflow.com/a/7969468. You're also wrong about the generic boxing logic, that would be true for java but not c#. It is rare you would want to use an array http://stackoverflow.com/a/434765
Thanks for pointing that out. I still use 3.5 and realized that "This was only added in .NET 4". Now I get the point about boxing: it will occur only in the old non-generic .NET 2 List, since ints are stored in an array of objects and must be cast back and forth. The only issue left with List for storing plain values is dynamic resizing when you add a new element. Initially a List is created with a buffer for 8 or 16 items (int[16]). After it reaches this limit, a new array, doubled in size, is allocated (int[32]) and the data is moved around in memory. I see little awareness of new List<T>(SIZE) among developers.
Association name expected, 'badge' is not an association
I have a problem with override document (mongodb) and generator.yml
Parent document:
<?php
namespace Acme\DemoBundle\Document;
use Doctrine\ODM\MongoDB\Mapping\Annotations as ODM;
/**
* @ODM\Document
*/
class Product
{
/**
* @ODM\ReferenceOne(targetDocument="Badge")
*/
private $badge;
}
Child document:
<?php
namespace Acme\TestDemoBundle\Document;
use Acme\DemoBundle\Document\Product as BaseProduct;
use Doctrine\ODM\MongoDB\Mapping\Annotations as ODM;
/**
* @ODM\Document
*/
class Product extends BaseProduct
{
/**
* @ODM\String
*/
private $field;
}
The problem occurs when you specify a new model in the overridden generator.
Parent generator:
generator: admingenerator.generator.doctrine_odm
params:
model: Acme\DemoBundle\Document\Product
namespace_prefix: Acme
bundle_name: DemoBundle
i18n_catalog: AcmeDemoBundle
object_actions:
delete: ~
fields:
badge:
label: badge.label
# ......
Child generator:
generator: admingenerator.generator.doctrine_odm
params:
model: Acme\TestDemoBundle\Document\Product
namespace_prefix: Acme
bundle_name: DemoBundle
i18n_catalog: AcmeDemoBundle
object_actions:
delete: ~
fields:
badge:
label: badge.label
field:
label: field.label
# ......
The problem occurs when you specify a new model in the overridden generator. I set the new generator's model attribute to Acme\TestDemoBundle\Document\Product and get the error "Association name expected, 'badge' is not an association."
The problem appeared with this commit: https://github.com/symfony2admingenerator/AdmingeneratorGeneratorBundle/commit/357c0378ce7b0bafa2551148aa24fc533c6998a3
The hasAssociation() method from the metadata checks the field in the fieldMappings array, but getAssociationTargetClass() reads from the associationMappings array.
When using a parent class with Doctrine you should use the @ODM\MappedSuperclass annotation. If you use the @ODM\Document annotation for the parent class, the child one can't proxy the reference, so you have to override these kinds of properties. The child document should look like below if you keep the parent's @ODM\Document annotation:
<?php
namespace Acme\TestDemoBundle\Document;
use Acme\DemoBundle\Document\Product as BaseProduct;
use Doctrine\ODM\MongoDB\Mapping\Annotations as ODM;
/**
* @ODM\Document
*/
class Product extends BaseProduct
{
/**
* @ODM\String
*/
private $field;
/**
* @ODM\ReferenceOne(targetDocument="Badge")
*/
private $badge;
}
Extract emails from a web page in python
I have found the following code that crawls a website (I think all the website) for emails
import re
import requests
import requests.exceptions
from urllib.parse import urlsplit
from collections import deque
from bs4 import BeautifulSoup
# starting url. replace with your own url.
starting_url = 'http://www.miet.ac.in'
# a queue of urls to be crawled
unprocessed_urls = deque([starting_url])
# set of already crawled urls for email
processed_urls = set()
# a set of fetched emails
emails = set()
# process urls one by one from unprocessed_url queue until queue is empty
while len(unprocessed_urls):
# move next url from the queue to the set of processed urls
url = unprocessed_urls.popleft()
processed_urls.add(url)
# extract base url to resolve relative links
parts = urlsplit(url)
base_url = "{0.scheme}://{0.netloc}".format(parts)
path = url[:url.rfind('/')+1] if '/' in parts.path else url
# get url's content
print("Crawling URL %s" % url)
try:
response = requests.get(url)
except (requests.exceptions.MissingSchema, requests.exceptions.ConnectionError):
# ignore pages with errors and continue with next url
continue
# extract all email addresses and add them into the resulting set
# You may edit the regular expression as per your requirement
new_emails = set(re.findall(r"[a-z0-9.\-+_]+@[a-z0-9.\-+_]+\.[a-z]+", response.text, re.I))
emails.update(new_emails)
print(emails)
# create a beautiful soup for the html document
soup = BeautifulSoup(response.text, 'lxml')
# Once this document is parsed and processed, now find and process all the anchors i.e. linked urls in this document
for anchor in soup.find_all("a"):
# extract link url from the anchor
link = anchor.attrs["href"] if "href" in anchor.attrs else ''
# resolve relative links (starting with /)
if link.startswith('/'):
link = base_url + link
elif not link.startswith('http'):
link = path + link
# add the new url to the queue if it was not in unprocessed list nor in processed list yet
if not link in unprocessed_urls and not link in processed_urls:
unprocessed_urls.append(link)
How can I modify this code to extract emails from only one web page? I just need to target a single page, not the whole website.
your question isn't clear or limited to specific issue, kindly check [ask] and edit your question with the exact issue
The question for getting emails from a web site (not all the websites but just one link)
remove the loop, seems simple. Though to be honest, it's much easier to not bother with a queue then, and simply pull out the bs4 bit and make a single request.
@QHarr I am so newbie at python stuff :)
@QHarr Can you help me .. the website will have a JavaScript and the code doesn't deal with it? Is there a solution?
Just delete all lines starting from for anchor in soup.find_all("a"):. Your script should then look like this:
import re
import requests
import requests.exceptions
from urllib.parse import urlsplit
from collections import deque
from bs4 import BeautifulSoup
# starting url. replace with your own url.
starting_url = 'http://www.miet.ac.in'
# a queue of urls to be crawled
unprocessed_urls = deque([starting_url])
# set of already crawled urls for email
processed_urls = set()
# a set of fetched emails
emails = set()
# process urls one by one from unprocessed_url queue until queue is empty
while len(unprocessed_urls):
# move next url from the queue to the set of processed urls
url = unprocessed_urls.popleft()
processed_urls.add(url)
# extract base url to resolve relative links
parts = urlsplit(url)
base_url = "{0.scheme}://{0.netloc}".format(parts)
path = url[:url.rfind('/')+1] if '/' in parts.path else url
# get url's content
print("Crawling URL %s" % url)
try:
response = requests.get(url)
except (requests.exceptions.MissingSchema, requests.exceptions.ConnectionError):
# ignore pages with errors and continue with next url
continue
# extract all email addresses and add them into the resulting set
# You may edit the regular expression as per your requirement
new_emails = set(re.findall(r"[a-z0-9.\-+_]+@[a-z0-9.\-+_]+\.[a-z]+", response.text, re.I))
emails.update(new_emails)
print(emails)
# create a beautiful soup for the html document
soup = BeautifulSoup(response.text, 'lxml')
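To make the single-page idea even more concrete, here is a stripped-down sketch with the network call replaced by a hard-coded HTML string so it runs offline; the commented-out requests.get line is where your real page fetch would go, and the email regex is the same generic pattern as above, which you may need to adjust:

```python
import re

# In the real script this would be: html = requests.get(url).text
html = """
<html><body>
  Contact <a href="mailto:info@example.com">info@example.com</a>
  or support@example.org for help.
</body></html>
"""

# Generic email pattern; edit it for your own requirements.
EMAIL_RE = re.compile(r"[a-z0-9.\-+_]+@[a-z0-9.\-+_]+\.[a-z]+", re.I)

emails = set(EMAIL_RE.findall(html))  # set() removes duplicate matches
print(sorted(emails))
```

For one page there is no need for the deque, the processed/unprocessed sets, or BeautifulSoup at all, since no links are followed.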
To generate random email-addresses with Python use this:
from faker import Faker
faker = Faker()
for i in range(12):
print(f'{faker.email()}')
Thank you very much. That's wonderful. But when trying with this site https://www.randomlists.com/email-addresses, it doesn't work.
This is because this website loads the email-addresses with javascript after the site has loaded. So when crawling the webpage, the addresses aren't in the page yet and the crawler can't find them. You can use my other answer to generate random email-addresses with Python.
Thank you very much. Is there any way to deal with JavaScript? The target is to get the emails from any page, whether or not it uses JavaScript.
Apache access to example.com/app from different root directory
For now, I have the following directories:
The WordPress directory is connected with the example.com domain. I would like to access my Zend app at the example.com/app URL, but it is in a separate directory. I tried to add a rewrite rule in the htaccess file in the WordPress directory, but the server responds with a 400 error. I tried this without success:
RewriteRule ^app/(.*)$ ../zend_app/$1
And now I am not sure what to do. I am thinking about some PHP file to include something from the zend_app folder, or creating a symlink inside the WordPress directory pointing to the zend_app folder. Which solution is better? Maybe there is another solution? I am using shared hosting, so possibilities are limited.
EDIT:
Here is my whole htaccess file from WordPress:
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteRule ^app/(.*)$ zend_app/$1 [L]
RewriteBase /
RewriteRule ^index\.php$ - [L]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /index.php [L]
</IfModule>
If you use example.com/zend_app/index.php does it work?
No, because zend_app is in a different folder than WordPress.
Well that alone shouldn't matter. Are you getting a 400 error not a 404 error?
With RewriteRule I am getting 400 error
If the zend_app directory is a child of public_html remove the ../ in your rewrite rule. And since you are adding that to an existing Wordpress .htaccess you'll want it to be at the top and add [L] to the rule so nothing else gets processed. RewriteRule ^app/(.*)$ zend_app/$1 [L]
Now I have 404 error from WordPress
Please edit your question and include the contents of your .htaccess file.
I edited my question with htaccess file.
nginx can't execute many requests
I have a question about nginx tune.
I have an application which I want to execute 200 times every second.
I created a bash file and used wget with the bqO switches to execute it.
But it has a problem.
When the number of requests is greater than 100, nginx does not respond to further requests; they hang loading until one request is done.
However, I have set pm.max_children and worker_connections to 200.
Do you have any suggestions to solve this, or is there any tuner like "MySQL Tuner" for nginx?
my configs:
php-fpm55.conf:
pm = ondemand
pm.max_children = 1024
pm.start_servers = 20
pm.min_spare_servers = 20
pm.max_spare_servers = 35
pm.max_requests = 256
pm.process_idle_timeout = 20
net.core.somaxconn=4096
sysctl.conf:
net.ipv4.tcp_window_scaling = 1
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
net.ipv4.ip_local_port_range = 1024 65000
nginx.conf:
worker_processes 8;
worker_rlimit_nofile 1024000;
events {
worker_connections 10240;
use epoll;
multi_accept on;
}
sendfile on;
keepalive_timeout 2;
types_hash_max_size 2048;
server_tokens off;
client_max_body_size 1024m;
client_body_buffer_size 128k;
server_names_hash_bucket_size 128;
server_names_hash_max_size 10240;
fastcgi_buffers 8 16k;
fastcgi_buffer_size 32k;
fastcgi_connect_timeout 300;
fastcgi_send_timeout 300;
fastcgi_read_timeout 300;
result: ab -n 100 -c 10 myindex.php
Server Software: nginx
Server Port: 80
Document Length: 3 bytes
Concurrency Level: 10
Time taken for tests: 21.128 seconds
Complete requests: 100
Failed requests: 32
(Connect: 0, Receive: 0, Length: 32, Exceptions: 0)
Total transferred: 17500 bytes
HTML transferred: 515 bytes
Requests per second: 4.73 [#/sec] (mean)
Time per request: 2112.791 [ms] (mean)
Time per request: 211.279 [ms] (mean, across all concurrent requests)
Transfer rate: 0.81 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 0 0.1 0 1
Processing: 19 1334 2747.0 144 15734
Waiting: 19 1334 2747.0 144 15733
Total: 19 1334 2746.9 144 15734
Percentage of the requests served within a certain time (ms)
50% 144
66% 549
75% 1281
80% 1700
90% 4095
95% 8790
98% 12579
99% 15734
100% 15734 (longest request)
Much better than your previous attempt at the same question - you need to at least show your config.
you can see my config
You should look into your PHP application and find out why the requests are taking such a long time. The performance is mostly affected by the application code itself.
Please check the result of: ab -n 100 -c 10 myindex.php
You seem to be obsessed with nginx tuning while not even knowing which part of your architecture is the slowest. It's very unlikely that nginx would be the first thing to tune.
Put your fastcgi targets in one upstream block and append $upstream_addr and $upstream_response_time to your log format. If not already present, append $request_time to your log format then compare both times. If they are close to each other, your app is the culprit. If not then it's either nginx or your network.
Only after you have this information will it be potentially relevant to tune nginx.
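As a rough illustration of that comparison, here is a Python sketch that reads request and upstream times from access-log lines. The log format used here, with $request_time and $upstream_response_time as the last two fields, is an assumption for illustration; adapt the parsing to your own log_format:

```python
# Sketch: compare $request_time against $upstream_response_time from an
# access log where the last two fields are those two times. When the two
# are close, the backend (PHP-FPM) is the bottleneck, not nginx.
SAMPLE_LINES = [
    '/index.php 200 0.512 0.508',
    '/index.php 200 2.120 2.101',
    '/static.css 200 0.004 0.000',
]

def split_times(line):
    parts = line.split()
    return float(parts[-2]), float(parts[-1])  # (request_time, upstream_time)

suspect_backend = 0
for line in SAMPLE_LINES:
    req_t, up_t = split_times(line)
    # Count requests where nginx's own overhead is under 100 ms,
    # i.e. almost all of the time was spent waiting on the upstream.
    if up_t > 0 and (req_t - up_t) < 0.1:
        suspect_backend += 1

print(suspect_backend)
```

With real logs you would read the file line by line instead of a hard-coded list; the point is simply that the two timing columns tell you where the latency lives before you touch any nginx knobs.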
Maybe you are interested in keepalive_requests? The default value is 100; you can change it to a higher value.
I am getting a "Trying to get property of non-object" error from my controller
I am getting a "Trying to get property of non-object" error whenever I try to process a deposit in my controller
Here is my controller
users::where('id',$user->id)
->update([
'confirmed_plan' => $deposit->plan,
'activated_at' => \Carbon\Carbon::now(),
'last_growth' => \Carbon\Carbon::now(),
]);
//get plan
$p=plans::where('id',$deposit->plan)->first();
//get settings
$settings=settings::where('id', '=', '1')->first();
$earnings=$settings->referral_commission*$p->price/100;
//increment the user's referee total clients activated by 1
agents::where('agent',$user->ref_by)->increment('total_activated', 1);
agents::where('agent',$user->ref_by)->increment('earnings', $earnings);
}
//update deposits
deposits::where('id',$id)
->update([
'status' => 'Processed',
]);
return redirect()->back()
->with('message', 'Action Sucessful!');
}
And The Error seems to be at this line of code
$earnings=$settings->referral_commission*$p->price/100;
And Here is my process deposit view blade
@include('header')
<!-- //header-ends -->
<!-- main content start-->
<div id="page-wrapper">
<div class="main-page signup-page">
<h3 class="title1">Manage clients deposits</h3>
@if(Session::has('message'))
<div class="row">
<div class="col-lg-12">
<div class="alert alert-info alert-dismissable">
<button type="button" class="close" data-dismiss="alert" aria-hidden="true">×</button>
<i class="fa fa-info-circle"></i> {{ Session::get('message') }}
</div>
</div>
</div>
@endif
<div class="bs-example widget-shadow table-responsive" data-example-id="hoverable-table">
<table class="table table-hover">
<thead>
<tr>
<th>ID</th>
<th>Client name</th>
<th>Client email</th>
<th>Amount</th>
<th>Payment mode</th>
<th>Plan</th>
<th>Status</th>
<th>Date created</th>
<th>Option</th>
</tr>
</thead>
<tbody>
@foreach($deposits as $deposit)
<tr>
<th scope="row">{{$deposit->id}}</th>
<td>{{$deposit->duser->name}}</td>
<td>{{$deposit->duser->email}}</td>
<td>${{$deposit->amount}}</td>
<td>{{$deposit->payment_mode}}</td>
@if(isset($deposit->dplan->name))
<td>{{$deposit->dplan->name}}</td>
@else
<td>For withdrawal</td>
@endif
<td>{{$deposit->status}}</td>
<td>{{$deposit->created_at}}</td>
<td> <a class="btn btn-default" href="{{ url('dashboard/pdeposit') }}/{{$deposit->id}}">Process</a></td>
</tr>
@endforeach
</tbody>
</table>
</div>
</div>
</div>
@include('modals')
@include('footer')
I keep getting the "Trying to get property of non-object" error without any indication of where to look. I need help please.
Are you sure you have a $settings object with id of 1?
$settings=settings::where('id', '=', '1')->first(); // <-- why 1, why not just first()?
If you are hard coding a single id for a single setting into the DB that will always be the same, and always id of 1... consider just adding that to code instead.
Also check on $deposit->plan -- is this correct, or maybe should this be $deposit->plan_id or similar? Perhaps the $p is null because there is no value for $deposit->plan.
You can set some error checking here by checking for null ahead of the calculation:
if(isset($settings) && isset($p)){
$earnings=$settings->referral_commission*$p->price/100;
}
else
{
$earnings = 0; // or whatever you want to replace it with should there be no setting
}
Add a parens at the end - I missed it: if(isset($settings) && isset($p)) {
Quick note, settings::first() won't necessarily return the record with id = 1. In Postgres for example, the results are somewhat random. If you're gonna replace that line, better to use settings::find(1).
Add a ; at the end of the earnings. I just wrote this in the text as an example - you should try to make it your own, including checking for syntax. So: $earnings = 0; I'll edit the answer to correct, but was really meant as an example to help you see a potential path to take.
Glad to help. But also check for root cause of the error like is the id 1 actually in the database, or does $deposit->plan have an id, or does it need a different variable etc. :)
Why is there a "±" in lea rax, [ rip ± 0xeb3]?
I just started learning about assembly language in Kali Linux in VMware. I have a Ryzen 5 CPU. In the below code snippet, I have a few things I don't understand.
What is the meaning of lea rax, [rip ± 0xeb3] at <main + 17>? I understand what lea does, but what is the meaning of ±?
And what is the purpose of RDI after getting updated?
(gdb) list
1 #include<stdio.h>
2
3 int main(){
4 int i;
5 for(i = 0 ; i < 10 ; i++){
6 printf("Hello World!\n");
7 }
8 return 0;
9 }
(gdb) disassemble main
Dump of assembler code for function main:
0x0000000000001139 <+0>: push rbp
0x000000000000113a <+1>: mov rbp,rsp
0x000000000000113d <+4>: sub rsp,0x10
0x0000000000001141 <+8>: mov DWORD PTR [rbp-0x4],0x0
0x0000000000001148 <+15>: jmp 0x115d <main+36>
0x000000000000114a <+17>: lea rax,[rip±0xeb3] # 0x2004
0x0000000000001151 <+24>: mov rdi,rax
0x0000000000001154 <+27>: call 0x1030 <puts@plt>
0x0000000000001159 <+32>: add DWORD PTR [rbp-0x4],0x1
0x000000000000115d <+36>: cmp DWORD PTR [rbp-0x4],0x9
0x0000000000001161 <+40>: jle 0x114a <main+17>
0x0000000000001163 <+42>: mov eax,0x0
0x0000000000001168 <+47>: leave
0x0000000000001169 <+48>: ret
End of assembler dump.
(gdb)
Edit:
gdb -v
GNU gdb (Debian 12.1-3) 12.1
Copyright (C) 2022 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
Could be useful to show the output of gdb -v
I would suspect that it is actually supposed to be "-" but that it is being printed as "+ -" and then getting transformed to ± for some reason.
@SoronelHaetir 0x1151 + 0xeb3 = 0x2004 though
What shell and terminal are you using? Give us more details about your environment. AFAICT that symbol simply is not to be found anywhere in GDB/libopcodes code.
@MarcoBonelli I was using a normal terminal in Kali Linux, not as the root user, but the user was privileged. Can you please mention what details do you need about the environment?
I can reproduce this. It's not a plus-minus, it's an underlined plus. Possibly due to a wrong color escape sequence.
Ok, found the bug. GDB 12.1 uses Python (!!!) to colorize its output. Specifically, it uses the Pygments package, which handles x64 code badly; here's a test case. The (yet to be released) next version uses an entirely different coloring code, where each disassembler function can introduce style markers in its output and the disassemble command (gdb/disassemble.c) translates those markers into terminal escapes.
OK so it means + right? Thanks for checking out.
@KaranTejas you can do set style disassembler enabled off to disable the disassembler styling, this should fix the formatting issues.
@MargaretBloom: You could post that as an answer. BTW, on my Arch Linux system, in Konsole, your pastebin test-case prints an underlined + which looks confusing at first, but once you know to look for it, is clearly a + with an underline of the whole cell. And it copy/pastes as +, not ±
@PeterCordes That's how I found it was a plus and not a plus-minus: by copy-pasting it to remove any formatting. Then spent an hour trying to figure out what is wrong with the coloring code only to realize (later) that I was looking at GDB 13 :) I'll post an answer so this can be marked as answered.
@MargaretBloom: I assume the OP had copy/pasted from their terminal into the question, where the code blocks have ±. Pretty misleading [mcve] if actual GDB isn't outputting that character on their terminal, without saying anything about manually editing to make it look like what they see. Maybe some terminal emulator copy/pastes an underlined + as ±, or some other innocent explanation. Hope that didn't cost you too much extra time when tracking this down.
It's not a plus-minus (±, Unicode point 0x00b1), it's an underlined plus.
If you copy-paste it, you get only a plus (+).
GDB 12.1 uses Python to colorize each line of its disassembler output. Specifically, it uses the Pygments package, which, at the current version 2.11.2, handles x64 code badly; here's a test case:
from pygments import formatters, lexers, highlight
def colorize_disasm(content, gdbarch):
# Don't want any errors.
try:
lexer = lexers.get_lexer_by_name("asm")
formatter = formatters.TerminalFormatter()
return highlight(content, lexer, formatter).rstrip().encode()
except:
return None
print(colorize_disasm("lea [rip+0x211] #test", None).decode())
The (yet to be released) next version uses an entirely different coloring code, where each disassembler function can introduce style markers in its output and the disassemble command (see gdb/disassemble.c) translates those markers into terminal escapes.
Making ridges method on a cube mesh
I have a wooden box in my example which looks like it's made out of different wooden pieces. I'm trying to find a good method to make it look like it's made from separate pieces. So in my example I made loop cuts and extruded the faces along the normals. Would this be the right approach, or would it be incorrect to just make up a box from different meshes to get the ridge look where the red arrows are pointing?
Thank you.
Yes, why not make separate meshes? It will simplify your work.
Hey Moonboots, I wanted to make sure I'm not breaking any 3D modelling rules and that I'm taking a correct approach. Thank you for your advice.
VS2019: WF: Add Second Form don't work, it tries to add a class
I created a Windows Forms application and it has a Form1.cs file which also has a designer view.
But when I try Add -> "New Form", VS2019 only adds an empty class, not a new form!
Am I missing something?
Does this answer your question? How to add new win form in project C#?
If the duplicate does not help, you can try to repair VS setup. Else to uninstall VS and reinstall it. If it does not work, you can try a full cleanup: https://stackoverflow.com/questions/62703847/why-does-my-visual-studio-closes-automatically-without-any-errors/62713351#62713351. Else you can try to reinstall Windows from nothing, unless you have a O&O DiskImage or a Macrium Reflect or anything else to restore from your daily incremental backups.
Thanks so much, the problem was solved by installing ".NET desktop development".
But I think it is a bug, because I could create a Windows Forms application but could not add a form!
creating 3x3 sobel operator in opencv2 C++
I'm trying to create my own Sobel edge detection based on the gx and gy matrices below, applied to the three channels in my code further down.
[[0,1,2],
[-1,0,1],
[-2,-1,0]]
and
[[-2,-1,0],
[-1,0,1],
[0,1,2]]
I edited the variables j and i in my code further down but it is not working. How can I create a Sobel edge detection on those three channels?
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <cstdlib> // rand, srand
void salt(cv::Mat &image, int n) {
int i,j;
for (int k=0; k<n; k++) {
// rand() is the MFC random number generator
i= rand()%image.cols;
j= rand()%image.rows;
if (image.channels() == 1) { // gray-level image
image.at<uchar>(j,i)= 255;
} else if (image.channels() == 3) { // color image
image.at<cv::Vec3b>(j,i)[0]= 255;
image.at<cv::Vec3b>(j-1,i-1)[1]= 255; // note: j-1/i-1 go out of bounds when j or i is 0
image.at<cv::Vec3b>(j,i-1)[2]= 255;
}
}
}
int main()
{
srand(static_cast<unsigned>(cv::getTickCount())); // init random number generator
cv::Mat image= cv::imread("space.jpg",0);
salt(image,3000);
cv::namedWindow("Image");
cv::imshow("Image",image);
cv::imwrite("salted.bmp",image);
cv::waitKey(5000);
return 0;
}
I'm a little confused by the question, because the question relates to Sobel filters, but you provided a function that adds noise to an image.
To start with, there is the Sobel function, which computes the classic Sobel dx and dy gradients.
Secondly, there is the more generic filter2D which will let you apply an arbitrary kernel (like the one you created in the question).
Lastly, if you want to apply a different kernel in each channel or band, you can do as the filter2D documentation implies, and call split on an image, and then call filter2D on each channel, and then combine the values into a single band image using the matrix operators.
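To make the split / filter-per-band / merge idea concrete without assuming an OpenCV install, here is a small pure-Python sketch of the same structure (an illustration only; in real OpenCV code this would be cv::split, cv::filter2D per channel, then cv::merge):

```python
def filter_channel(channel, kernel):
    # "valid" 3x3 filtering of one 2D channel (no border handling, no normalization)
    h, w = len(channel), len(channel[0])
    out = [[0] * (w - 2) for _ in range(h - 2)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            acc = 0
            for ky in range(3):
                for kx in range(3):
                    acc += kernel[ky][kx] * channel[y + ky - 1][x + kx - 1]
            out[y - 1][x - 1] = acc
    return out

def filter_per_channel(image, kernels):
    # split the 3-band image into channels, filter each with its own kernel, merge back
    channels = [[[px[c] for px in row] for row in image] for c in range(3)]
    filtered = [filter_channel(ch, k) for ch, k in zip(channels, kernels)]
    h, w = len(filtered[0]), len(filtered[0][0])
    return [[[filtered[c][y][x] for c in range(3)] for x in range(w)] for y in range(h)]
```

Each band gets its own 3x3 kernel, which is exactly the structure needed for the per-band salt-finding kernels described below.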
The most complicated thing I think you could be asking is how to find the locations of that salt you added to the image, and the answer would be to make a kernel for each band like so:
band 0:
[[ 0, 0, 0],
[ 0, 1, 0],
[ 0, 0, 0]]
band 1:
[[ 1, 0, 0],
[ 0, 0, 0],
[ 0, 0, 0]]
band 2:
[[ 0, 1, 0],
[ 0, 0, 0],
[ 0, 0, 0]]
Be sure to put the anchor in the center of the kernel (1,1).
How do I split a column (of Strings) in MariaDB?
I'm new to MariaDB. I have imported a JSON file in my MySQL Workbench, but the authors are all on one line instead of each author being on its own line.
Is there a way to split them up, so each author gets a new line like this:
Lucinda Riley\n
Margit Steinborn\n
product_author:
["Lucinda Riley", "Margit Steinborn", "Don Winslow", "Laila Maria Witt", "Klaus-Peter Wolf", "Evelyn Weigert", "Sarah Sprinz", "Vincent Kliesch", "Eva Almst\u00e4dt", "Juliane Maibach", "Lisa Oliver", "Elias Haller"]
Please let us know: what does SELECT VERSION(); return? There's a good solution in MySQL 8.0, but this task is quite hard in older versions. Your question is virtually the same as one I answered before: https://stackoverflow.com/a/60250840/20860
Hi, thanks for the answer. It returns 10.4.24-MariaDB.
MariaDB does not support the JSON_TABLE() function until version MariaDB 10.6 (see https://mariadb.com/kb/en/json_table/). I suggest you just fetch the JSON string as is into your application, and split it up using code.
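Following the suggestion to fetch the JSON string as-is and split it in application code, here is a minimal sketch in Python (the sample value is shortened from the question; the actual database fetch is omitted and the variable name is just a placeholder):

```python
import json

# Value of the product_author column as fetched from MariaDB: a JSON array string
product_author = '["Lucinda Riley", "Margit Steinborn", "Don Winslow"]'

authors = json.loads(product_author)  # one Python string per author
print("\n".join(authors))
```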
FYI MariaDB is not MySQL. MariaDB started as a fork of MySQL in 2010, but both products have been changing since then. In particular, their support for JSON has been implemented after the fork, so they are completely independent. You should not think of MariaDB as compatible with MySQL.
XMLHttpRequest Status 0 for Firefox 49.0.2 Add On
There is an XMLHttpRequest in the content script of my Firefox WebExtensions add-on. Question: why is the status of this request always 0?
This is the JavaScript code making the request:
var query = "http://api.wolframalpha.com/v2/query?appid=[MY-APP-ID]&includepodid=Comparison&scanner=Unit&format=plaintext&input=1%20lm";
var xhttp = new XMLHttpRequest();
xhttp.onreadystatechange = function()
{
console.log("onreadystatechange");
console.log(this);
if (this.readyState == 4 && this.status == 200)
{
onSuccess(this.responseText);
}
};
xhttp.open("GET", query, true);
xhttp.send();
If I print out the results of the request for each onreadystatechange call, I get:
XMLHttpRequest { onreadystatechange: makeWolframRequest/xhttp.onreadystatechange(),
readyState: 1, timeout: 0, withCredentials: false, upload: XMLHttpRequestUpload,
responseURL: "", status: 0, statusText: "", responseType: "", response: "" }
XMLHttpRequest { onreadystatechange: makeWolframRequest/xhttp.onreadystatechange(),
readyState: 2, timeout: 0, withCredentials: false, upload: XMLHttpRequestUpload,
responseURL: "", status: 0, statusText: "", responseType: "", response: "" }
XMLHttpRequest { onreadystatechange: makeWolframRequest/xhttp.onreadystatechange(),
readyState: 4, timeout: 0, withCredentials: false, upload: XMLHttpRequestUpload,
responseURL: "", status: 0, statusText: "", responseType: "", response: "" }
Things I checked:
Content scripts should be able to make cross-domain requests according to the WebExtensions documentation.
Making a request to "https://api.wolframalpha.com/" instead of "http://api.wolframalpha.com/".
The readystatechange event fires whenever the readyState changes. The first three fire before the server responds, and you can't have a status code before you have a response; that's why it's 0 the first three times, and that's also why we check that the readyState is 4, as that indicates that a response was received. The issue likely isn't the status code for the first three readystatechange calls, but something else.
From my experiments, the readyState values in the logs are 1, 2, and then 4. I would agree a status of 0 makes sense for states 1 and 2 (the first two logs), but the readyState of 4 combined with status 0 is why I'm concerned.
If the status is still 0 when the readyState is 4, that usually indicates some other problem. I don't have an app ID and can't test this, but I'd guess it's a CORS error. Did you set the correct permissions for your script? You can't do cross-origin calls without asking for the right permissions.
@adeneo If you would like, you can post an answer for me to choose since it was indeed a CORS issue.
In this case it was a CORS issue. I had to add this secret sauce to my manifest.json file:
"permissions": [
"http://api.wolframalpha.com/*"
]
More information here: https://developer.mozilla.org/en-US/Add-ons/WebExtensions/manifest.json/permissions
Much thanks to @adeneo for insisting I keep looking at CORS issues.
This issue would almost certainly have produced some output in the Browser Console (Ctrl-Shift-J, or Cmd-Shift-J on OS X) when you executed this code, which would have given you a good idea as to what the problem was. There are also other consoles which you could have looked in to see information about this issue.
Can AngularJS directive pre-link and post-link functions be customized?
I have seen many references to AngularJS pre- and post-link functions in literature about AngularJS.
I am not sure however whether these can be customized or are internal to the framework.
In other words, as an AngularJS developer, can I provide my own pre- and post-link functions to my custom directives?
Yes you can, as per @Mikke's answer. To sum up, there are four ways to declare linking functions:
From within compile specifying both preLink and postLink functions explicitly:
compile: function compile(tElement, tAttrs, transclude) {
return {
pre: function preLink(scope, iElement, iAttrs, controller) { ... },
post: function postLink(scope, iElement, iAttrs, controller) { ... }
}
}
From within compile returning only postLink implicitly:
compile: function compile(tElement, tAttrs, transclude) {
return function postLink( ... ) { ... }
}
From within link specifying both preLink and postLink explicitly:
link: {
pre: function preLink(scope, iElement, iAttrs, controller) { ... },
post: function postLink(scope, iElement, iAttrs, controller) { ... }
}
From within link using postLink implicitly:
link: function postLink( ... ) { ... }
Yes, you can provide your own pre and post link functions. See the directive blueprint at Angular Docs' Comprehensive Directive API.
{
compile: function compile(tElement, tAttrs, transclude) {
return {
pre: function preLink(scope, iElement, iAttrs, controller) { ... },
post: function postLink(scope, iElement, iAttrs, controller) { ... }
}
// or
// return function postLink( ... ) { ... }
},
}
Why does shadowing change the mutability of a variable in this code?
In the following code,
fn main()
{
let mename : String = String::from("StealthyPanda");
println!("{mename}");
let mename = displayswithhere(mename);
println!("{mename}");
let mename = addshere(mename);
println!("{mename}");
}
fn displayswithhere(astring: String) -> String
{
println!("{astring} here!");
return astring;
}
fn addshere(mut astring : String) -> String
{
astring.push_str(" here!");
astring
}
Why isn't there an error after mename is shadowed and not declared as mutable when being assigned the value of displayswithhere(mename)? The code runs exactly as if the variable mename was mutable all along. I don't understand where the bug in the code, if any, is located.
Are you complaining about its change in addshere() or the re-assignment?
@ChayimFriedman What I'm asking is why is the addshere(mename) function call not causing an error, even though mename is not mutable? Isn't mename always immutable in the previous 2 declarations?
@StealthyPanda you don't mutate the mename variable, just have three distinct variables which have the same name (because the let keyword introduces new variables, see the answer below).
You're saying "the variable mename", but there are three in your main function. You can convince yourself of that by running the following code:
#![allow(unused_variables)]
struct Foo(i32);
impl Foo {
fn new(v: i32) -> Foo {
println!("{v}");
Foo(v)
}
}
impl Drop for Foo {
fn drop(&mut self) {
println!("{}", self.0);
}
}
fn main() {
let foo = Foo::new(0);
let foo = Foo::new(1);
let foo = Foo::new(2);
println!("At this point, all three Foos are alive, each in its own variable");
}
Now you may ask: if I can just shadow a previous variable, what's the difference compared to just having it mutable? The difference should become apparent when you run this code:
fn main() {
let i = 0;
for _ in 0..2 {
println!("{i}");
let i = i + 1;
println!("{i}");
}
}
When you shadow a variable, you create another one, distinct from the previous, but with the same name (that is just a coincidence).
The drawback is that you cannot simply refer to the former by its name any more, because this name now refers to the latter.
In the example below, the functions fn_1() and fn_2() are very similar, except that in fn_1() we can still refer directly to the original variable, but in fn_2() we have to find another way: we introduce a reference with a different name.
This is not related to mutability since the original variable keeps its original value all the way long.
On the other hand, fn_3() relies on mutability, but we do not use let with the same name a second time, so the second assignment is not an initialisation of a new variable but a real assignment, which will change the value of the original variable.
fn fn_1() {
println!("~~~~~~~~");
let my_var = 1;
println!("my_var: {}", my_var);
let my_other_var = 2; // creating another variable with a different name
println!("my_other_var: {}", my_other_var);
println!("my_var: {}", my_var);
}
fn fn_2() {
println!("~~~~~~~~");
let my_var = 1;
println!("my_var: {}", my_var);
let ref_to_my_var = &my_var;
let my_var = 2; // creating another variable with the same name (coincidence)
println!("my_var: {}", my_var);
println!("ref_to_my_var: {}", ref_to_my_var);
}
fn fn_3() {
println!("~~~~~~~~");
let mut my_var = 1;
println!("my_var: {}", my_var);
my_var = 2; // changing the original variable, which must be mutable
println!("my_var: {}", my_var);
}
fn main() {
fn_1();
fn_2();
fn_3();
}
/*
~~~~~~~~
my_var: 1
my_other_var: 2
my_var: 1
~~~~~~~~
my_var: 1
my_var: 2
ref_to_my_var: 1
~~~~~~~~
my_var: 1
my_var: 2
*/
NumberFormatException when trying to convert JTextField to int
In this part of my code, I'm creating an object of the class Dizionario and writing it to a file, first calling the constructor, which accepts 3 parameters (Path, String, int).
I am getting these 3 parameters from 3 JTextFields, and particularly the last one (jTextField3) is causing this error when converting to int.
This is the error:
Exception in thread "AWT-EventQueue-0" java.lang.NumberFormatException: For input string: "javax.swing.JTextField[,62,11,302x28,layout=javax.swing.plaf.basic.BasicTextUI$UpdateHandler,alignmentX=0.0,alignmentY=0.0,border=javax.swing.plaf.synth.SynthBorder@9577f8b,flags=288,maximumSize=,minimumSize=,preferredSize=,caretColor=,disabledTextColor=DerivedColor(color=142,143,145 parent=nimbusDisabledText offsets=0.0,0.0,0.0,0 pColor=142,143,145,editable=true,margin=javax.swing.plaf.InsetsUIResource[top=0,left=0,bottom=0,right=0],selectedTextColor=DerivedColor(color=255,255,255 parent=nimbusSelectedText offsets=0.0,0.0,0.0,0 pColor=255,255,255,selectionColor=DerivedColor(color=57,105,138 parent=nimbusSelectionBackground offsets=0.0,0.0,0.0,0 pColor=57,105,138,columns=0,columnWidth=0,command=,horizontalAlignment=LEADING]"
at java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
at java.lang.Integer.parseInt(Integer.java:492)
at java.lang.Integer.<init>(Integer.java:677)
I tried these pieces of code to convert the string to an integer:
int i = new Integer(jTextField3.toString());
and then putting i as parameter (or directly calling new Integer(...) as parameter)
(int)JTextField3.toString();
Integer.ParseInt(JTextField3.toString());
and here is my method
private void CreateMouseClicked(java.awt.event.MouseEvent evt) {
Dizionario dic = new Dizionario(
(Paths.get(jTextField2.toString())),
jTextField1.toString(),
Integer.parseInt(jTextField3.toString()));
dic.writeToFile();
}
"NumberFormatException when trying to convert JTextField to int" betterApproach = JSpinner + SpinnerNumberModel.
um, it's not jTextField3.toString(), it's jTextField3.getText(). That's a big difference, and to see just what toString() returns, look at your error message. You're trying to parse this:
"javax.swing.JTextField[,62,11,302x28,layout=javax.swing.plaf.basic.BasicTextUI$UpdateHandler,alignmentX=0.0,alignmentY=0.0,border=javax.swing.plaf.synth.SynthBorder@9577f8b,flags=288,maximumSize=,minimumSize=,preferredSize=,caretColor=,disabledTextColor=DerivedColor(color=142,143,145 parent=nimbusDisabledText offsets=0.0,0.0,0.0,0 pColor=142,143,145,editable=true,margin=javax.swing.plaf.InsetsUIResource[top=0,left=0,bottom=0,right=0],selectedTextColor=DerivedColor(color=255,255,255 parent=nimbusSelectedText offsets=0.0,0.0,0.0,0 pColor=255,255,255,selectionColor=DerivedColor(color=57,105,138 parent=nimbusSelectionBackground offsets=0.0,0.0,0.0,0 pColor=57,105,138,columns=0,columnWidth=0,command=,horizontalAlignment=LEADING]"
Into a number.
but why don't I get errors in the first and the second parameters?
@maxpesa Why should you? getText and toString both return String; as far as the compiler is concerned they are correct options for the method... the compiler can't guess what the result of the call might be...
@maxpesa: because they are Strings. You should test the Strings you get in a println statement to see for yourself.
Don't use JTextField#toString, use JTextField#getText to return the text content of the text field, for example...
int i = new Integer(jTextField3.getText());
toString is generally used to provide useful diagnostic information about an Object
Have the same radio group located in two different locations of the same form
Is there some way to place the same radio group inputs in two different locations so that if the user changes one of them, the other one changes automatically?
By two locations, I mean two different places on the same submit form.
I edited the post to better show what I mean. There are two groups. If user click on 1 on first group, I'd like to have also 1 checked on second group.
My code:
<p>First group</p>
<label class="n_people radio-inline">1
<input type="radio" name="number_people" value="1">
<span class="checkmark"></span>
</label>
<label class="n_people radio-inline">2
<input type="radio" name="number_people" value="2">
<span class="checkmark"></span>
</label>
<label class="n_people radio-inline">3
<input type="radio" name="number_people" value="3">
<span class="checkmark"></span>
</label>
<label class="n_people radio-inline">4
<input type="radio" name="number_people" value="4" checked="checked">
<span class="checkmark"></span>
</label>
<br>
<p>Second group</p>
<label class="n_people radio-inline">1
<input type="radio" name="number_people_2" value="1">
<span class="checkmark"></span>
</label>
<label class="n_people radio-inline">2
<input type="radio" name="number_people_2" value="2">
<span class="checkmark"></span>
</label>
<label class="n_people radio-inline">3
<input type="radio" name="number_people_2" value="3">
<span class="checkmark"></span>
</label>
<label class="n_people radio-inline">4
<input type="radio" name="number_people_2" value="4" checked="checked">
<span class="checkmark"></span>
</label>
What do you mean by "two different locations"?
Thanks for the comment. By two locations, I mean two different places on a submit form.
Yes, it's possible but they should have the same name.
I edited the original post, to put the same name. You can run code snippet to see that doesn't work.
It works fine; only one radio button can be selected among all eight radios.
It doesn't work fine. I edited the post to better show what I mean. There are two groups. If the user clicks on 1 in the first group, I'd like to have 1 also checked in the second group.
I suspect you want the two sets of 4 to update depending on whether the user has selected the equivalent button in the other set. Is that right? I think you’ll need some JS.
@A Haworth you're right
@James, I may have misunderstood you from the start. I think I got your point now; I added an answer, please check.
Just wanted to add that if you used a framework like React or Vue it would be much easier, since you'd only need a v-model in Vue to link them with each other.
You can divide the radios into two groups with wrapper divs, and with event delegation you can capture the change event on each group and check the corresponding radio in the other group.
const groupOne = document.getElementById('group-one')
const groupTwo = document.getElementById('group-two')
const changeGroup = (groupA, groupB) => {
groupA.addEventListener('change', (e) => {
groupB.querySelector(`[value="${e.target.value}"]`).checked = true
})
}
changeGroup(groupOne, groupTwo)
changeGroup(groupTwo, groupOne)
<div id="group-one">
<p>First group</p>
<label class="n_people radio-inline"
>1
<input type="radio" name="number_people" value="1" />
<span class="checkmark"></span>
</label>
<label class="n_people radio-inline"
>2
<input type="radio" name="number_people" value="2" />
<span class="checkmark"></span>
</label>
<label class="n_people radio-inline"
>3
<input type="radio" name="number_people" value="3" />
<span class="checkmark"></span>
</label>
<label class="n_people radio-inline"
>4
<input
type="radio"
name="number_people"
value="4"
checked="checked"
/>
<span class="checkmark"></span>
</label>
</div>
<br />
<div id="group-two">
<p>Second group</p>
<label class="n_people radio-inline"
>1
<input type="radio" name="number_people2" value="1" />
<span class="checkmark"></span>
</label>
<label class="n_people radio-inline"
>2
<input type="radio" name="number_people2" value="2" />
<span class="checkmark"></span>
</label>
<label class="n_people radio-inline"
>3
<input type="radio" name="number_people2" value="3" />
<span class="checkmark"></span>
</label>
<label class="n_people radio-inline"
>4
<input
type="radio"
name="number_people2"
value="4"
checked="checked"
/>
<span class="checkmark"></span>
</label>
</div>
Get a list of persistent objects stored in database
I should insert a list of objects in a database using a workflow for each object. The inserting process could take a long time and wait for some external input, so I have to save persistent state in the DB, and it works correctly.
Now I need to show pending objects in my user interface. How can I retrieve the variables of the stored data?
Thank you,
Marco
See How to: Deserialize Instance Data Properties
Hello, I think this is how I can get primitive types, but I read "This example does not demonstrate how to deserialize complex data properties because this is currently not a supported operation". Am I wrong?
The data is there; you just have to work out how to get it. It is just serialized into dictionaries.
Program to find the result of primitive recursive functions
I'm writing a program to compute the result of primitive recursive functions:
--Basic functions------------------------------

--Zero function
z :: Int -> Int
z = \_ -> 0

--Successor function
s :: Int -> Int
s = \x -> (x + 1)

--Identity/Projection function generator
idnm :: Int -> Int -> ([Int] -> Int)
idnm n m = \(x:xs) -> ((x:xs) !! (m-1))

--Constructors--------------------------------

--Composition constructor
cn :: ([Int] -> Int) -> [([Int] -> Int)] -> ([Int] -> Int)
cn f [] = \(x:xs) -> f
cn f (g:gs) = \(x:xs) -> (cn (f (g (x:xs))) gs)
These functions and constructors are defined here: http://en.wikipedia.org/wiki/Primitive_recursive_function
The issue is with my attempt to create the composition constructor, cn. When it gets to the base case, f is no longer a partial application, but the result of the function. Yet the function expects a function as the first argument. How can I deal with this problem?
Thanks.
There's also a function composition operator http://tinyurl.com/ykts2pz and an article on how to write pointfree code http://www.haskell.org/haskellwiki/Pointfree
Just a note: in idnm, you needlessly pattern-match against the : list constructor. You can just write idnm n m = \xs -> xs !! (m-1), with the !! operator forcing the list type; this simplifies to idnm _ m = (!! (m-1)). If you really want to pattern-match against : (perhaps to forbid []), you could write idnm _ m xs@(_:_) = xs !! (m-1).
Well, all 3 functions are overcomplicated: z = const 0; s = succ.
Given f,
f :: [a] -> b
and g_k,
g_k :: [a] -> a
we want to produce h,
h :: [a] -> b
so the composition should be like
compo :: ([a] -> b) -> [[a] -> a] -> [a] -> b
compo f gs xs = f (map ($ xs) gs)
Example: http://codepad.org/aGIKi8dF
Edit: It can also be written in applicative style (eliminating that $) as
compo f gs xs = f (gs <*> pure xs)
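For comparison outside Haskell, the same composition scheme can be sketched in Python (an illustration of the idea only, with names mirroring the basic functions from the question):

```python
def z(xs):
    # zero function: ignores its arguments
    return 0

def s(xs):
    # successor, taking its single argument in a list (to match the [Int] -> Int shape)
    return xs[0] + 1

def proj(m):
    # projection generator: select the m-th argument, 1-based
    return lambda xs: xs[m - 1]

def compose(f, gs):
    # h(xs) = f([g_1(xs), ..., g_k(xs)]) -- the composition constructor
    return lambda xs: f([g(xs) for g in gs])

h = compose(s, [proj(2)])  # h(x, y) = y + 1
print(h([7, 41]))  # 42
```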
Find duplicates in a list and return list with position of duplicates
I know that there are tons of posts that deal with finding duplicates in a list.
Here is one
Find continuous duplicates in a List
What I would like is something more: returning a list in which are the positions of the string duplicates.
If the list is like [0]=AAA [1]=BBB [3]=CCC [4]=AAA [5]=CCC
I would like an extension that acts like;
var dupList = lst.FindDuplicatesReturnPositions();
dupList = 0,4,5 or even better 1,5,6.
Thanks in advance
Patrick
EDIT: I understand that I have not made clear why I'd want 1,5,6 instead of 0,4,5.
As code I prefer 0,4,5, but I have to apply that to a pallet matrix, and for customers who are NON-coders cells start from 1 and not from 0. So it's no big deal to increment the numbers, but if positions start from 1 it is better.
It's unclear to me how you would get these indexes: 0,4,5 and 1,5,6. Also, we are all coders here; we like to see sample data written and implemented as actual code.
0,4,5 would be AAA,AAA,CCC. 1,5,6 would be BBB,CCC,??? (because index 6 is not included in your sample data). I fail to see how "AAA" and "CCC" or "BBB" and "CCC" can be considered string duplicates.
Why don't you just modify the method you are linking? Instead of making it return the value, return the position. Obviusly, to achieve this avoid the methods that order the list previously.
Please see my edit. Hope it is cleared now
Sorry but I did not understand why in the duplicate list there should be both 0 and 4 (for AAA) but not both 3 and 5 for (CCC)
Anyway... Fun's over, it's time to mark this question as needing details or clarity. It's unclear what you are asking and how you get your results. If you could take time to fully explain the problem and double-check your sample data and output indexes, this will likely be very answerable.
Please see my comments below regarding the proposed solution
There you have:
public static List<int> FindDuplicatesReturnPositions(this List<string> list)
{
List<int> res = new List<int>();
HashSet<string> hashSet = new HashSet<string>();
int index = 0; //int index = 1 if you want the other result
foreach (var item in list)
{
if (hashSet.Contains(item))
{
res.Add(index);
}
else
{
hashSet.Add(item);
}
index++;
}
return res;
}
The way you should use is:
var dupList = FindDuplicatesReturnPositions(lst);
Here is my problem: if you run this on the OP's test data, you won't get the indexes specified.
I have therefore to assume that indexes here start from 0, so if I have var lst = new System.Collections.Generic.List<string>() { "AAA","BBB","CCC","DDD","EEE","AAA","XXX","CCC","AAA" };
var dupList = FindDuplicatesReturnPositions(lst); the result will be 5,7,8, that is "AAA", "CCC", "AAA". So that works, thanks.
The result of the OP data is obviously wrong.
Indexes 0 and 4 are returned because they are duplicates. But just 5 is returned and not 3?
Returning the 0 is a mistake, that's why my solution doesn't match the OPs test data.
[0]=AAA [1]=BBB [2]=CCC [3]=DDD [4]=EEE [5]=AAA [6]=XXX [7]=CCC [8]=AAA so if you scan the list the first duplicate is [5]=AAA then [7]=CCC [8]=AAA ---> Thus 5,7,8
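To double-check those expected indexes, the same hash-set scan can be expressed in Python (just an illustration of the logic, not the C# extension method itself):

```python
def find_duplicate_positions(items):
    # Record the 0-based position of every element already seen before
    seen, positions = set(), []
    for index, item in enumerate(items):
        if item in seen:
            positions.append(index)
        else:
            seen.add(item)
    return positions

sample = ["AAA", "BBB", "CCC", "DDD", "EEE", "AAA", "XXX", "CCC", "AAA"]
print(find_duplicate_positions(sample))  # [5, 7, 8]
```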
I meant the result the result provided in the question.
Hi, if you want to list your duplicates and sort them you can use this code, but my code is just an example. You can use this method; it works in a controller API.
namespace Web.UI.Controllers
{
public class ListExamplesController : Controller
{
public ActionResult Index()
{
ListExamples listExamples = new ListExamples();
var duplicatePets = listExamples.ReturnListOfPets()
.GroupBy(x => x.StringValue)
.Where(y => y.Count() > 1)
.Select(z => z.Key).ToList();
var model = new ExampleOfListViewModel
{
ListOfPets = listExamples.ReturnListOfPets(),
ListOfInts = listExamples.ReturnListOfInts(),
ListOfStrings = listExamples.ReturnListOfStrings(),
ListOfIntsJoined = listExamples.ReturnListOfIntsJoin(),
ListOfPetsDuplicates = duplicatePets,
ListOfIntsSorted = listExamples.SortReturnListOfInts()
};
return View("~/Views/Home/ListExamples.cshtml",model);
}
}
}
Woah... so much to unpack here. Though what I do know is this answer completely misses the mark
How to use the uppercase method while disallowing Turkish characters?
I have the following code:
<input type="text" style="text-transform: uppercase" />
The string gets transformed to uppercase, but ü for example, will transform to Ü.
I want ü to transform to U, e.g. Gülay should become GULAY.
Does anyone have any idea how can I do this?
Did I get that right: you want to transform umlaut U (Ü) to a plain U?
DEAR GULAY - CAN YOU REPHRASE THAT?
OK. For example: My name is Gülay. I can transform this text to MY NAME İS GÜLAY. But I don't want the İ and Ü letters. I want MY NAME IS GULAY.
could you please use markdown so that we could see your text ??
@GülayUygun - it seems that what you want is not uppercasing the ü to Ü but replacing the character with another one. I don't know exactly why you would want that (why should you want the "umlaut" when having lowercase text but not want it when you have uppercase text?), text-transform: uppercase, as the name says, transforms to uppercase. If you want to replace certain letters by others, you will need to curate your data before it goes to the view.
http://stackoverflow.com/questions/1850232/turkish-case-conversion-in-javascript seems related
String.prototype.turkishToLower = function () {
var string = this;
var letters = {
İ: "i", I: "i", ı: "i", Ş: "s", ş: "s", Ğ: "g", ğ: "g", Ü: "u", ü: "u", Ö: "o", ö: "o", Ç: "c", ç: "c",
};
string = string.replace(/(([İIıŞşĞğÜüÇçÖö]))/g, function (letter) {
return letters[letter];
});
return string.toLowerCase();
}
String.prototype.turkishToUpper = function(){
var string = this;
var letters = { "i": "I", "ş": "S", "ğ": "G", "ü": "U", "ö": "O", "ç": "C", "ı": "I" };
string = string.replace(/(([iışğüçö]))/g, function(letter){ return letters[letter]; });
return string.toUpperCase();
}

Use it: "string".turkishToLower()
Modifying native types prototypes is rarely a good idea. But your code is working. Maybe you should consider creating a small object constructed with your string and use those two methods with it
If this data is coming from a database, I suggest you treat this on your server before you send it to your view.
But you can also use a javascript for that if you don't want this to happen on the server.
Check out this answer: Remove accents/diacritics in a string in JavaScript
You could use something like:
var finalString = removeDiacritics(initialString).toUpperCase();
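If the server side happens to be Python, the curation step could look like this sketch (an assumption for illustration; it strips combining marks after NFD normalization, replacing the Turkish dotless 'ı' explicitly since it carries no combining mark to strip):

```python
import unicodedata

def to_ascii_upper(text):
    # Dotless 'ı' (U+0131) has no combining mark, so map it by hand;
    # dotted 'İ' (U+0130) decomposes to I + combining dot under NFD.
    text = text.replace("ı", "i")
    decomposed = unicodedata.normalize("NFD", text)
    stripped = "".join(ch for ch in decomposed if not unicodedata.combining(ch))
    return stripped.upper()

print(to_ascii_upper("Gülay"))  # GULAY
```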
The significance of the number in front of a series?
I believe I have evaluated the series to $-4/9$; however, I'm not sure of the significance of the 9 in front of the sigma. Is this simply multiplying the final result, or is it something different?
The series in question is
$$9\sum_{n=1}^\infty\left(-\frac{4}{5}\right)^n$$
It means 9 times the sum we would like to find. Multiplication as usual. Keep in mind that if the sum converges, it is one number.
Had the multiplier been $-9/4$, we could have called this operation a normalization...
So I'm correct in saying the answer is -4?
It is simply multiplying the final result. (Although it's worthwhile to note that $\sum_{i=1}^n (2x^2 +3) = \sum 2x^2 + \sum 3 = 2\sum x^2 +3\sum 1= 2(\sum x^2) +3n$ is a useful trick.) So are you correct in saying the answer is -4? Let's see: $9\sum (-4/5)^n = 9*\frac 1 {1-(-4/5)}=9*5/9 =5$. Well... I may have screwed up. But yes, whatever the sum is, you just multiply by 9.
Your solution of the series is correct. According to the formula for geometric power series
$$\sum_{n=1}^\infty q^n=\frac{q}{1-q}\qquad\qquad |q|<1$$
we obtain
\begin{align*}
\sum_{n=1}^\infty\left(-\frac{4}{5}\right)^n=\frac{-\frac{4}{5}}{1+\frac{4}{5}}=-\frac{4}{9}
\end{align*}
We do not explicitly need to write a multiplication sign between a factor and a series to denote the multiplication. We can simply write $9\sum_{n=1}^\infty\left(-\frac{4}{5}\right)^n$
and the meaning is
\begin{align*}
9\sum_{n=1}^\infty\left(-\frac{4}{5}\right)^n=9\cdot\sum_{n=1}^\infty\left(-\frac{4}{5}\right)^n=9\cdot\left(-\frac{4}{9}\right)=-4
\end{align*}
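For completeness, the geometric series formula used above follows from the partial sums:

```latex
$$\sum_{n=1}^{N} q^n=\frac{q\left(1-q^{N}\right)}{1-q}\;\longrightarrow\;\frac{q}{1-q}\quad\text{as }N\to\infty,\ \text{since }|q|<1\ \text{implies}\ q^{N}\to 0.$$
```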
Getting Active Directory Users and looping through them to return a new list using F#
I have tried several ways to get this to work. I am thinking I need some kind of recursive function but haven't been able to wrap my head around it. I looked at folds and catamorphisms and don't really get how I could apply them to this situation. I also tried an example with yield.
This is the code I have thus far:
//Function: Get AD User Directory Services
let getADUserDS (searchBase:string,filter:string) =
let dEntry = new DirectoryEntry("LDAP://"+searchBase)
let propertiesToLoad = [|"samaccountname"|]
let dSearcher = new DirectorySearcher(dEntry,filter,propertiesToLoad)
let searchResults = dSearcher.FindAll()
let results: String List = []
//let newSearchResults = [for u in searchResults do let r = u.GetDirectoryEntry() && let a: String List = [r.Properties.["samaccountname"].Value.ToString()] && yeild a]
//let results = newSearchResults |> List.filter ((<>) "")
for u in searchResults do
match u with
| null -> printfn ""
| u when u.ToString() = ""-> printfn ""
| _ -> let r = u.GetDirectoryEntry()
let a: String List = [r.Properties.["samaccountname"].Value.ToString()]
results |> List.append <| a |> ignore
results
I don't have an Active Directory available, so I was not able to test this. However, one line in your code that is definitely not right is the following one:
results |> List.append <| a |> ignore
I suppose you are trying to add the new value a to the list of results, but F# lists are immutable, so this just creates a new list and then ignores it using ignore.
Using list comprehensions as you attempted in the commented-out version is definitely one good way of going about this. The correct syntax including your match code would be:
let newSearchResults =
[ for u in searchResults do
match u with
| null -> printfn ""
| u when u.ToString() = ""-> printfn ""
| _ -> let r = u.GetDirectoryEntry()
yield r.Properties.["samaccountname"].Value.ToString() ]
Alternatively, you can use functions like List.map and List.filter. One caveat is that dSearcher.FindAll is not returning a generic IEnumerable, and so you need some extra work to turn this result into a normal F# list - a simple list comprehension can do this nicely for you:
let searchResults = [ for r in dSearcher.FindAll() -> r ]
let newSearchResults =
searchResults
|> List.filter (fun u -> u <> null && u.ToString() <> "")
|> List.map (fun u ->
let r = u.GetDirectoryEntry()
r.Properties.["samaccountname"].Value.ToString())
Thanks for answering the question, Both methods worked perfectly! I actually have some other places that this will help as well. I did have another question...The first method seems much easier to read, is there a benefit that the second method would have over the first?
| common-pile/stackexchange_filtered |
DT::datatable on grouped dataframe within Rmarkdown report (HTML)
I have a grouped data frame output that I want to visualize as a DT::datatable with Download buttons (csv,excel) in Rmarkdown report (HTML).
It works fine when I'm constructing the Rmarkdown, but it shows an error saying that there is no applicable group_by method for objects of class datatable and htmlwidget.
Many Thanks in advance.
Here is my Code :
## Grouping columns
Site <- LETTERS[1:6]
Block <- LETTERS[1:4] ## For each Site
Plot <- paste(rep("P",10),seq(1,10,1),sep="_") ## For each Block
df <- expand.grid(Site = Site, Block = Block, Plot = Plot)
## Dependant variables
df <-cbind.data.frame(df,data.frame(Tmin=runif(min=-3,max=18,n=240),
Tmax=runif(min=10,max=39, n=240),
Index1=runif(0,5,n=240),
Index2=runif(1,10,n=240)))
# Export grouped df as datatable
library(dplyr)
df%>%
group_by(Site,Block,Plot)%>%
summarize(Tmin_avg=mean(Tmin), Tmax_avg=mean(Tmax))%>%
DT::datatable(
extensions = 'Buttons', options = list(
dom = 'Bfrtip',
buttons =
list('copy', 'print', list(
extend = 'collection',
buttons = c('csv', 'excel', 'pdf'),
text = 'Download'
))
))
dplyr::group_by is used to summarise and aggregate data. You have not aggregated anything in your example. Did you perhaps mean to do so?
Indeed! I have edited the code, I just forgot to mention it.
Please add that dependency too (library(tidyverse) or library(dplyr)) so that others can run your example. And remove ... from summarize()
You forgot to aggregate your data and you should remove the ... from summarize function. The ... are used inside functions to give the user flexibility to add existing parameters to functions. E.g.
library(tidyverse)
aggfun <- function(df, ...) {
df%>%
group_by(Site,Block,Plot)%>%
summarize(Tmin_avg=mean(Tmin), Tmax_avg=mean(Tmax), ...)%>%
DT::datatable(
extensions = 'Buttons', options = list(
dom = 'Bfrtip',
buttons =
list('copy', 'print', list(
extend = 'collection',
buttons = c('csv', 'excel', 'pdf'),
text = 'Download'
))
))
}
aggfun(df, T_min = min(Tmin))
Your example works if you do the changes:
df%>%
group_by(Site,Block,Plot)%>%
summarize(Tmin_avg=mean(Tmin), Tmax_avg=mean(Tmax))%>%
DT::datatable(
extensions = 'Buttons', options = list(
dom = 'Bfrtip',
buttons =
list('copy', 'print', list(
extend = 'collection',
buttons = c('csv', 'excel', 'pdf'),
text = 'Download'
))
))
It's working, many thanks. I'd like to mention that the "..." were added to say implicitly that multiple aggregations happen within the summarize function for multiple variables.
| common-pile/stackexchange_filtered |
CSVReader headerReader = new CSVReader(new FileReader(this.getClass().getResource(filePath).getPath()), delimiter);
I am working on a Java Maven framework and using CSVReader to read files, but I don't understand how "this.getClass().getResource(filePath).getPath()" fetches the absolute path of the file, or what the function of the delimiter (i.e. ",") is.
Value of filepath is D:\read.txt
Absolute path is C:\Dev\03-24-15\AutomationTesting\src\test\resources\Data
CSVReader headerReader = new CSVReader(new FileReader(this.getClass().getResource(filePath).getPath()), ",");
Please help!!!!
A resource is not a file, and the result of getResource().getPath() is not a filename.
Use the URL returned by getResource(), get an InputStream from it, wrap that in an InputStreamReader, and pass that to new CSVReader(...).
Or just use getResourceAsStream instead of the first two steps.
| common-pile/stackexchange_filtered |
Reloading data with chart.dataSource.reloadFrequency adds the data to the chart instead of replacing it
I've added this code
chart.dataSource.reloadFrequency = 60000;
to live reload the chart data.
It works, the network tab is showing that, and the data is loaded, but it looks like it's added and not replaced.
The docs say:
In case your data changes dynamically, you might want to set up the data loader to reload the data at preset intervals, so that your chart is always up-to-date.
To do that, use dataSource.reloadFrequency setting.
It's a number of milliseconds you want your chart data to be reloaded at.
For example if I want my chart to reload its data every 5 seconds, i'll set this to 5000:
Do you have incremental set to true? That would explain why it's adding instead of replacing as the default behavior is to replace the data on reload (i.e. incremental = false). If that's not the case, please post a fiddle.
No, I did not set incremental to true. According to the docs, "This will instruct the loader to treat each new load of data as addition to the old data". This created a chart with multiple data lines.
Please post your chart setup, then. I can't reproduce this locally.
it's a little bit of complex setup, with data coming from my server
Surely you can pare it down to a minimal version of your chart config, without your data, that is enough to reproduce the problem? Without seeing what you have, it's very hard to say what's wrong, especially if you're using the latest release. As it stands, reloadFrequency behaves correctly on all of our examples.
Can I DM you my live code?
I had the same problem and changing this line to false helped:
chart.dataSource.incremental = false;
| common-pile/stackexchange_filtered |
Two programmers debug a type checker late evening.
Dyer: This equality checker keeps failing on supposedly identical functions. They should be the same.
John: But are they really the same? In Martin-Löf's system, we don't just ask if two things are equal - we need a witness, a proof of that equality.
Dyer: Right, the constructive approach. So instead of just true or false, we need evidence.
John: Exactly. Now | sci-datasets/scilogues |
Get the value of sys.argv[1] and concatenate it with some string literals
I simply want to get the value of sys.argv[1] and concatenate it with some string literals.
This is my script:
import sys
myValue = sys.argv[1]
out = 'a' + myValue + 'c'
print (out)
When I run the python script with the parameter "b", I expect to get:
"abc"
But I am getting:
"cb"
Anyone know why?
Your script works fine for me. I get abc as the output.
Ya . Your code is fine . I have not found any error. For me also getting the output abc
It's acting like the value in sys.argv[1] is actually "b\r" - ab gets output, the carriage return goes back to the start of the line, the c overwrites the a. I have no idea how you managed to unintentionally get a carriage return into a parameter, however.
Thanks a lot jasonharper, this is exactly the problem "\r".
I agree with @Loocid that I don't see a problem with your code as-is.
//string_literal.py
import sys
myValue = sys.argv[1]
out = 'a' + myValue + 'c'
print (out)
And I run it:
//sh
$ python string_literal.py 0
>>a0c
This indicates to me that you are having a problem that is outside of what you showed us. Anyway, on the other part of your question about working with string literals, I would recommend .format to you. With format you create what other languages (and I guess Python) would call a "string template".
//string_template.py
import sys
print("a{}c".format(sys.argv[1]))
And I run it with the same result:
//sh
$ python string_template.py 0
>>a0c
Thanks all, I think I've found the problem.
By printing sys.argv, i can see the sys.argv[1] has a trailing "\r", I will remove it. Thanks again guys!!
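For reference, a minimal sketch of the fix (an addition to the thread): stripping trailing whitespace from the argument removes the stray "\r" before concatenation.

```python
import sys

def build_output(value):
    # A stray carriage return ("\r") at the end of the argument makes the
    # terminal rewind to the start of the line, so "ab\rc" displays as "cb".
    # Stripping trailing whitespace removes it before concatenating.
    return 'a' + value.rstrip() + 'c'

if len(sys.argv) > 1:
    print(build_output(sys.argv[1]))
```

This keeps the original behavior for clean arguments and only affects trailing whitespace.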
| common-pile/stackexchange_filtered |
Prove that $\text{Tr}(M|ψ\rangle\langleϕ|)=\langleϕ|M|ψ\rangle$
Question:
I am studying alone, and I found on p. 76 of the book Quantum Computation and Quantum Information by Nielsen & Chuang that: $$\text{Tr}(M |\psi\rangle \langle\psi|)=\langle\psi| M |\psi\rangle\,.$$
I want to be sure that my understanding of the formula is correct by proving a more general expression: $$\text{Tr}(M |\psi\rangle\langle\phi|)=\langle\phi| M |\psi\rangle\,,$$ with $M$ an operator.
Answer:
1- We know that $$\text{Tr}(A) = \sum_i \lambda_i\,,$$ with $\lambda_i$ all the eigenvalues of $A$. Hence with $|i\rangle$ the eigenvector associated to the eigenvalue $\lambda_i$ we can write that $$\text{Tr}(A) = \sum_i \langle i|A|i\rangle \text{ as }\,\,\, A|i\rangle=\lambda_i |i\rangle\,.$$
Remark: To simplify the notation, I suppose that to each eigenvalue corresponds only one eigenvector (I do not think it would change my demonstration much if this were not the case).
2- By identifying $A=M |\psi\rangle\langle\phi|$ we can then write (with $|i\rangle$ the eigenvectors of $M |\psi\rangle \langle\phi|$), $$\begin{align}
\text{Tr}(M |\psi\rangle \langle\phi|) &= \sum_i \langle i|M |\psi\rangle \langle\phi|i\rangle\\
&= \sum_i \langle\phi|i\rangle \langle i|M |\psi\rangle\\
&= \langle \phi| \bigg(\sum_i |i\rangle \langle i| \bigg) M |\psi\rangle \,.
\end{align}$$
The term $\sum_i ∣i⟩⟨i∣ = I$ is the identity operator since it represents a sum over a complete set of projectors onto an orthonormal basis.
3- Hence we get $$\text{Tr}(M |\psi \rangle \langle\phi|)=\langle\phi| M |\psi \rangle\,.$$
Is this correct? Did I forget to write some conditions in order to make my proof more precise?
related: https://quantumcomputing.stackexchange.com/q/5045/55
You don't actually need the spectral decomposition of $A$. You can let $|i\rangle$ be any orthonormal basis, and just define the trace to be $\text{Tr}(A)=\sum_i\langle i|A|i\rangle$.
Your proof is not general, it assumes implicitly that the operator $M|\psi\rangle\langle \phi|$ is diagonalizable (there is a basis of eigenvectors).
You can instead just use basic fact of Trace, namely $\mathrm{Tr}(AB)=\mathrm{Tr}(BA)$.
So
$\mathrm{Tr}(M|\psi\rangle\langle \phi|)=\mathrm{Tr}(\langle \phi|M|\psi\rangle)$
But $\langle \phi|M|\psi\rangle \in \mathbb{C}$ is just a scalar so
$\mathrm{Tr}(\langle \phi|M|\psi\rangle)=\langle \phi|M|\psi\rangle$
Edit. More insight
Here is a visual proof for those who are unsure about the formula:
$M|\psi\rangle$ is an operator applied to vector $|\psi\rangle$, the result is a vector which lives in the same state space as $|\psi\rangle$. Let us write the matrix representation in a given basis :
\begin{alignat*}{1}
M|\psi\rangle&=\begin{bmatrix}
\alpha_1 \\
\alpha_2\\
\vdots\\
\alpha_l\\
\vdots\\
\end{bmatrix}
\end{alignat*}
$|\phi\rangle$ is a vector which lives in same state space as $|\psi\rangle$, let its representation be:
\begin{alignat*}{1}
|\phi\rangle&=\begin{bmatrix}
\beta_1 \\
\beta_2\\
\vdots\\
\beta_l\\
\vdots\\
\end{bmatrix}
\end{alignat*}
Its dual vector is represented by the conjugate transpose of the previous column:
\begin{alignat*}{1}
\langle\phi|&=\begin{bmatrix}
\overline{\beta_1} &
\overline{\beta_2} &
\dots &
\overline{\beta_l} &
\dots\\
\end{bmatrix}
\end{alignat*}
So now think about what happens when we make the matrix multiplication of a column by a row: we get a square (maybe infinite) matrix
\begin{alignat*}{1}
M|\psi\rangle \langle\phi|&=\begin{bmatrix}
\alpha_1\overline{\beta_1} & \alpha_1\overline{\beta_2}& \dots &\alpha_1\overline{\beta_i}&\dots&&\\
\alpha_2\overline{\beta_1} & \alpha_2\overline{\beta_2}& \dots &\dots&\dots&&\\
\vdots & & \ddots &&&&\\
\alpha_i\overline{\beta_1} & & &\alpha_i\overline{\beta_i}&&&\\
\vdots & & & &\ddots&&\\
\end{bmatrix}
\end{alignat*}
You can check that the trace of the latter is indeed the inner product $\langle\phi|M|\psi\rangle$.
Thanks a lot for your help! Indeed it is easier and far better!
Sorry, I have a question. You use the property that $Tr(AB)=Tr(BA)$, but $A,B$ are matrices, while that is not the case for $|\phi\rangle$, which you've moved to the left. How do you justify this?
$|\phi\rangle$ is a matrix: it is a vector, and a vector is a matrix with 1 column.
The formula is valid for any format; of course $AB$ has to be a square matrix.
In quantum physics we may have "infinite" matrices, so this is a kind of generalized matrix multiplication.
Thanks a lot for your edit that adds useful information.
$\newcommand{\bra}[1]{\left<#1\right|}\newcommand{\ket}[1]{\left|#1\right>}\newcommand{\bk}[2]{\left<#1\middle|#2\right>}\newcommand{\bke}[3]{\left<#1\middle|#2\middle|#3\right>}$
If you want to be more rigorous in proving that
$$\text{Tr}(M |\psi\rangle\langle\phi|)=\langle\phi| M |\psi\rangle\,,\tag{1}$$
from scratch, without using any identity/formula, a good approach would be to consider everything in terms of the vector and matrix elements as a summation with respect to the basis and then do some algebra, and everything should work out automatically.
Let $\{ \ket{i}\}$ be the basis we are working with. Then, we can write
$$\ket{\psi} = \sum_i a_i \ket{i}\,,\tag2$$
$$\ket{\phi} = \sum_i b_i \ket{i}\,,\tag3$$
$$M = \sum_{ij} m_{ij} \ket{i}\bra{j}\,.\tag4$$
Now, let's first compute RHS of Eq. $(1)$.
\begin{align}
\langle\phi| M |\psi\rangle &= \bigg(\sum_i b^*_i \bra{i}\bigg)\cdot\bigg(\sum_{jk} m_{jk} \ket{j}\bra{k}\bigg)\cdot\bigg(\sum_l a_l \ket{l}\bigg)\,,\tag{5.1}\\
&= \sum_{ijkl} b_i^* m_{jk}a_l \langle i|j\rangle \langle k|l\rangle\,,\tag{5.2}\\
&= \sum_{ijkl} b_i^* m_{jk}a_l \cdot \delta_{ij} \cdot \delta_{kl}\,,\tag{5.3}\\
&= \sum_{ik} b_i^* m_{ik}a_k\,.\tag{5.4}
\end{align}
Now, computing the part inside the trace on the LHS of Eq.$(1)$,
\begin{align}
M \ket{\psi} \bra{\phi} &= \bigg(\sum_{jk} m_{jk} \ket{j}\bra{k}\bigg) \cdot \bigg(\sum_l a_l \ket{l}\bigg) \cdot \bigg(\sum_i b^*_i \bra{i}\bigg)\,,\tag{6.1}\\
&= \sum_{ijkl} m_{jk}a_l b^*_i \ket{j} \langle k|l\rangle\bra{i}\,,\tag{6.2}\\
&= \sum_{ijkl} m_{jk}a_l b^*_i \ket{j}\cdot \delta_{kl} \cdot\bra{i}\,,\tag{6.3}\\
&= \sum_{ijk} m_{jk} a_k b^*_i \ket{j} \bra{i}\,.\tag{6.4}
\end{align}
Now, taking trace,
\begin{align}
\text{Tr} (M \ket{\psi} \bra{\phi}) &= \text{Tr}\bigg( \sum_{ijk} m_{jk} a_k b^*_i \ket{j} \bra{i} \bigg)\,,\tag{7.1}\\
&= \sum_{x} \bra{x} \bigg( \sum_{ijk} m_{jk} a_k b^*_i \ket{j} \bra{i} \bigg) \ket{x}\,,\tag{7.2}\\
&= \sum_{xijk} m_{jk} a_k b^*_i \langle x | j \rangle \langle i | x \rangle\,,\tag{7.3}\\
&= \sum_{xijk} m_{jk} a_k b^*_i \cdot \delta_{xj} \cdot\delta_{ix}\,,\tag{7.4}\\
&= \sum_{ik} m_{ik} a_k b^*_i \tag{7.5}\,.
\end{align}
Comparing Eq. $(5.4)$ and $(7.5)$, we can conclude that we have proved Eq. $(1)\,.$
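As a complement to the proofs above (an addition, not from the thread), the identity is also easy to check numerically. Here is a small pure-Python sketch with an arbitrary 2x2 example, computing both sides directly from the definitions:

```python
def mat_vec(M, v):
    # Apply matrix M to column vector v.
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def outer(u, v):
    # |u><v| : matrix with entries u_i * conjugate(v_j).
    return [[u[i] * v[j].conjugate() for j in range(len(v))] for i in range(len(u))]

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def trace(A):
    return sum(A[i][i] for i in range(len(A)))

M = [[1 + 2j, 0.5j], [3 + 0j, -1 + 1j]]   # arbitrary operator
psi = [0.6 + 0j, 0.8j]                     # arbitrary |psi>
phi = [2 ** -0.5 + 0j, 1j * 2 ** -0.5]     # arbitrary |phi>

lhs = trace(mat_mul(M, outer(psi, phi)))            # Tr(M |psi><phi|)
rhs = sum(phi[i].conjugate() * mat_vec(M, psi)[i]   # <phi| M |psi>
          for i in range(len(phi)))
print(abs(lhs - rhs))  # ~0
```

The two quantities agree to machine precision, as the algebra predicts.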
| common-pile/stackexchange_filtered |
How to get negative timestamp?
I need to convert dates before 1970 to timestamps. I have a correct tm structure but Microsoft's std library mktime can't convert it to negative timestamps. Is there any standard way (no Qt, WinAPI) to get it?
Have you tried to use std::chrono? I recommend using this header-only library in addition to it. https://github.com/HowardHinnant/date/blob/master/include/date/date.h
I tried to find a way using chrono, but failed. How can I use it with tm?
https://stackoverflow.com/questions/16773285/how-to-convert-stdchronotime-point-to-stdtm-without-using-time-t
So your advice is to use days_from_civil function?
Look for Howard Hinnant on YouTube; he has really good examples of how to use std::chrono and date/time manipulation.
Could you post the answer (want mark it as solution)?
Good answer. ;-) Question: Are your input dates UTC, or associated with some time zone?
UTC; all is working now. My hand-written function somehow lost 29/02. Thanks for your videos, they are really helpful!
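The thread is about C++'s mktime, but the underlying point - a timestamp is just seconds relative to the Unix epoch, so dates before 1970 come out negative - can be illustrated quickly in Python (an addition, not from the thread), which computes aware-datetime timestamps arithmetically rather than via the platform's mktime:

```python
from datetime import datetime, timezone

def utc_timestamp(year, month, day):
    # For timezone-aware datetimes, .timestamp() is computed arithmetically
    # relative to the epoch, so pre-1970 dates simply come out negative.
    return datetime(year, month, day, tzinfo=timezone.utc).timestamp()

print(utc_timestamp(1960, 1, 1))  # -315619200.0
```

In C++, the analogous epoch-arithmetic approach (std::chrono or the date library mentioned above) similarly avoids the platform mktime limitation.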
| common-pile/stackexchange_filtered |
How to get all ROS log messages from rostest?
I am trying to debug a problem with a rostest that is probably failing due to multithreading, as the result of the test is always different.
That aside, my problem is that I cannot get to see the complete ROS logs. More specifically, I can see the logs but for only the first test case. The remaining tests don't show any logs.
Here is an example of what I get from the console. It shows ros logs up to the result of the first test MoveItCppTest.SimpleSimulatenousExecutionTest, which also always succeeds. For the remaining tests, I just get the logs of the result but nothing else.
This is the test file and I am executing it like this:
rostest moveit_ros_planning test_simultaneous_execution_manager.test --text
...
[DEBUG] [1674281108.466333224]: Event EXECUTION_COMPLETED
[DEBUG] [1674281108.724856326]: Event's group name: panda_2
[DEBUG] [1674281108.724891348]: 0 remaining active controllers for group name: panda_2
[DEBUG] [1674281108.724905334]: Clearing context with group name: panda_2
[DEBUG] [1674281108.724931419]: No active trajectories remaining
[ INFO] [1674281108.725006225]: Deleting PlanningComponent 'panda_2'
[ INFO] [1674281108.725067552]: Deleting PlanningComponent 'panda_1'
[ INFO] [1674281108.725092946]: Deleting MoveItCpp
[ WARN] [1674281108.746951670]: Stop!. stop_execution_: 0 run_event_manager_: 0
[ INFO] [1674281109.234819475]: Stopped publishing maintained planning scene.
[ INFO] [1674281109.236025658]: Stopping world geometry monitor
[ INFO] [1674281109.236683101]: Stopping planning scene monitor
[ WARN] [1674281109.329307461]: SEVERE WARNING!!!
Attempting to unload /root/ros_ws/devel/lib/libmoveit_kdl_kinematics_plugin.so
while objects created by this library still exist in the heap!
You should delete your objects before destroying the ClassLoader. The library will NOT be unloaded.
[ OK ] MoveItCppTest.SimpleSimulatenousExecutionTest (5903 ms)
[ RUN ] MoveItCppTest.WaitForSingleTrajectory
/root/ros_ws/src/moveit/moveit_ros/planning/trajectory_execution_manager/test/test_simultaneous_execution_manager.cpp:115: Failure
Value of: moveit_cpp_ptr->getTrajectoryExecutionManager()->push(panda_1_robot_trajectory_msg)
Actual: false
Expected: true
[ FAILED ] MoveItCppTest.WaitForSingleTrajectory (1447 ms)
[ RUN ] MoveItCppTest.RejectTrajectoryInCollision
[ OK ] MoveItCppTest.RejectTrajectoryInCollision (1489 ms)
[ RUN ] MoveItCppTest.RejectInvalidTrajectory
[ OK ] MoveItCppTest.RejectInvalidTrajectory (1812 ms)
[ RUN ] MoveItCppTest.CancelTrajectory
[ OK ] MoveItCppTest.CancelTrajectory (1314 ms)
[----------] 5 tests from MoveItCppTest (11965 ms total)
[----------] Global test environment tear-down
[==========] 5 tests from 1 test suite ran. (11965 ms total)
[ PASSED ] 4 tests.
[ FAILED ] 1 test, listed below:
[ FAILED ] MoveItCppTest.WaitForSingleTrajectory
1 FAILED TEST
Is there any way to enable logging for all tests?
I also have issues with that; normally what I would do is search in ~/.ros/logs for the latest logs, or add some ROS_WARN calls, as those would be shown in the terminal as well.
If I add ROS_WARN logs, they appear but again only for the first test case, not for any of the other ones
| common-pile/stackexchange_filtered |
How to find min max values in Array - Scanner class
I was trying to take user input, store the values in the Arrays, and then find min max values. My code works fine in case the array is initialized with values, but when I take user input it always returns 0 as min value. See the code:
import java.util.Arrays;
import java.util.Scanner;
public class Loops {
public static void main(String[] args) {
Scanner scan = new Scanner(System.in);
System.out.print("How many numbers in a set: ");
int number = scan.nextInt();
int[] myArray = new int[number];
int min = myArray[0];
int max = myArray[0];
System.out.println("Input your numbers: ");
for(int i = 0; i <= myArray.length-1; i++) {
myArray[i] = scan.nextInt();
if (myArray[i] < min) {
min = myArray[i];
} else if (myArray[i] > max) {
max = myArray[i];
}
}
System.out.println(Arrays.toString(myArray));
System.out.println(max + " " + min);
System.out.println();
}
}
As a beginner, I'd appreciate your look at the above piece of code and hints
public static void main(String... args) {
Scanner scan = new Scanner(System.in);
System.out.print("How many numbers in a set: ");
int[] arr = new int[scan.nextInt()];
int min = Integer.MAX_VALUE;
int max = Integer.MIN_VALUE;
System.out.print("Input your numbers: ");
for (int i = 0; i < arr.length; i++) {
arr[i] = scan.nextInt();
min = Math.min(min, arr[i]);
max = Math.max(max, arr[i]);
}
System.out.println(Arrays.toString(arr));
System.out.println(max + " " + min);
System.out.println();
}
When you did this
int[] myArray = new int[number];
int min = myArray[0];
int max = myArray[0];
you set min and max to 0, and then, in the for loop, you check if each value is less than 0.
If you are giving only positive numbers to your program, the min value will stay 0.
If you want to find min and max value of array you can initialize min and max like this:
int min = Integer.MAX_VALUE
int max = Integer.MIN_VALUE
Your code initializes min as 0 by default, which is smaller than most of the numbers we give in the array.
int min = Integer.MAX_VALUE; //maximum value Integer datatype can hold
int max = Integer.MIN_VALUE;// minimum value Integer datatype can hold
Here we make min the highest possible number and max the lowest possible number, which makes the code more bullet-proof to negative values as well.
to know more about Integer.MAX_VALUE and Integer.MIN_VALUE follow this link: https://www.geeksforgeeks.org/integer-max_value-and-integer-min_value-in-java-with-examples/
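The sentinel-initialization idea is language-agnostic; here is a quick Python sketch (an addition, not from the thread) showing why starting from the extremes handles all-negative input, unlike starting from 0:

```python
def min_max(values):
    # Start from the extreme sentinels, mirroring Integer.MAX_VALUE /
    # Integer.MIN_VALUE in the Java answer, so the first element always
    # replaces them - even when every input is negative.
    lo, hi = float('inf'), float('-inf')
    for v in values:
        lo = min(lo, v)
        hi = max(hi, v)
    return lo, hi

print(min_max([-7, -2, -9, -4]))  # (-9, -2)
```

Had lo started at 0, the all-negative example would wrongly report 0 as the minimum, which is exactly the bug in the question.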
First of all, int[] myArray = new int[number]; initializes the array with the given size. So if you input for example 4, which in this case is [number], you create an array myArray[] = {0, 0, 0, 0};.
I already knew that. I replaced the 0 values with other values [i] in the for loop. Anyway, I got good hints on how to handle it; subject exhausted. Thanks.
@Proximo_224: I think you could expand the answer a little bit and explain the reason why min is always 0. Cheers :)
package javase.Arrays;
import java.util.Arrays;
import java.util.Scanner;
public class Loops1 {
public static void main(String[] args) {
Scanner scan = new Scanner(System.in);
System.out.print("How many numbers in a set: ");
int number = scan.nextInt();
int[] myArray = new int[number];
int min = Integer.MAX_VALUE;
int max = Integer.MIN_VALUE;
System.out.println("Input your numbers: ");
for (int i = 0; i < myArray.length; i++) {
myArray[i] = scan.nextInt();
min = Math.min(min, myArray[i]);
max = Math.max(max, myArray[i]);
}
System.out.println(Arrays.toString(myArray));
System.out.println("Max : "+max + " " +"Min : " +min);
System.out.println();
}
}
| common-pile/stackexchange_filtered |
Is it good to emphasize that one obtained a PhD from a prestigious university
I was wondering whether it is a good practice for someone to emphasize that he obtained his degree from a prestigious university by including the name of his university always after his title, something like:
Name, PhD (Caltech)
I have seen this in few occasions, but felt that it could be a bit offending to someone. Is this a discouraged behavior?
When is it even necessary to state your highest degree, except on your CV or department webpage? In very formal communication then Dr. or Professor should suffice. Anything more could easily (depending on context) come off as pretentious.
Depends on your location. In Europe this style is rather unusual and may raise some eyebrows. However, many of my friends in the US state their title exactly like that, so I would assume that it is at least not offending or uncommon. That being said, I would assume the vast majority of people couldn't care less one way or the other :)
@Moriarty In an email signature for example... So it all depends on the context I suppose.
Not really offensive to me, but it jump started my sympathetic engine and I want to know more about what kind of hardship this dude had to endure in order to get into Caltech. If I am going to meet someone who does this, I'll tread very carefully not to hurt his ego.
I think this form is really only appropriate if you obtained a PhD from an Ivy League university.
In Europe, mentioning the university is frowned upon in much the same way as emphasizing your title is in the US (search the site for questions about that!)
Moreover, many Europeans wouldn't know the universities. In this forum, people often write "I did my degree at Mich" or something, and I'm not even sure if this is a university, much less what this information should tell me. In Europe, people can say "I was at the University of Stockholm", which tells me a lot more.
It could be taken to mean that your PhD from CalTech was a high point of your career and you want to make sure people know that. Depending on the context, that may be a bad or good thing. At a CalTech alumni meeting, it could be very good.
I would never have made this interpretation.
It would definitely look odd and like you are trying too much. It's really not appropriate to push it in correspondence. Leave that for your resume or CV.
While it's mildly good to have gone to CIT (or the equivalent), realize there are weak performers within that group as well. What really matters is all your Science/Nature/PRL/JAMA/etc. publications. Oh...and puleeze don't list them in your signature!
Personally, I wouldn't even do the comma Ph.D. game. There is a correlation between being most proud of the Ph.D. (and showing it) and being a weaker performer during the Ph.D. If you're a badass, you don't have to mention it. And it's no big deal to have gotten the "union card". (What matters more is how you did there.)
In most academic/science interactions, it will just be expected that you have a Ph.D. regardless (no big deal). In most other external interactions, it's irrelevant. Perhaps there are a few places (like selling consulting) where you might want to emphasize it. Yet even here I definitely get a "too proud of it, therefore probably weak (or even sketchy)" vibe when I see people who emphasize the title.
Within your field, it's probably more important to emphasise who your supervisor(s) were during your PhD. At least in the UK, the prestige of the university is less important at that level than the prestige of the other academics you worked with (and, of course, the quality of your research). In any case, it would be odd to see it in an email signature. I'd read it as (slightly desperate) bragging.
One related point is that people who have undergraduate degrees from Oxford or Cambridge may receive an MA rather than a BA under certain circumstances. These people might list their first degree as "MA (Oxon.)" or "MA (Cantab.)", listing the university to distinguish it from a "real" Masters degree (and possibly to do a little bragging of their own).
| common-pile/stackexchange_filtered |
How to find the least number that can be written as the sum of two cubes in two different ways?
Find the least number that can be written as the sum of two cubes in two different ways.
Find the number and show steps in deriving it.
This is one of mathematician's Srinvasa Ramanujan's challenge problems.
See this relevant Wikipedia article. Also, note that the great Leonhard Euler already gave a complete parameterization of numbers which can be written as the sum of two cubes in two distinct ways more than three centuries ago. The relevant entries can be found here and here.
Visit a friend in hospital by taxi, and look at the license plate number of the taxi :-)
@gnasher729 The answer is 91, not a taxicab number.
This number is also known as the Hardy-Ramanujan number:
$1729=1^3+12^3=9^3+10^3$
It can be easily found with a computer search (or even by hand, if you insist), and I don't think an elegant proof is known. More generally, the $n$th "taxicab number" is defined to be the smallest integer expressible as $n$ different sums of two positive cubes. More information can be found at http://mathworld.wolfram.com/TaxicabNumber.html.
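The computer search just mentioned can be sketched in a few lines of Python (an addition, not from the thread); it collects every number below a limit that is a sum of two positive cubes in at least two ways:

```python
from collections import defaultdict

def two_cube_sums(limit):
    # Map each value below `limit` to its representations a^3 + b^3
    # with positive integers a <= b, then keep values with >= 2 of them.
    ways = defaultdict(list)
    a = 1
    while a ** 3 < limit:
        b = a
        while a ** 3 + b ** 3 < limit:
            ways[a ** 3 + b ** 3].append((a, b))
            b += 1
        a += 1
    return {n: reps for n, reps in ways.items() if len(reps) >= 2}

print(min(two_cube_sums(20000)))  # 1729
```

If negative cubes are allowed, as another answer points out, $91 = 3^3 + 4^3 = (-5)^3 + 6^3$ is smaller.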
Bzzzt WRONG. The answer is 91. Or if you allow trivial solutions, then the answer is zero.
Computer search (or a short hand computation) quickly finds that it's $$91 = 3^3 + 4^3 = (-5)^3 + 6^3$$
Other small examples include $152, 189, 217$.
(Observe that, unlike squares, cubes can be negative.)
Well, maybe...but then what about $0=1^3+(-1)^3=2^3+(-2)^3$?
Yes, I omitted the obvious trivial solutions.
| common-pile/stackexchange_filtered |
Can I make a flash like hash in rails?
I'm a Ruby/Rails newbie and need to make a hash which can be manipulated anywhere in the Rails application and can be accessed by all views, just like the flash[:notice] hash. Is this possible?
This may help http://stackoverflow.com/questions/3598785/where-to-put-global-variables-in-rails-3
This should work:
class ApplicationController < ActionController::Base
def block
@block ||= {}
end
helper_method :block
end
block[:foo] = "FOO"
block[:foo] #=> "FOO"
However, what you are trying to do is normally done with the help of content_for
Yes, you can do it. Any key/value pair can be stored in flash.
for example,
flash[:email] = <EMAIL_ADDRESS>
flash[:username] = 'abc'
flash[:xyz] = 'xyz'
These values can be accessed anywhere in controllers and views, just like flash[:notice] and flash[:error]
| common-pile/stackexchange_filtered |
Google dataflow: AvroIO read from file in google storage passed as runtime parameter
I want to read Avro files in my dataflow using java SDK 2
I have schedule my dataflow using cloud function which are triggered based on the files uploaded to the bucket.
Following is the code for options:
ValueProvider <String> getInputFile();
void setInputFile(ValueProvider<String> value);
I am trying to read this input file using following code:
PCollection<user> records = p.apply(
AvroIO.read(user.class)
.from(String.valueOf(options.getInputFile())));
I get following error while running the pipeline:
java.lang.IllegalArgumentException: Unable to find any files matching RuntimeValueProvider{propertyName=inputFile, default=gs://test_bucket/user.avro, value=null}
Same code works fine in case of TextIO.
How can we read Avro file which is uploaded for triggering cloud function which triggers the dataflow pipeline?
You need to use simply from(options.getInputFile()): AvroIO explicitly supports reading from a ValueProvider.
Currently the code is taking options.getInputFile(), which is a ValueProvider, calling the Java toString() function on it (which gives a human-readable debug string "RuntimeValueProvider{propertyName=inputFile, default=gs://test_bucket/user.avro, value=null}") and passing that as a filename for AvroIO to read. Of course this string is not a valid filename; that's why the code currently doesn't work.
Also note that the whole point of ValueProvider is that it is placeholder for a value that is not known while constructing the pipeline and will be supplied later (potentially the pipeline will be executed several times, supplying different values) - so extracting the value of a ValueProvider at pipeline construction time is impossible by design, because there is no value. At runtime though (e.g. in a DoFn) you can extract the value by calling .get() on it.
Hi
I tried using options.getInputFile(), but it gives the following error:
incompatible types:
org.apache.beam.sdk.options.ValueProvider<java.lang.String> cannot be converted to java.lang.String
following is the code :
PCollection<user> records = p.apply( AvroIO.read(user.class) .from(options.getInputFile()));
Oh indeed, seems that the from(ValueProvider) version was added only in 2.2.0 which is currently going through the release process. You can either use version 2.2.0-SNAPSHOT, or as a workaround meanwhile you can use the internal class that normally shouldn't be used directly: p.apply(Read.from(AvroSource.from(options.getInputFile()).withSchema(user.class)))
p.apply(Read.from(AvroSource.from(options.getInputFile()).withSchema(user.class)))
With this also getting the same incompatible types error.
This is from AvroSource code:
public static AvroSource<GenericRecord> from(String fileNameOrPattern)
Sorry, my mistake - looked at the wrong github tab... Yeah, you're going to need to use 2.2.0-SNAPSHOT.
For now, can you try from(options.getInputFile().get())?
Please try ...from(options.getInputFile())) without converting it to a string.
For simplicity, you could even define your option as simple string:
String getInputFile();
void setInputFile(String value);
No, I can't use a simple String here; I want to pass the input file at runtime.
When I use options.getInputFile() I get the incompatible types error, because from expects its input as a String.
Php: how to get video urls in my vimeo account
I have Vimeo API keys. I uploaded videos at https://vimeo.com and I can see all of my video links at https://vimeo.com/home/myvideos. Now I get a response with the details for all my video links.
<?php
$urls = array();
$videos = array();
// vimeo test
$urls[] = 'https://vimeo.com/243625359';
$urls[] = 'https://vimeo.com/243438242';
foreach ($urls as $url) {
    $videos[] = getVideoDetails($url);
}

function getVideoDetails($url)
{
    $host = explode('.', str_replace('www.', '', strtolower(parse_url($url, PHP_URL_HOST))));
    $host = isset($host[0]) ? $host[0] : $host;

    switch ($host) {
        case 'vimeo':
            $video_id = substr(parse_url($url, PHP_URL_PATH), 1);
            $hash = json_decode(file_get_contents("http://vimeo.com/api/v2/video/{$video_id}.json"));
            // header("Content-Type: text/plain");
            // print_r($hash);
            // exit;
            return array(
                'provider' => 'Vimeo',
                'title' => $hash[0]->title,
                'description' => str_replace(array("<br>", "<br/>", "<br />"), NULL, $hash[0]->description),
                'description_nl2br' => str_replace(array("\n", "\r", "\r\n", "\n\r"), NULL, $hash[0]->description),
                'thumbnail' => $hash[0]->thumbnail_large,
                'video' => "https://vimeo.com/" . $hash[0]->id,
                'embed_video' => "https://player.vimeo.com/video/" . $hash[0]->id,
            );
            break;
    }
}
header("Content-Type: text/plain");
print_r($videos);
Response:
Array
(
[0] => Array
(
[provider] => Vimeo
[title] => SampleVideo_1280x720_10mb
[description] => Vimeo was born in 2004, created by a group of
filmmakers who wanted an easy and beautiful way to share videos with
their friends. Word started to spread, and an insanely supportive
community of creators began to blossom. Now Vimeo is home to more
than:
[description_nl2br] => Vimeo was born in 2004, created by a group of
filmmakers who wanted an easy and beautiful way to share videos
with their friends. Word started to spread, and an insanely
supportive community of creators began to blossom. Now Vimeo is home
to more than:
[thumbnail] => http://i.vimeocdn.com/video/667808655_640.jpg
[video] => https://vimeo.com/243625359
[embed_video] => https://player.vimeo.com/video/243625359
)
[1] => Array
(
[provider] => Vimeo
[title] => SampleVideo_1280x720_5mb
[description] => We spend our days building a product we love for a growing community of millions. And eating lots of free snacks.
[description_nl2br] => We spend our days building a product we love for a growing community of millions. And eating lots of free snacks.
[thumbnail] => http://i.vimeocdn.com/video/667575091_640.jpg
[video] => https://vimeo.com/243438242
[embed_video] => https://player.vimeo.com/video/243438242
)
)
This works, but I added my video links manually; the correct way would be to fetch them dynamically. I want to get my Vimeo video URLs based on my API keys.
It appears you're using the old Simple API (with this url format: http://vimeo.com/api/v2/video/{$video_id}.json) which is no longer supported by Vimeo.
That said, if your videos are embeddable, you'll likely be better off using oEmbed to get the specified metadata (provider, title, description, thumbnail, video). Vimeo's oEmbed documentation is found here: https://developer.vimeo.com/apis/oembed
Regarding the video and embed_video values you're generating, it's best practice to get the video link and embed code exactly from the API. Because you're generating these values on your own, any changes we make to the URL structure may break your links. For example, unlisted videos have an additional hash after the numerical video_id that your code does not account for.
I hope this information helps!
How to open a Slack huddle directly from the command line?
I would like to join a daily Slack Huddle meeting from crontab. How should I call slack?
The URL obtained via Copy huddle link (starting with "https://app.slack.com/huddle/..."), which does work in a web browser, does not work when passed directly to the Slack binary. And I would like to skip the browser because of its disadvantages:
it requires $DISPLAY to be defined
it leaves a trampoline webpage open
You need to use a slack:-URL.
You can convert the one you get with "Copy huddle link" as follows:
sed 's|^https://app\.slack\.com/huddle/\([^/]\+\)/\(.*\)$|slack://join-huddle?team=\1\&id=\2|'
Which converts https://app.slack.com/huddle/<TEAM>/<ID> into: slack://join-huddle?team=<TEAM>&id=<ID>.
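For example, applying the same sed command to a made-up team and huddle ID (hypothetical values, for illustration only):

```shell
url='https://app.slack.com/huddle/T0123ABCD/C0456EFGH'   # hypothetical IDs
printf '%s\n' "$url" |
  sed 's|^https://app\.slack\.com/huddle/\([^/]\+\)/\(.*\)$|slack://join-huddle?team=\1\&id=\2|'
# → slack://join-huddle?team=T0123ABCD&id=C0456EFGH
```

Note that `\+` relies on GNU sed; on BSD/macOS sed you would write `\{1,\}` instead.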
Now you can add this to your crontab (by typing crontab -e):
30 9 * * 1-5 slack 'slack://join-huddle?team=<TEAM>&id=<ID>'
(Beware the quotation marks, since there is an ampersand in the URL)
A nice thing is that, in any case, slack will show a confirmation dialog before entering the call.
Grafana :: Cannot login to http://localhost:3000/login
I freshly installed Grafana and I cannot login at http://localhost:3000/login
All documentation shows that the default user/password should be admin/admin but I'm locked out.
If I go to check into the file C:\Program Files\GrafanaLabs\grafana\defaults.ini the values are set to:
[security]
# disable creation of admin user on first start of grafana
disable_initial_admin_creation = false
# default admin user, created on startup
admin_user = admin
# default admin password, can be changed before first start of grafana, or in profile settings
admin_password = admin
# used for signing
secret_key = SW2YcwTIb9zpOOhoPsMm
# current key provider used for envelope encryption, default to static value specified by secret_key
encryption_provider = secretKey
If I try to retrieve the password through the e-mail I receive no e-mail.
What am I doing wrong?
Was Grafana already installed before?
@dnnshssm, oh, good question... maybe. I remember I might have installed it last year for a different project. Would that be the root cause?
Thank you @dnnshssm. This sounds like an answer to me; you can post it and I will mark it as the answer if it's true. However, I don't remember the user/password I used last year and I cannot retrieve it via the password-reset procedure. Why don't I receive a password in my inbox?
The problem comes from the grafana.db file. This is where your password is stored.
In your local machine, install the sqlite3 package
sudo apt-get install sqlite3
Login into your sql database
sudo sqlite3 /var/lib/grafana/grafana.db
Reset the admin password using SQL update (the new password will be admin)
sqlite> update user set password = '59acf18b94d7eb0694c61e60ce44c110c7a683ac6a8f09580d626f90f4a242000746579358d77dd9e570e83fa24faa88a8a6', salt = 'F3FAxVm33R' where login = 'admin';
sqlite> .exit
Now you can log in to your Grafana web interface using username: admin and password: admin
This one worked for me. Make sure to use the right login.
What is this alphanumeric key you mentioned in the query? What is the salt? Can I just copy the same command? Do I need to edit the params?
I'm getting this error:
Error: unable to open database "/var/lib/grafana/grafana.db": unable to open database file
Ritch, the file path mentioned by Francesco is a Windows path, so this answer is not strictly related to the question. But P.S.K. wrote that your answer is helpful... I suggest improving the answer by 1) summarizing the general idea and highlighting "for Debian systems", or 2) adding the Windows case.
Why is the login with admin:admin despite the configuration not working?
One possibility here is that you had Grafana installed previously (and when using it with the admin account already had to change the default password set in the config). In that case, you did not freshly install Grafana but instead upgraded it. That preserves the database including users and passwords, therefore you will have to use the password you set for that account.
Why are you not getting a reset password email?
I can think of two possibilities here. One is that email is not configured in the Grafana config file, and therefore no emails can be sent. The second is that you did not set the email address for the account in question (afaik it defaults to "admin@localhost"), and therefore you don't get any emails. Of course it is possible that both are the case.
How can you solve this?
By either resetting the admin password (that will allow you to keep your existing data) or by removing Grafana and all files completely and making a fresh install.
Thanks for the hint. We are running Grafana in a Kubernetes cluster and had to delete a very old persistent volume claim to make the new settings work (including a new password for the admin account).
I have Error in paypal phonegap plugin after enter buyer mailid
After entering the buyer mail ID I get the error
"The application is not approved to use the following parameter with this type of payment". What does this error mean?
I think my error is here
var obj = {
server : 'ENV_SANDBOX',
appId : 'APP-80W284485P519543T'
};
The original developer of the PayPal plugin just recently posted this message to the repo.
This Plugin is significantly out of date for both PhoneGap and PayPal.
It should only be used as a reference for an updated implementation.
So it looks like some more work needs to be done to update this plugin to today's PayPal API.
Then should I wait for an updated version of the PayPal plugin for PhoneGap, or can I create my own app ID using PayPal?
Golang race with sync.Mutex on map[string]int
I have a simple package I am using to log stats during a program run and I found that go run -race says there is a race condition in it. Looking at the program I'm not sure how I can have a race condition when every read and write is protected by a mutex. Can someone explain this?
package counters
import "sync"
type single struct {
mu sync.Mutex
values map[string]int64
}
// Global counters object
var counters = single{
values: make(map[string]int64),
}
// Get the value of the given counter
func Get(key string) int64 {
counters.mu.Lock()
defer counters.mu.Unlock()
return counters.values[key]
}
// Incr the value of the given counter name
func Incr(key string) int64 {
counters.mu.Lock()
defer counters.mu.Unlock()
counters.values[key]++ // Race condition
return counters.values[key]
}
// All the counters
func All() map[string]int64 {
counters.mu.Lock()
defer counters.mu.Unlock()
return counters.values // running during write above
}
I use the package like so:
counters.Incr("foo")
counters.Get("foo")
All returns the underlying map and then releases the lock, so code using the map will have a data race.
You should return a copy of the map:
func All() map[string]int64 {
counters.mu.Lock()
defer counters.mu.Unlock()
m := make(map[string]int64)
for k, v := range counters.values {
m[k] = v
}
return m
}
Or not have an All method.
A Minimal Complete Verifiable Example would be useful here, but I think your problem is in All():
// All the counters
func All() map[string]int64 {
counters.mu.Lock()
defer counters.mu.Unlock()
return counters.values // running during write above
}
This returns a map which does not make a copy of it, so it can be accessed outside the protection of the mutex.
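To make the fix concrete, here is a minimal self-contained sketch (an editor's illustration, assuming the same package shape as the question): because All hands back a copy, a caller can freely read or even mutate the returned map without racing on, or corrupting, the internal one.

```go
package main

import "sync"

// single guards a counter map with a mutex, mirroring the
// struct from the question (editor's illustration).
type single struct {
	mu     sync.Mutex
	values map[string]int64
}

var counters = single{values: make(map[string]int64)}

// Incr increments and returns the named counter under the lock.
func Incr(key string) int64 {
	counters.mu.Lock()
	defer counters.mu.Unlock()
	counters.values[key]++
	return counters.values[key]
}

// All returns a snapshot copy, so callers never touch the
// internal map outside the mutex. This is the race fix.
func All() map[string]int64 {
	counters.mu.Lock()
	defer counters.mu.Unlock()
	m := make(map[string]int64, len(counters.values))
	for k, v := range counters.values {
		m[k] = v
	}
	return m
}
```

With this version the internal map is only ever touched while the mutex is held, so the race detector has nothing to report at the All call site.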
How can I delete Safari cookies via Terminal on 10.7.2?
I would like to delete all Safari cookies from terminal on Mac OS 10.7.2.
I tried to delete ~/Library/Cookie/Cookies.binarycookies (this is the only file in ~/Library/Cookie), but it didn't help.
Please advise.
I have a file Cookies.plist in ~/Library/Cookies, which I think is what stores them. Do you have any other files in there?
The missing part was to kill the cookied process:
killall cookied
Making a dataframe where new row is created after every nth column using only semi colons as delimiters
I have the following string in a column within a row in a pandas dataframe. You could just treat it as a string.
;2;613;12;1;Ajc hw EEE;13;.387639;1;EXP;13;2;128;12;1;NNN XX Ajc;13;.208966;1;SGX;13;..
It goes on like that.
I want to convert it into a table and use the semi colon ; symbol as a delimiter. The problem is there is no new line delimiter and I have to estimate it to be every 10 items.
So, it should look something like this.
;2;613;12;1;Ajc hw EEE;13;.387639;1;EXP;13;
2;128;12;1;NNN XX Ajc;13;.208966;1;SGX;13;..
How do I convert that string into a new dataframe in pandas? After every 10 semicolon delimiters, a new row should be created.
I have no idea how to do this; any help, in terms of tools or ideas, would be greatly appreciated.
This should work
# removing first value as it's a semi colon
data = ';2;613;12;1;Ajc hw EEE;13;.387639;1;EXP;13;2;128;12;1;NNN XX Ajc;13;.208966;1;SGX;13;'[1:]
data = data.split(';')
row_count = len(data)//10
data = [data[x*10:(x+1)*10] for x in range(row_count)]
pd.DataFrame(data)
I used floor division (//) for the row count; since in Python 3 a single slash would give a float, // is the safe choice, even though your data length should be divisible by 10.
Here's a screenshot of my output.
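As a plain-Python sketch of the same chunking step (an editor's addition; the helper name is made up), independent of pandas, the resulting list of rows can be passed straight to pd.DataFrame:

```python
def chunk_rows(items, width):
    """Split a flat list into consecutive rows of `width` items.

    A trailing partial row (when len(items) is not a multiple of
    `width`) is kept, mirroring how a ragged last record would
    appear in the data.
    """
    return [items[i:i + width] for i in range(0, len(items), width)]

fields = "2;613;12;1;Ajc hw EEE;13;.387639;1;EXP;13;2;128".split(";")
rows = chunk_rows(fields, 10)
# rows[0] → ['2', '613', '12', '1', 'Ajc hw EEE', '13', '.387639', '1', 'EXP', '13']
# pd.DataFrame(rows) would then build the table.
```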
How do I export a remote graph to json using tinkerpop gremlin and neptune?
I'm trying to export an entire remote graph into JSON. When I use the following code, it results in an empty file. I am using Gremlin driver 3.3.2, as this is the same version as in the underlying graph database, AWS Neptune.
var traversal = EmptyGraph.instance().traversal().withRemote(DriverRemoteConnection.using(getCluster()))
traversal.getGraph().io(graphson()).writeGraph("my-graph.json");
How is one supposed to populate the graph with data such that it can be exported?
Taking on-board the valuable feed-back from Ankit and Kelvin, I concentrated on using a local gremlin server to handle the data wrangling.
Once I had the data in a locally running server, by generating a Gremlin script from an in-memory entity model, I accessed it via a Gremlin console and ran the following:
~/apache-tinkerpop-gremlin-console-3.3.7/bin/gremlin.sh
gremlin> :remote connect tinkerpop.server conf/remote.yaml
gremlin> :> graph.io(graphson()).writeGraph("my-graph.json")
==>null
This put the my-graph.json file in /opt/gremlin-server/ on the docker container.
I extracted it using docker cp $(docker container ls -q):/opt/gremlin-server/my-graph.json .
I can then use this data to populate a gremlin-server testcontainer for running integration tests against a graph database.
The Gremlin io( ) Step is not supported in Neptune. Here is the Neptune's documentation which talks about the other difference between the Amazon Neptune implementation of Gremlin and the TinkerPop implementation.
So this approach won't work. Is there a way to export a graph as JSON and then load it into a Neptune instance?
I've spun up a docker image running gremlin-server, populated it with data, still unable to get data out of the graph. I think it's an issue with how I'm getting a reference to the graph.
I also posted this to the Gremlin Users list.
Here's some code that will do it for you with Neptune and should work with most Gremlin Server implementations I would think.
https://github.com/awslabs/amazon-neptune-tools/tree/master/neptune-export
The results of the export can be used to load via Neptune's bulk loader if you choose to export as CSV.
Hope that is useful
If that is more than you needed hopefully it will at least give you some pointers that help.
With hosted graphs, including Neptune, it is not uncommon to find that they do not expose the Graph object or give access to the io() classes.
I used the mentioned aws github project and executed the following command which is not showing any result:
\bin>neptune-export.sh export-pg -e /: -d s3:///
See new answer from Ian Robinson. The export target needs to be the local file system.
neptune-export doesn't support direct export to S3. You'll have to export to the local file system, and then separately copy the files to S3.
Range of standardized coefficients in a discriminant analysis
I want to run a discriminant analysis on different motion capture measures to see which of the measures distinguishes best between my two conditions. The problem is that some of the standardized discriminant function coefficients are >1 and <-1. How is that possible? What did I do wrong? I used z-transformed data to make sure the scales are no issue.
Thanks for your help.
standardization, in general, does not guarantee the quantity being standardized will lie between -1 and 1.
To have the unstandardized discriminant coefficients equal the standardized ones, one must scale the predictor variables to variance 1 within groups (did you do that?). But the standardized coefficients can exceed 1 in magnitude; do not confuse them with regression beta coefficients.
How to invert the selection while keeping the previous selection?
How can I select the same faces on the right side, which are selected on the left side? I know I can select the rest of the faces little by little with "shift", "c", or "b", but I would like to know if there is something easier, so I can apply it in future projects. Thanks in advance.
In the Select Menu you can find "Select Mirror" (Shift Ctrl M).
Tick the "Extend" option in the "Edit Last Operation" Tab (in the left bottom, a black box, F9 to recall it) if you want to add to the previous selection.
Omg, this really worked, thanks so much for your help! This setting was really hidden! Thanks again!
not able to create index aws elasticsearch from laravel application using scout elasticsearch laravel
I am trying to connect from an ec2 instance to AWS Elasticsearch using scout-elasticsearch-laravel but it is failing.
my steps :-
added host to .env
SCOUT_DRIVER=elastic ( I tried elasticsearch also )
SCOUT_ELASTIC_HOST=https://vpc-cofxxx-xxx.xxxxxx.ap-south-1.es.amazonaws.com
i can curl the aws es endpoint and it works
Right now it gives the error below:
No alive nodes found in your cluster
But like I said curl is working and cluster health is in green.
I do not understand what I have misconfigured so any assistance will be appreciated.
I don't know the package scout-elasticsearch-laravel. Could you share a link?
Actually the fix was simple: I had to add the port number after the URL:
https://vpc-cofxxx-xxx.xxxxxx.ap-south-1.es.amazonaws.com:443
The reason for this confusion was that I did a curl from my EC2 instance without the port and it returned OK, so I did not think of it.
And also ensure you have php-curl installed.
However posting this answer just in case someone gets stuck.
Bitwise XOR in C using 64bit instead of 8bits
I am considering how to XOR two byte arrays efficiently.
I have the byte arrays defined as unsigned char *.
I think that XORing them as uint64_t will be much faster. Is that true?
How do I efficiently convert unsigned char * to uint64_t *, preferably inside the XOR loop? And how do I pad the last bytes when the array length % 8 isn't 0?
Here is my current code that XORs bytes array, but each byte (unsigned char) separately:
unsigned char *bitwise_xor(const unsigned char *A_Bytes_Array, const unsigned char *B_Bytes_Array, const size_t length) {
    unsigned char *XOR_Bytes_Array;

    // allocate XORed bytes array
    XOR_Bytes_Array = malloc(sizeof(unsigned char) * length);

    // perform bitwise XOR operation on bytes arrays A and B
    for (int i = 0; i < length; i++)
        XOR_Bytes_Array[i] = (unsigned char)(A_Bytes_Array[i] ^ B_Bytes_Array[i]);

    return XOR_Bytes_Array;
}
OK, in the meantime I have tried to do it this way. My byte arrays are rather large (RGBA bitmaps, 4*1440*900 bytes).
static uint64_t next64bitsFromBytesArray(const unsigned char *bytesArray, const int i) {
    uint64_t next64bits =
         (uint64_t) bytesArray[i+7]         |
        ((uint64_t) bytesArray[i+6] <<  8)  |
        ((uint64_t) bytesArray[i+5] << 16)  |
        ((uint64_t) bytesArray[i+4] << 24)  |
        ((uint64_t) bytesArray[i+3] << 32)  |
        ((uint64_t) bytesArray[i+2] << 40)  |
        ((uint64_t) bytesArray[i+1] << 48)  |
        ((uint64_t) bytesArray[i]   << 56);
    return next64bits;
}

unsigned char *bitwise_xor64(const unsigned char *A_Bytes_Array, const unsigned char *B_Bytes_Array, const size_t length) {
    unsigned char *XOR_Bytes_Array;

    // allocate XORed bytes array
    XOR_Bytes_Array = malloc(sizeof(unsigned char) * length);

    // perform bitwise XOR operation on bytes arrays A and B using uint64_t
    for (int i = 0; i < length; i += 8) {
        uint64_t A_Bytes = next64bitsFromBytesArray(A_Bytes_Array, i);
        uint64_t B_Bytes = next64bitsFromBytesArray(B_Bytes_Array, i);
        uint64_t XOR_Bytes = A_Bytes ^ B_Bytes;
        memcpy(XOR_Bytes_Array + i, &XOR_Bytes, 8);
    }

    return XOR_Bytes_Array;
}
UPDATE: (2nd approach to this problem)
unsigned char *bitwise_xor64(const unsigned char *A_Bytes_Array, const unsigned char *B_Bytes_Array, const size_t length) {
    const uint64_t *aBytes = (const uint64_t *) A_Bytes_Array;
    const uint64_t *bBytes = (const uint64_t *) B_Bytes_Array;

    unsigned char *xorBytes = malloc(sizeof(unsigned char) * length);

    for (int i = 0, j = 0; i < length; i += 8) {
        uint64_t aXORbBytes = aBytes[j] ^ bBytes[j];
        //printf("a XOR b = 0x%" PRIx64 "\n", aXORbBytes);
        memcpy(xorBytes + i, &aXORbBytes, 8);
        j++;
    }

    return xorBytes;
}
It's probably a bit faster, did you check? How long are your arrays? malloc is slow, so if your arrays are short, it won't make any difference. Padding when the length is not a multiple of 8 is almost trivial; show what you have tried.
Why should it be faster? Just open debugger and look at assembly code to check if there's difference in machine code.
@rlib Of course it is going to be faster if the specific CPU supports 64 bit XOR. 1 instruction instead of 8.
"I think that XORing them as uint64_t will be much faster": measure it and find out for certain.
I am sending this XORed data via a socket. It seems that now I can send about 77 images in 30 seconds instead of 57. But they seem to be broken.
@Lundin The 64-bit XOR itself may be faster than eight 8-bit XOR instructions. But the function next64bitsFromBytesArray() looks to me like it's going to be a lot slower than simply doing eight 8-bit XOR instructions. 8 array dereferences, 7 bit shifts, and 7 OR calculations and the overhead of a the function call, just to save 7 XOR calculations? Maybe a good optimizing compiler can address some of that, but I'm doubtful.
So my question was how to make it efficiently ?
If the compiler notices the pattern and merges that stuff into a single 64bit load, it'll be fine. Of course, you could also explicitly tell it to do that.
Interesting. I tried this and got a 5.5x speedup on my system (the code suggests it is 8x faster, but that decays with caching/page faults etc.). However, I doubt very much that this part of your code is slowing you down. Your byte arrays aren't very large for modern computers; mine does a 64-bit XOR of 100 MB in a virtual machine in 0.2 seconds...
what about 2 approach? why it doesn't work correctly?
OK, I found the error in the 2nd approach; now it seems to work faster and correctly, as the image from the first approach was wrong.
Also, if you're going for speed, avoid using dynamic memory: set some max size for your buffer and allocate it on the stack. memcpy() and malloc() have overhead that you can simply avoid by eliminating the use of dynamic memory.
@Lundin: There are plenty of machines with non-8-bit chars.
@rlib "Plenty" is quite an exaggeration. There exist(ed) a few oddball DSPs and some 4 bit computers that were very unsuitable for high-level languages to begin with. Portability to wildly exotic, mildly retarded architectures shouldn't need to be a concern for 99.99% of all programmers.
Calling memcpy is probably not the fastest option for copying 8 bytes.
but how can I copy it in different way?
memcpy(XOR_Bytes_Array + i, &XOR_Bytes, 8); -> *((uint64_t*)(XOR_Bytes_Array + i)) = XOR_Bytes;
Yeah, it seems to be about half again as fast with this instead of memcpy. I'm measuring execution time with clock_t.
So I did an experiment:
#include <stdlib.h>
#include <stdint.h>
#ifndef TYPE
#define TYPE uint64_t
#endif
TYPE *
xor(const void *va, const void *vb, size_t l)
{
const TYPE *a = va;
const TYPE *b = vb;
TYPE *r = malloc(l);
size_t i;
for (i = 0; i < l / sizeof(TYPE); i++) {
*r++ = *a++ ^ *b++;
}
return r;
}
Compiled both for uint64_t and uint8_t with clang with basic optimizations. In both cases the compiler vectorized the hell out of this. The difference was that the uint8_t version had code to handle when l wasn't a multiple of 8. So if we add code to handle the size not being a multiple of 8, you'll probably end up with equivalent generated code. Also, the 64 bit version unrolled the loop a few times and had code to handle that, so for big enough arrays you might gain a few percent here. On the other hand, on big enough arrays you'll be memory-bound and the xor operation won't matter a bit.
Are you sure your compiler won't deal with this? This is a kind of micro-optimization that makes sense only when you're measuring things and then you wouldn't need to ask which one is faster, you'd know.
On my laptop I see the difference, while using uint64_t, I am copying 80-90 images via socket, while using uint8_t I am copying 55-65 images. It seems to do much better
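Pulling the thread's findings together, here is a sketch (an editor's addition, not the asker's final code) that loads each word with memcpy, which avoids both the byte-order bug that broke the images in the first approach and the strict-aliasing/alignment hazards of casting unsigned char * to uint64_t * in the second, and pads by handling the trailing length % 8 bytes one at a time:

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* XOR two byte arrays 8 bytes at a time.
 * memcpy loads/stores keep the bytes in native order (no
 * endianness surprises) and sidestep the strict-aliasing and
 * alignment problems of casting unsigned char* to uint64_t*.
 * Compilers typically turn these memcpy calls into plain
 * 64-bit loads and stores. */
unsigned char *bitwise_xor64_safe(const unsigned char *a,
                                  const unsigned char *b,
                                  size_t length)
{
    unsigned char *out = malloc(length);
    if (out == NULL)
        return NULL;

    size_t i = 0;
    for (; i + 8 <= length; i += 8) {
        uint64_t wa, wb, wx;
        memcpy(&wa, a + i, 8);
        memcpy(&wb, b + i, 8);
        wx = wa ^ wb;
        memcpy(out + i, &wx, 8);
    }

    /* Tail: the last length % 8 bytes, one at a time. */
    for (; i < length; i++)
        out[i] = (unsigned char)(a[i] ^ b[i]);

    return out;
}
```

As the commenters note, for large buffers the operation ends up memory-bound anyway, so the remaining wins come from avoiding malloc/memcpy overhead on the output buffer rather than from wider XORs.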
using urlrewrite for a domain with parameters
I want to rewrite example.com/index.php?post_id=1&seri_id=5 to example.com/seri-name/post-name
I found this:
Store the following in your web roots .htaccess file:
RewriteEngine On
RewriteBase /
RewriteCond %{REQUEST_FILENAME} !-l
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^(.*)$ index.php/$1
In the example above, all URLs are sent on to the index.php scripts.
Fetch the URL inside the index.php like this:
$route = $_SERVER['PHP_SELF'];
$route = substr($route, strlen('/index.php'));
but I didn't understand how to implement the rewrite process. Can somebody help? Is this the best way to do what I want? Is there anything I can edit to improve it?
Technically that'd be PATH_INFO.
Is it the correct way to do what I wanted?
Code to find the existence of file with partial filename
I have a file as c:\sai\chn_20151019_5932.txt. Here 5932 is in minutes-and-seconds format: 59 min 32 s. When we run the package again on the same day, the existing file in the folder should be deleted, but due to the seconds part I am unable to delete the file.
I need C# code something like this.
string filename = @"c:\sai\chn_20151019_5932.txt";
if(filename.exists(@"c:\sai\chn_20151019" + "*")
{
delete.file(filename);
}
Try this:
var dir = new DirectoryInfo(@"c:\sai\");
foreach (var file in dir.EnumerateFiles("chn_20151019*.txt")) {
file.Delete();
}
EDIT:
DirectoryInfo di = new DirectoryInfo(@"c:\sai\");
FileInfo[] files = di.GetFiles("chn_20151019*.txt")
.Where(p => p.Extension == ".txt").ToArray();
foreach (FileInfo file in files)
try
{
File.Delete(file.FullName);
}
catch { }
The EnumerateFiles method can be used only from .NET 4.0 onwards, but I need the code for the .NET Framework 3.5. Can you please give alternative code?
@SAILENDRASUDA:- Updated my answer. Please check now!
If you are using .NET 4.0 or above:
string path = @"c:\sai";
bool exist = Directory.EnumerateFiles(path, "chn_20151019*").Any();
You can use EnumerateFiles which will find files by pattern in specified directory. And then just use File.Delete
Directory.EnumerateFiles("c:\\sai","chn_20151019*",SearchOption.TopDirectoryOnly)
.ToList()
.ForEach(x=>File.Delete(x));
Use using System.IO;, then:
foreach (var f in Directory.GetFiles(@"c:\sai\", "chn_20151019*.txt"))
{
File.Delete(f);
}
If you like to be fancy (using System;):
Array.ForEach(Directory.GetFiles(@"c:\sai\", "chn_20151019*.txt"), File.Delete);
ActionScript 3 - How do I get my String to convert to a Number?
I have a Budget field (called txtBudget) where I want users to enter a number value.
I then want to store what they entered as a variable (totalBudget) so that I can perform calculations based on it later.
My problem right now is that when I go to a different frame, and then return to the frame with the code, the text field displays "NaN". I can't work out why, and it's driving me mental.
On frame 1 I say:
var totalBudget:Number = 0;
Then on frame 14 I tried:
onBudgChange(null);
txtBudget.addEventListener(Event.CHANGE, updateBudget);
function updateBudget(event:Event):void {
totalBudget = Number(txtBudget);
}
function onBudgChange(event:Event):void {
txtBudget.text = totalBudget.toString();
}
And when that didn't work I looked around and saw a post that said my text field wasn't a string, and that I needed a variable to convert it. So I tried this (with no luck):
onBudgChange(null);
var budgetBridge = String(txtBudget);
txtBudget.addEventListener(Event.CHANGE, updateBudget);
function updateBudget(event:Event):void {
totalBudget = parseFloat(budgetBridge);
}
function onBudgChange(event:Event):void {
txtBudget.text = String(totalBudget);
}
I've been pulling my hair out, so any help you can give me would be greatly appreciated.
Right, your textfield isn't a string, it's a textfield. However there's no need for an intermediate variable, all you have to do is using the .text property for reading the value:
function updateBudget(event:Event):void {
totalBudget = parseFloat(txtBudget.text);
}
@Koru.Chico Please check whether his answer works, and mark it as correct if it does.
Linear Combinations and solutions [Columnwise Description of Matrices]
Let $A$ be a $5\times 3$ matrix.
If $b = a_1 + a_2 = a_2 + a_3$ where $a_1, a_2, a_3$ are columns of $A$ then what can we conclude about the number of solutions of the linear system $Ax = b$?
Same Question for: when $b = a_1+a_2+a_3+a_4$
This query is from this Book page number 62
I am stuck at: let $a_1=[1,0,0]$, $a_2=[0,1,0]$ and $a_3=[0,0,1]$ but upon $a_1+a_2$ and $a_2+a_3$ these are not equal. such as $[1,1,0]$ and $[0,1,1]$
What do you think? We are not here to blindly answer your homework questions, but can help you if you are stuck. It may also help to give the page number of the problem in the book so we can read the context surrounding it
The rank of a matrix can be defined in many ways, in particular (i) maximum number of linearly independent rows, (ii) maximum number of linearly independent columns, (iii) maximum size of a square sub-matrix with non-zero determinant. You should be comfortable with all 3 versions and know why they are equivalent. For your immediate purpose, (ii) is the most relevant. The rank of the augmented matrix of a system of linear equations is either the same as, or one more than, the rank of the coefficient matrix. The system is consistent iff the rank of the augmented matrix is the same as the rank of the coefficient matrix. You can take it from there.
$A$ has $5$ rows. Let $\{e_1,e_2,e_3\}$ be the standard basis of $ \mathbb R^3.$
Then we have
$$Ae_j=a_j$$
for $j=1,2,3.$
Can you proceed ?
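A possible way to finish from here (my sketch, not part of the original answer): since $Ae_j = a_j$, the hypothesis $b = a_1 + a_2 = a_2 + a_3$ says
$$A(e_1+e_2) = b = A(e_2+e_3),$$
so $x_0 = e_1 + e_2$ solves $Ax=b$, and subtracting gives $A(e_1 - e_3) = 0$, i.e. $e_1 - e_3$ is a nonzero vector in the null space of $A$. Therefore
$$x = e_1 + e_2 + t\,(e_1 - e_3), \qquad t \in \mathbb{R},$$
is a solution for every $t$, so $Ax = b$ has infinitely many solutions.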
| common-pile/stackexchange_filtered |
require.resolve not resolving dependency
I was working on a Node module where I have to check whether the user has installed a specific package (that's the logic I have to follow) and, if not, install it using child_process.exec.
That's where the problem appears: even though my script installed the package (node_modules contains it, and so does package.json), if I call require.resolve again, it says the module does not exist.
From what I found (in Node GitHub issues), Node has a sort of "cache": on the first call, if the package is not installed, the cache entry for the package is set to a random value which is not the path to it. Then, when you install the package and try again, it returns that cache entry.
Is there any other way to check if packages are installed, or my local node installation is broken?
Take a look into node.js require() cache - possible to invalidate?
It's possible to clean up the require cache, with something like this:
delete require.cache[require.resolve('./module.js')]
One way to check if a package is installed is trying to require it. Please see my code below that verifies if a package is not installed. Just add your code that installs it after the comment now install the missing package.
try {
require('non-existent-package');
} catch (error) {
// return early if the error is not what we're looking for
if (error.code != 'MODULE_NOT_FOUND') {
throw error;
}
// now install the missing package
}
Let me know if this works.
Interesting: if I create a local file with this try/catch, it works. But if I have my module code linked in a dummy project and run this script, it always throws the error (i.e., installs the package).
The module will only be able to require packages inside its own dependency tree, i.e. if you install the missing package on the parent code, the module will not find it.
I've reproduced the following behaviors:
Installing the missing package inside the module: it stops throwing the error
Installing the missing package on the requirer dependency tree: it never stops throwing the error
So, it was not actually a problem on my node... best approach for this would be crawling package.json?
I'm not 100% sure about what you're trying to achieve. Are you trying to install a package on the requirer code? If so, why do you need to do that?
Yes, I'm trying. We are trying to have as few dependencies as possible, but allow the user the option to install something, or not, if needed.
But anyway, thanks for the effort. I've actually discovered something new :)
| common-pile/stackexchange_filtered |
Azure SDK - List Subscription Diagnostic Settings
I want to use this API: Subscription Diagnostic Settings - List, But I have not been able to find it in the Azure SDK.
I tried looking into @azure/arm-monitor but found only DiagnosticSettings, which does not apply to the subscription resource; moreover, it doesn't even have the same return type. I do not know where else to look.
After trial and error, the following is correct:
Use the diagnosticSettings list function, and for the resourceUri pass subscriptions/${subscriptionId}:
const credentials = ...
const subscriptionId = ...
const monitor = new MonitorClient(credentials, subscriptionId)
const resourceUri = `subscriptions/${subscriptionId}` // note the backticks: a template literal, not single quotes
const subscriptionDiagnosticSettings = monitor.diagnosticSettings.list(resourceUri)
| common-pile/stackexchange_filtered |
show jira issues in bitbucket issue tracker
I'm trying to set up some projects in Jira; we have existing Bitbucket repos. Our Jira projects have issues we have put in. Is it possible to view these Jira issues in Bitbucket?
You can link JIRA to BitBucket by following this tutorial: Linking Bitbucket and GitHub accounts to JIRA. This uses the JIRA DVCS Connector add-on which I believe is a free download from the Atlassian Marketplace.
Branches, pull requests and commit messages can reference JIRA issues. I'm not sure whether you can link existing branches etc though unless they reference the issue in JIRA directly.
Is Jira free to use with Bitbucket repo?
No, you purchase a JIRA license based upon the number of users that will need access to it. See https://www.atlassian.com/software/jira. Bitbucket does give some basic issue management though, see https://confluence.atlassian.com/bitbucket/issue-trackers-221449750.html
| common-pile/stackexchange_filtered |
Is using GIADDR when relaying DHCP requests beneficial?
I am setting up three Juniper EX3200 switches simply named 1,2, and 3.
Switch 2 will have two VLANs, one being the internet uplink from the Firewall and one being the gateway for the Workstations.
Switch 1 will be on a VLAN on the same subnet as Switch 2.
Switch 3 will have another VLAN, which will be the gateway of the servers and be on a different subnet.
With the background out of the way, my question is: would setting the switch to change the source address of the request packet to the GIADDR be beneficial in any way, or would it route fine if I just left the setting alone?
I have never heard the term GIADDR before... and why are you trying to do NAT with an L3 switch?
@ToryNewnham are you talking about this feature?
| common-pile/stackexchange_filtered |
Account Linking redirect in Actions on Google simulator
I'm building an app utilizing the new Action on Google Java API. As I understand from dealing with account linking in Alexa, the initial flow (when the userId in the JSON request is null) should redirect to a sign in form to elicit user consent:
@ForIntent("RawText")
public ActionResponse launchRequestHandler(ActionRequest request) {
String userId = request.getAppRequest().getUser().getUserId();
String queryText = request.getWebhookRequest().getQueryResult().getQueryText();
String speech = null;
ResponseBuilder responseBuilder = getResponseBuilder(request);
if (isBlank(userId) || GREETING.equalsIgnoreCase(queryText)) {
speech = "I've sent a link to your Google Assistant app that will get you started and set up in just several simple steps.";
responseBuilder.add(
new SignIn()
.setContext(speech));
//...
return responseBuilder.build();
While testing in the AoG Simulator, however, I'm not seeing any redirection being done. I'm seeing the following error:
My account linking setup:
where the authorization URL redirects to a local mock auth service which is supposed to display a login form. It's accessible (both via localhost and via an ssh tunnel, provided by a serveo.net reverse proxy in this case). Why doesn't Google redirect me there?
Can someone please guide me how to do this initial handshake in the account linking flow and where can I see the form which the Sign-In intent sent from the web hook is supposed to trigger?
I'd rather not use my phone, as the error message seems to suggest, as the account under which I'm testing in AoG simulator differs from my user ID on the phone.
What is meant by using Simulator as a Speaker? What is missing in my setup?
Is there another Google app that simulates the physical device better, similar to Alexa's simulator?
| common-pile/stackexchange_filtered |
CSS not getting applied to <label>
I am developing a form and setting a size for the label and aligning it right so as to look decent. However the CSS is not getting applied. Here is my code:
label{
width: 50%;
text-align: right;
}
Fiddle
How do I get this working?
it is working, mate => http://jsfiddle.net/46xF5/2/ ... you have a problem in your layout; change the DOM :)
here you go http://jsfiddle.net/46xF5/3/
try (i test in your fiddle)
display:inline-block;
fiddle demo
Can you tell me why do we need to use this?
By default, some HTML elements have default CSS values: for example, p has margin-top and margin-bottom, and body has a default margin. label has display: inline, and you can't set a width on a display: inline element, because it's inline.
One follow up question: How do I get the button at the center? margin: 0 auto; isnt working. http://jsfiddle.net/46xF5/8/
I updated your fiddle: http://jsfiddle.net/46xF5/9/. When you don't set a width for the element (or use float), margin: 0 auto won't work; better to use the <center> tag, or set text-align: center on the parent div.
I am not using the <center> tag because it is not recommended. Anyway, margin-left: 50%; solved the problem. Thanks!
@RahulDesai: no, don't use the <center> tag. Use it like this: http://jsfiddle.net/46xF5/10/ . Put your button inside the div and align it to center.
No, no, that doesn't center your element; use text-align: center on the parent tag. margin-left: 50% is not cross-browser.
You could make it an inline-block element:
UPDATED EXAMPLE HERE
label{
width: 50%;
text-align: right;
display:inline-block;
}
You need to do this because a label is an inline element, and the following applies to it:
10 Visual formatting model details - 10.2 Content width: the 'width' property
This [width] property does not apply to non-replaced inline elements. The content width of a non-replaced inline element's boxes is that of the rendered content within them (before any relative offset of children). Recall that inline boxes flow into line boxes. The width of line boxes is given by the their containing block, but may be shorted by the presence of floats
fiddle : http://jsfiddle.net/46xF5/7/
change the css code like this
body{
font-family: "Arial", Helvetica, sans-serif;
}
/*changes here*/
#book_tickets label{
width: 50%;
display:inline-block;
text-align: left;
}
Like this
demo
css
label{
width:50%;
text-align: right;
border:1px solid red;
display:inline-block;
}
Add display:inline-block to your label. Fiddle
| common-pile/stackexchange_filtered |
Expectation of Quadratic Form with Two Random Vectors
Assume I have two independent $(N \times 1)$ random vectors, $\epsilon_{1} \sim N(0,\Sigma_1)$ and $\epsilon_{2} \sim N(0,\Sigma_2)$.
We could assume $\Sigma_1=\Sigma_2$ for my purposes but a general answer would be better.
What is the solution then of
$\mathbb{E}\big[\epsilon_1^{\prime} A \epsilon_2 \epsilon_2^{\prime}A^{\prime}\epsilon_1\big],$
where $A$ is any $(N\times N)$ matrix?
Use the independence of the $\epsilon_i$ to evaluate the expectation over $\epsilon_2\epsilon_2^\prime$ first, thereby simplifying the question to a well-known problem.
Thanks @whuber ! You're suggesting I can apply the quadratic form twice? So the answer would be $trace(A \Sigma_2 A^{\prime} \Sigma_1)$?
Let's start with what's well known: when $B=(b_{ij})$ is any square matrix and $x$ is a zero-mean vector with covariance matrix $\mathbb{E}(xx^\prime)=\Sigma,$ then the definition of matrix multiplication and linearity of expectation imply
$$\mathbb{E}(x^\prime B x) = \mathbb{E}\left(\sum_{i,j} x_i b_{ij} x_j\right) = \sum_i \sum_j b_{ij}\mathbb{E}(x_ix_j) = \sum_i (B\Sigma^\prime)_{ii} = \operatorname{Tr}(B\Sigma^\prime).$$
In the circumstances, linearity of expectation and the independence of the $\epsilon_i$ reduce the problem to the previous result with $B=A\Sigma_2A^\prime$ and $x=\epsilon_1:$
$$\eqalign{
\mathbb{E}\left(\epsilon_1^\prime A \epsilon_2 \epsilon_2^\prime A^\prime \epsilon_1\right) &= \mathbb{E}\left(\mathbb{E}\left(\epsilon_1^\prime A \epsilon_2 \epsilon_2^\prime A^\prime \epsilon_1 \mid \epsilon_1\right)\right) \\
&= \mathbb{E}\left(\epsilon_1^\prime A\mathbb{E}\left( \epsilon_2 \epsilon_2^\prime \right) A^\prime \epsilon_1\right) \\
&= \mathbb{E}\left(\epsilon_1^\prime A\Sigma_2 A^\prime \epsilon_1\right) \\
&= \operatorname{Tr}\left(A\Sigma_2 A^\prime\ \Sigma_1^\prime\right).
}$$
You are free to transpose either of the $\Sigma_i$ or not, because--as covariance matrices--they are symmetric.
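As a numerical sanity check (my addition, not part of the answer above): taking $\Sigma_1=\Sigma_2=I$ in a $2\times 2$ case, the formula reduces to $\operatorname{Tr}(AA^\prime)$, the sum of the squared entries of $A$, and a pure-Python Monte-Carlo estimate agrees:

```python
import random

def mc_quadratic_form(n=100_000, seed=1):
    """Monte-Carlo estimate of E[e1' A e2 e2' A' e1] for a 2x2 example
    with Sigma_1 = Sigma_2 = I (independent standard normals)."""
    rng = random.Random(seed)
    A = [[1.0, 2.0], [0.5, -1.0]]
    total = 0.0
    for _ in range(n):
        e1 = (rng.gauss(0.0, 1.0), rng.gauss(0.0, 1.0))
        e2 = (rng.gauss(0.0, 1.0), rng.gauss(0.0, 1.0))
        # q = e1' A e2 is a scalar, so the quadratic form is just q^2
        q = sum(e1[i] * A[i][j] * e2[j] for i in range(2) for j in range(2))
        total += q * q
    return total / n

# With identity covariances, Tr(A A') is the sum of squared entries of A:
# 1 + 4 + 0.25 + 1 = 6.25, which the estimate should approach.
```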
| common-pile/stackexchange_filtered |
Cup product of Euler classes: What happens to the corresponding bundles?
It should be clear that $H^k(B)$ corresponds to the set of all $\mathbb{S}^{k-1}$-bundles over $B$ up to isomorphism via the Euler class. For $k=2$, the addition of Euler classes should correspond to the fibrewise complex tensor product of the bundles (completed to complex line bundles).
Is there any correspondence to the cup product of Euler classes? E. g. for two Euler classes of $\mathbb{S}^1$-bundles, the cup product should be the Euler class of a $\mathbb{S}^3$-bundle. My first guess would be the fibrewise suspension of the fibrewise smash product?
Is there any good literature about this?
It is not true that $H^k(B)$ corresponds to the set of all $\mathbb S^{k-1}$ bundles up to isomorphism. For example there are $\mathbb{Z}$ non-trivial oriented $\mathbb S^{2}$ bundles on $\mathbb{S}^4$ coming from oriented three dimensional vector bundles as $\pi_3(SO(3))=\mathbb{Z}$. I use here the clutching construction to classify vector bundles over spheres, see for example Hatcher's notes.
The euler class behaves nicely with respect to direct sums over vector bundles: If $E$ and $F$ are oriented, then so is $E\oplus F$ and
$$
e(E\oplus F)=e(E)\cup e(F).
$$
Ah, of course. This correspondence only works for $k=2$ since $\mathbb{S}^1$ is a $K(\mathbb{Z},1)$, right?
Yes, and $SO(2)$.
| common-pile/stackexchange_filtered |
Raspberry pi network over usb
Hi I am very new to the Raspberry pi but I have experience with the beaglebone.
I am just wondering if it's possible to do something like network-over-USB as on the BeagleBone, but for the Raspberry Pi, using the micro USB power connector, since it would be much easier to connect over SSH and power up the board in one go.
The Pi's power connector is power-only, so it can't be used for networking purposes. A couple other things to try:
Set up the Pi to connect to a hotspot created by (say) an Android phone, connect your laptop as well, then ssh over the phone's local network. This is how I used my Pi headless for quite a while.
Connect an ethernet cable directly between your pi and your laptop. (The Pi automatically detects if it's connected directly to a computer, and plays nice if it is!) You'll need to configure both the Pi and the laptop to use a static IP address over the ethernet network, but then you can SSH over the ethernet cable quite easily.
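For the static-IP option just described, here is a sketch of what the Pi side might look like in /etc/dhcpcd.conf (assuming a Raspbian image that uses dhcpcd; the subnet is an arbitrary choice):

```
# /etc/dhcpcd.conf -- static address for the direct ethernet link
interface eth0
static ip_address=192.168.10.2/24
```

Then give the laptop's ethernet interface an address in the same subnet, e.g. 192.168.10.1/24, and SSH to 192.168.10.2.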
I don't know about beaglebone, but it is possible to network the Pi using ssh or VNC.
ssh used to require configuration in raspi-config, but I believe is now on by default.
VNC requires installation
sudo apt-get install tightvncserver
You can't use the micro USB connector as the data is not connected (only power pins).
| common-pile/stackexchange_filtered |
Re-display invalid data entry when expecting a number upon validation error
Let us say that the user submits a form. One of the fields was invalid, so the form gets re-rendered. When in the process of re-rendering the form: I check to see if the submitted value for that attribute was invalid. If so: then show that invalid data entry with some additional text.
Here is what I have so far. It is close but isn't working completely:
<%= form_for(blog) do |f| %>
<div class="field">
<%= f.label :age %>
<% if f.object.age %>
<% if f.object.errors.include?(:age) %>
<%= f.text_field :age, value: "Submitted: #{f.object.age} was bad" %>
<% else %>
<%= f.text_field :age %>
<% end %>
<% else %>
<%= f.text_field :age %>
<% end %>
</div>
<div class="actions">
<%= f.submit %>
</div>
<% end %>
So I am saying: "Hey Rails: if the user has already submitted this form, and the value the user submitted for this attribute of age was invalid, then re-render the submitted value within the input, along with some additional text."
The issue I am having is that it will not show the error input correctly. For example: if the user inputs "abcd" into the input field, then when the form re-renders upon failing validations, the input field should say: "Submitted: abcd was bad". Instead it says "Submitted: 0 was bad".
I am close. The ultimate question is: Why is it displaying 0? Why isn't it displaying "abcd"? The next question is: "How can I make it display "abcd"?
Is this behavior because I have the following validation? :
validates :age, numericality: true
If so: how can I have the validation to ensure it is a number, but also re-display the value the user entered?
Have you considered using simple_form? It handles all of this for you out-of-the-box; placing error messages below the relevant input fields.
@TomLord I am more just trying to figure out this odd behavior and making it work as expected. I agree the implementation above could be better, but I just want to be able to display the submitted value, as opposed to displaying 0 for whatever reason it is doing that.
This works. Just grab the submitted value from the params hash:
<div class="field">
<%= f.label :age %>
<% if f.object.age %>
<% if f.object.errors.include?(:age) %>
<%= f.text_field :age, value: "Submitted #{params[:blog][:age]} is bad" %>
<% else %>
<%= f.text_field :age %>
<% end %>
<% else %>
<%= f.text_field :age %>
<% end %>
</div>
It appears that f.object.age does not work because the age attribute is of type integer. Because of this, within the writer method for the age attribute:
def age=(value)
...
end
Somewhere within that setter what probably happens is that rails attempts to convert the submitted value with: to_i. When you do to_i on a string it returns 0, so when you attempt f.object.age it will return 0 because the writer attempted to convert the string.
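The to_i coercion described above is easy to see in plain Ruby (no Rails involved):

```ruby
# String#to_i gives 0 when the string has no leading digits,
# which is why a non-numeric submission reads back as 0.
p "abcd".to_i    # 0
p "42abc".to_i   # 42 -- leading digits survive, the rest is dropped
```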
The other tricky thing is the text_field method. Rails has some magic going on in there. For example: if there was nothing special going on: the following would successfully display the submitted value as opposed to 0:
<%= f.text_field :age %>
So somewhere within the text_field method, rails looks to see if that attribute of age was invalid, and if it was, then it specifies as the value the submitted value.
Without digging into the Rails source code, it is tough to say how text_field figures out whether the attribute was previously submitted and whether the submitted value was invalid. If one dug into the Rails source and figured out how text_field displays the submitted value (e.g. 'abcd', as opposed to the converted value from the setter, e.g. '0'), the conditionals above could surely be cleaned up.
Nonetheless: the above code works and does what is desired.
This is failing because you're calling f.object.age in an interpolated string, not the value submitted.
Since you're validating the numericality of age, your 'abcd' input will never pass validation and therefore would never be saved.
So when you're calling .age on that object, you're only getting the default value, or the last value that passed validation and was saved. Your 'abcd' string would never have been saved.
An easier way to show the error would be to do:
f.object.errors.full_messages_for(:age)
You could also try adding a custom error message:
validates_numericality_of :age, message: '%{value} is not a valid age'
so how do I display the value submitted? Ex: input of 'abcd'?
You would have to figure out a way to pass the bad string to your view. It would be much easier to just display the default error message and omit the invalid user input.
It is odd to me that this is so difficult. If I did not have any of that additional input text and just left it like this: <%= f.text_field :age %> then when it fails validations it properly re-displays whatever value was submitted. How is that able to grab the submitted entry upon failed validation? Maybe if I figured that out then I could apply it to this situation. Seems like Rails is doing some magic...
See the edit about adding a custom error message. That should do the trick.
That does not give me the ability to redisplay the users submitted value within the form input. I want to know how I can grab the users submitted value and redisplay it within the input.
| common-pile/stackexchange_filtered |
Creating a default object for Entity Framework navigation property
I have a class Client as it follows
public class Client
{
public int UniqueLoginsPerDay { get; set; }
public virtual Limit Limit { get; set; }
public int LimitId { get; set; }
}
And then I have a Limit class
public class Limit
{
public string Email { get; set; } = "<EMAIL_ADDRESS>";
public int UniqueLoginsPerDayLimit { get; set; } = 200000;
public virtual Client Client{ get; set; }
public int ClientId { get; set; }
}
What would be the most sensible way to attach the "default" Limit object to my Client entity?
I am using EntityFramework.Core 7.0.0-rc1-final
I am aware of this question, the solution given there is not applicable to EntityFramework.Core 7.0.0-rc1-final.
That sounds like business logic to me. Could you move it higher than the data access classes?
I came up with a solution where I moved that to a higher level. I was wondering: is there any good reason to do that in the data access layer, and if so, what is it?
| common-pile/stackexchange_filtered |
Have numerosity-preserving functions between structures been studied before?
Isomorphisms are bijective functions between structures that preserve the constants, relations, and functions of the structure. I wonder, has a notion of "strong isomorphism" between structures been studied before in the mathematical literature? By strong isomorphism, I mean a numerosity-preserving function between structures that preserve the constants, relations, and functions of the structure. So, for example, the structures $(\mathbb{N};<)$ and $(\mathbb{N}^+;<)$, where $\mathbb{N}$ and $\mathbb{N}^+$ represent the set of nonnegative and positive integers respectively, are isomorphic but not strongly isomorphic, because they do not have the same numerosity, as $\mathbb{N}^+$ is a proper subset of $\mathbb{N}$. Basically, I am asking if numerosity-preserving functions have been studied.
What do you mean by "numerosity?" That's not a standard term.
@NoahSchweber I mean, a notion of size of sets that is finer than cardinality. That notion respects Euclid's notion of "the whole is greater than the part". Finite cardinalities are the same as finite numerosities, but in the case of infinite sets, numerosities are strictly finer than cardinalities.
Are numerosities linearly ordered? How do the numerosities of the even vs. odd integers compare? This needs to be a lot more precise.
Numerosity is not a mathematical word, what does it mean here? What if they are entirely different sets? This strikes me as not useful.
For example, is $\mathbb N$ more or less numerous than $\{1\}\times \mathbb N$?
In any event, without a definition of "more numerous," it is undefined.
The key failing I can see is that the composition of strong isomorphisms isn't a strong isomorphism. That makes it nearly completely useless for understanding a structure.
| common-pile/stackexchange_filtered |
How to infer dayfirst for dateutil.parser.parse based on pytz timezone to parse string date
Trying to parse string dates to datetime.date that might be in the format DD/MM/YY or MM/DD/YY. The only info I have is the pytz.tz to help me figure out the format of the string date.
I am using dateutil.parser.parse which has a parameter dayfirst=.
But I don't know how to infer if dayfirst should be true or false given a timezone.
The human readable format of a date has absolutely nothing to do with what timezone it is in. You should be assuming a date format based on where it is coming from, knowing that a particular date source provides dates in a particular format. If you had a lot of dates to look at that you know are of the same format, you might deduce which format you have by looking for which place contains values that are only between 1 and 12 and which are sometimes bigger than 12, but that's a hack. It's very unusual for you to have to parse dates not knowing their format in advance.
@CryptoFool Your first sentence is wrong. https://en.wikipedia.org/wiki/Date_format_by_country
The same source outputs a different format depending on the country it was extracted from and I have no control over it. The only info I have is the timezone for which this date is for. The file might have only one date so I can't infer the format based on that unfortunately.
the date format is related to the time zone in some way (as the linked Wikipedia page suggests), but the attribution of format to time zone won't be unambiguous - that's what the table on said page clearly demonstrates. So basically you could define a mapping tz -> format, but that would just be a best-guess.
| common-pile/stackexchange_filtered |
double tabs printed when writing single tab in c++ strings
I'm using strings in C++ with tabs inside. When the word before the tab character is less than 7 letters, a normal tab is printed; when it's more, it prints double tabs. Can anyone tell me the solution? I'm using Linux and C++98.
the line that prints
code
bool list::search_str(LIST_TYPE &s1 , string &output){
output = "";
if(head ==NULL)
return false;
node * temp = head;
while(temp){
if(strstr(temp->get_data().c_str() , s1.c_str())){
output += temp->get_data() + ":" +"\t"+"lines "+ temp->get_line_numbers() + "\n";
}
temp = temp->get_next();
}
return !output.empty(); // the function is declared bool, so report whether anything matched
}
Can we see your code?
your question is not 100% clear. Show what you are printing and what the output is.
this is the output
http://imgur.com/eXEf7um
this is the code just finding some words in a list
it's the same line that prints them all but different spaces each time as shown in the image
output += temp->get_data() + ":" +"\t"+"lines "+ temp->get_line_numbers() + "\n";
Not sure, but I suspect you're seeing normal tab behavior.
Can't I make it all the same length? I debugged the code; it's always the same format, but when printed it comes out like this.
Could you add your code to the question post itself? It's unclear to read it from the comments.
use spaces with mono font if you want the same length
i added the code to the original post
It sounds like you are using tabs without actually understanding what they are.
That's how tabs work: a tab in the output character string moves the output position to the next tab position. It looks like your console is set for 8-character tabs, so you have tab positions at 8, 16, 24, 32, etc. When you print something that's less than 8 characters followed by a tab, the output position moves to the 8th character position. So:
012345678
names:  lines 489
when you print something whose length is between 8 and 15 characters followed by a tab, the output position moves to the next tab position, which is at the 16th character position. So:
01234567012345678
program:        lines 330 404
If you want to put things in fixed positions you have to use character strings with embedded spaces rather than tabs, or you can use the manipulator std::setw(n) to set the width of the output field to at least n characters.
This is literally the purpose of tabs. A tabspace "skips" to the next available tab column. It won't go backwards to overwrite the preceding text. Obviously if the preceding text runs over tabs 1 and 2, the next available tab is 3.
Don't use tabs if you don't want their behaviour, or if you don't understand what they do. Use fixed-space padding based on the expected width of your text columns, perhaps. For that, you'll have to precalculate what that width is.
But just randomly interjecting \t characters into your output doesn't magically rearrange the entire output before and after into an aligned tabular format.
Don't use tabs. Tabs can expand to 0, 2, 3, 4, 8 spaces or space over to the next tab stop.
Using std::ostringstream and the setw I/O manipulator.
std::ostringstream output_stream;
//...
output_stream << temp->get_data()
<< ":"
<< setw(32)
<< "lines "
<< temp->get_line_numbers()
<< "\n";
output = output_stream.str();
Also look up the right and left justification manipulators.
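For instance, a self-contained sketch along these lines (the 16-character field width is an arbitrary choice of mine) that left-justifies the first column so every row lines up:

```cpp
#include <iomanip>
#include <sstream>
#include <string>

// Left-justify the first column in a fixed-width field instead of
// relying on tabs, so the "lines ..." column always starts at the
// same position regardless of the name's length.
std::string format_row(const std::string& name, const std::string& lines) {
    std::ostringstream out;
    out << std::left << std::setw(16) << (name + ":") << "lines " << lines;
    return out.str();
}
```

With this, format_row("names", "489") and format_row("program", "330 404") both start the "lines" column at position 16.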
I'm not sure why this is happening, but if you want a fixed number of spaces to be output (1 tab is 4 spaces), then you can just do:
output += temp->get_data() + ":" + " " +"lines "+ temp->get_line_numbers() + "\n";
What you're experiencing is probably standard behaviour. If you want to learn more about it, read the C++ ISO standard (C++11 or C++14).
| common-pile/stackexchange_filtered |
Ubuntu Touch SDK - qmake problem
I've recently tried installing the Ubuntu SDK but with no luck!
I followed the Developer Instructions, and when I launch the QtCreator Ubuntu SDK and create a new project, I don't have any options for creating a new Touch project.
Also worth noting that some options on the left are greyed out. I have done my research and followed other answers for this, but had no luck. I've also seen a bug for it on Launchpad and followed their steps.
In Terminal when I run: which qmake I get: /usr/bin/qmake
When I run:
qmake -v
I get:
qmake: could not find a Qt installation of ''
As detailed in the Launchpad bug report, I've deleted config files and tried adding it manually.
If anyone has any ideas or are also having this problem, please let me know.
Thanks in advance.
Also worth noting: I have done the classic "Have you tried turning it off and on again"
do you have qt5-default installed?
Yes, I ran sudo apt-get install qt5-default and it was already there. I have also done a complete purge of the Ubuntu SDK and started fresh, but still no luck :(
It appears to have fixed itself. I just ran a system update today and it seems to have worked!
Can't really give a fix for this problem, but I'll have a look through the updates. Might have been a packaging issue?
Thanks devs for the great software! :)
| common-pile/stackexchange_filtered |
Bootstrap 4 Select2 jQuery Not working in Ajax Calls
I am using Bootstrap 4 and the Select2 jQuery plugin on my webpage. It works perfectly on my site, but when I load another page containing a select input with the Select2 class via an Ajax call, Select2 is not applied.
The plugin is initiated on the main page as:
<script type="text/javascript" language="javascript">
$(function () {
//Initialize Select2 Elements
$('.select2').select2()
$('.select2-selection').css('border','0px')
$('.select2-container').children().css('border','0px')
//Initialize Select2 Elements
$('.select2bs4').select2({
theme: 'bootstrap4'
})
})
</script>
My Ajax-loaded page contains the input element below:
<select class="select2" style="width: 100%;" id="emp" name="emp">
<option value="">-- Select employee --</option>
<option value="E001">Employee 1</option>
<option value="E002">Employee 2</option>
</select>
I am unable to figure out why it is not working on the Ajax-called page. Can anybody please give an answer? Thanks in advance.
check this one related your question
https://stackoverflow.com/questions/28674228/select2-jquery-plugin-not-working-after-ajax-calling-for-html-content-change
Thanks for the information. I have tried your reference but still not working for my case. Can you please post an answer with code? Thanks in advance!
In my opinion, you can call the select2 plugin initialization again after the Ajax call completes.
Try it; maybe that fixes the issue.
Hope it helps. thanks
Thank you my friend. It works now. I called / initiated the select2 plugin along with the Ajax call as you suggested and got it working perfectly!!!
| common-pile/stackexchange_filtered |
Combining non-contiguous polygons in ArcGIS Pro
I am trying to combine non-contiguous polygons that are close but not touching into one shape, as if you were to draw a line around these five parts connecting them into one. Originally these were part of a multipart feature that I exploded. I have tried Merge (which combines the features but does not physically dissolve the boundary), Aggregate, and Dissolve. I have hundreds of shapes to do this with, but I attached a screenshot of five polygons as an example.
Does the Eliminate tool give you a good solution?
Another option is to use the Snap (Editing) tool. Pick a maximum distance that polygons can be apart before they're considered separate; that is used in the snap environment distance parameter. Note you may wish to use the Densify (Editing) tool first to add some extra vertices (with a distance less than the maximum snap distance).
Note that both Densify and Snap modify the input data, so make a backup or run the tools on a copy of the data.
You can then dissolve if you like.
E.g.
Input:
Densify:
Snap:
Output:
Dissolved:
It sounds like you want something like the concave hull of these shapes.
Your best option in ArcGIS Pro appears to be the Minimum Bounding Geometry tool. The issue here may be that this only outputs convex hulls rather than concave hulls. (I haven't tested the tool myself, so I can't be sure it won't work for your situation.)
I also found reference to a suite of community tools that might work.
If you have access to PostGIS, ST_ConcaveHull should achieve what you are looking for.
| common-pile/stackexchange_filtered |
Suspending a bash script running in the background
I have a bash script running in the background, and it executes in a loop every 5 seconds. This affects the commands that are being executed at the shell itself.
Is there a way I can suspend the script's execution while the shell is executing other commands?
thanks
How does it affect the the shell itself? Maybe you just need to redirect the output of the background process?
To stop the script you can use
kill -s STOP $PID
where $PID contains the process ID of the script. To start it again, you use
kill -s CONT $PID
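Putting the two signals together, a minimal sketch (using `sleep` here as a stand-in for the looping script):

```shell
# Start a long-running job in the background and remember its PID.
sleep 60 &            # stand-in for the 5-second-loop script
PID=$!

kill -s STOP "$PID"   # suspend the job while you work at the shell
# ... run other commands undisturbed ...
kill -s CONT "$PID"   # resume the job
kill "$PID"           # terminate it when finished
```

A stopped process shows state `T` in `ps`, which is a quick way to confirm the suspension took effect.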
Renice should help in such a scenario.
Using renice you can decrease/increase the priority of the process.
Look at the man page for more detailed help.
| common-pile/stackexchange_filtered |
How advanced would Lanka under Ravana (and thus the Rākṣasas/Yakṣas) have to be to supply a 100 billion population of meat eaters?
The population of Lanka under Ravana is 100 billion, over 10 times the world population now (see my answer here for details), in the space of one not-that-big country, and the population eats meat. From what I found, the population density of the world's densest city times the area of Sri Lanka is less than 3% of the population of Lanka under Ravana (41515 * 65610 = 2.72 billion).
How advanced would Rākṣasas/Yakṣas have to be to stay consistent with their description in the Ramayana? The population is probably a mix of Rākṣasas/Yakṣas, matching their leaders' races. The sizes for Rākṣasas are immense, enough to cover mountain ranges. Yakṣas resemble cranes (the birds) and are also bigger than humans, but to nowhere near the same extent: they are as tall as a palmyra-palm.
Vaisampayana continued, 'Hearing these accursed words couched in harsh syllabus,[2] Yudhishthira, O king, approaching the Yaksha who had spoken then, stood there. And that bull among the Bharatas then beheld that Yaksha of unusual eyes and huge body tall like a palmyra-palm and looking like fire or the Sun, and irresistible and gigantic like a mountain, staying on a tree, and uttering a loud roar deep as that of the clouds.
The residents have to be pretty comfortable there, as there is nothing preventing them from expanding to other areas controlled by Ravana.
By the way, I think it makes a lot more sense if they can warp space to give bigger-on-the-inside houses, but that's more advanced than other explanations like just antigravity for the structural weight (which would be an issue with warping space as well).
Sorry, my mistake for not noticing the pretty exact size for the Yakṣas earlier.
https://www.wisdomlib.org/hinduism/book/the-mahabharata-mohan/d/doc7595.html
For those of us that are not Hindu, please explain all races, lands, and governments involved
@MichaelKutz I doubt Hindus would know most of this. Lanka is just Sri Lanka on Earth. It does not really make sense for the races to be anything but Rākṣasas/Yakṣas as Ravana and Kubera are such respectively. Yakṣas are smaller and resemble cranes. They both eat meat as stated.
Since you are asking about the Ramayana, I am afraid this falls into the bucket "question about third party worlds"
As other members have said, this seems a little off-topic for Worldbuilding, and possibly unclear in its current state.
You meant million right?
@user6760 Definitely billion. 10,000 kotis is 10,000 times 10 millions.
You may want to clarify that you are creating a setting for a story based loosely on the Ramayana, and are trying to design how such a nation would function, NOT that you are simply asking how to explain a religious text.
@DWKraus - Space travel was one of Ravana's most famous accomplishments - the Sri Lankans named their first satellite after him. So I think you have a decent answer there.
@MikeSerfas He actually stole arguably the greatest flying machine from his brother, so Kubera should get that credit. Although Kubera may have gotten/improved upon it from the previous rulers of Lanka https://www.wisdomlib.org/definition/pushpaka
I’m voting to close this question because this appears to be a question about how a world of existing myth works, rather than worldbuilding as such.
@ZeissIkon It's mostly because I get annoyed with inaccurate depictions of the other races in Hinduism in media. That's not a religiously ordained thing, I just don't like it.
@ZeissIkon many elements of WB questions (not closed) are derived from Western myth and Western culture. We're answering e.g. questions about merfolk and dragon physiology, or flat earth models, cyberpunk scenarios, or Matrix-like realities. These ideas are based on existing ancient and/or modern myth. This scenario (context) involves a Hindu mythological context; the question is WB. I don't see a close reason.
A couple centuries ahead of today:
That IS a lot of bodies to feed. But in the end, it just comes down to finding the real estate and food so everyone is reasonably comfortable. Scientists today are already working out how to do what you are trying to accomplish.
First, let's set some guidelines. I'm assuming no implausible magic. Lots of speculative literature suggests the Ramayana was discussing advanced aircraft, atomic weapons and missiles, stealth technologies, and the like. Rakshasas are described as great illusionists, shapeshifters, and able to interact with humans. So I'll say for this reason rakshasas are approximately human in relative size, and any giant ones are a function of illusion (holograms, etc.) or technology (like battle armor). All "magic" is explainable under the "Any sufficiently advanced technology" clause of worldbuilding.
Most of the problems you face are really engineering ones. Yaksas are guardians of secrets and treasures underground, and they are building vast networks of tunnels to create all the real estate they need. Lanka can spread out underground under the ocean. Geothermal energy can provide all the power they need to operate vast hydroponic farms underground, and the air is circulated with the plants to renew the oxygen supply. Hydroponic algae is fed mostly to insects (meat) which form the basis of the local food production. Elites can eat fancier, but this gets you abundant cheap meat and lots of real estate.
Your Rakshasas are always hungry, and they are perpetually trying to expand their systems to compensate. They have constructed a vast arcology encompassing most of the island of Lanka. The tower of this thing extends thousands of stories up and provides all the living space you need. This fits well with the designation that the fortress of Ravana was 13 km tall. Atop this is a network of space elevators and a couple of vast pipes continuously pumping up carbon dioxide, water, and volatilized organics plus minerals needed for space-based hydroponics. Since the best space elevator locations would be along the equator, this fits well with the positioning of Lanka south of Sri Lanka along the equator. Solar power stations in orbit provide all the power these facilities (and the people on the surface) need to run their civilization. Food is run down the gravity well in an endless stream (again, likely mostly in the form of insect proteins raised on hydroponic plants).
Any time you need more, simply add orbitals, dig horizontally or vertically, and add floors to the arcology.
While Rakshasas can downsize themselves for convenience, they also have a gigantic form, in size like mountain ranges, which is probably their true form as it is the only form they rapidly shapeshift into. (As a side note, Sanjaya is probably giving Karna too much credit. I remember hearing this story told other ways, where Ashvattama does literally everything, until Karna gets an entire Akshauhini of his own side killed.) https://www.wisdomlib.org/hinduism/book/the-mahabharata-mohan/d/doc493167.html
That's why I think the residents make more sense to be mostly Yakṣas, as on their home turf they will take their natural form to be more comfortable and you could not fit 4 billion natural form Rākṣasas on the battlefield. Also, the city of Lanka used to be part of the Yakṣa empire, so it would make sense for the city to continue being made of that race. Although Yakṣa and Rākṣasa are pretty close-knit so there would be a mixture https://en.wikipedia.org/wiki/Lanka
This means in addition to killing more humans than all other races combined, Vishnu kills more Devas than Asuras, breaking all the stereotypes.
@AupakaranaAbhibhaa The rakshasas were a mixed bag; Ravana's brother betrayed him and led to his defeat, supporting Rama. They went back and forth a lot like that. The Yaksas would have made the underground empire, while the rakshasas built over top with the arcology. Not sure you can physically fit 4 billion mountain-sized life forms on the surface of the planet.
A few decades more advanced.
You need three things for the civilization to do well. Housing, food, and power. There are a host of associated technologies, like recycling, water efficiency and such they need, but those are relatively easy to make.
They need housing.
We could house a lot more people than we do. They would need hosts of skyscrapers piercing the skies and mines diving deep into the earth to house their vast population. We have just begun developing such buildings; theirs will likely be larger and more common.
I imagine everyone will live in places like this, with poorer hovels around them.
With these, they can have a massive population density, far beyond even Sri Lanka, with their powerful construction technology and building ability.
They need meat.
We are close to being able to grow meat, but not quite there. They need the ability to grow vast amounts of meat for their population. This reduces energy loss and increases efficiency. No need for herds of cows when you can have herds of biovats.
They need power.
Running the air systems, lighting, weather cancellation systems, water purification systems, crop growing vats and everything will be massively expensive. As such, they need a cheap and plentiful power source.
Nuclear fusion is that. They've presumably mastered the fires of the sun and can power a nigh-infinite number of devices with isotopes purified from plain water.
| common-pile/stackexchange_filtered |
Failed to install Tk module of Perl in Windows7
At first I tried to use ppm and cpanm to install the Tk module, but the download failed for a reason I don't know (yet I can install the image module). So I tried another way: I downloaded Tk-804.030 from the CPAN website and unpacked it. Then I typed `perl Makefile.PL` in cmd; however, there were many errors. Then I remembered to read the README.txt, where I found the following, which frustrated me.
When you install ActivePerl, it provides patched C runtime as PerlCRT.dll
which it installs in the "system32" directory.
This needs "administrator" rights on NT.
It also provides the import library PerlCRT.lib, but this is installed
in an odd location e.g. C:\ActivePerl\lib\CORE\PerlCRT.lib
where it is not found by MakeMaker or VC++. I copied it to C:\VisualStudio\VC98\lib\PerlCRT.lib
(Your paths may vary depending on where you installed ActivePerl and VC++.)
I could not find PerlCRT.dll or PerlCRT.lib on my computer. I googled and found a PerlCRT.dll that could be downloaded, but I couldn't find PerlCRT.lib to download. I don't know what to do and I really need some help. It would be great if you could tell me the whole installation procedure. (I'm new to Perl, and I use Win7, Visual Studio 2012, and MinGW as well.)
If you have ActivePerl then you already have Tkx installed. Why do you need Tk?
See PPM Tk info page, the distro fails to build on the current versions of ActiveState Perl for Windows. You can add the 3rd party Bribes repository, Tk is available there.
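If I recall the Bribes repository details correctly, adding it and installing Tk with the PPM command-line client looks something like the following (check the repository URL on the Bribes site before relying on it):

```shell
# Register the third-party Bribes repository, then install Tk from it.
ppm repo add http://www.bribes.org/perl/ppm
ppm install Tk
```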
Thank you, I have solved it. I added the Bribes repo, and I found that the reason compiling the Tk module failed was that I used the latest Tk-804.030, while ActiveState Perl only supports Tk-804.029. Damn Windows.
| common-pile/stackexchange_filtered |
In Chrome, after submitting a form and navigating back, its values are autofilled after DOMContentLoaded. Is it possible to listen for this change?
See an example here: https://large-platinum-ethernet.glitch.me.
Using Google Chrome (using v81 as of May 2020):
Open your console.
Select a value other than "Option 0."
Click "Submit."
Press "Back" in your browser.
The value of the select element will be updated to the value of the select when you submitted the form. If you check the console, though, you'll see the value is "Option 0" initially, and it is updated to the value prior to navigation some time between DOMContentLoaded and window.onload.
Does anybody know if it's possible to listen for when Chrome makes this change? No change or input event is fired. I've tried using a setTimeout inside the DOMContentLoaded handler, and that seems to work, but seems hacky and potentially inconsistent.
Edit: It looks like the short answer is "no, there isn't an event that's triggered when Chrome changes the values." It is possible instead to see if the page was loaded after a navigation event. If it was, any form values set by window.onload can be assumed to have been set by the browser.
look up bfcache
Looks like pageshow is buggy.
performance.getEntriesByType("navigation")[0].type === 'back_forward'
(Deprecated value) window.performance.navigation.type === 2
You can also use autocomplete=off on your form inputs
EDIT: pageshow doesn't work as of 5/6/2020 on Chrome 81
Use the pageshow/pagehide events to detect loading from the bfcache, which is where the form values are loaded from.
https://github.com/adobe/webkit/blob/master/Websites/webkit.org/blog-files/pageshow-pagehide-example.html
https://developers.google.com/web/updates/2018/07/page-lifecycle-api
https://developer.mozilla.org/en-US/docs/Web/API/Window/pageshow_event
function pageShown(evt) {
  if (evt.persisted) {
    // do things to your forms
  } else {
    // no need to do anything
  }
}

function pageHidden(evt) {
  if (evt.persisted) {
    // do things to your forms
  } else {
    // no need to do anything
  }
}

window.addEventListener("pageshow", pageShown, false);
window.addEventListener("pagehide", pageHidden, false);
It looks like by the time pageshow is fired, the form values have already been set, so the persisted property always returns false. I've updated my original example to include a pageshow event listener.
I updated the answer with a value you can check to see if the page is being loaded from the bfcache.
Thanks for the follow-up. Looks like performance.navigation.type is deprecated, and PerformanceNavigationTiming.type (https://developer.mozilla.org/en-US/docs/Web/API/PerformanceNavigationTiming/type) should be used instead. Thanks for pointing me in that direction, though!
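Following up on that, a small helper using the replacement API might look like this (a sketch; in the browser you would run the check after the window's load event, once Chrome has restored the values):

```javascript
// Returns true when the current page load came from back/forward navigation.
function wasBackForwardNavigation() {
  const entries = performance.getEntriesByType("navigation");
  return entries.length > 0 && entries[0].type === "back_forward";
}

// Browser-only wiring: after "load", form values restored by the browser
// can be treated as the user's previous selections.
if (typeof window !== "undefined") {
  window.addEventListener("load", () => {
    if (wasBackForwardNavigation()) {
      // e.g. read document.querySelector("select").value here
    }
  });
}
```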
Chromium bug report for pageshow persisted property: https://bugs.chromium.org/p/chromium/issues/detail?id=344507.
Chrome 79 not reflecting DOM state in onpageshow after back navigation might also be of interest.
| common-pile/stackexchange_filtered |
bash: assign to an array in bash from a function called with the & control operator
I'd like to know why this works:
arr=()
fun() { arr[$1]=$2; }
fun 1 2
echo ${arr[1]}
# echoes '2'
but this doesn't:
arr=()
fun() { arr[$1]=$2; }
fun 1 2 &
wait
echo ${arr[1]}
# echoes a blank line
By running fun in the background in your second example, you run it in a subshell. Changes to the array made in the subshell are not visible to the parent shell, where you echo the value of arr[1].
and the same thing would happen if you created any subprocess, eg: (fun 1 2) or echo | fun 1 2
This can't work, as running the function asynchronously creates a new shell context, which can't modify the parent context's environment. This is similar to piping into control structures, where variables modified inside the control structure won't be visible in the parent outside the pipe.
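The effect can be demonstrated directly in bash (the `${arr[3]:-unset}` fallback is just for display):

```shell
arr=()
fun() { arr[$1]=$2; }

fun 1 2     # runs in the current shell: the assignment persists
fun 3 4 &   # runs in a subshell: the assignment is lost when it exits
wait

echo "${arr[1]}"        # prints 2
echo "${arr[3]:-unset}" # prints unset
```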
| common-pile/stackexchange_filtered |
What's wrong with this pipeline communication code?
I'm trying to implement named-pipe communication in my code.
It builds successfully, but the code doesn't work.
My pipe communication code is:
int CTssPipeClient::Start(VOID)
{
m_uThreadId = 0;
m_hThread = (HANDLE)_beginthreadex(NULL, 0, ProcessPipe, LPVOID(this), 0, &m_uThreadId);
return 1;
}
VOID CTssPipeClient::Stop(VOID)
{
m_bRunning = FALSE;
if(m_hThread)
{
WaitForSingleObject(m_hThread, INFINITE);
CloseHandle(m_hThread);
}
if(m_hPipe)
{
CloseHandle(m_hPipe);
}
}
UINT CTssPipeClient::Run()
{
BOOL fSuccess;
DWORD dwMode;
LPTSTR lpszPipename = TEXT("\\\\.\\pipe\\tssnamedpipe");
m_vMsgList.clear();
m_bRunning = TRUE;
// Try to open a named pipe; wait for it, if necessary.
while (m_bRunning)
{
m_hPipe = CreateFile(
lpszPipename, // pipe name
GENERIC_READ, // read and write access
0, // no sharing
NULL, // default security attributes
OPEN_EXISTING, // opens existing pipe
0, // default attributes
NULL); // no template file
// Break if the pipe handle is valid.
if (m_hPipe == INVALID_HANDLE_VALUE)
{
Sleep(1000);
continue;
}
dwMode = PIPE_READMODE_MESSAGE;
fSuccess = SetNamedPipeHandleState(
m_hPipe, // pipe handle
&dwMode, // new pipe mode
NULL, // don't set maximum bytes
NULL); // don't set maximum time
if (!fSuccess)
{
continue;
}
while(fSuccess)
{
if(m_vMsgList.size() > 0)
{
DWORD cbWritten;
// Send a message to the pipe server.
fSuccess = WriteFile(
m_hPipe, // pipe handle
m_vMsgList[0].c_str(), // message
(m_vMsgList[0].length() + 1)*sizeof(TCHAR), // message length
&cbWritten, // bytes written
NULL); // not overlapped
m_vMsgList.erase(m_vMsgList.begin());
if (!fSuccess)
{
break;
}
}
Sleep(200);
}
CloseHandle(m_hPipe);
}
_endthreadex(0);
return 0;
}
DWORD CTssPipeClient::WriteMsg(LPCTSTR szMsg)
{
if(!m_bRunning)
{
Start();
}
wstring wstr(szMsg);
m_vMsgList.push_back(wstr);
return 0;
}
I tried to fix this problem, but I couldn't find what's wrong.
Please help me; I'm grateful for any help.
Thank you.
What's the problem? Please be specific, and include the error message or anything else that doesn't work as expected.
What exactly is not working? Be more specific, and explain what you expect, and why you would expect that.
The reason is simple: you open the pipe with read access only.
Please modify it like this:
m_hPipe = CreateFile(
lpszPipename, // pipe name
GENERIC_READ | // read and write access
GENERIC_WRITE,
0, // no sharing
NULL, // default security attributes
OPEN_EXISTING, // opens existing pipe
0, // default attributes
NULL); // no template file
I hope that helps.
| common-pile/stackexchange_filtered |
Is it better to open different ports or one port with identifiers (or else)?
I'm coding a system in which there are three different Java applications that interact with each other via TCP/IP. Two of these apps connect with the other one, called Directory, through a ServerSocket.
One of the apps connects with it only to log in and be added to a list, while the other app connects with it only to ask for the list or to send a message.
These connections are all being done via the same port in the Directory's ServerSocket, the apps that connect with the Directory send a String through the socket with a sort of task-identifier slapped on the front, which the Directory then processes to know what it has to do.
Is this approach of reading identifier Strings OK? Is it efficient and maintainable, or should it be done in another way, e.g. having ServerSockets with different ports for different types of clients, or different ports for different functionalities? The functionalities mentioned are the only ones for the time being, but more may be added, so I would like to know if this is a viable implementation.
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;

public class Directory {
    private ServerSocket server;

    public Directory() {
        super();
    }

    public void openServer(int port) throws IOException {
        new Thread() {
            public void run() {
                try {
                    server = new ServerSocket(port);
                    while (true) {
                        // try-with-resources closes the socket and streams per connection
                        try (Socket socket = server.accept();
                             PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
                             BufferedReader in = new BufferedReader(
                                     new InputStreamReader(socket.getInputStream()))) {
                            String identifier = in.readLine();
                            if (identifier.equalsIgnoreCase("Connect")) {
                                connect(); // stub
                            } else if (identifier.equalsIgnoreCase("NeedList")) {
                                giveList(list); // stub
                            } else if (identifier.equalsIgnoreCase("SendMessage")) {
                                sendMessage(); // stub
                            }
                        }
                    }
                } catch (IOException e) {
                    // interrupted
                }
            }
        }.start();
    }
}
Primarily opinion. I like nanomsg for this use case. But that's just my opinion. And there are plenty of options both commercial and free.
I maybe should have clarified that this is a college exercise, so I have to code it all by myself, which is why I'm asking about the implementation. But I'll check it out later nonetheless, thank you.
Unless you have two completely different protocols there is no reason to use two ports. It is normal for messages within a protocol to identify themselves ;-)
If this is just a college exercise, it really doesn't matter which way you do it. Reasoning about scalability, maintainability, etc is hypothetical. But the short answer is that either approach will work.
As long as all apps use the same protocol, which basically just means the same packet structure, you're fine going with one port.
Another pro of using one port is that it's less configuration: say you have a firewall, you only have to open this one port, and if someone tries to connect within a restricted network, the same applies.
If the apps use different protocols, you'd be better off using three ports.
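To keep the single-port, identifier-dispatch approach maintainable as more message types are added, the if/else chain can be replaced with a lookup table of handlers, so new functionality only needs a new map entry. A sketch (the class name and handler bodies are made up for illustration):

```java
import java.util.Map;

// Hypothetical sketch: dispatch request identifiers through a map
// instead of a growing if/else chain.
class Dispatcher {
    private final Map<String, Runnable> handlers = Map.of(
            "connect", () -> System.out.println("handling connect"),
            "needlist", () -> System.out.println("handling list request"),
            "sendmessage", () -> System.out.println("handling message"));

    // Returns true when a handler for the identifier exists and was run.
    boolean dispatch(String identifier) {
        Runnable handler = handlers.get(identifier.toLowerCase());
        if (handler == null) {
            return false; // unknown message type
        }
        handler.run();
        return true;
    }
}
```

Unknown identifiers are then rejected in one place, which also makes it easy to log bad requests or version the protocol later.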
| common-pile/stackexchange_filtered |