Asp.Net app in VS2017 using Crystal Reports error 'Invalid Subreport Name'
I have been given a legacy ASP.Net application written in vs2005 with CR 8. I am currently running vs2017 and CR 13.0.22.
The application interacts with a lot of different data and shows multiple reports and I was asked to modify 1 of them that is a parent report with 5 subreports. When I initially started work I ran the Verify Database on the parent report and was prompted to connect using schema/XML files that did not make it into the saved code base and are now LOST.
I re-created the 6 datasets/connections that connect to stored procedures for the data and refactored/redesigned all 6 reports to reconnect to the new datasets.
Now when I run the application I get the above-mentioned error: Invalid Subreport Name
Here is the method that is generating the error:
public void GenerateReport(List<Report> Reports, CrystalReportViewer Viewer)
{
    int rptcount;
    Report mainreport;

    // make sure there is only one main report in the list
    rptcount = 0;
    mainreport = null;
    foreach (Report report in Reports)
    {
        if (report.IsSubReport == false)
        {
            rptcount = rptcount + 1;
            mainreport = report;
        }
    }
    if (rptcount != 1)
        throw new Exception("ReportWriter.GenerateReport: There was more than one main report found in the list of reports. " +
            "Only one can be the main report. The other reports have to be subreports.");

    // generate the report
    checkReportFile(mainreport.ReportName);
    document = new ReportDocument();
    document.Load(mainreport.ReportName);
    document.SetDataSource(mainreport.DataSet);

    /* Previously the next line was a call to setReportParameters and the code errored with the
       Invalid Subreport Name call. I moved that line of code to after the foreach and now the
       error occurs within the foreach loop on the first iteration. */

    // attach subreports to the main report
    foreach (Report report in Reports)
    {
        if (report.IsSubReport == true)
            /* This line throws the error */
            document.OpenSubreport(report.ReportName).SetDataSource(report.DataSet);
    }

    /* This is the previously failing line that I moved to here */
    setReportParameters(document, mainreport.ReportParameters);

    Viewer.ReportSource = document;
}
I ran the code in the Debugger and stopped execution prior to the error. I then executed the "document.OpenSubreport(report.ReportName)" piece of the failing line in the immediate window and this is exactly where the error is thrown. The ReportDocument type/object is part of the Crystal Decisions library and it is not possible to drill any further down in that code so I do not have the data to determine what exactly is failing. I've looked at all the converted subreports over and over again and they all look perfect in the designer. How do I determine what is really failing here? Is there a way to parse the .rpt file to see if there is some sort of legacy junk in it that needs to be deleted? If I can't find an answer I will have to start over from scratch...
Rather than using OpenSubreport, I suggest that you use the Subreports collection on the main report, testing for null. Then you can tell the caller that they have requested an incorrect subreport name if it doesn't exist.
Something like
foreach (Report report in Reports)
{
    if (!report.IsSubReport)
    {
        continue;
    }
    var subReport = document.Subreports[report.ReportName];
    if (subReport == null)
    {
        throw new Exception($"Subreport {report.ReportName} not found");
    }
    subReport.SetDataSource(report.DataSet);
}
And using the Subreports collection will also let you examine exactly which subreports were actually loaded. It's entirely possible that there is a subtle typo, or even a case difference, causing the lookup to fail.
I used competent_tech's advice but it didn't solve my issue. What I ended up doing was installing the full Crystal Reports application (2008, unfortunately) and opening the report in that IDE. The IDE gave me much more informative errors. The problem was that there were missed object-replacement errors within the Formula Fields and the Grouping for the parent report and a couple of the subreports. I also moved the setReportParameters call back to its original position. It was great to have better, more descriptive information, and I recommend going back and forth between the Visual Studio CR designer and the standalone CR designer to isolate errors.
| common-pile/stackexchange_filtered |
In World War 1, why were the Australian and Canadian troops so good?
I'm reading a book about the First World War, and it makes clear that the Canadian and Australian troops made quite a name for themselves. They acquitted themselves very well in many of the actions they participated in (within reason), and were regarded as Britain's crack troops, even by the Central Powers.
In the Battle of Amiens in 1918, the British went to quite a lot of effort to hide the fact that four Canadian divisions, specifically, were being brought onto the scene.
It also seems like there is a very high concentration of good military commanders, such as Monash.
Why is this the case, when these colonies didn't really have any kind of military tradition at all and not that much experience in warfare in general?
Can you cite the book / source?
Keep in mind that England had very little (land-based) military tradition, so you've got a low bar.
Biased sources. When you write a book about certain units, you subconsciously try to make them seem exceptional, to justify your effort. In reality, most Australian and Canadian divisions were just run-of-the-mill troops.
You may be interested in: https://www.jstor.org/stable/260511
@rs.29: The Canadian Expeditionary Force ultimately comprised four infantry divisions and a cavalry brigade. How many of these 4.5 divisions are you counting as "most ... Canadian divisions were just a run-of-the-mill troops"? The Corps was employed, and regarded by both sides, as an elite force at arms.
Can you cite sources to support that they were good? Right now this is an opinion asking for confirmation of the opinion.
@inappropriateCode You misunderstand. I am implying that Canadian and Australian troops are being implicitly compared against other Anglophone troops rather than against superior troops (i.e., elite French or German troops).
@inappropriateCode There is no concrete proof that the Germans considered Canadian troops as something special compared to other British troops. That includes both the infantry divisions and the cavalry brigade, which spent most of the war fighting as infantry. Note that in trench warfare most soldiers die from artillery fire, or from machine guns when attacking, not from some uber-trained or "elite" enemy.
No military tradition? All three were relatively new countries with strong ties to the British military. Canada even had Highland Regiments linked with the Scottish regiments. All three had sent forces to aid the British during the Second Boer War which had ended just 12 years prior to WWI.
@C Monsour - Britain (including England) had a huge land-based military tradition, it was fresh from the Boer War and had a large empire that used plenty of troops in policing actions and the interminable Afghan wars. Go into any drill hall in Britain and you'll see the battle honours from the Crimea, the Sudan etc. etc
I am surprised no one has mentioned the crucified Canadian officer. After this happened, and all the propaganda that followed, Canadians fought with a particular brutality.
The Australian historian and journalist L.A. Carlyon, in his book Gallipoli, reports that Australian troops, a higher proportion of whom at that date had grown up in a rural, outdoor life (the same was probably true of Canadians), noticed that British troops who had grown up in the then smoky and crowded industrial cities of Britain often seemed less well nourished and healthy, even stunted in intellect.
This is not to say that the Australians were without faults. Vera Brittain who wrote about her experiences as a Nurse in World War 1 in 'Testament of Youth' said that if there was any trouble Australian troops were always ready to be part of it.
British War Correspondent Philip Gibb in his book 'Now it Can be Told' said that of British Empire troops the Scots and the Australians were more likely than others not to take prisoners but to kill enemy troops who had surrendered.
Britain recruited a higher proportion of its adult male population into the armed forces during the war, including by conscription from 1916, which meant that the British army probably came closer to 'scraping the bottom of the barrel' and taking some less suitable recruits. Australian, New Zealand and most Canadian soldiers were volunteers, hence more likely to be motivated.
Of course we should be cautious about accepting every generalisation made then or now about the qualities of soldiers of different nations, which can be biased by patriotism, stereotypes or chance as to e.g. which Canadian soldiers someone met and under what circumstances, as they were all individuals and doubtless some braver, more intelligent, better trained etc. than others.
A few Canadians (about 24,000) were conscripted and served in France, starting in January 1918: https://www.thecanadianencyclopedia.ca/en/article/conscription
@DJohnM: While true - did any of them see combat? My understanding is that (virtually) none of the conscripts saw action on the front line.
DJohnM I stand corrected about the (late and somewhat limited) use of conscription in Canada and have amended my answer. I am a little surprised that French Canadians tended to oppose conscription while Anglophones mostly supported it, given that France and partly French-speaking Belgium were Britain's allies, but that is a separate topic.
One can make much of the prairie and frontier background of the Canadian troops - and one should, as they were about 8 cm (3") taller, 5'7" (170 cm) at age 21 compared to just 5'4" (163 cm) for Brits of the same era - but in no small part the success of the Canadian Corps post-Somme must be credited to its commander, Sir Julian Byng, likely the most competent and innovative corps commander in either the French or British army.
Amongst other qualities, it was the dogged effort of Byng, his successor-to-be Sir Arthur Currie, and their staff that solved the conundrum of perfecting a walking barrage; allowing Canadian troops assaulting Vimy Ridge to be in German trenches reliably just 30 to 45 seconds after the barrage lifted. And in contrast to British regiments where only captains and above received artillery schedules, Byng and Currie issued schedules to every NCO down to corporal, as a means of guaranteeing that no troops got left behind due to officer casualties.
Pay the price of victory in shells - not lives. -- General Sir Arthur William Currie (1875-1933)
Pierre Berton's excellent book Vimy has a couple of early chapters on the symbiosis between Byng and his Canadian troops that resulted in such success on the battlefield.
On another post: do you have 5 minutes for teaching Austrians to be on time at Rivoli if a Corsican invites to battle?
@LangLangC: Possibly; but that request is a bit cryptic.
@PieterGeerkens - https://history.stackexchange.com/questions/53176/what-is-the-context-for-napoleons-quote-the-austrians-did-not-know-the-value would seem to decryptify the comment.
In the case of Australia, WW1 was the first major war with a significant Australian contribution to the war effort of the Empire. Prior to WW1, Australia had contributed troops to the Boer War and the Boxer Rebellion campaign.
So, on the part of the Australians, we can expect a significant degree of propaganda highlighting the Australian contributions to WW1, especially since the battlefields of Gallipoli and the Western Front were so remote from mainland Australia. Painting the ANZACs as especially good troops served to maintain morale and support on the home front.
Sorry for the large amount of conjecture and lack of sources. See this as more of a thought or comment rather than a full answer to the original question.
It might also be worth making the points that:
a) The majority of Canadian troops were in any case recent British immigrants (estimated at >70% in the early part of the war and >60% in the latter part). A large proportion of these were fairly recent immigrants who had been in Canada for fewer than 8 years. A higher proportion of the officers were Canadian-born because they were the doctors, solicitors, teachers, etc. in pre-war towns.
b) The second point is that the cream of the British army were the regular forces (soldiers who were in the army before the war broke out). These soldiers, who were well trained, well fed (and consequently healthy and of good stature), and well equipped were killed or injured at battles such as Mons, Le Cateau, First battle of the Marne, First Battle of the Aisne, and First Ypres (and Gallipoli). The vast losses from these early battles were then replaced by the ill fed, poorly trained, poorly educated masses from Britain's industrial slums.
Nevertheless, many of these unheralded formations in Kitchener's army did extremely well (once trained sufficiently) including the 46th North Midland Division (North Staffs) who broke the Hindenburg Line in 1918 when other, more vaunted divisions, had failed.
These stats are absurd in light of the immigration to Canada between 1901 and 1911 being less than the population increase of about 1.9 million, the bulk from Continental Europe, as the mass immigration wave from the U.K. had ended more than a decade earlier. Of that 1.9 million immigrants, it's unlikely even a third were males of serving age, to match the 640,000 or so Canadians who comprised the CEF.
Researcher: The protein unfolding traces we're getting show significant scatter between experimental sessions. Even with the same polyprotein construct, the force values drift by 20-30 piconewtons.
Consultant: That's the calibration demon rearing its head again. Each time you mount a new cantilever, you're introducing fresh uncertainty into the spring constant determination. The thermal method gives you maybe 10-15% accuracy at best.
Researcher: Right, but if we can't trust absolute force values, how do we make meaningful comparisons between different protein variants or environmental conditions?
Consultant: Have you considered concurrent measurement strategies? Instead of comparing forces across separate experiments, you measure your reference protein and test protein simultaneously on the same cantilever.
Researcher: Interesting approach. You mean like mixing different polyprotein constructs in the same sample?
Consultant: Exactly. The calibration error becomes a systematic offset that affects both measurements equally. When you take ratios or differences, that uncertainty cancels out. I've seen accuracy improvements by a factor of six using this approach.
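As an aside, the cancellation the consultant describes can be sketched numerically. The minimal Python model below uses illustrative "true" force values and a 15% per-mount calibration error (both assumptions, not measured values); because the same cantilever applies the same multiplicative calibration factor to both proteins, the factor drops out of the ratio exactly.

```python
import random

random.seed(42)

TRUE_REF, TRUE_TEST = 100.0, 150.0  # assumed true unfolding forces in pN

def measure(true_force, calibration):
    # measured force scales with the (erroneous) spring-constant calibration
    return calibration * true_force

abs_errors, ratio_errors = [], []
for _ in range(1000):
    cal = random.gauss(1.0, 0.15)   # ~15% spring-constant uncertainty per mount
    ref = measure(TRUE_REF, cal)
    test = measure(TRUE_TEST, cal)  # same cantilever -> same calibration factor
    abs_errors.append(abs(test - TRUE_TEST) / TRUE_TEST)
    ratio_errors.append(abs(test / ref - TRUE_TEST / TRUE_REF))

print(f"mean |error| of raw force: {sum(abs_errors) / len(abs_errors):.3f}")
print(f"mean |error| of the ratio: {sum(ratio_errors) / len(ratio_errors):.3f}")
```

In this purely multiplicative error model the ratio error is zero to machine precision, while the raw force error tracks the full calibration uncertainty.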
Researcher: But how do you distinguish between different proteins in the same pulling experiment?
Consultant: Orthogonal fingerprinting. Engineer distinct unfolding signatures - different contour length increments, characteristic force plateaus, or unique folding intermediates. Each protein leaves its own mechanical signature.
Researcher: That makes sense for the data analysis side, but what about the hardware limitations? We're still wrestling with cantilever selection. The high-bandwidth ones give us temporal resolution but terrible force noise.
Consultant: It's the eternal tradeoff. Those ultrashort cantilevers with microsecond resolution also have low quality factors. They end up applying high-frequency force modulations to your molecule that traditional analysis doesn't account for. Meanwhile, the softer cantilevers with better force precision can't track fast unfolding events.
Researcher: So there's no universal solution?
Consultant: Not yet. You have to match the cantilever to your specific application. Studying slow conformational changes in membrane proteins? Go soft and stable. Catching rapid unfolding intermediates? Accept the noise for the bandwidth. The key is understanding what you're sacrificing and whether it matters for your particular question.
Researcher: This explains why our bacteriorhodopsin unfolding data looked so different from the literature values. We were probably using incompatible cantilever specifications.
How to create managerID lookup hashtable
I've had success in the past creating hash tables for looking up an exact match for one ID in a CSV when looking it up in another CSV, but what I'm trying to do must be very different. I have one CSV where everyone gets a unique LoginID and a PositionID. Then, in another column, I have "Reports To Position ID" holding the manager's PositionID. I want to look up the manager's LoginID based on their PositionID, but have tried several ways unsuccessfully.
CSV example (even when I remove the empty first cell, I can't seem to cross-reference Bob's Person ID via the Reports To Position ID heading):
Person ID Position ID Legal First Name Reports To Position ID
YQM000051 DIANE YQM000076
S9999991 YQM000052 CHARISSE YQM000076
S9999992 YQM000052 CHARISSE YQM000076
s9999993 YQM000052 CHARISSE YQM000076
s9999994 YQM000076 Bob YQM000071
This is the third time in a couple hours you asked basically the same question. Please don't do that.
Your question is kind of hard to understand without seeing any examples, but I imagine you could do something like the following:
#Imports CSV file as variable
$Csvfile = Import-Csv -Path "C:\SomePath\file.csv" -Delimiter ","
#Change 1513 to the actual managers PositionID
$Csvfile | Where {$_.PositionID -eq "1513"} | Select -ExpandProperty LoginID
To test I created a quick csv with some random numbers:
loginID PositionID ReportstoPositionid
1111 3654 1513
2222 1513 54123
3333 54123 16543564
4444 156413 156413
5555 16543564 3654
Then I decided that 1513 would be the managers position code and then ran the PowerShell commands above to get the loginid.
PS C:\> $Csvfile | Where {$_.PositionID -eq "1513"} | Select -ExpandProperty LoginID
2222
If you wanted to this all without a variable you could do the following:
PS C:\> Import-Csv -Path "C:\SomePath\file.csv" -Delimiter "," | Where {$_.PositionID -eq "1513"} | Select -ExpandProperty LoginID
2222
Lastly, you could remove the select statement if you wanted to see the full line in the csv file. Example:
loginID PositionID ReportsPositionid
------- ---------- -----------------
2222 1513 54123
Again, your question was a little hard to understand, but based on what I read, here is what I came up with.
Hope this helps!
EDIT
After testing with the sample data, I believe you could use the following:
$CSV1 = Import-Csv -Path "C:\Users\Tyler\Desktop\test1.csv" -Delimiter ","
$Manager = $CSV1 | Where {$_.'Position ID' -eq 'YQM000076'} | Select -ExpandProperty "Legal First Name"
Write-Output "The Following Employees Report To $Manager :"
foreach ($Row in $CSV1)
{
if ($Row.'Reports to Position ID' -eq 'YQM000076')
{
Write-Output "$Row"
}
}
OUTPUT
The Following Employees Report To Bob :
@{Person ID=; Position ID=YQM000051; Legal First Name=DIANE; Reports To Position ID=YQM000076}
@{Person ID=S9999991; Position ID=YQM000052; Legal First Name=CHARISSE; Reports To Position ID=YQM000076}
@{Person ID=S9999992; Position ID=YQM000052; Legal First Name=CHARISSE; Reports To Position ID=YQM000076}
@{Person ID=S9999993; Position ID=YQM000052; Legal First Name=CHARISSE; Reports To Position ID=YQM000076}
Hi Tyler, I didn't get any data back when I put in a ReportstoPositionID. I'm pulling from a large list with many managers and employees under some of them. My headers have spaces between the words, so I'm using single quotes, e.g. $Row.'Position ID'. Does that help?
hmm, I added spaces to the headers in my example and then changed $_.PositionID to $_.'Position ID' and I received the same results I posted above. How similar is your csv to my example above? If possible, could you edit your question above with the first 5 lines of your csv, including the headers? You can change the data; I would like to run some tests with everything in the exact same formatting. Also, how is the csv saved? Is this a regular csv, csv (MS-DOS), or csv (Macintosh)?
What version of PowerShell are you running? Run the following to find out: $PSVersionTable.PSVersion.Major
Not sure if it matters yet but there could be a possibility that I am using something that was introduced in a newer version of PowerShell.
I'm on 4.0 and will edit the main question with sample headers and data. I can't seem to figure this out. Thx...
Ok, yeah, at version 4 you definitely wouldn't see any issues with the commands above. I'll play with the sample data you post and I'll get back to you with what I find. Thanks
See my answer above, below EDIT section. I think that is what you are looking for.
Still testing, but I think what I'm after is the Person ID of the manager. In AD, this would be the SamAccountName for us. Thx.
Hi Tyler, I think I found my problem in testing. I forgot to initialize the hashtable as empty first: $PIDTable = @{}
$PIDTable = @{}
Import-Csv $ADPcsv | ForEach-Object {
    $PIDTable[$_.'Position ID'] = $_
}
Marking your answer as it works now. Thx!
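For anyone landing here later, the same hashtable-lookup logic can be sketched in Python for illustration (this is a hypothetical stand-in, not the PowerShell solution itself; the column names are taken from the sample data in the question): build a dict keyed by Position ID, then resolve each row's "Reports To Position ID" through it to reach the manager's Person ID.

```python
import csv
import io

# Sample rows matching the question's CSV (header names as shown there)
sample = """Person ID,Position ID,Legal First Name,Reports To Position ID
,YQM000051,DIANE,YQM000076
S9999991,YQM000052,CHARISSE,YQM000076
s9999994,YQM000076,Bob,YQM000071
"""

rows = list(csv.DictReader(io.StringIO(sample)))

# Build the lookup table: Position ID -> full row (the "hashtable")
by_position = {row["Position ID"]: row for row in rows}

# Resolve each employee's manager via "Reports To Position ID"
for row in rows:
    manager = by_position.get(row["Reports To Position ID"])
    manager_pid = manager["Person ID"] if manager else None
    print(row["Legal First Name"], "->", manager_pid)
```

DIANE and CHARISSE both resolve to Bob's Person ID (s9999994); Bob's own manager position is absent from the sample, so his lookup returns None, which is why a null check matters.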
setup ssh private/public keys
I have done the following:
ssh-keygen
Then I put the contents of id_rsa.pub into the remote ~/.ssh/authorized_keys file. I thought that should do it, but it still prompts for a password. Not the id_rsa passphrase (I did an ssh-add, so that is all set), but the remote computer's password in order to log into the remote system. I specify a User in my .ssh/config file so it knows which user to use.
I checked my remote .ssh directory and it is 700. The only thing I can think of is that the .ssh directory is owned by john. When I connect to the remote system I do john@ip, while on the computer I am connecting from (the local machine) the username is johnsmith. Could that be why? If so, is there a way I can allow this without having to create the same user on each system?
I figured it out. Apparently the permissions for the file authorized_keys on the remote server needed to be set: chmod 700.
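For reference, a sketch of the conventional permission set that sshd's StrictModes check accepts: 700 on the directory and 600 on the key file (600 is the usual recommendation for authorized_keys, though 700 also passes the no-group/world-write check). The paths below use a throwaway directory instead of a real home, and `stat -c` assumes GNU coreutils.

```shell
# demo on a scratch directory; on a real host these would be ~/.ssh paths
mkdir -p /tmp/ssh_demo/.ssh
touch /tmp/ssh_demo/.ssh/authorized_keys

chmod 700 /tmp/ssh_demo/.ssh                   # directory: owner-only access
chmod 600 /tmp/ssh_demo/.ssh/authorized_keys   # file: owner read/write only

stat -c '%a %n' /tmp/ssh_demo/.ssh /tmp/ssh_demo/.ssh/authorized_keys
```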
Compiler flagging error for using array size variable
This simple piece of code for some reason is causing the compiler to show an error:
#include <iostream>
using namespace std;
int main() {
    size_t c_string_length{15};
    auto* selection{new char[c_string_length]{"Biggie Smalls"}};
    for (size_t i{}; i < c_string_length; i++)
        cout << selection[i];
    delete[] selection;
    return 0;
}
The error that I get is this:
error: invalid use of array with unspecified bounds
14 | char* selection{new char[c_string_length]{"Biggie Smalls"}};
| ^
But as soon as I replace the size of the dynamically allocated array (given by the variable c_string_length) with an integer literal, say 15, the program runs just fine and displays the intended output.
Why is that? It doesn't have to do with the datatype of the size variable; I checked using int.
Works fine in clang and gcc. Getting "fatal error C1001: Internal compiler error" in MSVC. What compiler are you using?
It is an online compiler. This one:
https://www.programiz.com/cpp-programming/online-compiler/
I figured out the reason behind the error. To the compiler, the size of the array is ambiguous: when simply using the variable c_string_length, we imply that the size is not fixed and could change, so the compiler cannot check the string-literal initializer against the bound.
The fix is to tell the compiler that c_string_length is a constant:
const size_t c_string_length{15};
That's all. Now the compiler knows c_string_length is a compile-time constant and can use it without any confusion.
Actually, in this case the array size does not necessarily need to be constant, as you allocate it dynamically. It looks like the problem is something else.
How do I get a reference to the current project in an Eclipse plugin?
I'm creating an editor for Eclipse. Right now the editor fires up when the user creates a new file with the appropriate extension. My question is, how can I get a reference to the project in which the file resides? For example, say I have a workspace with 2 projects, P1 and P2. If I right-click P2 and create a new file, can I get a reference to P2 from this somehow?
Ultimately I need to reference the AST or Java Model of the project but even a String identifying the project would work.
does http://stackoverflow.com/questions/1206095/how-to-get-the-project-name-in-eclipse help?
I think the answer is simply that IFile.getProject() would work.
If you work with a FileEditorInput in the init() method, you can use the following code to obtain the project resource you're looking for:
FileEditorInput fileInput = (FileEditorInput) input;
fileInput.getFile().getProject();
Mealy and Moore implementations in verilog
Can anyone tell me what the differences are between these implementations in Verilog/VHDL? I mean, how do Mealy and Moore machines synthesize into circuits, in detail? Any links would prove useful too.
I am quite familiar with this.
Thank you.
But is this the way it is implemented?
Please try Google. e.g. http://www.altera.co.uk/support/examples/vhdl/vhd-state-machine.html
@Oli Charlesworth, I know how to write code in Verilog for Mealy and Moore machines. What I need to know is whether the synthesizer will implement it differently, or as the regular block diagram we are all familiar with.
English is not my strong point, but I hope you understand what I am trying to say.
The synthesiser will implement logic that matches the code you have written. If you have outputs which are unregistered (ie, not written to from a clocked block) then that's what the synthesiser will give you.
More to the point - why does anyone care? Academics seem to keep teaching Mealy vs Moore for no good reason I can see. In my getting on for 2 decades of professional electronic design, I have never had to care what "kind" of state machine I am getting. I just describe the behaviour and let the tools produce the circuits. The tools also do not care (check the logfiles, it won't say "found a Mealy state-machine" anywhere).
What point of mine are you referring to that the academic may have been trying to get students to recognise? As for introducing terms in common usage, I'm not sure that they are (in industry)... no-one I work with uses them.
@JoeHass, FYI its not a homework question.Come on who gives HW questions like these ?I am just curious that s it!
Whether or not the synthesizer recognizes your code as an FSM, and also the way it implements FSMs in hardware, depends on the synthesizer you use! Check the corresponding documentation; e.g. for Xilinx XST, see the XST User Guide and search for FSM.
I know this is 3 weeks old, but there's an answer here, with details of what the various styles synthesise to in XST. The example is actually a Moore machine, but some of the styles have combinatorial outputs, which will give you an idea of what will happen for Mealy machines. There are some surprises - XST can push combinatorial outputs back into state registers, for example.
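To make the difference concrete, here is a sketch (my own illustration, not from the linked material) of one small FSM written with both output styles: a rising-edge detector whose Moore output is a function of the state register alone, while the Mealy output folds the input into the output logic cone and therefore asserts a cycle earlier.

```verilog
module rise_detect (
    input  wire clk, rst, d,
    output wire moore_q,
    output wire mealy_q
);
    localparam S0 = 2'd0,  // saw 0
               S1 = 2'd1,  // just saw the first 1 (the edge)
               S2 = 2'd2;  // still 1
    reg [1:0] state;

    always @(posedge clk) begin
        if (rst)
            state <= S0;
        else case (state)
            S0:      state <= d ? S1 : S0;
            S1:      state <= d ? S2 : S0;
            S2:      state <= d ? S2 : S0;
            default: state <= S0;
        endcase
    end

    // Moore: function of state only -> decoded straight off the state
    // register, so it is glitch-free but lags the input by one cycle.
    assign moore_q = (state == S1);

    // Mealy: function of state AND input -> the synthesizer pulls d into
    // the output cone, so the output is combinational and fires in the
    // same cycle as the input edge.
    assign mealy_q = (state == S0) && d;
endmodule
```

In both cases the tool just builds the logic the equations describe; the "kind" of machine only determines whether the input appears in the output equations.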
How to write a fixed-length file to a CSV file with BeanIO, with all values in different columns of a record
This code is able to write data to a CSV file, but the problem is that the data is getting written into a single column only.
I want the data to come out in different columns. I am new to BeanIO and not able to figure it out.
I have tried the code given below and am not able to get output in the proper format:
public class XlsWriter {

    public static void main(String[] args) throws Exception {
        StreamFactory factory = StreamFactory.newInstance();
        factory.load("C:\\Users\\PV5057094\\Demo_workspace\\XlsxMapper\\src\\main\\resources\\Employee.xml");

        Field[] fields = Employee.class.getDeclaredFields();
        System.out.println("fields: " + fields.length);
        List<Object> list = new ArrayList<Object>();
        for (Field field : fields) {
            list.add(field.getName());
        }

        BeanReader in = factory.createReader("EmployeeInfo", new File("C:\\Temp\\Soc\\textInput.txt"));
        BeanWriter out = factory.createWriter("EmployeeInfo", new File("C:\\Temp\\Soc\\output.csv"));

        Object record;
        while ((record = in.read()) != null) {
            System.out.println(record.toString().length());
            out.write(record);
            System.out.println("Record Written: " + record.toString());
        }
        in.close();
        out.flush();
        out.close();
    }
}
textInput.txt
AAAAABBBBBCCCCC
AAAAABBBBBCCCCC
AAAAABBBBBCCCCC
AAAAABBBBBCCCCC
AAAAABBBBBCCCCC
<?xml version="1.0" encoding="UTF-8"?>
<beanio xmlns="http://www.beanio.org/2012/03"
        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:schemaLocation="http://www.beanio.org/2012/03 http://www.beanio.org/2012/03/mapping.xsd">
    <stream name="EmployeeInfo" format="fixedlength">
        <record name="employee"
                class="com.aexp.gmnt.imc.record.submission.Employee" minOccurs="0"
                maxOccurs="unbounded" order="1">
            <field name="firstName" length="5" padding="0" justify="right" />
            <field name="lastName" length="5" padding="0" justify="right" />
            <field name="title" length="5" padding="0" justify="right" />
        </record>
    </stream>
</beanio>
I want every record value in a different column of the CSV file, but currently it is coming out in a single column only. Please help.
was the answer helpful or did it not answer your specific needs? If not, please explain what else you need
You need to have a different stream definition in your mapping file for writing to the CSV file. The EmployeeInfo stream can only deal with fixed-length content because that is how it is configured.
You need to add a second <stream> definition to handle the CSV file you want to generate, and your BeanWriter needs to reference the new CSV stream instead of the fixed-length one.
Add a new <stream> definition to your existing mapping.xml file:
<stream name="EmployeeInfoCSV" format="csv">
    <record name="employee" class="com.aexp.gmnt.imc.record.submission.Employee" minOccurs="0" maxOccurs="unbounded">
        <field name="firstName" />
        <field name="lastName" />
        <field name="title" />
    </record>
</stream>
Note the change in the name of the <stream> and the format set to csv. In this <stream> definition you can also change the order in which the data is written to the CSV file, if you want, without affecting the order in which your BeanReader expects to read the data. The length, padding, and justify attributes are not required for a CSV file.
Now you only need to change how you configure your BeanWriter from:
BeanWriter out = factory.createWriter("EmployeeInfo", new File("C:\\Temp\\Soc\\output.csv"));
to
BeanWriter out = factory.createWriter("EmployeeInfoCSV", new File("C:\\Temp\\Soc\\output.csv"));
Note the change to use the csv stream name in the createWriter method parameters.
Edit to answer this question from the comments:
just a added question of I need to add the as first line with header
values as field values without writing them as header record type in
bean io then is it possible through reflection or something?
No need for reflection or jumping through hoops to get it done. You can create a Writer that you can use to write out the header (column) names to the file first before passing the writer to the BeanWriter for appending the rest of the output.
Instead of using the BeanWriter like above:
BeanWriter out = factory.createWriter("EmployeeInfoCSV", new File("C:\\Temp\\Soc\\output.csv"));
You would now do something like:
BufferedWriter writer = new BufferedWriter(new FileWriter(new File("C:\\Temp\\Soc\\output.csv")));
writer.write("First Name,Last Name,Title");
writer.newLine();
BeanWriter out = factory.createWriter("EmployeeInfoCSV", writer);
BeanIO would then carry on writing its output to the writer which will append the data to the existing file. Remember to close() the writer as well when you are done.
Yep this works perfectly fine.. just a added question of I need to add the as first line with header values as field values without writing them as header record type in bean io then is it possible through reflection or something?
I have updated my answer. Please accept the answer as correct if you are happy with it.
Expanding the Hyper-Parameter Search Grid in Caret
I was wondering if there is a way to expand the hyper-parameter search within the caret package, or with a slight modification to it. For example, evtree can currently only take alpha in caret, but evtree.control can take many more arguments. I doubt that I can just pass them through method = "evtree".
ctrl <- trainControl(method = "repeatedcv", number=10, repeats = 5,classProbs=TRUE,summaryFunction = twoClassSummary)
# Set up the cross-validated hyper-parameter search
EVTREE_GRID= expand.grid(pmutatemajor=c(0.2,0.2), pmutateminor=c(0.2,0.2), pcrossover=c(0.2,0.2),
psplit=c(0.2,0.2), pprune=c(0.2,0.2), minbucket=c(5L,10L), minsplit=c(15L,25L),
maxdepth=c(5L,15L),alpha=c(1L,5L))
EVTREE_fit_bayes <- function(pmutatemajor, pmutateminor, pcrossover,
psplit,pprune,minbucket,minsplit,maxdepth,alpha) {
txt <- capture.output(
mod <- train(Creditability~ ., data =XY_train,
method = "evtree",
preProc = c("center", "scale"),
metric = "AUC",
trControl = ctrl,
tuneGrid = EVTREE_GRID))
  list(Score = getTrainPerf(mod)[, "TrainAUC"], Pred = 0)
}
library(rBayesianOptimization)
EVTREE_Search <- BayesianOptimization(EVTREE_fit_bayes,
bounds = list(pmutatemajor=c(0.2,0.2), pmutateminor=c(0.2,0.2), pcrossover=c(0.2,0.2),
psplit=c(0.2,0.2), pprune=c(0.2,0.2), minbucket=c(5L,10L),
minsplit=c(15L,25L), maxdepth=c(5L,15L), alpha=c(1L,5L)),
init_grid_dt =NULL,
init_points =10,
n_iter = 10,
acq = "ucb",
kappa =1,
eps = 0.0,
verbose = TRUE)
You can create your own caret model, basically redefining the existing one for evtree so that you control its parameters. The elements of the list to change are parameters, grid and fit (where you pass in evtree.control):
evtree_ext <- list(label = "Tree Models from Genetic Algorithms",
library = c("evtree"),
loop = NULL,
type = c('Regression', 'Classification'),
parameters = data.frame(parameter = c("maxdepth","alpha"),
class = rep('numeric',2),
label = c('maxdepth','alpha')),
grid = function(x, y, len = NULL, search = "grid") {
if(search == "grid") {
out <- data.frame(alpha = seq(1, 3, length = len),maxdepth=seq(1,20,len=len))
} else {
out <- data.frame(alpha = runif(len, min = 1, max = 5),maxdepth=sample(1:20,len=len))
}
out
},
fit = function(x, y, wts, param, lev, last, classProbs, ...){
dat <- if(is.data.frame(x)) x else as.data.frame(x, stringsAsFactors = TRUE)
dat$.outcome <- y
theDots <- list(...)
                    if(any(names(theDots) == "control"))
                    {
                      theDots$control$alpha <- param$alpha
                      theDots$control$maxdepth <- param$maxdepth
                      ctl <- theDots$control
                      theDots$control <- NULL
                    } else ctl <- evtree::evtree.control(alpha = param$alpha, maxdepth = param$maxdepth)
## pass in any model weights
if(!is.null(wts)) theDots$weights <- wts
modelArgs <- c(list(formula = as.formula(".outcome ~ ."),
data = dat,
control = ctl),
theDots)
out <- do.call(evtree::evtree, modelArgs)
out
},
levels = function(x) x$obsLevels,
predict = function(modelFit, newdata, submodels = NULL) {
if(!is.data.frame(newdata)) newdata <- as.data.frame(newdata, stringsAsFactors = TRUE)
predict(modelFit, newdata)
},
prob = function(modelFit, newdata, submodels = NULL) {
if(!is.data.frame(newdata)) newdata <- as.data.frame(newdata, stringsAsFactors = TRUE)
predict(modelFit, newdata, type = "prob")
},
tags = c("Tree-Based Model", "Implicit Feature Selection", "Accepts Case Weights"),
sort = function(x) x[order(x[,1]),])
Using an example (sorry not familiar with this package, might be nonsense):
library(MASS)
library(caret)
library(evtree)
library(rBayesianOptimization)
mod <- train(type~ ., data =Pima.tr,method=evtree_ext,
tuneGrid=data.frame(alpha=c(1,2),maxdepth=c(5,10)),
trControl=trainControl(method="cv",number=2))
mod$results
alpha maxdepth Accuracy Kappa AccuracySD KappaSD
1 1 5 0.715 0.3511444 0.007071068 0.04724272
2 2 10 0.725 0.3273370 0.007071068 0.01357389
Thank you so much. I'm going to try running the Bayesian optimization algorithm.
you're welcome .. happy modeling :)
Effect of a magnetic field on cathode rays in a cathode ray tube
My question is regarding the direction in which cathode rays bend in a magnetic field.
My book states that :
When only electric field is applied, the electrons deviate from their
path and hit the cathode ray tube at point A. Similarly when only
magnetic field is applied , the electron strikes the cathode ray tube
at point C.
I tried to apply the Fleming's Left hand rule used to find the direction of force on a current carrying wire where the direction of current is taken opposite to the direction of flow of electrons.
The Fleming's Left hand rule states that :
Stretch the thumb, forefinger and middle finger of your left hand such
that they are mutually perpendicular. If the first finger points in
the direction of magnetic field and the second finger in the direction
of current, then the thumb will point in the direction of motion or
the force acting on the conductor.
Applying this rule, I found that the electrons should hit the cathode ray tube at A in a magnetic field. But in the book it is given that they strike it at C.
Edit : I am adding a picture which makes it clear how I applied the rule.
Please explain to me where I went wrong. Thank you.
Cross posting on Chemistry SE
It is just to signal that answers can be in another section of SE. I have no idea if the same Q is welcome as a multiple one. Both approaches have pros and cons.
Yeah, you're not wrong; following the rule, the electrons should also be at A when the magnetic field is on!
@user8718165 The link you have provided shows the same problem. But there too, no conclusion has been reached as to why the electrons deviate downward when all the rules we use give the opposite deviation.
@AshokSharma Don't worry! Your reasoning is absolutely correct. The magnetic polarity should be reversed in order to get the deflection suggested by the book. I don't think any answers are required because you know what's really happening.
@user8718165 Thanks. But I have seen several videos and websites in which Fleming's left hand rule gives the direction of deflection opposite to what really happens. You can google "deflection of cathode rays in a magnetic field". Something is surely missing. Sorry if you feel annoyed by my repeated requests.
See this: https://courses.lumenlearning.com/austincc-physics2/chapter/30-2-discovery-of-the-parts-of-the-atom-electrons-and-nuclei/ (the image just after the discharge tube). It's a pretty reputable site and you can be sure that you are correct (in this situation). You may remove some of your previous comments as they are making this section messy.
The rule you applied is correct, but because the electron is negatively charged the actual answer will be exactly opposite to what you would get. The answer that you got is correct for a positively charged particle.
Sir, I have edited my question and it is now very clear that the rule can be used for cathode rays, and that the direction of deflection of the rays is indeed opposite to what the rule predicts.
All this confusion wouldn't have happened if they got it right about 300 years ago, when they rubbed a sheet of glass & got a spark and assumed the electricity was jumping off instead of electrons being rubbed off & jumping back on. By the time they realized they had muffed it (in the 19th century), it was too much trouble & too expensive to rewrite all the papers & republish all the books, so they still said current goes from positive to negative but electrons go from negative to positive.
The way I prefer to remember it is, that, if the electrons go CLOCKWISE around a coil, the north pole it produces faces towards you, & the electrons around the atoms in the core go clockwise in it, or a magnet with the north pole facing you. If you pass an electron beam, or electron current (real current) through a wire in front of it, the wire (or beam) is pushed to the side where the electrons travel in the same direction as the electrons in the coil (or around the atoms in the magnet). Or otherwise, use the right hand rule for electron current.
The book is right because it says the magnetic field is in the inward direction. So, while pointing your index finger inward, the thumb faces down.
I believe the force given by Fleming's rule is correct for POSITIVE charges, while electrons are negative and feel a force in the opposite direction.
So the direction of current is opposite to the electrons' movement, and the magnetic force on the electron is opposite to Fleming's rule!
Otherwise we can use the left hand with the fingers in the opposite direction, like in the photo ;)
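The sign flip discussed above is just the charge q in the Lorentz force F = qv × B. A quick numerical sketch (the axes are my own choice: beam along +x, field into the page along −z):

```python
def cross(a, b):
    """3D cross product of two vectors given as tuples."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

q = -1.0                 # electron: negative charge (arbitrary units)
v = (1.0, 0.0, 0.0)      # beam direction: +x
B = (0.0, 0.0, -1.0)     # magnetic field into the page: -z

# v x B points along +y, so a positive charge would be pushed up (+y);
# the negative q flips it, so the electron is pushed down (-y).
F = tuple(q * c for c in cross(v, B))
print(F)
```

A positive test charge and an electron in the same field deflect to opposite sides, which is exactly the discrepancy between the left-hand-rule prediction for conventional current and what the beam actually does.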
Contrary to everything you have learned about the so called “cathode rays”, I assert something completely different about them.
Please look at this drawing:
This is a kind of cathode ray tube (CRT), also called Braun tube, which can be found in every CRT TV, monitor or oscilloscope. On the left side of the tube is the negative electrode (the cathode) and a little to the right is the positive (the anode), which is in the form of a metal disk with a small hole in the middle. To the right of the anode there are two additional electrodes which, when connected to a high DC voltage, deflect the beam from its straight line upwards to the positive electrode. The beam itself is actually invisible, but is made visible by adding a small amount of some inert gas into the tube (neon, argon etc.).
The contemporary physics asserts that this is a beam of negative particles, called electrons, traveling from the cathode through the anode and then hitting the opposite wall of the tube.
I assert that this so-called “beam” is actually an electromagnetic vortex (EM-tornado).
(a real image of a CRT taken from the Youtube video)
What the author of this answer regards as contradictory in the assertion of moving negative electricity from the cathode through the anode to the opposite wall are two things. The first is of principle nature: one of the basic principles of nature is that the movement is always from the positive to the negative and not contrariwise (please read Why is the charge naming convention wrong?). The second is a matter of fact. Let us examine the nature of electricity around the right part of the tube in the drawing above, in other words, let us examine it in front of the screen of a CRT television, monitor, or oscilloscope - we will always find that the detector shows intense positive electricity.
That it is impossible, negative electricity to travel towards the screen and on its other side to appear positive electricity, shows the following experiment: we electrify a vinyl (gramophone) plate by rubbing it (as we know it is negatively electrified) and place it behind a big glass window. Then we test the nature of the electricity on the other side of the window. The detector shows presence of negative electricity just as it would have indicated without the glass. Glass does not change the nature of electricity on the other side.
Before we present our explanation of this phenomenon, let us consider a few more experiments. We place a stiff copper wire on a table. Parts of its length don’t touch the table. Above a wire section that does not touch the table we hold a strong cylindrical magnet with its positive pole down, so that the wire lies exactly under the middle of the magnet. Then we connect a new battery to the ends of the wire so that the positive pole is closer to us and the negative pole further away from us. At the moment of connection we will notice that the wire makes a strong deflection to the left and up. As soon as we turn the magnet over and repeat the same, the wire will make a strong deflection to the right and up. If we hold the magnet again with the positive pole down, now not directly over the wire, but left over it, however still close to it, we will notice that the wire after connecting to the battery makes a jerky movement to the right and down. How is this explained? In the first variant, the permanent magnet “blows” down; the magnetic wind in and around the wire blows clockwise spirally from the plus to the minus pole of the battery; it blows down on the right of the wire, up on the left of it; on the right of the wire both magnetic winds coincide (the effect intensifies), and on the left of the wire they collide (the effect weakens); the wire moves to where the effect only intensifies, namely to the maximum, and that is to the left and up. In the third variant, in which both winds only collide, the wire deflects to where the adverse effect is maximally attenuated or quite ceased, namely to the right and down.
Now, facing a CRT oscilloscope, we let its beam run slowly and uniformly from left to right (visible as a bright dot moving horizontally from left to right in the middle of the screen); then, exactly over the center of the screen, we place a magnet with its plus-pole down. We will notice that the dot moves no longer horizontally, but that it slopes downwards and passes through the center. When we turn the magnet upside down, the dot slopes upwards, passing through the center again. If we compare this observation with what we have just said about the experiment with the copper wire and the magnet, we find the same thing happening in both cases. We conclude that the rotational direction of the magnetic wind generated by the beam in the oscilloscope coincides with that of the wire, as long as the positive pole of the battery is closer to us. So it's also the oscilloscope's plus side closer to us when we stand in front of it.
[ The (+)pole of the magnet points downwards, the beam of the oscilloscope approaches it from the left. On the right side of the beam its magnetic wind blows down, i.e. both winds match; so, the beam is shifted upwards. When it goes to the right side of the screen, also on the right side of the magnet, then their winds collide, so the beam is shifted downwards.]
We explain this phenomenon as follows: the positive electricity radiating from the anode spreads to the right into the broader part of the Braun tube in the drawing above. Since the anode is a disc with a circular hole in the middle, this electricity, with the help of the suctioning minus cathode on the other side of the anode, forms a vortex which is directed to the opening of the anode and continues to the cathode. This electromagnetic tornado is actually the beam that is visible when a small amount of an inert gas is introduced into the tube.
So when we stand in front of an oscilloscope and the bright dot lies still in the center of the screen, then it flows in the tube around the bright dot invisible positive swirling electricity towards us and from the very dot begins a vortex in the opposite direction towards the hole of the anode and onward to the cathode. The bright dot is actually the top of this EM-tornado. (Even with toys that cause a vortex in a water-filled container by means of a small electric motor located at the bottom, it can be noticed that the movement of the water around the vortex is directed upwards, but in the vortex downwards).
The fact that the vortex is deflected to the positive of the two additional electrodes does not contradict this explanation, because I assert that this is not something that can be simply accommodated under the postulate “plus attracts minus”, but rather a positioning of a motion consistent with ambient influences whereby maximum effects are achieved (we could observe something similar in the previous experiment, where the wire was deflected to the left and up while the magnet with its plus pole was positioned over it). For the effect of the vortex to reach the maximum, it is deflected to the positive electrode when additional electrodes are inserted in the tube.
In the above-mentioned toys, the water vortex is fully upright when the electric motor is positioned right in the middle of the bottom of a cylindrical or slightly conical vessel. However, when the motor is displaced to one side of the vessel, the vortex is curved towards the opposite side. In this way, it strives to achieve the maximum effect, in this case to capture the largest possible amount of water and make it spin (YouTube video Discovery Kids Tornado Lab extreme weather toys.Real Tornado Sound Effects). In our case the electromagnetic vortex makes a curve to the positive electrode; so it seeks to capture and spin the largest possible amount of positive electricity.
[ When an air-tornado inclines to one side, then it does it to the side where the air-pressure is higher. Higher air-pressure means more air, so it strives to capture more air, make it spin and thus stay alive. If we imagine the positive electricity as a higher pressure, the negative electricity as a lower pressure, then the EM-tornado inclines of course to the higher pressure, i.e., to the positive of the additional electrodes. ]
It can also be assumed that a non-symmetrical conical glass tube would make the vortex curved even without the additional electrified electrodes (drawing below).
[please see also (in slow motion, let’s say 0.25 of the normal speed) how the vortex gradually position itself in the middle of the bottle in this YouTube video from 2:03 to 2:05].
https://youtu.be/5-Bco8KRpmU?t=2m3s
Another detail indicating that this is a kind of vortex is the shape that the bright dot takes when turning off the oscilloscope. It “dissolves” circularly. Something similar is also noticeable on the water surface of the mentioned toy after switching off the electric motor.
[ With the air- or the water-tornado, the force of the gravity is pulling the vortex down. With the electricity the suctioning minus cathode takes the role of the gravity, that is, the role of pulling the EM-vortex “down”.]
I don’t know whether you understand what this answer means if it is true. It means that the whole particle physics is questioned, because the CRT is the founding stone of the whole modern particle physics.
I am ready to advocate its truthfulness in front of any audience.
Source: https://newtheories.info
This presents an alternative theory about what cathode rays are. But it doesn't answer the question.
I think it answers the question in the part of the answer where I speak about the magnet above the oscilloscope screen.
This is a site for questions and answers within mainstream physics, based on mainstream accepted physical models. Answering to promote your own non-mainstream theories is not appropriate on this site. Answering based on non-mainstream physics doesn't really help people here.
In my view, the only important thing is whether there is truth or not in an answer. I don't philosophize, but I speak with experimental facts which are very easily verifiable. The experiments are the only thing that matter, nothing else. Everyone can prove them and if the results are true, then it throws undoubtedly a new light on the phenomena.
I see that you want to kick me out of here. Surely you can kick me out of StackExchange, but also surely you cannot kick out the truth. It will find its way, sooner or later.
How can I download the BATH800 dataset?
How can I find a valid link to download the BATH800 dataset?
"This dataset is a collection of eye images"
I tried to find the download link through the references given in different articles.
But these links have problems and do not work.
I am not aware of the BATH800 dataset. There are many collections of eye images available though (e.g., https://www.kaggle.com/datasets/kayvanshah/eye-dataset).
The dataset was used in the article I read, but the download link is broken. [204] Bath iris database, http://www.smartsensors.co.uk/irisweb/download.htm.
What's the link to the article?
https://doi.org/10.1016/j.neunet.2019.07.020
From my research, the BATH Iris Dataset is no longer available: https://www.sciencedirect.com/science/article/pii/S0262885621000147
"the extensive BATH database reached only limited popularity mainly due to its relatively short lifespan."
When is complete-class scope really useful in default arguments, default member initializers and exception specification?
The standard says that default arguments, class member default initializers and method exception specifications should be parsed in a complete-class context; that is, the class should be regarded as complete within those expressions or specifiers. Naturally, it follows that all the names defined in the scope of the class should be visible to those expressions, even the names declared/defined after the expression:
struct Test
{
// Well-formed, `bar` is looked up in complete-class context:
static void foo() noexcept(noexcept(bar()));
static void bar() noexcept;
};
I imagine this requirement puts a lot of burden on compilers: they have little choice but to postpone parsing those expressions until the class definition is complete (or even further, in the case of nested classes), which leads to other problems and pitfalls.
Which is why I can't help wondering: Why? What good does it do?
Isn't it always possible to reorder the member declarations in such a way that a name is always declared before it is used (by before I mean earlier in the source text)?
Could you come up with an example which cannot be reordered, so that a default argument, a default initializer and an exception specification clearly must be parsed in the complete-class context?
I guess it's because you would be putting more burden on the compiler otherwise: default arguments and default initializers can't simply be plugged in at the call site.
Compilers actually do parse such things eagerly, resulting in issues like CWG325.
Counter example:
struct Test
{
// Well-formed, `bar` is looked up in complete-class context:
static void foo() noexcept(noexcept(bar()));
static decltype(foo()) bar() noexcept;
};
It is maybe not a realistic example (I could explicitly write void instead), but disallowing such mutual dependencies would be a major restriction that would be unnecessary, because compilers can handle code like the above.
Different example that is maybe more addressing your question:
struct Test
{
// Well-formed, `bar` is looked up in complete-class context:
static void foo() noexcept(noexcept(bar()));
static void bar(decltype(foo) f = foo) noexcept;
};
not sure if I understood the question correctly. I'll just leave the answer...
I think you understood the question correctly: I was indeed asking for a counter example. However, I was hoping to see the ultimate counter example, the one that can't be written without this complete-class context...
@IgorG I think that is difficult because, with lots of effort, almost any restriction can be bypassed. Though, I'd be happy to be proven wrong by a different answer.
May I ask if that second example is purely synthetic? Or have you really seen code like this in a real project, where it serves a good purpose?
@IgorG I think a class with only static methods is already purely synthetic ;). I don't remember coming across this, but if I were to write a member function that takes a member function as a parameter, this is what I would write.
@IgorG only the noexcept specification would probably be swapped (ie bar is noexcept if foo is), but then it wouldn't be a counter example anymore
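Beyond default arguments and noexcept specifications, complete-class context is also what makes the everyday pattern of a default member initializer calling a member declared further down well-formed; a minimal sketch:

```cpp
#include <cassert>

struct Widget {
    // Default member initializer: parsed in complete-class context,
    // so default_size() may be declared below this line.
    int size = default_size();
    static int default_size() { return 42; }
};
```

Without complete-class lookup, any such initializer would force the helper to be declared first, which is often at odds with the common convention of listing data members before member functions.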
Altair: Can't facet layered plots
I read in the documentation that I can facet a multi-layer plot, but somehow the data are lumped together in the output plot and repeated in all facets.
I can facet each layer with no problem, here is an example with the cars dataset:
import altair as alt
from altair import datum
from vega_datasets import data
cars = data.cars()
horse = alt.Chart(cars).mark_point().encode(
x = 'Weight_in_lbs',
y = 'Horsepower'
)
chart = alt.hconcat()
for origin in cars.Origin.unique():
chart |= horse.transform_filter(datum.Origin == origin).properties(title=origin)
chart
miles = alt.Chart(cars).mark_point(color='red').encode(
x = 'Weight_in_lbs',
y = 'Miles_per_Gallon'
)
chart = alt.hconcat()
for origin in cars.Origin.unique():
chart |= miles.transform_filter(datum.Origin == origin).properties(title=origin)
chart
But when combined all the data show up in every plot
combined = horse + miles
chart = alt.hconcat()
for origin in cars.Origin.unique():
chart |= combined.transform_filter(datum.Origin == origin).properties(title=origin)
chart
Am I doing something wrong?
This is due to a little gotcha that's discussed very briefly at the end of the Facet section in the docs.
You can think of a layered chart in Altair as a hierarchy, with the LayerChart object as the parent, and each of the individual Chart objects as children. The children can either inherit data from the parent, or specify their own data, in which case the parent data is ignored.
Now, because you specified data for each child chart separately, they ignore any data or transform coming down from the parent. The way to get around this is to specify the data only in the parent.
As a side note, Altair also has a shortcut for the manual filtering and concatenating you're using here: the facet() method. Here is an example of putting this all together:
import altair as alt
from vega_datasets import data
cars = data.cars()
horse = alt.Chart().mark_point().encode(
x = 'Weight_in_lbs',
y = 'Horsepower'
)
miles = alt.Chart().mark_point(color='red').encode(
x = 'Weight_in_lbs',
y = 'Miles_per_Gallon'
)
alt.layer(horse, miles, data=cars).facet(column='Origin')
JSP image display: keep getting 0x0 size
I saved image files to
src/main/webapp/resources/upload
, inserted the absolute path of each image file into the database, and display the file using that absolute path like this:
<td><img src="${content_view.bFilePath}" alt="" style="width:30px;height:30px;" /></td>
It displays the image file only when I run the project in Eclipse. If I open the project in a browser, it doesn't show any image, and it keeps saying img src 0x0 even though I set the image size.
I tried both:
<img src="${content_view.bFilePath}" alt="" style="width:30px;height:30px;" />
<img src="${content_view.bFilePath}" alt="" width=100 height=100 />
and my path is like this:
C:\Users\ADMIN\workspace\RankingWeb_v5\src\main\webapp\resources\upload\Koala_20170926-22-36-04.jpg
I don't see any problem with the path because the image is displayed in Eclipse.
How do I fix this?
Added
<resources mapping="/resources/**" location="/resources/upload/" />
to servlet_context.xml and then changed my code to
<img src="<c:url value="/resources/${content_view.bFile}"/>" width="300"/>
A browser cannot load an absolute filesystem path such as C:\Users\... from a served page; the file has to be exposed through a URL the servlet container serves, which is what the resource mapping above does.
Android Media Recorder start failed exception
I am having a problem with MediaRecorder in Android. I am recording audio, which works well on an LG P500, but the same code is not working on a Samsung GT-S5360. I am getting the error start failed -22.
This is the code I am using:
final MediaRecorder recorder = new MediaRecorder();
recorder.setAudioSource(MediaRecorder.AudioSource.VOICE_CALL);
recorder.setOutputFormat(MediaRecorder.OutputFormat.THREE_GPP);
recorder.setAudioEncoder(MediaRecorder.AudioEncoder.AMR_NB);
recorder.setOutputFile(path);
try {
recorder.prepare();
recorder.start();
} catch (Exception e) { Log.d(TAG, "Exception : " + e); }
When debugging, I got cause as null in logcat.
Please suggest me some solution.
any code or logcat will be appreciated. we cant provide a solution just like that.
Finally I found the solution after a long search: I added a permission and now my code is working well.
Where did you find such permission? http://developer.android.com/reference/android/Manifest.permission.html
I have put the same permission in my manifest still the app is crashing on recorder.start()
I had the same issue, and I tried installing a voice recorder from the Play Store to check. It didn't allow me to record VOICE_CALL either. From that I realised some device manufacturers don't support this. So record with the MIC if the device doesn't support VOICE_CALL.
The VOICE_CALL option requires CAPTURE_AUDIO_OUTPUT permission, which is only granted for privileged apps.
The permission suggested by the op did not work for me, probably because it does not exist :)
If you are getting this error, check the way you are setting up the media recorder. In my case it also failed only on some devices. I was doing this for all devices running OS version > API 10:
mediaRecorder.setOutputFormat(MediaRecorder.OutputFormat.AMR_WB);
mediaRecorder.setAudioEncoder(MediaRecorder.AudioEncoder.AMR_WB);
Turns out that the faulting device was running JB but didn't support wideband, so changing it to raw/narrowband worked:
mediaRecorder.setOutputFormat(MediaRecorder.OutputFormat.RAW_AMR);
mediaRecorder.setAudioEncoder(MediaRecorder.AudioEncoder.AMR_NB);
Based on the comment of the actual poster... he suggested his answer in the comment. Putting this into an actual answer, hoping that it helps others:
The OP added the following permission and his code worked well for him:
<uses-permission android:name="android.permission.STORAGE" />
This is either deprecated or nonexistent.
Can we make an NFC NDEF card password-protected after writing that card in React Native?
I'm using the react-native-nfc-manager package for reading and writing NFC cards from my app. I want to write the card from my app so that no other app can change the data.
I have gone through multiple apps from the Play Store, one of which is NFC Tools; it has a password-protect feature, which I wasn't able to find in react-native-nfc-manager.
Can anyone please let me know how I should apply the NFC password-protect feature in React Native, or any other way without making the card read-only?
Password protection is very dependent on the exact make and model of the NFC card, so without telling us the make and model of the NFC card you are using it is hard to answer your question.
Thanks for replying. I'm using an NXP NTAG215 chip by LINQS, memory size 496 bytes. It's a re-writable card; please let me know any other details I can provide.
The exact details of how to password protect writing this card are at https://www.nxp.com/docs/en/data-sheet/NTAG213_215_216.pdf Section 8.8
A summary is below:-
Write the password to the correct memory address for the NTag215 (page 85h for the 215 chip)
Write the Pack to the correct memory address (first 2 bytes of page 86h)
Write to the AUTH0 memory area to set the first page to be protected by the password, usually you protect from page zero 00h (Last byte of page 83h)
To write to a password protected Card you first need to send the PWD_AUTH command before and other commands to write.
So configuring and using password protection on these NFC cards requires you to send (Transceive) low level commands to the card, these cards are NfcA based so react-native-nfc-manager has an NfcAHandler to transceive to them.
The react-native-nfc-manager demo app shows how to transceive custom commands.
The write page custom command begins with A2h.
The password auth custom command begins with 1Bh
I've not tried this with react-native but it should all be possible by combining the low level NfcA methods of react-native-nfc-manager with the detail in the cards datasheet on how to password protect that card.
Thank you very much for your reply @Andrew. I have tried using transceive custom commands to write, for example A2 00 01 02 03 04, from the demo app; the card then got protected, but now I don't understand how to unlock/remove the password from the card. Can you please help me?
If you have successfully set a write password on the tag, then before you can send it other write commands in the same session you need to send it the PWD_AUTH (1Bh) command with the password as payload. Once authenticated, for as long as the card stays in range you can read and write freely, including writing to the config pages to turn off the password protection (resetting the values to the defaults).
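As a sketch of that "turn it back off" sequence (plain JavaScript, untested on hardware; page 83h and the AUTH0 byte position are taken from the summary above, so verify them against the datasheet for your exact chip):

```javascript
// Disabling password protection, assuming the current password is known.
// WRITE (A2h) replaces a whole 4-byte page, so read page 83h first and
// pass its current contents in, changing only the AUTH0 byte.
function buildDisableProtection(password, cfgPage83) {
  const cfg = [...cfgPage83];
  cfg[3] = 0xff; // AUTH0 = FFh means no page is protected any more
  return [
    [0x1b, ...password],  // PWD_AUTH first, with the full 4-byte password
    [0xa2, 0x83, ...cfg], // then write page 83h back with AUTH0 = FFh
  ];
}

// Example with a placeholder password and placeholder config bytes:
const cmds = buildDisableProtection([0x01, 0x02, 0x03, 0x04], [0x04, 0x00, 0x00, 0x04]);
```

Note that PWD_AUTH is 1Bh followed by all four password bytes, so a frame like 1B 00 01 02 03 only authenticates if the password really is 00 01 02 03.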
Thanks @Andrew. I have tried a transceive command for resetting the password, e.g. 1B 00 01 02 03, but it returns "transceive command failed". Can you please help me figure out what I'm doing wrong? I was able to write to the NFC tag using A2h; can you please provide an example transceive command to remove password protection from an NTAG215 tag?
Mocha and Chai unit testing for a Node.js app that needs to query DynamoDB
Hi, I am new to AWS DynamoDB and to Mocha/Chai unit testing.
I want to create Node.js unit tests with Mocha and Chai.
In my test.js I need to get the expected outcome from AWS DynamoDB; however, I am not sure how to do it.
In my test.js:
var assert = require('chai').assert;
describe('querying items from dynamodb', function(){
it('find date in Month collection', function(){
// not sure how I should put my inputs in here.
});
})
Do you have any articles or resources that I should read on?
If you want to make actual calls to AWS DynamoDB, a simple way to do it would be the following (based on documentation found for DynamoDB and DynamoDB.DocumentClient):
const assert = require('chai').assert;
const AWS = require('aws-sdk');
describe('querying items from dynamodb', function(){
it('find date in Month collection', function(done){
var params = {
TableName : <TEST_TABLE_NAME>,
Key: {
<PRIMARY_KEY>: <TEST_KEY_VALUE>
}
};
var expectedDate = <EXPECTED_VALUE>;
var documentClient = new AWS.DynamoDB.DocumentClient({apiVersion: '2012-08-10'});
documentClient.get(params, function(err, data) {
assert.strictEqual(data.Item.Date, expectedDate);
done();
});
});
});
BUT BUYER BEWARE! This will make calls to your actual DynamoDB and AWS may charge you money! To avoid this, mocking is highly recommended. Mocking calls to your DynamoDB can be done with the following code (based on documentation found on github, npmjs.com, and npmdoc.github.io):
const assert = require('chai').assert;
const AWS = require('aws-sdk');
const MOCK = require('aws-sdk-mock');
describe('querying items from dynamodb', function(){
before(() => {
// set up a mock call to DynamoDB
MOCK.mock('DynamoDB.DocumentClient', 'get', (params, callback) => {
console.log('Let us not call AWS DynamoDB and say we did.');
// return fake data
let fakeData = {
Item: {
Date: <FAKE_DATE>
}
};
return callback(null, fakeData);
});
});
after(() => {
// restore normal function
MOCK.restore('DynamoDB.DocumentClient');
});
it('find date in Month collection', function(done){
// set up the call like it's real
var params = {
TableName : <TEST_TABLE_NAME>,
Key: {
<PRIMARY_KEY>: <TEST_KEY_VALUE>
}
};
var expectedDate = <EXPECTED_VALUE>;
var documentClient = new AWS.DynamoDB.DocumentClient({apiVersion: '2012-08-10'});
documentClient.get(params, function(err, data) {
// data should be the fake object that should match
assert.strictEqual(data.Item.Date, expectedDate);
done();
});
});
});
Bootstrap Tabs displaying stacked rather than inline
Bootstrap Tabs displaying stacked rather than inline.
What I want (taken from Bootstrap Website):
What I get (notice the tabs 'Tenants' and 'Landlords' are stacked):
Bootstrap Code (taken from website):
My code:
<ul class="nav nab-tabs" role="tablist">
<li role="presentation" class="active"><a href="#tenant" aria-controls="tenant" role="tab" data-toggle="tab">Tenants</a></li>
<li role="presentation"><a href="#landlord" aria-controls="landlord" role="tab" data-toggle="tab">Landlords</a></li>
</ul>
<div class="tab-content">
<!-- Tenant panel -->
<div role="tabpanel" class="tab-pane active" id="tenant">
<!-- blah blah giant div -->
</div>
<!-- Landlord panel -->
<div role="tabpanel" class="tab-pane" id="landlord">...</div>
</div>
Anyone know what's wrong?
What is your ".list-inline" class that you've added doing?
saw that somewhere online, was an attempt to fix but doesn't do anything. I'll take it out in the question.
it's nav-tabs not nab-tabs
wow, now i feel stupid. thanks Lars
don't worry bout it, we all feel like a bit of a nab sometimes.
Dang nab-bit !
Typo pointed out by @Lars:
"it's nav-tabs not nab-tabs"
Sharing variables between applescript and bash script in same script
Is it possible, in a bash script, to have a variable, but also allow applescript (with osascript) to use that variable?
Sure. You can use bash variables like normal:
x=5
osascript -e "display alert \"$x\""
Helped me fix a script that had: osascript -e 'tell application "System Events" to display alert "blah blah blah" as critical'
Display column name with max value between several columns
I have data I collected from a form, and I have "pivoted" the data so it looks like this:
COUNTY | denver | seattle | new_york | dallas | san fran
-----------+---------+-----------+----------+----------+---------
ada | 3 | 14 | 0 | 0 | 0
slc | 10 | 0 | 0 | 0 | 9
canyon | 0 | 5 | 0 | 0 | 0
washington | 0 | 0 | 11 | 0 | 0
bonner | 0 | 0 | 0 | 2 | 0
(This was accomplished using case statements, crosstab is not allowed in the environment I am using: cartodb)
I now need a column that lists the CITY with the max value. For example:
COUNTY | CITY | denver | seattle | new_york | dallas | san fran
-----------+----------+---------+-----------+----------+----------+---------
ada | seattle | 3 | 14 | 0 | 0 | 0
slc | denver | 10 | 0 | 0 | 0 | 9
canyon | seattle | 0 | 5 | 0 | 0 | 0
washington | new_york | 0 | 0 | 11 | 0 | 0
bonner | dallas | 0 | 0 | 0 | 2 | 0
Could you post the query that you used to create the first result?
(i) Do you still have the original (not pivoted) data accessible in the database holding above table? What's the table structure? (ii) What should happen if more than a single city has the maximum value for a certain county?
I posted the original query above (this is what created the pivot for me). My data is coming from a Google Form, which is why I am pivoting it. If two or more cities have the same max, I would like to include both, but I am willing to just select the first one.
The guideline is make it one issue per question. If you have another question, just open another question. You can always reference this one for context. By later adding more questions you render complete answers incomplete. More importantly, the question loses value for the general public.
That's a textbook example for a "simple" or "switched" CASE statement to avoid code repetition.
SELECT CASE greatest(denver, seattle, new_york, dallas, "san fran")
WHEN denver THEN 'denver'
WHEN seattle THEN 'seattle'
WHEN new_york THEN 'new_york'
WHEN dallas THEN 'dallas'
WHEN "san fran" THEN 'san fran'
END AS city, *
FROM tbl;
The first in the list (from left to right) wins in case of a tie.
added my first code above, wondering if your suggestion can be added to make this all one query?
You can do this with a big case statement:
select t.*,
(case when denver = greatest(denver, seattle, new_york, dallas, sanfran) then 'denver'
when seattle = greatest(denver, seattle, new_york, dallas, sanfran) then 'seattle'
when new_york = greatest(denver, seattle, new_york, dallas, sanfran) then 'new_york'
when dallas = greatest(denver, seattle, new_york, dallas, sanfran) then 'dallas'
when sanfran = greatest(denver, seattle, new_york, dallas, sanfran) then 'sanfran'
end) as City
from table t;
EDIT:
I would pivot the results at the very end. Something like this:
SELECT name, state, the_geom,
MAX(CASE WHEN seqnum = 1 THEN favorite_team END) as favorite_team,
MAX(CASE WHEN favorite_team = 'Arizona Cardinals' THEN cnt ELSE 0 END) as ari,
MAX(CASE WHEN favorite_team = 'Atlanta Falcons' THEN cnt ELSE 0 END) as atl,
MAX(CASE WHEN favorite_team = 'Baltimore Ravens' THEN cnt ELSE 0 END) as bal,
MAX(CASE WHEN favorite_team = 'Buffalo Bills' THEN cnt ELSE 0 END) as buf
FROM (SELECT c.name, c.state, c.the_geom, s.favorite_team, count(*) as cnt,
ROW_NUMBER() OVER (PARTITION BY c.name, c.state, c.the_geom ORDER BY COUNT(*) desc) as seqnum
FROM fandom_survey_one s JOIN
counties c
ON ST_Intersects(s.the_geom, c.the_geom)
GROUP BY c.name, c.state, c.the_geom, s.favorite_team
) c
GROUP BY name, state, the_geom
ORDER BY name, state
Thanks for the reply. I am wondering if I can make this happen in one query. For example, here is the query I used to get my data into the first table example above:
added my first code above, wondering if your suggestion can be added to make this all one query?
Create Iterator from array with AMPHP
I have an array in php:
$array = [1,2,3];
When I do:
while(yield $array->advance())
I get Call to a member function advance() on array
How do I turn my array into an iterator?
You mean for/foreach?
It seems you have lots of unfinished previous questions. Are you not getting the help you need here on SO?
@Andreas yes I am, should I somehow close my questions?
And I mean: https://amphp.org/amp/iterators/
You can take the tour and learn the basics of SO; closing a question translates to accepting an answer. https://stackoverflow.com/tour
Awesome, got that figured now :) I cannot accept comments though, how can I finish a question then?
Oddly enough your profile does not show the 'informed' badge. Either way you can ask the commenter to post an answer, or post it yourself. Mind that answers need to be more than comments.
Thanks Andreas, it should now! I wasn't aware of all this.
You can call ->advance() only on instances of Amp\Iterator, so you need to convert your plain PHP array first with the fromIterable function:
Amp\Iterator\fromIterable($array)
About the definition of complex multiplication
Some people say that the complex product is defined the way it is to respect the distributive law of multiplication. However, the distributive law acts on whole numbers, like:
$$(a+b)(c+d) = ac + ad + bc + bd$$
The multiplication for complex numbers would be something in the form:
$$(z_1+z_2)(z_3+z_4) = z_1z_3 + z_1z_4 + z_2z_3 + z_2z_4$$
Where $z_n$ is a complex number in the form $a+bi$
I don't see why the distributive law must act inside the number:
$$(a+bi)(c+di) = ac + adi + bci + bdi^2$$
What's the real reason for this definition?
Rewrite one of your numbers as $z = (a + 0i) + (0 + bi)$, now appeal to your stated property.
Let $z_1 = a, z_2 = bi, z_3 = c, z_4 = di$.
Then the rule
\begin{equation*}
\tag{$\spadesuit$}(z_1 + z_2)(z_3 + z_4) = z_1 z_3 + z_1 z_4 + z_2 z_3 + z_2 z_4
\end{equation*}
tells us that
\begin{align*}
(a + bi)(c + di) &= ac + adi + bci + bd i^2 \\
&= ac - bd + (ad + bc)i.
\end{align*}
So if we want ($\spadesuit$) to be true, we are forced to use the standard formula for multiplication of complex numbers.
gmake function / ifneq/else/endif
I am trying to create a function that will determine if a directory exists and, if it does, add a target to the all list. But something is wrong. Here is the Makefile code snippet:
define buildMQ
$(info **** Checking to see if the MQ series directory exist *****)
ifneq "$(wildcard $(MQ_DIR) )" ""
$(info /opt/mqm was found)
MQ_APPS=MQSAPP
else
$(error $n$n**** ERROR - The MQ Series direcory: "$(MQ_DIR)" does not exist ******$n$n)
endif
endef
ifeq ('$(FN)' , 'TEST')
TEST_APPS=
endif
ifeq ('$(FN)' , 'ONSITE_TEST')
ONSITE_TEST_APPS= # insert ONSITE_TEST only apps here
$(call buildMQ)
endif
ifeq ('$(FN)' , 'ACCOUNT')
ACCOUNT_APPS=
$(call buildMQ)
endif
all:$(COMMON_APPS) $(TEST_APPS) $(ONSITE_TEST_APPS) $(ACCOUNT_APPS) $(MQ_APPS) makexit
and when I run it with FN = ONSITE_TEST:
**** Checking to see if the MQ series directory exist *****
/opt/mqm was found
Makefile:128: ***
**** ERROR - The MQ Series direcory: "/opt/mqm" does not exist ******
How can both print statements get printed? What am I missing?
The directory does exist
There's a lot of misunderstanding here about how call works. The call function takes a variable (name), plus zero or more arguments. It assigns the arguments to $1, $2, etc. and then it expands the variable.
Note that by "expands" here we don't mean "interprets the variable value as if it were a makefile". We mean very simply, go through the value of the variable and locate all make variables and functions and replace them with their appropriate values.
So, you invoke $(call buildMQ). This will not assign any values to $1, etc. since you didn't provide any arguments: in effect this is exactly the same as just using $(buildMQ); the call function has no impact here.
So make expands the value of the buildMQ variable... basically it takes the value as one long string:
$(info **** Checking to see if the MQ series directory exist *****) ifneq "$(wildcard $(MQ_DIR) )" "" $(info /opt/mqm was found) MQ_APPS=MQSAPP else $(error $n$n**** ERROR - The MQ Series direcory: "$(MQ_DIR)" does not exist ******$n$n) endif
and expands it. So first it expands the $(info ... Checking ... function and prints that. Then it expands the $(wildcard ..) and replaces that. Then it expands the $(info /opt/mqm ...) and prints that. Then it expands the $(error ...) and shows the message and exits.
If it hadn't exited, then you'd have a syntax error because a function like call cannot expand to a multi-line statement; as above it's not evaluated like a set of makefile lines. It has to expand to a single value makefile line.
You need to use the eval function if you want make to parse the contents of a variable as if it were a makefile; eval doesn't take a variable name it takes a string to parse, so it would be:
$(eval $(buildMQ))
However, this won't do what you want for the same reason: it expands the buildMQ variable and that causes all the functions to be expanded first, before eval even sees them.
One option would be to escape all the variables and functions reference in buildMQ. But in your situation a simpler solution is to use the value function to prevent expansion before eval sees the value:
$(eval $(value buildMQ))
I think there is also a misunderstanding of how ifneq works. I interpret https://www.gnu.org/software/make/manual/html_node/Reading-Makefiles.html#Reading-Makefiles to mean that using any of the conditionals modifies how a multiline variable is put together; it is not useful for creating a conditional statement inside a function. To achieve that, one must use $(if ), no?
Thanks. The $(eval $(value buildMQ)) solved the issue.
Date Math for Google Apps Script for User Submitted Dates
I created a Google Form to collect user input, including the expiration date of a contract.
I need to create a reminder date (6 months before the expiration date) in a new column of the gsheets that is linked to the form. Using the Event Object namedValues, I extracted the expiration date from gsheet. I converted the date to milliseconds and subtracted the number of milliseconds equal to 6 months (or thereabouts). However, the output that got sent back to the googlesheet is an undefined number.
I must be misunderstanding something and was hoping someone more skilled in this can help me out. Is the data type wrong? Thanks for any illumination you can provide.
function onFormSubmit(e) {
var responses = e.namedValues;
var MILLIS_PER_DAY = 1000 * 60 * 60 * 24;
var expireDate = responses['Expiration Date'][0].trim();
var expireDate_ms = expireDate * 1000; // converting to milliseconds
var noticeDate = expireDate_ms - (183 * MILLIS_PER_DAY);
// Create a new column to store the date to send out notice of expiration or renewal
var sheet = SpreadsheetApp.getActiveSheet();
var row = sheet.getActiveRange().getRow();
var column = e.values.length + 1;
sheet.getRange(row, column).setValue(noticeDate);
}
I'd recommend using a formula instead:
={"Reminder"; ARRAYFORMULA(IF(NOT(ISBLANK(B2:B)); EDATE(B2:B; -6); ""))}
Add this to the header of an empty column, it will generate all data in that column for you. Change B for the column you have the expiration date on.
If you really need to use Google Apps Script you can, but JavaScript is notorious for having bad date support (at least without an external library). To do it, you’ll have to manually parse the string, modify the date and format it back to the date number:
const dateParts = e.namedValues['Expiration Date'][0].trim().split('/')
const date = new Date(
Number(dateParts[2]), // Year
Number(dateParts[1]) - 1 - 6, // Month. -1 because January is 0 and -6 for the 6 months before
Number(dateParts[0]) // Day
)
const numericDate = Math.floor((date.getTime() - new Date(1900, 0, 1).getTime()) / (1000 * 60 * 60 * 24))
This example only works if the format used in the sheet is DD/MM/YYYY.
The numeral value of date is the number of days since the first day of 1900 (link to documentation). So we need to subtract it and change it from milliseconds to days. Math.floor ensures that it’s not decimal.
You can set numericDate to the cell but make sure the numeric format is Date.
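For what it's worth, the same parsing-and-offset logic can be sketched as a framework-free function (plain JavaScript; it assumes strict DD/MM/YYYY input and follows the 1900-01-01 epoch used above):

```javascript
// Parse DD/MM/YYYY, step back 6 months, and return the serial day count.
function reminderSerialDate(expiration) {
  const [day, month, year] = expiration.trim().split('/').map(Number);
  // Months are zero-indexed; subtracting 6 more lets Date roll the year back.
  const reminder = new Date(year, month - 1 - 6, day);
  const epoch = new Date(1900, 0, 1);
  const MILLIS_PER_DAY = 1000 * 60 * 60 * 24;
  return Math.floor((reminder.getTime() - epoch.getTime()) / MILLIS_PER_DAY);
}

reminderSerialDate('15/10/2012'); // day count for 15 April 2012
reminderSerialDate('10/03/2020'); // rolls back across the year boundary
```

Google Sheets serial numbers actually count from 30 December 1899, so they are 2 larger than this 1900-01-01 count; adjust accordingly if you need the exact sheet value.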
References
EDATE - Docs Editors Help
ARRAYFORMULA - Docs Editors Help
Date - JavaScript (MDN)
DATE - Docs Editor Help
Marti's answer was helpful, but the math didn't quite work out for me: the suggested solution subtracts 6 from the retrieved month, while the reminder date is supposed to be 6 months before the expiration date (taking the year and the day into account).
The solution I worked out is the following:
function onFormSubmit(e) {
const expireDateString = e.namedValues['Expiration Date'][0].trim();
if (expireDateString != ""){
const expireDateParts = expireDateString.split('/');
var MILLIS_PER_DAY = 1000 * 60 * 60 * 24;
const expireDate = new Date(
Number(expireDateParts[2]), // Year
Number(expireDateParts[0]) - 1, // Month. -1 because January is '0'
Number(expireDateParts[1]) // Day
);
const reminderNumericDate = Math.floor((expireDate.getTime() - 183 * MILLIS_PER_DAY));
var reminderDate = formatDate(reminderNumericDate);
// Create a new column to store the date to send out notice of expiration or renewal
var sheet = SpreadsheetApp.getActiveSheet();
var row = sheet.getActiveRange().getRow();
var column = 14; // hard-coded column number
sheet.getRange(row, column).setValue(reminderDate);
// Set up Schedule Send Mail
createScheduleSendTrigger(reminderNumericDate);
var status = 'Scheduled';
sheet.getRange(row, column+1).setValue(status);
}
}
Starting a Junior Security Consultant job soon
I've just graduated from university in information security and applied for various jobs. I really wanted a role as a junior pen tester but was told it was difficult without any experience. Obviously I have done a full 3-year course, but to me it felt fairly basic. Anyhow, I've been given a job offer as a junior security consultant (pentesting). The interview basically consisted of a 2-hour technical interview, and I felt I was only able to answer 50% of the questions asked. I feel so underqualified for the role I've been offered and am now just worried I'll fail probation. They also hinted that they'd expect me to complete the OSCP exam in the first few months.
If someone could just explain in general terms what’s usually expected from a junior pentester that may put my mind at ease a little.
Every company is different. For every job (and I still do this), ask them what their expectations are of you for the first 90 days. Get them to define it in hard metrics, then make sure you deliver. Then schedule a meeting after 90 days to review progress.
How has Hell been described in the Vedas and Upanishads?
How has Hell been described in the Vedas and Upanishads?
Does Hell involve mental torture or physical torture?
Nice question. Let's await the answers, @Archit.
First, there are no permanent hells in Hinduism. All hells are temporary; there are no deeds that lead to any permanent hell. Identifying ourselves with our bodies, we identify the material world and material existence as reality and want to think of hell as a material place where our outer material sheath will exist. But the outer sheath falls apart, decays, and does not go with the inner sheath at the time of death. Whether there is or is not a material "place" is not the purview of the Vedas and Upanishads. The focus and aim of the Upanishads is Brahman and Its attainment, not unnecessary descriptions of different hells whose very existence is not firmly rooted. Descriptions of various hells are in the realm of the Puranas and of various commentators. The Chandogya Upanishad V.10.7-10 (here - https://www.wisdomlib.org/hinduism) rather describes the rebirth of evil-doers as rebirth as small creatures such as insects, who suffer many painful births, deaths, and rebirths that are numerous and difficult to get out of.
The Gita, 16.8-21 describes evil deeds and the results of evil deeds as falling into a terrible 'hell' (narake).
The Astavakra Samhita verse 1.11 (Swami Nityaswarupananda translator) says "He who considers himself free is free indeed, and he who considers himself bound remains bound. 'As one thinks, so one becomes' is a popular saying in this world, and it is quite true." In other words, if you are thinking of Brahman, Brahman you will become. If you are thinking whether a particular action will lead to hell or are thinking of hells, you'll go to hells.
Think of Brahman. Your mind cannot dwell on two things at the same time. If you are thinking of Brahman, you are not thinking of hells - and vice versa. Gita 9.30-31 (Swami Nikhilananda translator) says "Even the vilest man, if he worships Me with unswerving devotion, must be regarded as righteous; for he has formed the right resolution. He soon becomes righteous and attains eternal peace. Proclaim it boldly, O son of Kunti, that My devotee never perishes." As Gita 6.30 says, "He who sees Me everywhere and sees everything in Me, to him I am never lost, nor is he ever lost to Me."
Thanks for your answer sir
Sir, the Dvaita school of Madhva believes in eternal hell.
To my knowledge hells and their various names are first mentioned in detail in Manu Smriti. More elaborate descriptions are found in Puranas like Garuda Purana and others.
Vedas and Upanishads don't describe hells in details like it is done in Manu Smriti or various Puranas. Some Upanishads however just mention them.
Here are a few such quotes:
He who relinquishing all desires has his supreme rest in the One without a second, and who holds the staff of knowledge, is the true
Ekadandi. He who carries a mere wooden staff, who takes to all sorts
of sense-objects, and is devoid of Jnana, goes to horrible hells
known as the Maharauravas. Knowing the distinction between these
two, he becomes a Paramahamsa.
Paramhamsa Upanishad (associated with Shukla Yajurveda)
The self is like a drop of water in the lotus (leaf). This is
overwhelmed by Prakriti. Being overcome he is in a state of delusion
and does not see the Lord in himself making him act. Content with the
mass of constituents and confused, unsteady, in eager pursuit, smitten
by desire, yearning, conceited, thinking ‘I am that, this is mine’ he
binds himself by himself as in a net, he roams about. Elsewhere also
it has been said. ‘The agent indeed is the Elemental self. The inner
spirit causes actions by means of instruments just as iron pervaded by
fire and beaten by workers is split into may, so, the elemental self
pervaded by the inner spirit and pressed by Prakriti becomes many. The
group of three aspects, assuming the forms of 84 Lakhs of living
beings constitutes the mass of elemental beings. This is the form of
plurality. The constituents are impelled by the spirit as a wheel by
its driver. As the fire is not beaten (only the iron is), so the
elemental self and not the spirit is over-powered. It has been stated:
this body without consciousness has been generated by the sex-act – it
is hell – has via the urinary passage, sustained by bones, covered
with flesh and skin, filled with faeces, urine etc., -- it is a
shattered sheath. It has been affirmed ‘Delusion, fear, depression,
sleep, wound, old age etc., being full of these Tamasa and Rajasa
traits (like desire), the elemental self is overwhelmed. Hence indeed,
it inevitably assumes different forms.
Maitreyani Upanishad, Prapathaka 3 (associated with Sama Veda).
Niralamba Upanishad metaphorically answers various questions like "What is heaven?" "What is hell?" etc.
(18) Who are animals and so forth ? (19) What is the immobile ? (20)
Who are the Brahmanas, etc., ? (21) What is a caste ? (22) What is
deed ? (23) What is a non-deed ? (24) What is knowledge ? (25) What is
ignorance ? (26) What is pleasure ? (27) What is pain ? (28) What is
heaven ? (29) What is hell ?
...................
(28) Heaven is the association with the holy.
(29) Association with the worldly folk who are unholy alone is hell.
(30) Bondage consists in imagining due to the beginningless latent impressions of nescience, ‘I am born, etc.’
Niralamba Upanishad (associated with Shukla Yajurveda)
III-13. A wise man, when disillusioned with the world, may become a
mendicant monk; when a person has attachments he shall reside in his
house. That degraded Brahmana who turns ascetic when he has
attachments indeed goes to hell
Narada Parivrajaka Upanishad (associated with Atharva Veda)
VII-1. Then asked about the restrictions to (the conduct of) the
ascetic, the god Brahma said to them in front of Narada. (The ascetic)
being dispassionate shall reside in a fixed abode during the rains and
move about for eight months alone; he shall not (then) reside in one
place (continuously). The mendicant monk shall not stay in one place
like a deer out of fright. He shall not accept (any proposal to
prolong his stay) which militates against his departure. He shall not
cross a river (swimming) with his hands. Neither shall he climb a tree
(for fruits). He shall not witness the festival in honour of any god.
He shall not subsist on food from one place (alone). He shall not
perform external worship of gods. Discarding everything other than the
Self and subsisting on food secured as alms from a number of houses as
a bee (gathers honey), becoming lean, not increasing fat (in the
body), he shall discard (the fattening) ghee like blood. (He shall
consider) getting food in one house alone as (taking) meat, anointing
himself with fragrant unguent as smearing with an impure thing,
treacle as an outcaste, garment as a plate with leavings of another,
oil-bath as attachment to women, delighting with friends as urine,
desire as beef, the place previously known to him as the hut of an
outcaste, women as snakes, gold as deadly poison, an assembly hall as
a cemetery, the capital city as dreadful hell (Kumbhipaka), and
food in one house as lumps of flesh of a corpse.
He becomes a Brahmavit (knower of Brahman) by cognising the end of the
sleeping state even while in the waking state. Though the (same) mind
is absorbed in Sushupti as also in Samadhi, there is much difference
between them. (in the former case) as the mind is absorbed in Tamas,
it does not become the means of salvation, (but) in Samadhi as the
modifications of Tamas in him are rooted away, the mind raises itself
to the nature of the Partless. All that is no other than
Sakshi-Chaitanya (wisdom-consciousness or the Higher Self) into which
the absorption of the whole universe takes place, in as much as the
universe is but a delusion (or creation) of the mind and is therefore
not different from it. Though the universe appears perhaps as outside
of the mind, still it is unreal. He who knows Brahman and who is the
sole enjoyer of Brahmic bliss which is eternal and has dawned once
(for all in him) – that man becomes one with Brahman. He in whom
Sankalpa perishes has got Mukti in his hand. Therefore one becomes an
emancipated person through the contemplation of Paramatman. Having
given up both Bhava and Abhava, one becomes a Jivanmukta by leaving
off again and again in all states Jnana (wisdom) and Jneya (object of
wisdom), Dhyana (meditation) and Dhyeya (object of meditation),
Lakshya (the aim) and Alakshya (non-aim), Drishya (the visible) and
Adrishya (the nonvisible) and Uha (reasoning) and Apoha (negative
reasoning). He who knows this knows all.
4. There are five Avasthas (states): Jagrat (waking), Swapna (dreaming), Sushupti (dreamless sleeping), the Turya (fourth) and
Turyatita (that beyond the fourth). The Jiva (ego) that is engaged
in the waking state becomes attached to the Pravritti (worldly) path
and is the particular of Naraka (hell) as the fruit of sins. He
desires Svarga (heaven) as the fruit of his virtuous actions.
Mandala Brahmana Upanishad (linked with Shukla Yajurveda)
Thanks for the answer. But can you provide some verse from major upanishades
I don't think there are such verses from the 12/13 major Upanishads. @DarkKnight. But all these Upanishads I have quoted from are included among 108 Hindu Upanishads. They are all minor Upanishads.
The Mahabharata, Svargarohonika Parva, Chapter 2 (Mahabharata 18.2) vividly describes hell. A celestial messenger took Yudhishthira with him to hell.
The path was inauspicious and difficult and trodden by men of sinful deeds. It was enveloped in thick darkness, and covered with hair and moss forming its grassy vesture. Polluted with the stench of sinners, and miry with flesh and blood, it abounded with gadflies and stinging bees and gnats and was endangered by the inroads of grisly bears. Rotting corpses lay here and there. Overspread with bones and hair, it was noisome with worms and insects. It was skirted all along with a blazing fire. It was infested by crows and other birds and vultures, all having beaks of iron, as also by evil spirits with long mouths pointed like needles. And it abounded with inaccessible fastnesses like the Vindhya mountains. Human corpses were scattered over it, smeared with fat and blood, with arms and thighs cut off, or with entrails torn out and legs severed.
"'Along that path so disagreeable with the stench of corpses and awful with other incidents, the righteous-souled king proceeded, filled with diverse thoughts. He beheld a river full of boiling water and, therefore, difficult to cross, as also a forest of trees whose leaves were sharp swords and razors. There were plains full of fine white sand exceedingly heated, and rocks and stones made of iron. There were many jars of iron all around, with boiling oil in them. Many a Kuta-salmalika was there, with sharp thorns and, therefore, exceedingly painful to the touch. The son of Kunti beheld also the tortures inflicted upon sinful men.
Later Indra told Yudhishthira that (Mahabharata, Svargarohonika Parva, Chapter 3) the place Yudhishthira viewed was hell.
Hell, O son, should without doubt be beheld by every king. Of both good and bad there is abundance, O chief of men. He who enjoys first the fruits of his good acts must afterwards endure Hell. He, on the other hand, who first endures Hell, must afterwards enjoy Heaven. He whose sinful acts are many enjoys Heaven first. It is for this, O king, that desirous of doing thee good, I caused thee to be sent for having a view of Hell. Thou hadst, by a pretence, deceived Drona in the matter of his son. Thou hast, in consequence thereof, been shown Hell by an act of deception.
Mint Gas Fee Problem
We have started the mint sale of our new collection on Ethereum, but we cannot solve the gas-fee problem: there is an unbelievably high gas fee. What should we do? Did we do something wrong in the contract?
There's nothing you can do. It depends on how busy the network is, and when lots of NFT drops happen at the same time, gas gets expensive. Assuming you're on Ethereum mainnet.
Whether your contract could be optimised is hard to know without being able to see any code.
Breaking down a date into segments
Possible Duplicate:
Simplest way to parse a Date in Javascript
I understand how to get the date and break it down into its segments, i.e.
alert( ( new Date ).getDate() );
and
alert( ( new Date ).getFullYear() );
alert( ( new Date ).getMonth() ); // there is no getFullMonth(); getMonth() is zero-indexed
etc etc.
But how do I do the same using a date from an HTML textbox instead of reading new Date?
The date in the HTML box would be formatted as follows:
31/10/2012
Dates inside of html text boxes (<input type="text">, I guess) don't follow any specific structure.
Use string parsing functions. HOW you parse it depends entirely on how it's typed in.
I've updated the question to show how the date will be formatted in a html box.
You could try:
var datearray = input.value.split("/");
var date = new Date(datearray[2],datearray[1] - 1,datearray[0])
I upvoted you BUT it will have to be
var date = new Date(datearray[2],datearray[1] - 1,datearray[0])
because month property is zero-indexed
@JohnnyLeung Thanks, that kind of leniency is rare but needed on SO
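Pulling this thread's pieces together, here is a minimal runnable sketch. The helper name parseDMY is my own, and it assumes the textbox always contains exactly dd/mm/yyyy:

```javascript
// Hypothetical helper: parse a dd/mm/yyyy string into a Date.
function parseDMY(value) {
  var parts = value.split("/");
  // The Date month argument is zero-indexed, hence the - 1.
  return new Date(Number(parts[2]), Number(parts[1]) - 1, Number(parts[0]));
}

var d = parseDMY("31/10/2012");
console.log(d.getDate(), d.getMonth() + 1, d.getFullYear()); // 31 10 2012
```

In a real page you would pass input.value (or $("textbox").val()) instead of the literal string, and you should validate the three parts before trusting them.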
If your textbox has proper string format for a date object you can use:
var aDate = new Date($("textbox").val());
However, if you don't write it in the text box exactly as you would in a string passed to the constructor, you'll get an Invalid Date for your variable.
FYI, I made a plugin that "extends" the Date object pretty nicely and has preformatted date/times that include things like a basic SQL datetime format.
Just go to this jsFiddle, Copy the code between Begin Plugin and End Plugin into a js file and link it in your header after your jQuery.
The use is as simple as above example:
var aDate = new DateTime($("textbox").val());
And to get a specific format from that you do:
var sqlDate = aDate.formats.compound.mySQL;
Receiving unexpected server calls
In adobe analytics I try to implement link tracking for all links can be found in a page using this:
$(document).on('click', 'a', function() {
s.tl(this, 'e', 'external', null, 'navigate');
return false;
});
Try to test it using a page like this
What happens when you call console.log($(this)); inside your click event? how does that differ when you call the same statement inside the s.tl() method? Can you post the code for s.tl()?
You might want some more meaningful names than s and tl as well...
The extra calls are likely coming from how you have Adobe Analytics configured. There are a handful of config variables that will cause extra requests depending on how you set them (on their own and/or in relation to each other).
Here is a listing of Adobe Analytics variables for reference. These are the ones for you to look at:
s.trackDownloadLinks - If this is enabled, any standard links with href value ending in value(s) specified in s.linkDownloadFileTypes will trigger a request on click. Generally, this is to enable automatic tracking for links that prompt a visitor to download something (e.g. a pdf file).
s.trackExternalLinks - If this is enabled, any standard links with href NOT matched in s.linkInternalFilters OR matched with s.linkExternalFilters will trigger a request on click. Generally, this is to enable automatic tracking for links you count as visitor navigating off your site(s).
s.linkInternalFilters - If you have either of the above enabled, clicking on links may trigger a request, depending on values here vs. what you enabled above vs. what you have in s.linkExternalFilters. Generally, this should include values that represent links you do NOT want to count as navigating off your site(s).
s.linkExternalFilters - If you have either of the above enabled, clicking on links may trigger a request, depending on values here vs. what you enabled above vs. what you have in s.linkInternalFilters. Generally, you should never set this. It's intended for edge-use-cases for people who know what they are doing and have a complex site eco-system and definitions of what counts as internal vs. external.
s.trackInlineStats - This is for clickmap/heatmap tracking. This may or may not trigger an extra request, depending on how a lot of different stars align.
In addition to these, you may already have some plugins or other custom code that triggers click tracking. For example, there are linkHandler, exitLinkTracker, and downloadLinkTracker plugins that you may have included in your code that may play a part in extra requests being triggered.
Finally, more recent versions of Adobe Analytics code may trigger multiple requests depending on how much data you are trying to send in the request (whereas older versions just truncated the request, which resulted in data loss).
In any case, the long story short here is if you are looking to roll your own custom link tracking, you should make sure the above variables/plugins are removed or otherwise disabled.
But on the note of rolling your own custom link tracking... I'm getting a sense of déjà vu here, like I already made a comment about this relatively recently in another post, over this exact same code. But generally speaking, this is not a good idea:
$(document).on('click', 'a', function() {
s.tl(this, 'e', 'external', null, 'navigate');
return false;
});
You are wholesale implementing exit link tracking on every single link of your page. And you are giving them all the same generic "external" label. And the native exit link reports are pretty limited and useless to begin with, so ideally you should also pop an eVar or something with the exit url or something.
But more importantly.. unless literally every single link on your pages are links that navigate your visitor off-site, this is not going to be useful to you in reports in general, and it's even going to ruin a lot of your reports.
I can't believe (or accept) that you really want to count every link on your pages as exit links..
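To illustrate the point about only counting genuine exit links, here is a small framework-free sketch (the filter values are made up). The predicate mirrors the substring matching that s.linkInternalFilters performs:

```javascript
// Treat a link as an exit link only if its URL matches none of the
// internal filters (substring matching, like s.linkInternalFilters).
function isExitLink(href, internalFilters) {
  return !internalFilters.some(function (filter) {
    return href.indexOf(filter) !== -1;
  });
}

console.log(isExitLink("https://othersite.org/page", ["mysite.com"])); // true
console.log(isExitLink("https://mysite.com/about", ["mysite.com"]));   // false
```

In the click handler you would then call s.tl() only when isExitLink(this.href, ...) is true, and set an eVar to this.href so the reports show where visitors actually went.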
I assume s.tl does an ajax call.
It should then forward the link to the href of the link - if the link is allowed to be followed immediately, the ajax call will be interrupted which seems to be what you see
You may want to change to
$(document).on('click', 'a', function(e) {
e.preventDefault();
s.tl(this, 'e', 'external', null, 'navigate');
});
I found this article when looking to see what s.tl is https://marketing.adobe.com/developer/forum/general-topic-forum/difference-between-s-t-and-s-tl-function
How can i know when recharts chart is ready (lines already appear and animation ended)?
I want to copy a chart to the clipboard. I'm doing it by converting the chart to a canvas using html2canvas npm package.
Is there any event I can listen to that will indicate that the lines in the chart are already drawn and i can safely copy to clipboard?
Right now if i'm not waiting long enough I get an empty chart.
You can make use of the onAnimationEnd prop of any chart component like <Line>, <Bar>, <Pie> or whatever you are using. To be clear: it is defined on the Line component itself, not on LineChart.
I think it is not documented, but works like a charm.
<LineChart width={730} height={250} data={data}
margin={{ top: 5, right: 30, left: 20, bottom: 5 }}>
<XAxis dataKey="name" />
<Line type="monotone" dataKey="pv" stroke="#8884d8" onAnimationEnd={this.copyToCanvas} />
</LineChart>
Matplotlib margins/padding when using limits
I'm trying to set xlimits and keep the margins.
In a simplified code, the dataset contains 50 values. When plotting the whole data set, it is fine. However, I only want to plot values 20-40. The plot starts and ends without having any margins.
How do I plot values 20-40 but keep the margins?
Online I found two ways to play with the margin/padding:
1) plt.tight_layout(pad=1.08, h_pad=None, w_pad=None, rect=None)
2) ax1.margins(0.05)
Both, however, do not seem to work when using xlimits.
import matplotlib.pyplot as plt
import numpy as np
x = np.linspace(1, 200, 50)
y = np.random.random(len(x))
fig_1 = plt.figure(figsize=(8, 4))
ax1 = plt.subplot(1,1,1)
ax1.plot(x, y)
ax1.set_xlim(x[19], x[40])
# ax1.plot(x[19:40], y[19:40])
# would create exactly the plot I want. But it is not the solution I am looking for.
# I cannot change/slice the data. I want to change the figure.
dx = 0.05*(x[40] - x[19]); ax1.set_xlim(x[19]-dx, x[40]+dx) ?
The plot would start earlier and end later, but extra data points would be included. I want margins (white space) with no data points shown before x[19] or after x[40].
The only way to have no datapoints shown would be to not plot them, i.e. ax1.plot(x[19:40], y[19:40]), which is the solution you already know about. You could then register a callback to limit changes and reset the data depending on the new limits after panning.
Got it, thanks for the reply! I thought there had to be a way to tell matplotlib to use xlim and just add margins without showing the data points.
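For future readers, a runnable sketch of the slicing approach the comments converge on: slice the data, then let ax.margins add the whitespace. The Agg backend is only there so the example runs headless:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend, just for this sketch
import matplotlib.pyplot as plt

x = np.linspace(1, 200, 50)
y = np.random.random(len(x))

fig, ax = plt.subplots(figsize=(8, 4))
ax.plot(x[19:41], y[19:41])  # slicing is end-exclusive, so 41 includes x[40]
ax.margins(x=0.05)           # 5% whitespace on each side, no extra points

lo, hi = ax.get_xlim()
print(lo < x[19] and hi > x[40])  # True: the margins extend past the data
```

The tradeoff noted in the comments still applies: since only the slice is plotted, panning beyond it shows nothing unless you re-set the data in a limit-change callback.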
C Programming Basics
Why does the code given below Prints b on the Screen?
#include<stdio.h>
main()
{
float a = 5.6;
if(a == 5.6)
{
printf("a");
}
else
{
printf("b");
}
}
You cannot compare floating-point numbers for equality this way.
It is not a good idea to compare floats for exact equality.
A decimal literal in C is by default a double, and float has lower precision than a double. Here a is a float, so comparing it with the same-valued double literal doesn't give what you expect. In fact, if you replace == with > in the program, b is still printed, because the double has higher precision.
Floating-point numbers can't be matched exactly (between any two numbers you choose there are infinitely many others). A machine can't represent them all, so it is forced to represent each value with the nearest member of the finite set of floating-point numbers it can store.
So in your case, the system is probably not storing 5.6 exactly, because that is a number your machine cannot represent. Instead it stores something that is merely very close to 5.6.
So when comparing floating-point numbers you should never check for exact equivalence. Instead you should use the C define FLT_EPSILON and check for
if (((a - 5.6) > -FLT_EPSILON) && ((a - 5.6) < FLT_EPSILON))
{
...
}
Here FLT_EPSILON is the difference between 1 and the smallest float value greater than 1 (the machine epsilon for float), not the smallest representable float.
So if the absolute difference between a and 5.6 is smaller than EPSILON, you can treat them as equal; the machine simply chose the nearest number it can represent instead of 5.6.
The same would be DBL_EPSILON for double type.
These types are defined in float.h.
I would use if ( fabs( a - 5.6 ) < E ) instead of the expanded version.
@concept3d Yeah but if you want high performance, I wouldn't. If performance isn't the critical point fabs() is the better to read way, that I agree with so far.
I don't agree with the performance point, fabs will actually be reduced to a single instruction on most hardware. And even if it didn't the performance will hardly be any different (both version have two branches).
@concept3d But the && already short-circuits if the first comparison is outside the EPSILON range. So at worst I save the time of a single function call, and even a function call takes time for stack pushes and pops.
But as I already said: of course fabs() is the prettier way.
When you want a float value don't forget to add f
#include<stdio.h>
main()
{
float a = 5.6f;
if(a == 5.6f)
{
printf("a");
}
else
{
printf("b");
}
}
prints a as expected.
The problem was that both 5.6 literals are doubles; a got converted to a float on assignment, while the if still compares it against a double value, so you get false.
Actually, adding f only inside the if would be enough, but better safe than sorry.
Though this works by eliminating the conversion between float and double, I don't think exact equality comparisons with non-integers are valid.
A floating-point literal in C is a double by default. If you want to use it as a float you need to add f at the end of the number. Try the code below; it prints a.
#include<stdio.h>
main()
{
float a = 5.6;
if(a == 5.6f)
{
printf("a");
}
else
{
printf("b");
}
}
Soft-deleted object from parent model not accessible in my child model
My user is allowed to delete some of the resources he has created himself, but when he destroys a resource there is a problem: I have a model called ResourceQuantity that depends on the Resource model, and I don't want to use dependent: :destroy, as it would impact the workgroups my user has already created (a workgroup is a model containing multiple resources through resource_quantities, see below).
What I am trying to do is allowing my user to softly delete its resources while keeping the resource in the database to keep all documents unchanged even if some resources have been destroyed.
I am currently using the paranoia gem and have tried to implement dependent: :nullify without much success. When using the paranoia gem I get a NoMethodError on nil, as it will only look for resources where deleted_at is null.
I am a bit lost and don't really know where to begin.
Here are my three models
class Resource < ApplicationRecord
acts_as_paranoid
has_many :workgroups, through: :resource_quantities
has_many :resource_quantities, inverse_of: :resource, dependent: :nullify
accepts_nested_attributes_for :resource_quantities, allow_destroy: true
end
class ResourceQuantity < ApplicationRecord
belongs_to :workgroup, optional: true, inverse_of: :resource_quantities
belongs_to :resource, optional: true, inverse_of: :resource_quantities
accepts_nested_attributes_for :resource, :allow_destroy => true
validates :workgroup, uniqueness: { scope: :resource }
end
class Workgroup < ApplicationRecord
acts_as_paranoid
has_many :resource_quantities, inverse_of: :workgroup, dependent: :destroy
has_many :resources, through: :resource_quantities, dependent: :nullify
accepts_nested_attributes_for :resource_quantities, allow_destroy: true
belongs_to :contractor
belongs_to :workgroup_library, optional: true
validates :name, presence: true
end
Is it possible to do something like this where the resource would be softly deleted?
def total_cost_price
total_cost_price = 0
self.resource_quantities.each do |f|
total_cost_price += f.resource.purchase_price
end
total_cost_price.round(2)
end
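This method is exactly where the NoMethodError bites: once a resource is soft-deleted, f.resource resolves to nil under paranoia's default scope. A plain-Ruby sketch of the nil-guard, with Structs standing in for the ActiveRecord models:

```ruby
# Plain-Ruby stand-ins for the models, just to show the guard.
Resource = Struct.new(:purchase_price)
Quantity = Struct.new(:resource)

quantities = [
  Quantity.new(Resource.new(10.5)),
  Quantity.new(nil) # resource soft-deleted and hidden by the default scope
]

# Skip quantities whose resource is not visible instead of crashing.
total = quantities.sum { |q| q.resource ? q.resource.purchase_price : 0 }
puts total.round(2) # 10.5
```

In the real model the cleaner fix (shown further down in this thread) is to make the association see soft-deleted rows via Resource.unscoped, so the price still counts toward the total.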
I am not the best in Ruby, so if you have any advice please feel free to share. Thank you in advance!
You can use the with_deleted scope to see all the associations however if you nullified the relationship then it won't matter because the relationship is now completely disconnected.
Yes, indeed you are right :p I am investigating this, which goes in the same direction: https://github.com/rubysherpas/paranoia/issues/176
OK, this issue has been resolved by defining resource as follows in the models that belong to the Resource model.
def resource
Resource.unscoped {super}
end
Thanks to paranoia I can access my resources from my other models even if they have been deleted from the Resource model. (No need to nullify the relationship.)
Resource.unscoped {super}
It actually removes the WHERE "resources"."deleted_at" IS NULL clause you can observe in your console, which is pretty handy.
How to fix image quality on PHP (GD)?
I have been trying to solve the border-image problem for many days, but I still can't come to a decision.. I really need your help!
I have code:
$image = imagecreatefromjpeg('https://vignette4.wikia.nocookie.net/matrix/images/1/1f/Monica_Bellucci_Dolce_and_Gabbana.jpg/revision/latest?cb=20130227074822');
$w = imagesx($image);
$h = imagesy($image);
$border = imagecreatefrompng('http://meson.ad-l.ink/8mRbHMnS9/thumb.png');
$borderW = imagesx($border);
$borderH = imagesy($border);
$borderSize = 70;
// New image width and height
$isHorizontalAlign = true;
$coProp = $w / $h;
if ($coProp > 1)
$isHorizontalAlign = false;
if ($isHorizontalAlign) {
$newWidth = $w - $borderSize * 2;
$newHeight = $newWidth * ($h / $w);
} else {
$newHeight = $h - $borderSize * 2;
$newWidth = $newHeight * ($w / $h);
}
// Transparent border
$indent = imagecreatetruecolor($w, $h);
//imagesavealpha($indent, true);
$color = imagecolorallocatealpha($indent, 0, 0, 0, 127);
imagefill($indent, 0, 0, $color);
$paddingLeft = ($w - $newWidth) / 2;
$paddingTop = ($h - $newHeight) / 2;
imagecopyresampled($indent, $image, $paddingLeft, $paddingTop, 0, 0, $newWidth, $newHeight, $w, $h);
// New border width
$x1 = $newWidth;
$x2 = $borderSize;
$x3 = (int)($x1 / $x2);
if ($x3 == $x1 / $x2)
$bw = $borderSize;
else {
$x4 = $x1 - $x3 * $x2;
$x5 = $x4 / $x3;
$x2 = $x2 + $x5;
$bw = $x2;
}
// New border height
$y1 = $newHeight;
$y2 = $borderSize;
$y3 = (int)($y1 / $y2);
if ($y3 == $y1 / $y2)
$bh = $borderSize;
else {
$y4 = $y1 - $y3 * $y2;
$y5 = $y4 / $y3;
$y2 = $y2 + $y5;
$bh = $y2;
}
// Horizontal
$percent1 = $bw / $borderSize;
$borderNewWidth1 = (int)$borderW * $percent1;
$borderNewHeight1 = (int)$borderH * $percent1;
$thumb1 = imagecreatetruecolor($borderNewWidth1, $borderNewHeight1);
imagesavealpha($thumb1, true);
$color = imagecolorallocatealpha($thumb1, 0, 0, 0, 127);
imagefill($thumb1, 0, 0, $color);
imagecopyresized($thumb1, $border, 0, 0, 0, 0, $borderNewWidth1, $borderNewHeight1, $borderW, $borderH);
// Vertical
$percent2 = $bh / $borderSize;
$borderNewWidth2 = (int)$borderW * $percent2;
$borderNewHeight2 = (int)$borderH * $percent2;
$thumb2 = imagecreatetruecolor($borderNewWidth2, $borderNewHeight2);
imagesavealpha($thumb2, true);
$color = imagecolorallocatealpha($thumb2, 0, 0, 0, 127);
imagefill($thumb2, 0, 0, $color);
imagecopyresized($thumb2, $border, 0, 0, 0, 0, $borderNewWidth2, $borderNewHeight2, $borderW, $borderH);
// Angles
$borderNewWidth3 = (int)$borderW * $percent1;
$borderNewHeight3 = (int)$borderH * $percent2;
$thumb3 = imagecreatetruecolor($borderNewWidth3, $borderNewHeight3);
imagesavealpha($thumb3, true);
$color = imagecolorallocatealpha($thumb3, 0, 0, 0, 127);
imagefill($thumb3, 0, 0, $color);
imagecopyresized($thumb3, $border, 0, 0, 0, 0, $borderNewWidth3, $borderNewHeight3, $borderW, $borderH);
// Horizontal border
$horizontalX = ($w - $newWidth) / 2;
$horizontalY = (($h - $newHeight) / 2 - $bw) + 1;
$horizontalY2 = $h - ($h - $newHeight) / 2;
for ($i = 0; $i < round($newWidth / $bw); $i++) {
// Top
imagecopy($indent, $thumb1, $horizontalX + ($i * $bw), $horizontalY, $borderSize * $percent1, 0, $bw + 1, $bw);
// Bottom
imagecopy($indent, $thumb1, $horizontalX + ($i * $bw), $horizontalY2, $borderSize * $percent1, $borderSize * 2 * $percent1, $bw + 1, $bw - 1);
}
// Vertical border
$verticalY = ($h - $newHeight) / 2;
$verticalX = (($w - $newWidth) / 2 - $bh) + 1;
$verticalX2 = $w - ($w - $newWidth) / 2;
for ($i = 0; $i < round($newHeight / $bh); $i++) {
// Left
imagecopy($indent, $thumb2, $verticalX, $verticalY + ($i * $bh), 0, $borderSize * $percent2, $bh, $bh + 1);
// Right
imagecopy($indent, $thumb2, $verticalX2, $verticalY + ($i * $bh), ($borderSize * $percent2) * 2, $borderSize * $percent2, $bh, $bh + 1);
}
// Left top border
imagecopy($indent, $thumb3, (($w - $newWidth) / 2 - $bw) + 1, (($h - $newHeight) / 2 - $bh) + 1, 0, 0, $bw, $bh);
// Left bottom border
imagecopy($indent, $thumb3, (($w - $newWidth) / 2 - $bw) + 1, $h - ($h - $newHeight) / 2, 0, ($borderSize * 2) * $percent2, $bw, $bh);
// Right top border
imagecopy($indent, $thumb3, $w - ($w - $newWidth) / 2, (($h - $newHeight) / 2 - $bh) + 1, ($borderSize * 2) * $percent1, 0, $bw, $bh);
// Right bottom border
imagecopy($indent, $thumb3, $w - ($w - $newWidth) / 2, $h - ($h - $newHeight) / 2, ($borderSize * 2) * $percent1, ($borderSize * 2) * $percent2, $bw, $bh);
// Save result to base64
header('Content-Type: image/png');
imagepng($indent);
(by default I have a transparent background, but for an example I removed it)
When image sizes are large - everything works perfectly:
https://i.gyazo.com/028f5834efe968a7295194600fac684b.png
But when I add a small image, its quality is significantly reduced. Look at this:
I do not know how to fix this..
P.S.: I'm trying to do frame for my photo using similar border-image: http://polariton.ad-l.ink/8mRbHMnS9/thumb.png
The calculations that I did cost a lot of effort, but the result I could not achieve..
Thank you for attention!
I hope to solve the problem.
You may have a better time splitting this into two questions: one regarding the quality, which is a programming issue; the other, regarding the repeating at the bottom, may be better suited to the math forum.
Small errors fixed. I just changed (int)($newWidth / $bw) to round() here and in the second for loop.
The quality issue with your border is because the angle falls between pixels. As you don't seem to have any anti-aliasing going on, it seems to be picking the nearest whole pixel.
Ints being whole numbers makes sense for the quality issue.
On the math side of things, I think you need to work out how many zig-zags would fit within the height; as the image size is variable, work out how many whole zig-zags will fit and then increase the canvas size to accommodate the ceil() of that number.
Don't ceil; floor and round instead, and let GD handle it. But for quality you should use ImageMagick: it is fast and lets you choose filters and compositing quality (smoothing algorithms, zoom, compression, etc.).
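On the quality point specifically: the code above scales the border with imagecopyresized(), which uses nearest-neighbour sampling and produces the jagged edges visible on small images, while imagecopyresampled() interpolates. A tiny sketch of the difference (requires the GD extension; the 4x4 to 8x8 sizes are arbitrary):

```php
<?php
// Nearest-neighbour vs. interpolated scaling in GD.
$src = imagecreatetruecolor(4, 4);
imagefill($src, 0, 0, imagecolorallocate($src, 255, 0, 0));

$dst = imagecreatetruecolor(8, 8);
// imagecopyresized($dst, $src, ...) would pick nearest whole pixels;
// imagecopyresampled() interpolates and smooths the result.
imagecopyresampled($dst, $src, 0, 0, 0, 0, 8, 8, 4, 4);

echo imagesx($dst), "x", imagesy($dst), "\n"; // 8x8
```

In the question's code, swapping the three imagecopyresized() calls on the border thumbnails for imagecopyresampled() is the first thing I would try.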
Axis in a multidimensional NumPy array
I have not understood the difference between the axes of a multidimensional array in NumPy. Can you explain it to me?
In particular, I would like to know where axis 0, axis 1 and axis 2 are in a NumPy three-dimensional array.
And why?
I'm just posting this here to link these two questions. They're different, but about the same issue, and reading both would help future readers.
The easiest way is with an example:
In [8]: x = np.array([[1, 2, 3], [4,5,6],[7,8,9]], np.int32)
In [9]: x
Out[9]:
array([[1, 2, 3],
[4, 5, 6],
[7, 8, 9]], dtype=int32)
In [10]: x.sum(axis=0) # sum the columns [1,4,7] = 12, [2,5,8] = 15 [3,6,9] = 18
Out[10]: array([12, 15, 18])
In [11]: x.sum(axis=1) # sum the rows [1,2,3] = 6, [4,5,6] = 15 [7,8,9] = 24
Out[11]: array([ 6, 15, 24])
axis 0 are the columns and axis 1 are the rows.
In a three dimensional array:
In [26]: x = np.array((((1,2), (3,4) ), ((5,6),(7,8))))
In [27]: x
Out[27]:
array([[[1, 2],
[3, 4]],
[[5, 6],
[7, 8]]])
In [28]: x.shape # dimensions of the array
Out[28]: (2, 2, 2)
In [29]: x.sum(axis=0)
Out[29]:
array([[ 6, 8], # [1,5] = 6 [2,6] = 8 [3,7] = 10 [4, 8] = 12
[10, 12]])
In [31]: x.sum(axis=1)
Out[31]:
array([[ 4, 6], # [1,3] = 4 [2,4] = 6 [5, 7] = 12 [6, 8] = 14
[12, 14]])
In [33]: x.sum(axis=2) # [1, 2] = 3 [3, 4] = 7 [5, 6] = 11 [7, 8] = 15
Out[33]:
array([[ 3, 7],
[11, 15]])
In [77]: x.ndim # number of dimensions of the array
Out[77]: 3
Link for a good tutorial on using multidimensional data arrays
So I can say that axis 0 is the first element of the tuple returned by shape, axis 1 the second element and axis 2 the third element. Is that right?
Yes, exactly. It is a 2x2x2 array. You can access particular elements using x[0][0][1] etc.
"axis 0 are the columns and axis 1 are the rows." Or put it another way, from which I managed to understand how NumPy axes work: "axis 0 is the outermost axis indexing the largest subarrays while axis n-1 is the innermost axis indexing individual elements."
Or put it yet another way: x.sum(axis=0) is x[sum][*][*], x.sum(axis=1) is x[*][sum][*], x.sum(axis=2) is x[*][*][sum].
The axes can be named by traversing through the n-dimensional array, right from the outside of the array to the inside till we reach the actual scalar elements.
The outermost dimension will always be axis 0 and the the innermost dimension(scalar elements) will be axis n-1.
The below link will be more useful in imagining and realising NumPy axes -
How does NumPy's transpose() method permute the axes of an array?
Cheat Code #1: When you use the NumPy sum function with the axis parameter, the axis that you specify is the axis that gets collapsed.
Cheat Code #2: When we use the axis parameter with the np.concatenate() function, the axis parameter defines the axis along which we stack the arrays.
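Both cheat codes in one runnable sketch:

```python
import numpy as np

x = np.arange(24).reshape(2, 3, 4)

# Cheat code #1: the axis you sum over is the axis that collapses.
print(x.sum(axis=0).shape)  # (3, 4)
print(x.sum(axis=1).shape)  # (2, 4)
print(x.sum(axis=2).shape)  # (2, 3)

# Cheat code #2: concatenate stacks along the axis you name.
a = np.ones((2, 3))
print(np.concatenate([a, a], axis=0).shape)  # (4, 3)
print(np.concatenate([a, a], axis=1).shape)  # (2, 6)
```

Reading the shapes side by side makes the rule concrete: summing removes the named axis from the shape tuple, concatenating grows it.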
$B \in \mathcal{B}(M) \Leftrightarrow (rB+x) \in \mathcal{B}(N)$
Let $M$ be a $d$-dimensional manifold in $\mathbb{R}^p$ and let $r>0, x \in \mathbb{R}^p$. Then $N:=rM+x$ is also a $d$-dimensional manifold in $\mathbb{R^p}$.
I want to show that for $B \subseteq M$, $B \in \mathcal{B}(M) \Leftrightarrow (rB+x) \in \mathcal{B}(N)$,
where $\mathcal{B}$ is the Borel $\sigma$ algebra.
How can I show this? I thought about using the fact that $T(B)=rB+x$ is a homeomorphism somehow, but I don't know if that works.
Any homeomorphism $T$ between topological spaces $X$ and $Y$ preserves Borel sets, in the sense that $B$ is Borel in $X$ iff $T(B)$ is Borel in $Y$. This is easy to see from the definition of the Borel sigma algebra as the one generated by open sets. In this case $T(y)=ry+x$ defines a homeomorphism.
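Spelling the comment out, here is a proof sketch in the notation of the question (the standard good-sets argument; nothing beyond the definitions is used):

```latex
\begin{proof}[Sketch]
Define $T:\mathbb{R}^p \to \mathbb{R}^p$, $T(y) = ry + x$; its restriction
$T|_M : M \to N$ is a homeomorphism with inverse $z \mapsto (z - x)/r$.
Consider
\[
  \mathcal{A} \;=\; \{\, B \subseteq M : T(B) \in \mathcal{B}(N) \,\}.
\]
Since $T$ is a bijection, $T(M \setminus B) = N \setminus T(B)$ and
$T\bigl(\bigcup_n B_n\bigr) = \bigcup_n T(B_n)$, so $\mathcal{A}$ is a
$\sigma$-algebra on $M$. Moreover $T$ is an open map, so $\mathcal{A}$
contains every open subset of $M$, and hence
$\mathcal{B}(M) \subseteq \mathcal{A}$. This gives
$B \in \mathcal{B}(M) \Rightarrow rB + x \in \mathcal{B}(N)$;
applying the same argument to $T^{-1}$ yields the converse.
\end{proof}
```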
How to extract eBooks from google play
I've uploaded a lot of ePub files to google play books, assuming I would be able to download them again at any time. I now have a new computer and would like to access them, but Google doesn't seem to allow downloads of eBooks previously uploaded, which seems odd to me.
Since there is no encryption or DRM or anything on them, I figured there must be a way to get them back. Here's what I tried so far:
On my linux computer, I installed virtualbox, and installed an Android system there following this guide.
I logged into my Google Account on my virtual Android device and opened one of the eBooks I would like to get back.
I used the vdfuse utility to mount the .vdi image and navigated to the location where the eBooks are stored, which is /data/data/com.google.android.apps.books/files/accounts/{your google account}/volumes according to this thread.
Now, however, I'm a bit at a loss. If I look at one of the eBooks, they look like this:
./cover.png
./cover_thumbnail.png
./res2
./res2/{some-obscure-id}=
./segments
./segments/html{some-index}
Naturally, I assumed that the segments/html* files would be, well, html files. However, that is not true - they seem to be binary files and just list as data when queried with the linux file utility.
What do I do with these files to get back an ePub? Or should I have taken a different approach to this altogether?
You can make use of Google Takeout. Just unselect everything except Play Books and download your data. You can then view your books. If the books you uploaded were in ePub format, you may have to rename the extension of the downloaded books from pdf to epub.
Google Takeout is somewhat unreliable here, since some of your uploaded ebooks end up with their epub files missing: you can view them but sometimes can't download them through Takeout. It depends on the file names, I guess.
Creating a custom function to create individuals using DEAP
I am using DEAP to resolve a problem using a genetic algorithm. I need to create a custom function to generate the individuals. This function checks the values inside in order to append 0 or 1.
Now I'm trying to register this function on the toolbox.
The function I have written is called individual_creator(n), with n = IND_SIZE, and it returns a list.
After creating individuals as a list:
creator.create("Individual", list, fitness=creator.FitnessMin)
I registered individual and population on my toolbox like the following:
toolbox.register("individual", individual_creator, creator.Individual, n=IND_SIZE)
toolbox.register("population", tools.initRepeat, list, toolbox.individual)
Using this code this error appeared: TypeError: individual_creator() got multiple values for argument 'n'
Registering individual without using n parameter doesn't work either:
toolbox.register("individual", individual_creator, creator.Individual)
TypeError: 'type' object cannot be interpreted as an integer
Could someone please help me? I need to create custom individuals using my function and then create the population.
Thank you in advance!
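For future readers, here is a DEAP-free reproduction of the first error and of the usual fix. As I understand DEAP's API, toolbox.register is essentially functools.partial, and tools.initIterate(container, generator) just wraps the generator's return value in the container; the mimic below rests on that assumption:

```python
from functools import partial

def individual_creator(n):
    # Stand-in for the custom generator: returns a plain list of genes.
    return [0] * n

# Mimics toolbox.register("individual", individual_creator, creator.Individual, n=IND_SIZE):
# the extra positional argument also lands in n, hence the TypeError.
bad = partial(individual_creator, list, n=5)
try:
    bad()
except TypeError as e:
    print(e)  # ... got multiple values for argument 'n'

# Mimics tools.initIterate(creator.Individual, generator): wrap the
# generator's output in the container type.
def init_iterate(container, generator):
    return container(generator())

ind = init_iterate(list, partial(individual_creator, n=5))
print(ind)  # [0, 0, 0, 0, 0]
```

So in DEAP the registration would look roughly like toolbox.register("individual", tools.initIterate, creator.Individual, partial(individual_creator, n=IND_SIZE)) — treat that exact line as a sketch to verify against the DEAP docs, not a guaranteed recipe.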
Installing Shairport on Raspberry Pi running Arch Linux ARM
I am having trouble building Shairport on a Raspberry Pi (ARM architecture). I run makepkg -s and get the following result
Makefile:2: config.mk does not exist, configuring.
sh ./configure
Configuring Shairport
OpenSSL or its dev package not found
Required package not found, cannot continue
make: *** No rule to make target 'config.mk'. Stop.
==> ERROR: A failure occurred in build().
Aborting...
It says OpenSSL is not installed, though I can confirm it is by running pacman -Q and finding openssl 1.0.1.f-1 in the list.
I am thinking the version of OpenSSL installed is not ARM architecture?
Is this an autotools project? If so, you usually configure with --build and --host. For example, ./configure --build=$(./config.guess) --host=arm-linux-androideabi (for Android). If not, please provide more details (like the output of a log file).
The project is in the AUR (https://aur.archlinux.org/packages/shairport-git-sysdcompat/). To my knowledge, this is the only log output there is.
Parsing Multiple XML Docs from a Continuous Stream with NSXMLParser
My iPhone App is socket based and receives a continuous, non-delimited stream of XML documents one after the other in which I intend to parse with the event-based NSXMLParser.
Example: 2 documents one after the other
<?xml version='1.0' encoding='UTF-8' ?><document name="something"><foo>bar</foo></document><?xml version='1.0' encoding='UTF-8' ?><document name="somethingelse"><bar>foo</bar></document>
In the Java-based implementation of this, the XML parser simply parses a stream until it reaches the end of the document, at which point it does its thing and then starts parsing the next document from that point in the stream.
The problem is NSXMLParser does not accept a stream and does not tell me at what point in the NSData it finished parsing (except for a useless line and column number).
I have seen some solutions like the AQ StreamingXMLParser but again, when this gets to the end of the document it just stops and wont attempt to parse another document, or tell me exactly where in the stream it finished so that I can start a new parse.
You will likely have to drop down to libxml using the push parser context. Take a look at the sample project XMLPerformance which implements this in LibXMLParser. http://developer.apple.com/iphone/library/iPad/index.html#samplecode/XMLPerformance/Listings/Classes_LibXMLParser_m.html%23//apple_ref/doc/uid/DTS40008094-Classes_LibXMLParser_m-DontLinkElementID_10
How to get a node from an XML document using XSLT
I am not familiar with XML. I have an XML document structured like below:
<?xml version="1.0" encoding="UTF-8"?>
<a:b xmlns="something">
<a:c>
<d>
<e>
<item>item1</item>
<item>item1</item>
<item>item1</item>
</e>
</d>
</a:c>
</a:b>
I want to get the node "e" to retrieve its child items in my xslt as below:
<xsl:variable name="Product" select="document('itemList.xml')/node()[1]/node()[0]/node()[0]/node()[0]"/>
But it is not working. Kindly suggest the right way to do it. Also, is the first node referred to by node()[0] or node()[1]? Links to articles for a good understanding of this node concept of XML are welcome.
Is the /node()[1]/node()[0]/node()[0]/node()[0] XPATH? I think maybe you can replace it with //item or else /a:b/a:c/d/e/item
Also you asked for links: http://class2go.stanford.edu/db/Winter2013 check out the videos of the querying XML section. Watch at least the XPATH and the XSLT video.
Your XML doesn't have the prefix a bound to a URI. Assuming that is fixed,
a:b/a:c/x:d/x:e
will get you the node when x is bound to something
but to get all the children shouldn't you add a /item at the end there (or a /x:item)?
@Dan x:e provides the entire sub-tree. You'll have the item in it. When you have /item appended at last of your XPath, you'll be getting a list of item nodes, not e.
Yeah, that's my interpretation of the OP's question, though. Even though he says he wants node e, I think he really wants a list of the item elements. He says he wants to retrieve the child elements of node e. Either way I'm sure this will get him to the right place. +1
The XML you provided is not currently valid. It has declared a default namespace, but has not declared the a: namespace. It would need to start with something like this:
<a:b xmlns="something" xmlns:a="somethingElse">
If, in your XSLT, you declared the a namespace and associated the something namespace with the prefix s, you could access the e node with:
/a:b/a:c/s:d/s:e
If you want to access the nodes simply based on their position, you could do this, though this is not usually a very good practice:
/*[1]/*[1]/*[1]/*[1]
To answer your question, XPath is 1-indexed, so the first item in any selection is accessed with [1].
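To make the namespace handling concrete, here is a small sketch using Python's standard-library ElementTree. The URI bound to the a: prefix is made up (the original document never declares one), the closing tags are corrected, and the item values are made distinct for illustration:

```python
import xml.etree.ElementTree as ET

# Corrected version of the OP's document: the a: prefix is declared
# (bound to a made-up URI) and the closing tags are fixed.
xml = """<a:b xmlns="something" xmlns:a="somethingElse">
  <a:c>
    <d>
      <e>
        <item>item1</item>
        <item>item2</item>
        <item>item3</item>
      </e>
    </d>
  </a:c>
</a:b>"""

root = ET.fromstring(xml)  # root is the a:b element itself

# Both namespaces must be bound to prefixes for the lookup;
# "s" stands in for the default "something" namespace.
ns = {"a": "somethingElse", "s": "something"}

e_node = root.find("a:c/s:d/s:e", ns)
items = [item.text for item in e_node.findall("s:item", ns)]

# XPath positions are 1-based: the first <item> is s:item[1], not [0].
first = e_node.find("s:item[1]", ns)

print(items)       # ['item1', 'item2', 'item3']
print(first.text)  # item1
```

The path `a:c/s:d/s:e` mirrors the `/a:b/a:c/s:d/s:e` expression from the answer, just expressed relative to the root element as ElementTree requires.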
Removing a user account in Windows 8 Pro
I just installed Windows 8 Pro on my PC, and I accidentally made two user accounts, one local and one connected online. I want to remove the local one, but I don't know how to do it. I searched the "change PC settings" panel, but I found nothing there. Where I can do this?
Command Prompt > lusrmgr.msc opens the Local Users and Groups management MMC. Right-click to delete the account you want.
Control Panel > More Settings > Add or remove user accounts > Select the local user > Click "Delete the account"
You want to delete the profile:
Right-click on My PC -> Properties
Advanced system settings
User Profiles -> Settings
Select the profile -> Delete -> OK -> OK -> OK -> OK
I also had problems with removed user's directory still existing under C:\users.
I tried to take ownership - worked for all files except a couple.
Used a utility called LockHunter to see "What's locking this file?" and well, it was the service WMPNetworkSvc. (I had logged on one time on the removed user account.)
Closed down the service and... the directory disappeared!
Maybe this would have been resolved anyway after a restart.
You should check if the account is really gone by verifying that the directory with the account name has been removed from C:\Users. I deleted two local accounts and still cannot get rid of the directories. Even as administrator you are denied access, even with UAC off.
This isn't really an answer. All you need to do is take ownership of the directory.
go to other section in same page in angularjs ui.router environment
First, I want to be able to jump to sections on the same page. Traditionally I can do this with:
<a href="#about">about</a>
<a name="about"></a>
This approach does not work because my page implements ui-router: when I define href="#about", it is detected as the otherwise URL, so the app redirects to the wrong state.
From another source I got the idea of using multiple views in ui.router, so I designed my config like this:
.state('landing', {
url: '/',
views: {
"" : { templateUrl: "tpl/landing/index.html"},
"header@landing" : { templateUrl: "tpl/landing/header.html"},
"about@landing" : { templateUrl: "tpl/landing/about.html"},
"contact@landing" : { templateUrl: "tpl/landing/contact.html"},
"section-a@landing" : { templateUrl: "tpl/landing/section-a.html"},
"section-b@landing" : { templateUrl: "tpl/landing/section-b.html"},
"services@landing" : { templateUrl: "tpl/landing/services.html"},
"footer@landing" : { templateUrl: "tpl/landing/footer.html"}
}
})
Content of tpl/landing/index.html is:
<div ui-view="header" autoscroll="true"></div>
<div ui-view="about" autoscroll="false"></div>
<div ui-view="contact" autoscroll="false"></div>
<div ui-view="section-a" autoscroll="false"></div>
<div ui-view="section-b" autoscroll="false"></div>
<div ui-view="services" autoscroll="false"></div>
<div ui-view="footer" autoscroll="false"></div>
In multi-state ui.router we can go to another state with ui-sref="astate". What about multiple views? How do I go to another view in the same state?
what does go to other view even mean?
Right now the page is focused on the header; when I click the About menu item, I want it to scroll down to the about section.
What you are trying to do can be solved using $anchorScroll. See here for more information: https://docs.angularjs.org/api/ng/service/$anchorScroll
In their example, you can see:
<div id="scrollArea" ng-controller="ScrollController">
<a ng-click="gotoBottom()">Go to bottom</a>
<a id="bottom"></a> You're at the bottom!
</div>
Then inside the gotoBottom() function they use:
$location.hash('bottom');
// call $anchorScroll()
$anchorScroll();
This will scroll to the div whose id is bottom. Hopefully that helps you.
Stream Spliterator error after concat Java
When attempting to create an iterator from a Stream that was concatenated from two previous streams, I get a NoSuchElementException, as the iterator doesn't recognise that the Stream has elements due to some problems with the Spliterator. The concatenated Stream seems to have two spliterators, despite the previous streams being of the same type. I also get this error when I tried to convert the concatenated stream into an array to try get round the problem.
Stream<Node> nodeStream = Stream.of(firstNode);
while (!goal) {
Iterator<Node> iterator = nodeStream.iterator();
Node head = iterator.next();
Stream<Node> tail = Stream.generate(iterator::next).filter(n -> n != head);
Node[] newNodes = head.expand(end);
if (newNodes.length == 1) {
goal = true;
endNode = newNodes[0];
}
nodeStream = Stream.concat(Arrays.stream(newNodes), tail);
nodeStream = nodeStream.sorted(Comparator.comparing(n -> n.routeCost(end)));
}
The error is as follows:
Exception in thread "main" java.util.NoSuchElementException
at java.base/java.util.Spliterators$1Adapter.next(Spliterators.java:688)
at java.base/java.util.stream.StreamSpliterators$InfiniteSupplyingSpliterator$OfRef.tryAdvance(StreamSpliterators.java:1358)
at java.base/java.util.stream.StreamSpliterators$WrappingSpliterator.lambda$initPartialTraversalState$0(StreamSpliterators.java:292)
at java.base/java.util.stream.StreamSpliterators$AbstractWrappingSpliterator.fillBuffer(StreamSpliterators.java:206)
at java.base/java.util.stream.StreamSpliterators$AbstractWrappingSpliterator.doAdvance(StreamSpliterators.java:161)
at java.base/java.util.stream.StreamSpliterators$WrappingSpliterator.tryAdvance(StreamSpliterators.java:298)
at java.base/java.util.stream.Streams$ConcatSpliterator.tryAdvance(Streams.java:723)
at java.base/java.util.stream.StreamSpliterators$WrappingSpliterator.lambda$initPartialTraversalState$0(StreamSpliterators.java:292)
at java.base/java.util.stream.StreamSpliterators$AbstractWrappingSpliterator.fillBuffer(StreamSpliterators.java:206)
at java.base/java.util.stream.StreamSpliterators$AbstractWrappingSpliterator.doAdvance(StreamSpliterators.java:161)
at java.base/java.util.stream.StreamSpliterators$WrappingSpliterator.tryAdvance(StreamSpliterators.java:298)
at java.base/java.util.Spliterators$1Adapter.hasNext(Spliterators.java:681)
at java.base/java.util.Spliterators$1Adapter.next(Spliterators.java:687)
at inf.ed.ac.uk.Route.generateRoute(Route.java:35)
I am trying to expand the first node (which returns 16 new nodes), add them to the stream, sort it, and repeat. This is part of an implementation of the A* algorithm on GPS coordinates.
Code for Node is
public class Node {
boolean goal = false;
Node parent;
final LngLat coords;
LngLat.Compass direction;
double cost;
private Route route;
public Node[] expand(LngLat end) {
ArrayList<Node> nodes = new ArrayList<>();
for (LngLat.Compass direction: LngLat.Compass.values()) {
Node node = new Node(coords.nextPosition(direction), this, direction);
if (noFlyClear(node)) {
if (!route.contains(node)) {
if (node.coords.closeTo(end)) {
node.goal = true;
return new Node[]{node};
}
nodes.add(node);
route.visited.add(node);
}
}
}
return nodes.toArray(Node[]::new);
}
But that's since I changed to ArrayLists
I'm still unsure what the issue was originally
ALWAYS include the complete stack trace (format as code).
And please explain what you're trying to do. Why would you try to call next() twice on a singleton stream?
It has only been called once no?
@AlistairTait I'm guessing but I think Stream.generate(iterator::next) is going to call it again.
@AlexanderIvanchenko Node is an object, I am working with arrays of Nodes, I am trying to expand the node, and add the new nodes to a stream
You can't add items to a stream.
@AlexanderIvanchenko Node has a geographic coordinate, and it generates new nodes by taking a step in 16 cardinal directions, I don't think that's relevant to what I am having a problem with
@shmosel it should be a new stream, when I debug the code, it adds the streams together, it just gives it two spliterators
I don't know what that means, but I doubt streams is the right tool here.
After executing iterator.next() for a stream with one element, if iterator::next in Stream.generate(iterator::next) is evaluated, NoSuchElementException will be thrown.
@モキャデ Would it not create a new iterator each time the loop is run?
@モキャデ It fails on the second pass through the loop
Is that true? I think it's happening in the first loop. Because .sorted() tries to pick out all the elements.
Can add the code of the Node class (only fields and expand()) method?
Also, what's the line 35 in your code? I bet Node head = iterator.next();
Basically, you're trying to use a Stream as a replacement of a sorted Collection.
Streams are definitely the wrong tool for this task, but most obviously, Stream.generate expects its supplier to work forever.
You have to use the right way to create a Stream from an iterator, but using an ArrayDeque instead will be straightforward and more efficient.
(or use an ArrayList and always append to the end, since you’re sorting the result anyway)
@Holger I used an ArrayList to sort it yeah
@AlexanderIvanchenko line 35 is nodeStream = nodeStream.sorted(Comparator.comparing(n -> n.routeCost(end)));
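The advice in this thread, keep the A* open list in a mutable prioritized collection rather than rebuilding a lazy stream each iteration, is language-independent. A minimal sketch of the pattern (in Python rather than Java, with hypothetical expand/cost/is_goal callbacks standing in for the Node methods):

```python
import heapq

def a_star_frontier(start, expand, cost, is_goal):
    """Pop the cheapest node from a heap-backed open list, push successors.

    expand(node) -> iterable of successor nodes; cost(node) -> comparable
    priority; is_goal(node) -> bool.  A tie-breaking counter keeps heap
    entries comparable even when two nodes share the same cost.
    """
    counter = 0
    frontier = [(cost(start), counter, start)]
    while frontier:
        _, _, node = heapq.heappop(frontier)   # cheapest node, O(log n)
        if is_goal(node):
            return node
        for succ in expand(node):              # push successors, O(log n) each
            counter += 1
            heapq.heappush(frontier, (cost(succ), counter, succ))
    return None

# Toy usage: walk the integer line from 0 toward 7, guided by distance to 7.
goal = a_star_frontier(
    0,
    expand=lambda n: [n - 1, n + 1],
    cost=lambda n: abs(7 - n),
    is_goal=lambda n: n == 7,
)
print(goal)  # 7
```

Unlike the stream-based loop in the question, nothing here is lazy or single-use: the frontier survives across iterations and popping/pushing replaces the repeated concat-and-sort.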
Android studio, Bluestacks Installation failed due to: 'closed'
I am trying to build an android app using bluestacks (my phone died)
Android Studio can see the emulator in the devices list, but when I try to run my app, Android Studio throws this error:
Installation did not succeed.
The application could not be installed.
Installation failed due to: 'closed'
Retry
I have tried launching both android studio and bluestacks as administrator.
I have also tried opening the standalone device monitor in the SDK tools folder
This throws another error :
could not open Selected VM debug port(8700)
The error log of the monitor contains lots of errors of missing directories.
error log: https://pastebin.com/mmA83ch7
thanks
Go to Settings -> Preferences and Enable Android Debug Bridge, Enable android input debugging
So someone on the BlueStacks team decided that putting the ADB toggle in "Preferences" and not in "Advanced" was a good idea,
and literally no other post has mentioned making sure ADB is toggled on.
The setting is in Settings -> Preferences, at the bottom.
Open BlueStacks, go to Settings -> Advanced and enable Android Debug Bridge and Android input debugging.
Restart BlueStacks and then Android Studio; this will make the connection.
happy debugging :)
QComboBox stylesheet left margin without side-effects
I would like to add a 5px left margin outside a QComboBox.
How can I do it in such a way that the combobox height and the appearance of the dropdown button remain untouched?
Without any margins:
QComboBox#commandComboBox {
}
it looks like this:
After adding a left margin:
QComboBox#commandComboBox {
margin-left: 5px;
}
it looks like this:
For some reason, combobox height changed. Also, now the item view contents stick out of the combobox frame.
To correct the second problem, I added a left padding:
QComboBox#commandComboBox {
margin-left: 5px;
padding-left: 5px;
}
Now the item view contents are displayed correctly, however, the height is still wrong, the dropdown arrow button is shifted towards the right and the arrow itself is shifted towards the left.
it looks like this:
Tried to specify negative left margin and padding for the arrow button, but this only messed it up further. It then lost the desktop manager style altogether and started to look blocky.
Ideas?
You don't need to alter stylesheets to add margin. You can just put a combo box inside a horizontal (or any other) layout and set left margin for this layout in Qt Designer.
Thanks. Though, I would like to have a small margin within the ComboBox, from which I could drag it around.
As an alternative, one could indeed create another widget to contain the ComboBox and horizontal box layout and make that widget draggable, instead.
Still, it should be possible with the stylesheets as well.
Counting the number of lines of code that have been commented out
I use CLOC to count blank lines, comment lines, and physical lines of source code in many programming languages:
However, in some projects, a fair amount of comment lines are in fact lines of codes that have been commented, e.g. in:
static void findSystem(){
Config.load();
List<Question> lQuestions = Question.loadAllValidWithEquations();
for(Question question : lQuestions){
//System.out.println(question.ct);
String sSystem = "(-m*a)+(-n*b)+c = 0\n(-m*b)+(-n*a)+d = 0\n";
if(question.ct.mt.toString().equals(sSystem)){
System.out.println("Found it-" + question.iIndex + ":"
+ question.ct.mt);
}
}
}
the line //System.out.println(question.ct); is a line of code that has been commented out. I really don't want to count it as a comment line: generally speaking, a high number of true comment lines is good, while a high number of commented-out lines of code is bad. To make things worse, it's not uncommon to see not just one isolated commented-out line, but a block of 10, 100 or even more in a row, which totally skews my code-audit statistics.
I am therefore looking for a program that can count the number of lines of code that have been commented out. If possible, it should take a folder as input and go through all its files (including subfolders). Ideally it would be free, work with Windows 7 SP1 x64 Ultimate, and support Java/Python/C++ (the more languages, the better).
I would imagine this would require a tool that you either a) write yourself or b) can customize to your heart's content. A well written grep script could do the job—but it likely can't be intelligent enough on its own to handle multi-language projects like cloc seems to be able to do.
as a side comment - your statement "generally speaking having a high amount of true comment lines is good" is subjective.
@GrzegorzOledzki For sure it is debatable. I tried to nuance with "generally speaking" as having too many comments is less common than not enough :) Beyond the mere comment line counting, it's another story to assess the comment quality.
Would you count a commented line found in Java, coded in COBOL, as such a line? How do you suppose such a tool would decide that a comment is really a statement from the language? What if the line was incomplete? Syntactically malformed (e.g., the "if" part of if-then-else)? So syntactically malformed, one might mistake it for a non-code source line?
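No off-the-shelf tool is named in this thread, but as the comments suggest, a self-written heuristic can get part of the way. The sketch below (my own, imperfect by design, per the objections above) flags comment lines whose body merely looks like a statement; the marker list and regex are assumptions you would tune per language:

```python
import re

# Heuristic markers that suggest a comment body is disabled code rather
# than prose: statement terminators, braces, keywords, calls, assignments.
CODE_LIKE = re.compile(
    r"""(;\s*$                                      # C/Java statement terminator
        |[{}]\s*$                                   # lone opening/closing brace
        |^(if|for|while|return|import|from|def|class)\b
        |\w+\s*\([^)]*\)                            # looks like a function call
        |\w+\s*=\s*\S                               # looks like an assignment
        )""",
    re.VERBOSE,
)

COMMENT_MARKERS = ("//", "#")

def count_commented_out_code(lines):
    """Count comment lines whose body matches the code-like heuristic."""
    count = 0
    for line in lines:
        stripped = line.strip()
        for marker in COMMENT_MARKERS:
            if stripped.startswith(marker):
                body = stripped[len(marker):].strip()
                if body and CODE_LIKE.search(body):
                    count += 1
                break
    return count

sample = [
    "// loop over all valid questions",        # prose: not counted
    "//System.out.println(question.ct);",      # disabled code: counted
    "# compute the mean",                      # prose: not counted
    "# x = np.mean(values)",                   # disabled code: counted
]
print(count_commented_out_code(sample))  # 2
```

Extending it to walk a folder tree is a matter of wrapping the function in os.walk; the hard part, as the last comment notes, is that any purely lexical rule will misclassify some lines.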
Is “every cat is identical to some mammal” logically equivalent to “every non-mammal is identical to some non-cat”?
a stands for every
e stands for some
n stands for negation
R stands for relation
Addition stands for disjunction
Multiplication stands for conjunction
C stands for cat
R stands for is identical to
M stands for mammal
aCReM or every cat is identical to some mammal.
anMRenC or every non-mammal is identical to some non-cat.
Thus, is aCReM logically equivalent to anMRenC?
In pure mathematical logic X implying Y would mean that not Y implies not X. If all cats are mammals, then a non-mammal cannot be a cat. But your use of "identical to" is confusing; I don't know how you are defining that term.
“every cat is identical to some mammal” means simply "Every Cat is a Mammal".
@MauroALLEGRANZA: thanks for the clarification. Equality, identity, and set membership are not always interchangable. You might want to edit the question to use a more standard term. (Is this a translation problem?)
Not necessary... as per answer below, the formalization of your convoluted expression boils down to the standard one.
From Conifold:
You mean ∀x(C(x) → ∃y(M(y) ∧ x=y)) and ∀y(¬M(y) → ∃x(¬C(x) ∧ x=y))? Yes, they are classically equivalent. By substitutivity of identity, y can be replaced by x in the first formula and x by y in the second. The ∃ quantifier and = then become redundant and they reduce to ∀x(C(x) → M(x)) and ∀y(¬M(y) → ¬C(y)), respectively, i.e. contrapositives of each other.
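Since the two sentences involve only the unary predicates C and M plus identity, their classical equivalence can be sanity-checked by brute force over every interpretation on a small finite domain (a check over small models, not a proof for arbitrary domains):

```python
from itertools import product

def forall(domain, pred):
    return all(pred(x) for x in domain)

def exists(domain, pred):
    return any(pred(x) for x in domain)

def equivalent_on_all_models(domain):
    """Compare the two sentences on every interpretation of C and M."""
    for C_bits in product([False, True], repeat=len(domain)):
        for M_bits in product([False, True], repeat=len(domain)):
            C = dict(zip(domain, C_bits))
            M = dict(zip(domain, M_bits))
            # forall x (C(x) -> exists y (M(y) and x = y))
            s1 = forall(domain, lambda x: (not C[x]) or
                        exists(domain, lambda y: M[y] and x == y))
            # forall y (not M(y) -> exists x (not C(x) and x = y))
            s2 = forall(domain, lambda y: M[y] or
                        exists(domain, lambda x: (not C[x]) and x == y))
            if s1 != s2:
                return False
    return True

print(equivalent_on_all_models([0, 1, 2]))  # True
```

As the answer notes, substitutivity of identity collapses s1 to "every C is M" and s2 to its contrapositive, which is why no model separates them.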
Url Rewrite to new url
<rule name="WC2018" stopProcessing="true">
<match url="^product/sb/sales.htm" />
<action type="Rewrite" url="product/sb/market/sales.htm" />
</rule>
Above is the URL rewrite rule I wrote; the original link is
product/sb/sales.htm
I need to rewrite it to
product/sb/market/sales.htm
But it does not work. I'm not sure what went wrong.
What is your application stack?
@Yasir i'm using visual studio 2015
You can try this rule:
<rule name="rule 1k" stopProcessing="true">
<match url="^product/sb/market/sales.htm$" ignoreCase="true" />
<action type="Rewrite" url="/product/sb/sales.htm" />
</rule>
Data is not flowing down to prop
I am having an issue getting data to flow down to my props: when the component is rendered, the props are not displayed.
This is the container that contains my RecipeList Component
Note: I am getting my data asynchronously from an API.
import React, { Component } from 'react'
import { connect } from 'react-redux'
import { postRecipes } from '../actions/postRecipes.js'
import { getRecipes } from '../actions/getRecipes'
class RecipesContainer extends Component{
constructor(props){
super(props)
}
componentDidMount(){
this.props.getRecipes()
}
render(){
return (
<div>
<RecipeInput postRecipes={this.props.postRecipes} />
<RecipeList recipes={this.props.recipes} />
</div>
)
}
}
const mapStateToProps = state =>{
return{
recipes: state.recipes
}
}
const mapDispatchToProps = dispatch =>{
return{
postRecipes: (recipe) => dispatch(postRecipes(recipe)),
getRecipes: () => dispatch(getRecipes())
// deleteRecipe: id => dispatch({type: 'Delete_Recipe', id})
}
}
export default connect(mapStateToProps,mapDispatchToProps)(RecipesContainer)
Here is my RecipeList component
import React, {Component} from 'react';
import Recipe from './Recipe.js'
class RecipeList extends Component {
render() {
const { recipes } = this.props
return (
<div>
{recipes.map((recipe,index) => <Recipe recipe={recipe} key={index} />)}
</div>
)
}
}
export default RecipeList;
And here is the Recipe component that it mapping as I enter and submit a recipe
import React, {Component} from 'react'
class Recipe extends Component {
render(){
return(
<div>
<h3>Name: {this.props.name}</h3>
<p>Category: {this.props.category}</p> {/* this one I will have to call differently since this is a one-to-many relationship */}
<p>Chef Name: {this.props.chef_name}</p>
<p>Origin: {this.props.origin}</p>
<p>Ingredients: {this.props.ingredients}</p>
</div>
)
}
}
export default Recipe
EDIT: Added getRecipe action as requested.
export const getRecipes = () => {
const BASE_URL = `http://localhost:10524`
const RECIPES_URL =`${BASE_URL}/recipes`
return (dispatch) => {
dispatch({ type: 'START_FETCHING_RECIPES_REQUEST' });
fetch(RECIPES_URL)
.then(response =>{ return response.json()})
.then(recipes => { return console.log(recipes), dispatch({ type: 'Get_Recipes', recipes })});
};
}
Why isn't it displaying my results? I used console.log to make sure I was returning my API data, and the Recipe component itself is rendering: the plain HTML tags render just fine.
In the future, "isn't displaying my results" and "having an issue" are not specific and make it confusing to understand. Try to be as specific as possible when asking questions about the issue.
What happened from yesterday when your code was working?
@DrewReese the issue was in my mapStateToProps function; I updated the code in the question. The mapStateToProps function needed an explicit return, not an implicit one. The component is rendering, just not displaying. I console.log the results from the getRecipes function and it returns results from the API.
Is this issue here that the Recipe component is rendering nothing, or that name, category, etc... are not rendering? I think @AndyRay has the answer. You pass only a recipe prop, so in the component you need to access this.props.recipe.name etc...
That worked, I was doing that earlier, but I think that was prior to my recent push to git
You pass in a prop called recipe to your <Recipe /> component, but your component reads from a non-existent this.props.name, etc.
Thank you btw. This helped out. I made some changes prior to this questions, and just didn't retrace my steps
In your recipe list component, try this.
{recipes ? recipes.map((recipe,index) => <Recipe recipe={recipe} key={index} />) : null}
Given a Voronoi diagram created in $\mathcal{O}(n)$, is it possible to find the closest pair of points in $\mathcal{O}(n)$?
So, given a set of n points, suppose there is a Voronoi diagram of them that was created in $\mathcal{O}(n)$.
Now is it possible to find the closest pair of points of this set in $\mathcal{O}(n)$?
I know that the closest pair of points in the set corresponds to 2 adjacent cells in the Voronoi diagram, but checking every pair of adjacent cells seemed like it would be slower than $\mathcal{O}(n)$, so there might be a better way? I am also aware of the relationship between a Voronoi diagram and the Delaunay triangulation, but I can't find any hint in this direction either.
Any hint is appreciated!
We only need to consider pairs of points whose Voronoi cells are adjacent, i.e. we can check pairs of points that determine the boundaries of each of the Voronoi cells. Since the Voronoi diagram is planar, there are only $O(n)$ boundary components to check.
Ok so I was wrong about the runtime of checking every 2 adjacent cells and it is in fact possible in $\mathcal{O}(n)$. Thanks!
This is only possible if the size of the Voronoi diagram is actually ${\mathcal O}(n)$. In dimension $>2$, the total number of pairs of neighboring cells may be larger than ${\mathcal O}(n)$. In three dimensions, the Delaunay tetradedralization may contain ${\mathcal O}(n^2)$ tetrahedra (and edges), as shown in the example in this image from Attali, D.; Boissonnat, J.-D., A linear bound on the complexity of the Delaunay triangulation of points on polyhedral surfaces, Discrete Comput. Geom. 31, No. 3, 369-384 (2004).
Sites are neighbors in the Voronoi diagram if they correspond to an edge in the Delaunay triangulation. So in this case, one would need to iterate over $\mathcal{O}(n^2)$ pairs to find the closest pair of points. In those cases, the Voronoi diagram has a size of $\mathcal{O}(n^2)$ and thus will be difficult to use for anything in $\mathcal{O}(n)$ time.
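In the planar case, the linear scan described in the accepted answer looks like this. The sketch assumes the Voronoi adjacency (equivalently, the Delaunay edge list) has already been extracted from the diagram; the four sites and six edges below are a made-up toy input:

```python
from math import dist  # Euclidean distance, Python 3.8+

def closest_pair_from_voronoi(points, neighbors):
    """Scan each pair of sites whose Voronoi cells are adjacent.

    A planar subdivision on n sites has only O(n) such adjacencies, so this
    scan is linear in n once the diagram (or its dual Delaunay edge list)
    is available.
    """
    best, best_d = None, float("inf")
    for i, j in neighbors:
        d = dist(points[i], points[j])
        if d < best_d:
            best_d, best = d, (i, j)
    return best, best_d

# Made-up toy input: four sites; the Delaunay triangulation (dual of the
# Voronoi diagram) of these points has six edges.
points = [(0.0, 0.0), (4.0, 0.0), (0.0, 3.0), (1.0, 1.0)]
edges = [(0, 1), (0, 2), (1, 2), (0, 3), (1, 3), (2, 3)]
pair, d = closest_pair_from_voronoi(points, edges)
print(pair, round(d, 3))  # (0, 3) 1.414
```

As the last answer cautions, this stays linear only while the adjacency list itself is $\mathcal{O}(n)$, which holds in the plane but not in general in higher dimensions.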
pdf.js: PDF file from servlet which requires a variable
I am using pdf.js to view PDF files in my GWT application. I've implemented the viewer exactly as the product of a build operation as described on the readme.
When I use the viewer with a static pdf, this works fine. When I supply the link to a servlet that serves the pdf however, the pdf viewer doesn't load.
Works fine
http://<IP_ADDRESS>:8888/pdfjs/web/viewer.html?file=http://<IP_ADDRESS>:8888/staticpdf.pdf
Doesn't work
http://<IP_ADDRESS>:8888/pdfjs/web/viewer.html?file=http://<IP_ADDRESS>:8888/api/getPdf?nodeRef=001
http://<IP_ADDRESS>:8888/api/getPdf?nodeRef=001 yields a pdf file. The servlet has always worked.
This won't work, because pdf.js#getDocument proceeds to make a GET call without parameters, while the servlet needs the nodeRef:
GET http://<IP_ADDRESS>:8888/api/getPdf?nodeRef
HTTP/1.1 200 OK
Content-Type: application/pdf
Content-Length: 0
How would I implement the java servlet and pdf.js to be able to view a PDF file given a certain nodeRef? (only the servlet knows how to turn nodeRef into PDF, I need the path to the PDF to remain hidden)
I've been thinking along the lines of api/getPdf/001, but have no idea how to catch this on the tomcat server, and if that is even possible.
It turns out I was thinking too much within pdf.js. I had been tinkering with it for hours, and even this question itself has changed a dozen times because I kept finding new leads.
However, I've now found a simple solution.
Instead of accessing my servlet as /getPdf?nodeRef=001, I access it /getPdf/001
My servlet mapping is now /getPdf/*
The servlet contains the following new code in doGet:
String nodeRef = request.getPathInfo().substring(1);
This omits the need for basic GET parameters in the url, at least in the format ?a=1&b=2, and works fine to pass a variable to a servlet that returns a PDF file using pdf.js.
EDIT: I have edited my question title to reflect the situation so that those who stumble upon this problem may find their answer here too.
good job. be sure to select your own answer as the accepted answer.
Thanks :). I will when SO allows me in 2 days!
If you don't want to change your server mapping, you should encode your URL (with encodeURIComponent, for example):
http://<IP_ADDRESS>:8888/api/getPdf?nodeRef=001 will turn into http%3A%2F%2F<IP_ADDRESS>%3A8888%2Fapi%2FgetPdf%3FnodeRef%3D001 and pdf.js will handle it correctly as 'file' parameter value.
Cheers!
Function for reading of the JPEG DCT coefficients by python
import jpeglib
im = jpeglib.read_dct("golet.jpeg")
Luminance = im.Y
print(Luminance)
I need to print the luminance, but I get this error: TypeError: DCTJPEG.__init__() got an unexpected keyword argument 'path'
Note: my image is in the same folder as this code.
I am trying to read the DCT coefficients of a JPEG image.
Handling a dictionary of lists using Python
I have a single dictionary that contains four keys, each key representing a file name, and each value is a nested list, as can be seen below:
{'file1': [[['1', '909238', '.', 'G', 'C', '131', '.', 'DP=11;VDB=3.108943e02;RPB=3.171491e-01;AF1=0.5;AC1=1;DP4=4,1,3,3;MQ=50;FQ=104;PV4=0.55,0.29,1,0.17', 'GT:PL:GQ', '0/1:161,0,131:99'], ['1', '909309', '.', 'T', 'C', '79', '.', 'DP=9;VDB=8.191851e-02;RPB=4.748531e-01;AF1=0.5;AC1=1;DP4=5,0,1,3;MQ=50;FQ=81.7;PV4=0.048,0.12,1,1', 'GT:PL:GQ', '0/1:109,0,120:99']......,'008_NTtrfiltered': [[['1', '949608', '.', 'G', 'A',...}
My question is how to check, for each key, only the first two elements of each record (for instance "1", "909238") for equality, and then write the matches to a file. The reason I want to do this is that I want to keep only the values common to all four files (keys), comparing only the first two elements of each record.
Thanks a lot in advance
Best.
What's wrong with my_dict["the_file_you_want"][0][0][{0 or 1}]?
does your values has a same dimension?
@EMBLEM Hi EMBLEM, I tried it and didn't work.
@Kasra they have the same dimension.
You can access the keys of the dictionary dictio and make your comparison using:
f = open('file.txt', 'w')
value_to_check_1 = '1'
value_to_check_2 = '909238'
for k in dictio:
    value_1 = dictio[k][0][0][0]
    value_2 = dictio[k][0][0][1]
    if value_1 == value_to_check_1 and value_2 == value_to_check_2:
        f.write('What you want to write\n')
f.close()
If you want to do a check that involves every value of your dictionary dictio:
Maybe you want to store couples of values from dictio.
couples = [(dictio[k][0][0][0], dictio[k][0][0][1]) for k in dictio]
Then, you can do a loop and iterate over the couples to do your check.
Example you can adapt according to your need :
for e in values_to_check:
    for c in couples:
        if float(e[0][0][0]) >= float(c[0]) and float(e[0][0][1]) <= float(c[1]):
            f.write(str(e[0][0][0]) + str(e[0][0][1]) + '\n')
Thanks for your help. The values to be checked should come from within the dictionary itself, not from externally declared values like value_to_check_1 and value_to_check_2. Also, when I tried your way it only performs a single check and then stops, but my goal is to compare values across the entire dictionary, between all four keys, not just one value. I hope my description is clear. Thanks a lot once again.
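Following up on that comment: since the goal is the sites common to all four files rather than a comparison against fixed values, one approach (sketched here on a made-up miniature of the data) is to build one set of (chromosome, position) keys per file and intersect them:

```python
# Hypothetical miniature of the OP's structure: each value is a list whose
# first element is the list of variant records, and a record's first two
# fields are chromosome and position.
calls = {
    "file1": [[["1", "909238", ".", "G", "C"], ["1", "909309", ".", "T", "C"]]],
    "file2": [[["1", "909238", ".", "G", "C"], ["2", "120000", ".", "A", "G"]]],
    "file3": [[["1", "909238", ".", "G", "T"], ["1", "909309", ".", "T", "C"]]],
    "file4": [[["1", "909238", ".", "G", "C"], ["3", "500", ".", "C", "A"]]],
}

# One set of (chrom, pos) keys per file, then intersect them all at once.
keysets = [{(rec[0], rec[1]) for rec in value[0]} for value in calls.values()]
common = set.intersection(*keysets)

# Write the shared sites to a file, one per line.
with open("common_sites.txt", "w") as fh:
    for chrom, pos in sorted(common):
        fh.write(chrom + "\t" + pos + "\n")

print(sorted(common))  # [('1', '909238')]
```

Because only the first two fields go into the keys, records that differ in the remaining columns (as file3's G/T call does here) still count as the same site.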
What is the correct way to use "Habe ich" and "Ich habe"?
I am way too confused about this, and it is apparently way too easy.
Maybe this helps: https://en.wikipedia.org/wiki/V2_word_order
“Habe ich”
It's used in questions about yourself, from your own perspective, or as an (elliptical) answer. It's like "Do I...?" or "I did..."
For example:
Habe ich etwas falsch gemacht?
Did I do something wrong?
or
Somebody: Hast du den Müll rausgebracht?
You: Ja, habe ich.
Somebody: Did you take out the trash?
You: Yes, I did.
“Ich habe”
It's used if it's a statement about yourself, from your own perspective. It's like "I have..." or "I'm...".
For example:
Ich habe Hunger.
I'm hungry.
or
Ich habe 3 Äpfel.
I have 3 apples.
Edit: Thanks to @HagenvonEitzen for correcting me.
Note that (and perhaps this is the confusing part) "Habe ich" can be used as a (elliptical) answer, i.e., as a statement: "Hast du den Müll raus gebracht?" - "Ja, habe ich" (and not "Ja, ich habe")
@HagenvonEitzen True... I have corrected my answer.
How to detect and predict sensor faults and failures (for weather stations to be specific)?
Need help. Especially those knowledgeable in weather systems/meteorology.
What is the best approach to detecting and predicting faulty weather sensors and their failures based on their readings alone?
I'm doing a project regarding detection and prediction of faulty sensors employed at weather stations to improve maintenance and optimize scheduling.
I've seen studies using operational data such as battery level, communication status, and temperature to detect faults. This is especially the case for industrial machines. They employ sensors to read the temperature, vibration, rotation, etc. of those machines. They also have labeled data such as if the machine/equipment is operational at the time being.
What method can I do if such data mentioned above isn't available for my weather station dataset?
I also don't have maintenance data due to confidential concerns from our local weather agency. I only have access to historical weather data, which contains values of actual physical phenomenon being sensed (temperature, humidity, etc.)
Anomaly detection would work, but I wonder how accurate it really is, since sudden rain affects temperature, humidity, and wind-speed values, which might confuse the model into declaring the sensor faulty.
Pardon me if I sound like a noob; this is only my second machine learning project.
Dataset looks like this (per station)
| Timestamp         | Temp | Humid |
|-------------------|------|-------|
| 12/24/23 00:00:00 | 28.4 | 10.1  |
| 12/24/23 00:00:10 | 28.5 | 10.2  |
| 12/24/23 00:00:20 | 28.3 | 11.8  |
I tried anomaly detection but haven't produced a good model, as I am still at the practice stage.
Welcome to SE.
So you only have two independent variables and a lot of time points. How many examples of failures do you have? 10/100/1000? This will help to decide how complex your prediction model can be.
Looking at this data yourself, could you explain to another person what indicates that there was or will soon be a failure? It may help to design data processing steps to extract more potent feature representations, it may also suggest surrogate outcomes which you could try to predict instead, i.e. if before 50% of the failures the temperature readings start oscillating with a period of 5min, you could simply make Fourier-transformed temperature reading into a feature, and try to detect that, as part of the process.
Do you think temperature or humidity cause failures? If not, it may be better to think of comparing readings of nearby sensors, i.e. focusing on the difference between sensors: if a failure causes a difference in temperature, then no difference in temperature can be taken to imply no failure (ignoring all other mechanisms that could cause a difference).
Sorry not to give an easy answer, but I think it might be beneficial to answer the above questions before diving too deep into specific models.
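The oscillation-feature idea mentioned above can be sketched cheaply. This is a minimal, assumption-laden example: it assumes evenly sampled readings and uses plain autocorrelation in place of a full Fourier transform, which is enough to extract a "dominant period" feature per window:

```python
import math

def dominant_period(readings, max_lag=None):
    """Return the lag (in samples) with the highest autocorrelation,
    a cheap stand-in for a Fourier-based oscillation feature."""
    n = len(readings)
    max_lag = max_lag or n // 2
    mean = sum(readings) / n
    centred = [x - mean for x in readings]
    var = sum(c * c for c in centred) or 1e-12  # guard against a flat trace
    best_lag, best_r = 1, float("-inf")
    for lag in range(1, max_lag + 1):
        # normalized autocorrelation at this lag
        r = sum(centred[i] * centred[i + lag] for i in range(n - lag)) / var
        if r > best_r:
            best_lag, best_r = lag, r
    return best_lag

# e.g. a trace oscillating every 8 samples (80 min at a 10-min sampling rate)
trace = [math.sin(2 * math.pi * i / 8) for i in range(80)]
period = dominant_period(trace)
```

The returned lag (here, in 10-minute samples) can then be fed to a classifier as one feature per window.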
ADDENDUM
To predict failures you would need to tie failures to some preceding event. It does seem to me that your readings do not cause failure, and also that the actual temperature/humidity does not cause failure. Hence, IMHO, the best signal would be the difference between the readings of a specific sensor and a nearby sensor that presumably is not faulty. You can generalize this to using the nearest sensors to predict the readings of your current sensor, and then using the difference between the predicted readings and the actual ones as a signal for your failure prediction mechanism.
If there are no sensors nearby, then you would need to extract this signal of impending failure from the sensor itself. This may be very tricky, since many other factors will play a role here. We then come back to how many examples of failure you have. I have seen Transformers applied to time series, but you need a lot of data there. There are also things like TiDE, also quite data-hungry. You will probably be better off with an ARIMA-type model and a few additional features, e.g. a Fourier transform, or time-series features with generic 1, 2, 3 lags as well as very specific 24-hour lags, to compare the sensor to itself a day ago. The predictions of the time-series model and your actual readings could be fed into a classifier and used to generate a score to predict failure.
It might be helpful to plot the readings of a few temperature/humidity sensors before they went faulty. You may spot some patterns there.
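The neighbor-comparison signal described above could be sketched as follows. This assumes you have a time-aligned trace from a nearby, presumed-healthy sensor, and it flags time steps where the difference is a statistical outlier; the z-score threshold is an assumption you would tune (a learned predictor could replace the raw neighbor reading):

```python
def residual_flags(sensor, neighbor, z_thresh=3.0):
    """Flag time steps where this sensor deviates abnormally from an
    aligned, presumed-healthy nearby sensor (z-score on the difference)."""
    diffs = [a - b for a, b in zip(sensor, neighbor)]
    n = len(diffs)
    mean = sum(diffs) / n
    # population std of the differences; guard against a zero spread
    std = (sum((d - mean) ** 2 for d in diffs) / n) ** 0.5 or 1e-12
    return [i for i, d in enumerate(diffs) if abs(d - mean) / std > z_thresh]
```

The flagged indices (or their rate over a window) become the anomaly signal fed to the failure-prediction stage.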
Just opened the metadata now. Below is a sample of the values the sensors report when they have issues or are broken (basically, if a sensor reports unrealistic values, it's faulty):
{
  "air_temperature_max": "-39.34",
  "air_temperature_min": "-39.51",
  "air_temperature": "-39.41",
  "relative_humidity_max": "0.7",
  "relative_humidity_min": "0.5",
  "relative_humidity": "0.6",
  "pressure_max": "-999",
  "pressure_min": "-999",
  "pressure": "-999",
  "solar_radiation_max": "497.4",
  "solar_radiation_min": "115.7",
  "solar_radiation": "224.1",
  "accumulated_rain_1h": 0
}
@JasonYuliver I guess detection of failures should be easy enough - you just said it. Non-realistic readings, like negative temperatures. Will add the other part to the answer
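A rule-based detector along these lines might look like the sketch below. The sentinel value and the "plausible" physical ranges are illustrative assumptions based on the sample metadata, not the agency's official limits:

```python
# Assumed sentinel and plausibility limits -- tune these to the real station.
SENTINELS = {-999.0}
PLAUSIBLE = {
    "air_temperature": (-30.0, 60.0),
    "relative_humidity": (0.0, 100.0),
    "pressure": (800.0, 1100.0),
}

def faulty_fields(reading):
    """Return the field names in one reading dict that look faulty."""
    bad = []
    for field, value in reading.items():
        v = float(value)  # the metadata stores numbers as strings
        # map e.g. "air_temperature_max" back to its base variable
        base = field.strip().removesuffix("_max").removesuffix("_min")
        lo, hi = PLAUSIBLE.get(base, (float("-inf"), float("inf")))
        if v in SENTINELS or not lo <= v <= hi:
            bad.append(field)
    return bad
```

Any reading with a non-empty result would be logged as an anomalous event for that station.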
What about prediction though? My goal is to predict failures to reduce downtime and prevent those failures.
@JasonYuliver, have you read the extra I added to the answer? Does this help?
Yes, a lot. I do have another question: would 10-15 years of weather data be enough? Data from the station comes every 10 minutes. Although I don't have labeled failures, there are data of failures, such as when temperature readings become "-999" or "-39.41". But to be honest, they are very random; they could occur twice, thrice, or 10 times in a month, then not at all in the next.
If you have weather data for such a long time, it is probably the number of failures that you need to worry about. How many failure examples do you have? Also, from the message above it is not quite clear to me. Once you see a failure, do you replace the sensor, or do you continue using it?
I'm sorry, by "data of failures" I meant readings that indicate a malfunction, such as the values I provided above (unrealistic readings), and not readings indicating the sensor has already failed or become non-operational. I don't have labeled data of failures, as I mentioned, but only values that indicate there is a malfunction in the sensor, which could then be used for fault detection. Additionally, my plan is to use the faulty or anomalous readings to predict failure, based on the assumption that the failure of a sensor is preceded by its anomalous behavior. Would that work?
@JasonYuliver, I think it would work. There is a lot of detail that needs to be filled in, but the big picture is this: you will need some way of preparing a set of labels of anomalous behavior. I would eyeball the traces before this behavior starts; you may spot something. Then try applying some classifiers to this. As I said before, you may find that some features can be built by comparing predicted with actual behaviors of sensors, or behaviors of nearby sensors.
If I can find a dataset online that contains sensor readings with labeled failures, can I use it to build the model instead of my own weather station dataset, which contains no labels?
@JasonYuliver, maybe, but why not use your own? You should be able to automate the label creation, and you have enough data by the looks of it, so why bother finding another dataset and hoping that your dataset matches the other one?
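Automating the label creation could be as simple as the sketch below: given a per-timestep anomaly mask (e.g. from the unrealistic-value rules), mark the window of samples preceding each detected anomaly as positive "pre-failure" examples. The window length is an assumption you would tune:

```python
def make_labels(flags, horizon=6):
    """Given a 0/1 anomaly mask, label the `horizon` samples preceding
    each anomalous reading as positive 'pre-failure' training examples."""
    labels = [0] * len(flags)
    for i, bad in enumerate(flags):
        if bad:
            # mark the window leading up to (but not including) the anomaly
            for j in range(max(0, i - horizon), i):
                labels[j] = 1
    return labels
```

A classifier trained on features from each timestep against these labels then scores how likely a failure is in the next `horizon` samples.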
I did find a similar study: [a], in which they used autoencoders for an anomaly-based approach. I just don't understand how exactly they did it; I'm confused by the sliding window part.
[a] https://ceur-ws.org/Vol-1649/123.pdf
In [a] we do exactly this. For example, see fig 15 and 16.
Moreover, it is very simple and fast...
[a] https://www.cs.ucr.edu/%7Eeamonn/DAMP_long_version.pdf
Link-only answers are discouraged for a number of reasons. Give a journal ref if possible (in addition to the link to your site), and summarize the main point enough to stand on its own here.
| common-pile/stackexchange_filtered |
Cron Job SQL Check New User
This is more of an "is it possible" question rather than a technical question.
I've got a tender asking whether it's possible to check an old user database and, if there are new users, add them to the new Joomla user database table.
I thought the best way would be to use a cron job, but I don't understand how you would check whether a new user was added to the old database.
Would this be possible, and if so, what kind of approach would you use?
Thanks
This sounds hackish, but an easy way is to add a table storing a timestamp (yeah, a table with only one row) and compare that timestamp with the latest user entries. If a user entry newer than that timestamp is found during the cron run, create the new users in your database and update the timestamp. But somehow I love the idea of shutting down the old database and simply redirecting the users directly to the new Joomla site.
It was all part of the tender, but they use an in-house database that it would be pulled from.
http://stackoverflow.com/questions/2765409/adding-php-script-to-cron and google for mysql+select and mysql+insert.
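A minimal sketch of the timestamp-polling idea from the comment above, using SQLite in place of the unknown in-house database; the table and column names here are made up for illustration:

```python
import sqlite3

# Stand-in for the legacy user database (schema is assumed).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE legacy_users (id INTEGER, name TEXT, created TEXT)")
conn.executemany(
    "INSERT INTO legacy_users VALUES (?, ?, ?)",
    [(1, "alice", "2013-01-01 10:00"), (2, "bob", "2013-01-02 09:30")],
)

def fetch_new_users(conn, last_sync):
    """One cron run: select users created after the stored sync marker,
    then advance the marker."""
    rows = conn.execute(
        "SELECT id, name, created FROM legacy_users "
        "WHERE created > ? ORDER BY created",
        (last_sync,),
    ).fetchall()
    return rows, (rows[-1][2] if rows else last_sync)

new_users, marker = fetch_new_users(conn, "2013-01-01 12:00")
# each row in new_users would then be inserted into Joomla's user table,
# and `marker` written back to the one-row timestamp table
```

The marker persists between runs (the one-row table suggested above), so each cron invocation only sees users added since the previous run.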
Supposing the tender has access to both databases and files, I have a question.
Is the old database (website) still in use? I guess so, since users can still be added!
So why not edit the Joomla files (the user-related component, com_user) to also add the new user to the new database, after the validation/insert on the old one?
This is simple in either case, old 1.5 and newer versions.
Hope I have helped and pointed you in the right direction.
Regards.
Yeah, seems like the best way to do it, but the old database won't be a PHP website, more likely an internal system. But thanks anyway :)
how to update the badge value during application loading?
In my application I need to show the badge value depending on the cart item count, and I implemented the lines below in the view controller, which gets its data using JSON. The badge value is not loaded whenever the view controller is displayed, is not updated after a delete, and is not shown at the start of the application. As I am new to Swift 3, I don't know where to implement it. Can anyone help me resolve the issue?
let appDelegate = UIApplication.shared.delegate as! AppDelegate
func downloadJsonWithURL() {
let url = NSURL(string: urlString)
URLSession.shared.dataTask(with: (url as URL?)!, completionHandler: {(data, response, error) -> Void in
if let jsonObj = try? JSONSerialization.jsonObject(with: data!, options: .allowFragments) as? NSDictionary {
// print(jsonObj!.value(forKey: "Detail"))
self.itemsArray = (jsonObj!.value(forKey: "Detail") as? [[String: AnyObject]])!
print(self.itemsArray.count)
OperationQueue.main.addOperation({
self.tableDetails.reloadData()
let key = "productPrice"
for array in self.itemsArray{
if let value = array[key] {
let cart = String(describing: value)
let endIndex = cart.index(cart.endIndex, offsetBy: -2)
let truncated = cart.substring(to: endIndex)
self.totalcartPrice.append(truncated)
print(self.totalcartPrice)
var sum = 0.00
for numbers in self.totalcartPrice{
sum += Double(numbers)!
}
print(sum)
self.increment = Int(sum)
self.total = String(sum) + "0KD"
let tabItems = self.tabBarController?.tabBar.items as NSArray!
let tabItem = tabItems?[1] as! UITabBarItem
self.appDelegate.badgeValue = String(Int(self.itemsArray.count))
}
}
})
}
}).resume()
}
func deleteButtonAction(button : UIButton) {
let buttonPosition = button.convert(CGPoint(), to: tableDetails)
let index = tableDetails.indexPathForRow(at: buttonPosition)
self.itemsArray.remove(at: (index?.row)!)
self.tableDetails.deleteRows(at: [index!], with: .automatic)
tableDetails.reloadData()
let tabItems = self.tabBarController?.tabBar.items as NSArray!
let tabItem = tabItems?[1] as! UITabBarItem
self.appDelegate.badgeValue = String(Int(self.itemsArray.count))
if (tableView(tableDetails, numberOfRowsInSection: 0) == 0){
tableDetails.isHidden = true
emptyView.isHidden = false
}
}
You need to get the rootViewController to update the badge from the AppDelegate, like this:
let rootViewController = self.window?.rootViewController as! UITabBarController
let tabArray = rootViewController?.tabBar.items as NSArray!
let tabItem = tabArray.objectAtIndex(1) as! UITabBarItem
tabItem.badgeValue = "34"
How do I pass the value to that class?
you can try this for passing value https://stackoverflow.com/questions/15049924/passing-data-from-app-delegate-to-view-controller
This link is for passing data from the app delegate to a view controller, but here I need to send the data from the view controller to the app delegate. Is the process the same? @Vinod Kumar
Passing data from a view controller to the app delegate: https://stackoverflow.com/questions/32124728/pass-data-to-variable-in-appdelegate
You can also create a variable in the app delegate, then get the app delegate object in the view controller and set the value in the app delegate variable @user
I tried creating a variable and passing data from the app delegate object into the view controller, but the data was not passed @Vinod Kumar
This is the code in the app delegate class: var badgeValue: String?
let rootViewController = self.window?.rootViewController as!UITabBarController!
let tabArray = rootViewController?.tabBar.items as NSArray!
let tabItem = tabArray?.object(at: 1) as! UITabBarItem
tabItem.badgeValue = badgeValue
and in view controller class let appDelegate = UIApplication.shared.delegate as! AppDelegate
self.appDelegate.badgeValue = String(Int(self.itemsArray.count))
i had added my complete code please check and help me bro
var value = 0
Put this variable in the app delegate and the code below in the view controller; as a result, it gets updated:
let appDelegate = UIApplication.shared.delegate as! AppDelegate
appDelegate.value = 5
print(appDelegate.value)
If you want to update the badge in the app delegate, then use a local notification or a custom delegate @user
How do I use a local notification? @Vinod Kumar
NotificationCenter.default.addObserver(self,selector: #selector(self.getMyProfile),name: NSNotification.Name(rawValue:"UpdateProfileNotification"),object: nil)
Set the code above in the app delegate's didFinishLaunchingWithOptions,
and put the method below in the app delegate:
func getMyProfile()
{
}
Put the code below into the view controller:
NotificationCenter.default.post(name: Notification.Name("UpdateProfileNotification"), object: nil)
But how do I pass the data here? @VinodKumar
You can pass data as:
let imageDataDict:[String: UIImage] = ["image": image]
// post a notification
NotificationCenter.default.post(name: NSNotification.Name(rawValue: "notificationName"), object: nil, userInfo: imageDataDict)
you can send value in user info and also get from notification.userinfo
got bro @Vinod Kumar
great bro @user
When is $T$-Alg monoidal closed?
Given a category $\mathcal{V}$ and a monad $(T,\eta,\mu)$ on it, what are sufficient conditions on $\mathcal{V}$ and $T$ for the category of $T$-algebras to be monoidal closed?
(I'm pretty sure that Kock proved that if $T$ is strong and commutative, then $T$-Alg is closed; can we relax that condition, or change it in any way?)
Yes, that is the theorem of Kock. Why are you seeking different conditions?
Well, I have a cartesian (but not closed) category as $\mathcal{V}$ and a strong (but not commutative) monad, and have a strong suspicion that $T$-Alg should be closed monoidal (as in the case of abelian groups) -- I was hoping that there exists some general argument. (Although I just found some work by Hyland and Power that I might be able to use: http://dx.doi.org/10.1016/S0022-4049(02)00133-0)
I don't see any reason why the algebras would be closed if the base isn't closed. After all, the hom module between two modules is a subset of the hom set... Not to mention, the identity monad is the nicest possible monad there is, so if its algebras are closed, then the base is already closed!
(Also, the abelian group monad is commutative.)
The abelian group monad is commutative with respect to the tensor product and not the cartesian product, no? And if I understand things correctly, algebras over the abelian group monad in pSet will produce a closed category, even though the monad has nothing to do with the smash product of pSet (or is it commutative with the smash product? I plead ignorance in this case).
Every monad on sets has a unique strength and being a commutative monad is a property of that strength. The tensor product of abelian groups doesn't come into the question.
You need $\mathcal{V}$ to be closed monoidal, $T$ to be a monoidal monad, and certain coequalizers in $\mathcal{V}$ to exist and commute with $T$. For details, see "Tensors, monads and actions" by Gavin Seal.
It is not necessary to assume that the coequalisers commute with $T$, but some condition about existence of coequalisers and preservation of epimorphisms seems to be required.
I know that, but I wanted to keep it short.
EntityFieldQuery condition
How can I omit a condition in an EntityFieldQuery if a variable does not exist? For example:
->fieldCondition('field_filter_color', 'tid', $filter_color)
receives one term ID in $filter_color.
But there are situations where $filter_color is not present, so I would like to:
if $filter_color is empty, ignore this condition.
An alternative is to present it as an array, like
->fieldCondition('field_filter_color', 'tid', $filter_color, 'IN')
and prepopulate $filter_color with all values; if a value is chosen, create an array with only that value. But I am not sure that is the best way to do it.
You can simply do this:
$query = new EntityFieldQuery();
$query->entityCondition('entity_type', 'node');
// all other conditions go here
if (isset($my_field_val)) {
$query->fieldCondition('my_field', 'value', $my_field_val);
}
$result = $query->execute();
Untried code as I don't have access to my files at the moment.
Unfortunately EntityFieldQuery can only query for what is there, it doesn't have the capacity to query based on empty fields.
However cumbersome it might be, your solution is about all you can do other than add a tag to the query, and implement hook_query_alter() to manually add the "empty" condition to it.
Thanks! I am thinking, can I somehow exclude the whole parameter (->fieldCondition ...) based on some external condition? That way, if the condition is met, ->fieldCondition is in the query; if not, it is not. But I guess I would need to somehow "construct" the whole query on the fly?
To be honest I normally just construct my own queries and go into the field tables directly when EntityFieldQuery's limitations kick in. If you're using MySQL for field storage it's probably easier than trying to work round the problem, however creative you get
You mean you use db_select? I am thinking hard about how to stay within EntityFieldQuery for this task. For now, if $filter_color is not defined, I supply an array with all values to simulate the non-existence of a value :D
How to calculate the world position of pixels in an HLSL script?
I want to make a simple ray marching loop in the fragment shader. I think the main issue is that I'm not giving the correct world position input.
Here is my current attempt:
Shader"SDF"{
Properties{
_MainTex ("Texture", 2D) = "white" {}
}
SubShader{
Pass{
CGPROGRAM
#pragma vertex vert
#pragma fragment frag
#include "UnityCG.cginc"
#include "HLSLSupport.cginc"
int steps = 64;
float3 centre = float3(0,0,0);
float radius = 1.0;
sampler2D _MainTex;
float sph(float3 posi){
return distance(posi, centre) - radius;
};
struct v2f{
float4 vertex : SV_POSITION;
float3 wpos : TEXCOORD0;
};
struct appdata{
float4 vertex : POSITION;
};
v2f vert(appdata v){
v2f o;
o.vertex = UnityObjectToClipPos(v.vertex);
o.wpos = mul(unity_ObjectToWorld, v.vertex).xyz;
return o;
}
fixed4 frag(v2f i) : SV_TARGET{
float3 wpos = mul(unity_ObjectToWorld, i.wpos).xyz;
float t = 0;
float3 dir = normalize(wpos - _WorldSpaceCameraPos);
fixed4 col;
for (int j = 0; j < steps; j++){
if (t > 100){
col = fixed4(0, 0, 0, 1);
break;
}
float3 p = wpos + dir * t;
float d = sph(p);
if (d < 0.001){
col = fixed4(1, 0, 0, 1);
}
t += d;
}
return col;
}
ENDCG
}
}
}
The output of this code is always black.
Also, wpos is supposed to be the world position of the pixels.
Can you copy and paste the code instead of a screenshot, please? Also, if you have any more details, that would be useful to help you.
Why are you multiplying by the object-to-world matrix twice, once in the vertex shader, and again in the fragment shader?
@DMGregory I thought it returns the world position of the pixels.
That does not explain why you're doing it twice.
Xpath selector not working in IE but working fine in Chrome and Firefox
I am using the following XPath to click on an element using JavascriptExecutor in Selenium WebDriver. This works fine in Firefox and Chrome but does not work in IE.
Any idea how to make this work? After a lot of trial and error I got this working in Firefox and Chrome with the following XPath:
//*[contains(@class,'ui-select-choices-row') or contains(@id,'ui-select-choices-row')]//*[text()='TextofMyElementToBeclicked']
Additional info: this is a jQuery drop-down on an AngularJS application. When the user clicks on the drop-down, a //ul is loaded, and I am using the above XPath (which is part of the //ul) to select the element based on its text (using a JavascriptExecutor click). I used JavascriptExecutor because Selenium's click() function simply could not click on the drop-down element.
I am clicking the element using the code below:
WebElement element = driver.findElement(By.xpath("YourNumbersXpath"));
JavascriptExecutor executor = (JavascriptExecutor)driver;
executor.executeScript("arguments[0].click();", element);
I think that you can find the solution Here or Here
I successfully tested your XPath with IE11, so it's not an issue related to IE. It's most likely a timing issue. First click on the drop button, then wait for the targeted element to appear and finally click on it:
WebDriver driver = new FirefoxDriver();
WebDriverWait wait = new WebDriverWait(driver, 30);
driver.get("...");
// move the cursor to the menu Product
WebElement element = driver.findElement(By.xpath("drop down button")).click();
wait.until(ExpectedConditions.elementToBeClickable(By.xpath("drop down item"))).click();
I have tried this and also tried elementToBeVisible. Both fail with an "unable to find element" error. Strangely, this fails only in IE 11.
No, it simply says that it cannot find the XPath. It waits for the default timeout and throws the error.
Can you provide some HTML from IE?
unfortunately I cannot provide it as I work for a financial customer and we have strict security guidelines not to share details outside..
I am just thinking whether I should get all the descendants from the root element (after the click), match them by text, and click the element...
Try with a simpler XPath like //*[contains(.,'TextofMyElementToBeclicked')]
It could be that the document is loaded differently in IE. Anyway, without being able to look at it, there is nothing much I can do to help.
IE11 seems to struggle with contains(@class, ...) and possibly also contains(@id, ...). Try alternative solutions like starts-with().
What's the meaning of "push through"? (Could you please explain it in the text below?)
The main text is: People who wear exercise watches become trapped in a cycle of escalation. 10000 steps may have been the gold standard last week, but this week it's 11000. Next week, 12000 ... That trend can't continue forever, but many people push through stress fractures and other major injuries to seek the same endorphin high that came from a much lighter exercise load only months earlier.
Push through means to surpass: to push so hard as to get to the other side. If you ignore pain and pass through it to enjoy other benefits, you have traveled "into and out of" the barrier.
I think it's more along the lines of "continuing in spite of".
More like: "many people are pushing their way through stress fractures and other major injuries to seek the same endorphin high".
Intuitive enough?
To a certain extent.
In defense of my interpretation, I'd ask you to look at the argument. The piece (or whatever it is) is arguing against the exercise watches. So "push through" is not the usual idiomatic dictionary meaning of "passing your way towards a beneficial state" but "enduring, pushing your way through something unpleasant". I think!
And what is the relationship between your interpretation and the last part ("to seek the same endorphin high that came from a much lighter exercise load only months earlier")?
Exactly that. The high which once came from a much lighter exercise load now has to come from a higher load: a load high enough to cause stress fractures and such, which people are pushing through.
Many people push through stress fractures and other major injuries to seek the same endorphin high that came from a much lighter exercise load only months earlier.
It's related to push through the crowd, meaning to force your way through, and get through, which is to "get through something to manage to deal with a difficult situation or to stay alive until it is over -- The refugees will need help to get through the winter; I just have to get through the first five minutes of my speech, and then I’ll be fine." But with "push through," the connotation isn't just of persistence -- there's also forcefulness and making a quantum leap. I haven't been able to find the new use of "push through" in any dictionaries. But here are some examples taken from articles about exercising and working out:
8 Ways to Override the Urge to Quit: When you want to give in, use these tips to push through that mental block and safely finish the most challenging workouts
Science-Backed Ways to Push Through Workout Fatigue: Don't let a workout wall stop you from achieving the body breakthrough you've been sweating for.
How Much Is Too Much?: When Not to Push Through Exercise: ... For every individual, there is an activity threshold beyond which pain and injury occur. If this is exceeded, downtime from exercise necessarily occurs, and the exercise training effect and other health benefits obviously stop accruing. Sports medicine physicians spend a substantial proportion of their daily practice dealing with individuals who have pushed too hard and exceeded this threshold.
23 Ways to Push Through a Tough Workout: ... There are ways to push through the invisible wall and squeeze every last drop out of a workout.
Your sentence means that people persist in exercising, despite an injury, hoping that they can get past the discomfort through sheer willpower.
Retrieve taxonomy terms from ContentItem in Orchard
I'm trying to retrieve the full list of taxonomy terms from a ContentItem by doing something like this:
var product = Services.ContentManager.Get<ContentItem>(33);
foreach (var term in product.TermsPart.Terms.Where(x => x.Field == "MyIndex"))
{
...
}
Can someone help me?
I read it is possible, using dynamic to do this:
dynamic product = Services.ContentManager.Get<ContentItem>(33);
foreach (var term in product.MyIndex.Terms)
{
...
}
but I cannot find the right syntax!
The best way to do it is using the ITaxonomyService (http://orchardtaxonomies.codeplex.com/SourceControl/latest#Services/ITaxonomyService.cs)
Inject it in the constructor of your class (controller, dependency, or whatever):
//Note that I didn't try this code
public SomeController(ITaxonomyService taxonomyService){
var product = Services.ContentManager.Get<ContentItem>(33);
var terms = taxonomyService.GetTermsForContentItem(product.Id); //Or just 33
}
How does C treat struct assignment
Suppose I have a struct like this:
typedef struct {
char *str;
int len;
} INS;
And an array of that struct.
INS *ins[N] = { &item, &item, ... };
When I try to access its elements, not through the pointer but as the struct itself, are all the fields copied to a temporary local place?
for (int i = 0; i < N; i++) {
INS in = *ins[i];
// internaly the above line would be like:
// in.str = ins[i]->str;
// in.len = ins[i]->len;
}
?
So as I add more fields to the structure, does the assignment become a more expensive operation?
Correct, in is a copy of *ins[i].
Never mind your memory consumption, but your code will most likely not be correct: The object in dies at the end of the loop body, and any changes you make to in have no lasting effect!
There is no indication in the question that the code will make any changes to in after creating it. Defining in may simply be a convenience for accessing members of the desired array element. That would be perfectly valid code.
Yes, structs have value semantics in C. So assigning one struct to another results in a member-wise copy. Keep in mind that the pointers will still point to the same objects.
The structure assignment behaves like a memcpy. Yes, it is more expensive for a larger structure. Paradoxically, the larger your structure becomes, the harder it is to measure the additional expense of adding another field.
The compiler may optimize away the copy of the structure and instead either access members directly from the array to supply the values needed in your C code that uses the copy or may copy just the individual members you use. A good compiler will do this.
Storing values via pointers can interfere with this optimization. For example, suppose your routine also has a pointer to int, p. When the compiler processes your code INS in = *ins[i], it could “think” something like this: “Copying ins[i] is expensive. Instead, I will just remember that in is a copy, and I will fetch members for it later, when they are used.” However, if your code contains *p = 3, this could change ins[i], unless the compiler is able to deduce that p does not point into ins[i]. (There is a way to help the compiler make that deduction, with the restrict keyword.)
In summary: Operations that look expensive on the surface might be implemented efficiently by a good compiler. Operations that look cheap might be expensive (writing to *p breaks a big optimization). Generally, you should write code that clearly expresses your algorithm and let the compiler optimize.
To expand on how the compiler might optimize this. Suppose you write:
for (int i = 0; i < N; i++) {
INS in = *ins[i];
...
}
where the code in “...” accesses in.str and in.len but not any of the other 237 members you add to the INS struct. Then the compiler is free to, in effect, transform this code into:
for (int i = 0; i < N; i++) {
char *str = ins[i]->str;
int len = ins[i]->len;
...
}
That is, even though you wrote a statement that, on the surface, copies all of an INS struct, the compiler is only required to copy the parts that are actually needed. (Actually, it is not even required to copy those parts. It is only required to produce a program that gets the same results as if it had followed the source code directly.)
How can I remove unused iCloud keys from entitlements using Ionic config.xml or an Azure DevOps task?
I'm getting this error when trying to archive my Xcode project on Azure DevOps:
error: exportArchive: exportOptionsPlist error for key
'iCloudContainerEnvironment': expected one of {Development,
Production}, but no value was provided
I'm not using iCloud support in my app, so I don't need them, but these keys are being auto-generated:
com.apple.developer.icloud-services *
com.apple.developer.icloud-container-identifiers
com.apple.developer.icloud-container-development-container-identifiers
The iCloud option is disabled on my app identifier.
How can I remove these iCloud keys from config.xml or via an Azure DevOps task?
Finally, I created a container on my "certificates and profiles" page, enabled the iCloud option, and it worked on the first try.
call static void
How do I call my function?
public static void dial(Activity call)
{
Intent intent = new Intent(Intent.ACTION_DIAL);
call.startActivity(intent);
}
Obviously not with:
dial(); /*Something should be within the brackets*/
Try to pass this as argument (provided you start from an Activity of course)
dial(this); works. Like Anakin said: IT'S WORKING!
You can't pass null. You have to send a context object.
Where is your function located? If it's inside an Activity or the such, simply pass "this" as the parameter.
If it's inside an BroadcastListener, or a Service, just change the parameter to Context and pass "this".
What if I call it from another class? Let's say the class with the function is named Action. I know I can't do this: Action.dial(this);
@Patrick Well it's pretty common to pass the Context around when you invoke methods e.g. calling a utility class. But since you always start executing your logic from an Activity or BroadcastListener or Service, and those are all Context subclasses, you always have something to work with.
You should try
ClassName.dial();
The reason is that static methods belong the class itself, not to an individual instance of it. The call to instance.dial() is legal, but discouraged.
ClassName.dial() doesn't work for me because of the "Activity call" parameter.
I've tried ClassName.dial(null) and ClassName.dial(call) as well.
@Patrick and what issues do you get? Normally, both your approaches should at least compile. The call with "null" will throw a NullPtrException
Both compile, but I get a "force close" when I run the application :D
I would look at the LogCat output for the exception thrown. If this does not clarify the issue, you might also add a logging statement to check the Activity, or add a breakpoint at your call and step through the method.
What exactly is the problem?
If you've got a class like
public class Test {
public void nonStaticFct() {
staticFct();
}
public static void staticFct() {
//do something
}
}
This works perfectly (though you should always call static functions via the class name, i.e. Test.staticFct()).
I guess the problem here is the missing argument.
[Edit] Obviously I am wrong; according to the Java Code Conventions you may use a class method by simply calling it, without using the class name (even if it seems odd, since I would expect an implicit this.staticFct() -- but possibly the Java compiler is smart enough).
How to access the native android scrollview listener from nativescript angular?
I am working on Android and iOS applications using NativeScript Angular. I created a customised scrollview by extending android.widget.ScrollView, and it has listeners inside it. I want to attach my customised scrollview's listeners and their overridden methods to the NativeScript Angular ScrollView. How can I set these customised Android listeners on the ScrollView, similar to setting iOS delegate methods?
My Customised scrollView java class is:
import android.content.Context;

public class AndScroll extends org.nativescript.widgets.VerticalScrollView {

    public interface OnEndScrollListener {
        public void onEndScroll(int x, int y);
    }

    private boolean mIsFling;
    private OnEndScrollListener mOnEndScrollListener;

    public AndScroll(Context context) {
        super(context);
    }

    @Override
    public void fling(int velocityY) {
        super.fling(velocityY);
        mIsFling = true;
    }

    @Override
    protected void onScrollChanged(int x, int y, int oldX, int oldY) {
        super.onScrollChanged(x, y, oldX, oldY);
        if (mIsFling) {
            if (Math.abs(y - oldY) < 2 || y >= getMeasuredHeight() || y == 0) {
                if (mOnEndScrollListener != null) {
                    mOnEndScrollListener.onEndScroll(x, y);
                }
                mIsFling = false;
            }
        }
    }

    public OnEndScrollListener getOnEndScrollListener() {
        return mOnEndScrollListener;
    }

    public void setOnEndScrollListener(OnEndScrollListener mOnEndScrollListener) {
        this.mOnEndScrollListener = mOnEndScrollListener;
    }
}
I want to access my customised class inside my FoodCourt scroll TS class:
public createNativeView() {
return this.orientation === "horizontal" ? new org.nativescript.widgets.HorizontalScrollView(this._context) : new org.nativescript.widgets.VerticalScrollView(this._context);
}
How can I access my AndScroll Java class in my FoodCourt class, the way they access
new org.nativescript.widgets.VerticalScrollView
You should use org.nativescript.widgets.VerticalScrollView if you like to preserve the features exposed by ScrollView component while modifying it.
If your aim is to just intercept OnScrollChangedListener then you just have to do it like it's done here in the original source code.
But I don't think you would actually need any of these, scroll event itself gives you the x & y points constantly if that's all you are looking for within OnScrollChangedListener.
Is the VerticalScrollView widget available in NativeScript Angular?
It's part of your NativeScript Framework. Angular is a wrapper around the Core Framework just to expose the features of Angular. So anything that's available in Core, should be available for framework.
Ok, thank you Manoj. And one more question I want to ask: is the VerticalScrollView widget only for Android?
Yes, it's Android specific native component, like UIScrollView on iOS.
Ok, thank you Manoj. I am going to try to implement the VerticalScrollView.
Hi Manoj, I extended the VerticalScrollView like "export class AndScroll extends VerticalScrollView" but it shows the error Cannot find name 'VerticalScrollView'.
The fully qualified class name is org.nativescript.widgets.VerticalScrollView. As I mentioned above, I don't actually see a requirement for extending it when the scroll event itself gives you the x & y axis points you need to work with.
Ok. The iOS scrollview has a scroll-end delegate method to detect when scrolling ends, but Android doesn't, so I found a workaround to detect the end of scrolling in Android Java code and converted that code into NativeScript. But I don't know how to achieve the iOS-like behaviour on Android.
I think with Android you may just listen to scroll event, set an interval - let's say 200 or 300ms, if the next event is not fired within that time frame you may assume the scroll ended.
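A minimal sketch of that timer idea (my own illustration, not a NativeScript API): reset a timer on every scroll event, and treat the scroll as ended once no further event arrives within the window. The timer functions are injectable here only so the logic is easy to test.

```javascript
// Returns a handler to call on every scroll event. When no further event
// arrives within `delay` ms, `onEnd` fires once.
function makeScrollEndDetector(onEnd, delay, timers) {
  timers = timers || { set: setTimeout, clear: clearTimeout };
  let pending = null;
  return function onScroll() {
    if (pending !== null) timers.clear(pending); // still scrolling: restart the window
    pending = timers.set(function () {
      pending = null;
      onEnd(); // quiet period elapsed: assume the scroll ended
    }, delay);
  };
}
```

In a NativeScript app you would call the returned handler from the ScrollView's scroll event and do your "scroll ended" work inside `onEnd`.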
The reason I want to extend org.nativescript.widgets.VerticalScrollView is that the "fling" override method depends on the native Android scrollview. I will update my question, please see.
I think you are pretty confused between the cross platform JavaScript component & native component. Whatever you use in XML are the cross platform JavaScript component that implements the native view within, for more info refer docs.
Ok Manoj. Was the implementation in the above-mentioned code wrong?
I also tried setInterval inside the scroll event, but it slowed the scroll speed.
Let us continue this discussion in chat.
Hi Manoj, I have implemented it like the original implementation. I have one doubt: how do I access my AndScroll Java class the way they access "org.nativescript.widgets.VerticalScrollView"?
I will update my question please check.
Any solution Manoj?
Taking a ScreenShot of SurfaceView with Camera Preview in it
I am trying to implement functionality that includes taking pictures while recording video. That's the reason I have concluded to use the screenshot approach on the SurfaceView.
However, when I try to take a screenshot of the SurfaceView, I always get a blank image.
Here is the code I am using for taking a snapshot:
View tempView = (View)MY_SURFACE_VIEW;
tempView.setDrawingCacheEnabled(true);
Bitmap tempBmp = Bitmap.createBitmap(tempView.getDrawingCache());
tempView.setDrawingCacheEnabled(false);
//Saving this Bitmap to a File....
In case you guys think this is a duplicate question, let me assure you that I tried the following solutions provided on SO for the same problem before asking this one.
https://stackoverflow.com/questions/24134964/issue-with-camera-picture-taken-snapshot-using-surfaceview-in-android
Facing issue to take a screenshot while recording a video
Take camera screenshot while recording - Like in Galaxy S3?
Taking screen shot of a SurfaceView in android
Get screenshot of surfaceView in Android (This is the correct answer, but Partially Answered. I have already asked @sajar to Explain the Answer)
Other Resources on Internet:
1. http://www.coderanch.com/t/622613/Android/Mobile/capture-screenshot-simple-animation-project
2. http://www.phonesdevelopers.com/1795894/
None of these has worked for me so far. I also know that we need to create some thread that interacts with the SurfaceHolder and gets the bitmap from it, but I am not sure how to implement that.
Any help is highly appreciated.
God sent me you today. Thanks for this question
Were you able to figure out a solution to this problem?
Here's another one: Take screenshot of SurfaceView.
SurfaceViews have a "surface" part and a "view" part; your code tries to capture the "view" part. The "surface" part is a separate layer, and there's no trivial "grab all pixels" method. The basic difficulty is that your app is on the "producer" side of the surface, rather than the "consumer" side, so reading pixels back out is problematic. Note that the underlying buffers are in whatever format is most convenient for the data producer, so for camera preview it'll be a YUV buffer.
The easiest and most efficient way to "capture" the surface pixels is to draw them twice, once for the screen and once for capture. If you do this with OpenGL ES, the YUV to RGB conversion will likely be done by a hardware module, which will be much faster than receiving camera frames in YUV buffers and doing your own conversion.
Grafika's "texture from camera" activity demonstrates manipulation of incoming video data with GLES. After rendering you can get the pixels with glReadPixels(). The performance of glReadPixels() can vary significantly between devices and different use cases. EglSurfaceBase#saveFrame() shows how to capture to a Bitmap and save as PNG.
More information about the Android graphics architecture, notably the producer-consumer nature of SurfaceView surfaces, can be found in this document.
Thanks for this brilliant response =) Grafika is a great Project for learning.
I am unable to find the method glReadPixels(). Is it in the same activity, 'Texture from Camera'?
glReadPixels() is called in EglSurfaceBase#saveFrame(). See https://github.com/google/grafika/blob/master/src/com/android/grafika/gles/EglSurfaceBase.java#L157
Do I need to change the SurfaceView component to a GLSurfaceView component and re-implement the camera preview using textures in GLSurfaceView?
GLSurfaceView is just a SurfaceView with some helper classes (which, in some cases, can be more of a bother than helpful). You can do anything in SurfaceView that you can in GLSurfaceView; it's mostly a matter of having to do your own EGL management, but the gles library in Grafika shows how to do that. If you want to show the camera preview and record it at the same time, you will need to send the preview to a SurfaceTexture and render it twice (note "continuous capture" may be more relevant than "show + capture camera" for SurfaceView).
@SalmanMuhammadAyub: I'm searching for a solution to the issue of capturing an image through SurfaceView content, & I can see your posts/comments at most of the questions I reach on StackOverflow on this. So, I wanted to know if you were able to achieve this? If yes, can you help me out as well?
public class AndroidSurfaceviewExample extends Activity implements SurfaceHolder.Callback {
    static Camera camera;
    SurfaceView surfaceView;
    SurfaceHolder surfaceHolder;
    static boolean boo;
    static Thread x;
    GLSurfaceView glSurfaceView;
    public static Bitmap mBitmap;
    public static Camera.Parameters param;
    public static Camera.Size mPreviewSize;
    public static byte[] byteArray;
    PictureCallback jpegCallback;
    private Bitmap inputBMP = null, bmp, bmp1;
    public static ImageView imgScreen;

    /** Called when the activity is first created. */
    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.camera);

        surfaceView = (SurfaceView) findViewById(R.id.surfaceView);
        surfaceHolder = surfaceView.getHolder();

        Button btnTakeScreen = (Button) findViewById(R.id.btnTakeScreen);
        imgScreen = (ImageView) findViewById(R.id.imgScreen);
        btnTakeScreen.setOnClickListener(new View.OnClickListener() {
            @Override
            public void onClick(View view) {
                Bitmap screen = Bitmap.createBitmap(getBitmap());
                imgScreen.setImageBitmap(screen);
            }
        });

        // Install a SurfaceHolder.Callback so we get notified when the
        // underlying surface is created and destroyed.
        surfaceHolder.addCallback(this);
        // deprecated setting, but required on Android versions prior to 3.0
        surfaceHolder.setType(SurfaceHolder.SURFACE_TYPE_PUSH_BUFFERS);

        jpegCallback = new PictureCallback() {
            public void onPictureTaken(byte[] data, Camera camera) {
                FileOutputStream outStream = null;
                try {
                    outStream = new FileOutputStream(String.format("/sdcard/%d.jpg", System.currentTimeMillis()));
                    outStream.write(data);
                    outStream.close();
                    Log.d("Log", "onPictureTaken - wrote bytes: " + data.length);
                } catch (FileNotFoundException e) {
                    e.printStackTrace();
                } catch (IOException e) {
                    e.printStackTrace();
                } finally {
                }
                Toast.makeText(getApplicationContext(), "Picture Saved", Toast.LENGTH_SHORT).show();
                refreshCamera();
            }
        };
    }

    public void refreshCamera() {
        if (surfaceHolder.getSurface() == null) {
            // preview surface does not exist
            return;
        }
        // stop preview before making changes
        try {
            camera.stopPreview();
        } catch (Exception e) {
            // ignore: tried to stop a non-existent preview
        }
        // set preview size and make any resize, rotate or
        // reformatting changes here
        // start preview with new settings
        try {
            camera.setPreviewDisplay(surfaceHolder);
            camera.startPreview();
        } catch (Exception e) {
        }
    }

    public void surfaceChanged(SurfaceHolder holder, int format, int w, int h) {
        // Now that the size is known, set up the camera parameters and begin
        // the preview.
        refreshCamera();
    }

    public void surfaceCreated(SurfaceHolder holder) {
        if (camera == null) {
            try {
                camera = Camera.open();
            } catch (RuntimeException ignored) {
            }
        }
        try {
            if (camera != null) {
                WindowManager winManager = (WindowManager) getApplicationContext().getSystemService(Context.WINDOW_SERVICE);
                camera.setPreviewDisplay(surfaceHolder);
            }
        } catch (Exception e) {
            if (camera != null)
                camera.release();
            camera = null;
        }
        if (camera == null) {
            return;
        } else {
            camera.setPreviewCallback(new Camera.PreviewCallback() {
                @Override
                public void onPreviewFrame(byte[] bytes, Camera camera) {
                    if (param == null) {
                        return;
                    }
                    byteArray = bytes;
                }
            });
        }

        param = camera.getParameters();
        mPreviewSize = param.getSupportedPreviewSizes().get(0);
        param.setColorEffect(Camera.Parameters.EFFECT_NONE);
        // set antibanding to none
        if (param.getAntibanding() != null) {
            param.setAntibanding(Camera.Parameters.ANTIBANDING_OFF);
        }
        // set white balance
        if (param.getWhiteBalance() != null) {
            param.setWhiteBalance(Camera.Parameters.WHITE_BALANCE_CLOUDY_DAYLIGHT);
        }
        // set flash
        if (param.getFlashMode() != null) {
            param.setFlashMode(Camera.Parameters.FLASH_MODE_OFF);
        }
        // set zoom
        if (param.isZoomSupported()) {
            param.setZoom(0);
        }
        // set focus mode
        param.setFocusMode(Camera.Parameters.FOCUS_MODE_INFINITY);
        // modify parameter
        camera.setParameters(param);
        try {
            // The Surface has been created, now tell the camera where to draw
            // the preview.
            camera.setPreviewDisplay(surfaceHolder);
            camera.startPreview();
        } catch (Exception e) {
            // check for exceptions
            System.err.println(e);
            return;
        }
    }

    public void surfaceDestroyed(SurfaceHolder holder) {
        // stop preview and release camera
        camera.stopPreview();
        camera.release();
        camera = null;
    }

    public Bitmap getBitmap() {
        try {
            if (param == null)
                return null;
            if (mPreviewSize == null)
                return null;
            int format = param.getPreviewFormat();
            YuvImage yuvImage = new YuvImage(byteArray, format, mPreviewSize.width, mPreviewSize.height, null);
            ByteArrayOutputStream byteArrayOutputStream = new ByteArrayOutputStream();
            Log.i("myLog", "array: " + byteArray.toString());
            Rect rect = new Rect(0, 0, mPreviewSize.width, mPreviewSize.height);
            yuvImage.compressToJpeg(rect, 75, byteArrayOutputStream);
            BitmapFactory.Options options = new BitmapFactory.Options();
            options.inPurgeable = true;
            options.inInputShareable = true;
            mBitmap = BitmapFactory.decodeByteArray(byteArrayOutputStream.toByteArray(), 0, byteArrayOutputStream.size(), options);
            byteArrayOutputStream.flush();
            byteArrayOutputStream.close();
        } catch (IOException ioe) {
            ioe.printStackTrace();
        }
        return mBitmap;
    }
}
Hello, please provide some description to your answer. Just posting code is not very helpful.
Iterators and iterables
I was reading a book about Python (Beginning Python: From Novice to Professional, 3rd edition) and noticed the following sentence, which has confused me, considering that I am a beginner:
"in many cases you would put the iter method in another object which you would use in the for loop, that would then return your iterator" page 179
I have read every source that I could find and I am still confused, because in all of them we put __iter__ in one class, make objects from it, and then those objects get used in the for loop; we never put it in another class.
I would be grateful if you correct me if I am wrong, and give me a simple example of how this could be done.
The __iter__ method should return an object with a __next__ method, or a generator. The advantage of using a separate iterator class is that it allows you to iterate over an object multiple times without consuming it. For example, iter([]) returns a list_iterator, not a list. You can create multiple independent iterators from the same list.
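A short illustration of that split (my own example, not from the book): the container's __iter__ hands back a fresh, separate iterator object each time, so iterating does not consume the container.

```python
class Countdown:
    """Iterable: each for-loop gets its own fresh iterator."""
    def __init__(self, start):
        self.start = start

    def __iter__(self):
        return CountdownIterator(self.start)


class CountdownIterator:
    """Iterator: holds the per-iteration state."""
    def __init__(self, current):
        self.current = current

    def __iter__(self):
        # Iterators are themselves iterable, returning self by convention.
        return self

    def __next__(self):
        if self.current <= 0:
            raise StopIteration
        value = self.current
        self.current -= 1
        return value


c = Countdown(3)
print(list(c))  # [3, 2, 1]
print(list(c))  # [3, 2, 1] again: the container was not consumed
```

Here Countdown plays the role of the "another object" the book mentions: it owns __iter__, while the actual stepping logic lives in the separate CountdownIterator class.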
guice servlet is not a singleton
something awful is happening
I have 2 servlets in my project. One of them simply has a post method and is responsible for handling file uploads. I recently added the other one; it has a get and a post method.
Here is the 2nd servlet's code:
@Singleton
@WebServlet("/Medical_Web")
public class XXXDetailsServlet extends HttpServlet {
    private static final long serialVersionUID = 1L;

    @Inject
    private Provider<XXXPersistenceManager> persistenceManager;
    @Inject
    private Provider<XXXChain> chainProvider;

    @Override
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        System.out.println("servlet address = " + this);
        final String xxx = request.getParameter("xxx");
        String json = "";
        try {
            final XXXBean xxxBean = persistenceManager.get().find(xxx);
            json = new GsonBuilder().create().toJson(xxxBean);
        } catch (Exception ex) {
            ex.printStackTrace();
        }
        request.setAttribute("json", json.trim());
        getServletContext().getRequestDispatcher("/XXX.jsp").forward(request, response);
    }

    @Override
    protected void doPost(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        System.out.println("servlet address = " + this);
        final String xxx = request.getParameter("xxx");
        try {
            final XXXChain chain = chainProvider.get();
            chain.getContext().setAttribute(XXX_TYPE, XXXType.DELETE);
            final XXXBean xxxBean = persistenceManager.get().find(xxx);
            final List<XXXBean> xxxList = new ArrayList<XXXBean>();
            xxxList.add(xxxBean);
            chain.process(xxxList);
        } catch (Exception ex) {
            ex.printStackTrace();
        }
    }
}
Now, here's what's getting me.
For some reason, even though this is marked as @Singleton, the servlet addresses are definitely coming back as different. I noticed this initially when I hit a null pointer in my post method: whenever I call the get method, the instance of the servlet I get back has all the fields populated. Whenever I call the post method, the instance I get back (it is a different instance) does not have the fields populated (just null; it seems they didn't get injected).
I'm really struggling to figure out what's going on here. It seems as if an instance of this servlet was created outside of the Guice context. If it matters, we are using JBoss 7.1.
(Sorry about all the XXXs; I don't know if I can post actual names.)
Here's the rest of my Guice setup:
public class XXXServletContextListener extends GuiceServletContextListener {
    @Override
    protected Injector getInjector() {
        return Guice.createInjector(new XXXUploadModule(), new XXXServletModule());
    }
}
and here's the servlet module
public class XXXServletModule extends ServletModule {
    @Override
    protected void configureServlets() {
        serve("/xxx1").with(XXXDetailsServlet.class); // this one fails on post
        serve("/xxx2").with(XXXUploadServlet.class);  // this one works
    }
}
I am not familiar with how Guice servlet integration works, but having the @WebServlet("/Medical_Web") annotation means that your web container will also instantiate that servlet to serve requests. A pool of them actually, it doesn't have a concept of singletons.
My guess is you just have to remove the annotation and let ServletModule control the servlet life-cycle.
+1, XXXDetailsServlet is actually a singleton in Guice scope, but is instantiated here directly by the container.
How to customize make:auth in laravel 5.4?
The make:auth generated register page accepts name, email, and password.
I would like to add additional fields: dateofbirth and address.
How can I add more fields?
Override the default register() method in RegisterController. If you don't see any register() method, create a new one and write your code to register a new user.
You should
add the fields you need in the database migration file,
edit the blade view files (directory resources/views/auth)
edit the RegisterController create() function and maybe the validator() too. (directory app/Http/Controllers/Auth)
Data binding in b-card title
How can I bind data in the title of a b-card in bootstrap-vue?
<b-card header="{{month}}} test runs" header-tag="header">
Currently this is what I have and I got an error
What error? Have you tried :header="month"?
I want to combine the month data and "test runs" text
Try :header="month + ' test runs'", then (it's just a JavaScript expression). Or make a computed property for the text.
I got an undefined
Please give more info. You're saying stuff like "I got an error" and "I got an undefined" but the error messages you're getting should include more info than that. What variable is it saying is undefined?
Use title instead of header if you want a bigger sized text.
Make a computed property for this:
computed: {
  headerText() {
    return this.month + ' test runs';
  }
}
Now use this as a value of header
<b-card :header="headerText" header-tag="header">
I got an error Cannot use 'in' operator to search for 'headerText' in undefined
@Dranier are you sure you have added the computed property the correct way? Try hardcoding the return statement.
Make sure you fix the retutn typo, too. It should say return.
Selecting single attribute of elements from a list object based on another Hashset's element
I have a list object type of some class,
class person
{
    public string id { get; set; }
    public string regid { get; set; }
    public string name { get; set; }
}

List<person> pr = new List<person>();
pr.Add(new person { id = "2", regid = "2222", name = "rezoan" });
pr.Add(new person { id = "5", regid = "5555", name = "marman" });
pr.Add(new person { id = "3", regid = "3333", name = "prithibi" });
and a HashSet of type string,
HashSet<string> inconsistantIDs = new HashSet<string>();
inconsistantIDs.Add("5");
Now I want to get only the regids from the pr list whose ids are in the inconsistantIDs HashSet, and store them in another HashSet of type string.
I have tried, but can only get all the persons that have the ids in the inconsistantIDs list (this is only an example).
HashSet<person> persons = new HashSet<person>(
    pr.Select(p => p)
      .Where(p => inconsistantIDs.Contains(p.id)));
Could anyone help me out there?
What is your desired result? You say you want "only all the regids", but that's a string property, why is persons a HashSet<person> then?
My desired result is to take only the regids from the pr list based on the ids that are in inconsistantIDs. E.g. here I want to get regid="5555" from the pr list based on id 5 in inconsistantIDs @TimSchmelter
This is all I have done yet, but it's not my desired output; the persons HashSet is just an example @TimSchmelter. My desired output is to get only the regids, not the entire person.
var regIDs = from p in pr
             join id in inconsistantIDs on p.id equals id
             select p.regid;
HashSet<string> matchingRegIDs = new HashSet<string>(regIDs); // contains: "5555"
I am not sure what your desired output is, but I will try anyway (filter on the person's id, then project the regid):
HashSet<string> regIds = new HashSet<string>(
    pr.Where(p => inconsistantIDs.Contains(p.id))
      .Select(p => p.regid));
How to get push notification on/off status programmatically to send it to my webserver?
I have implemented push notifications in my app successfully using GCM.
I am using my .NET web server to send push messages to my app via the GCM server.
I have stored the GCM registration ID in a database on my server.
Now I want to check whether the user has manually disabled notifications for my app (through the device settings), so that I can tell my web server not to send push messages to that device by removing the GCM ID from the database (unregistering).
I know push messages are delivered to my app and ignored if notifications are off, but my problem is that my app is going to be used by more than 100,000 users/devices, and if only a few of them have notifications on, then there is unnecessary traffic from sending push messages to over 100,000 devices/users.
So is there any way to know the on/off status of notifications programmatically, so that I can send a request to deregister the app on my web server?
I have googled and found that there is no way to find it out, so does anyone have another solution for my scenario?
Please help me with this, as I have been stuck on it for the last couple of days.
Thanks in advance,
Ketan Bhangale
Store the user's status in SharedPreferences and check the status before displaying the notification.
You can store the same status on your server and filter the list of recipients by that status.
The ability to disable an app's notifications is only available on devices running Android 4.1+ Jelly Bean. Also, there seems to be no documented way to read the value of the notification flag (checkbox) in the default application settings.
Simply put, you need to maintain that flag manually. First create a web service to update the "notification setting". The app will send the value for notifications, say "true" or "false", as chosen by the user from your app's settings menu.
Now on the server side, store that value as a notification_flag for that user in your DB.
Before sending a notification to the user on your desired event (as per the app requirement for when to send a notification), first check the value of the flag before firing the send-notification message to that user's device ID.
Also, finally, to be more reliable at the app end too, check the value of that flag before showing the notification in the app when the message arrives.
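A tiny sketch of that server-side check (illustrative Python rather than the asker's .NET stack; the field names gcm_id and notification_flag are hypothetical): filter the stored registrations down to users whose flag is on before sending anything.

```python
def recipients(registrations):
    """Return GCM IDs only for users who left notifications enabled.

    `registrations` is a list of dicts as they might come out of the DB;
    a missing flag is treated as "off" to be safe.
    """
    return [r["gcm_id"] for r in registrations if r.get("notification_flag")]


regs = [
    {"gcm_id": "abc", "notification_flag": True},
    {"gcm_id": "def", "notification_flag": False},
    {"gcm_id": "ghi"},  # flag never reported: treat as off
]
print(recipients(regs))  # ['abc']
```

With 100,000 registrations this trims the send list before any traffic is generated, which is exactly the goal described in the question.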
Many thanks for the quick reply. As per your solution, I need to keep a notification setting in my app, and the functionality will work based on that. But is there any way to know the status of the notification toggle the user sets outside the app, i.e. the check/uncheck of the notification box in the device settings?
@Ketan The ability to disable an app’s notifications is only for devices currently running Android 4.1+ Jelly Bean. Also there seems to be no documented way to know the value for that.
Why is the Chinese Room argument such a big deal?
I've been re-reading the Wikipedia article on the Chinese Room argument and I'm... actually quite unimpressed by it. It seems to me to be largely a semantic issue involving the conflation of various meanings of the word "understand". Of course, since people have been arguing about it for 25 years, I doubt very much that I'm right. However... the argument can be thought of as consisting of several layers, each of which I can explain away (to myself, at least).
There is the assumption that being able to understand (interpret a sentence in) a language is a prerequisite to speaking in it.
Let's say that I don't speak a word of Chinese, but I have access to a big dictionary and a grammar table. I could work out what each sentence means, answer it, and then translate that answer back into Chinese, all without speaking Chinese myself. Therefore, being able to interpret (parse) a language is not a prerequisite to speaking it.
(Of course, by the theory of extended cognition I can interpret the language, but we can all agree that the books and lookup tables are simply a source of information and not an algorithm; I'm still the one using them.)
Nevertheless, this task can be removed by a dumb natural language parser and a dictionary, converting Chinese to the set of concepts and relationships encoded in it and vice versa. There is no understanding involved at this stage.
There is the assumption that being able to understand (identify and maintain a train of thought about concepts in) a language is a prerequisite to speaking in it.
We've already optimised away the language, to a set of concepts and relationships between concepts. Now all we need is another lookup table: a sort of verbose dictionary that maps concepts to other concepts and relationships between them. For example, one entry for "computer" might be "performs calculations" and another might be "allows people to play games". An entry for "person" might be "has opinions", and another might be "has possessions". Then an algorithm (yes, I'm introducing one now!) would complete a simple optimisation problem to find a relevant set of concepts and relationships between them, and turn "I like playing computer games" into "What is your favourite game to play on a computer?" or, if it had some entries on "computer games", "Which console do you own?".
The only "understanding" here, apart from the dumb optimisation algorithm, is the knowledge bank. This could conceivably be parsed from Wikipedia, but for a good result it would probably be at least somewhat hand-crafted. Following this would fall down, because this process wouldn't be able to talk about itself.
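To make the kind of machine I'm describing concrete, here is a toy sketch (my own, purely illustrative): the "knowledge bank" is a plain lookup table, and the "optimisation" is nothing more than picking a stored association.

```python
# A deliberately dumb concept-lookup responder: no parsing, no cognition,
# just table lookups over hand-crafted entries.
KNOWLEDGE = {
    "computer": ["performs calculations", "allows people to play games"],
    "person": ["has opinions", "has possessions"],
}


def respond(concepts):
    """Return a canned association for the first recognised concept."""
    for concept in concepts:
        if concept in KNOWLEDGE:
            return "A {} {}.".format(concept, KNOWLEDGE[concept][0])
    return "Tell me more."


print(respond(["person", "computer"]))  # A person has opinions.
```

Nothing here "understands" anything; it maps inputs to outputs by lookup alone, which is the point of the layered argument above.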
There is the assumption that being able to understand ("know" how information in it affects one's self) a language is a prerequisite to speaking in it.
A set of "opinions" and such associated with the concept "self" could be implemented into the knowledge bank. All meta-cognition could be emulated by ensuring that the knowledge bank had information about cognition in it. However, this program would still just be a mapping from arbitrary inputs to outputs; even if the knowledge bank was mutable (so that it could retain the current topics from sentence to sentence and learn new information) it would still, for example, not react when a sentence is repeated to it verbatim 49 times.
There is the assumption that being able to have effective meta-cognition is a prerequisite to speaking in a language.
Except... there's not. The program described would probably pass the Turing Test. It certainly fulfils the criteria of speaking Chinese. And yet it clearly doesn't think; it's a glorified search engine. It'd probably be able to solve maths problems, would be ignorant of algebra unless somebody taught it to it (in which case, with sufficient teaching, it'd be able to carry out algebraic formulae; Haskell's type system can do this without ever touching a numerical primitive!), and would probably be Turing-complete, and yet wouldn't think. And that's OK.
So why is the Chinese Room argument such a big deal? What have I misinterpreted? This program understands the Chinese language as much as a Python interpreter understands Python, but there is no conscious being to "understand". I don't see the philosophical problem with that.
If you check the Wikipedia article on the argument that you linked, in the History section, you'll note the following statement:
Most of the discussion consists of attempts to refute it.
I think this directly answers the question,
Why is the Chinese Room argument such a big deal?
The Chinese Room Argument was a relatively early attempt in the intermingling of Philosophy and Computer Science to make as concrete an argument on the definition of a "mind", "understanding", and whether or not these can be created with a program. That is, it conjectures that a "mind" cannot.
It's famous in part because the concreteness of this argument means that a counter-argument can be made in kind. That is, the argument spawned a large quantity of refutations, attacking the argument from a large number of vectors, outlined later in the argument.
Put another way,
The Chinese Room Argument is such a big deal because of how many people in the field have found value in refuting it.
Incidentally, your post provides more evidence to this point.
The Chinese room argument is such a big deal because it takes the concept of the Turing machine and Turing's conception of the electronic digital computer (so-called) as a practical version of the universal Turing machine, and shows that the resulting concept of machine computation does not allow the creation of an internal semantics (knowledge). Explaining why this is the case is a bit of a fraught task, though.
Problem in understanding the proof of "every nonempty set of natural numbers has a smallest member".
I was reading Real Analysis by Royden and Fitzpatrick; there the authors introduced the theorem
Theorem 1:
Every non-empty set of natural numbers has a smallest member.
Then
Proof:
Let $E$ be a non-empty set of natural numbers. Since the set $\{x\in \mathbb R \mid x\geq 1\}$ is inductive, the natural numbers are bounded below by $1$. Therefore $E$ is bounded below by $1$. As a consequence of the Completeness Axiom, $E$ has an infimum; define $c= \inf E$. Since $c+1$ is not a lower bound for $E$, there is an $m\in E$ for which $m\lt c+1$. We claim $m$ is the smallest member of $E$. Otherwise, there is an $n\in E$ for which $n\lt m$. Since $n\in E$, $c\leq n$. Thus $c\leq n\lt m \lt c+1$ and therefore $m-n\lt 1$. Therefore the natural number $m$ belongs to the interval $(n,n+1)$. An induction argument shows that for every natural number $n$, $(n, n+1)\cap \mathbb N= \emptyset$. This contradiction confirms that $m$ is the smallest element.
I'm having some problems in conceiving the proof:
$\bullet$ How is the fact that $E$ is bounded below by $1$ related to the fact that the set $\{x\in\mathbb R|~x\geq 1\}$ is inductive? I'm not getting that.
$\bullet$ What is the necessity in the proof to show that $E$ is bounded below by $1\;?$ I'm not seeing how it helped the later part of the proof.
$\bullet$ The authors didn't show how $(n, n+1)\cap \mathbb N= \emptyset\;.$ How could I show that?
As to your three points, for the first, you should return to the definitions: What is an inductive set? How are the natural numbers defined? Second point: On what grounds do the authors conclude that the $\inf E$ exists? Third: the authors tell you how to do it! (Induction...)
You're not alone to find this proof a little weak
I suppose that this book defines an "inductive set" to be a set $S$ of real numbers such that $1 \in S$ and if $k \in S$ then $k + 1$ in $S$.
(There is more than one definition of an "inductive set"; see the article in MathWorld.
But this definition seems to be the one that makes sense in this context.)
The fact that the set $\{x\in \mathbb R \mid x\geq 1\}$
is inductive then implies that the natural numbers are a subset of
$\{x\in \mathbb R \mid x\geq 1\}$. Presumably, the book has already shown
that $\{x\in \mathbb R \mid x\geq 1\}$ is bounded below by $1$,
so the natural numbers also are bounded below by $1$.
Regarding why it is important that $E$ is bounded below by $1$,
consider the integers, which are a lot like the natural numbers except
that you can count down indefinitely (not bounded below by $1$) as well as count up. It is easy to construct a subset of the integers with no least element.
Regarding the specific steps of the book's proof, if we ignore this fact about $E$, then we lose the reason by which the book invokes the Completeness Axiom, so we lose the definition of $c$ and we lose every fact that uses $c$.
The authors presumably mean for you to prove that $(n,n+1)\cap\mathbb N = \emptyset$ by induction on $n$ with a base case of $n=1$
(for which you must prove that $(1,2)\cap\mathbb N = \emptyset$)
and an inductive case in which you prove that if
$(n,n+1)\cap\mathbb N = \emptyset$
then $(n+1,n+2)\cap\mathbb N = \emptyset$.
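For the base case, here is one possible sketch (assuming, as above, that $\mathbb N$ is defined as the smallest inductive set; the book may intend a slightly different argument):

```latex
\text{Let } S = \{1\} \cup \{x \in \mathbb{R} \mid x \ge 2\}.
\text{ Then } 1 \in S, \text{ and if } k \in S \text{ then } k = 1
\text{ or } k \ge 2, \text{ so in either case } k + 1 \ge 2
\text{ and } k + 1 \in S; \text{ hence } S \text{ is inductive and }
\mathbb{N} \subseteq S.
\text{ Since } S \cap (1,2) = \emptyset, \text{ it follows that }
(1,2) \cap \mathbb{N} = \emptyset.
```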
Logicians have the notion of begging the question, from the archaic meaning of beg, to assume something you have no right to use or possess. If you say maple syrup is good for you, because it comes from the sap of a tree, you are begging the question: Socrates drank hemlock and it wasn't good for him. I think the proof is correct, but begs the question. The authors assume that somehow you have been able to come upon the real numbers, without ever using that every non-empty set of natural numbers has a least member. And amazingly you are asked to accept that every nonempty set of real numbers with a lower bound has an infimum, while at the same time holding in abeyance until proven, that every nonempty set of natural numbers has a least member. In short, I believe the axiom of completeness is really a theorem, but the theorem you are considering is not really a theorem; it's an axiom.
This bothered me, too. It does seem rather odd to invoke properties of the real numbers in order to prove a simple property of the natural numbers.
glad that you raised this point +1. This is the main problem with the axiomatic approach of real numbers. You don't develop a theory of natural numbers from an existing setup of real numbers because it looks so artificial and pointless. It is better to develop complex stuff on the basis of simple stuff and not the other way round.
| common-pile/stackexchange_filtered |
Checkbox's isChecked getting NullPointerException
I have an activity with 6 fragments (swipe disabled). Each fragment has 2 buttons (next and previous) which move one fragment forward and back perfectly. I have 2 checkboxes which I need to check (checked or not) before moving to the next activity. But I get a null pointer because the checkboxes are initialized in the onCreateView method, yet the fragment is loaded already due to the ViewPager. How can I check whether the checkboxes are checked or not?
@Override
public View onCreateView(LayoutInflater inflater, ViewGroup container, Bundle savedInstanceState) {
v = inflater.inflate(R.layout.fragment_step6, container, false);
cb1 = (CheckBox)v.findViewById(R.id.cb1);
next = (Button)v.findViewById(R.id.next);
next.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View v) {
if(cb1.isChecked())
return true;
}
});
}
Can you post some code of what you have implemented!
Problem is that view pager loads the fragment before the user reaches. So now when user clicks on next button cb1 is giving NPE.
ok, so what is null?
This is the error I am getting: Attempt to invoke virtual method 'boolean android.widget.CheckBox.isChecked()' on a null object reference
so cb1 is null, so v.findViewById(R.id.cb1) returns null, so finally @cstew is right .... or you didn't provide your real code in the question...
But I have double checked the checkbox in fragment6.xml. It's there with the same id.
fragment_step6 <> fragment6
Yes, fragment_step6. Can't it be a case that viewpager already loads fragment 6 when user is at fragment 5 or 4 and view is initialized already before a user actually views it?
v = inflater.inflate(...) then v.findViewById(..) => you see v there ... so ViewPager is irrelevant ... and the only possibility(with given code!) is that R.layout.fragment_step6 doesn't contain view with id R.id.cb1 ... end of story!
ViewPager will preload fragments for you and it's not going to remove old fragments from memory immediately. I do not think the problem is that the view is gone when the user clicks on the next button.
Check your layout file and make sure you have a checkbox in res/layout/fragment_step6.xml with the ID cb1.
it will do as soon as you started to slide left/right. best scenario is to use setOffscreenPageLimit()
the ViewPager is irrelevant and the second part of this answer is the key
Yes, I have defined the checkbox with this id in xml.
pager.setOffscreenPageLimit(6);//6 = to the number you included in your question.
this will tell the ViewPager to retain your fragments while scrolling pages so you don't lose your state.
I have disabled swiping. The two buttons are used to navigate to the next and previous fragment.
When you click next or previous you are actually sliding. So yeah, you are scrolling to the next/previous.
| common-pile/stackexchange_filtered |
Rstudio will not knit
Rstudio will not knit. I have been using it for a few weeks in a course.
When I try to knit it executes until it comes to some code and it stops. The code is
ggplot(data = gss, aes(x = year, fill = degree)) +
geom_bar()
The message is
could not find function "ggplot"
This happens with other functions as well. With the code commented out it will run and produce an HTML file.
I have reinstalled R and Rstudio twice. I have had difficulty with my AV software (Bitdefender) blocking the install of some of the apps. The software is a new version updated a couple of weeks ago.
I have loaded tidyverse and it (ggplot) executes in the markdown file.
Please load the package in the document (R Markdown? You didn't say) before you use any functions in it, e.g., library(ggplot2).
| common-pile/stackexchange_filtered |
Two-point Green for Free Dirac Fields
I am trying to compute the $2$-point Green function $\tau_2(x,y)$ for free Dirac fields. The corresponding formula for $\tau_2(x,y)$ is given by
$$\tau_2(x,y) = -\frac{\delta^2}{\delta\eta_x \delta \bar{\eta}_y} \, Z_0[\eta_w, \bar{\eta}_z],$$
where $Z_0$ is the generating functional for free Dirac fields given by
$$
Z_0[\eta_w, \bar{\eta}_z] = \exp\left(-i\int \bar{\eta}_z \, S(z-w) \, \eta_w \, dz \, dw\right).
$$
Here, $\eta$ and $\bar{\eta}$ are source terms. Also, $S^{-1} = i\gamma\cdot\partial - m$ is the operator appearing in the quadratic term of the Lagrangian.
Notation
$\eta_x \equiv \eta(x)$ etc.
At first, I determine $\frac{\delta Z_0}{\delta\bar{\eta}_y}$ as
$$
\frac{\delta Z_0}{\delta\bar{\eta}_y} = -iZ_0 \int S(y-w) \eta_w \, dw. \label{a}\tag{1}
$$
Then I try to compute
\begin{align}
-\frac{\delta^2 Z_0}{\delta\eta_x\delta\bar{\eta}_y} &= -\frac{\delta}{\delta\eta_x} \left[-iZ_0 \int S(y-w) \eta_w \, dw \right] \\
&= i\frac{\delta}{\delta\eta_x} \left[Z_0[\eta_w, \bar{\eta}_z] \int S(y-w) \eta_w \, dw \right] \label{b}\tag{2}
\end{align}
Question
How to proceed from this step (eq. (\ref{b}))? I have to take the functional derivative of the product of two Grassmann functionals. What is the relevant formula for it? If you also mention any reference, that would be great.
In eq. (\ref{a}) I have written $Z_0$ before the functional derivative part. Should I write it after the functional derivative term? In other words, what is the chain rule for Grassmann functionals?
In the Appendix, I have mentioned the formula to take the functional derivative of a product of Grassmann functionals. Let's say, I have a product of some Grassmann functionals and an ordinary function $f(x) \in \mathbb{C} \forall x$. Then how to evaluate this functional derivative? That is,
$$
\frac{\delta}{\delta\psi(x)} [\psi(y_1) f(y_2) \psi(y_3)] = ?
$$
where $\psi$ is a Grassmann field.
Appendix
The formula used to compute the eq. (\ref{a}) is given below.
$$
\frac{\delta}{\delta\psi(x)} [\psi(y_1) \cdots \psi(y_n)] = \delta(y_1-x)\, \psi(y_2) \cdots \psi(y_n) + (-1)\, \delta(y_2-x)\, \psi(y_1)\, \psi(y_3) \cdots \psi(y_n) + \cdots + (-1)^{n-1}\, \delta(y_n-x)\, \psi(y_1) \cdots \psi(y_{n-1}).
$$
The answer is obviously $S$. Are you sure you’re not overthinking this?
First of all, the functional integral $Z_0$ is a real number, since it is defined as a vacuum expectation value:
$$Z_0[\zeta,\bar{\zeta}]:=\langle0|T\,e^{i\langle\bar{\zeta}_x\psi_x+\bar{\psi}_x\zeta_x\rangle}|0\rangle$$
where $T$ is the time ordering operator and:
$$\langle\bar{\zeta}_x\psi_x+\bar{\psi}_x\zeta_x\rangle:=\int d^4x(\bar{\zeta}(x)\psi(x)+\bar{\psi}(x)\zeta(x))$$
Now, after some analytical steps, it is found that this object must satisfy the Symanzik equation:
$$\left[(i\gamma^\mu\partial_\mu-m)\frac{\delta}{i\delta\bar{\zeta}_z}-\zeta_z\right]Z_0[\zeta,\bar{\zeta}]=0$$
It is found that a solution is readily obtained by putting (as an ansatz):
$$Z_0[\zeta,\bar{\zeta}]=e^{-\int d^4x\int d^4y\,(\bar{\zeta}(x)S_F(x-y)\zeta(y))}=e^{-\langle\bar{\zeta}_xS^F_{xy}\zeta_y\rangle}$$
But, in general, a solution of a linear differential equation can be searched for by means of a Fourier transform. In this way we define the functional Fourier transform of $Z_0$ as:
$$Z_0[\zeta,\bar{\zeta}]=\int\mathscr{D}\psi\int\mathscr{D}\bar{\psi}\,\tilde{Z}[\psi,\bar{\psi}]e^{i\int d^4x(\bar{\zeta}(x)\psi(x)+\bar{\psi}(x)\zeta(x))}=\int\mathscr{D}\psi\int\mathscr{D}\bar{\psi}\,\tilde{Z}[\psi,\bar{\psi}]e^{i\langle\bar{\zeta}_x\psi_x+\bar{\psi}_x\zeta_x\rangle}$$
By putting this functional Fourier transform in the Symanzik equation, we can identify:
$$\tilde{Z}[\psi,\bar{\psi}]:=\mathcal{N}e^{i\int d^4x\,\bar{\psi}(i\gamma^\mu\partial_\mu-m)\psi}=\mathcal{N}e^{iS_D[\psi,\bar{\psi}]}$$
where $\mathcal{N}$ is a constant, and obtain:
$$Z_0[\zeta,\bar{\zeta}]=\mathcal{N}\int\mathscr{D}\psi\int\mathscr{D}\bar{\psi}\,e^{iS_D+i\langle\bar{\zeta}_x\psi_x+\bar{\psi}_x\zeta_x\rangle}$$
Now (in analogy with the bosonic case), the 2n-point Green Function can be written as:
$$S^{(2n)}_0(x_1,...,x_n;y_1,...,y_n)=\langle 0|\psi(x_1)\cdots\psi(x_n)\bar{\psi}(y_1)\cdots\bar{\psi}(y_n)|0\rangle=\frac{\delta^{(2n)}Z_0[\zeta,\bar{\zeta}]}{\delta\bar{\zeta}(x_1)\cdots\delta\bar{\zeta}(x_n)\delta\zeta(y_1)\cdots\delta\zeta(y_n)}\bigg|_{\zeta=0,\bar{\zeta}=0}$$
And if we want to evaluate the 2-point Green function, we first evaluate:
$$\frac{\delta Z_0}{\delta\zeta(x_2)}=\frac{\delta}{\delta\zeta(x_2)}e^{-\int d^4x\int d^4y\, (\bar{\zeta}(x)S_F(x-y)\zeta(y))}=-\int d^4x\, (-1)\bar{\zeta}(x)S_F(x-x_2)\cdot Z_0[\zeta,\bar{\zeta}]$$
where the $(-1)$ is due to the fact that the Grassmann functional derivative has to jump over $\bar{\zeta}$, which is a Grassmann-valued function. Then:
$$\frac{\delta}{\delta\bar{\zeta}(x_1)}\left(\frac{\delta Z_0}{\delta\zeta(x_2)}\right)=\frac{\delta}{\delta\bar{\zeta}(x_1)}\left(\int d^4x\, \bar{\zeta}(x)S_F(x-x_2)\cdot Z_0[\zeta,\bar{\zeta}]\right)=S_F(x_1-x_2)\cdot Z_0+\left(\int d^4x\, \bar{\zeta}(x)S_F(x-x_2)\cdot (-1)\int d^4 y\, S_F(x_1-y)\zeta(y)\cdot Z_0\right)$$
By putting $\zeta=0,\bar{\zeta}=0$, the $Z_0$ is $1$ and the second term goes to zero, obtaining:
$$S^{(2)}_0(x_1,x_2)=S_F(x_1-x_2)$$
The functional integral is a real number, it is not Grassmann valued, so you don't have to worry about the order in your equation (2).
"the functional integral $Z_0[\zeta, \bar{\zeta}]$ is a real number". NO, $Z_0$ is NOT a real number. It's instead EVEN-graded in Grassmann numbers $\zeta$ and $\bar{\zeta}$. The n-point Green functions are real/complex though.
In this non-supersymmetric scheme, does this distinction even matter? :)
I am trying to compute the 2-point Green function $\tau_2(x,y) = -\frac{\delta^2}{\delta\eta_x \delta \bar{\eta}_y} \, Z_0[\eta, \bar{\eta}]$
You have to take the $\eta=0$/$\bar{\eta}=0 $ limit as the final step
$$ \tau_2(x,y) = -\frac{\delta^2}{\delta\eta_x \delta \bar{\eta}_y} \, Z_0[\eta, \bar{\eta}] \bigg|_{\eta=0,\bar{\eta}=0}.$$
Green functions (as opposed to the functional integral $Z_0[\eta, \bar{\eta}]$) should not be explicitly $\eta$/$\bar{\eta}$ dependent.
How to proceed from this step (eq. (2)?
The $\frac{\delta}{\delta\eta_x} \left[Z_0[\eta, \bar{\eta}] \right]$ part in eq.(2) will drop out after taking the $\eta=0$/$\bar{\eta}=0$ limit at the final step. Thus you only care about the $\frac{\delta}{\delta\eta_x} \left[\int S(y-w) \eta_w \, dw \right]$ part.
In eq. (1) I have written $Z_0$ before the functional derivative part. Should I write it after the functional derivative term?
The key point is that the functional integral $Z_0[\eta, \bar{\eta}]$ is EVEN-graded in Grassmann numbers $\eta$/$\bar{\eta}$. The order of $Z_0$ in eq. (1) does NOT matter.
Then how to evaluate this functional derivative?
Just treat the ordinary function $f(x)$ as a constant real/complex number. It does not interfere with the functional derivative.
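Concretely, since $f$ commutes with everything and factors out, the Appendix formula gives (a worked instance, using the same left-derivative convention):

```latex
\frac{\delta}{\delta\psi(x)} \left[ \psi(y_1)\, f(y_2)\, \psi(y_3) \right]
= f(y_2)\, \frac{\delta}{\delta\psi(x)} \left[ \psi(y_1)\, \psi(y_3) \right]
= f(y_2) \left[ \delta(y_1 - x)\, \psi(y_3) - \delta(y_3 - x)\, \psi(y_1) \right].
```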
| common-pile/stackexchange_filtered |
hibernate - How to get select within a select using Criteria
I have the following MySQL statement: SELECT * FROM (SELECT * FROM NINJA ORDER BY NAME LIMIT 0,5) AS TABLE ORDER BY NAME DESC. I don't know how I'm going to convert it into a Hibernate Criteria query. The reason I'm doing this kind of select statement is to get the first 5 results and then order them descending. With my current criteria, in which I'm doing the normal Order.addOrder(Order.desc(field)), what happens is it gets the last 5 results of the whole record set.
Please help. Thanks in advance.
UPDATE:
Below are some of my codes :
Criteria ninjaCriteria = session.createCriteria(Ninja.class);
ninjaCriteria.setFirstResult(firstResult);
ninjaCriteria.setMaxResults(maxResult);
if (isAscending)
ninjaCriteria.addOrder(Order.asc(field));
else
ninjaCriteria.addOrder(Order.desc(field));
Note: firstResult, maxResult, isAscending, and field are variables.
Even with HQL derived table is not possible: http://stackoverflow.com/questions/2433729/subquery-using-derived-table-in-hibernate-hql, and with criteria query you can go as far as sub-query with DetachedCriteria and another restriction is that DetachedCriteria does not have setMaxResult
Try this one. I hope it will work:
Criteria ninjaCriteria = session.createCriteria(Ninja.class);
ninjaCriteria.addOrder(Order.desc("NAME"));
ninjaCriteria.setMaxResults(5);
This will get the last five sorted in descending order. The OP wants the first five sorted in descending order.
The only way you can solve this without using pure SQL, in my opinion, is to select the list of Ninjas as usual:
Criteria ninjaCriteria = session.createCriteria(Ninja.class);
ninjaCriteria.addOrder(Order.asc("NAME"));
ninjaCriteria.setMaxResults(5);
create a custom Comparator:
/**
* Compare Ninjas by Name descending
*/
private static Comparator<Ninja> ninjaNameComparator = new Comparator<Ninja>() {
public int compare(Ninja ninja1, Ninja ninja2) {
return ninja2.getName().compareTo(ninja1.getName());
}
};
and use it to sort that list descending:
List<Ninja> ninjas = ninjaDao.listNinja();
Collections.sort(ninjas, ninjaNameComparator);
From a business point of view it's absolutely identical and does not require hard solutions.
| common-pile/stackexchange_filtered |
how do I calculate the Roche limit for a body that is only partially fluid?
I want to calculate the Roche limit for the Earth/Moon system. The general equation given is:
$$d = C \cdot r_p \cdot (\rho_p / \rho_s)^{1/3}$$
where $d$ is the roche limit, $r_p$ is the radius of the primary (Earth), $\rho_p$ and $\rho_s$ are the densities of the primary and secondary respectively.
$C$ is a constant that is $1.26$ for rigid bodies and $2.44$ for completely fluid bodies.
Do I just linearly vary $C$ between $1.26$ and $2.44$ for different percentages of fluidity? For example, for a half fluid system, $C = (2.44+1.26)/2$ ? That seems way too simple. Am I missing something?
There isn't a simple way to do this. It depends on what you mean by "percentage of fluidity". A viscous body could start to break up at 2.44, but the process would be very slow.
The "rigid" calculation assumes that the body will stay perfectly spherical, but otherwise has no tensile strength, so will break up as soon as tides from the planet exceed gravity.
The "fluid" approximation assumes that the body will immediately flow into an "oval" or "teardrop" shape under the effect of the tide, this shape is longer than a sphere, and so more greatly affected by the tides, so it breaks up at a larger ratio.
A body with tensile strength can orbit within the roche limit. The space station does! It is held together by the strength of the materials, not by gravity.
The rigid calculation can be done exactly, but a body that is rigid but with no tensile strength is impossible. The fluid calculation is based on approximation or numerical simulation. A body that consisted of a perfectly rigid shell, with a fluid interior would not deform, and so could be treated like a rigid body. A body with a fluid "ocean" surrounding a rigid core, would deform, so would behave more like a fluid moon.
Actual bodies like the moon may respond in complex ways to tidal forces, but at the scale of the moon, even solid rock will "flow" or at least "deform", so a value closer to 2.44 is more likely to be correct. But what would actually happen as the moon approached and passed this point would be complex and dynamic. It wouldn't break up instantaneously as it reached 2.44, but the process of break-up could begin then. The process would be faster if the moon came closer.
Many thanks for the info.
I don't suppose you could direct me towards code for those numerical calculations? I've tried to find some, but I think I'm using all the wrong search terms.
| common-pile/stackexchange_filtered |
Problem reading a file containing decimals
I'm having a problem trying to figure out why my file is returning 0's instead of the numbers inside the file. Here is the C++ code I wrote for reading a file:
#include <cstdlib>
#include <iostream>
#include <fstream>
#include <string>
using namespace std;
string cfile;
int cnum1,cnum2,cnum3,cnum4;
bool fired = false;
/*
*
*/
void printMatrix(double **x, int n)
{
int size = n;
for(int i=0; i<size; i++)
{
for(int j=0; j<size; j++)
{
std:: cout << x[i][j] << " " ;
}
std:: cout << std::endl;
}
}
void readFile(string file,double **x, int n)
{
std::ifstream myfile(file.c_str());
int size = n;
for(int i=0; i<size; i++)
{
for(int j=0; j<size; j++)
{
myfile >> x[i][j];
}
}
}
void GetCommandLineArguments(int argc, char **argv,string &file, int &n, int &k, int &m, int &i)
{
if( argc == 6 )
{
cfile = argv[1];
cnum1 = atoi(argv[2]);
cnum2 = atoi(argv[3]);
cnum3 = atoi(argv[4]);
cnum4 = atoi(argv[5]);
}
file = cfile;
n = cnum1;
k = cnum2;
m = cnum3;
i = cnum4;
}
int main(int argc, char** argv) {
int k; //Factor of n
int m; //Innner matrix size
int i; //Iteration
int n; //Matrix Size
string file;
GetCommandLineArguments(argc, argv, file, n, k, m, i);
double **matrix;
matrix = new double*[n];
for(int i = 0; i<n; i++)
matrix[i] = new double[n];
for(int j=0; j<n; j++)
for(int i=0; i<n;i++)
matrix[i][j] = 0;
readFile(file, matrix, n);
printMatrix(matrix, n);
return 0;
}
And here is a sample of my file containing the values I want to extract from it:
20.0
20.0
20.0
20.0
20.0
200.0
20.0
200.0
Hope someone can help me out since I researched some info about this and didn't really find a solution.
Is there a question? There is no code using this, and as it is, it might just work. Does your input contain empty lines? It better not
Well this isn't all my code I just wanted to display the area in my code that I was seeing the issue in.
How are you calling this function? Can you show the code that uses this function?
Yeah I just edited my post and added a pastebin link to my code that I have so far.
@Novazero: your pastebin link will expire in 24h - how useful do you think that will be in, say, a day? Please inline the code into your question.
You should probably put in a bunch of printfs or step through the code with a debugger to figure out where it's going wrong. Nothing obvious comes to mind while looking at your code.
Your reading and printing code appears to work, but your command line reading code may have some problems.
I ran your code without getting command line arguments. The following code, is pretty much copy-pasted from your main minus getting command line args.
int main()
{
double **matrix;
std::string file = "test.dat";
int n = 5;
matrix = new double*[n];
for(int i = 0; i<n; i++)
matrix[i] = new double[n];
for(int j=0; j<n; j++)
for(int i=0; i<n;i++)
matrix[i][j] = 0;
readFile(file, matrix, n);
printMatrix(matrix, n);
return 0;
}
With the input you provide, I get the output:
20 20 20 20 20
200 20 200 0 0
0 0 0 0 0
0 0 0 0 0
0 0 0 0 0
However looking at your command line arg reading code, I can see some potential problems. First you use atoi(). When atoi fails, it returns 0. Do you know that this code is working? Is everything getting initialized correctly? Or is atoi failing on the input, causing n to be 0 and therefore causing nothing to be read in? (You may wish to look into stringstreams for doing this kind of thing).
Moreover, when argc is not 6, you're silently failing and reading from uninitialized global memory. This memory is garbage. Do you know that this is not happening? If you're just doing:
your.exe test.dat 5
then 5 isn't going to be read from the command line because argc is not 6. Are you always passing 6 arguments like you should when testing? Maybe not, cause in its current state your code really only needs the file name and size.
Most important thing, see if you're getting what you expect from the command line.
PS This is g++:
$ g++ --version
g++ (Ubuntu/Linaro 4.4.4-14ubuntu5)
4.4.5 Copyright (C) 2010 Free Software Foundation, Inc. This is free
software; see the source for copying
conditions. There is NO warranty; not
even for MERCHANTABILITY or FITNESS
FOR A PARTICULAR PURPOSE.
Well here is the issue the size of my array will be made during runtime through the GetCommandLineArguments command that I show in my post. All my code is shown in my edited first post.
Ill take a look at checking my commandline function I created I believe your right that it might be changing the values
well here is an example on what I need to input through command line p1.exe sample.input 8 4 16 100. Does the p1.exe count as a commandline argument then?
@Novazero yes it counts as one. argv[0] is the name of the executable.
Hmm well it definitely passes the value 5 back if I use the command line and all the other functions contain the number 5 for its n.
Well I did edit my code to only read 3 arguments atm, and it still displays zeros, and like I said my n is equal to 5 throughout my whole code wherever n is used. Im going to try to use stringstreams and see what difference it makes.
@Novazero, there's a lot you can try to debug -- what happens when you print out the values as they're read? Are they coming from the file correctly? Can you printout the matrix at that point? Is the matrix getting filled out there but just not passed back correctly? Is the problem in your print routine? Is everything getting passed correctly there?
I did this to my readFile function and got a bunch of zero's being read before the values even get put into my array: std:: cout<< myfile;
I suggest a fixup like this, making things more robust:
#include <fstream>
#include <iostream>
#include <string>
#include <stdexcept>
template <size_t size>
void readFile(std::string file, double (&x)[size][size])
{
std::ifstream myfile(file.c_str());
for(int i=0; i<size; i++)
{
for(int j=0; j<size; j++)
{
if (!(myfile >> x[i][j]))
throw std::runtime_error("Couldn't read double from " + file);
}
}
}
int main(int argc, const char *const args[])
{
try
{
double data[10][10];
readFile("input.txt", data);
} catch(const std::exception& e)
{
std::cerr << "Whoops: " << e.what() << std::endl;
return 255;
}
return 0;
}
Your input operator will read the numbers but leave the line breaks in the file.
See this answer about how to ignore() the end-of-line after reading your values
Why would we call cin.clear() and cin.ignore() after reading input?
| common-pile/stackexchange_filtered |
Support different versions of a third-party library for different OS
How do I support two versions of a vendor library? Apart from file changes and API differences, the new version is 64-bit whereas the old is 32-bit.
What's the best design, and how much implementation effort should I expect, to support two versions of a library?
Let's call them Lib_v1 and Lib_v2. We want to reduce the amount of code base changes in code that calls the existing Lib_v1 APIs for two specific platforms, OS_A and OS_B, but have the underlying library be Lib_v2. As mentioned, we want to keep a lot of the existing API names, but the internal library used would depend on the OS. For the third OS, call it OS_C, we want to use the old Lib_v1 library.
Summary:
OS_A and OS_B:
-use Lib_v2
OS_C:
-use Lib_v1
What kind of design should I look at, and what kind of implementation effort should I steer myself toward?
Edit: yes, I've done some preliminary searching. I am lacking a lot of SWE design principles. I should note that we use different compilers for the various platforms.
Have you done some basic research? That's expected before asking questions here. In case you didn't find an appropriate search term, "configure script" is it, and it leads to https://en.m.wikipedia.org/wiki/Configure_script which should give you answers.
Typically the dynamic link editor ldd would work out
details related to bumping the major version from 1 to 2,
assuming that some functions continue to work the same.
But upgrading all functions to accept wider integers
is more like sun-setting "Product32" and launching "Product64".
It sounds like you want to write app code which is portable
across three operating systems and oblivious to the v1 / v2 schism.
Decide on whether you have a 32 or 64 bit app.
Prohibit the app code from calling either product.
Write an adapter layer for the app to call.
There's no need to support all of Product32 -- just the subset
of its functions that your app relies on.
Start stubbing them out, and compile / link errors will immediately
tell you about any that you've not yet gotten to.
Soon you will have them all.
The adapter will accept application calls and on OS_C
will change width and make the corresponding Product32 v1 call.
On OS_A and OS_B the adapter layer will instead change width and
make a Product64 v2 call.
Changing width probably means casting int32 to a wider integer
prior to the call as needed, and of course adapting any result widths, too.
If you choose to make your app uniformly 32-bit,
then changing width is pretty straightforward except for
any large 64-bit values that come back from v2.
In general for unsigned args or return value,
when downcasting to a narrower integer you'll need to check
for overflow and signal an error on magnitudes
greater than four billion.
| common-pile/stackexchange_filtered |
Segmentation Fault Before function is even called
When I try executing this code, the code crashes before the cout statement in print_array even executes. I have no clue why !
However, if I comment out the call to mergesort in the main function, the print_array executes fine.
#include<iostream>
using namespace std;
void print_array( int A[], int n)
{
int i;
cout<<"\n Array elts:\t";
for(i=0;i<n;i++) cout<<A[i]<<" ";
}
void mergesort(int A[], int beg, int end)
{
if(beg>end) return;
mergesort(A,beg,(beg+end)/2);
mergesort(A, ((beg+end)/2)+1, end);
int B[end-beg+1],i,j,k;
i=beg; j=(beg+end)/2;
k=0;
while(i<(beg+end)/2 && j<end)
{
if(A[i] < A[j]) B[k++]=A[i++];
else B[k++]=A[j++];
}
while(i<(beg+end)/2) B[k++]=A[i++];
while(j<end) B[k++]=A[j++];
for(i=beg; i<end; i++) A[i]=B[i];
}
int main()
{
int n=10;
int A[]={1,23,34,4,56,60,71,8,99,0};
print_array(A,n);
mergesort(A,0,n);
print_array(A,n);
}
Update:
Using endl will flush the output and the print_array values will get displayed on the screen. Apart from this, the reason I got a seg fault was because I had not included the equality check in mergesort. Here is the updated code:
#include<iostream>
using namespace std;
void print_array( int A[], int n)
{
int i;
cout<<"\n Array elts:\t";
for(i=0;i<n;i++) cout<<A[i]<<" ";
}
void mergesort(int A[], int beg, int end)
{
if(beg>=end) return;
mergesort(A,beg,(beg+end)/2);
mergesort(A, ((beg+end)/2)+1, end);
int B[end-beg+1],i,j,k;
i=beg; j=(beg+end)/2;
k=0;
while(i<(beg+end)/2 && j<end)
{
if(A[i] < A[j]) B[k++]=A[i++];
else B[k++]=A[j++];
}
while(i<(beg+end)/2) B[k++]=A[i++];
while(j<end) B[k++]=A[j++];
for(i=beg; i<end; i++) A[i]=B[i];
}
int main()
{
int n=10;
int A[]={1,23,34,4,56,60,71,8,99,0};
print_array(A,n);
mergesort(A,0,n);
print_array(A,n);
}
The code is by no means doing what it should but it isn't giving seg faults anymore.
Thanks guys !
On what line does your debugger say the seg fault occurs?
Your're not flushing the output. Add an endl.
I added int to main. As posted your code shouldn't have compiled as C++. C++ does not have the implicit int that original C had. Which leads to the question: is this the real code?
My code does compile....
Also, the gdb output says seg fault in mergesort(int*, int, int) ()
I tried on C++98 and C++11 ( using -std=c++11 with g++).
Which line in mergesort? Which recursive iteration? What were the variables values before it crashes?
The C++ language doesn't like Variable Length Arrays: int B[beg-int+1]. I recommend you use dynamic memory allocation instead. VLA is a language extension by G++.
Thanks for your suggestion. Tried using malloc and new but I still get the same seg fault in the same place !
My first thought is that this is stack corruption which is leading to a "Stack Overflow" when you call other methods. Try getting additional information in GDB; try compiling with an increased debugging level and with gcc optimizations turned off (i.e., -g3 -O0).
Moreover, you can use the "valgrind" software to find the corruption and post the results here. (Sorry for requesting this here, but I cannot make comments).
Look at the last line in mergesort: You're accessing elements in B from beg to end, but B is zero indexed. That's one problem; there may be more.
The issue is that the program crashes even before the mergesort function is called. It doesn't even print the elements as done in print_array
@LockStock As mentioned in one of the comments to your question, your output isn't being flushed, so you're not seeing it. You need to add a cout << endl to the end of your print_array function.
Ah yes, that was it !
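For reference, a corrected merge step along the lines suggested above, with a zero-indexed temporary buffer and std::vector in place of the VLA, might look like this (a sketch, not the original poster's exact code):

```cpp
#include <vector>

// Merge the sorted runs A[beg, mid) and A[mid, end) through a zero-indexed
// temporary buffer, then copy back. std::vector replaces the VLA that only
// the g++ extension allowed in the original.
void merge(std::vector<int>& A, int beg, int mid, int end) {
    std::vector<int> B;
    B.reserve(end - beg);
    int i = beg, j = mid;
    while (i < mid && j < end)
        B.push_back(A[i] <= A[j] ? A[i++] : A[j++]);
    while (i < mid) B.push_back(A[i++]);
    while (j < end) B.push_back(A[j++]);
    for (int k = beg; k < end; ++k)
        A[k] = B[k - beg];  // B starts at index 0, so offset by beg
}

// Sorts A[beg, end) -- 'end' is exclusive, matching the original
// call mergesort(A, 0, n).
void mergesort(std::vector<int>& A, int beg, int end) {
    if (end - beg < 2) return;
    int mid = beg + (end - beg) / 2;
    mergesort(A, beg, mid);
    mergesort(A, mid, end);
    merge(A, beg, mid, end);
}
```

Calling mergesort(A, 0, A.size()) on the array from the question leaves it in ascending order.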
Is voice recording in 8000 Hz, 16-bit, mono WAV format possible in HTML5?
Hi, I want to create a web application for voice recording. I need to capture voice in 8000 Hz, 16-bit, mono PCM WAV format. Is this possible in HTML?
HTML alone, no. You will need some JavaScript as well.
@DJDavid98 Can you point me to any JavaScript for this?
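To sketch the JavaScript side: the capture itself uses the real browser APIs getUserMedia and AudioContext, which typically deliver Float32 samples at 44.1 or 48 kHz, so you have to downsample and re-quantize to 16-bit PCM yourself before writing a WAV header. The conversion step is plain JavaScript; the function name and the naive nearest-sample resampling below are illustrative, not a standard API:

```javascript
// Convert Float32 audio samples (as produced by the Web Audio API) to
// 16-bit signed PCM at a lower target rate, e.g. 8000 Hz. Naive
// nearest-sample resampling -- usually acceptable for voice.
function downsampleTo16BitPCM(float32Samples, inputRate, targetRate) {
  const ratio = inputRate / targetRate;
  const outLength = Math.floor(float32Samples.length / ratio);
  const out = new Int16Array(outLength);
  for (let i = 0; i < outLength; i++) {
    // clamp to [-1, 1], then scale to the 16-bit signed range
    const s = Math.max(-1, Math.min(1, float32Samples[Math.floor(i * ratio)]));
    out[i] = s < 0 ? s * 0x8000 : s * 0x7fff;
  }
  return out;
}
```

The resulting Int16Array is what you would prepend a 44-byte WAV header to (mono, 8000 Hz, 16-bit) before offering the blob for download or upload.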
error: The argument type 'Color' can't be assigned to the parameter type 'MaterialColor'?
primaryColor: const Color(0xff090C22),
colorScheme: ColorScheme.fromSwatch(
primarySwatch: Color(0xff090C22),
),
),
I want that specific color to change the appbar color and to be the primary color but instead it showed this error
error: The argument type 'Color' can't be assigned to the parameter type 'MaterialColor'. (argument_type_not_assignable at [bmi_calculator] lib\main.dart:12)
Does this answer your question? The argument type 'Color' can't be assigned to the parameter type 'MaterialColor?
primarySwatch takes a MaterialColor. You can check the difference between primaryColor and primarySwatch.
MaterialColor defines a single color as well as a color swatch with ten shades of the color.
The color's shades are referred to by index. The greater the index, the darker the color. There are 10 valid indices: 50, 100, 200, ..., 900. The value of this color should be the same as the value of index 500 and shade500.
We need to provide primarySwatch: Colors.black, where Colors.black can use different shades.
You can use your custom color like this,
First, create a file where all your colors are defined:
import 'package:flutter/material.dart';
class Palette {
static const MaterialColor kToDark = const MaterialColor(
0xffe55f48, // 0% comes in here; this will be the color picked if no shade is selected when defining a Color property which doesn’t require a swatch.
const <int, Color>{
50: const Color(0xffce5641),//10%
100: const Color(0xffb74c3a),//20%
200: const Color(0xffa04332),//30%
300: const Color(0xff89392b),//40%
400: const Color(0xff733024),//50%
500: const Color(0xff5c261d),//60%
600: const Color(0xff451c16),//70%
700: const Color(0xff2e130e),//80%
800: const Color(0xff170907),//90%
900: const Color(0xff000000),//100%
},
);
}
Here, we can import and use it as follows:
import 'package:flutter/material.dart';
import 'config/palette.dart';
void main() {
runApp(App());
}
class App extends StatelessWidget {
@override
Widget build(BuildContext context) {
return MaterialApp(
title: 'Demo for swatch',
theme: ThemeData(
primarySwatch: Palette.kToDark, //our palette goes here by tapping into the class
),
home: YourFirstPage(),
);
}
}
Unable to fetch item using getitem method
I was trying to access item using getitem method from SOAP service.
Sitecore.Data.Items.Item item = Sitecore.Context.Database.GetItem(<ID>);
But the item is null. I tried to fetch using the ID and the path, and checked the connection strings as well. Does the SOAP service need any special permissions to fetch the item? Is there anything else I should check to find out why the SOAP service is not able to fetch it?
My guess here would be that the SOAP service execution doesn't trigger Sitecore's HTTP request pipelines, which are responsible for resolving the information normally available through the Sitecore.Context static class. A SOAP service is also not what developers should normally use to expose an API endpoint in a Sitecore web application.
Instead of a SOAP service, you should look into the following options if you're not tied to the SOAP approach:
Use Web API controller
Use Sitecore Services Client
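To make the first option concrete, a minimal Web API controller sketch might look like the following (the class name, action, and the fallback to the "web" database are illustrative assumptions, not code from this question):

```csharp
// Hypothetical example -- names are assumptions. Because this controller
// runs inside the Sitecore site's ASP.NET pipeline, Sitecore's context is
// resolved normally, unlike the standalone SOAP endpoint described above.
using System.Web.Http;
using Sitecore.Data;
using Sitecore.Data.Items;

public class ItemLookupController : ApiController
{
    [HttpGet]
    public string GetItemName(string id)
    {
        // Fall back to an explicit database in case Context.Database
        // is not resolved for this request.
        Database db = Sitecore.Context.Database
                      ?? Sitecore.Configuration.Factory.GetDatabase("web");
        Item item = db.GetItem(ID.Parse(id));
        return item == null ? null : item.Name;
    }
}
```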
OK, I will check on this. I am working on Sitecore version 7.2; hope it supports these.
Running a Query that returns the first instance of each ID sorted alphabetically by another column
I know this may sound a little confusing, but basically I have a query returning a set of "Cluster Names" and "IDs". I want to have the query only select one row for each ID, based on the "Cluster Name", sorted alphabetically.
For example, we are currently having the query return the following:
Cluster Name ID
Construction Technologies 239378769
Construction Technologies 239378942
Construction Technologies 239510698
Law and Public Safety 239510698
Health Science 239510698
Health Science 240236166
Health Science 240236203
Construction Technologies 240236209
Health Science 240236236
Education and Training 240236303
If you notice the ID 239510698 shows up three times, for Construction Technologies, Law and Public Safety, and Health Science. What we want is instead of returning this row three times, it would only return it once, which would be the first one alphabetically, Construction Technologies.
Here is the Query:
SELECT c.clustername AS ClusterName, MAX(cc.monsterid) AS CountOfClusters
FROM tblCareerCluster cc
INNER JOIN tblClusters c ON c.clusterid = cc.clusterid
LEFT JOIN tblStudentPersonal sp ON sp.monsterid = cc.monsterid
INNER JOIN tblStudentSchool ss ON ss.monsterid = cc.monsterid
INNER JOIN tblSchools s ON s.schoolid = ss.schoolid
INNER JOIN tblSchoolDistricts sd ON sd.schoolid = s.schoolid
INNER JOIN tblDistricts d ON d.districtid = sd.districtid
INNER JOIN tblDistrictUserDistrictGroups rurg ON rurg.schoolid = ss.schoolid
INNER JOIN tblGroups g ON g.groupid = rurg.groupid
INNER JOIN tblUserGroups ug ON ug.groupid = g.groupid
WHERE cc.ranking = (SELECT MAX(cc2.ranking) from tblCareerCluster cc2 INNER JOIN tblCareerCluster cc ON cc.monsterid = cc2.monsterid)
AND ss.graduationyear IN (SELECT Items FROM FN_Split('2015,2016,2017,2018,2019,2020,2021,2022,2023', ',')) AND sp.optin = 'Yes'
AND g.groupname = 'My Districts' AND ug.userid = 14
GROUP BY c.clustername, cc.ranking, d.district, cc.monsterid, cc.clusterid, d.IRN, d.districtid
Let me know if you would like any additional information, any help would be greatly appreciated.
In your sample result you have a column ID but in the query you have CountOfClusters - this is a bit confusing. Please make sure that the sample and query shows the same things.
I think this one may be simpler.
Using the first query returning Cluster Name and ID
SQL Fiddle Demo
SELECT MIN(Cluster_Name) as ClusterName, ID
FROM <your query>
GROUP BY ID
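As an aside, here is a tiny runnable illustration of the MIN(...) / GROUP BY idea, using an in-memory SQLite database in place of the original SQL Server tables (table and column names are invented for the demo; the same query shape works in T-SQL):

```python
# Demo: one row per id, paired with its alphabetically first cluster name.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE results (cluster_name TEXT, id INTEGER)")
conn.executemany(
    "INSERT INTO results VALUES (?, ?)",
    [
        ("Construction Technologies", 239510698),
        ("Law and Public Safety", 239510698),
        ("Health Science", 239510698),
        ("Health Science", 240236166),
    ],
)

# MIN over the text column picks the alphabetically first name per group.
rows = conn.execute(
    "SELECT MIN(cluster_name) AS ClusterName, id"
    " FROM results GROUP BY id ORDER BY id"
).fetchall()
print(rows)
```

The duplicated id 239510698 collapses to a single row carrying "Construction Technologies", exactly as requested in the question.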
select MIN(A.ClusterName), A.CountOfClusters
FROM
(
SELECT c.clustername AS ClusterName, MAX(cc.monsterid) AS CountOfClusters
FROM tblCareerCluster cc
INNER JOIN tblClusters c ON c.clusterid = cc.clusterid
LEFT JOIN tblStudentPersonal sp ON sp.monsterid = cc.monsterid
INNER JOIN tblStudentSchool ss ON ss.monsterid = cc.monsterid
INNER JOIN tblSchools s ON s.schoolid = ss.schoolid
INNER JOIN tblSchoolDistricts sd ON sd.schoolid = s.schoolid
INNER JOIN tblDistricts d ON d.districtid = sd.districtid
INNER JOIN tblDistrictUserDistrictGroups rurg ON rurg.schoolid = ss.schoolid
INNER JOIN tblGroups g ON g.groupid = rurg.groupid
INNER JOIN tblUserGroups ug ON ug.groupid = g.groupid
WHERE cc.ranking = (SELECT MAX(cc2.ranking) from tblCareerCluster cc2 INNER JOIN tblCareerCluster cc ON cc.monsterid = cc2.monsterid)
AND ss.graduationyear IN (SELECT Items FROM FN_Split('2015,2016,2017,2018,2019,2020,2021,2022,2023', ',')) AND sp.optin = 'Yes'
AND g.groupname = 'My Districts' AND ug.userid = 14
GROUP BY c.clustername, cc.ranking, d.district, cc.monsterid, cc.clusterid, d.IRN, d.districtid
) A
GROUP BY A.CountOfClusters
Use Row_Number to pick the first result for each ID, ordered by the ClusterName:
With CTE as (select *, Row_Number() over (partition by ID order by ClusterName) as RN from [Long sequence of tables and joins])
Select ClusterName, ID from CTE where RN = 1
You can make the CTE into a temp table or subquery instead if you would prefer.
You can use the row_number window function to number rows within each partition of ID and then select the row number = 1. if you can't add the function directly to your query you can use a common table expression and use your original query as a derived table:
with cte as (
select *, rn = row_number() over (partition by id order by clustername)
from ( << your_original_query here >> ) t
)
select * from cte
where rn = 1
order by ClusterName
Statistical model of a website
I know that HMMs can be used to construct statistical models of text. Thus, we can generate text according to this model, and compute the likelihood of a text sample under the model.
What tools are used to construct statistical models of websites or webpages? Basically, it means things like various paragraphs, headings, links, pictures, menus on sidebars etc. or some subset of these features. Are there any generative models for webpages or websites?
Iterate through List from Controller to View
Ok. so I have this controller that retrieves the user password.
Function ViewUsers(ByVal users As Users) As ViewResult
Dim pwordList = New List(Of String)()
Dim passdecList = New List(Of String)()
Dim pwordQuery = From pword In db.UsersDB
Select pword.Password
For Each pass As String In pwordList
passString = PassEncrypt.Decrypt(pass)
passdecList.Add(passString)
//send each decrypted password to a table Password column in the view
Next
End Function
I don't know how I would do it.
Don't store passwords in reversible form, and never (never!) display them to admins.
You shouldn't be storing passwords in clear text in your database. They should always be encrypted. But for the purpose of this example you could do the following:
Function ViewUsers(ByVal users As Users) As ViewResult
Dim pwordList = db.UsersDB.Select(Function(u)u.Password).ToList()
return View(pwordList)
End Function
and inside your strongly typed view:
@ModelType List(Of string)
<table>
<thead>
<tr>
<th>Password</th>
</tr>
</thead>
<tbody>
@For Each password In Model
@<tr><td>@password</td></tr>
Next password
</tbody>
</table>
He is encrypting them (see PassEncrypt.Decrypt). They should always be hashed.
Yes, I created a function to encrypt and decrypt passwords, so the encrypted passwords are in my db. As you can see in my controller, I used LINQ to select the passwords from the db, decrypted each one, and added those to a list. In my view, I created a For Each loop to iterate through them, but putting the loop for the password inside the other For Each would cause redundancy of rows.
Existing Seaborn jointplot add to scatter plot part only
Is there a way to create a Seaborn Jointplot and then add additional data to the scatter plot part, but not the distributions?
Example below creates df and Jointplot on df. Then I would like to add df2 onto the scatter plot only, and still have it show in the legend.
import pandas as pd
import seaborn as sns
d = {
'x1': [3,2,5,1,1,0],
'y1': [1,1,2,3,0,2],
'cat': ['a','a','a','b','b','b']
}
df = pd.DataFrame(d)
g = sns.jointplot(data=df, x='x1', y='y1', hue='cat')
d = {
'x2': [2,0,6,0,4,1],
'y2': [-3,-2,0,2,3,4],
'cat': ['c','c','c','c','d','d']
}
df2 = pd.DataFrame(d)
## how can I add df2 to the scatter part of g
## but not to the distribution
## and still have "c" and "d" in my scatter plot legend
Add points to the scatter plot with sns.scatterplot(..., ax=g.ax_joint) which will automatically expand the legend to include the new categories of points.
Assign new colors by selecting a color palette before generating the joint plot so that appropriate new colors can be selected automatically.
import pandas as pd # v 1.1.3
import seaborn as sns # v 0.11.0
d = {
'x1': [3,2,5,1,1,0],
'y1': [1,1,2,3,0,2],
'cat': ['a','a','a','b','b','b']
}
df = pd.DataFrame(d)
# Set seaborn color palette
sns.set_palette('bright')
g = sns.jointplot(data=df, x='x1', y='y1', hue='cat')
d = {
'x2': [2,0,6,0,4,1],
'y2': [-3,-2,0,2,3,4],
'cat': ['c','c','c','c','d','d']
}
df2 = pd.DataFrame(d)
# Extract new colors from currently selected color palette
colors = sns.color_palette()[g.hue.nunique():][:df2['cat'].nunique()]
# Plot additional points from second dataframe in scatter plot of the joint plot
sns.scatterplot(data=df2, x='x2', y='y2', hue='cat', palette=colors, ax=g.ax_joint);
Systems of equations in ConTeXt
In LaTeX+amsmath, I can say
\documentclass{article}
\usepackage{amsmath}
\begin{document}
\begin{equation*}
\left\{
\begin{aligned}
px-y-4&=0\\
2x-2y-6m&=0
\end{aligned}
\right.
\end{equation*}
\end{document}
to get an aligned system of equations with a brace on the left. How to achieve a similar effect in ConTeXt? I tried saying \startalign[left=\{] and similar things with no effect.
Edit: thanks to Aditya's and Marco's comments below I came up with this:
\starttext
\startformula
\startmathmatrix[left={\left\{},right={\right.},distance=0pt,align={right,left}]
\NC px-y-4\NC{}=0\NC\NR
\NC 2x-2y-6m\NC{}=0\NC\NR
\stopmathmatrix
\stopformula
\stoptext
Now my question is: can this be made better? I do not like the distance=0pt and {} stuff very much (but I guess this keeps the spacing on both sides of the equation sign uniform).
Use mathmatrix.
Could you show an example?
Here are two articles worth reading: Using \startalign and Display math in ConTeXt
I don't have an answer to the {} spacing issue. But to keep the code cleaner you can define your own environment: \definemathmatrix[eqnsystem][left={\left\{},right={\right.},distance=0pt,align={right,left}] and then use \starteqnsystem … \stopeqnsystem instead.
@Marco: could you make your comment into an answer, so that I could accept it?
As mentioned in the documentation Using
\startalign and Display
math in ConTeXt you
can use the \startmathmatrix … \stopmathmatrix environment.
It makes sense to define your own environment for equation systems. This has
the advantage that it is logically marked up. “This is an equation sytem” as
opposed to “This is a multi-line formula with a brace at the left”. You can
change the layout of the equation systems globally and it keeps the source code clean and makes it easier to read.
I would also get rid of the ugly pair of braces. However, I don't have a
proper solution for the correct spacing (either wait for Aditya to tune in –
he's the expert for math or ask on the mailing list).
\definemathmatrix
[eqnsystem]
[left={\left\{},
right=\right.,
distance=.3em,
align={right, left}]
\starttext
\startformula
\starteqnsystem
\NC px-y-4 \NC =0 \NC\NR
\NC 2x-2y-6m \NC =0 \NC\NR
\stopeqnsystem
\stopformula
\stoptext
Thanks a lot! And what's wrong with the .3em approach? It seems perfect! (I'd also add a \thinspace after \left\{, but this is a matter of taste.)
The .3em seems arbitrary and a little hackish. I did some testing and found that the actual value is more about .2278em. I'd prefer the exact value that TeX uses for this distance.
How to paint only DataGridView's cell background not its content?
I need to paint only the background of a DataGridView cell, not its content. But when I do the painting, it paints over the content too. Please help me out.
My code goes like this.
private void Daywisegrid_CellPainting(object sender, DataGridViewCellPaintingEventArgs e)
{
if (e.RowIndex == 0 )
{
using (Brush gridBrush = new SolidBrush(this.Daywisegrid.GridColor))
{
using (Brush backColorBrush = new SolidBrush(e.CellStyle.BackColor))
{
using (Pen gridLinePen = new Pen(gridBrush))
{
// Clear cell
e.Graphics.FillRectangle(backColorBrush, e.CellBounds);
//Bottom line drawing
e.Graphics.DrawLine(gridLinePen, e.CellBounds.Left, e.CellBounds.Bottom-1 , e.CellBounds.Right, e.CellBounds.Bottom-1);
e.Handled = true;
}
}
}
}
}
I am not sure why you need to catch the CellPainting event to change a cell's background color; just do it like this:
Daywisegrid.Rows[RowIndex].Cells[columnIndex].Style.BackColor = Color.Red;
But if you want to do it in painting, try this:
private void Daywisegrid_CellPainting(object sender, DataGridViewCellPaintingEventArgs e)
{
if (e.RowIndex == 0 )
{
using (Brush gridBrush = new SolidBrush(this.Daywisegrid.GridColor))
{
using (Brush backColorBrush = new SolidBrush(e.CellStyle.BackColor))
{
using (Pen gridLinePen = new Pen(gridBrush))
{
// Clear cell
e.Graphics.FillRectangle(backColorBrush, e.CellBounds);
//Bottom line drawing
e.Graphics.DrawLine(gridLinePen, e.CellBounds.Left, e.CellBounds.Bottom-1 , e.CellBounds.Right, e.CellBounds.Bottom-1);
// here you force paint of content
e.PaintContent( e.ClipBounds );
e.Handled = true;
}
}
}
}
}
It should be
e.Graphics.DrawLine(gridLinePen, e.CellBounds.Left, e.CellBounds.Bottom-1 , e.CellBounds.Right - 1, e.CellBounds.Bottom - 1);
because the point (e.CellBounds.Right, e.CellBounds.Bottom - 1) would be erased by the next cell.
How to fix the ElementNotVisibleException in Edge browser using Selenium?
I am currently testing a GWT web application similar to MS Paint, and the problem I am facing is that my test case passes on browsers like Chrome, Firefox, and IE but sadly fails in the Microsoft Edge browser. I have been unable to find out how to fix this issue; after searching the whole Internet and trying out many methods from Stack Overflow and Google Groups, I am feeling extremely hopeless. Meanwhile, here is my code:
public class Insert_Element_in_Edge {
private static WebDriver driver;
@Test
public void main() throws InterruptedException, IOException
{
// TODO Auto-generated method stub
System.setProperty("webdriver.edge.driver", "D:\\SELENIUM\\Drivers\\MicrosoftWebDriver.exe");
driver = new EdgeDriver();
driver.manage().timeouts().implicitlyWait(10,TimeUnit.SECONDS);
driver.manage().window().maximize();
Thread.sleep(3000);
driver.get("xxxx.com");
WebDriverWait wait = new WebDriverWait(driver, 20);
wait.until(ExpectedConditions.titleContains("xxxx"));
WebElement canvas = driver.findElement(By.id("frontCanvas"));
Thread.sleep(3000);
final String InsertPath="//img[contains(@src,'Insert Element')]";
WebElement Insert = driver.findElement(By.xpath(InsertPath));
Thread.sleep(3000);
Insert.click();
}
}
Here's the error which I am facing:
org.openqa.selenium.ElementNotVisibleException: Element not displayed (WARNING: The server did not provide any stacktrace information)
Given below is the HTML code of the element which I am trying to locate in the Edge browser.
<div id="isc_2Q" eventproxy="isc_XXXIconButton_SClient_1" role="button" aria-label="xxx"
onfocus="isc.EH.handleFocus(isc_XXXIconButton_SClient_1,true);"
onblur="if(window.isc)isc.EH.handleBlur(isc_XXXIconButton_SClient_1,true);"
tabindex="1020" style="position: absolute; left: 0px; top: 54px; width: 40px;
height: 34px; z-index: 201152; padding: 0px; opacity: 1; overflow: visible;
cursor: pointer;" onscroll="return isc_XXXIconButton_SClient_1.$lh()" class="iconHoverZindex">
<div id="isc_2R"
eventproxy="isc_XXXIconButton_SClient_1" style="position: relative;
-webkit-margin-collapse: separate separate; visibility: inherit;
z-index: 201152; padding: 0px; cursor: pointer;">
<table role="presentation" cellspacing="0" cellpadding="0" width="40px" height="34px">
<tbody>
<tr>
<td nowrap="true" class="iconButton" style="background-color:transparent;"
align="center" valign="middle" onfocus="isc_XXXIconButton_SClient_1.$47()">
<img src="Insert Element.png" width="24" height="24" align="TEXTTOPundefined"
class="iconButtonHIcon" eventpart="icon" border="0"
suppress="TRUE" draggable="true"/>
<span style="vertical-align:middle;align-content:center;"></span>
</td>
</tr>
</tbody>
</table>
</div>
</div>
This element is visible in other 3 browsers and is clicked successfully. But I just got stuck in Edge. Also, in Edge, the HTML5 canvas is also not displayed using Selenium. Please help.
Which of the elements fails here? Is it possible to post the website url or not?
Sorry i can't post the URL because of the policies of my company. The WebElement Insert is failing here.
Hmm. Can you maybe post part of the html? So I can create my own html file using that part and try it out locally.
I have added the HTML code in the question details. Please check.
Hmm. In that html part I can't find any of the elements you are trying to find. So this part of html wont work.
From the HTML you provided, I am unable to locate either of the elements i.e By.id("frontCanvas") as well as //img[contains(@src,'Insert Element')]. Am I missing something?
@DebanjanB, sorry it was my mistake. I forgot to edit xxx.png to Insert Element.png because of the privacy policy of company.
Now, I have edited that, please check
@DG4 Currently when I'm using the new html. I can't find the frontCanvas part because it isn't in the html. So when I removed that part and continue with the test to search the webelement Insert, then it works here on my edge computer. I think we need the canvas part also. As i understand you are using html5 canvas and that might be the cause of your problems.
Is the element that you are trying to interact with visible in the Edge browser when you do this manually? I think it might be a bug that your application developer needs to work on.
You can add wait statement before click on it as given below.
wait.until(ExpectedConditions.visibilityOfElementLocated(By.xpath(InsertPath)));
I tried adding it and I was doing it the same way as you suggested but then it showed org.openqa.selenium.TimeoutException: Expected condition failed: waiting for visibility of element located by By.xpath
you can increase the time in wait
I tried increasing the timeout time but it didn't work for me. Can you suggest some other method?
try with actions class to click on it.
Tried it, still the same problem.
can you infer something from the HTML code of the element which I am trying to locate.
You have to use elementToBeClickable, as it is a button:
WebDriverWait wait = new WebDriverWait (driver, 50);
final String InsertPath="//img[contains(@src,'Insert Element')]";
WebElement Insert= wait.until(ExpectedConditions.elementToBeClickable(By.xpath(InsertPath)));
Insert.click();
So increase the timeout; it means your website takes time to load.
I tried increasing the timeout time but it didn't work for me. Can you suggest some other method?
In the past I also had some trouble locating elements with XPath in Edge, GWT and Selenium.
Sadly I didn't find a solution, but a workaround:
We gave every Element an ID
for example:
panel.getElement().setId("panel-id");
and then used the ID-Locator to find the Element:
findElement(By.id("panel-id"))
Well, I want to know if these getElement() and setId() functions are associated with Selenium or GWT and if not, then how do I set the ID by myself because I haven't developed the GWT application and I am working on just the automation of that app.
Along with that, one thing which I think I forgot to mention is that when I check whether the EdgeDriver in Selenium is able to find the elements or not, then it was successfully able to find those elements but the real problem lies in the fact that they were not visible and thus, showing the exception.
@DG4 if you want to use getElement() and setId(), these functions are part of the GWT code. When you are able to locate the WebElements but the click() fails, even if you wait for the element to be clickable, I would say it's a weird bug with Edge and/or Selenium/GWT.
I would suggest a few tweaks in your code block as follows:
First of all we will try to get rid of all the Thread.sleep(n) calls.
Now, to address the reason for ElementNotVisibleException, we will scroll the element of our interest within the Viewport and then invoke click() method.
Your final code block will may look:
driver = new EdgeDriver();
driver.manage().window().maximize();
driver.get("xxxx.com");
WebDriverWait wait = new WebDriverWait(driver, 20);
wait.until(ExpectedConditions.titleContains("xxxx"));
WebElement canvas = driver.findElement(By.id("frontCanvas"));
WebElement Insert = driver.findElement(By.xpath("//div[@id='isc_2R']//td/img[@class='iconButtonHIcon' and contains(@src,'Insert Element.png')]"));
JavascriptExecutor jse = (JavascriptExecutor)driver;
jse.executeScript("arguments[0].scrollIntoView()", Insert);
Insert.click();
I tried your method and for the statement wait.until(ExpectedConditions.visibilityOfElementLocated(By.id("frontCanvas"))); it is showing the TimeOut exception.
Edited the Answer, modified partial code as per your initial code which worked for you. Let me know the status.
Other than above suggestions,
Make sure that "Your Element" that you are looking for is physically visible on your browser screen. Meaning, if the element is not visible, the user needs to scroll down in order to see it. The driver behaves similarly. In order to prevent the "ElementNotVisible" exception, make sure it is also visible on the browser page.
Try This,
Select elm = new Select(driver.findElement(By.xpath("Your element's Absolute Xpath")));
//Thread.sleep(100);
elm.selectByValue("Dropdown Value which you want to select");
Yes, it is physically available on the screen and manually, I am able to perform all the actions but not through the automation.
Try to select it by using the "Absolute Xpath" of the element if you don't have the element's ID. If you are having this problem on a page where Angular 4+ is used, you might also try to update the Selenium version. Another approach could be waiting until Angular's views are loaded. If there is more than one view and you cannot detect it, try Thread.sleep before you execute Click on the element. I hope one of these approaches will solve your issue.
I read the answers to your question that should solve your problem.
But no one asked you (and you did not provide, in your question) information about the versions of MicrosoftWebDriver, Selenium, and the browser that you are using.
From the Microsoft WebDriver download page, are you using a version that is supported by your browser?
From EdgeHTML issue tracker, I found only 2 issues (that are fixed) with key element ElementNotVisibleException.
I am using Selenium 3.5.3 and the version of the Microsoft WebDriver is 4.15063 which matches with the build number of my PC. My PC has a Windows 10 Pro Edition, version is 1703 and OS Build is 15063.674.
@DG4 and the version of your browser? should be 15.15063
@DG4 I really think it could be a version problem with something (with Selenium 3.5.3, if we trust what Microsoft says). Could you try to update your browser to the latest version and try with the latest release of the Microsoft WebDriver? Anyway, if that doesn't work, I think there are grounds to open an issue in their issue tracker.
In these cases I usually take a screenshot just before the line where exception occurs and save it somewhere to further investigation. Then I mostly have to add a scrollUp()/scrollToVisible() function before this line to fix the problem.
I have exactly the same problem with GWT. For some reason there's no way to call click directly.
Workaround that worked for me
JavascriptExecutor jse = (JavascriptExecutor)driver;
jse.executeScript("arguments[0].click()", element);
Trouble installing Kinetic on Ubuntu 16.04
I've followed these instructions exactly:
http://wiki.ros.org/kinetic/Installation/Ubuntu
I've tried changing the Download from: setting in Software & Updates from Main Server to Server for United States.
If I use Main Server, I get an unmet dependencies error and ros-kinetic-desktop ros-kinetic-perception ros-kinetic-simulators ros-kinetic-urdf-tutorial are not going to be installed.
If I use Server for United States, I get 404 errors for several files and the install won't complete:
E: Failed to fetch http://us.archive.ubuntu.com/ubuntu/pool/main/m/mesa/libgles2-mesa_17.0.7-0ubuntu<IP_ADDRESS>_amd64.deb 404 Not Found [IP: <IP_ADDRESS> 80]
E: Failed to fetch http://us.archive.ubuntu.com/ubuntu/pool/main/w/wayland/libwayland-bin_1.12.0-1~ubuntu16.04.1_amd64.deb 404 Not Found [IP: <IP_ADDRESS> 80]
E: Failed to fetch http://us.archive.ubuntu.com/ubuntu/pool/main/w/wayland/libwayland-dev_1.12.0-1~ubuntu16.04.1_amd64.deb 404 Not Found [IP: <IP_ADDRESS> 80]
E: Failed to fetch http://us.archive.ubuntu.com/ubuntu/pool/main/m/mir/libmircore-dev_0.26.3+16.04.20170605-0ubuntu1_amd64.deb 404 Not Found [IP: <IP_ADDRESS> 80]
E: Failed to fetch http://us.archive.ubuntu.com/ubuntu/pool/main/m/mir/libmircommon-dev_0.26.3+16.04.20170605-0ubuntu1_amd64.deb 404 Not Found [IP: <IP_ADDRESS> 80]
E: Failed to fetch http://us.archive.ubuntu.com/ubuntu/pool/main/m/mir/libmircookie2_0.26.3+16.04.20170605-0ubuntu1_amd64.deb 404 Not Found [IP: <IP_ADDRESS> 80]
E: Failed to fetch http://us.archive.ubuntu.com/ubuntu/pool/main/m/mir/libmircookie-dev_0.26.3+16.04.20170605-0ubuntu1_amd64.deb 404 Not Found [IP: <IP_ADDRESS> 80]
E: Failed to fetch http://us.archive.ubuntu.com/ubuntu/pool/main/m/mir/libmirclient-dev_0.26.3+16.04.20170605-0ubuntu1_amd64.deb 404 Not Found [IP: <IP_ADDRESS> 80]
E: Failed to fetch http://us.archive.ubuntu.com/ubuntu/pool/main/m/mesa/libegl1-mesa-dev_17.0.7-0ubuntu<IP_ADDRESS>_amd64.deb 404 Not Found [IP: <IP_ADDRESS> 80]
E: Failed to fetch http://us.archive.ubuntu.com/ubuntu/pool/main/m/mesa/libgles2-mesa-dev_17.0.7-0ubuntu<IP_ADDRESS>_amd64.deb 404 Not Found [IP: <IP_ADDRESS> 80]
E: Failed to fetch http://us.archive.ubuntu.com/ubuntu/pool/main/c/curl/curl_7.47.0-1ubuntu2.2_amd64.deb 404 Not Found [IP: <IP_ADDRESS> 80]
E: Failed to fetch http://us.archive.ubuntu.com/ubuntu/pool/main/c/curl/libcurl4-openssl-dev_7.47.0-1ubuntu2.2_amd64.deb 404 Not Found [IP: <IP_ADDRESS> 80]
Everything else appears to install. Are these libs necessary?
Originally posted by crashtest84 on ROS Answers with karma: 3 on 2017-10-15
Post score: 0
These are weird errors; all of the 404s are not ROS-specific. Perhaps you should go back to the factory settings for the Ubuntu repos. When using the main server, can you output the full error, and ensure that sudo apt update works without error?
Originally posted by PeterMilani with karma: 1493 on 2017-10-15
Post score: 0
Comment by crashtest84 on 2017-10-15:
I think that actually answers my question. I have a horribly botched install of NVIDIA's CUDA toolkit that I haven't cleaned up yet. apt-get update fails because of this. Good to know these aren't ROS-specific.
Find the tangent plane for $\sqrt{1-x^2-y^2}$ at $(\frac{1}{2},\frac{1}{2},\frac{1}{\sqrt{2}})$
Find a function whose graph is the tangent plane to the graph at the indicated point. Sketch the graph of the function and the tangent plane near the point of tangency.
Tangent plane formula: $$z = f(a,b)+(x-a)\frac{\partial f}{\partial x}(a,b) + (y-b)\frac{\partial f}{\partial y}(a,b)$$
Use the formula for the following equation: $\sqrt{1-x^2-y^2}$ at $(\frac{1}{2},\frac{1}{2},\frac{1}{\sqrt{2}})$
Given that three points are required then:
$\sqrt{1-x^2-y^2}=z=1-x^2-y^2-z^2=0=x^2+y^2+z^2=1$ - which is a ball with radius 1
Then evaluating the partial derivative in respect to $(x,y,z)$, I get:
$\frac{\partial f}{x}=2x;\frac{\partial f}{y}=2y;\frac{\partial f}{z}=2z$
Calculating $f(a,b)=\frac{1}{2}+\frac{1}{2}+\frac{1}{\sqrt{2}}=\frac{1+2\sqrt{2}}{4}$
Putting the points into the differentiated variables we get:
$$\frac{\partial f}{x}=1;\frac{\partial f}{y}=1;\frac{\partial f}{z}=1$$
And finally placing everything into the equation:
$\frac{1+2\sqrt{2}}{4}+(x-\frac{1}{2})+(y-\frac{1}{2})+(z-\frac{1}{\sqrt{2}})$
I am unsure whether I have approached this correctly given the extra variable $z$ which is new to me.
Using an online graphing calculator I get:
You are using the formula you quoted incorrectly. $f(x,y)$ should be a function from the equation of the surface of the form
$$
z= f(x,y).
$$
We have the equation $z=\sqrt{1-x^2-y^2}$, thus, $f(x,y)=\sqrt{1-x^2-y^2}$. Further,
$$
\frac{\partial f}{\partial x}= \frac{-2x}{2\sqrt{1-x^2-y^2}},\quad
\frac{\partial f}{\partial y}= \frac{-2y}{2\sqrt{1-x^2-y^2}};
$$
$$
\frac{\partial f}{\partial x}\left(\frac12,\frac12\right)= \frac{-1/2}{\sqrt{1/2}}=-\frac1{\sqrt2},\quad
\frac{\partial f}{\partial y}\left(\frac12,\frac12\right)= \frac{-1/2}{\sqrt{1/2}}=-\frac1{\sqrt2}.
$$
Finally, according to the formula, the equation of
the tangent plane is
$$
z=\frac1{\sqrt2}+\left(x-\frac12\right)\cdot\left(-\frac1{\sqrt2}\right)+\left(y-\frac12\right)\cdot\left(-\frac1{\sqrt2}\right).
$$
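As a sanity check on the result above, the two partial derivatives can be verified numerically with central finite differences (a minimal Python sketch; the function name `f` and step size `h` are ours, not from the post):

```python
import math

def f(x, y):
    # Upper hemisphere: f(x, y) = sqrt(1 - x^2 - y^2)
    return math.sqrt(1 - x * x - y * y)

a, b = 0.5, 0.5
h = 1e-6

# Central differences approximate df/dx and df/dy at (a, b)
fx = (f(a + h, b) - f(a - h, b)) / (2 * h)
fy = (f(a, b + h) - f(a, b - h)) / (2 * h)

print(fx, fy)  # both approximately -1/sqrt(2), i.e. about -0.7071

# The tangent plane z = f(a,b) + fx*(x-a) + fy*(y-b) agrees with the
# surface at the point of tangency
def tangent(x, y):
    return f(a, b) + fx * (x - a) + fy * (y - b)

assert abs(tangent(a, b) - f(a, b)) < 1e-12
```

Both numerical partials come out near $-1/\sqrt2 \approx -0.7071$, matching the symbolic computation.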
Thanks for the clarification! I got a little carried away when the third point was introduced
There are a number of notation flaws that should be pointed out. First you mean "$f(x,y)= \sqrt{1- x^2- y^2}$" not just "$\sqrt{1- x^2- y^2}$". Second, while you can write $z= \sqrt{1- x^2- y^2}$ that is NOT equal to $1- x^2- y^2- z^2$ which is NOT equal to $x^2+ y^2+ z^2= 1$. Third, the graph is not a sphere. It is a hemisphere, that part of the sphere $x^2+ y^2+ z^2= 1$ that is above the xy-plane. Fourth, $\frac{1+ 2\sqrt{2}}{4}+ (x-\frac{1}{2})+ (y- \frac{1}{2})+ (z- \frac{1}{\sqrt{2}})$ is NOT an equation at all, there is no "=". Finally, the picture from your on-line graphing calculator clearly shows the plane cutting through the sphere not tangent to it!
You are given $z= \sqrt{1- x^2- y^2}$. That can be written as $z^2= 1- x^2- y^2$ or $x^2+ y^2+ z^2= 1$ as long as you keep in mind that z is non-negative. Yes that is a hemisphere and a tangent plane to a sphere is orthogonal to the radius at that point. The center of the hemisphere is (0, 0, 0) and $\frac{1}{2}\vec{i}+ \frac{1}{2}\vec{j}+ \frac{1}{\sqrt{2}}\vec{k}$ is the radial vector at $\left(\frac{1}{2}, \frac{1}{2}, \frac{1}{\sqrt{2}}\right)$. The tangent plane at that point is $\frac{1}{2}\left(x- \frac{1}{2}\right)+ \frac{1}{2}\left(y- \frac{1}{2}\right)+ \frac{1}{\sqrt{2}}\left(z- \frac{1}{\sqrt{2}}\right)= 0$.
You obtained the point of tangency correctly. It is $(\frac{1}{2}, \frac{1}{2}, \frac{1}{\sqrt2})$ on the hemisphere $x^2 + y^2 + z^2 = 1, z \geq 0$.
$\frac{\partial f}{\partial x}=2x = 1;\frac{\partial f}{\partial y}=2y = 1;\frac{\partial f}{\partial z}=2z = \sqrt2 \ $. Note: you made a mistake in the calculation of $\frac {\partial f}{\partial z}$. The rest are fine.
So equation of plane is,
$(x-\frac{1}{2}) + (y - \frac{1}{2}) + \sqrt2 (z - \frac{1}{\sqrt2}) = 0$
$x + y + \sqrt2 z = 2$
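The plane can also be checked against the geometric description in the other answer: it passes through the point of tangency, and its normal is parallel to the radius of the sphere at that point (a quick Python check; the tolerances are ours):

```python
import math

p = (0.5, 0.5, 1 / math.sqrt(2))  # point of tangency

# The point satisfies x + y + sqrt(2)*z = 2
lhs = p[0] + p[1] + math.sqrt(2) * p[2]
assert abs(lhs - 2) < 1e-12

# The plane's normal (1, 1, sqrt(2)) is exactly twice the radial
# vector (1/2, 1/2, 1/sqrt(2)), so the plane is orthogonal to the
# radius at p -- as a tangent plane to a sphere must be.
normal = (1.0, 1.0, math.sqrt(2))
for n, c in zip(normal, p):
    assert abs(n / 2 - c) < 1e-12

print("tangent plane checks pass")
```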
How to split the multiple json objects present in a json array in Apache NiFi?
Example Input is below:
I need to split JSON objects present in a JSON array into individual JSON files using Apache NiFi and publish it to a Kafka Topic. There are multiple JSON objects present in the below array
[
{
"stops": "1 Stop",
"ticket price": "301.20",
"days to departure": -1,
"date of extraction": "03/22/2019",
"departure": ", Halifax",
"arrival": ", Toronto",
"flight duration": "0 days 3 hours 58 minutes",
"airline": "Porter Airlines",
"plane": "DE HAVILLAND DHC-8 DASH 8-400 DASH 8Q",
"timings": [
{
"departure_airport": "Halifax, NS, Canada (YHZ-Stanfield Intl.)",
"departure_date": "03/22/2019",
"departure_time": "6:40pm",
"arrival_airport": "Ottawa, ON, Canada (YOW-Macdonald-Cartier Intl.)",
"arrival_time": "7:58pm"
},
{
"departure_airport": "Ottawa, ON, Canada (YOW-Macdonald-Cartier Intl.)",
"departure_date": "03/22/2019",
"departure_time": "8:30pm",
"arrival_airport": "Toronto, ON, Canada (YTZ-Billy Bishop Toronto City)",
"arrival_time": "9:38pm"
}
],
"plane code": "DH4",
"id": "8e6c69c8-65e0-4f1b-b540-ae61abf8aa6d"
},
{
"stops": "Nonstop",
"ticket price": "390.95",
"days to departure": -1,
"date of extraction": "03/22/2019",
"departure": ", Halifax",
"arrival": ", Toronto",
"flight duration": "0 days 2 hours 35 minutes",
"airline": "Air Canada",
"plane": "Boeing 767-300",
"timings": [
{
"departure_airport": "Halifax, NS, Canada (YHZ-Stanfield Intl.)",
"departure_date": "03/22/2019",
"departure_time": "7:40pm",
"arrival_airport": "Toronto, ON, Canada (YYZ-Pearson Intl.)",
"arrival_time": "9:15pm"
}
],
"plane code": "763",
"id": "fc13c5cb-93d1-46f9-b496-abbf6faba85a"
},
{
"stops": "Nonstop",
"ticket price": "391.33",
"days to departure": -1,
"date of extraction": "03/22/2019",
"departure": ", Halifax",
"arrival": ", Toronto",
"flight duration": "0 days 2 hours 30 minutes",
"airline": "WestJet",
"plane": "BOEING 737-700 (WINGLETS) PASSENGER",
"timings": [
{
"departure_airport": "Halifax, NS, Canada (YHZ-Stanfield Intl.)",
"departure_date": "03/22/2019",
"departure_time": "7:10pm",
"arrival_airport": "Toronto, ON, Canada (YYZ-Pearson Intl.)",
"arrival_time": "8:40pm"
}
],
"plane code": "73W",
"id": "4d49c24b-6fb0-4f45-ba05-a3969ce7308a"
}
]
Needed Output:
Individual JSON objects like below. I would like to post each JSON object to a Kafka topic.
{
"stops": "Nonstop",
"ticket price": "390.95",
"days to departure": -1,
"date of extraction": "03/22/2019",
"departure": ", Halifax",
"arrival": ", Toronto",
"flight duration": "0 days 2 hours 35 minutes",
"airline": "Air Canada",
"plane": "Boeing 767-300",
"timings": [
{
"departure_airport": "Halifax, NS, Canada (YHZ-Stanfield Intl.)",
"departure_date": "03/22/2019",
"departure_time": "7:40pm",
"arrival_airport": "Toronto, ON, Canada (YYZ-Pearson Intl.)",
"arrival_time": "9:15pm"
}
],
"plane code": "763",
"id": "fc13c5cb-93d1-46f9-b496-abbf6faba85a"
}
Have you try anything? https://stackoverflow.com/help/on-topic
I don't see any difference between the two JSONs.
I'm sorry. I modified the input now. Thank you!
You can use the SplitJson processor. It splits a JSON array of messages into individual messages, one per flowfile: if your JSON array has 100 messages in it, the splits relation of SplitJson will output 100 flowfiles, each containing one message.
The JsonPath expression is $.*
https://community.hortonworks.com/questions/183055/need-to-display-each-element-of-array-in-a-separat.html
this is fine, but the issue with SplitJson processor is that each item is now in an individual flow file causing you to post individual messages to Kafka. Is there a way to keep these JSON in a single flow file separated by a newline ?
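For reference, here is a rough plain-Python sketch of what SplitJson with JsonPath `$.*` produces (one JSON document per array element), plus the newline-delimited alternative asked about in the comment above. The trimmed sample records are ours:

```python
import json

flights = json.loads("""
[
  {"airline": "Porter Airlines", "plane code": "DH4"},
  {"airline": "Air Canada",      "plane code": "763"},
  {"airline": "WestJet",         "plane code": "73W"}
]
""")

# What SplitJson emits: one flowfile (Kafka message) per array element
messages = [json.dumps(item) for item in flights]
for m in messages:
    print(m)

# Alternative raised in the comment: keep everything in one flowfile
# as newline-delimited JSON (one record per line), so a single
# publish call carries all records
ndjson = "\n".join(messages)
print(len(ndjson.splitlines()))  # 3
```

Whether one message per record or one NDJSON flowfile is better depends on how the Kafka consumer expects to parse the payload.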
This is an old post, but I still want to add my suggestions. Firstly, @OneCricketeer is correct that you have to use the SplitJson processor, but the JsonPath expression is very important there.
As per the JSON provided by @Meghashaym, I would suggest wrapping the array in one object like below:
{"payload":[
{
"stops": "1 Stop",
"ticket price": "301.20",
"days to departure": -1,
"date of extraction": "03/22/2019",
"departure": ", Halifax",
"arrival": ", Toronto",
"flight duration": "0 days 3 hours 58 minutes",
"airline": "Porter Airlines",
"plane": "DE HAVILLAND DHC-8 DASH 8-400 DASH 8Q",
"timings": [
{
"departure_airport": "Halifax, NS, Canada (YHZ-Stanfield Intl.)",
"departure_date": "03/22/2019",
"departure_time": "6:40pm",
"arrival_airport": "Ottawa, ON, Canada (YOW-Macdonald-Cartier Intl.)",
"arrival_time": "7:58pm"
},
{
"departure_airport": "Ottawa, ON, Canada (YOW-Macdonald-Cartier Intl.)",
"departure_date": "03/22/2019",
"departure_time": "8:30pm",
"arrival_airport": "Toronto, ON, Canada (YTZ-Billy Bishop Toronto City)",
"arrival_time": "9:38pm"
}
],
"plane code": "DH4",
"id": "8e6c69c8-65e0-4f1b-b540-ae61abf8aa6d"
},
{
"stops": "Nonstop",
"ticket price": "390.95",
"days to departure": -1,
"date of extraction": "03/22/2019",
"departure": ", Halifax",
"arrival": ", Toronto",
"flight duration": "0 days 2 hours 35 minutes",
"airline": "Air Canada",
"plane": "Boeing 767-300",
"timings": [
{
"departure_airport": "Halifax, NS, Canada (YHZ-Stanfield Intl.)",
"departure_date": "03/22/2019",
"departure_time": "7:40pm",
"arrival_airport": "Toronto, ON, Canada (YYZ-Pearson Intl.)",
"arrival_time": "9:15pm"
}
],
"plane code": "763",
"id": "fc13c5cb-93d1-46f9-b496-abbf6faba85a"
},
{
"stops": "Nonstop",
"ticket price": "391.33",
"days to departure": -1,
"date of extraction": "03/22/2019",
"departure": ", Halifax",
"arrival": ", Toronto",
"flight duration": "0 days 2 hours 30 minutes",
"airline": "WestJet",
"plane": "BOEING 737-700 (WINGLETS) PASSENGER",
"timings": [
{
"departure_airport": "Halifax, NS, Canada (YHZ-Stanfield Intl.)",
"departure_date": "03/22/2019",
"departure_time": "7:10pm",
"arrival_airport": "Toronto, ON, Canada (YYZ-Pearson Intl.)",
"arrival_time": "8:40pm"
}
],
"plane code": "73W",
"id": "4d49c24b-6fb0-4f45-ba05-a3969ce7308a"
}
]}
Now I am using a JsonPath finder to view the JSON structure. When we click on the payload object, we can see the array items at the path x.payload.
In this case, you can use $.payload[*] as the expression in the processor, and set the Primary Node For Execution option under the Scheduling tab.
This should queue up the individual items in the queue list. So basically we are parsing each element of the array object.
Proof of the Compactness Theorem for propositional logic by topological compactness
I'm reading Russ's proof of the compactness theorem, wherein we suppose $\Gamma = \{\phi_1, \phi_2,...\}$ is a set of propositional sentences with all of the propositional variables chosen from $V = \{p_1,p_2,...\}$. We identify $2^V$ as the set of all truth-assignments--I take it this means that although $2^V$ is the power-set of $V$, an element like $\{p_1,p_3,p_5,...\}$ would be identified with the truth-assignment $\tau$ in which $\tau(p_1)=T$ and $\tau(p_2)=F$ and so on. I believe he also says to identify these with $X\times X\times ...$ where $X=\{T,F\}$ so that in my example this truth-assignment is identified with the tuple $(T,F,T,F,T,...)$, and to give this the product topology. He notes that, by the Tychonoff theorem $2^V$ is compact. From here he defines $D_\phi = \{\tau | \tau \vDash \phi\}$ and notes that this is always both open and closed.
Getting to the proof of compactness, we want to show that if every finite subset of $\Gamma$ has a model then so does $\Gamma$. He points out that $\Gamma$ has a model if and only if $\cap_{\phi\in\Gamma}D_\phi\ne\emptyset$. So far I'm good with all this.
Next he says that, due to the compactness of $2^V$, this is equivalent to $\cap_{\phi\in\Gamma_0}D_\phi\ne\emptyset$ for every finite $\Gamma_0\subset\Gamma$. Here's where I get lost. How does compactness get us this, where are we using open covers?
I have a suspicion about what the open covers are but I'm not seeing exactly the move, still. I suspect that the open sets in the cover are, for any given $\Gamma_0$ and $\phi\in\Gamma_0$, the set of all associated $D_\phi$ and $2^V-D_\phi$, hence why he remarked that $D_\phi$ is both open and closed. How does the fact that every open cover has a finite subcover connect to these particular finite subcovers?
Google for the Finite Intersection Property, a well-known and extremely useful characterization of compactness. It should be explained in every textbook on general topology.
One of the key theorems that characterize compactness says that a topological space $X$ is compact if and only if every collection $\mathcal{C}$ of closed subsets of $X$ such that the intersection of every finite subset of $\mathcal{C}$ is nonempty, has nonempty intersection.
This theorem provides the link between the definition of compactness in terms of finite subcovers and the view of compactness that we use when we claim that propositional logic enjoys compactness.
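To make the open-cover step explicit, here is the contrapositive of the finite intersection property, spelled out with the $D_\phi$ from the post:

```latex
Suppose $\bigcap_{\phi\in\Gamma} D_\phi = \emptyset$. Each $D_\phi$ is
closed, so each complement $U_\phi := 2^V \setminus D_\phi$ is open, and
$$
  \bigcup_{\phi\in\Gamma} U_\phi
  \;=\; 2^V \setminus \bigcap_{\phi\in\Gamma} D_\phi
  \;=\; 2^V ,
$$
i.e.\ the $U_\phi$ form an open cover of the compact space $2^V$. By
compactness there is a finite subcover $U_{\phi_1},\dots,U_{\phi_n}$,
and taking complements again gives
$$
  D_{\phi_1} \cap \cdots \cap D_{\phi_n} = \emptyset ,
$$
so the finite subset $\Gamma_0 = \{\phi_1,\dots,\phi_n\}$ already has
no model. Contrapositively: if every finite $\Gamma_0 \subset \Gamma$
has a model, then $\bigcap_{\phi\in\Gamma} D_\phi \neq \emptyset$, and
any $\tau$ in that intersection is a model of all of $\Gamma$.
```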