No content or articles shown after migration to Joomla 2.5
Sir,
I upgraded my current Joomla 1.5.14 site to the latest version, 1.5.26. It completed successfully. After that I migrated the site to Joomla 2.5 with redMigrator. The migration finished, but now no content or articles show in my migrated Joomla 2.5 site. I migrated the site on localhost.
Please anyone help me.
Your component can migrate data from Joomla 1.5 to Joomla 2.5, but not your template. So just look around the template files: first check in your backend whether you've got all the data or not. If yes, then go for the template.
C++ Pre-compiled header and included file organization
I have a very large Native C++ project with hundreds of classes each defined in their own .cpp and .h file. Nothing weird here. I have all of the classes declared in my pre-compiled header stdafx.h file and also included in the stdafx.h file as shown below (to allow Foo to reference Bar AND Bar to reference Foo).
Is this organization bad practice?
Essentially, all classes are declared and included in every object. I assume the compiler is not going to generate extra binary by having things included which are not necessary for compiling the particular .cpp object file. As of now I'm not having any problem with compile time either. I'm just wondering if this is a bad organization for the code.
I like this layout because the stdafx.h file is essentially a code map, since all classes are declared there and it reveals the nested namespaces in an easy-to-read format. It also means that my individual class .h files don't require several include statements, which means less code for me to maintain when making changes to filenames or directories.
stdafx.h:
#pragma once
namespace MyProject
{
class Foo;
class Bar;
}
#include "Foo.h"
#include "Bar.h"
foo.h:
#pragma once
namespace MyProject
{
class Foo
{
// Declare class members, methods, etc
}
}
foo.cpp:
#include "stdafx.h"
namespace MyProject
{
class Foo
{
// Define class members, methods, etc
}
}
No, it just does not compile. Did you perchance forget include-guards?
But now you've got to update stdafx.h every time you add or remove a class (this means recompiling the entire project), and at least some of your classes won't compile outside the context of one project's stdafx since they don't forward declare the things they need.
Ah, add a #pragma once to each header.
Yes it is bad organization. Precompiled headers are for speeding up compile times. You have to at least do a back of the napkin calculation for which headers are rarely changed and only include those. You gain exactly 0 performance by just including everything in the precompiled header.
I should also clarify that the circular reference of the classes is a must.
What bothers me is that stdafx.h includes Foo.h, and Foo.h includes stdafx.h. That doesn't look very good to me. (From what I understand, stdafx.h should not be included from other headers; only from source files.)
Ah, yeah, I copied this incorrectly when I simplified my work for the question. It's clarified now.
In my mind there are 4 key factors here:
Compile Time: Keep in mind that a #include "x.h" embeds all the code in x.h at that line. If the project is going to grow substantially have a care for how you're going to impact future compile times.
Foo and Bar Stability: If Foo and Bar are changing you're going to have to keep recompiling stdafx.h
Circular Definitions: You will no longer be able to break circular definitions with forward declares.
Explicit Dependencies: You won't be able to look at your includes and know what is used in a particular file.
If you feel your pros outweigh the cons listed here, there's nothing illegal about what you're doing so go for it!
To clarify, external dependencies are included in the individual .cpp files to track includes in each class. Only internal includes or includes used consistently everywhere are in the precompiled header.
Compile Time is not an issue at this point. stdafx.cpp takes about 5 seconds and the rest of the .cpp files take about an additional 5 seconds total. No problem here since I am not including many externals. 2. As stated before, I currently don't care about having to recompile the stdafx.h because it's fast. 3. Circular definitions are a must. Is there a better way to do what I'm doing? 4. External dependencies are included within each .cpp file.
@RussellTrahan On 2 I'd agree with AndyG's comment, why even do precompiled headers then? You could just use /FI with all your headers at that point and remove some complexity. As far as 3 it means that your includes have to go in your implementation file, not your header. And you can't accomplish that if all of your headers are in stdafx.h. See this for more info: http://en.wikipedia.org/wiki/Circular_dependency#Example_of_circular_dependencies_in_C.2B.2B
I agree that I am perhaps exploiting the pre-compiled header, but the fact is that many of my changes are in the definitions, not declarations. In most of my recompile events, I am not recompiling the stdafx.h. I could use the forced include feature which I wasn't aware of but I'm unsure of what this will gain me. Also, I think all of the commenters don't realize that the circular references DO work as I have written it (Foo CAN reference Bar).
Anyway, I am marking this as the answer because what I'm gathering is that the code here is perhaps unorthodox but does not incur any penalty at run-time which is part of what I asked in my original question. Personally, I think that laying out a single file as a code-map like this is more beneficial than a possible but unlikely change in the compile time (based on how I edit this project).
I agree that if header files are changing, stdafx.h (precompiled headers) should not be used. With modern computers, the build time difference may not even exist.
@RussellTrahan I'd encourage you to have one more look at 3. I can't understand how you'd solve that. If A has-a B and B has-a A if you just put #include "A.h" at the top of A's header and #include "B.h" at the top of B's header, how will you break that circular reference? Again this isn't to say you shouldn't do this, but if you'll need circular references this might not work.
It already does work. This project is already far underway. The #pragma once directive prevents the multiple inclusions, and because I declare the classes before including the definitions in the stdafx.h file (as posted here in the question) the circular reference is not an issue.
A header should include everything it needs, no more, no less.
That's a basic requirement for ease-of-use and simple encapsulation.
Your example explicitly fails that point.
To detect violations, always include the corresponding header first in the translation-unit.
This is slightly modified if you use precompiled headers, in which case they have to be included first.
Pre-compiled headers should only include rarely changing elements.
Any change to the pre-compiled header or its dependencies will force re-compilation of everything, which runs counter to using one at all.
Your example fails that point too.
All in all, much to improve.
So is a short stdafx.h file really more important than eliminating hundreds of #include statements scattered throughout my project? I figured this eliminates most of the internal #include references. stdafx.cpp takes about 5 seconds to compile--no problem there.
If you go by that, why don't you just include everything into one big "monster.cpp" (headers and source-files), and compile that. Should not be too long if re-compiling everything is not an issue anyway... And the important one was not making it short, but making sure it (including dependencies) rarely changes, because that forces recompilation of everything.
@RussellTrahan: You might want to look here: Reducing dependencies http://stackoverflow.com/q/23091601
Change ToolTip font
I need a tooltip with custom Font.
I have the following code, and this works... but the tooltip size does not fit the text.
Where is the error?
Public Class KeolisTooltip
Inherits ToolTip
Sub New()
MyBase.New()
Me.OwnerDraw = True
AddHandler Me.Draw, AddressOf OnDraw
End Sub
Private _Font As Font
Public Property Font() As Font
Get
Return _Font
End Get
Set(ByVal value As Font)
_Font = value
End Set
End Property
Public Sub New(ByVal Cont As System.ComponentModel.IContainer)
MyBase.New(Cont)
Me.OwnerDraw = True
AddHandler Me.Draw, AddressOf OnDraw
End Sub
Private Sub OnDraw(ByVal sender As Object, ByVal e As DrawToolTipEventArgs)
Dim newArgs As DrawToolTipEventArgs
If _Font Is Nothing Then
newArgs = e
Else
Dim newSize As Size = Size.Round(e.Graphics.MeasureString(e.ToolTipText, Me._Font))
Dim newBounds As New Rectangle(e.Bounds.Location, newSize)
newArgs = New DrawToolTipEventArgs( _
e.Graphics, _
e.AssociatedWindow, _
e.AssociatedControl, _
newBounds, _
e.ToolTipText, _
Me.BackColor, _
Me.ForeColor, _
Me._Font)
End If
newArgs.DrawBackground()
newArgs.DrawBorder()
newArgs.DrawText()
End Sub
End Class
I don't see where you are actually instantiating the font.
@JustBoo: remark in the code "If _Font Is Nothing Then"
Size.Round (from the MSDN page)
Converts the specified SizeF structure to a Size structure by rounding the values of the SizeF structure to the nearest integer values.
(my emphasis).
Therefore, if
e.Graphics.MeasureString(e.ToolTipText, Me._Font)
produces values of 23.4 and 42.1 (say) then they will be rounded to 23 and 42 respectively so your tooltip will be slightly too small.
So, what solution do you propose?
@serhio - you need to round up to the next highest integer (or even add a bit more padding).
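Following up on that comment, a minimal sketch of the fix inside OnDraw (hedged: this just swaps Size.Round for Size.Ceiling, which rounds each dimension up; the padding values are illustrative, not from the original code):

```vbnet
' Round up instead of to the nearest integer, so the measured
' text never exceeds the tooltip bounds.
Dim measured As SizeF = e.Graphics.MeasureString(e.ToolTipText, Me._Font)
Dim newSize As Size = Size.Ceiling(measured)
' Optionally add a little padding (illustrative values):
newSize.Width += 4
newSize.Height += 2
```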
Could you try adding the resizing logic in the OnResize event in addition to the OnDraw event? I think you will get the correct values in that event. Just try it and let me know if it works.
Is it better to prove a generalization of a theorem before presenting specific cases?
In (mostly) mathematical publications, when proving a theorem that can be generalized to a wider range of parameters, is it generally considered better practice to:
Present a special case of a theorem first, and then prove it in a general context, or
Prove the general case and then move on to present one (or more) special cases?
The first approach makes understanding the theorem easier, and makes the paper quicker to read. The second one looks more "rigorous", but is much more difficult to understand for more complex theorems and formulae.
What is generally considered the better practice?
There is no "general" answer here. It always depends on circumstances:
Is the general proof really hard and difficult to understand?
Is the general proof of interest, or does only the special case have applications (yet)?
Can the general proof be done in the same way as the special case, just with uglier notation, more indices, etc.?
Is the generalization natural or does it require work to actually show that the special case does indeed follow from the general one?
Is the special case the only application of the general theorem, or are there other corollaries of interest?
...
This is just a small part of the questions to consider, depending on what the theorem and the proof looks like. In some cases, it might be good to use the special case to motivate the generalization, especially if this is a new idea and you are the first to look at this general case.
On the other hand, people want to see the benefits of the generalized version, so it would be best to get results from it that go beyond just showing a special case.
I would suggest leaving the choice of which proof to read to the reader. So if you have a really easy proof for the special (and most interesting) case and the general proof is really hard, you should give the special case first and let the reader decide if he wants to skip the general one or not. I sometimes see phrases like "readers only interested in the case ... can safely skip to section X". Of course this has to be properly organized, such that section X is still readable even if the general proof got skipped. In this way, it is easy to see and understand the basic concepts of your paper and how you showed the special case, and there is always the opportunity to come back later to study the general one (or to give it to a student for a bachelor thesis...). Note, however, that this is only my personal opinion and approach, this is not a general consensus.
"it might be good to use the special case to motivate the generalization" is a very good way to phrase exactly what I was thinking. Thanks for the detailed answer.
One example where the specific case is usually used/taught/shown is the formula for gravitational acceleration: the general case is a curve, and the case that happens on Earth corresponds to one of that curve's tangents, so for non-extraterrestrial examples the formula for that tangent is used, despite being a special case.
I would add: you should think about your target audience.
Spring MVC RestController scope
I have the following Spring controller:
package hello;
import java.util.concurrent.atomic.AtomicLong;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;
@RestController
public class TestController {
private final AtomicLong counter = new AtomicLong();
@RequestMapping("/test")
public String test() {
long val = counter.incrementAndGet();
return String.valueOf(val);
}
}
Each time I access the REST API, it returns an incremented value.
I am just learning Java and I am wondering why it does not always return 1, as I assumed a new instance of AtomicLong would be created each time a request comes in.
Why do you think that it's creating a new instance?
@chrylis: I am originally from .net background and just had a comparison with it.
No, the TestController bean is actually a singleton. @RestController annotation declares a Spring @Component whose scope is by default SINGLETON. This is documented in the @Scope annotation:
Defaults to an empty string ("") which implies SCOPE_SINGLETON.
This means that it will be the same instance of TestController that handles every request. Since counter is an instance variable, it will be the same for every request.
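The difference can be sketched in plain Java, outside Spring (hypothetical Counter class, standing in for the controller bean): one shared instance keeps incrementing, which mirrors the singleton-scoped bean, while a fresh instance per call would always return "1".

```java
import java.util.concurrent.atomic.AtomicLong;

// Stand-in for the @RestController bean: the AtomicLong lives as long
// as the instance does, so a reused instance accumulates state.
class Counter {
    private final AtomicLong counter = new AtomicLong();

    String test() {
        return String.valueOf(counter.incrementAndGet());
    }
}

public class ScopeDemo {
    public static void main(String[] args) {
        Counter shared = new Counter();           // like the singleton bean
        System.out.println(shared.test());        // 1
        System.out.println(shared.test());        // 2 -- state survives calls

        System.out.println(new Counter().test()); // 1 -- fresh instance each time
    }
}
```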
Is it a good practice to keep it singleton or set the scope to so-called prototype?
@BabuJames For a Controller, I would say it is better to keep it a singleton. Controllers are not typically stateful so it makes sense to make them singleton.
A @RestController is not created for each request, it remains the same for every request. So your counter keeps its value and is incremented each time.
How to bind all functions of 'Shift' button to another one?
I have a keyboard with broken Shift button, but I have some additional buttons like Mail, Home etc.
I want to make my Mail button act as a Shift button (because I previously used Windows with software for my keyboard which made this possible).
I have already read this answer, but it isn't enough. As I understand it, I need to write in .xbindkeysrc something like
"xte 'keydown Shift'"
"xte 'keydown XF86Mail'"
"xte 'keyup Shift'"
"xte 'keyup XF86Mail'"
and it didn't work. But the code below works, in a way: after pressing and releasing the Mail button, both Shift keys remain pressed (as I checked using a virtual keyboard).
"xte 'keydown Shift_L'"
XF86Mail
How can I use one of the additional buttons as Shift?
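For reference, a hedged sketch of what a complete ~/.xbindkeysrc pair might look like for this, assuming xbindkeys' Release modifier and the xte tool from xautomation (untested, not from the thread):

```
# hold Shift down while the Mail key is pressed
"xte 'keydown Shift_L'"
    XF86Mail

# lift Shift when the Mail key is released
"xte 'keyup Shift_L'"
    Release + XF86Mail
```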
Remove duplicated firewall rules
SimpleFC::AdvAddRule from NSIS Simple Firewall Plugin will add a duplicate rule if the exact rule already exists.
SimpleFC::AdvRemoveRule [name] will remove one entry, but not all of them.
What is a good way to remove duplicated firewall rules?
You could probably use a loop and remove one by one until SimpleFC::AdvExistsRule is false if they all have the same name.
I was hoping for a one liner, such as AdvRemoveRules or a different plugin that handles duplicate adv rules. A loop will work though, thanks!
Providing normal users(non-root) with initialization and shutdown auto-run capabilities
I'm hosting an experimental/testing Linux box, running Debian Wheezy 7.4.0 distribution. Different users log into the machine over ssh to their accounts and are allowed to run the development tools and leave their programs running as services in background if they so wish.
Since this is a testing machine for all kinds of purposes there is often a need to restart the whole machine and then the users have to log back in and restart their user-space stuff that was running.
I would like to automate that. Basically I would like to provide the users with a means to launch stuff right after the machine boots up (after everything else is initialized) and a means to launch stuff upon system shutdown (with no time limitations, basically stalling the shutdown until all those shutdown user processes have completed).
What I have tried so far:
I've created an init bash script, by following the principles found in the 'skeleton' template file under /etc/init.d/ (Skeleton template source code: https://gist.github.com/ivankovacevic/9917139)
My code is here:
https://github.com/ivankovacevic/userspaceServices
Basically the script goes through users' home directories and looks for executable files in corresponding subdirectories named .startUp, .shutDown or .status. Depending on the event that is currently going on, the scripts get executed with su as if the users had started them themselves.
The problem I'm currently facing with this approach is that there is a strange process left hanging after the system boots and the script starts all the processes of other users. This is how it looks in the processes list:
UID PID PPID C SZ RSS PSR STIME TTY TIME CMD
root 3053 1 0 1024 620 1 17:42 ? 00:00:00 startpar -f -- userspaceServices
I don't know what that process is and man page for it does not mention the -f argument. So I'm clueless but I must be doing something wrong since no other script/service from init.d leaves such a process hanging after boot.
So I'm looking for someone to help me debug this solution I have(which also seems a bit complex in my opinion). Or give me some idea how this could be implemented in an entirely different way.
UPDATE
I've started a separate question for the startpar issue:
startpar process left hanging when starting processes from rc.local or init.d
UPDATE 2
Problem solved for my original solution. Check the previously mentioned question for startpar. The code on GitHub is also corrected to reflect that.
UPDATE 3 - How to use crontab
As Jenny suggested, regular users can schedule tasks to be executed once upon boot using crontab. I find that to be the easiest method, if all you need is starting user tasks on boot and not shutdown. However, there is a drawback: users can leave the cron process "hanging" as a parent when they launch ongoing, service-like tasks. First let me just explain how it works:
regular users themselves should call:
crontab -e
( -e as in edit )
Which opens a default console text editor with their user crontab file. To add a task to be executed at boot, a user must add one line at the end of the file:
@reboot /path/to/the/executable/file
Now, if the user does just that, and if that file is not just a simple script that linearly completes something and ends but, for example, some sort of watchdog, then after a reboot you would end up with something like this in your process list:
1 2661 root 20 0 20380 860 660 S 0.0 0.0 0:00.00 ├─ /usr/sbin/cron
2661 2701 root 20 0 33072 1152 868 S 0.0 0.0 0:00.00 │ └─ /USR/SBIN/CRON
2701 2944 someuser 20 0 4180 580 484 S 0.0 0.0 0:00.00 │ └─ /bin/sh -c ./watchdog
2944 2945 someuser 20 0 10752 1204 1016 S 0.0 0.0 0:00.00 │ └─ /bin/bash ./watchdog
2945 2946 someuser 20 0 23696 4460 2064 S 0.0 0.1 0:00.01 │ └─ /usr/bin/python ./some_program.py
To avoid that the user needs to modify his crontab entry to look like this:
@reboot /path/to/the/executable/file >/dev/null 2>&1 &
The redirections of file descriptors are optional but recommended to keep it clean. If you want to study why, try looking at them:
ls -l /proc/pid_of_started_process/fd
You should post your "solutions" as answers, not as edits to your question.
Well I always have doubts whether that is fair towards other people that would like to answer.
I agree that your solution seems a bit complex, so I'll go with "give me some idea how this could be implemented in an entirely different way" :-)
The standard solution for this is to use a configuration management system, such as puppet, and allow users to add their stuff to the puppet config for the server. Puppet will then push out the start script and add them to the relevant runlevels.
A quicker way would be to give them sudoedit access to /etc/rc.d/rc.local and add their things there.
Or give them each a directory to put the start scripts they want started, and have a cron job copy those scripts to /etc/init.d, inserting su $USER -c at suitable places and run chkconfig on them.
Or give them each a directory to put the start scripts, and add some lines at the end fo /etc/rc.d/rc.local to go through those directories and run edited su $USER -c 'script start' on each script in them.
Edited to add:
5. Let them use crontab to schedule the jobs to be run @reboot
The issue with your suggestions is that they allow users to edit scripts which would be executed as root during boot or shutdown. Thus without me inspecting these scripts every time the machine needs to reboot, I can not be sure that some code provided by users will not modify(harm) the system in unwanted ways.
That constraint wasn't in the spec... You could still do that with puppet, either by checking scripts before deploying them or by changing puppet config to deploy those scripts to run as user. But I also changed point 3 and 4 and added a point 5 in response to that.
:) thanks for your edits, don't take it too personally :). Puppet seems like just another layer of complexity on the topic. The number 4 idea stands. That's what I'm testing right now. BUT I still see this problem with the startpar -f process when I do that. Now I see it in the processes list as: startpar -f -- rc.local and all I have added in rc.local is one line to start a simple watchdog bash script in one of the users' directories exactly as per your number 4 suggestion. I now really would like to figure out what the heck that process is. Maybe I should start another separate question for that...
I forgot to say, crontab @reboot seems like a nice solution. However I would also like something right before shutdown. Can crontab be used for that also?
I don't take it personal, I enjoy figuring things out which is why I hang around here... Yes, puppet adds complexity, so when you're using just one server it may be too much - on the other hand, this kind of thing is one of the things it's meant for.
as for crontab at shutdown - no, it doesn't have that. But you could add a K99 script that looks at every user's crontab, checks for @reboot and runs the same script with su -c $USER 'crontabentry stop' - a little contrived, but doable.
I should add that this is a fun problem, probably more so for me who doesn't get her server cracked if it comes out wrong than for you :-)
hahaha, yup. And this with K99 script for shutdown could work, thanks for the ideas! Now I'm basically just wondering what the heck is(was) that startpar -f and why I can not avoid it like any other normal init script does. I take that personal! :D
Aha yes, I also wanted to ask you, will the crontab @reboot job also work in case of a shutdown or system crash? I found a comment on one site saying that it works only after an issued reboot?! Seems nonsensical to me.
It should work at every boot, not just an issued reboot. I have never heard of a distribution where it didn't work that way.
Thanks for all the effort you've put into this answer. I have accepted it. I've opened a new question regarding startpar: http://serverfault.com/questions/585975/startpar-process-left-hanging-when-starting-processes-from-rc-local-or-init-d
Thanks for the link, it was interesting to know the cause! (I've upvoted your answer - it's not unusual that non-redirected output causes a process to not be successfully daemonized, but I didn't know enough about startpar to pick that up immediately. The next person who finds that question will definitely be helped by your answer.)
Thanks for the up-vote! :) I've also added a little update to my question here, regarding the crontab method.
Find $a$ and $b$ in a 4 equation system
$a, b \in\mathbb{R}$. I have four equations:
$$x+3y-2z+t=-3$$
$$3x+11y+az+5t=2$$
$$3x+12y-6z+6t=b$$
$$4x+15y-8z+8t=-5$$
I have to find out the values of $a$ and $b$ where the system is solvable (has exactly 1 solution).
I also have to find out what values of $a$ and $b$ make the system have infinite solutions and no solutions at all (unsolvable). I know I'm asking for a lot of answers, but this is something where I have absolutely no idea how to solve this, I'd really appreciate the help.
What matrix transformations have you applied so far?
Do you mean exactly One Solution Set?
@SufyanNaeem yes, that is what I meant
@Nejc if you contact me on Facebook I can give you the Wolfram Mathematica page with the algebraic explanation
@JanEerland just did.
The wrong Jan xD
@NejcPisk you have to request the one with two (drawn) people and a blue background :)
Yeah, I've sent you a message, I can't actually add you.
I don't get anything
I've made a system of equations which gave me this:
Solution 1:
$$t=-\frac{19}{2},y=15,z=\frac{1}{4}(2x+77),b=\frac{309}{2}, a=-6$$
Solution 2:
$$t=\frac{4}{5}-\frac{b}{15}, x=-\frac{9a(b+38)+58b+1434}{45(a+6)}, y=\frac{1}{45}(4b+57),z=\frac{309-2b}{45(a+6)},a+6\ne0$$
JS not working after doing custom css on Wordpress Site
I am currently in the process of developing an online shop via wordpress. Everything was working fine, now I wanted to give my page a custom border( inverted round corners) and found the css code for it as seen here:
css:
body {
background-color: #fff;
}
.wrapper {
overflow:hidden;
width:200px;
height:200px;
}
div.inverted-corner {
box-sizing:border-box;
position: relative;
background-color: #3e2a4f;
height: 200px;
width: 200px;
border: solid grey 7px;
}
.top, .bottom {
position: absolute;
width: 100%;
height: 100%;
top:0;
left:0;
}
.top:before, .top:after, .bottom:before, .bottom:after{
content:" ";
position:absolute;
width: 40px;
height: 40px;
background-color: #fff;
border: solid grey 7px;
border-radius: 20px;
}
.top:before {
top:-35px;
left:-35px;
}
.top:after {
top: -35px;
right: -35px;
box-shadow: inset 1px 1px 1px grey;
}
.bottom:before {
bottom:-35px;
left:-35px;
}
.bottom:after {
bottom: -35px;
right: -35px;
box-shadow: inset 1px 1px 1px grey;
}
html:
<div class="wrapper">
<div class="inverted-corner">
<div class="top"> </div>
<h1>Hello</h1>
<div class="bottom"> </div>
</div>
</div>
I renamed the classes to avoid conflicts with the existing CSS classes of the theme. It is working fine as seen here: my site. The problem is now that I cannot interact with the site anymore, no links, no hover effects. It seems like the custom CSS is overlaying the actual site. Do you have any suggestions what I maybe did wrong?
P.S. I edited header.php so that the inverted-corner div and the top div are right underneath the page-wrapper div (site content), and in footer.php I placed the closing tags for the top div and the inverted-corner div right above the page-wrapper div's closing tag.
The DIV .bottom-corner is overlapping all of your site. Setting pointer-events: none; would fix it, but older browsers don't support this property.
@Rob You meant all browsers except IE<11 and opera mini: http://caniuse.com/#feat=pointer-events
In your custom.css you have this:
.top-corner, .bottom-corner {
position: absolute;
width: 100%;
height: 100%;
top:0;
left:0;
}
This basically overlays the whole page and thus disables any interaction.
That's the diagnostic, not the solution :)
This is a direct answer to the OP question "Do you have any suggestions what I maybe did wrong?"
Ahh this is actually making sense now :D
FYI, the cause of such type of issues is easily spotted with the Inspector type tool in your browser's Developer tools. E.g. https://developer.chrome.com/devtools/docs/dom-and-styles, https://developer.mozilla.org/en/docs/Tools/Page_Inspector
Add :
pointer-events: none;
to the .bottom-corner CSS, so the mouse passes through.
Please note that this will not work on IE10 or lower. http://caniuse.com/#search=pointer-events
Oh, I forgot this damn IE. Screw people who still use IE<10.
Thank you so much. This single line fixed it :D
Nice :) Please validate the answer if you're happy with it. (The "answer" you validated, for me, is the diagnostic, but it brings no solution at all. It just tells you what you did wrong, but brings no fix. My answer is the fix.)
One other option I would like to suggest to change following css rule
CSS
.top-corner, .bottom-corner {
position: absolute;
width: 100%;
height: 100%;
top:0;
left:0;
}
Replace above code with the below one
.top-corner, .bottom-corner {
position: absolute;
width: 100%;
}
This solution will work on all modern browsers and IE8 and above (I'm not sure about lower versions of IE, but it may work on them as well).
What's the error in the code.?
Okay, so the question is: assume it was Monday on the 1st of January 2001. Taking that as a reference, a YEAR is input from the keyboard; find out what day it is on the 1st of January of that (input) year.
In my program (below), when a person types an input greater than the reference year (2001), the answer comes out correct, but if the input is less than 2001, then the answer comes out wrong.
Can you please point out and explain the error in my code?
Thanks..
#include <stdio.h>
#include<math.h>
#include <stdlib.h>
int main()
{
int present_year;
int normal_days;
int normal_year;
int leap_year;
int leap_days;
int check;
int total_days;
int reference_year=2001;
int day;
printf("Enter the year you want to check\n");
scanf("%d",&present_year);
if(reference_year<present_year)
/*if year entered is greater than reference year(2001)*/
{
check=present_year-reference_year;
}
if(present_year<reference_year)
/* if year entered is smaller than reference year*/
{
check=reference_year-present_year;
}
leap_year=check/4;
normal_year=check-leap_year;
normal_days=normal_year*365;
leap_days=leap_year*366;
total_days=leap_days+normal_days;
day=total_days%7;
if(day==0)
printf("January 1 of year %d will be Monday\n",present_year);
if(day==1)
printf("January 1 of year %d will be Tuesday\n",present_year);
if(day==2)
printf("January 1 of year %d will be Wednesday\n",present_year);
if(day==3)
printf("January 1 of year %d will be Thursday\n",present_year);
if(day==4)
printf("January 1 of year %d will be Friday\n",present_year);
if(day==5)
printf("January 1 of year %d will be Saturday\n",present_year);
if(day==6)
printf("January 1 of year %d will be Sunday\n",present_year);
return 0;
}
What if a user entered 2001 ? then you would use an uninitialized variable in your code.
One note: leap_year needs more than just modulo 4; it also needs modulo 100 (not a leap year) and modulo 400 (leap year), but if the difference is small it works as is.
You can't check for a leap year by just dividing by 4; the 100 and 400 rules matter too.
If the present year is less than the reference year (2001), use negative numbers or subtract; as it is right now you always add days, which is why the result is wrong, plus the leap_year issue mentioned above.
@varuntewari year 2000 was a leap year. You need to improve yourself. :P
@Nikos M. why do I need to use modulo 100 for leap years? I am finding the NUMBER of leap years, which can be found by dividing (check) by 4 and taking the quotient.
@harsher, check i40west's answer for clarification. This is the definition of a leap year: it is not just divisibility by 4 — century years are only leap years when divisible by 400.
Where to begin... do you even understand what a leap year is? I don't think that you do. It's not once every four years.
Here, try this.
#include <stdio.h>
#include <stdbool.h>
#include <math.h>
double mod(double x, double y)
{
    return x - y * floor(x / y);
}

bool gregorian_leap_year(int year)
{
    return (
        mod(year, 4) == 0 &&
        !(mod(year, 400) == 100 ||
          mod(year, 400) == 200 ||
          mod(year, 400) == 300)) ? true : false;
}

int fixed_from_gregorian(int year, int month, int day)
{
    int correction, f;
    if (month <= 2) correction = 0;
    else if (month > 2 && gregorian_leap_year(year)) correction = -1;
    else correction = -2;
    f = 365 * (year - 1) +
        floor((year - 1) / 4.0) -
        floor((year - 1) / 100.0) +
        floor((year - 1) / 400.0) +
        floor((367 * month - 362) / 12.0) +
        correction + day;
    return f;
}

char *daynames[] = {
    "Sunday",
    "Monday",
    "Tuesday",
    "Wednesday",
    "Thursday",
    "Friday",
    "Saturday"
};

int main(int argc, char *argv[])
{
    int present_year;
    printf("Enter the year you want to check\n");
    scanf("%d", &present_year);
    int f = fixed_from_gregorian(present_year, 1, 1);
    int day = (int)mod(f, 7);
    printf("January 1 of year %d will be %s\n", present_year, daynames[day]);
    return 0;
}
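As a side note (not part of the original answer), the same fixed-day calculation is easy to cross-check against Python's standard library, since the Rata Die number it computes coincides with `date.toordinal()`; the function names below simply mirror the C code:

```python
from datetime import date

DAYNAMES = ["Sunday", "Monday", "Tuesday", "Wednesday",
            "Thursday", "Friday", "Saturday"]

def gregorian_leap_year(year):
    # Full Gregorian rule: divisible by 4, except century years
    # not divisible by 400 (1900 is not a leap year, 2000 is).
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

def fixed_from_gregorian(year, month, day):
    # Rata Die day number: day 1 is January 1 of year 1 (a Monday).
    if month <= 2:
        correction = 0
    elif gregorian_leap_year(year):
        correction = -1
    else:
        correction = -2
    return (365 * (year - 1)
            + (year - 1) // 4
            - (year - 1) // 100
            + (year - 1) // 400
            + (367 * month - 362) // 12
            + correction + day)

def jan1_weekday(year):
    return DAYNAMES[fixed_from_gregorian(year, 1, 1) % 7]

print(jan1_weekday(2001))  # → Monday
print(jan1_weekday(2000))  # → Saturday
```

`date(y, m, d).toordinal()` should agree with `fixed_from_gregorian(y, m, d)` for any proleptic Gregorian date.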
I don't know why you initialized with 1 Jan 2001 when you could initialize with 1 Jan 0001, which is a Monday. Then simply find the odd days and calculate the day on 1 Jan XXXX:
odd_days=(year-1) + leap_years;
Here year is the user input
and then
day=odd_days%7;
Can you please explain why you did (year-1)?
year-1 is because i want the difference between the year entered by user and year 0001.
If someone has to calculate the day for year 1003 it won't come out right, as 2%7 will give 0, so I think we will have to calculate the total number of days and then divide by 7.
Oops, I forgot to mention: **if odd_days < 7 then day = odd_days**
You need to calculate number of days only when you want to find the day of a date other than 1 jan
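Following up on the comments above, the odd-days idea gives correct results only when the leap-year count uses the full Gregorian rule (divisible by 4, minus centuries, plus multiples of 400) rather than plain division by 4 — a hedged sketch, with names of my own choosing:

```python
def jan1_day_index(year):
    # Days elapsed since 1 Jan 0001 (a Monday), modulo 7.
    # Each ordinary year contributes 1 odd day, each leap year 2,
    # so odd_days = (year - 1) + number of leap years before `year`.
    y = year - 1
    leap_years = y // 4 - y // 100 + y // 400  # full Gregorian rule
    odd_days = y + leap_years
    return odd_days % 7  # 0 = Monday, 1 = Tuesday, ..., 6 = Sunday

print(jan1_day_index(2001))  # → 0 (Monday)
print(jan1_day_index(2000))  # → 5 (Saturday)
```

With this convention year 1 gives index 0 by construction, matching the "1 Jan 0001 is Monday" starting point in the comment above.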
| common-pile/stackexchange_filtered |
UIControl to create a popup in an ios ipad application
I want to create a popup window (like the one shown in the image) in an iPad application. Which UI control should I be using? It would be great if someone could suggest a tutorial. I am looking at an iPad-only application.
This is a view controller presented modally. See -presentModalViewController:animated:
View Controller Programming Guide for iOS - Presentation Styles for Modal Views
Linux: How can I grep text from string to string
How can I use the grep command to get the text that sits between two strings?
for example:
<--string 1-->
the text i need
<--string 2-->
the "the text i need" between the two tags is dynamic, therefor i need a command that will output text from "<--string 1-->" to "<--string 2-->"
will "the text i need" be always one line or it could be more?
Supposing "the text I need" is just one line, you should check that both string1 and string2 appear (Alex's solution only checks one thing).
A better solution would be:
grep -A 2 "string 1" $file | tail -2 | grep -B 1 "string 2" | head -1
you cannot always assume the "the text I needed" is one line
This might work for you:
grep -A2 "<--string 1-->" file | grep -v "<--string 1-->\|<--string 2-->"
or
grep -A1 "<--string 1-->" file | grep -v "<--string 1-->"
or in a single process:
sed '/<--string 1-->/,/<--string 2-->/!d;//d' file
or:
awk '/<--string 2-->/{p=0};p;/<--string 1-->/{p=1}' file
You can use Awk for this.
Inclusive:
awk '/<--string 1-->/,/<--string 2-->/' file
Excluding the string 1 and 2 lines:
awk '/<--string 1-->/{flag=1; next} /<--string 2-->/{flag=0} flag' file
Here, a flag is set when '<--string 1-->' is found in the line, and unset when '<--string 2-->' is found.
You can also keep either the first or second line using:
awk '/<--string 1-->/{flag=1} /<--string 2-->/{flag=0} flag' file
or
awk 'flag; /<--string 1-->/{flag=1} /<--string 2-->/{flag=0}' file
if you know that "the text i need" is always above or always below string 1 or string 2, you can use grep -A 1 "string 1" $file | tail -1 or grep -B 1 "string 2" $file | head -1
We need to know the line numbers of string1 and string2.
We can use grep -n for that.
Then, using head and tail, we can get the lines between string1 and string2.
for example:
<--string 1-->
the text i need
<--string 2-->
start=$(cat file | grep -n '<--string 1-->' | grep -Eo '^[0-9]+')
finish=$(cat file | grep -n '<--string 2-->' | grep -Eo '^[0-9]+')
res=$((finish - start - 1))
result=$(cat file | head -n $((finish - 1)) | tail -n $res)
It is a little bit hacky, but it worked for me.
I hope this helps you.
DATA=$(cat /tmp/file)
STARTVAR=$(echo "$DATA" | grep -n '<--string 1-->' | grep -Eo [0-9]+)
ENDVAR=$(echo "$DATA" | grep -n '<--string 2-->' | grep -Eo [0-9]+)
CALC=$((($ENDVAR - $STARTVAR) - 1))
result=$(echo "$DATA" | grep -A $CALC '<--string 1-->')
echo "$result"
Replace the CALC=$((($ENDVAR - $STARTVAR) - 1)) line with CALC=$(($ENDVAR - $STARTVAR)) if you want to include '<--string 2-->' in output
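As an aside (not one of the original answers), the same between-the-markers extraction is straightforward to express in Python, which also makes the line bookkeeping of the grep/head/tail pipelines easy to sanity-check:

```python
def between_markers(lines, start_marker, end_marker):
    # Return the lines strictly between the first start marker and the
    # next end marker, mirroring what the shell pipelines above compute
    # with line numbers.
    out, inside = [], False
    for line in lines:
        if start_marker in line:
            inside = True
            continue
        if end_marker in line:
            if inside:
                break
            continue
        if inside:
            out.append(line)
    return out

text = """<--string 1-->
the text i need
<--string 2-->"""
print(between_markers(text.splitlines(), "<--string 1-->", "<--string 2-->"))
# → ['the text i need']
```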
grep word filename
Check grep on wiki.. http://en.wikipedia.org/wiki/Grep
the text between two strings is dynamic!
Python Tornado handler behavior
I am playing with Python Tornado and have what should be a very basic question.
As I understand the code snippet below, invoking either localhost:3000 or localhost:3000/register should direct me to register.html, but for whatever reason localhost:3000 successfully takes me to the page whereas localhost:3000/register produces a 404. What subtlety am I overlooking?
Thanks.
class RegisterHandler(tornado.web.RequestHandler):
    def post(self):
        self.render("register.html")

-------------------------------

options.parse_command_line()
app = tornado.web.Application(
    [
        (r'/', RegisterHandler),
        (r'/register', RegisterHandler),
    ],
    debug=True
)
app.listen(options.port)
logging.info("app started, visit http://localhost:%s" % options.port)
tornado.ioloop.IOLoop.instance().start()
I just tried your code on my machine with the latest Tornado and it works as you expect.
This means my problems lie elsewhere. This was helpful.
For future people I want to just say that this was a complete and total PEBCAK error. The code I thought I was running was out of sync with my templates.
Is there a way to receive notifications from specific questions updates?
Sometimes there are questions that leave you really curious about what the final solution will be (if there is one in the end). I wonder if there is any way to subscribe to such a question and receive updates as they happen.
Thanks!
You can favorite questions (star icon under voting arrows) and there is your favorites tab in your profile.
Though I don't think there is a proactive option to have it email you or something, so you will probably just have to check it in your profile now and then. It will track and show whether any changes happened since the last time you viewed the thread.
Ok @Rarst, I didn't even know about favorite questions, my apologies. Perhaps a custom notifications digest could be a good improvement!
Nothing to apologize for, now you do. :)
Adding new object to retrieved NSMutableArray - returns NSInternalInconsistencyException
I keep getting this error when I try to add an object to a retrieved NSMutableArray.
Terminating app due to uncaught exception 'NSInternalInconsistencyException', reason: '-[__NSCFArray insertObject:atIndex:]: mutating method sent to immutable object'
Retrieve:
NSUserDefaults *userDefaults = [NSUserDefaults standardUserDefaults];
NSMutableArray *arrayOfTitles = [userDefaults objectForKey:@"mainArraySaveData"];
NSMutableArray *arrayOfSubjects = [userDefaults objectForKey:@"subjectArraySaveData"];
NSMutableArray *arrayOfDates = [userDefaults objectForKey:@"dateArraySaveData"];
_mutableArray=arrayOfTitles;
_subjectArray=arrayOfSubjects;
_dateArray=arrayOfDates;
[_tableView reloadData];
Save:
NSUserDefaults *userDefaults = [NSUserDefaults standardUserDefaults];
_mutableArray = [[NSMutableArray alloc] initWithArray:[userDefaults objectForKey:@"mainArraySaveData"]];
[userDefaults setObject:_mutableArray forKey:@"mainArraySaveData"];
[userDefaults synchronize];
_dateArray = [[NSMutableArray alloc] initWithArray: [userDefaults objectForKey:@"dateArraySaveData"]];
[userDefaults setObject:_dateArray forKey:@"dateArraySaveData"];
[userDefaults synchronize];
_subjectArray = [[NSMutableArray alloc] initWithArray: [userDefaults objectForKey:@"subjectArraySaveData"]];
[userDefaults setObject:_subjectArray forKey:@"subjectArraySaveData"];
[userDefaults synchronize];
I'm confused, as I thought this was designed to return an NSMutableArray, but apparently it returns an NSArray. What's my issue?
Thanks, SebOH
I'm confused as I thought this was designed to return an NSMutableArray, but it says not - NSArray. What's my issue?
The documentation of the NSUserDefaults talks specifically about this issue in the "special considerations" section of the objectForKey: method:
Special Considerations
The returned object is immutable, even if the value you originally set was mutable.
Fixing this problem is easy — use the mutableCopy, initWithArray: or arrayWithArray: methods to make mutable copies:
_mutableArray=[arrayOfTitles mutableCopy]; // You can do this...
_subjectArray=[NSMutableArray arrayWithArray:arrayOfSubjects]; // ...or this
_dateArray=[[NSMutableArray alloc] initWithArray:arrayOfDates]; // ...or this.
The 1st line ended up helping my issue, _mutableArray=[arrayOfTitles mutableCopy];
User defaults returns immutable objects so you need to call mutableCopy on each before you can modify them.
When you define a variable as NSMutableArray * it is your responsibility to ensure that the instance you store there is of the correct type. The compiler will only tell you that you're wrong if it can tell. In this case the method returns id as you are requesting 'whatever object type exists for this key' from user defaults.
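As a cross-language analogy only (this is Python, not the NSUserDefaults API): the situation is the same as getting back an immutable sequence and needing an explicit mutable copy before appending:

```python
stored = (1, 2, 3)     # what the store hands back: immutable
# stored.append(4)     # would fail, like the __NSCFArray exception above
titles = list(stored)  # explicit mutable copy, analogous to mutableCopy
titles.append(4)
print(titles)  # → [1, 2, 3, 4]
```

The original `stored` value is untouched; only the copy is mutated.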
Data for specific date
My report gets data for the 1st of the current month. Let's say the data for the 1st has still not come; in that case, how would I make the report show the data for the 1st of the previous month?
Thanks.
Sharing your research helps everyone. Tell us what you've tried and why it didn’t meet your needs. This demonstrates that you’ve taken the time to try to help yourself, it saves us from reiterating obvious answers, and most of all it helps you get a more specific and relevant answer! Also see how to ask
If you mean your table doesn't yet have any data for the first of the current month, then you can either check for that and then decide which month to query; or query both months' data and discard the previous month if you have anything for the current month in the result set. But that affects aggregation depending on what you're reporting on. It's rather vague at the moment.
This is what I did to get the 1st of the current month:
Date = to_date('1-'||to_char(sysdate,'MON-RRRR'),'dd-mon-rrrr'). For example, if the 1st of January has no data, then I want to display data from the 1st of December.
Surely the 1st of the current month has by definition always come?
You could then use add_months(ProgressDate,-1) to get the 1st of last month.
trunc(sysdate, 'MM') will give you the first of the current month more simply. But you haven't said how you're determining there is no data, or how you're running the query. Might be simple to run it, see how many rows were found, and run it again with the previous month (via add_months). Too many unknowns in your process though.
Simply use a select top 1 from your table, filtering by extract(day from yourDateColumn) = 1 to get only the rows with the data for the 1st day of any month, and order them in descending order by your date column (order by yourDateColumn desc), so that you always get the 1st day of the last available month in your table.
Docs for Oracle EXTRACT function
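For comparison outside SQL (a sketch, not Oracle code): the same trunc(sysdate, 'MM') / add_months(..., -1) arithmetic looks like this with Python's standard library:

```python
from datetime import date, timedelta

def first_of_month(d):
    # equivalent to trunc(d, 'MM'): the 1st of d's month
    return d.replace(day=1)

def first_of_previous_month(d):
    # equivalent to add_months(trunc(d, 'MM'), -1): step back to the
    # last day of the previous month, then truncate to its 1st
    return (d.replace(day=1) - timedelta(days=1)).replace(day=1)

print(first_of_month(date(2014, 1, 15)))           # → 2014-01-01
print(first_of_previous_month(date(2014, 1, 15)))  # → 2013-12-01
```

If no rows exist for first_of_month(today), the report can fall back to first_of_previous_month(today).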
TFS SDK how to download image from TFS Description Field
I developed some integration between Jira and TFS.
Some bugs in TFS have image in Description like this:
<span style="color:black;font-family:"Segoe UI",sans-serif;font-size:9pt;"><img style="width:606px;" src="http://server:8080/tfs/IT_Systems/WorkItemTracking/v1.0/AttachFileHandler.ashx?FileNameGuid=7d796b11-588f-4266-a783-8d3fa61cb4bd&FileName=temp1465385989194.png"><br> </span>
How can I download this image programmatically using c#?
I know I should parse the HTML and so on. But the problem is that I don't know how to download the data from the URL.
In the TFS web UI I select the image, copy it, open e.g. Paint, paste the image and save it as a jpg file.
I need the same in my C# code.
Can anyone help me?
I would use Regex to parse the HTML description and pull out the img URL then download the image using a HttpWebRequest.
You could use the following Regex:
(?<=<img.*src=")[^"]*
I use this too. But my problem is that I want to download the image, not just detect it in the HTML.
Use a HttpWebRequest to download the image.
Could you give me an example how i can download it by follow url?
http://server:8080/tfs/IT_Systems/WorkItemTracking/v1.0/AttachFileHandler.ashx?FileNameGuid=7d796b11-588f-4266-a783-8d3fa61cb4bd&FileName=temp1465385989194.png
http://stackoverflow.com/questions/2368115/how-to-use-httpwebrequest-to-pull-image-from-website-to-local-file
It won't work, because in my example it's not a direct link. And as far as I know it's impossible to download it using HttpWebRequest alone.
That's why I copied/pasted the image instead of downloading it in the TFS web client. There isn't an opportunity to do it directly.
| common-pile/stackexchange_filtered |
Hasse principle for $H^2$ of a maximal torus of a connected quasisplit group?
Let $k$ be a number field and let $G$ be a quasisplit reductive algebraic group over $k$. Does there exist a maximal torus in $G$ such that the Hasse principle in dimension $2$ holds, i.e., such that the map $H^2(k,T) \to \prod_v H^2(k_v,T)$ is injective? It is true if $G$ is split or if $G$ is adjoint. If $G$ is simply connected, it is true whether $G$ is quasisplit or not.
Just to clarify: By "Hasse principle in dimension 2" you are asking whether the map $H^2(k,T) \to \prod_v H^2(k_v,T)$ is injective for some maximal torus $T$?
Exactly .......
Welcome new contributor. If $T$ is split, this follows from the "reciprocity" short exact sequence of Hasse (in class field theory).
No. Let $G=T$ be a $k$-torus. Then $B=T$ is a Borel subgroup defined over $k$, and hence $G$ is a quasi-split reductive $k$-group. The only maximal torus in $G$ is $T=G$. I think that there exists a $k$-torus for which the Hasse principle for $H^2$ fails.
Now I see that you already had the split case. By restriction and corestriction, the kernel of your map is contained in the $N$-torsion, where $N$ is the greatest common divisor of all degrees of all finite extension fields that split $T$.
Yes, if $G$ is semisimple, not necessarily quasisplit. Indeed, let $v$ be a nonarchimedean place of $k$. Then there exists an anisotropic maximal $k_v$-torus $T_v\subset G_{k_v}$; see Platonov and Rapinchuk, Section 6.5, Theorem 21 of the Russian edition. There exists a maximal $k$-torus $T\subset G$ that is conjugate to $T_v$ over $k_v$. Thus $T_{k_v}$ is anisotropic. It follows that $Ш^2(k,T)=1$; see formula (1.9.3) in Sansuc's paper: Sansuc, J.-J. Groupe de Brauer et arithmétique des groupes algébriques linéaires sur un corps de nombres. J. Reine Angew. Math. 327 (1981), 12–80.
Theorem 21 in Section 6.5 of the Russian edition of Platonov and Rapinchuk is Theorem 6.21 of the English edition.
Recursive query within recursive query
I would like to solve a problem consisting of 2 recursions.
In one of the 2 recursions I find the answer to the question "What is the leaf member for a specific input (template)?" This is already solved.
In a second recursion I would like to run this query for a number of other inputs (templates).
1st part of the problem:
I have a tree and would like to find the leaf of it. This part of the recursion can be solved using this query:
with recursive full_tree as (
    select id, "previousVersionId", 1 as level
    from template
    where template."id" = '5084520a-bb07-49e8-b111-3ea8182dc99f'
    union all
    select c.id, c."previousVersionId", p.level + 1
    from template c
    inner join full_tree p on c."previousVersionId" = p.id
)
select * from full_tree
order by level desc
limit 1
The query output is one record including the leaf id I'm interested in. This is fine.
2nd part of the query:
Here's the problem: I would like to run the first query n times.
Currently I can run the query for only one id ('5084520a-bb07-49e8-b111-3ea8182dc99f' in the example). But what if I have a list of 100 such ids?
My ultimate goal is to get one leaf id in response for each of the 100 template ids in the list.
In theory, a query that lets me run the above query for each of my e.g. 100 template ids would solve my problem.
Sample data and desired results would really help. I would advise you to set up a db fiddle of some sort.
Seems like you could use an IN statement in the WHERE clause, i.e. `where template."id" in ('1', '2', '3', '4')`. Don't know if the inner join on full_tree will get tricky, but you'd definitely need to carry `template."id"` through the recursion somehow.
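Building on the IN-clause idea, here is a runnable sketch of the pattern using SQLite's WITH RECURSIVE (via Python's stdlib; the schema is simplified and the column names are unquoted, unlike the PostgreSQL original): seed the CTE with every input id and carry that root along, so each input maps to its own leaf.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE template (id TEXT PRIMARY KEY, previousVersionId TEXT);
    -- two version chains: a1 -> a2 -> a3, and b1 -> b2
    INSERT INTO template VALUES
        ('a1', NULL), ('a2', 'a1'), ('a3', 'a2'),
        ('b1', NULL), ('b2', 'b1');
""")

rows = con.execute("""
    WITH RECURSIVE full_tree(root, id, level) AS (
        SELECT id, id, 1
        FROM template
        WHERE id IN ('a1', 'b1')          -- the list of input ids
        UNION ALL
        SELECT p.root, c.id, p.level + 1
        FROM template c
        JOIN full_tree p ON c.previousVersionId = p.id
    )
    SELECT root, id
    FROM full_tree t
    WHERE level = (SELECT MAX(level) FROM full_tree WHERE root = t.root)
""").fetchall()
print(sorted(rows))  # → [('a1', 'a3'), ('b1', 'b2')]
```

Each result row pairs an input id with its leaf, replacing the per-id `order by level desc limit 1`.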
How to resolve "requires DC 10 Wisdom (Perception) check"?
Background: I will be the DM of an upcoming gaming session. The players and I are all new to D&D and roleplaying. This will be our first session (and unfortunately there are no other gaming groups in our area that we could join).
Since we are all noobs, I thought of using a one-shot adventure for this first session, and selected "A Most Potent Brew" from Winghorn Press. I am trying to read up on the rules and at least get a basic handle on how to run the session.
One of the things generally mentioned in these adventures goes something along the lines of
Spotting the rats before then requires a DC 10 Wisdom (Perception)
check from adventurers in the room.
I understand DC 10 is the difficulty class that the players will have to beat with their dice roll (DC 10 being 'Easy' as per the 'Typical Difficulty Classes' table in the Player's Handbook).
However, what does Wisdom (Perception) mean?
Once they roll a d20, should they be adding a Perception skill modifier? What is the significance of Wisdom?
Have you read the part about proficiency bonus based on character level in the PHB? Related question here.
Just a note for you: Passive perception in 5e starts at 10 + WIS modifier. So unless they have negative WIS, players should automatically succeed on any DC 10 perception check.
Hi, and welcome to the site and to the game. I'd like to encourage you to drop into [chat] when you've got a moment--there are almost always people familiar with D&D5e in there, so you could probably find a lot of good clarification, quickly, through that channel, too.
@LinoFrankCiaralli this is quite likely an area where the perception may be happening in dim light (e.g. darkvision), if so, this creates light obscurement and disadvantage on the check (-5 for a passive check). This is probably going to require proficiency and quite a high wisdom (16+) to overcome.
@DaleM - yes, dim to no light condition. All - thanks for all your explanation. The answers below and your comments have clarified it for me. (Not sure if I should be marking this question as answered - if so how?)
One point I would like to make is that while skills (and tools) have a normal/default attribute associated, the DM can call for a different attribute when appropriate. For example, if you were looking for specific, social cues, a Charisma (Perception) roll might be in order.
@LinoFrankCiaralli That's assuming they want to use passive perception for the encounter. It's completely up to the GM if they compare a DC to passive Perception or simply have the players roll for it.
@ifusaso - Yes, DM's run the game.
Just pointing out that "unless they have negative WIS, players should automatically succeed on any DC 10 perception check" is not quite accurate even not accounting for disadvantage situations. In the example case, I would rule that players have to roll to determine if their characters are being attentive to the danger since it's actively hidden, even if it's still pretty easy to find.
How the Ability Check works
When you look in the PHB in Chapter 7 (sub heading Skills) covering ability checks, you will find that Perception is one of five skills that use Wisdom for Ability Score modifiers.
Wisdom: Animal Handling, Insight, Medicine, Perception, Survival
Ability Score modifiers are in the table in the beginning of Chapter 7.
For a first level character, the proficiency bonus is +2, from Chapter 1, Character Advancement Table (In the Tiers of Play paragraph).
If your character has a skill proficiency in Perception, add the Proficiency bonus. If your character has a Wisdom ability score bonus, add that regardless of whether the character is proficient or not.
Two cases: with Perception chosen as a skill choice, and without it
Nelda the Cleric has a Wisdom of 16 and chose the Perception skill during character creation. This means that she has proficiency in Perception, so she applies the proficiency bonus as well. Nelda will add +3 +2 (+5) to the d20 roll: +3 for the ability score bonus (16 Wisdom) and +2 for proficiency in the Perception skill. When she reaches level 5, her proficiency bonus will increase by one to +3
Ted the Wizard has a 12 Wisdom, but did not choose Perception as a skill during character creation. He adds +1 to the d20 roll for the Perception check, since he does not get the proficiency bonus for that skill.
Thank you! I understand this concept clearly now. After re-reading the sections you specified, and actually going in and rolling a new PC helped me understand how to arrive at the numbers and how to resolve the ability check's. (The proficiency in a particular skill, where you check/fillup the bubble beside the skill in character sheet, was what I had missed, and once I understood how the number is arrived at - it was pretty obvious). Feel like a thick-skull now. :P
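The arithmetic in these examples can be condensed into two one-liners (a sketch; the "(score − 10) ÷ 2, rounded down" rule is from the Player's Handbook ability-modifier table):

```python
def ability_modifier(score):
    # (score - 10) / 2, rounded down: 16 -> +3, 12 -> +1, 8 -> -1
    return (score - 10) // 2

def skill_bonus(score, proficient, proficiency_bonus=2):
    # proficiency_bonus is +2 at levels 1-4; add it only if proficient
    return ability_modifier(score) + (proficiency_bonus if proficient else 0)

print(skill_bonus(16, proficient=True))   # Nelda: → 5
print(skill_bonus(12, proficient=False))  # Ted:   → 1
```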
Wisdom is the ability that Perception uses. You can see this with various other skills such as Dexterity (Stealth), Charisma (Persuasion), etc. (Basic Rules, p. 60-61)
It is there so you can see which ability modifier to use, in this case Wisdom. If you are proficient in Perception, you can also add your proficiency bonus to the roll.
So to answer your question, yes they should be adding their Perception skill modifier to the roll. The significance of Wisdom is that Wisdom is the ability Perception is based on.
I think that this answer, while entirely correct, would be better with references to either the PHB or PBR backing it up. As OP is brand-new, I think pointing them to the places where the information lies is as important as giving them the information.
@nitsua60 ah I don't have my PHB in front of me right now! I shall update when I'm home if I get the chance
Extra context
I see a lot of accurate information here, but I haven't seen anybody mention the portion of the rules about why it's "Wisdom (Perception)". Yes, it is a Wisdom check that you add your proficiency bonus for if you have Perception proficiency. But there's more...
The other reason is that, as the GM, you can call for different Perception checks than Wisdom; Wisdom is the default for most situations. Some off-the-cuff examples a GM might use this rule for:
a player is trying to notice who is in charge in a hectic urban situation
call for a Charisma (Perception) check to identify social structure or get a panicked civilian to answer them
a player wants to tell what the most important bit of an arcane ritual is at a glance
call for an Intelligence (Perception) check to jump to the important aspects
These are just a couple examples, and given more time would likely be Investigation checks, but might be relevant to your game at some point. In such a case, the player would add their Charisma/Intelligence or whichever ability score you call for to their Perception proficiency if applicable.
Both normal and this variant version of Ability checks are detailed in the PHB, p174-175 in my edition.
Here is the link to the variant rule in the Basic Rules document at DNDBeyond.
Your guess is right on the money. If the character is proficient in Perception, they should add their Perception skill modifier, which is their Wisdom modifier added to their proficiency bonus. If a character is not proficient in Perception, it is simply 1d20 + their Wisdom modifier.
This works for other skills as well. If you have one of the official character sheets from Wizards, each skill will have the attribute that it is keyed off of next to it, for easy reference.
So, say "Fogbuz the Fighter" has STR 17, DEX 10, CON 16, INT 8, WIS 13, CHR 12 and a +1 Perception; then it is d20 + 1 (Wisdom) + 1 (Perception) ?
@Vyoma In 5e they are either proficient or not proficient in a skill. If they are not proficient, then it is just the ability check, and sometimes not having a proficiency might mean the character cannot even attempt it. If you have proficiency in Perception, then you add Wisdom + proficiency bonus. The proficiency bonus is dependent on character level; at 1st level it is +2. So in your example the fighter should get +1 from Wisdom and +2 from proficiency, making it 1d20+3.
If Fogbuz is a pregenerated character, then this is likely the full (wisdom + perception) value and Fogbuz does not have proficiency in Perception, yielding a net +1 (for 13 WIS). If not, you would need to tell us which skills the player picked for Fogbuz when they generated the character (they'd pick two from the list at the very top of page 72 of the PHB for the Fighter class and pick or get two more from whatever background they chose from chapter 4 (pages 126 through 141).
Fogbuz was something I made up trying to elaborate/clarify on this answer. @KorvinStarmast answer above, and me re-reading the section specified + actually rolling a new PC clarified it for me. Thank you all for your effort in helping me understand!
AspectJ Maven Plugin <weaveDependency>
I am trying to use the AspectJ Maven plugin in our project, which has multiple modules, following the instructions given in this link http://mojo.codehaus.org/aspectj-maven-plugin/weaveJars.html
I am using the @AspectJ annotation style. My aspect is in a separate Maven module called
artifactId - consumer
And the class whose method i want to intercept or advice is in
artifactId - producer
I have added the following configuration in the pom file of the consumer module:
<plugin>
<groupId>org.codehaus.mojo</groupId>
<artifactId>aspectj-maven-plugin</artifactId>
<version>1.4</version>
<configuration>
<source>1.6</source>
<target>1.6</target>
<showWeaveInfo>true</showWeaveInfo>
<weaveDependencies>
<weaveDependency>
<groupId>com.home.demo</groupId>
<artifactId>producer</artifactId>
</weaveDependency>
</weaveDependencies>
</configuration>
<executions>
<execution>
<goals>
<goal>compile</goal>
</goals>
</execution>
</executions>
</plugin>
Also added "producer" as a dependency in the same pom file.
When I do mvn clean install for the consumer module, the following information appears in the console:
[INFO] [aspectj:compile {execution: default}]
[INFO] Join point 'method-execution(void com.home.demo.producer.messaging.MomServiceEndpointListener.handle(com.home.messaging.service.MessageContext, com.home.messaging.service.MessageContext))' in
Type 'com.home.demo.producer.messaging.MomServiceEndpointListener' (MomServiceEndpointListener.java:21) advised by before advice from 'com.home.demo.ods.app.OdsConsumer' (OdsConsumer.java:38)
But while executing the application it's not working — the aspect is not getting invoked.
I am not able to understand what I am missing.
I am also confused about which module the plugin configuration shown above should be in: consumer (where my aspects are) or producer.
The problem is that weaveDependencies act like sources only.
Your consumer module takes the original "sources" from the weaveDependencies (producer), weaves them with the aspects, and puts the woven classes into the consumer(!!!) target/classes.
Therefore, the producer artifact never knows about the aspects and you keep using it unchanged.
You would have to re-build the producer jar using the classes from consumer/target/classes.
I don't think that's convenient, so I abandoned my attempts to use this plugin in this way.
Also, several weaveDependencies will be merged into one scrap-heap of classes.
You are better off putting the aspects in an external jar dependency and building the plugin config into producer.
It can also work if you make sure that consumer is loaded before producer by the classloader. I am currently using this approach to slightly adapt the behaviour of a third party library and it is working just fine.
How to set the title of an xhtml panel when it is closed
I use this code:
<p:panel header="Advanced User Data" toggleable="true" toggleOrientation="horizontal" collapsed="true">
some other stuff...
</p:panel>
Is there an attribute with which I can set the title when the panel is closed, like this:
example
There is no attribute like this, AFAIK. You could bind the value of the collapsed attribute to a managed bean and add an AJAX listener on the toggle event:
<p:panel header="Advanced User Data" toggleable="true"
toggleOrientation="horizontal" collapsed="#{myBean.booleanVal}" style="display: inline-block;">
<p:ajax event="toggle" process="@this" update="pnlAlternativeTitle" />
</p:panel>
<h:panelGroup id="pnlAlternativeTitle">
<h:outputText rendered="#{myBean.booleanVal}"
value="Alternative title" />
</h:panelGroup>
How to determine the occurrence of 4 points in one plane in Euclidean geometry
Today I was trying to compile the basic laws of Euclidean geometry, and I found that I was missing something basic.
I want to determine whether 4 points lie in one plane or not.
Note that my question concerns Euclidean geometry, so please no answers like "compute the equation of the plane and check that the coordinates of the point satisfy it" — forget anything called a coordinate system.
Suppose you know the six lengths between the pairs of points; I want a direct rule to apply in order to check this.
Initial attempt:
Heron's formula tells us that if a triangle has side lengths $a,b,c$ and $p=\frac{a+b+c}{2}$, then its area $S$ is:
$S=\sqrt{p(p-a)(p-b)(p-c)}$
We can calculate the areas of the four triangles this way; then if the area of the largest triangle equals the sum of the areas of the remaining three triangles, the points lie in one plane.
If the above does not hold and there is no pair of triangles whose total area equals the sum of the areas of the other pair, the points do not lie in one plane.
My main problem is with the case where there is a pair of triangles whose areas sum to the sum of the areas of the other pair of triangles.
I would also be grateful for a simpler technique to deal with the problem using Euclidean geometry. Thank you.
One can use a similar formula for volume, as in Wikipedia, and check whether the volume is $0$. But it's not pretty.
https://math.stackexchange.com/questions/2140214/volume-of-tetrahedron-given-6-sides and https://mathworld.wolfram.com/Cayley-MengerDeterminant.html
Let $P_1, \ldots, P_4$ be your points and $d_{ij}$ the distance from $P_i$ to $P_j$. The Cayley-Menger determinant of your points is the determinant $D$ of the $5 \times 5$ matrix
$$ \pmatrix{0 & 1 & 1 & 1 & 1\cr
1 & 0 & d_{12}^2 & d_{13}^2 & d_{14}^2\cr
1 & d_{12}^2 & 0 & d_{23}^2 & d_{24}^2\cr
1 & d_{13}^2 & d_{23}^2 & 0 & d_{34}^2 \cr
1 & d_{14}^2 & d_{24}^2 & d_{34}^2 & 0\cr} $$
The volume $V$ of the convex hull of $P_1, \ldots, P_4$ satisfies
$$ V^2 = \frac{D}{288} $$
In particular, the four points are coplanar if and only if $D = 0$.
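A sketch of this determinant test in Python, using only the standard library (the hand-rolled Gaussian-elimination determinant, tolerances, and all names are my own choices):

```python
# Coplanarity test from the six pairwise distances via the
# Cayley-Menger determinant of the 5x5 matrix shown above.
def cayley_menger(d12, d13, d14, d23, d24, d34):
    """Return D; the tetrahedron volume satisfies V^2 = D / 288."""
    s = lambda x: x * x
    m = [
        [0, 1,      1,      1,      1     ],
        [1, 0,      s(d12), s(d13), s(d14)],
        [1, s(d12), 0,      s(d23), s(d24)],
        [1, s(d13), s(d23), 0,      s(d34)],
        [1, s(d14), s(d24), s(d34), 0     ],
    ]
    return det(m)

def det(m):
    """Determinant via Gaussian elimination with partial pivoting."""
    m = [row[:] for row in m]  # work on a copy
    n = len(m)
    sign = 1.0
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(m[r][col]))
        if abs(m[pivot][col]) < 1e-12:  # absolute tolerance: assumes
            return 0.0                  # roughly unit-scale distances
        if pivot != col:
            m[col], m[pivot] = m[pivot], m[col]
            sign = -sign
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            for c in range(col, n):
                m[r][c] -= f * m[col][c]
    for i in range(n):
        sign *= m[i][i]
    return sign

def coplanar(*dists, tol=1e-9):
    return abs(cayley_menger(*dists)) < tol
```

For a regular tetrahedron with unit edges, D = 4, giving V² = 4/288 = 1/72; for the four corners of a unit square (distances 1, √2, 1, 1, √2, 1) the determinant vanishes.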
This is a lovely solution!
I think I've found the idea I'm looking for, but I still need to do the calculations.
Suppose all the points lie in one plane
Data: You know the values of lengths: $\bar{AB},\bar{BC},\bar{CA},\bar{R_{1}},\bar{R_{2}}$
also you know that $R_{1}\cap R_{2}=\{D_{1},D_{2}\}$.
Task: calculate the two lengths:
$\bar{AD_{1}},\bar{AD_{2}}$
Now if you can compute these lengths, then for a point $D$ with $\bar{BD}=\bar{R_{1}}$ and $\bar{CD}=\bar{R_{2}}$,
Then check if $\bar{AD}=\bar{AD_{1}}$ or
$\bar{AD}=\bar{AD_{2}}$
This will determine if the $D$ falls into the same Plane as points $A,B,C$ or not.
This means that now all we need is:
$f(\bar{AB},\bar{BC},\bar{CA},\bar{R_{1}},\bar{R_{2}})=(\bar{AD_{1}} ,\bar{AD_{2}})$
| common-pile/stackexchange_filtered |
When I add a new .feature file in IntelliJ in the resources folder, it is not getting added; I tried refreshing as well
I'm adding a new feature file to the resources/appfeatures folder. IntelliJ accepts the addition of the new .feature file, but after adding it no feature file appears, even after refreshing in IntelliJ.
When I refreshed, the newly added .feature file did not appear, but when I closed the folder and opened it again instead of refreshing, it appeared.
| common-pile/stackexchange_filtered |
How to Remove an object from a triple nested document in a MongoDB collection?
I have a Wikidata collection in MongoDB with documents of following structure:
{
    id: 178,
    type: "something",
    claims: {
        P1: [
            { id: "234", obj: {...} },
            { id: "456", obj: {} },
            { id: "789", obj: {...} }
        ],
        P2: [ list of objects ],
        P3: [ list of objects ]
    }
}
I'm trying to iterate over all items under claims (i.e., P1, P2, P3) and delete certain objects under them (e.g., id:234)
In other words how to delete the nested object with id: "234" for example?
Try db.collection_name.update( { }, { $pull: { "claims.P1": { id: "234" } } } )
The values under the P1, P2, and P3 fields are nested arrays.
As @veeram commented, an update query with the $pull operator is ideal for this scenario.
condition = {'id': '234'}
field_val = {'claims.{}'.format(field): condition for field in ('P1', 'P2', 'P3')}
db.collection_name.update({}, {'$pull': field_val})
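Building the same multi-field $pull document can be isolated into a small helper (a sketch; the helper name and the assumption that P1-P3 are the only claim fields are mine):

```python
def build_pull_update(target_id, fields=("P1", "P2", "P3")):
    """Update document that removes every array element whose id
    matches, from claims.P1, claims.P2 and claims.P3 in one go."""
    return {"$pull": {"claims.{}".format(f): {"id": target_id}
                      for f in fields}}

# With PyMongo you would then run (not executed in this sketch):
# db.collection_name.update_many({}, build_pull_update("234"))
```

update_many applies the pull to every document in the collection; use update_one with a filter on the top-level id to target a single document.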
| common-pile/stackexchange_filtered |
Automatically convert spherical jpeg image (360x360 view) into a movie sequence for youtube
Is there any way/program(s)/api/etc. which can help to write a program which can automatically convert a spherical jpeg image (for example like http://www.jcwilson.net/craigendarroch.jpg) into a virtual reality movie so that youtube can enable the view-around function?
The photo should just be displayed for maybe 10 seconds.
Thank you for help!
So you're looking for a way to change a picture to a movie with one frame that lasts 10 seconds? You want ffmpeg for that: Create a video slideshow from images.
The question is: why on youtube? There are better ways to display a spherical picture.
Oh and it's 360x180.
Thank you very much for this link! I know other ways (like krpano, which I have). In my specific situation I should do it with YouTube.
| common-pile/stackexchange_filtered |
The entity "uuml" was referenced, but not declared - XMLStreamException
Trying to generate an XLSX file from SpreadsheetML 2003 (which is basically XML), using a CLOB from the database; the CLOB contains the SpreadsheetML 2003 (XML). I am trying to parse the XML with the StAX parser and write it into the XLSX file using the POI API. But it is throwing the below exception while processing in StAX.
Note : XML encoding UTF-8 format is used.
Exception :
javax.xml.stream.XMLStreamException: ParseError at [row,col]:[1,16706]
Message: The entity "uuml" was referenced, but not declared.
at com.sun.org.apache.xerces.internal.impl.XMLStreamReaderImpl.next(XMLStreamReaderImpl.java:588)
at com.sun.org.apache.xerces.internal.impl.XMLStreamReaderImpl.getElementText(XMLStreamReaderImpl.java:845)
at com.db.smis.planus.servlet.ServletApp.doProcess(ServletApp.java:224)
Sample XML:
<?xml version="1.0" encoding="UTF-8"?>
<?mso-application progid="Excel.Sheet"?>
<Workbook xmlns="urn:schemas-microsoft-com:office:spreadsheet"
xmlns:o="urn:schemas-microsoft-com:office:office"
xmlns:x="urn:schemas-microsoft-com:office:excel"
xmlns:ss="urn:schemas-microsoft-com:office:spreadsheet"
xmlns:html="http://www.w3.org/TR/REC-html40">
<Row>
<Cell ss:StyleID="s29"><Data ss:Type="Number">7662</Data></Cell>
<Cell ss:StyleID="s73"><Data ss:Type="String">C. & A. AAAAA & CO. KG</Data></Cell>
</Row>
<Row>
<Cell ss:StyleID="s29"><Data ss:Type="Number">7662</Data></Cell>
<Cell ss:StyleID="s28"><Data ss:Type="String">München,Köln</Data></Cell>
</Row>
<Row>
<Cell ss:StyleID="s29"><Data ss:Type="Number">7662</Data></Cell>
<Cell ss:StyleID="s28"><Data ss:Type="String">Düsseldorf</Data></Cell>
</Row>
</Workbook>
You need to declare the uuml entity, or replace such references with their hex or decimal equivalent, in this case I guess &#252; (ü):
<!DOCTYPE Workbook [
<!ENTITY uuml "ü">
]>
UPDATE: If you have more special characters, use our Apache Commons Lang friend StringEscapeUtils.unescapeXml.
Full example here
Thanks for your reply. But I cannot assure only this kind of character will come in the XML. May be some other special/foreign character also come. Then I need to define all the entities in DTD ?
Any other alternative way to fix this issue ?
you must parse/replace the special characters... Googling a bit, this does not seem hard to find for Java
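That parse-and-replace approach can be sketched as follows (shown in Python for brevity; in Java the Commons Lang helpers mentioned above play the same role). Python's html module knows the full HTML entity table, which includes uuml; the five predefined XML entities are left untouched so the document stays well-formed. The function name is mine.

```python
import html
import re

XML_PREDEFINED = {"amp", "lt", "gt", "quot", "apos"}

def expand_html_entities(text):
    """Replace HTML-style entity references (like &uuml;) with their
    literal characters, leaving XML's own five entities alone."""
    def repl(m):
        if m.group(1) in XML_PREDEFINED:
            return m.group(0)
        # html.unescape returns unknown entities unchanged
        return html.unescape(m.group(0))
    return re.sub(r"&([A-Za-z][A-Za-z0-9]*);", repl, text)
```

Running the CLOB text through such a filter before handing it to the StAX parser avoids having to declare every entity in a DTD.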
| common-pile/stackexchange_filtered |
Appium2.0 server not starting in MacOS
I have installed Appium 2.0 using the command
npm install -g appium@next
Then installed the driver using below commands:
appium driver install xcuitest
appium driver install uiautomator2
After this, when I tried to run the Appium server using the command:
appium
As can be seen in the screenshot I added, it displayed "No plugins installed". I have installed a driver and also activated it, but the server still gets stuck at that point and does not proceed further.
Please check the attached screenshot.
Appium plugins are not the same as the drivers you are mentioning. You can use the command appium plugin list in your terminal to see the available plugins. This is not an error; you just have to start a session using your desired capabilities. Check the following link: https://appium.io/docs/en/writing-running-appium/caps/
| common-pile/stackexchange_filtered |
Match and insert in a python sorted list
I have a sorted list and I want to insert a string based on whether it matches a pattern already in the list.
Example :
Sorted List
['Amy Dave', 'Dee Waugh', 'Eva A', 'Gin', 'Joy Kola', 'Kay Min', 'Mae', 'Pam Deing']
The above list is in sorted order. I need to insert a name in sorted order, and also, if the name already exists, the new one should be inserted before the existing name.
Example
Name 'Eva Henry'
As 'Eva' is already in the list, after matching the pattern it should be inserted before "Eva A". If the name does not match, it should be inserted in sorted order in the list. The output should be like:
Sorted List
['Amy Dave', 'Dee Waugh', 'Eva Henry', 'Eva A', 'Gin', 'Joy Kola', 'Kay Min', 'Mae', 'Pam Deing']
Any help will be appreciated.
Thank you
"if the name already exist then it should be inserted before the existing name" Can I ask why this request?
Sounds like a homework question. If it is, please mark it as homework.
We have this requirement for privileged customer. Yeah its really strange but it needs to be done :(
What's keeping you from doing it? What have you tried, and what question do you have? "Write my code" is not a suitable question for this forum.
And what are you paid to do for your customer, if you can't do this for yourself?
@Alexis right its not write my code forum but this problem is eating my head just seeking some help cause I am unable to think of any solution. I thought of bisect, split but nothing seems to be giving me result and also I am not Guru of Python.
Might be somebody in circle have faced same issue and know some tricks can share with me.
No one will have faced this before, because the requirement is ridiculously stupid. Also, if the user has ever inserted 2 people with the same name, the list isn't really sorted is it?
@VladtheImpala http://www.thisblogrules.com/2011/09/ranking-the-5-worst-online-universities.html
If you had a counter that said the number of times it appeared, you could write it out more than once, without storing it more than once.
@PratapSingh: Why is the client telling YOU the developer how to store the data? This might be one of the times you fire the client :-)
In my opinion, there are no stupid questions. If the names are meant as full names and only the first names are the key for sorting, there may always be some funny idea and the need to solve the problem. You can use bisect this way:
>>> fullnames = ['Amy Dave', 'Dee Waugh', 'Eva A', 'Gin', 'Joy Kola', 'Kay Min', 'Mae', 'Pam Deing']
>>> names = [full.split()[0] for full in fullnames]
>>> names
['Amy', 'Dee', 'Eva', 'Gin', 'Joy', 'Kay', 'Mae', 'Pam']
So, we have parallel list of first names that will be used to find the position of another full name xx (the first name extracted to x the same way as in the previous case):
>>> xx = 'Eva Henry'
>>> x = xx.split()[0]
>>> x
'Eva'
Now, use bisect to find the wanted position in the first-name list:
>>> import bisect
>>> pos = bisect.bisect_left(names, x)
Then update both lists:
>>> fullnames.insert(pos, xx)
>>> names.insert(pos, x)
Here is the result:
>>> fullnames
['Amy Dave', 'Dee Waugh', 'Eva Henry', 'Eva A', 'Gin', 'Joy Kola', 'Kay Min', 'Mae', 'Pam Deing']
>>> names
['Amy', 'Dee', 'Eva', 'Eva', 'Gin', 'Joy', 'Kay', 'Mae', 'Pam']
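The interactive session above can be wrapped into a single reusable function (the function name is mine; here the parallel first-name list is rebuilt on each call for simplicity, but keep it alongside the full-name list if you insert often):

```python
import bisect

def insert_by_first_name(fullnames, new_name):
    """Insert new_name into the sorted list, landing before any
    existing entry that shares its first name."""
    names = [full.split()[0] for full in fullnames]
    pos = bisect.bisect_left(names, new_name.split()[0])
    fullnames.insert(pos, new_name)
    return fullnames
```

With the thread's example list, inserting 'Eva Henry' lands it before 'Eva A', while a non-matching name like 'Joe Blow' goes into its plain sorted position.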
Here's a complete answer that does what you want, however ridiculous. I didn't test any edge cases.
sorta_sorted_list = ['Amy Dave', 'Dee Waugh', 'Eva A', 'Gin', 'Joy Kola', 'Kay Min', 'Mae', 'Pam Deing']
print(sorta_sorted_list)
def insert_kinda_sorted(name, sorta_sorted_list):
    new_list = []
    fname = name.split()[0]
    inserted = False
    for index in range(len(sorta_sorted_list)):
        if not inserted:
            if sorta_sorted_list[index].split()[0] == fname:
                new_list.append(name)
                inserted = True
            elif sorta_sorted_list[index] > name:
                new_list.append(name)
                inserted = True
        new_list.append(sorta_sorted_list[index])
    if not inserted:  # name sorts after every existing entry
        new_list.append(name)
    return new_list
sorta_sorted_list = insert_kinda_sorted('Eva Henry', sorta_sorted_list)
print(sorta_sorted_list)
sorta_sorted_list = insert_kinda_sorted('Joe Blow', sorta_sorted_list)
print(sorta_sorted_list)
output is:
['Amy Dave', 'Dee Waugh', 'Eva A', 'Gin', 'Joy Kola', 'Kay Min', 'Mae', 'Pam Deing']
['Amy Dave', 'Dee Waugh', 'Eva Henry', 'Eva A', 'Gin', 'Joy Kola', 'Kay Min', 'Mae', 'Pam Deing']
['Amy Dave', 'Dee Waugh', 'Eva Henry', 'Eva A', 'Gin', 'Joe Blow', 'Joy Kola', 'Kay Min', 'Mae', 'Pam Deing']
Why do you help them to commit programming suicide and spread bad code? :-) Sorry your code is fine, I mean the algorithm.
Yeah...after I reread that, I realized the insort method won't work. If we're tokenizing the names, we could also create a dictionary where the first token is the key and the value is a list where we push the names into like a stack, then use a flattening function that returns a list (or generator) when needed.
Excellent point gecko. If the requirement is as described (and hopefully it's not), a list is not even the right data structure any more.
Thanks Jgritty and Pepr for your solutions even I found another solution too myself I will post it soon :)
Here is my solution; I think it's easy:
import re
import bisect

# Split on whitespace and keep the first name
first_name = name.split()[0]
# Pattern matching the first name, case-insensitively
match = re.compile(re.escape(first_name), re.IGNORECASE)
ind = None
for i, existing in enumerate(sort_names):
    if match.match(existing):
        ind = i
        break
# If the first name matched an existing entry, insert before it;
# otherwise insert in plain sorted order
if ind is not None:
    sort_names.insert(ind, name)
    print()
    print(sort_names)
else:
    bisect.insort(sort_names, name)
    print(sort_names)
| common-pile/stackexchange_filtered |
Linux daylight savings notification
I was trying to find a way to receive a notification from the system (Linux) when daylight savings are applied, but I do not seem to be able to find anything like that.
Consider a program that sits in a pselect() waiting on a number of timer fds, all of which have exactly 24-hour intervals but differing start times, which are defined by a user; "07:00 ON, 07:25 OFF" (for example, if it were a coffee maker).
Because the user gives these times in local time and Linux runs on UTC, the timezone-adjusted timer fds need to be readjusted each time a daylight savings change occurs (the user expects coffee when his daylight-savings-compliant alarm clock has woken him up...).
The intelligent way to go about this, as I would imagine, would be to register with the system/kernel/init/whatever to be notified when daylight savings changes are applied, and avoid getting into the messy business of trying to determine such dates and times yourself while hoping the system agrees with your results (i.e. that your resync actions and the actual daylight savings change happen at the same time).
Is there any way to be notified on DST changes? Or perhaps on any changes to local time (assuming DST change modifies that)?
Consider a program that sits in a pselect() waiting on a number of timer fds, all of which have exactly 24-hour intervals but differing start times
Therein lies your fundamental problem. All days are not exactly 24 hours long -- sometimes they are off by an hour (daylight savings time), or by seconds (leap seconds); just like not every February has 28 days.
A much simpler and more lightweight (fewer resources consumed) way is to use a min-heap of future events in UTC, something like
struct trigger {
/* Details on how the event is defined;
for example, "each day at 07:00 local time".
*/
};
struct utc_event {
struct trigger *trigger;
time_t when;
};
struct event_min_heap {
size_t max_events;
size_t num_events;
struct utc_event event[];
};
The event C99 flexible array member in struct event_min_heap is an array with num_events events (memory allocated for max_events; can be reallocated if more events are needed) in a min heap keyed by the when field in each event entry. That is, the earliest event is always at the root.
Whenever current time is at least event[0].when, it is "triggered" -- meaning whatever action is to be taken, is taken --, and based on the struct trigger it refers to, the time of the next occurrence of that event is updated to event[0], then it is percolated down in the heap to its proper place. Note that you simply use mktime() to obtain the UTC time from broken-down local time fields.
(If this were a multi-user service, then you can support multiple concurrent timezones, one for each trigger, by setting the TZ environment variable to the respective timezone definition, and calling tzset() before the call to mktime(). Because the environment is shared by all threads in the process, you would need to ensure only one thread does this at a time, if you have a multithreaded process. Normally, stuff like this is perfectly implementable using a single-threaded process.)
When the event in the root (event[0]) is deleted or percolated (sifted), the event with the next smallest when will be at the root. If when is equal or less to current time in UTC, it too is triggered.
When the next when is in the future, the process can sleep the remaining interval.
That is all there is to it. You don't need multiple timers -- which are a system-wide finite resource --, and you don't need to worry about whether some local time is daylight savings time or not; the C library mktime() will take care of such details for you.
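The same scheduling idea can be sketched very compactly in Python (heapq instead of a hand-rolled heap; time.mktime is Python's wrapper for the same C mktime(), so tm_isdst = -1 gives the same DST handling; all names here are my own):

```python
import heapq
import time

def next_fire(hour, minute, now=None):
    """UTC epoch of the next occurrence of hh:mm local time.
    tm_isdst = -1 lets mktime() work out daylight savings."""
    now = time.time() if now is None else now
    lt = time.localtime(now)
    t = time.mktime((lt.tm_year, lt.tm_mon, lt.tm_mday,
                     hour, minute, 0, 0, 0, -1))
    if t <= now:  # already passed today; mktime normalizes mday + 1
        t = time.mktime((lt.tm_year, lt.tm_mon, lt.tm_mday + 1,
                         hour, minute, 0, 0, 0, -1))
    return t

# The root of the heap is always the earliest pending event.
events = []
heapq.heappush(events, (next_fire(7, 0), "coffee ON"))
heapq.heappush(events, (next_fire(7, 25), "coffee OFF"))
when, action = events[0]
```

The main loop would sleep until `when`, pop the event, act on it, and push its next occurrence back onto the heap.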
Now, if you don't like this approach (which, again, uses fewer resources than the approach you outlined in your question), contact the SystemD developers. If you kiss up to them obsequiously enough, I'm sure they'll provide a dbus signal for you. It's not like there is any sanity in its current design, and one more wart certainly won't make it any worse. Switching to C# is likely to be considered a plus.
It is crucial to understand that mktime() computes the Unix Epoch time (time_t) for the specified moment, applying daylight savings time if it applies at that specific moment. It does not matter whether daylight savings time is in effect when the function is called!
Also, UTC time is Coordinated Universal Time, and is not subject to timezones or daylight savings time.
Consider the following program, mktime-example.c:
#define _POSIX_C_SOURCE 200809L
#include <stdlib.h>
#include <string.h>
#include <stdio.h>
#include <time.h>
static time_t epoch(struct tm *const tm,
const int year, const int month, const int day,
const int hour, const int minute, const int second,
const int isdst)
{
struct tm temp;
time_t result;
memset(&temp, 0, sizeof temp);
temp.tm_year = year - 1900;
temp.tm_mon = month - 1;
temp.tm_mday = day;
temp.tm_hour = hour;
temp.tm_min = minute;
temp.tm_sec = second;
temp.tm_isdst = isdst;
result = mktime(&temp);
if (isdst >= 0 && isdst != temp.tm_isdst) {
/* The caller is mistaken about DST, and mktime()
* adjusted the time. We readjust it. */
temp.tm_year = year - 1900;
temp.tm_mon = month - 1;
temp.tm_mday = day;
temp.tm_hour = hour;
temp.tm_min = minute;
temp.tm_sec = second;
/* Note: tmp.tm_isdst is kept unchanged. */
result = mktime(&temp);
}
if (tm)
memcpy(tm, &temp, sizeof temp);
return result;
}
static void show(const time_t t, const struct tm *const tm)
{
printf("(time_t)%lld = %04d-%02d-%02d %02d:%02d:%02d",
(long long)t, tm->tm_year+1900, tm->tm_mon+1, tm->tm_mday,
tm->tm_hour, tm->tm_min, tm->tm_sec);
if (tm->tm_isdst == 1)
printf(", DST in effect");
else
if (tm->tm_isdst == 0)
printf(", DST not in effect");
else
if (tm->tm_isdst == -1)
printf(", Unknown if DST in effect");
if (tzname[0] && tzname[0][0])
printf(", %s timezone", tzname[0]);
printf("\n");
fflush(stdout);
}
int main(int argc, char *argv[])
{
struct tm tm;
time_t t;
long long secs;
int arg, year, month, day, hour, min, sec, isdst, n;
char ch;
if (argc < 2 || !strcmp(argv[1], "-h") || !strcmp(argv[1], "--help")) {
fprintf(stderr, "Usage: %s [ -h | --help ]\n", argv[0]);
fprintf(stderr, " %s [ :REGION/CITY | =TIMEZONE ] @EPOCH | YYYYMMDD-HHMMSS[+-] ...\n", argv[0]);
fprintf(stderr, "Where:\n");
fprintf(stderr, " EPOCH is in UTC seconds since 19700101T000000,\n");
fprintf(stderr, " + after time indicates you prefer daylight savings time,\n");
fprintf(stderr, " - after time indicates you prefer standard time.\n");
fprintf(stderr, "\n");
return EXIT_FAILURE;
}
for (arg = 1; arg < argc; arg++) {
if (argv[arg][0] == ':') {
if (argv[arg][1])
setenv("TZ", argv[arg], 1);
else
unsetenv("TZ");
tzset();
continue;
}
if (argv[arg][0] == '=') {
if (argv[arg][1])
setenv("TZ", argv[arg] + 1, 1);
else
unsetenv("TZ");
tzset();
continue;
}
if (argv[arg][0] == '@') {
if (sscanf(argv[arg] + 1, " %lld %c", &secs, &ch) == 1) {
t = (time_t)secs;
if (localtime_r(&t, &tm)) {
show(t, &tm);
continue;
}
}
}
n = sscanf(argv[arg], " %04d %02d %02d %*[-Tt] %02d %02d %02d %c",
&year, &month, &day, &hour, &min, &sec, &ch);
if (n >= 6) {
if (n == 6)
isdst = -1;
else
if (ch == '+')
isdst = +1; /* DST */
else
if (ch == '-')
isdst = 0; /* Not DST */
else
isdst = -1;
t = epoch(&tm, year, month, day, hour, min, sec, isdst);
if (t != (time_t)-1) {
show(t, &tm);
continue;
}
}
fflush(stdout);
fprintf(stderr, "%s: Cannot parse parameter.\n", argv[arg]);
return EXIT_FAILURE;
}
return EXIT_SUCCESS;
}
Compile it using e.g.
gcc -Wall -O2 mktime-example.c -o mktime-example
Run it without arguments to see the command-line usage. Run
./mktime-example :Europe/Helsinki 20161030-035959+ 20161030-030000- 20161030-030000+ 20161030-035959- 20161030-040000-
to examine the Unix timestamps around the time when DST ends in 2016 in Helsinki, Finland. The command will output
(time_t)1477789199 = 2016-10-30 03:59:59, DST in effect, EET timezone
(time_t)1477789200 = 2016-10-30 03:00:00, DST not in effect, EET timezone
(time_t)1477785600 = 2016-10-30 03:00:00, DST in effect, EET timezone
(time_t)1477792799 = 2016-10-30 03:59:59, DST not in effect, EET timezone
(time_t)1477792800 = 2016-10-30 04:00:00, DST not in effect, EET timezone
The output will be the same regardless of whether at the time of running this DST is in effect in some timezone or not!
When calling mktime() with .tm_isdst = 0 or .tm_isdst = 1, and mktime() changes it, it also changes the time specified (by the daylight savings time). When .tm_isdst = -1, it means caller is unaware of whether DST is applied or not, and the library will find out; but if there is both a valid standard time and DST time, the C library will pick one (you should assume it does so randomly). The epoch() function above corrects for this when necessary, un-adjusting the time if the user is not correct about DST.
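The same experiment is easy to reproduce from Python, whose time.mktime wraps the C function. This is a Unix-only sketch (time.tzset() does not exist on Windows); the expected Epoch values are taken from the program output above.

```python
import os
import time

os.environ["TZ"] = "Europe/Helsinki"  # same timezone as the example run
time.tzset()

def epoch(year, mon, mday, hour, minute, sec, isdst):
    """Epoch of a broken-down local time; isdst as in struct tm."""
    return int(time.mktime((year, mon, mday, hour, minute, sec,
                            0, 0, isdst)))

# The ambiguous hour when DST ends in Helsinki, 2016-10-30:
dst_end = epoch(2016, 10, 30, 3, 59, 59, 1)  # DST still in effect
std = epoch(2016, 10, 30, 3, 59, 59, 0)      # standard time
```

The two interpretations of 03:59:59 differ by exactly one hour, matching the 1477789199 and 1477792799 timestamps shown above.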
If I understood fully, the main gist is to reschedule after each event expiration, but just one (the next to trigger) in order to conserve timers. Rescheduling would keep events in effective local time and at worst case I would be looking at 23-25h periods, twice a year, when an event does not trigger at a correct time. (case when running just one event triggering 23:59 and getting rescheduled that one minute before local time jumps DST). And if I would know for sure that DST jumps always, regardless of TZ, at midnight, I could do a daily check and readjust if I find .tm_gmtoff changed.
@Tammi: No! Whenever an event triggers, you use mktime() to find out the UTC time of the next time that event should trigger. The C library timezone handling is intelligent enough to determine that time in UTC, including whether daylight savings time applies. All events will trigger at their correct times. I shall try to add a practical example in my answer.
@Tammi: In POSIXy systems, including Linux and Mac OS, daylight savings time is not some global flag that is turned on and off by some operating system service. It is a timezone property. The C library mktime() function takes broken-down local time fields, and converts it to UTC (Epoch), applying the current timezone rules, including DST for the target time. It does not matter whether DST applies at the calling time. When you use UTC timestamps, "DST jumps" are irrelevant: we prepare for them beforehand, so there really are no "jumps" at all.
I think I learned a valuable lesson here (though it escaped me the first time around) With mktime()'s capabilities (unrecognized by me until now) there is indeed no need for DST notifications (nor rescheduling). This is better than I imagined. Thank you very much all the time you took to help and great example code.
It later occured to me that "better than I imagined" might be understood incorrectly - I mean that @nominal-animal 's solution is much better than the best possible solution I was looking for or expecting to find. :-)
@Tammi, timezone adjustments for daylight savings apply in some regions and not in others; some regions (mainly in the past) even had four corrections per year (first one hour, then two), as it depends on how far the region is from the equator. Some countries apply the daylight correction in some years and not in others. Look at the zic(1) command to get some idea of how local time is calculated.
@Tammi: Excellent to hear. mktime() and related functions often trip people up -- the interface is unintuitive, even if it is very useful. Experimenting with a test program like mine above helps one build up a complete picture of the functionality. The hard part is solving the ambiquities ("what does the user actually mean by that") near DST changes; that's what the extra + and - are for after a timestamp in my example program. We should always find an user interface that works, then worry about the code...
@Luis Colorado, yes you're absolutely correct. Messy business that most would prefer to somehow avoid in their own projects. I am very happy that mktime() (and related) shield me from all that. Now that I have updated my small personal project accordingly, I feel pretty good about how it ended up. And even more so having learned something very useful.
Unix/Linux systems only deal with UTC internally; they use the time_t data type (the number of seconds from 00:00h Jan 1st 1970 UTC until now) as the internal time. Conversion to local time (with the complexities due to exceptions, variations for summer/winter periods, etc.) is done only when displaying the information to the user. As said, no provision to schedule something or prepare for such a change is made in the Unix system.
From zdump(1) you can get all the info you want, per timezone, and use it to construct a crontab to notify you when the switch is to be made. It consults the local database of timezones and extracts all the info about switching (including historic) from winter to summer or the reverse.
$ zdump -v Europe/Madrid
Europe/Madrid Fri Dec 13 20:45:52 1901 UTC = Fri Dec 13 20:45:52 1901 WET isdst=0 gmtoff=0
Europe/Madrid Sat Dec 14 20:45:52 1901 UTC = Sat Dec 14 20:45:52 1901 WET isdst=0 gmtoff=0
Europe/Madrid Sat May 5 22:59:59 1917 UTC = Sat May 5 22:59:59 1917 WET isdst=0 gmtoff=0
Europe/Madrid Sat May 5 23:00:00 1917 UTC = Sun May 6 00:00:00 1917 WEST isdst=1 gmtoff=3600
Europe/Madrid Sat Oct 6 22:59:59 1917 UTC = Sat Oct 6 23:59:59 1917 WEST isdst=1 gmtoff=3600
Europe/Madrid Sat Oct 6 23:00:00 1917 UTC = Sat Oct 6 23:00:00 1917 WET isdst=0 gmtoff=0
Europe/Madrid Mon Apr 15 22:59:59 1918 UTC = Mon Apr 15 22:59:59 1918 WET isdst=0 gmtoff=0
Europe/Madrid Mon Apr 15 23:00:00 1918 UTC = Tue Apr 16 00:00:00 1918 WEST isdst=1 gmtoff=3600
Europe/Madrid Sun Oct 6 22:59:59 1918 UTC = Sun Oct 6 23:59:59 1918 WEST isdst=1 gmtoff=3600
Europe/Madrid Sun Oct 6 23:00:00 1918 UTC = Sun Oct 6 23:00:00 1918 WET isdst=0 gmtoff=0
Europe/Madrid Sat Apr 5 22:59:59 1919 UTC = Sat Apr 5 22:59:59 1919 WET isdst=0 gmtoff=0
Europe/Madrid Sat Apr 5 23:00:00 1919 UTC = Sun Apr 6 00:00:00 1919 WEST isdst=1 gmtoff=3600
Europe/Madrid Mon Oct 6 22:59:59 1919 UTC = Mon Oct 6 23:59:59 1919 WEST isdst=1 gmtoff=3600
Europe/Madrid Mon Oct 6 23:00:00 1919 UTC = Mon Oct 6 23:00:00 1919 WET isdst=0 gmtoff=0
Europe/Madrid Wed Apr 16 22:59:59 1924 UTC = Wed Apr 16 22:59:59 1924 WET isdst=0 gmtoff=0
Europe/Madrid Wed Apr 16 23:00:00 1924 UTC = Thu Apr 17 00:00:00 1924 WEST isdst=1 gmtoff=3600
Europe/Madrid Sat Oct 4 22:59:59 1924 UTC = Sat Oct 4 23:59:59 1924 WEST isdst=1 gmtoff=3600
Europe/Madrid Sat Oct 4 23:00:00 1924 UTC = Sat Oct 4 23:00:00 1924 WET isdst=0 gmtoff=0
Europe/Madrid Sat Apr 17 22:59:59 1926 UTC = Sat Apr 17 22:59:59 1926 WET isdst=0 gmtoff=0
Europe/Madrid Sat Apr 17 23:00:00 1926 UTC = Sun Apr 18 00:00:00 1926 WEST isdst=1 gmtoff=3600
Europe/Madrid Sat Oct 2 22:59:59 1926 UTC = Sat Oct 2 23:59:59 1926 WEST isdst=1 gmtoff=3600
Europe/Madrid Sat Oct 2 23:00:00 1926 UTC = Sat Oct 2 23:00:00 1926 WET isdst=0 gmtoff=0
Europe/Madrid Sat Apr 9 22:59:59 1927 UTC = Sat Apr 9 22:59:59 1927 WET isdst=0 gmtoff=0
...
By the way, if you want to be advised of an imminent local time change, you can use the previous info to construct a crontab file including all the info, or simply construct a crontab file that includes the rules that apply at your locality. For example, if I want to be advised one day before a switch in Spain (it changes on the last Sunday of March/October, at 02/03h), I can add some rules to my crontab file:
0 0 24-30 3,10 5 echo Time daylight savings change scheduled for tomorrow | mail<EMAIL_ADDRESS>
and a mail will be sent to you at 00:00h (local time) on every Saturday (the 5 part) that happens to fall in the 24th-30th of March or October (the 24-30 and 3,10 parts) of each year. I'm sure you'll be able to adapt this example to your locality or amount of advance notice (i.e., the day before a time change happens).
Good information to have under this question - others searching for similar information may still want an actual notification.
How to know who can be interested on this information? A good way to know is to upvote it :)
"Unix/linux systems only deal with UTC, and .... time_t data (the number of seconds since 00:00h jan, 1st of 1970 UTC till now) as the internal time." is amiss. UTC considers leap seconds, time_t rarely does that.
@chux, it should be easy to internally use TAI instead of UTC and implement the leap-second correction in the timezone correction database (I've seen the opposite example in the tz database files) (and, for that reason, the TAI->UTC conversion), allowing the internal time to flow without disturbance. But I've seen no reference to this possibility.
@LuisColorado A singular problem with UTC as time_t is the the list of leap seconds is historic and not suitable for far future calculations. TAI sounds great except civil use of time does count leap seconds. Instead I suspect UNIX time_t, struct tm, etc. will continue to pretend leap seconds do not exist.
TAI corrections are known weeks/months beforehand; an automatic schedule of corrections is available for download in advance. FreeBSD, at least, has automatic updates (or at least warns you when the last file is due for renewal). The point here is not to perturb the internal clock discipline (more so on systems with precise timing demands) and to make all corrections at once (leap second, local time, even solar adjustments can be made); see https://www.timeanddate.com/time/leapseconds.html for more info.
| common-pile/stackexchange_filtered |
toast after removing last item from recyclerview android
I have a RecyclerView with an adapter. I am using the following method in onBindViewHolder to remove items from the list.
List<MyList> myList;
MyList item = myList.get(position);
myList.remove(position);
notifyDataSetChanged();
I want a toast after the last item is removed from the list. Please help.
Just check length of list in adapter if 0 then show toast
How to set the condition for that ? Have any reference?
you can set toast after myList.remove(position); line.
you can do if ( myList.size() == 0 ) showToast(); else myList.remove(position);
@Akash That worked
Yes post that as answer
Set toast after myList.remove(position); line.
I see that it is a custom adapter, so you can have an overridden method to check for empty:
@Override
public boolean isEmpty() {
return items.size()==0; //items is example collection
}
Use that method
if (myAdapter.isEmpty()){
//here toast show
}
This works generally when the list size is 0. How do I check the list size right after I remove an item?
Got it. Thank u :)
| common-pile/stackexchange_filtered |
How to trigger NodeJS service based on user's calendar event
I am trying to allow a user to log in to a web app with their calendar service (eg. Google Calendar, Outlook) and then whenever one of their calendar events start or end, I want a NodeJS service to be triggered.
My thinking was to try to use pipedream.com (though only an idea, very happy to use another approach) in the following fashion:
1. User logs in to their calendar service via my web app, or perhaps provides a webhook or equivalent
2. I store their calendar credentials and then pass them to a service such as Pipedream
3. On calendar event start or end, Pipedream then triggers my NodeJS service
I am stuck working out how I can pass the user's calendar credentials/webhook to Pipedream, as it requires login at the point of setup:
Any help would be much appreciated!
Is pipedream a requirement for this project? Firebase has some nice Cloud Functions triggers that would work well for what you're trying to do here.
Nope it is not at all, just an idea to use Pipedream. Thanks a lot @Harry will look into those!
Firebase cloud functions would be a good solution here. The Cloud Functions triggers allow execution of a serverless node.js function to be triggered at a specific time, in a very simple way:
exports.scheduledFunctionCrontab = functions.pubsub.schedule('5 11 * * *')
.timeZone('America/New_York') // Users can choose timezone - default is America/Los_Angeles
.onRun((context) => {
console.log('This will be run every day at 11:05 AM Eastern!');
return null;
});
The big advantage is that you only pay for the execution time of your function, so you don't need to worry about a server sitting idle until the time of the meeting (which triggers the function).
Many thanks for the answer @Harry! This looks great, but how are you imagining that it would be triggered by calendar events? Are you thinking that I would try to continually grab the user's calendar events and then setup cronjobs based on event's start and finish times?
Yes, exactly - you can automate that. Just pull the event times into a database (e.g. Cloud Firestore if you're staying in the Firebase ecosystem) and use another function that is triggered when a new event is added to grab the time of the event and schedule the cronjob.
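The per-event scheduling idea described above can be sketched in a few lines of Node.js. This is only an in-process illustration (timers are lost if the process restarts), which is exactly why the persistent Cloud Functions + Firestore setup is the more robust route; scheduleEventTriggers and the epoch-millisecond event fields are assumptions made for the sketch, not part of any Firebase API:

```javascript
// Register start/end triggers for one calendar event.
// event.start / event.end are epoch milliseconds; onStart / onEnd are the
// handlers that would call the NodeJS service.
function scheduleEventTriggers(event, onStart, onEnd, now = Date.now()) {
  const timers = [];
  if (event.start > now) {
    timers.push(setTimeout(() => onStart(event), event.start - now));
  }
  if (event.end > now) {
    timers.push(setTimeout(() => onEnd(event), event.end - now));
  }
  return timers; // keep these so the jobs can be cancelled if the event changes
}
```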
If this seems like a solution for you, please feel free to mark the question as solved and/or upvote the answer.
Sure have upvoted. Am looking into whether there is a more simple solution (eg. https://developers.google.com/calendar/v3/push) and if not, will go ahead with yours and tick it
Disable custom module functions for user roles
I've created a custom module that allows you to delete orders. Now what I want is that when another user has logged in with a custom role (so not the Administrator), this function won't work. If I disable the resource responsible for deleting orders for the custom user, the user can still perform the action although the settings for disabling and enabling are not visible.
The .php file responsible checks if the module is enabled, and if so, it will run the code. If it is disabled, it should throw the following
else {
$this->messageManager->addError(__("Either you're not allowed to delete orders or the function has been disabled"));
$resultRedirect = $this->resultRedirectFactory->create();
$resultRedirect->setPath($this->getComponentRefererUrl());
return $resultRedirect;
}
But unfortunately, when the function has been enabled by the Administrator, the function will also be enabled for other users.
Is there a work around for this, so that when the administrator is not logged in, the function will not work?
Thanks in advance.
-- SOLUTION (thanks to Shireen N) --
add the following function and use it as validation like so:
protected function _isAllowed() {
return $this->_authorization->isAllowed('Vendor_Module::resource');
}
if ($enabled and $this->_isAllowed()) {
// your code here
}
Follow this tutorial to achieve the desired results - https://www.magestore.com/magento-2-tutorial/3194-2/
Let $f\colon (a,b)\to \mathbb{R}$ be nondecreasing and continuous. If $E=\{x\in (a,b)\mid f'(x)\text{ exists and } f'(x)=0\}$, then $\lambda(f(E))=0$
I need help to understand the proof below of the following theorem.
Let $f\colon (a,b)\to \mathbb{R}$ be an arbitrary function. If $E=\{x\in (a,b)\mid f'(x)\text{ exists and }
f'(x)=0\}$, then $$\lambda(f(E))=0.$$
$\lambda$ denotes the Lebesgue measure.
You pointed out this wonderful answer to me:
More precisely, let $f:\mathbb R\to\mathbb R$ be an arbitrary function, $\Sigma$ is the set of $x\in\mathbb R$ such that $f'(x)$ exists and equals 0. Then $f(\Sigma)$ has measure 0.
By countable subadditivity of measure, we may assume that the domain of $f$ is $[0,1]$ rather than $\mathbb R$. Fix an $\varepsilon>0$. For every $x\in\Sigma$ there exists a subinterval $I_x\ni x$ of $[0,1]$ such that $f(5I_x)$ is contained in an interval $J_x$ with $m(J_x)<\varepsilon m(I_x)$. Here $m$ denotes the Lebesgue measure and $5I_x$ the interval 5 times longer than $I_x$ with the same midpoint. Now by Vitali's Covering Lemma there exists a countable collection $\{x_i\}$ such that the intervals $I_{x_i}$ are disjoint and the intervals $5I_{x_i}$ cover $\Sigma$. Since $I_{x_i}$ are disjoint, we have $\sum m(I_{x_i})\le 1$. Therefore $f(\Sigma)$ is covered by intervals $J_{x_i}$ whose total measure is no greater than $\varepsilon$. Since $\varepsilon$ is arbitrary, it follows that $f(\Sigma)$ has measure $0$.
This answer is certainly correct, but for me, just a student, it lacks too many details that I cannot fill in myself. I can't translate what is said into symbols. Could someone be kind enough to explain the details of this answer to me?
Prior state of this question.
I was looking for some text or some suggestions to prove this statement using Vitali's lemma, but at first, under the additional
nondecreasingness and continuity assumptions for $f$, hoping the proof should prove to be more simplified.
I even decided to open a bounty, in the hope that someone will be able to provide me with a proof of this fact with these hypotheses using Vitali's Lemma. But since I didn't receive any answer, I shifted my attention to the more general result.
Check this: https://math.stackexchange.com/a/363208/42969
This is true without the nondecreasingness or the continuity assumption, see the answer to https://mathoverflow.net/questions/113991/counterexample-to-sards-theorem-for-a-non-c1-map/114000#114000
Thanks, but on the basis of these hypotheses the result should be simplified, right? Here I am looking for this simplified version.
The application of Vitali’s Lemma in the link I posted is really as simple as applications of Vitali’s Lemma go. So are you looking for a proof that does not use this lemma?
Here is another approach to the problem that does not require monotonicity.
@Mittens Thanks! But I would like to know if this fact can be proved with the Vitali covering theorem.
@JonathanHole Thanks! Since we have more hypotheses here, I believe that we can still do it with the Vitali covering lemma, but by exploiting these hypotheses some simplifications will emerge somewhere compared to the general case. Here I am looking for a proof of this fact under these hypotheses that uses the Vitali covering lemma.
The inverse $g=f^{-1}$ is non-increasing, (hence of bounded variation,) so has a finite derivative almost everywhere, this gives the desired result.
@Steen82 Thanks. But as I mentioned in the post, I would like to solve the problem using Vitali's Covering Theorem.
Just saying, because you have more assumptions does not necessarily mean a simpler proof must exist. Please explain what exactly you do not like about the linked answers?
@NatMath The answer provided in the MO link given by Jonathan Hole proceeded by finding a collection of subintervals in an obvious way, then applying the Vitali covering lemma. It cannot reasonably get any simpler than that.
@Divide1918 Thanks for your interest. The problem with that answer is that I can't understand it in detail. Could you rewrite it in an explicit and understandable way? The answer will still be accepted. Thank you.
Your recent edit makes your post unclear: as Jonathan commented on June 13th while providing this MO link, the proof in it holds without your nondecreasingness and continuity assumptions, but you insisted that you were looking for a simpler proof making use of these assumptions. Now you seem to have changed your mind and only want to understand that proof, without your additional hypotheses. If so, you should: 1) remove them explicitly from your question; 2) explain what exactly you do not understand in that proof.
@AnneBauval I didn't change it because I wanted to share with everyone the evolution of the question. Yes, I started from the hypotheses of continuity and non-decrease of the function, but since I didn't receive any answers I shifted my attention to the more general result that had been suggested to me; after all, this is also fine.
Ok, and again (as also asked before by Behnam): please explain (in your post) what exactly you do not understand in that proof.
The following proof is a more explicit version of the one already mentioned above. If anything remains unclear, I’ll be happy to provide clarification.
Let $f:\mathbb{R}\rightarrow\mathbb{R}$ be an arbitrary function, and $\Sigma$ the set of $x\in \mathbb{R}$ such that $f’(x)$ exists and equals $0$. Then $f(\Sigma)$ has measure $0$.
Proof
Since $$m(f(\Sigma))\leq \sum_{n\in \mathbb{Z}} m(f(\Sigma \cap [n, n+1]))$$ (by $\sigma$-subadditivity), we can assume the domain of $f$ to be the interval $[0,1]$, i.e. if all of the measures of the series are $0$, then so must the measure of $f(\Sigma)$. Fix $\varepsilon >0$, and let $x\in \Sigma$. Since
$$
\lim_{y\rightarrow x} \frac{f(y)-f(x)}{y-x}=0,
$$
there exists a $\delta>0$ such that
$$
\left| \frac{f(y)-f(x)}{y-x} \right|<\frac\varepsilon5,$$
for all $y\in (x-\delta,x+\delta)$. Now define $I_x=(x-\delta/5,x+\delta/5)$. We can choose $\delta$ small enough so that $I_x\subset [0,1]$. Now let $z\in 5I_x$ (if and only if $|z-x|<\delta$). Then $$|f(z)-f(x)|<|z-x|\frac{\varepsilon}5< \frac{\delta\varepsilon}5 .$$
So $$f(z)\in (f(x)-\delta\varepsilon/5,f(x)+ \delta \varepsilon/5)=:J_x,$$ which implies $f(5I_x)\subset J_x$, and importantly $m(J_x)=2\delta\varepsilon/5 = \varepsilon m(I_x)$.
Now by Vitali’s covering lemma there exists a countable collection $\{x_i\}$ such that the intervals $I_{x_i}$ are disjoint and the intervals $5I_{x_i}$ cover $\Sigma$. Since $I_{x_i}$ are disjoint and $I_{x_i}\subset [0,1]$, we have $\sum_i m(I_{x_i})\leq m([0,1])= 1$. Since $\Sigma$ is covered by $\{5I_{x_i}\}$, and $f(5I_{x_i})\subset J_{x_i}$ for each $i$, we have that $f(\Sigma)$ is covered by $\{J_{x_i}\}$. Finally
$$
m(f(\Sigma))\leq m\left(\bigcup_i J_{x_i}\right)\\ \leq \sum_im(J_{x_i})=\varepsilon \sum_im(I_{x_i})\leq \varepsilon, $$
but $\varepsilon$ was arbitrary, so $m(f(\Sigma))=0$.
Edit: clarification of the usage of Vitali’s covering lemma.
Although Vitali’s (infinite) covering lemma holds for all separable metric spaces, I’ll formulate it in the case of $\mathbb{R}$.
If $\mathbf{F}$ is a family of intervals where
$$
\sup \{\ell(I)\mid I\in \mathbf{F}\}<\infty,
$$
i.e. the lengths of the intervals are bounded ($\ell(I)$ denotes the length of $I$), then there exists a countable subfamily of intervals $\mathbf{G}\subseteq \mathbf{F}$ such that the elements of $\mathbf{G}$ are pairwise disjoint, and
$$
\bigcup_{I\in \mathbf{F}}I\subseteq \bigcup_{I\in \mathbf{G}} 5I,
$$
where $5I$ denotes the interval of length $5\ell(I)$ with the same midpoint as $I$.
Why does the countable collection $\{x_i\}$ mentioned in the proof exist?
The construction of $I_x$ and $J_x$ is of course for each $x\in \Sigma$. Thus, the family of intervals (with bounded lengths, $\ell(I_x)\leq 1$) $\mathbf{F}=\{I_{x}\mid x\in \Sigma\}$ has a countable subfamily, $\mathbf{G}$, of intervals as described in the lemma. The midpoints of those intervals are, by definition, the $\{x_i\}$.
Why do the intervals $5I_{x_i}$ cover $\Sigma$?
Since $$\Sigma=\{x\in \Sigma\}\subseteq \bigcup_{x\in \Sigma} I_x\subseteq \bigcup_{i} 5I_{x_i},$$
by the construction of the $\{x_i\}$ in Vitali’s covering lemma.
@NatMath. What is it you don’t understand about the proof?
@Jan Thanks. The application of Vitali's lemma is not clear to me. Is $\{x_i\}$ a Vitali covering of $\Sigma$? Why do the intervals $5I_{x_i}$ cover $\Sigma$?
I don't know this version of Vitali's Lemma. Shouldn't we find closed intervals such that $E$ minus the union of these intervals is less than epsilon?
$z\in 5I_x$ means that $z$ is in an interval five times longer than $I_x$?
Correct, with the same midpoint as $I_x$. I’m currently editing the post for clarification.
I stated the version of Vitali’s covering lemma that we used here in the edit.
@Jan Thanks, so $\{I_x\}$ is a Vitali cover of $E$?
@Jan I'm sorry but I can't understand how Vitali's covering lemma applies.
No problem! I think the confusion stems from the fact that we're using the (weaker) covering lemma instead of Vitali's covering theorem. And no, $\{I_x\}$ isn't a Vitali covering of $E=\Sigma$, since the lemma doesn't require that.
To be precise, I am referring to Vitali's cover theorem on page 262 of Hewitt-Stromberg Real and Abstract Analysis.
@Jan I don't understand what result you are referring to. I am sorry.
Let us continue this discussion in chat.
Google analytics enhanced ecommerce, help me debug setup please
I've implemented enhanced ecommerce tracking in Google UA by populating the dataLayer with as far as I can see the correct data.
Here is the html of the actual datalayer script:
<script type="text/javascript">
dataLayer.push({
'ecommerce': {
'purchase': {
'actionField': {
'id': 'ZW10317808', // Transaction ID. Required for purchases and refunds.
'affiliation': 'Online Store',
'revenue': '9.95', // Total transaction value (incl. tax and shipping)
'tax':'0.00',
'shipping': '0.00',
'coupon': '',
'products': [
{
'name': 'Test product', // Name or ID is required.
'id': 'ZCMNR010',
'price': '9.95',
'brand': 'Brand',
'category': '',
'variant': '',
'quantity': 1
//, 'coupon': '' // Optional fields may be omitted or set to empty string.
}
]
}
}
}
});
</script>
And here is a sample output from the console so it appears (to me) that the values are all making to the dataLayer:
dataLayer
[
0: {
[functions]: ,
__proto__: { },
ecommerce: {
[functions]: ,
__proto__: { },
purchase: {
[functions]: ,
__proto__: { },
actionField: {
[functions]: ,
__proto__: { },
action: "purchase",
affiliation: "Online Store",
coupon: "",
id: "ZW10317808",
products: [
0: {
[functions]: ,
__proto__: { },
brand: "Brand",
category: "",
id: "ZCMNR010",
name: "Test Product",
price: "9.95",
quantity: 1,
variant: ""
},
length: 1
],
revenue: "9.95",
shipping: "0.00",
tax: "0.00"
}
}
}
},
1: {
[functions]: ,
__proto__: { },
event: "gtm.js",
gtm.start:<PHONE_NUMBER>007
},
2: {
[functions]: ,
__proto__: { },
ecommerce: {
[functions]: ,
__proto__: { },
checkout: {
[functions]: ,
__proto__: { },
actionField: {
[functions]: ,
__proto__: { },
step: "Order Confirmation"
}
}
},
event: "checkout"
},
3: {
[functions]: ,
__proto__: {
[functions]: ,
__proto__: null
},
event: "gtm.dom"
},
4: { },
length: 5
]
The transaction is listed in UA but with 0.00 revenue, 0 items, etc. This transaction was from last week so unlikely to be data latency.
Also, the checkout step tracking isn't showing in the reports either. I've enabled the enhanced tracking in the GA view and deployed the plugin via the setting in GTM.
I'm scratching my head. Hopefully a fresh pair of eyes can spot something obvious.
Thanks if you can help.
Thanks
I don't see anything obvious with your code. Have you tried using the Google Analytics Debugger to monitor the request?
Blexy, many thanks for taking some time to look at this for me. I can confirm that I have resolved this issue and like often it was a simple defect that was hidden by my own eyes seeing what they wanted to and not what was there. My rendered script had a missing { from the code ,
'products': [
and the matching closing tag but otherwise appeared correct. Methodical debugging revealed the issue, which should have been my first port of call. And thanks for the Chrome extension, I was not aware of that and am installing it now. Every cloud.. :)
Products must sit beneath "purchase", not within the actionField object:
<script type="text/javascript">
dataLayer.push({
'event': 'transaction',
'ecommerce': {
'purchase': {
'actionField': {
'id': 'ZW10317808', // Transaction ID. Required for purchases and refunds.
'affiliation': 'Online Store',
'revenue': '9.95', // Total transaction value (incl. tax and shipping)
'tax':'0.00',
'shipping': '0.00',
'coupon': ''
},
'products': [
{
'name': 'Test product', // Name or ID is required.
'id': 'ZCMNR010',
'price': '9.95',
'brand': 'Brand',
'category': '',
'variant': '',
'quantity': 1
//, 'coupon': '' // Optional fields may be omitted or set to empty string.
}
]
}
}
});
Hope that helps.
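As a sanity check before pushing, the corrected shape can be verified in the browser console; productsPlacedCorrectly is a hypothetical helper written for this answer, not part of GTM or analytics.js:

```javascript
// Returns true only when 'products' is an array directly under 'purchase'
// and is NOT nested inside actionField (the bug in the original markup).
function productsPlacedCorrectly(payload) {
  const purchase = payload && payload.ecommerce && payload.ecommerce.purchase;
  if (!purchase) return false;
  const inPurchase = Array.isArray(purchase.products);
  const inActionField = !!(purchase.actionField && 'products' in purchase.actionField);
  return inPurchase && !inActionField;
}
```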
I need to make a button that goes to another page work when it's written in PHP code (with echo)
I need to make the HTML that is written in PHP (via echo) work:
<?php
echo
"
<html>
<head>
</head>
<body>
<div onclick='location.href'='car_details.php';' >
...
</div>
</body>
</html>
"
?>
Is there any reason why you are not using an anchor tag?
Your quotes are wrong. It should be
<div onclick='location.href=\"car_details.php\"' >
You have to escape the double quotes because otherwise they would end the string started with echo ".
That worked, thanks; didn't know about \"...\"
MSMQ for those application which are deployed on different machines
I am wondering how I can use MSMQ with applications that are deployed on different machines. MSMQ works perfectly for applications that are deployed on the same server.
Server 1
Web application 1
Web application 2
Both applications can exchange messages using MSMQ.
Can MSMQ be helpful if the applications are deployed on different machines?
Server 1
Application 1
Server 2
Application 2
How will they exchange messages, given that they are deployed on separate servers?
Any help would be highly appreciated.
What is your observation? Did you try that out?
I have an idea: I could deploy a web service which has the rights to access the queue, and the rest of the world would access that web API?
http://nthrbldyblg.blogspot.sg/2017/02/msmq-between-two-computers.html
You can send messages across servers using MSMQ. I don't know your how your websites interact but this might help.
Look at the following article regarding Direct Format Names
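For reference, a direct format name addresses a queue on a remote machine without a directory-service lookup; the server name, IP address, and queue name below are illustrative:

```
FormatName:Direct=OS:RemoteServer\private$\MyQueue
FormatName:Direct=TCP:192.168.1.10\private$\MyQueue
```

In .NET such a string is assigned as the MessageQueue path, so Application 1 on Server 1 can send to a queue hosted on Server 2 by machine name or IP address.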
Does Zend_Validate work on filtered/unfiltered values?
I wonder: if a Zend_Form_Element has filters attached to it, will Zend_Validate validate the filtered or the unfiltered input?
I guess it depends on what you pass as the values. In the context of forms:
$form->isValid($this->getRequest()->getParams()); // unfiltered
$form->isValid($form->getValues()); // filtered, cos getValues() returns filtered values
Unfiltered.
Filters applied to the elements when you try to get their values via getValue() method.
So I guess you mean it's unfiltered if I use what I see in most tutorials, $form->isValid($this->getRequest()->getParams()). If I do something like $form->isValid($form->getValues()), it will be filtered, I guess?
Queries that rank users according to the number of close/reopen votes they have cast
I know that I can check this statistics for my profile by going to the "votes" tab. But for other users I can not do the same (it is hidden).
I want to see which users have cast the most number of close or reopen votes.
So, is it possible to create such queries?
What I have written here is to the best of my knowledge, but I am only a beginner at using the SE data explorer and corrections may be due. I did not write the queries mentioned but only slightly adjusted them. Others will certainly be able to improve them.
To some extent, votes are private. The public database mainly includes information that is somehow publicly accessible and excludes information that could compromise privacy. Close voters are listed under a closed post and permanently in the timeline. Reopen voters can also be seen in the timeline. Timeline information is in the data explorer, so it is possible to get a limited version of the information you want.
Close Voters
This query should return the top close voters.
Note that:
It uses the PostHistory table record for close stats. The relevant schema stores the fact that the post was closed. Unsuccessful close votes are not accessible as far as I know, so this is only a count of those votes that resulted in the post being closed.
Deleted posts are not included. This is a problem, because closing questions makes it much more likely that they will be deleted. But I do not think the PostsWithDeleted table has enough data to fix this because it has no user data.
Reopen Voters
This query should return the top reopen voters.
Note that:
It uses the PostHistory table record for reopen stats. The relevant schema stores the fact that the post was reopened. Unsuccessful reopen votes are not accessible as far as I know, so this is only a count of those votes that resulted in the post being reopened.
Deleted posts are not included.
I was also trying to modify this query but you posted first. I think it's not working properly, because I have also successfully re-opened some posts but it's not showing them. One example is this.
@TriyugiNarayanMani I am very willing to believe it is not working properly, but the post you linked to was reopened very recently and the data only gets dumped once a week, sometimes less often, so may not include that one yet. According to my version you have participated in 52 successful votes to reopen...
That was only one example. There are many posts to which I have voted to re-open and they have re-opened. Now, your changed query is showing correct info.
@TriyugiNarayanMani oh! I am surprised that the small change I made fixed that. There may still be problems with it. Thanks a lot for reporting the inconsistency! Please feel free to post a better answer if you manage to improve the query more...
Read Excel file from a starting row using OLEDB Data Provider in C# .Net
I have an Excel sheet with a dynamic number of columns.
I want to read the Excel file skipping the first 2 rows, i.e. starting from row number 3, reading all columns, using the OLEDB Data Provider in C# .NET.
Thanks in advance.
Why do you want to read excel through an Oledb data provider?
@Iliass Nassibane I want to develop an import functionality reading/using Excel data in ASP.NET. Is there any better option than the OLEDB Data Provider for reading Excel data? Please suggest.
If you can change the Excel file, add a column that is an Id. Then you can fill it with auto-increasing numbers.
Column ID screenshot
Then you can write an SQL query like 'SELECT * FROM [YourSheet$] WHERE Id>2'. (The Id header is the first row, so rowNumber > 3 is equivalent to Id > 2.)
Carriage return in DataReceivedEventArgs
I am running a console process attached to my C# application using UseShellExecute and redirecting the output to my application. I have the OutputDataReceived event handled.
Now if the console process returns data containing a carriage return (as with chkdsk, where the completion percentage is updated on the same line of the console window), the event handler function does not receive the carriage return character in DataReceivedEventArgs.Data.
Is there a way to enable getting carriage return in the data received?
Why do you need to enable getting the carriage return? There is implicitly a carriage return at the end of the data every time the event is raised. That's what ReadLine does for any StreamReader, and it's why the method to start reading the output asynchronously is called BeginOutputReadLine.
@mike-z, every line read has an implicit carriage return, and most have a newline character as well. Now I want to distinguish a bare carriage return from a carriage return with a newline character. As I mentioned in my post, the chkdsk and format commands print the completion percentage on the same line in the console. I want to mimic that behavior while writing the content read into a rich text box.
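To illustrate the behavior being mimicked (shown here in Node.js rather than C#): a bare carriage return moves the cursor back to the start of the line without advancing it, so each write overdraws the previous percentage, chkdsk-style. renderProgress is just a name invented for this sketch:

```javascript
// chkdsk-style single-line progress: '\r' (with no '\n') rewrites the same line.
function renderProgress(write) {
  for (let pct = 0; pct <= 100; pct += 25) {
    write('\r' + pct + '% complete');
  }
  write('\n'); // terminate the line once the work is done
}

renderProgress(s => process.stdout.write(s));
```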
Implement animated Button in cocos2d
I wish to clone the effects of the button animations found in Candy Crush Saga.
And also I wish to know how to do such fluid & beautiful animations.
Is it possible with Cocos2d-iPhone?
This is the link for Candy Crush Saga:
http://www.youtube.com/watch?v=KAMUWIqYN24
Is it done using image sequences?
It is possible. Just run an animation on the button's normal sprite.
GTAnimSprite *frame_normal = [GTAnimSprite spriteWithFile:@"button.png"]; // file name illustrative
GTAnimSprite *frame_sel = [GTAnimSprite spriteWithFile:@"button.png"];    // file name illustrative
frame_sel.color = ccc3(128,128,128);
CCMenuItemSprite *plyBtn = [CCMenuItemSprite itemWithNormalSprite:frame_normal
selectedSprite:frame_sel
target:self
selector:@selector(playBtnPress:) ];
plyBtn.position = ccp(size.width*0.77f, size.height*0.1f);
CCMenu *menu2 = [CCMenu menuWithItems:plyBtn, nil];
menu2.position = ccp(0.0f,0.0f);
[self addChild:menu2 z:2 ];
//Here is class file:GTAnimSprite.h
#import <Foundation/Foundation.h>
#import "cocos2d.h"
@interface GTAnimSprite : CCSprite
{
bool bouncing;
float counter;
}
@end
//Here is class file:GTAnimSprite.mm
#import "GTAnimSprite.h"
@implementation GTAnimSprite
-(void)onEnter
{
[super onEnter];
counter = 0.0f;
bouncing = true;
[self scheduleUpdate];
}
-(void)update:(ccTime)dt
{
if (bouncing)
{
counter += dt;
self.scaleX = ( (sin(counter*10) + 1)/2.0 * 0.1 + 1);
self.scaleY = ( (cos(counter*10) + 1)/2.0 * 0.1 + 1);
if (counter > M_PI*10){
counter = 0;
}
}
}
-(void)onExit
{
[self unscheduleUpdate];
[super onExit];
}
@end
HERE IS XCODE SAMPLE SOURCE: https://www.box.com/s/52i4xyznetyyc329evcu
I have implemented similar behavior with CCActions. What's best performance-wise, a custom CCSprite (like this) or implementing CCActions?
@KarlosZafra, Good to implement with CCActions. Show me code that you used with CCActions.
@Guru: This is working fine, no doubt about it. When I simply create a sprite it works well, but when I use this sprite in a CCMenuItemSprite it gets scaled vertically and in the direction of the positive x axis; that is, the sprite's left side stays fixed. How can I solve this? If you need further description, let me know.
instead of running it on CCMenuItemSprite, run it on its normal and selected image.
Is it possible to implement this kind of buttons on website using jquery, css3, .... ?
Nice work. I would love to implement this code sometimes. However, I have a question @Guru, could you explain why you have used that sin(..), cos(..) in update method? Why it is necessary? Is there any formula related to this?
To get a wave factor we used sin and cos; they give floating-point values that produce a smooth effect for the buttons.
This is how I do it using CCActions
First, I define the CCAction:
id scaleHorDown = [CCScaleTo actionWithDuration:duration * 5/30.f scaleX:0.75f scaleY:1.0f];
id scaleHorBouncing = [CCEaseBounceIn actionWithAction:scaleHorDown];
id scaleVerDown = [CCScaleTo actionWithDuration:duration * 5/30.f scaleX:1.0f scaleY:0.65f];
id scaleVerBouncing = [CCEaseBounceInOut actionWithAction:scaleVerDown];
id shrink = [CCSequence actions:scaleHorBouncing,scaleVerBouncing, nil];
id swell = [CCScaleTo actionWithDuration: duration * 15/30.f scale:1.10f];
id swellEase = [CCEaseElasticOut actionWithAction:swell];
id resetScale = [CCScaleTo actionWithDuration:duration * 5/30.f scale:1.0f];
id resetScaleBouncing = [CCEaseInOut actionWithAction:resetScale];
id buttonAction = [CCSequence actions: shrink, swellEase, resetScaleBouncing, nil];
Then I run the animation over the desired sprites when CCMenuItem is initialized
CCMenuItem *aMenuItem = [CCMenuItemSprite itemFromNormalSprite:buttonNormalSprite
                                                selectedSprite:buttonSelectedSprite
                                                         block:^(id sender) {
    // play animation highlighting the button
    [buttonSelectedSprite runAction:buttonAction];
}];
In my case I'm only running the animation when the button is pressed.
Chrome's Autofill Leaves Greyish Corners on Rounded Input Fields
Let's say someone is working on a web site that allows users to create a profile. This designer really likes the way input fields look with rounded corners, but Chrome's autofill feature is doing something odd to them. Of course, they could take the easy way out and remove the border-radius definition, therefore avoiding the weird corners, but then the site wouldn't have the look they were hoping for.
Here are before-and-after images of what the fields would look like when autofill is used.
And here's a JSFiddle for anyone that would like to play around with it.
If helpful, here is the relevant code being used to modify the fields:
.field {
background-color: #202020;
border: 1px solid #000;
border-radius: 4px;
box-shadow: 0 0 5px #000;
color: #d8d8d8;
}
input:-webkit-autofill {
box-shadow: 0 0 0 100px #202020 inset, 0 0 5px #000;
-webkit-text-fill-color: #d8d8d8;
}
Several attempts were made to find the culprit behind this problem, including removing the outer shadow from both definitions, as well as changing the inner shadow's position and blur radius. The greyish corners were still there. The only real "solution" was to revert to square corners, but that option is being reserved as a last resort.
After numerous searches for a solution to this issue, all that could be found were ways to circumvent the default pale yellow background. And that's great news, but the designer is still left with those ugly corners. Is there a way to get rid of them entirely and maintain the field's original style? or is it a glitch that has no work-around?
Thank you for any insight or help you can provide.
Kreven's solution, while not the most elegant line of code, will definitely get the job done for most people I reckon. However, I'd like to modify it a bit and explain why it even works in the first place. Let's take a look at this line of code:
transition: background-color 2147483647s;
Here is a transition that would take 68.24 years to complete. Looks silly, right? If you're wondering where that magic number came from (2,147,483,647), this is the maximum size of an integer, and thus the maximum duration for a CSS transition. What this transition is doing is making it take about 68 years for your browser's autofill implementation to change the background color of your input.
It's also worth noting that this cheap trick will negate the need for you to use the "-webkit-box-shadow" CSS command (unless, of course, you need the autofill background-color to be different than the non-autofill background-color).
Hope this helps somebody! Cheers.
Note that this transition should go in the input block, and not in the .field block. (It might be confusing, as people see a background-color already defined on the .field block and might put it there.)
I found that increasing the border width and making it the same colour as the input background seems to help. Then reduce the padding to achieve the same height:
https://jsfiddle.net/Lguucatv/1/
border: 4px solid #202020;
padding: 1px;
Also modified the box-shadow to match original design:
box-shadow: 0 0 0 1px #000, 0 0 5px 1px #000;
Is there a way to get rid of them entirely and maintain the field's original style?
Here is the css
body {
background-color: #282828;
}
.field {
background-color: #202020;
border: 1px solid #000;
color: #d8d8d8;
margin: 100px; /* adding space around it to */
padding: 5px; /* make it easier to see */
}
input:-webkit-autofill {
box-shadow: 0 0 0 100px #202020 inset, 0 0 5px #000;
-webkit-text-fill-color: #d8d8d8;
}
DEMO
This is only removing the rounded corners, which does work, but it doesn't solve the problem. If it's a glitch that currently has no fix or work-around, that's understandable and we have no choice but to accept it and move on. However, if a solution does exist, I'm hoping to find it here. :)
add transition to background-color with high animation time to the .field element
.field {
...
transition: background-color 5000s;
}
solution found here
demo on codepen
I fixed the problem by adding this to my css:
@media (prefers-color-scheme: dark) {
[data-theme*=dark] {
color-scheme: dark;
}
}
This can help you to avoid these annoying borders:
.field {
-webkit-box-shadow: inset 0 0 0 30px #2E2E2E !important;
background-clip: content-box !important;
}
Aluminium oxide phase diagram
I have been looking for the phase diagram of alumina (aluminium oxide); could it be that there is no such thing?
thank you
There is really not much to it. Basically you have a metal and an oxide phase, and never the twain shall mix significantly even after melting the oxide above 2000°C [1]. The suboxide that is sometimes mentioned, $\ce{Al2O}$, is not indicated here.
From Ref. 1
Reference
1.
S. Das, "The Al-O-Ti System", J. Phase Equilibria 23(6) (2002), 525-536.
https://www.sciencedirect.com/science/article/pii/036459169290005I?via%3Dihub is another reference - it also shows no other binary phases, although it does show an ionic liquid of Al2O3 composition with a miscibility gap to the Al liquid.
This diagram shows the ionic liquid as well, its saturation line is the short segment on the upper right.
| common-pile/stackexchange_filtered |
configure devise for "real email sending"?
I need to know how to configure the Devise gem. I'm using Ruby 1.9.2, Rails 3.1.3, and Devise 1.5.3. I'm building an app about surveys and creating my authentication module. In my log file I can see that the user receives the email to activate the account, and when I copy the activation link from the log file it works. But at the moment I can't test it for real, because I don't receive an actual activation email in my Gmail (I filled in the form with my own information); I can only see it in the log file. So my question is: how can I configure Devise in production mode? What are the (basic) steps, step by step? I'm a beginner. Thanks in advance.
have a look at the guides:
config.action_mailer.delivery_method = :smtp
config.action_mailer.smtp_settings = {
:address => "smtp.gmail.com",
:port => 587,
:domain => 'baci.lindsaar.net',
:user_name => '<username>',
:password => '<password>',
:authentication => 'plain',
:enable_starttls_auto => true }
http://guides.rubyonrails.org/action_mailer_basics.html#action-mailer-configuration-for-gmail
EDIT
this configuration would need to go into the config/environments/development.rb if you want that in development environment. see the guides about configuring your application: http://guides.rubyonrails.org/configuring.html
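As a concrete sketch, in a Rails 3.x app this configuration would typically live in config/environments/development.rb (or production.rb). The default_url_options host is needed so Devise can build full links in its emails. The application name and credentials below are placeholders you must replace:

```ruby
# config/environments/development.rb (illustrative sketch; replace the placeholders)
YourApp::Application.configure do
  config.action_mailer.delivery_method = :smtp
  config.action_mailer.raise_delivery_errors = true
  # Devise needs this to generate full URLs in confirmation emails:
  config.action_mailer.default_url_options = { :host => 'localhost:3000' }
  config.action_mailer.smtp_settings = {
    :address              => "smtp.gmail.com",
    :port                 => 587,
    :domain               => 'example.com',
    :user_name            => '<username>',
    :password             => '<password>',
    :authentication       => 'plain',
    :enable_starttls_auto => true
  }
end
```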
Thanks... but as I said, I'm a beginner... so in what file do I paste this code? Sorry...
| common-pile/stackexchange_filtered |
Prompt a message when datagrid is empty
I am using WPF. I am using datagrid to dynamically add items in it.
When the application initially loads the datagrid is empty, and when all of the items in the datagrid are removed it only shows the datagrid header.
How can I remove the header and show a message like "Please insert an item." when the datagrid is empty?
Possible duplicate of http://stackoverflow.com/questions/3117754/zero-results-message-in-the-wpf-grid
I'd use an IValueConverter for this. Bind directly to the Items source, and when it's null/empty, then return Visibility.Collapsed. Add text notice as a TextBlock, and negate the converter using a parameter.
<TextBlock Text="There are no items"
Visibility="{Binding Items,
Converter={StaticResource ItemsToVisibilityConverter},ConverterParameter=negate}" />
<DataGrid Visibility="{Binding Items,
Converter={StaticResource ItemsToVisibilityConverter}}">
</DataGrid>
And the converter has to make use of the ConverterParameter:
using System;
using System.Linq;
using System.Windows;
using System.Windows.Data;

public class ItemsToVisibilityConverter : IValueConverter
{
    public object Convert(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture)
    {
        // Cast to the non-generic IEnumerable so this also works for value-type
        // collections and for ItemCollection (which is not IEnumerable<object>)
        var items = value as System.Collections.IEnumerable;
        bool isVisible = items != null && items.Cast<object>().Any();
        if ((string)parameter == "negate") isVisible = !isVisible;
        return isVisible ? Visibility.Visible : Visibility.Collapsed;
    }

    public object ConvertBack(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture)
    {
        throw new NotImplementedException();
    }
}
| common-pile/stackexchange_filtered |
What are the laws, rules and regulations that make South Korea more technocratic than the United States?
In Asia, they already do. From Taiwan to South Korea to Singapore,
doctors, engineers, and other professionals occupy the top rungs of
elected office and functional agencies. In these countries, public
administration is a vocation, and revolving doors between corporate
and political life are minimal. Transparency is high and corruption is
low. What differentiates all three Asian states—and others with
ultra-low COVID-19 death rates—is they are highly technocratic.
https://foreignpolicy.com/2021/06/24/pandemic-technocrats-global-challenges/
This passage says Korea is more technocratic than the United States, but it doesn't explain why there are more technocratic or skilled people in the Korean government than in the U.S., or what other factors make it more technocratic than the United States.
That statement also says corruption is low but how many South Korean Presidents have gone to jail for corruption?
"There's this passage that says Korea is more technocratic than the United States" Where does it say so? The US do not seem to be mentioned at all in the quote.
The answer would seem to be in the quote "From Taiwan to South Korea to Singapore, doctors, engineers, and other professionals occupy the top rungs of elected office and functional agencies." This is what the article means by "technocratic". As for "why", I think we can just say "cultural differences", and leave it at that.
Ah, a revolving door between corporations and politics. That has produced so many benefits in the US. Well, for corporations, anyway. And in South Korea, arguably, for the chaebols.
This is in part a frame challenge answer. The article says that these eastern democracies are more technocratic. It means "doctors, engineers, and other professionals occupy the top rungs of elected office and functional agencies." There is no reason to suppose that there are laws, rules or regulations that require this. It is quite possible that this is a cultural difference between these eastern democracies and western ones.
For example, in the USA many roles in the executive are given only to supporters of the President (and the office holders are expected to resign if the President loses power in an election).
Moreover, many functional positions in the states are elected and contested in partisan elections. Together these mean that many positions in government are held by partisans and not technocrats.
Finally, note that this is just the opinion of the article's author. There's no evidence that the author has actually measured the level of technocracy in an objective way, it seems to be based more on the impressions of the author rather than a detailed analysis.
| common-pile/stackexchange_filtered |
ASP.NET Dynamic Data and uploading files
I am developing a small, internal-use only web application. Given its simple nature and intended audience, I decided that it might be a good opportunity to use a ASP.NET Dynamic Data project to get things up-and-running quickly. So far so good, except for one issue that has me reconsidering the whole plan:
I need to be able to upload files through the website. There is an entity in the model that represents an uploaded file. This entity has properties for the file's contents, the file name, and the file's content type. When uploading a file, all of these values are obtained from a single FileUpload control.
Since a FieldTemplate has a one-to-one association with an entity property, I decided that I needed to create a custom EntityTemplate for the File entity. At this point, I have created an "edit" template for the entity that has a FileUpload control. What I have not been able to figure out is, when the user clicks the 'Update' link, how do I get the data from the FileUpload control back into the entity and (ultimately) into the database.
Any advice or guidance is much appreciated.
You can add other values to the dictionary in the ExtractValues method of the FieldTemplate. What you have to be aware of is that if those values also appear as rows or columns in the page template, they will overwrite the values you added. I usually scaffold them as false and then only reference them in the custom field template.
Note you can access their initial values from the Row property in the OnDataBinding event, you can cast the property as the actual type or use an interface added via buddy classes.
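To sketch the idea, here is an illustrative override for the code-behind of a custom edit FieldTemplate. It assumes a FileUpload control named FileUpload1 and entity properties named FileName, ContentType, and Contents; those names are assumptions, not part of the question:

```csharp
// Sketch: code-behind of the custom edit FieldTemplate (names are illustrative).
protected override void ExtractValues(IOrderedDictionary dictionary)
{
    if (FileUpload1.HasFile)
    {
        // Keys must match the entity property names. As noted above, if those
        // columns are also scaffolded in the page template, the scaffolded
        // values will overwrite these.
        dictionary["FileName"]    = FileUpload1.FileName;
        dictionary["ContentType"] = FileUpload1.PostedFile.ContentType;
        dictionary["Contents"]    = FileUpload1.FileBytes;
    }
}
```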
| common-pile/stackexchange_filtered |
Is the two slit experiment "measurement device" always symmetric?
In this simplified video that is quite famous on youtube, it says that they tried to measure the electron by one slit. That seems to imply that it could be introducing interaction asymmetry in the underlying (higgs and or gravity) field.
Is the same interference pattern measured with single electrons when there is no asymmetry in the measuring architecture/apparatus? (That is, simply speaking, a measurement device placed symmetrically at each slit.)
And when the architecture is symmetric and they put one electron through each slit at once, is the result the same as when they put a single electron through a single slit?
How do you "put one electron through each slit at once" without destroying the experiment? If both slits are open, each electron will go through both slits; that is one classic conclusion of the experiment.
@GuyInchbald that is the quantum wave-function implication, not necessarily what we expect from a classical electron particle. I guess I need to find a paper reviewing all these experiments to fully understand the conditions and results.
Yes, Young's slits provide one of the definitive demonstrations that reality is quantum and not classical; it cannot be understood in classical terms. Look up de Broglie waves.
I understand that this is a quantum experiment, but this doesn't answer my question about the potential effects of symmetry.
Your question appears to ask about an impossible situation. hence I posted a comment not an answer. Maybe you need to rethink your underlying assumptions and ask about those.
OK. You seem to be focusing on the uncertainty in the "putting each electron through each slit at once" bit. Yet we can at least attempt to do this with two separate lasers, one pointed at each slit. That is what I'm asking about: is there a difference between this and only a single electron aimed at one slit, when there is underlying symmetry versus asymmetry in the apparatus?
The uncertainty I am focusing on is not quantum, it is yours. Your question appears to ask about an impossible situation. hence I posted a comment. It would help if you illustrated your question with diagrams of each experimental arrangement.
@GuyInchbald you're still only focusing on the one case of a particle through each slit at once. That was only a sub-question to my overall question about symmetry. Your comment is noted though. When I get time I'll do a diagram.
Yes. I made a comment on that one sub-question. I did not answer it, I did not address your other questions. This is perfectly normal for Stack Exchange, perhaps you are new here?
I would not say "infamous", rather that it is on its way to become famous.
I want to provide some more context from the video: Starting at 3:40, after describing the double-slit experiment resulting in multiple "bright and dark bands on the screen", Dr. Quantum says as follows:
But physicists were completely baffled by this, so they decided to
peak and see which slit it actually goes through. They put a measuring
device by one slit to see which one it went through... and let it fly!
But the quantum world is far more mysterious than they could have
imagined. When they observed, the electron went back to behaving like a
little marble. It produced a pattern of two bands, not an interference
pattern of many! The very act of measuring or observing which slit it
went through meant: it only went through one, not both!
There is no reason to imply anything besides non-relativistic quantum mechanics, which explains this experiment perfectly.
The explanation for the double-slit experiment as described has the following assumptions/approximations:
All the electrons act like single electrons (i.e. no interaction), at least qualitatively.
They are prepared so that the wave function that approximates them is a plane wave (or a spherical wave at sufficient distance).
This produces the behavior as described in the video, including the scenario with the measuring device by no, one or both slits.
So, to answer your questions specifically:
Yes, a single electron (to be precise: the experiment repeated many times, because a single electron produces only a single "dot" on the detector as it is detected as a particle) produces the same patterns because, as mentioned, the behavior of many electrons is actually well-approximated by the behavior of a single electron in this particular experiment. So, spatial symmetry of the experimental setup is of no relevance as long as the electrons fulfill the above approximation.
Putting one electron through each slit would require an experimental setup that can do that, i.e. preparing the two electrons so that they travel separately from the two emitter devices to their respective slit. Then, the wave function of each electron is not a plane or spherical wave any more. It is localized, at least so that the wave functions of the two electrons don't overlap. The behavior of each electron-slit-combination can be calculated separately, and they act like the other combination just doesn't exist in very good approximation. So it's just two single-slit experiments next to each other.
I think an intuitive explanation for the latter experiment is that, by preparing the electrons to be localized, you "observe" them as being localized in a similar way as the measuring devices. Mathematically, both scenarios impose boundary conditions to the wave functions (i.e. it has to be zero in some portions of the experiment).
The statement "Yes, a single electron produces the same patterns because, as mentioned, the behavior of many electrons is actually well-approximated by the behavior of a single electron in this particular experiment." is misleading. A single electron ends as a dot on the observation screen. In reality, as long as you do not burn the edges with too many electrons, the intensity distribution on the screen will be the same for this as for electrons transmitted one by one. Not every Gedankenexperiment is to be taken seriously.
@HolgerFiedler yes, you're right of course, I used the word "behavior" in a very broad sense. I adapted the text, thank you!
| common-pile/stackexchange_filtered |
xpages display a new doc. inside a dialog
I have an issue that has been giving me a headache lately.
In my XPage there is a view displaying some docs (let's say with Pdoc as the datasource), and I open/create them inside a <xe:dialog>. This dialog has only Pdoc declared as a datasource, and it inherits some values from the XPage datasource. My clickable column formula is:
// some var declarations
var formName = rowData.getDocument().getItemValueString("Form");
if ( formName == "fmP" )
{ viewScope.put("dlgDocUnid", pe.getUniversalID())
getComponent("exampleDialog").show(); }
On the same XPage, I can create a new Pdoc using the same dialog via a button, New Pdoc.
The problem is: when I open an existing Pdoc and then just save or close it, and afterwards use the button to create a newNote, the old/previous (already saved) Pdoc is shown...
If I first just create a new Pdoc note, it works and shows a new, empty Pdoc.
My dialog data code:
<xp:this.data>
<xp:dominoDocument var="Pdoc" formName="fmPersContact"
ignoreRequestParams="true" scope="request" action="editDocument">
<xp:this.documentId><![CDATA[#{javascript:viewScope.get("dlgDocUnid");}]]></xp:this.documentId>
</xp:dominoDocument>
</xp:this.data>
I use the documentId for the open method from the viewPanel. I think the problem is here. I think (I'm not sure) I should compute this documentId in such a way that when I create a newNote the documentId is no longer viewScope.get("dlgDocUnid").
Thanks for your time.
If I understood correctly, you have defined two data sources within the XPage and you try to consume them in the dialog, right? Instead I suggest defining a single data source within a panel inside the xe:dialog.
I have blogged about a similar example. In this example, tooltip dialog has been used but it's the same logic, you might replace xe:tooltipDialog with xe:dialog.
http://lotusnotus.com/lotusnotus_en.nsf/dx/mini-patterns-for-xpages-parameter-editing-with-dialogs-1.htm
The idea here is that you use a viewScope variable named noteId. To open an existing document, set this variable to the note id of the existing document. To create a new document, the value will be set as NEW. Then you define the data source within the dialog according to this variable:
<xe:dialog>
<xp:panel style="width:500.0px">
<xp:this.data>
<xp:dominoDocument
var="document1"
formName="Parameter"
action="#{viewScope.noteId eq 'NEW'?'createDocument':'editDocument'}"
documentId="#{viewScope.noteId eq 'NEW'?'':viewScope.noteId}"
ignoreRequestParams="true">
</xp:dominoDocument>
</xp:this.data>
..... Dialog content ....
</xp:panel>
</xe:dialog>
When you put the data source inside the dialog, you don't refresh the page to load or prepare data sources before launching the dialog which is your current problem I guess.
Thanks for your answer. There's quite a big chance I misunderstood you (my fault; little experience in XPages): my XPage has only one datasource declared in its data (Cdoc), and the dialog is indeed on my XPage with Pdoc declared as its datasource. I will try to change the action and documentId parameters and let you know.
Paste some more code from data sources. Maybe I understood the problem wrong.
My xpage code is:
<xp:this.data>
    <xp:dominoDocument var="Cdoc" formName="fmCompanie"
        computeWithForm="both">
    </xp:dominoDocument>
</xp:this.data>
My dialog code:
<xp:this.data>
    <xp:dominoDocument var="Pdoc" formName="fmPersContact"
        ignoreRequestParams="true" scope="request" action="editDocument">
        <xp:this.documentId><![CDATA[#{javascript:viewScope.get("dlgDocUnid");}]]></xp:this.documentId>
    </xp:dominoDocument>
</xp:this.data>
<xe:dialog id="exampleDialog" refreshOnShow="true" .....
When you open a new dialog, you clear viewScope.dlgDocUnid?
I guess I didn't :) My code for the button that creates a new dialog is in my question ... How can I clear the viewscope?
viewScope.remove("dlgDocUnid"), but if you are removing this variable, you need to check if it's null within the dialog data source. Look at my example from my blog.
So, if I just set viewScope.dlgDocUnid = "NEW" in my NEW DOC button, I will just change its value without removing it. And then I could use your action and documentId parameters?
One more little misunderstanding: is adding <xp:this.documentId>..</xp:this.documentId> equivalent to just documentId=".." on the xp:dominoDocument? Now I'll make the changes you suggested and let you know.
Yes. That is coming from XML syntax, you can't use some characters in attributes, so XPages editors change that into CDATA...
Thanks, it works. Useful blog, I've added it to my favorites xpages/Lotus notes learning platforms. Thanks again
Could it be that you forgot to deactivate the flag to ignore request parameters?
Sounds like the dialog is always associated with the current document instead of the parameters from the docid
| common-pile/stackexchange_filtered |
jQuery slideshow - last photo wont loop to first one with animation
I have created a very simple slideshow using jQuery. The slideshow is working just fine; however, when it is at the last photo and should loop to the first one, instead of animating to the first one it just jumps without any animation, while the rest of my photos work fine.
Hopefully someone can help me with this issue.
Thanks.
HTML code:
<div class="slideshow">
<ul class="slider">
<li class="slide"><img src ="imgs/model_05.jpg"></li>
<li class="slide"><img src ="imgs/model_06.jpg"></li>
<li class="slide"><img src ="imgs/model_07.jpg"></li>
<li class="slide"><img src ="imgs/model_08.jpg"></li>
</ul>
</div>
CSS code:
.slideshow {
clear: both;
width:400px;
height:400px;
border: 1px solid black;
overflow: hidden;
}
.slideshow .slider {
width:2000px;
height:400px;
}
.slideshow .slider .slide {
list-style: none;
}
.slideshow .slider .slide img{
width:400px;
height:400px;
float:left;
}
jQuery code:
var sliderWidth = 400;
var animationSpeed = 1000;
var sliderPaused = 3000;
var currentSlide = 1;
//cache DOM
var $slideShow = $(".slideshow")
var $slider = $slideShow.find(".slider")
var $slides = $slider.find(".slide");
setInterval(function () {
$slider.animate({
marginLeft: "-=" + sliderWidth
}, animationSpeed, function () {
currentSlide++;
if (currentSlide === $slides.length) {
currentSlide = 1;
$slider.css("margin-left", 0);
}
});
}, sliderPaused);
Because you are not animating when currentSlide is 1; you reset the margin with css() instead of animate().
Try this code -
if (currentSlide === $slides.length) {
currentSlide = 1;
$slider.animate({
marginLeft : 0
});
}
Thanks for the answer, Kalpesh. I have tried it but it's not working as I expected: it animates back to the first slide very fast. What I would like is for the slideshow to keep animating forward without visibly going back and starting again. When it is at the last slide, I'd like the animation to continue as it is but start over. An endless loop.
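For a truly seamless endless loop, a common approach (not from the answer above, so treat it as a suggestion) is to append a clone of the first slide to the end of the strip with `$slider.append($slides.first().clone());`, animate onto the clone, and then snap the margin back to 0 without animation while the clone looks identical to slide one. The per-tick bookkeeping behind that trick can be sketched as a pure function (the jQuery calls are indicated in comments; names are illustrative):

```javascript
// Seamless-loop bookkeeping: N real slides plus one clone of slide 0 at the end.
// advance() returns the margin to animate to and whether to snap back to 0
// in the animation's completion callback.
function advance(currentSlide, slideCount, sliderWidth) {
  var next = currentSlide + 1;              // $slider.animate({marginLeft: -next * sliderWidth}, ...)
  var snapToStart = (next === slideCount);  // we just animated onto the clone of slide 0
  return {
    slide: snapToStart ? 0 : next,          // logical slide now shown
    animateTo: -next * sliderWidth,         // animation target in px
    snapToStart: snapToStart                // if true: $slider.css("margin-left", 0) in the callback
  };
}
```

Note that the .slider element must be wide enough for the extra clone; the question's 2000px already accommodates five 400px slides.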
| common-pile/stackexchange_filtered |
virtual inheritance and signature overlapping
Let's say we have following code:
struct A{
virtual ~A(){}
void f(){
p = 42;
}
int p;
};
struct B : public virtual A{};
struct C : public virtual A{};
struct D : public B, public C, public A{}; //let's add non-virtual inheritance
int main(){
D* pA = new D();
pA->A::f(); //!
return 0;
}
Is there any way to set p to 42 in the most base class A?
The construction pA->A::f(); sets p to 42 in the non-virtually inherited A. Can we do that without a cast?
what's the question exactly ? inheriting virtually means you get one instance of A...
First off, there is no cast: you just qualify which version of A you want as there are more than one. Of course, the notation you have chosen actually doesn't work because it doesn't resolve the ambiguity in the first place. I guess you meant to use something like
pA->B::f();
If you don't want to put the burden of choosing which member function to call on the user of your class, you'll have to provide suitable forwarding functions for D e.g.:
void D::f() { this->B::f(); }
I've just realized that I can call pA->B::A::f(); :) and this will change value of the most base class A.
You can omit the A:: part: once you have chosen the start of a path which unambiguously leads to the function you don't need further qualification. Well, unless the function is virtual and you want to make sure you call the base version.
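To make the disambiguation concrete, here is a small compilable sketch (illustrative; it needs C++11, and most compilers warn that the direct non-virtual A base of D is inaccessible via implicit conversion, which is exactly the ambiguity being discussed):

```cpp
#include <cassert>

struct A {
    virtual ~A() {}
    void f() { p = 42; }
    int p = 0; // default member initializer: requires C++11
};
struct B : public virtual A {};
struct C : public virtual A {};
// D contains two A subobjects: one shared virtual base (via B and C)
// and one direct non-virtual base.
struct D : public B, public C, public A {};

// d.B::f() is well-formed: qualified lookup uses B as the naming class,
// and B has exactly one (virtual) A base, so the call is unambiguous and
// writes p in the shared virtual A subobject.
int sharedPAfterCall() {
    D d;
    d.B::f();
    assert(static_cast<B&>(d).p == static_cast<C&>(d).p); // B and C share one A
    return static_cast<B&>(d).p;
}
```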
| common-pile/stackexchange_filtered |
How to prevent ClearDB (MySQL) from dropping when heroku puts my app to sleep
I have deployed a Spring Boot app on Heroku and installed the ClearDB addon, which is just a MySQL database. I am using the free plan, so I don't mind that my app goes to sleep after an hour of inactivity, but I have one issue: the database is dropped after the app goes to sleep. I am wondering if that's just how Heroku works on the free service, or if it can be changed in the Heroku settings.
I am using auto ddl set to create in my application.properties; can that be the reason?
That behaviour is caused by your application.properties, it's not related to Heroku.
You should change it to update instead of create because with create the database schema will be dropped and created afterwards.
More info here
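Assuming Spring Data JPA with Hibernate, the property in question is spring.jpa.hibernate.ddl-auto; a sketch of the fix in application.properties:

```
# application.properties
# 'create' drops and recreates the schema on every application start, so every
# dyno restart wipes the data; 'update' only alters the schema to match the
# entities and leaves existing tables (and rows) in place.
spring.jpa.hibernate.ddl-auto=update
```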
| common-pile/stackexchange_filtered |
My App Is Only Optimized to Work with iPhone 7
I made an app and it is almost done, but I have a problem with it: it is only optimized for the iPhone 7. For example, if I start the app on an iPhone 7 Plus,
the labels and the images move a little bit.
Is there any way to make my app work with all kinds of iPhones?
If there is a video that explains this, that would be perfect.
Thank you
This is called "autolayout".
Google "iOS Adaptive Layout". Trust me, it'll help you.
Better yet, download the WWDC app, filter on WWDC 2016, and search for Making Apps Adaptive, parts 1 & 2. Spend two hours watching ASAP - it'll yield days of time saved down the road.
You have to use trait classes to set the constraints accordingly. Based on your constraints, your app will layout accordingly on different resolutions of iPhones.
| common-pile/stackexchange_filtered |
iOS Device communication
I am keen to build some apps that can communicate with other devices, the web, etc. I have played around with FTP and can only get so far. But what is the best way to do this? We don't have any servers with databases, but we do have a site that we currently upload and download files to.
Can anyone suggest a good/better way to get the device to send/receive files?
thanks
sam
If it's HTTP communication you're wanting to do, the simplest and most powerful tool is ASIHTTPRequest.
HTTP is the protocol your web browser uses to talk to web servers. If you have a site you're storing and downloading files at, it's almost certainly HTTP you're talking to it.
For iOS device to device communication one can use Bump API.
EDIT: I don't know of a generic framework for device <-> server communications, but having built applications that use web services of other providers like Yelp, Yahoo, Google Maps, I would say the way to go for this is to have REST based web services which exchange data in JSON format.
can that be used to get files to the device from a pc or server?
communicating with the server is more important than communicating with other devices - any suggestions?
| common-pile/stackexchange_filtered |
Important identity for covariant derivatives involving extrinsic curvature
How can I demonstrate this identity?
$$\nabla_a(K^{ab}-\lambda Kg^{ab})=\Gamma^b_{ac}(K^{ac}-\lambda K g^{ac})$$
where $\lambda$ is a parameter, $K_{ij}$ is the extrinsic curvature in the ADM formalism, and $g_{ij}$ is the 3-metric ($i,j=1,2,3$).
What is the definition of $K^{ab}$?
Moreover, do all the indices run from $1$ to $3$?
Extrinsic curvature. Yes, the indices can be 1, 2 or 3.
| common-pile/stackexchange_filtered |
Debugging memory problems in java - understanding freeMemory
I have an application that causes an OutOfMemoryError, so I try to debug it using Runtime.getRuntime().freeMemory(). Here is what I get:
freeMemory=48792216
## Reading real sentences file...map size=4709. freeMemory=57056656
## Reading full sentences file...map size=28360. freeMemory=42028760
freeMemory=42028760
## Reading suffix array files of main corpus ...array size=513762 freeMemory=90063112
## Reading reverse suffix array files... array size=513762. freeMemory=64449240
I try to understand the behaviour of freeMemory. It starts with 48 MB, then - after I read a large file - it jumps UP to 57 MB, then down again to 42 MB, then - after I read a very large file (513762 elements) it jumps UP to 90 MB, then down again to 64 MB.
What happens here? How can I make sense of these numbers?
Just a short background - I deploy my application in Google App Engine, and the OutOfMemory error is only on production, not on my own computer. So I try to understand where the memory goes.
Have you set any parameters for total heap to your app like Xmx and Xms
Java memory is a bit tricky. Your program runs inside the JVM, the JVM runs inside the OS, the OS uses your computer resources. When your program needs memory, the JVM will see if it has already requested to the OS some memory that is currently unused, if there isn't enough memory, the JVM will ask the OS and, if possible, obtain some memory.
From time to time, the JVM will look around for memory that is not used anymore, and will free it. Depending on a (huge) number of factors, the JVM can also give that memory back to the OS, so that other programs can use it.
This mean that, at any given moment, you have a certain quantity of memory the JVM has obtained from the OS, and a certain amount the JVM is currently using.
At any given point, the JVM may refuse to acquire more memory, because it has been instructed to do so, or the OS may deny the JVM to access to more memory, either because again instructed to do so, or simply because there is no more free RAM.
When you run your program on your computer, you are probably not giving any limit to the JVM, so you can use plenty of RAM. When running on google apps, there could be some limits imposed to the JVM by google operators, so that available memory may be less.
Runtime.freeMemory will tell you how much of the RAM the JVM has obtained from the OS is currently free.
When you allocate a big object, say one MB, the JVM may request more RAM from the OS, say 5 MB, resulting in freeMemory being 4 MB more than before, which is counterintuitive. Allocating another MB will probably shrink free memory as expected, but later the JVM could decide to release some memory to the OS, and freeMemory will shrink again for no apparent reason.
Using totalMemory and maxMemory in combination with freeMemory you can have a better insight of your current RAM limits and consumption.
To understand why you are consuming more RAM than you would expect, you should use a memory profiler. A simple but effective one is packaged with VisualVM, a tool usually already installed with the JDK. There you'll be able to see what is using RAM in your program and why that memory cannot be reclaimed by the JVM.
(Note, the memory system of the JVM is by far more complicated than this, but I hope that this simplification can help you understand more than a complete and complicated picture.)
It's not terribly clear or user friendly. If you look at the runtime api you see 3 different memory calls:
freeMemory Returns the amount of free memory in the Java Virtual
Machine. Calling the gc method may result in increasing the value
returned by freeMemory.
totalMemory Returns the total amount of memory in the Java virtual
machine. The value returned by this method may vary over time,
depending on the host environment.
maxMemory Returns the maximum amount of memory that the Java virtual
machine will attempt to use.
When you start up the JVM, you can set the initial heap size (-Xms) as well as the max heap size (-Xmx). E.g. java -Xms100m -Xmx200m starts with a heap of 100m, will grow the heap as more space is needed up to 200m, and will fail with OutOfMemoryError if it needs to grow beyond that. So there's a ceiling, which gives you maxMemory().
The memory currently available in the JVM is somewhere between your starting and max heap sizes. That's your totalMemory(). freeMemory() is how much is free out of that total.
To add to the confusion, see what they say about gc - "Calling the gc method may result in increasing the value returned by freeMemory." This implies that uncollected garbage is not included in free memory.
And of course calling gc is just a suggestion...the VM may very well decide it doesn't feel like gcing at the present time...
OK, based on your comments I wrote this function, which prints a summary of memory measures:
static String memory() {
    final int unit = 1000000; // bytes per MB
    Runtime rt = Runtime.getRuntime();
    long usedMemory = rt.totalMemory() - rt.freeMemory();
    long availableMemory = rt.maxMemory() - usedMemory;
    return "Memory: free=" + (rt.freeMemory() / unit)
         + " total=" + (rt.totalMemory() / unit)
         + " max=" + (rt.maxMemory() / unit)
         + " used=" + (usedMemory / unit)
         + " available=" + (availableMemory / unit);
}
It seems that the best measures for how much my program is using are usedMemory, and the complementary availableMemory. They increase/decrease monotonically when I use more memory:
Memory: free=61 total=62 max=922 used=0 available=921
Memory: free=46 total=62 max=922 used=15 available=906
Memory: free=46 total=62 max=922 used=15 available=876
Memory: free=44 total=118 max=922 used=73 available=877
Memory: free=97 total=189 max=922 used=92 available=825
Try running your app against something like http://download.oracle.com/javase/1.5.0/docs/guide/management/jconsole.html.
It comes with the JVM (or certainly used to) and is invaluable in terms of monitoring what is happening inside the JVM during the execution of an application.
It'll provide more of a useful insight as to what is going on with regards to your memory than your debug statements.
Also, if you are really keen, you can learn a bit more about tuning garbage collection via something like:
http://www.oracle.com/technetwork/java/gc-tuning-5-138395.html
This is pretty in depth, but it is good to get an insight into the various generations of memory in the JVM and how objects are retained in these generations. If you are seeing that objects are being retained in old gen and old gen is continually increasing, then this could be an indicator of a leak.
For debugging why data is being retained and not collected, then you can't go past profilers. Check out JProfiler or Yourkit.
Best of luck.
| common-pile/stackexchange_filtered |
Can we get radiobuttonList.Items.Count in .aspx page
Can we get the count of total RadioButtonList items from the .aspx page? I have to call a JavaScript function on the OnClientClick of a button, and I want to loop through the total number of RadioButtonList items. So can anyone tell me whether we can get it from the .aspx page? In my scenario I cannot use code-behind for this.
function ClearRBL() {
for (i = 0; i < RBLCOUNT; i++) {
document.getElementById('rblWorkerList_' + [i]).checked = false;
}
}
How can I get RBLCOUNT here from the .aspx page only? If that is not possible, then in JavaScript, please.
I just created a JavaScript function, as mentioned by Karthik Harve, and found the total number of rows generated dynamically, as below:
function ClearRBL() {
var rblLen = document.getElementById('rblWorkerList');
for (i = 0; i < rblLen.rows.length; i++) {
document.getElementById('rblWorkerList_' + [i]).checked = false;
}
}
It's working on both Mozilla and IE.
Thanks a lot to all who tried to help.
I don't know how the aspx side would work, but if you want to do it just in JavaScript you could do something like the following that doesn't need to know the total number of elements in advance:
function ClearRBL() {
var i = 0,
rbl;
while (null != (rbl = document.getElementById('rblWorkerList_' + i++)))
rbl.checked = false;
}
The above assumes that the element ids end in numbers beginning with 0 counting up by 1s; the while loop will keep going until document.getElementById() doesn't find a matching element (in which case it returns null). A less cryptic way of writing it is as follows:
function ClearRBL() {
var i = 0,
rbl = document.getElementById('rblWorkerList_' + i);
while (null != rbl) {
rbl.checked = false;
i++;
rbl = document.getElementById('rblWorkerList_' + i);
}
}
P.S. When the while loop finishes i will be equal to the number of radio buttons, which may be useful if you want to do something with that number afterwards.
You can give all the radio buttons a name and then get them like this:
var RBLCOUNT= document[groupName].length;
or
var RBLCOUNT= 0;
var inputs = document.getElementsByTagName('input');
for (var i = 0; i < inputs.length; ++i) {
if(inputs[i].type =="radio"){
RBLCOUNT++;
}
}
Try this. It is not exactly what you want, but I hope it will help you:
function GetRBLSelectionID(RadioButtonListID) {
var RB1 = document.getElementById(RadioButtonListID);
var radio = RB1.getElementsByTagName("input");
var retVal = "";
for (var i = 0; i < radio.length; i++) {
if (radio[i].checked) {
retVal = radio[i].id;
break;
}
}
return retVal;
}
Parsing a JSON tree with JavaScript
I want to parse this JSON tree on the basis of the 'Name' attribute and return the matched node/nodes as an array of objects.
Kindly paste the JSON tree in here - https://jsonformatter.curiousconcept.com/ - to get a better visualization of it.
The scenario would be such that, if the user enters 'Rule', all nodes that contain 'Rule*' corresponding to the 'Name' attribute would be returned.
To elaborate, the match would be such that if (object.Name.includes('Rule')) it would be valid match.
Since the JSON tree is huge and has children embedded within children, I was using Defiant.js and the function was built like this -
$scope.doASearch = function(elm) {
var as = '//*[contains(Name, ' + '"' + elm + '"' + ')]';
$scope.found = JSON.search($scope.treedata, as);
$scope.treedata = _.uniq($scope.found, 'MetaCatID');
};
Since DefiantJS does not work on Microsoft's Edge browser, switching to a compatibility mode like IE 10 makes DefiantJS work but breaks a few other things. So I had to rule out DefiantJS.
Is there another JSON parsing library available to help me out, or a JavaScript or jQuery solution which can do this for me?
how about you post a "minimal" version of the JSON tree in the question itself, rather than a link that will go stale once you get your answer, no doubt
@JaromandaX - Kindly paste the JSON tree on https://jsonformatter.curiousconcept.com/ to get a better view of it.
what is the problem with JSON.parse()?
@smnbbrv - JSON.parse() works but I find it difficult to crawl across each node and child node of the tree.
What should the result look like?
What is difficult in there? Just a recursive function with a couple of if statements.
@NinaScholz - An array of objects holding each matched node as in individual object.
@smnbbrv - How about just framing your suggestion in code?
If there were clear requirements...
@smnbbrv - If the parent node matches the condition, I just need the parent node; if any of the nodes under 'subCategories' match the condition, I would need those too. The children of 'subCategories' can also have 'subCategories', and I need those too if they match the condition. This can go up to 'n'.
Kindly paste the JSON - that's not the point ... the point is the question should be understandable in two weeks, two months, two years ... is that link going to be valid then?
That link is no longer valid...
You could use an iterative and recursive approach by checking the types of the items and iterating accordingly.
This proposal uses a callback for checking the object and return the actual object if condition is met.
function search(array, fn) {
var result = [];
array.forEach(function iter(o) {
if (!o || typeof o !== 'object') {
return;
}
if (fn(o)) {
result.push(o);
return;
}
Object.keys(o).forEach(function (k) {
iter(o[k]);
});
});
return result;
}
var data = [{ tuple: { old: { MetaCategory: { MetaCatID: 517, ParentMetaCatRef: 0, Name: "D Application" } } }, MetaCatID: 517, ParentMetaCatRef: 0, Name: "D Application", subCategories: [{ tuple: { old: { MetaCategory: { MetaCatID: 518, ParentMetaCatRef: 517, Name: "Compass" } } }, MetaCatID: 518, ParentMetaCatRef: 517, Name: "Compass" }, { tuple: { old: { MetaCategory: { MetaCatID: 519, ParentMetaCatRef: 517, Name: "Orbe" } } }, MetaCatID: 519, ParentMetaCatRef: 517, Name: "Orbe" }, { tuple: { old: { MetaCategory: { MetaCatID: 520, ParentMetaCatRef: 517, Name: "PSI" } } }, MetaCatID: 520, ParentMetaCatRef: 517, Name: "PSI" }, { tuple: { old: { MetaCategory: { MetaCatID: 521, ParentMetaCatRef: 517, Name: "SAP" } } }, MetaCatID: 521, ParentMetaCatRef: 517, Name: "SAP" }] }, { tuple: { old: { MetaCategory: { MetaCatID: 541, ParentMetaCatRef: 0, Name: "D Versions" } } }, MetaCatID: 541, ParentMetaCatRef: 0, Name: "D Versions", subCategories: [{ tuple: { old: { MetaCategory: { MetaCatID: 542, ParentMetaCatRef: 541, Name: "Baseline 2016-12-31" } } }, MetaCatID: 542, ParentMetaCatRef: 541, Name: "Baseline 2016-12-31" }, { tuple: { old: { MetaCategory: { MetaCatID: 543, ParentMetaCatRef: 541, Name: "CLS step 3 2017-04-15" } } }, MetaCatID: 543, ParentMetaCatRef: 541, Name: "CLS step 3 2017-04-15" }] }, { tuple: { old: { MetaCategory: { MetaCatID: 365, ParentMetaCatRef: 0, Name: "Market" } } }, MetaCatID: 365, ParentMetaCatRef: 0, Name: "Market", subCategories: [{ tuple: { old: { MetaCategory: { MetaCatID: 366, ParentMetaCatRef: 365, Name: "Sector" } } }, MetaCatID: 366, ParentMetaCatRef: 365, Name: "Sector", subCategories: [{ tuple: { old: { MetaCategory: { MetaCatID: 463, ParentMetaCatRef: 366, Name: "term" } } }, MetaCatID: 463, ParentMetaCatRef: 366, Name: "term" }, { tuple: { old: { MetaCategory: { MetaCatID: 464, ParentMetaCatRef: 366, Name: "category" } } }, MetaCatID: 464, ParentMetaCatRef: 366, Name: "category" }, { tuple: { old: { MetaCategory: { MetaCatID: 367, 
ParentMetaCatRef: 366, Name: "Subsector" } } }, MetaCatID: 367, ParentMetaCatRef: 366, Name: "Subsector" }] }] }];
console.log(search(data, function (o) { return o.MetaCatID > 500; }));
console.log(search(data, function (o) { return o.Name && o.Name.includes('P'); }));
It is not working when I use String.contains or String.includes.
Have a look here - https://jsbin.com/cuwocupelo/edit?html,js,console
Maybe your browser does not support the methods. Where should I look at jsbin?
You need a check for the property first. I changed the function a bit.
The problem was due to the property being undefined in some cases, so I fixed it like this: if(o.Name !== undefined) return (o.Name.indexOf('PS') >= 0);. Since Internet Explorer and Edge do not support the methods String.contains or String.includes, I had to use indexOf.
Interpreters: Handling includes/imports
I've built an interpreter in C++ and everything works fine so far, but now I'm getting stuck with the design of the import/include/however you want to call it function.
I thought about the following:
Handling includes in the tokenizing process: When there is an include found in the code, the tokenizing function is recursively called with the filename specified. The tokenized code of the included file is then added to the prior position of the include.
Disadvantages: No conditional includes(!)
Handling includes during the interpreting process: I don't know how. All I know is that PHP must do it this way as conditional includes are possible.
Now my questions:
What should I do about includes?
How do modern interpreters (Python/Ruby) handle this? Do they allow conditional includes?
"Disadvantages: No conditional includes(!)" I don't know that this condition necessarily holds. It's going to depend on the nature of the abstract representation you use for the parsed code.
This problem is easy to solve if you have a clean design and you know what you're doing. Otherwise it can be very hard. I have written at least 6 interpreters that all have this feature, and it's fairly straightforward.
Your interpreter needs to maintain an environment that knows about all the global variables, functions, types and so on that have been defined. You might feel more comfortable calling this the "symbol table".
You need to define an internal function that reads a file and updates the environment. Depending on your language design, you might or might not do some evaluation the moment you read things in. My interpreters are very dynamic and evaluate each definition as soon as it is read in.
Your life will be infinitely easier if you structure your interpreter in layers:
Tokenizer (breaks input into tokens)
Parser (reads one token at a time, converts to abstract-syntax tree)
Evaluator (reads the abstract syntax and updates the environment)
The abstract-syntax tree is really the key. If you have this, when you encounter the import/include construct in the input, you just make a recursive call and get more abstract syntax back. You can do this in the parser or the evaluator. If you want conditional import, you have to do it in the evaluator, since only the evaluator can compute a condition.
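A minimal sketch in Python of the evaluator-level approach described above (all node shapes, file contents and function names here are invented for illustration, not taken from the linked interpreters): the include construct is just another AST node, so the evaluator can recurse into parsing only when a condition actually evaluates to true, updating the same environment.

```python
# Minimal sketch of conditional include handled in the evaluator layer.
# The AST node shapes ("define", "if", "const", "include") are invented
# for illustration; a real interpreter has its own representation.

def parse(source):
    """Stand-in for tokenizer+parser: here 'source' is already a
    list of AST nodes, so parsing is the identity."""
    return source

# Fake file system so the sketch is self-contained.
FILES = {
    "lib.src": [("define", "helper", 42)],
}

def evaluate(node, env):
    kind = node[0]
    if kind == "define":                      # update the environment
        _, name, value = node
        env[name] = value
    elif kind == "const":
        return node[1]
    elif kind == "if":                        # only the evaluator can
        _, cond, then_nodes = node            # compute the condition,
        if evaluate(cond, env):               # so conditional include
            for n in then_nodes:              # must live here
                evaluate(n, env)
    elif kind == "include":                   # the interesting case:
        _, filename = node                    # recursively parse and
        for n in parse(FILES[filename]):      # evaluate another file,
            evaluate(n, env)                  # updating the same env

def run(program):
    env = {}
    for node in program:
        evaluate(node, env)
    return env

# A conditional include: only loaded because the condition is true.
program = [
    ("if", ("const", True), [("include", "lib.src")]),
]
```

Because the include is just another node the evaluator can recurse on, it composes with conditions for free; doing the same purely in the tokenizer would force the include before any condition could be computed.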
Source code for my interpreters is on the web. Two of them are written in C; the others are written in Standard ML.
Are you going to put the lecture notes back online? And I read somewhere else that the accompanying book was supposed to publish several years ago. How's that coming along?
@Wei: I changed jobs, which has set the book back a few years. At present I'm rewriting some of the software and am working on a new version of chapter 8 as well as extra material for chapter 7. I'll be teaching PL again next spring and hope to send a draft to publishers around then.
thanks! Is it possible to read your draft, or is it only available to a limited number of course instructors?
@Wei anybody interested should send me email<EMAIL_ADDRESS>
Spark sql query returns StringType instead of ArrayType?
In trying to apply my UDF during my spark.sql query, instead of returning my cleaned words in array form the query simply returns one long string that looks like my array. This gives me an error when attempting to apply CountVectorizer. The error it raises is 'requirement failed: Column cleanedWords must be of type equal to one of the following types: [ArrayType(StringType,true), ArrayType(StringType,false)] but was actually of type StringType.'
This is my code:
from string import punctuation
from hebrew import stop_words
hebrew_stopwords = stop_words()
def removepuncandstopwords(listofwords):
newlistofwords = []
for word in listofwords:
if word not in hebrew_stopwords:
for punc in punctuation:
word = word.strip(punc)
newlistofwords.append(word)
return newlistofwords
from pyspark.ml.feature import CountVectorizer, IDF, Tokenizer, Normalizer
from pyspark.sql.types import ArrayType, StringType
sqlctx.udf.register("removepuncandstopwords", removepuncandstopwords, ArrayType(StringType()))
sentenceData = spark.createDataFrame([
(0, "Hello my friend; i am sam"),
(1, "Hello, my name is sam")
], ["label", "sentence"])
tokenizer = Tokenizer(inputCol="sentence", outputCol="words")
wordsData = tokenizer.transform(sentenceData)
wordsData.registerTempTable("wordsData")
wordsDataCleaned = spark.sql("select label, sentence, words, removepuncandstopwords(words) as cleanedWords from wordsData")
wordsDataCleaned[['cleanedWords']].rdd.take(2)[0]
Out[163]:
Row(cleanedWords='[hello, my, friend, i, am, sam]')
How can I resolve this issue?
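Independently of the Spark typing question, it can help to verify the cleaning logic as plain Python first, so that a StringType result can be blamed on how the UDF is registered or invoked rather than on the function body. This is only a sketch: the stopword set below is a made-up stand-in for hebrew.stop_words, and it also uses the fact that str.strip() accepts a whole string of characters, which collapses the inner loop over punctuation:

```python
from string import punctuation

# Hypothetical stand-in for the hebrew stopword list in the question.
hebrew_stopwords = {"my"}

def removepuncandstopwords(listofwords):
    newlistofwords = []
    for word in listofwords:
        if word not in hebrew_stopwords:
            # strip() with a multi-character argument removes any of
            # those characters from both ends in a single call
            newlistofwords.append(word.strip(punctuation))
    return newlistofwords
```

With the function verified in isolation returning a real Python list, a StringType column in the query result points at the registration/invocation path rather than the function itself.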
Name for a word whose sound is contrary to its meaning
As onomatopoeia means words that sound like what they mean, is there a word which means words that sound contrary to what they mean? Pulchritude is an example of such a word.
I suppose they'd be a subset of heterological words, that is words that are the opposite to what they describe. I don't know of a term specifically relating to sound though. There's also a greater degree of subjective opinion in saying "pulchritude is ugly" than in saying "bang sounds like a bang".
While some may find it hard to believe that "pulchritude" could mean anything beautiful, the sound of a word is really in the ear of the beholder. I read a study a while back that polled people on which English word sounded most beautiful and "diarrhea" came in first. If you didn't know what it meant, I guess you could say that the word "sounds" pleasant - but I don't know what you could possibly call that phenomenon.
@Carlo_R., I believe I read it in a newspaper many years ago. I cannot come up with that precise study but there are so many references to "a study" where diarrhea was voted the most beautiful word, I became suspicious and Snoped it. It was not debunked in Snopes so alas, the source is unknown to me.
"Money is the most beautiful word" almost certainly does not refer to its beauty as a sound in isolation from considering its meaning. I'd hazard (entirely an opinion) that words such as Nefertiti or Hiawatha * would be gentler on the ear and so "more beautiful" and that words such as Phalanx, Ajax, VAX, ... which use the classic "power" letters (A, J, X ?) would be more striking. * Both are non-English in primary use so perhaps heard differently to "usual" words. ... ->
eg note how "Celtic" is usually pronounced, (ˈkɛltic) & how it somehow "resonates" and why it is different. Except when pronounced as it occasionally is (ˈsɛltic), which brings it "into line" with standard Anglo Saxon pronunciation, although it is not an Anglo Saxon word.
I came here to ask the same question about the same word! It would be interesting to know how often an English speaker goes looking for this word (the word that describes pulchritude) when they discover the word pulchritude!
They are called Phantonyms
You're shifting the definition here. "Onomatopoeia" means creating words that sound like the thing or the action that they describe. To the best of my knowledge, this is only used in the literal sense, e.g. we talk about the "buzzing" of a bee to describe a sound that at least resembles the sound "buzz". But in the example you give, you're talking about a subjective evaluation of the idea that a sound brings to your mind. That is, I have never heard anyone say that, for example, "philosophy" is a case of onomatopoeia because the word "philosophy" sounds long and impressive and has a tone that brings deep thought to his mind. That's just not what onomatopoeia means.
The vast majority of words in English are not onomatopoeic. The word "zebra" sounds nothing like the sound made by a zebra; the phrase "internal combustion engine" sounds nothing like the sound made by such an engine; the word "surprise" sounds nothing like the noises made by people who are surprised; etc etc. I guess you could call such words "non-onomatopoeic", I don't know of any specific word. I'm not exactly sure what an "opposite" sound would be. Perhaps you could say that "boss" is a very soft word for creatures who can often be shrill, or that "politician" sounds rather staccato and active for creatures whose talk is usually pretty dull and monotonous. :-)
That's correct. Onomatopoiea is restricted to words that describe sounds, like ping, buzz, ululate or meow. There is, however, a term for the relation between a word's sound and its meaning, which is Phonosemantics. Pulchritude is a formal word, based on a Latin word, and it doesn't have much in the way of phonosemantics about it; that's mostly restricted to short words like the KL-words.
Pulchritude most certainly sounds the opposite of how it means. Try saying this to your partner "Come here, my most pulchritudinous creature" and see how they react. It may be a singleton ... I can't think of another word that sounds the opposite of what it means, but that didn't stop me wondering what the word for this kind of word is!
@GreenAsJade As I said above, "onomatopoeia" refers to words for a sound that sound like that sound, like "buzz" for the sound of a bee, or to words for animals or objects where the word sounds like a sound made by that thing, like "cuckoo" as a name for a type of bird. Only a very tiny number of such words exist in English. This is not at all the same as saying that a word "sounds pretty" or "sounds ugly". We could debate how closely "moo" really resembles the sound a cow makes, but it is clearly at least generally in the right direction. But if I say that the word "aardvark" ...
... sounds beautiful to me, on what basis would you say I'm right or wrong? That's a totally subjective evaluation. Or to put it another way, pulchritude -- not the word, but the thing itself -- does not have a sound. I can make recordings of cows mooing and of people saying the word "moo" and compare them. I cannot make recordings of pulchritude to compare to people saying the word "pulchritude". The idea of your subjective opinion that the word pulchritude does not sound pretty has nothing to do with onomatopoeia. That's not to say that you can't discuss the idea, but whatever that idea ...
... is, it is not onomatopoeia. Like if I someone said that the fact that birds can fly is an example of rocketry, I would say no, it's not. I am not denying that birds fly, I am saying that their flight is not rocketry.
Where words that sound/seem the opposite of what they mean are concerned "nonplussed" has always been my favorite. One would think it means "not unduly disturbed" when in fact it means "surprised and confused".
About your internal combustion engine comment: some buses come close, especially if you say the words in a whisper and accent the consonants and stops. :D
Although your examples were not the right ones, according to your description, you are probably talking about one of the following:
Phantonym: An informal term for a word that looks as if it means one thing but actually means quite another. For example, unisex.
or
Antagonym: word that can mean the opposite of itself. Antagonym are also known as contranyms or autoantonyms. For example, "To overlook" can mean "to inspect" or "to fail to notice."
Phantonyms
According to Wikipedia:
"A phantonym is a word that sounds to mean one thing, but in fact means another."
Why tkinter ask open file has a low resolution?
In this picture you can see that the left dialog has a lower resolution than the right one.
Why folders are different?
I'm working with Python 3 and here's my code
from tkinter.filedialog import askopenfilename
askopenfilename(
initialdir = "/",
title = "Select a File",
filetypes = (("Text files", "*.txt*"), ("all files", "*.*"))
)
I finally found the answer. You just have to import the pyautogui module into your program.
import pyautogui
The only way is to use the call() function to change the resolution scaling:
from tkinter.filedialog import askopenfilename
import tkinter as tk
root = tk.Tk()
root.geometry("200x150")
root.tk.call('tk', 'scaling', 2.0)
askopenfilename(initialdir = "/",title = "Select a File",filetypes = (("Text files","*.txt*"),("all files","*.*")))
Use:
from pyscreeze import pixel
Or on Windows:
import sys
if sys.platform == 'win32':
import ctypes
try:
ctypes.windll.user32.SetProcessDPIAware()
except AttributeError:
pass
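For reference, the Windows-only snippet above can be wrapped in a small helper so the ordering is explicit: the DPI-awareness call has to happen before the Tk root window (and hence any dialog) is created. This is a sketch; the helper name and its boolean return value are my own convention:

```python
import sys

def enable_windows_dpi_awareness():
    """On Windows, ask the OS not to bitmap-scale this process so Tk
    dialogs render at native resolution; a no-op elsewhere or when
    the API is unavailable. Returns True if the call was made."""
    if sys.platform != 'win32':
        return False
    import ctypes
    try:
        ctypes.windll.user32.SetProcessDPIAware()
        return True
    except AttributeError:
        return False

# Intended usage (call before creating any Tk windows):
# enable_windows_dpi_awareness()
# root = tk.Tk()
# askopenfilename(...)
```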
Eigenstates harmonic oscillator with mass matrix
Consider the 2D harmonic oscillator
$H = \langle \nabla, M\nabla \rangle+ \vert x \vert^2$ where $x \in \mathbb R^2$ and $M$ is a symmetric mass matrix with strictly positive eigenvalues.
Is it known if in this case the eigenvalues/eigenfunctions of the Hamiltonian are still explicit?
PS: I think I can see that the ground state is $e^{-\langle x, M^{-1/2} x \rangle}$ but I do not see any systematic approach to analyze excited states.
I assume you actually meant the Hamiltonian is $H=\mathbf{p}^T M \mathbf{p}+\mathbf{x}^T\mathbf{x}$, where $\mathbf{p}=-i\nabla$. I write both $\mathbf{p}$ and $\mathbf{x}$ as column vectors. Diagonalize $M$: $ M=O^T D O $, where $D$ is a diagonal matrix with strictly positive eigenvalues, $O$ is an orthogonal matrix (so $OO^T=1$). Now define new variables
$$ \tilde{\mathbf{p}}=O\mathbf{p}, \tilde{\mathbf{x}}=O\mathbf{x} $$
The new variables still satisfy canonical commutation relations. Then the Hamiltonian becomes $H=\tilde{\mathbf{p}}^TD\tilde{\mathbf{p}}+\tilde{\mathbf{x}}^T\tilde{\mathbf{x}}$, which is just a bunch of decoupled harmonic oscillators. The energy eigenvalues and eigenfunctions can be obtained from here.
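To make "obtained from here" explicit (a sketch, with $\hbar=1$ and $d_1,\dots,d_N>0$ the eigenvalues of $M$): each decoupled mode $h_i = d_i\tilde{p}_i^2 + \tilde{x}_i^2$ matches the standard oscillator $\frac{1}{2m}p^2 + \frac{m\omega^2}{2}x^2$ with $m_i = \frac{1}{2d_i}$ and $\omega_i = 2\sqrt{d_i}$, so the spectrum is
$$ E_{n_1,\dots,n_N} = \sum_i (2n_i+1)\sqrt{d_i}, \qquad n_i = 0,1,2,\dots $$
with eigenfunctions given by products of Hermite functions in the rotated coordinates,
$$ \psi_{\mathbf{n}}(\mathbf{x}) \propto \prod_i H_{n_i}\!\left(d_i^{-1/4}\tilde{x}_i\right) e^{-\tilde{x}_i^2/(2\sqrt{d_i})} = \Big(\prod_i H_{n_i}(d_i^{-1/4}\tilde{x}_i)\Big)\, e^{-\frac{1}{2}\mathbf{x}^T M^{-1/2}\mathbf{x}}. $$
In particular the ground state is $e^{-\frac{1}{2}\langle x, M^{-1/2}x\rangle}$, which agrees with the guess in the question up to the factor $\frac{1}{2}$ in the exponent (and normalization).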
Rails, PostgreSQL and hstore
If I add this hash {"1"=>"1", "3"=>"3", "2"=>"2"} into an hstore column in PostgreSQL (9.4) (through Rails 5), the hash is reordered in the hstore column and looks like this: {"1"=>"1", "2"=>"2", "3"=>"3"}. Is it possible to prevent this?
Hash order is guaranteed in Ruby. I assume that because it's leaving Ruby and going to the database, you are not getting the guarantee anymore. I don't think postgres preserves the order source:
The order of the [hstore] pairs is not significant (and may not be reproduced on output)
If you really want the order to be preserved I think you're out of luck. If you want the order to be the same then you should sort the Ruby hash in a known way and then sort the SQL in the same way. I believe there's a way to sort an hstore column by keys: Order by a value of an arbitrary attribute in hstore
How do I align the view to the nearest global axis?
Is there a button or a shortcut that will align the view to the nearest global axis?
For example:
You use the shortcut that will align the view so that you're looking down the X axis.
Then you rotate the view slightly (say 10 degrees, for example).
Is there a shortcut that will snap the view back to the X axis because it's the closest one?
If I had manually rotated the view more than 45 degrees away from the X axis and then used the shortcut that I'm seeking, I would like it to snap to the closest of the other two global axes.
In Unity there's a control in the top right-hand corner of the view that lets you quickly align the view to a specific axis. The control rotates with the view, meaning that if you keep clicking on the handle on the right-hand side of the control, the view will keep rotating around the pivot in the same direction.
If I have to install a plugin to enable this, I'm willing to do so.
If you use Shift+Numpad 1, 3 or 7 to align the view to the selected face and then press Numpad 4 or 6, it will rotate around the current viewport camera location, so the viewport will continue being aligned with the axis specified above
Thank you for your suggestion, but it would not align the view to the nearest global axis, unless I've misunderstood you.
Aligning the view to a selected face when you're trying to align the view to the global axis would require that the face is already aligned to the global axis.
Besides, before pressing Shift+Numpad 1, 3 or 7, I would have to first check which axis I'm looking down and then decide whether to press 1, 3 or 7.
I don't believe there is a built-in shortcut that does what you want but if you hold the Alt key down while tumbling, the view will snap to orthographic views.
Related https://blender.stackexchange.com/questions/192527/how-to-snap-object-local-rotation-to-its-nearest-global-axis-with-python aligns a mesh to nearest global axis, could be edited to align view instead.
Use snap view: hold the middle mouse button to start rotating the view, then press Alt. If the view is close enough to snap, it will. If not, it snaps to another 45-degree view.
If you have a number pad it is quite easy; otherwise it is under the View menu
http://www.blender.hu/tutor/kdoc/Blender_Cheat_Sheet.pdf
Thank you for your answer, but I don't think it answers the question.
Which button would you press to align the view to the nearest axis?
Did you even check the link? I don't think there is a shortcut to align the view to the nearest axis, probably because it would need extra logic but the number pad is used for the view.
If you really need to set the view to weird angles, you can use the camera for it.
Yes, I did check the link. This isn't the first time I've looked through that cheat sheet.
I'm confused. Why would you say "If you have a number pad it is quite easy" if you don't think there is a shortcut to align the view to the nearest axis? Were you talking about rotating the view around the object when you wrote that?
Converting an array into a hash with format { value => position_in_array }
I have an array of numbers in string format, and I want to convert them into a hash where the keys are the numbers and the values are the positions of those numbers in the array. So for example:
["1", "5", "3"]
should result in:
{ 1 => 0, 5 => 1, 3 => 2 }
I have the following code, which works:
my_hash = {}
my_array.each do |number_string|
my_hash[number_string.to_i] = my_array.index(number_string)
end
which iterates through the array and pushes each value and its position into the hash.
Is there a shorter and more elegant way to do it? Maybe something similar to Ruby's to_a function, but more like to_h(options).
Hash[["1", "5", "3"].map.with_index { |e, i| [e.to_i, i] }]
# => {1=>0, 5=>1, 3=>2}
or
["1", "5", "3"].each_with_object({}).with_index { |(e, h), i| h[e.to_i] = i }
# => {1=>0, 5=>1, 3=>2}
arr = ["1", "5", "3"]
ha = Hash[arr.map.with_index {|a, i| [a.to_i, i]}]
puts "ha: #{ha.inspect}"
irb(main):038:0> arr=["1", "5", "3"]
=> ["1", "5", "3"]
irb(main):039:0> Hash[arr.map.with_index {|a, i| [a, i]}]
=> {"1"=>0, "5"=>1, "3"=>2}
irb(main):040:0> Hash[arr.map.with_index {|a, i| [a.to_i, i]}]
=> {1=>0, 5=>1, 3=>2}
Execute JQuery after ASP.Net Microsoft AJAX
I'm trying to execute JQuery after an ASP.Net Microsoft AJAX post back.
When a user clicks on a link, Microsoft AJAX is used to update some fields in the DB and if success a label appears informing the user the change has been made.
Unfortunately the label is not very obvious and I would like to use to fade the background from red to white.
The problem is that when visible=false is set on the label, the resulting HTML does not include the label (span). Does anyone know how to execute jQuery after an ASP.NET Microsoft AJAX post back, or another solution to achieve the same effect?
@Alison yes I'm using UpdatePanels; I also have more than one Microsoft AJAX link/method on each of the panels
This is how you can execute a random javascript after an ASP.NET Ajax postback
function executeThis(){
//code here to fade in out the label that comes
var prm = Sys.WebForms.PageRequestManager.getInstance();
prm.remove_pageLoaded(executeThis); //job done, remove this so that it is not fired again.
}
$("link").click(function(){
var prm = Sys.WebForms.PageRequestManager.getInstance();
prm.add_pageLoaded(executeThis); //this will register executeThis function with MS Ajax libraries which will fire it after next posback
//the post back happens here.
});
@Nikhil how would I know which method had been fired?
The method named "executeThis" will be fired. Basically the idea is to register a callback function (called whatever you like) with the MS Ajax scripts and have the scripts fire that function on the next postback once the page has been loaded.
@Nikhil Sorry, I re-read my comment and it wasn't very clear. What I meant was: how do I know which MS Ajax method was called? I only want the jQuery to execute if a particular link is clicked on. Or should I wrap the add_pageLoaded in $('linkName').click?
Actually the other way round. I am updating the answer to (hopefully!) solve this problem
@Nikhil that works, except if the link is clicked on a second time the function isn't called.
Is this link also coming back as a part of AJAX post back? if yes then use $("link").live("click",function.....)
Sort of the same as what @Nikhil has said; something very similar and what I always use:
Sys.WebForms.PageRequestManager.getInstance().add_endRequest(functionName)
Where functionName is the name of the function containing whatever you want called. This ensures that the function is called whenever the page/panel is reloaded/refreshed.
You could try this in the postback
ScriptManager.RegisterStartupScript(Me.Form, Me.GetType(), "FunctionName", "FunctionName();", True)
This will call the JavaScript function FunctionName() after the postback is complete
Here is a bit of pseudocode to get you going:
If no success, keep the asp:Label visible property = true but give it a CSS class with display set to none
On page load, execute the following:
ScriptManager.RegisterStartupScript(Me.Form, Me.GetType(), "ShowError", "ShowError();", True);
In your Javascript, add the following:
function ShowError() {
$('#myLabelID').show().fadeOut();
}
jtwitter library: connecting to alternative service fails
EDIT
omg I clearly should take a break... logcat gives these errors, and it seems the catch-block is also executed, but the messages are actually sent, as I have verified by visiting the page: http://yamba.marakana.com/
Actually a thing that took me 2 hours to become aware of.. can someone please tell me why the app still wants to connect to twitter too?
I am following this tutorial from MarakanaTechTV: https://www.youtube.com/watch?v=-P1eiRy-klk&feature=relmfu
It's about building a twitter-like client, but for simplicity (avoiding OAuth) it's using its own service located here: http://yamba.marakana.com/ (username is student and password is password).
here is my code:
public void onClick(View v) {
final String statusText = editStatus.getText().toString();
//time-critical tasks such as networking or DB access must not run on the main thread
//otherwise the app crashes
new Thread() {
public void run() {
try {
Twitter twitter = new Twitter("student", "password");
twitter.setAPIRootUrl("http://yamba.marakana.com/api");
twitter.setStatus(statusText);
} catch (Exception e) {
Log.e("error", "DIED", e);
//e.printStackTrace(e);
}
}
}.start();
Log.d("StatusActivity", "onClicked! with text: " + statusText);
}
}
and here is what logcat gives me:
04-08 20:48:14.329: D/gralloc_goldfish(1935): Emulator without GPU emulation detected.
04-08 20:48:17.019: D/StatusActivity(1935): onClicked! with text: ggfdg
04-08 20:48:23.308: D/StatusActivity(1935): onClicked! with text: ggfdg
04-08 20:48:24.438: E/error(1935): DIED
04-08 20:48:24.438: E/error(1935): winterwell.jtwitter.TwitterException$E401: Unauthorized http://twitter.com/account/rate_limit_status.json (student)
04-08 20:48:24.438: E/error(1935): at winterwell.jtwitter.URLConnectionHttpClient.processError(URLConnectionHttpClient.java:125)
04-08 20:48:24.438: E/error(1935): at winterwell.jtwitter.URLConnectionHttpClient.getPage(URLConnectionHttpClient.java:91)
04-08 20:48:24.438: E/error(1935): at winterwell.jtwitter.URLConnectionHttpClient.processError(URLConnectionHttpClient.java:143)
04-08 20:48:24.438: E/error(1935): at winterwell.jtwitter.URLConnectionHttpClient.post(URLConnectionHttpClient.java:219)
04-08 20:48:24.438: E/error(1935): at winterwell.jtwitter.Twitter.post(Twitter.java:1944)
04-08 20:48:24.438: E/error(1935): at winterwell.jtwitter.Twitter.updateStatus(Twitter.java:2555)
04-08 20:48:24.438: E/error(1935): at winterwell.jtwitter.Twitter.updateStatus(Twitter.java:2502)
04-08 20:48:24.438: E/error(1935): at winterwell.jtwitter.Twitter.setStatus(Twitter.java:2274)
04-08 20:48:24.438: E/error(1935): at com.example.yamba.StatusActivity$1.run(StatusActivity.java:34)
It seems that the app tries to connect to twitter despite the fact that it should connect to the marakana-service because of this line:
twitter.setAPIRootUrl("http://yamba.marakana.com/api");
I also faced the same issue. My mistake was that I had forgotten to remove the Toast code from the catch block (a Toast cannot be shown from a background thread, so it throws inside the catch and masks what actually happened).
| common-pile/stackexchange_filtered |
Google Apps Script creates sheets version of excel file
I am working to automate a process. Currently, I am uploading Excel (.xlsx) files to a specific folder in Drive. I then manually convert them into a Google Sheet file, and move it into a separate folder. I would like to automate this process. I know I would have to write the script to iterate through the 2 folders to compare and see which files have yet to be converted, and then convert those it does not find in both locations. However, I am struggling with the best approach to make this happen.
The code below may or may not be on the right track; I am rather new at this and really just tried piecing it together. Anyone's insight would be greatly appreciated.
function Excel2Sheets()
{
//Logs excel folder and string of files within
var excelfolder = DriveApp.getFolderById('1JbamZxNhAyZT3OifrIstZKyFF_d257mq');
var excelfiles = excelfolder.getFiles();
// Logs sheets folder and string of files within
var sheetfolder = DriveApp.getFolderById('1y10IwMobCdpQlYwWdveHLzxEz3Xml0Qt');
var ID = sheetfolder.getId();
var sheetfiles = sheetfolder.getFiles();
var MType = MimeType.GOOGLE_SHEETS;
while (excelfiles.hasNext()) {
var excelfile = excelfiles.next();
var excelname = excelfile.getName();
while (sheetfiles.hasNext()) {
var sheetfile = sheetfiles.next();
var sheetname = sheetfile.getName();
if(sheetname == excelname) {
break;
}
if(sheetfiles.hasNext(0)) {
var blob = excelfile.getBlob();
sheetfolder.createFile(excelname, blob, MType);
break;
}
}
}
}
I have also played around with this code. Thanks
function fileChecker()
{
try{
//Establishes Excel Source Folder
var excelfolder = DriveApp.getFolderById('1JbamZxNhAyZT3OifrIstZKyFF_d257mq');
//Establishes Sheet Target Folder
var sheetfolder = DriveApp.getFolderById('1y10IwMobCdpQlYwWdveHLzxEz3Xml0Qt');
//Establishes Return File Type
var MType = MimeType.GOOGLE_SHEETS;
//Gets all files in excel folder
var excelfiles = excelfolder.getFiles();
//loop through excel files
while(excelfiles.hasNext()){
//Establishes specific excel file
var excelfile = excelfiles.next();
//Checks for file with same name in sheets folder
var sheetfiles = sheetfolder.getFilesByName(excelfile.getName());
//Logical Test for file match
if(sheetfiles.hasNext()){
//Gets File Name
var excelname = excelfile.getName();
//Creates File Blob
var blob = excelfile.getBlob();
// Creates sheet file with given name and data of excel file
sheetfolder.createFile(excelname, blob, MType);
}
}
}
catch(err){
Logger.log(err.lineNumber + ' - ' + err);
}
}
The second code fixes one issue of the first (re-initialization of the loop over the gsheets files), but doesn't actually do anything with the gsheets file collection: your code says "if there are sheets files, convert this excel file". Note: there are limits on the number of files you can create per day! If you are a generic @gmail account, you're limited to 250 spreadsheets per day.
Related: https://stackoverflow.com/questions/11681873/converting-xls-to-google-spreadsheet-in-google-apps-script
Either of your codes, if they were to reach the createFile line, should throw this error:
Invalid argument: file.contentType
because you are passing a Blob while createFile(name, content, mimetype) expects a String.
Reviewing the reference page for DriveApp, one will undoubtedly notice the File#getAs(mimetype) method, which returns Blob, and the Folder#createFile(blob) methods, and try something like:
var gsBlob = excelfile.getAs(MimeType.GOOGLE_SHEETS);
gsfolder.createFile(gsBlob).setName(excelfile.getName());
This too, however, will return an error:
Converting from application/vnd.openxmlformats-officedocument.spreadsheetml.sheet
to application/vnd.google-apps.spreadsheet is not supported.
Looking at the documentation for the getAs method indicates that this is, in general, an unsupported operation unless the destination mimetype is MimeType.PDF. My guess is this is because the PDF conversion is simple enough - its implementation likely uses a "Print"-like functionality - while spreadsheet format conversion requires careful handling of formulas, images, charts, etc.
From past experiences with Google Drive, the general user knows that the ability to use Drive to perform automatic conversion of Excel -> Google Sheets exists. However, this functionality is only available during upload. Drawing from a closely related question, we observe that we must use the Drive API "Advanced Service", rather than the simpler, native DriveApp. Enable the Advanced Service, and then the following snippet can work. Note that the Advanced Drive Service treats folders as files having a specific mimetype (which is why there are no folder-specific methods), so using both DriveApp and the Advanced Service is easiest for those in the Apps Script environment.
// Convert the user's stored excel files to google spreadsheets based on the specified directories.
// There are quota limits on the maximum conversions per day: consumer @gmail = 250.
function convertExcelToGoogleSheets()
{
var user = Session.getActiveUser(); // Used for ownership testing.
var origin = DriveApp.getFolderById("origin folder id");
var dest = DriveApp.getFolderById("destination folder id");
// Index the filenames of owned Google Sheets files as object keys (which are hashed).
// This avoids needing to search and do multiple string comparisons.
// It takes around 100-200 ms per iteration to advance the iterator, check if the file
// should be cached, and insert the key-value pair. Depending on the magnitude of
// the task, this may need to be done separately, and loaded from a storage device instead.
// Note that there are quota limits on queries per second - 1000 per 100 sec:
// If the sequence is too large and the loop too fast, Utilities.sleep() usage will be needed.
var gsi = dest.getFilesByType(MimeType.GOOGLE_SHEETS), gsNames = {};
while (gsi.hasNext())
{
var file = gsi.next();
if(file.getOwner().getEmail() == user.getEmail())
gsNames[file.getName()] = true;
}
// Find and convert any unconverted .xls, .xlsx files in the given directories.
var exceltypes = [MimeType.MICROSOFT_EXCEL, MimeType.MICROSOFT_EXCEL_LEGACY];
for(var mt = 0; mt < exceltypes.length; ++mt)
{
var efi = origin.getFilesByType(exceltypes[mt]);
while (efi.hasNext())
{
var file = efi.next();
// Perform conversions only for owned files that don't have owned gs equivalents.
// If an excel file does not have gs file with the same name, gsNames[ ... ] will be undefined, and !undefined -> true
// If an excel file does have a gs file with the same name, gsNames[ ... ] will be true, and !true -> false
if(file.getOwner().getEmail() == user.getEmail() && !gsNames[file.getName()])
{
Drive.Files.insert(
{title: file.getName(), parents: [{"id": dest.getId()}]},
file.getBlob(),
{convert: true}
);
// Do not convert any more spreadsheets with this same name.
gsNames[file.getName()] = true;
}
}
}
}
My code above enforces a somewhat-reasonable requirement that the files you care about are those that you own. If that's not the case, then removal of the email check is advised. Just beware of converting too many spreadsheets in a given day.
If working with the Advanced Service in Apps Script, it can often be helpful to review Google's API Client Library documentation for the associated API since there is no specific Apps Script documentation equivalent. I personally find the Python equivalent easiest to work with.
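The filename-indexing trick in the snippet above can be shown in plain JavaScript, outside Apps Script (the filenames here are made up):

```javascript
// Object keys give O(1) membership checks instead of repeated string
// comparisons while iterating a file collection.
const gsNames = {};
['report.xlsx', 'budget.xlsx'].forEach(function (name) { gsNames[name] = true; });

function needsConversion(name) {
  // undefined -> true (convert it), true -> false (already converted)
  return !gsNames[name];
}

console.log(needsConversion('report.xlsx')); // false: a converted copy exists
console.log(needsConversion('new.xlsx'));    // true: convert this one
```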
Tehhowch, Thanks for this answer it works perfectly. I also appreciate the thorough explanation. I will check out the Python suggestion as well. Greatly Appreciated!
This works for me when saving from a Gmail attachment blob of an Excel file to a Sheet without Advanced Services.
function saveExcel(blob, filename){
let resources = { title: filename };
let options = { convert: true };
let file = Drive.Files.insert(resources, blob, options);
let fileId = file.getId();
return fileId;
}
If I'm reading files from a Drive folder, I use the following, for example:
function readExcelFiles() {
  // Assumes convertFolder is a DriveApp Folder obtained elsewhere,
  // e.g. DriveApp.getFolderById("...")
  let files = convertFolder.getFiles();
  let filesFound = 0;
  let options = { convert: true };
  while (files.hasNext()) {
    let file = files.next(); // advance the iterator
    let blob = file.getBlob();
    let blobType = blob.getContentType();
    let isXls = ((blobType.indexOf("spreadsheet") > -1) || (blobType.indexOf("ms-excel") > -1));
    if (isXls) {
      filesFound += 1;
      let resources = { title: "tmp" + filesFound };
      let converted = Drive.Files.insert(resources, blob, options);
    }
  }
}
One thing I've found that can be important is using unique filenames. After being saved to Drive as a converted file, they are Google Sheets and can be read with SpreadsheetApp.
Sub EnhanceSpreadsheet()
Dim ws As Worksheet
Set ws = ThisWorkbook.Sheets("Sheet1") ' Change to your sheet name
' Step 1: Add Descriptive Column Headers
ws.Cells(1, 1).Value = "ID or Base Value"
ws.Cells(1, 2).Value = "Relevant Label" ' Adjust as necessary
ws.Cells(1, 3).Value = "Step Index"
ws.Cells(1, 4).Value = "Fine Levels"
ws.Cells(1, 5).Value = "Major Levels"
' Step 2: Introduce Calculated Columns
Dim lastRow As Long
lastRow = ws.Cells(ws.Rows.Count, 1).End(xlUp).Row
' Add Percentage Change Column
ws.Cells(1, 6).Value = "Percentage Change"
Dim i As Long
For i = 2 To lastRow
If ws.Cells(i - 1, 4).Value <> 0 Then
ws.Cells(i, 6).Formula = "=(" & ws.Cells(i, 4).Address & "-" & ws.Cells(i - 1, 4).Address & ")/" & ws.Cells(i - 1, 4).Address & "*100"
End If
Next i
' Add Cumulative Change Column
ws.Cells(1, 7).Value = "Cumulative Change"
ws.Cells(2, 7).Formula = "=" & ws.Cells(2, 4).Address & "-" & ws.Cells(2, 1).Address
For i = 3 To lastRow
ws.Cells(i, 7).Formula = "=" & ws.Cells(i, 4).Address & "-" & ws.Cells(2, 1).Address
Next i
' Step 3: Include Metadata or Context
ws.Cells(1, 8).Value = "Timestamp"
ws.Cells(1, 9).Value = "Market Condition"
ws.Cells(1, 10).Value = "Instrument Identifier"
' Step 4: Visual Representation
Dim chartObj As ChartObject
Set chartObj = ws.ChartObjects.Add(Left:=100, Width:=375, Top:=50, Height:=225)
With chartObj.Chart
.SetSourceData Source:=ws.Range("D1:E" & lastRow) ' Fine Levels and Major Levels
.ChartType = xlLine
.HasTitle = True
.ChartTitle.Text = "Progression of Fine and Major Levels"
End With
End Sub
| common-pile/stackexchange_filtered |
Visual Studio VB Update requires a valid UpdateCommand when passed DataRow collection with modified rows
I have button update and the code
Private Sub txtSave_Click(sender As Object, e As EventArgs) Handles txtSave.Click
Me.Validate()
Me.TrackingBindingSource.EndEdit()
Me.TableAdapterManager.UpdateAll(Me.InventoryDataSet)
MsgBox("Record Update")
End Sub
When run, I got error
System.InvalidOperationException: 'Update requires a valid UpdateCommand when passed DataRow collection with modified rows.'
Not sure if I am missing something for updating records. Thank you.
When you create a typed DataSet from a database, the wizard generates a DataTable and a table adapter for each database table. The DataTable schema is based on the table schema, as is the SQL in the commands of the table adapter.
The SelectCommand contains a SELECT statement that will retrieve all columns of all rows and is executed when you call Fill. The InsertCommand, UpdateCommand and DeleteCommand contain INSERT, UPDATE and DELETE statements respectively and are executed as required when you call Update.
The SelectCommand and InsertCommand can always be generated because all they need to know is the name and data type of each column. UPDATE and DELETE statements need to be able to identify the specific record to update or delete and they do that by specifying the primary key value in the WHERE clause. If your database table has no primary key then that SQL cannot be generated and your table adapter will have no UpdateCommand or DeleteCommand.
What you need to do is to make sure that all your database tables have a primary key. It can be valid to have a table without a PK but it is very rare. Once your table has a PK, you can re-run the Data Source wizard to update your typed DataSet. There is a button on the toolbar in the Data Sources window to do that.
Thank you. Good to know that a Primary Key is needed.
| common-pile/stackexchange_filtered |
Typescript: after transpilation, wrong return type
Why is it that my index.ts
export const isNullOrUndefined = (value: any): value is null | undefined => {
return value === null || value === undefined;
};
is transpiled into index.d.ts (only value is null)
export declare const isNullOrUndefined: (value: any) => value is null;
instead of (value is null | undefined)
export declare const isNullOrUndefined: (value: any) => value is null | undefined;
this is the tsconfig file on typescript 4.9.5
{
"compilerOptions": {
"incremental": false,
"target": "es2019",
"outDir": "build",
"rootDir": "src",
"moduleResolution": "node",
"module": "commonjs",
"declaration": true,
"inlineSourceMap": false,
"esModuleInterop": true,
"resolveJsonModule": true,
"removeComments": false,
"noUnusedLocals": true,
"noUnusedParameters": true,
"noImplicitReturns": true,
"noFallthroughCasesInSwitch": true,
"traceResolution": false,
"listEmittedFiles": false,
"listFiles": false,
"pretty": true,
"lib": ["es2019", "dom"],
"types": ["node"],
"typeRoots": ["node_modules/@types", "src/types"]
},
"include": ["src/**/*.ts"],
"exclude": ["node_modules/**"],
"compileOnSave": false
}
It's because you don't have strictNullChecks enabled. From the option's documentation:
When strictNullChecks is false, null and undefined are effectively ignored by the language. This can lead to unexpected errors at runtime.
When strictNullChecks is true, null and undefined have their own distinct types and you’ll get a type error if you try to use them where a concrete value is expected.
Add at least:
"strictNullChecks": true,
to your tsconfig.json (although I'd recommend all of the strict flags, via "strict": true).
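As a quick illustration of why the flag matters for this guard, here is a hypothetical greet function: with strictNullChecks enabled, the predicate narrows string | null | undefined down to string after the guard returns false.

```typescript
const isNullOrUndefined = (value: any): value is null | undefined => {
  return value === null || value === undefined;
};

function greet(name: string | null | undefined): string {
  if (isNullOrUndefined(name)) return "hello, stranger";
  // With strictNullChecks, `name` is narrowed to `string` here,
  // so calling a string method is safe.
  return "hello, " + name.toUpperCase();
}

console.log(greet(null));  // "hello, stranger"
console.log(greet("ada")); // "hello, ADA"
```

Without strictNullChecks the narrowing to `| undefined` is pointless to the compiler, which is why the emitted declaration drops it.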
| common-pile/stackexchange_filtered |
IE overrides cursor when using contenteditable
I'm wanting a pointer cursor over an element until that element has focus, and then turn to the typical text caret. Easy enough in Chrome, but IE11 and Edge don't seem to let me change the cursor if the element has the contenteditable attribute.
#thing{cursor:pointer;}
<div id="thing" contenteditable="true" style="width:200px;border:1px solid black;cursor:pointer;"> this is some random foo bar dog jumps over a stick test</div>
The simple example shows the cursor only changing if contenteditable is false. If true, it only shows the text cursor. Rather annoying, since MS is the one who supposedly created this attribute. Is this just a bug? Intended by design? Is there a workaround?
I came across this link but unfortunately I'm still not having any luck by changing the DTD.
How to change cursor style on element with 'contenteditable' attribute in IE?
I've just tried this myself in IE and haven't been able to get it to play ball. I would say try your luck with JavaScript? Sorry I couldn't help.
I have reproduced the problem on my side; it could be the default behavior of the IE browser. I will report this issue as feedback.
As a workaround, I suggest you could try to use the <textarea> tag, by using this tag, in IE browser it could use the pointer cursor, code as below:
<textarea id="thing" contenteditable="true" style="width:200px; height:100px; border :1px solid black; cursor:pointer">
this is a some random foo bar jumps over a stick test
</textarea>
Unfortunately this was only a small example of larger web app. I think your solution could work but I'd have to make all my editable elements into textfields or something and then customize a lot of CSS and update the event listeners. But this may be the only IE solution.
| common-pile/stackexchange_filtered |
Access Current Area Name in Session Start - MVC 4.0
We have multiple areas in our MVC 4.0 application and I am looking for a way to get the current Area name when the session is started.
Possible duplicate of ASP.NET MVC - Get Current Area Name in View or Controller
We need this on the session start event.
Have a look at this one then, just use "area" instead of controller/action. http://stackoverflow.com/a/16820248/1241562
| common-pile/stackexchange_filtered |
Create C++ Shared Library from Simulink in Linux
I need to generate a shared library (.so) file from a Simulink model, with inputs and outputs, and run this model step by step in another C++ project.
I am aware that the MATLAB Code Generator is able to generate executables and DLLs. Nonetheless, I could not find how to generate Linux C++ shared libraries. I would appreciate if someone could explain how to generate these libraries, with a minimal working example on how to import and use them in C++.
EDIT
I managed to create a shared library and a sample C++ project in Qt Creator. Somehow I was missing the target file ert_shrlib.tlc. This is what I did:
Configure the Simulink Model
Create a simple continuous Simulink model with two inputs, and two outputs, named: simul_main.slx.
In "Configuration Parameters"->"Code Generation", select the target ert_shrlib.tlc for the Embedded Coder.
In "Configuration Parameters"->"Code Generation"->"Interface", set:
Code replacement library: C99 (ISO)
Support: floating-point numbers, and continuous time.
In "Configuration Parameters"->"Code Generation"->"Interface"->"Configure Model Functions", set:
Initialize function name: init_simul
Step function name: step_simul
Build the Shared Library
Build the model to generate code.
Configure the MWE Qt Project
test.pro
TEMPLATE = app
CONFIG += console
CONFIG -= qt
LIBS += -L<path_to_lib> -lsimul_main
INCLUDEPATH += <path_to_sources>
INCLUDEPATH += <path_to_matlab>/simulink/include/
SOURCES += main.cpp
main.cpp
#include <cstdio>
extern "C"
{
#include "simul_main.h"
#include "simul_main_private.h"
}
int_T main()
{
/* Initialize the model */
init_simul();
real_T input1 = 1, input2 = 1;
real_T time;
real_T output1, output2;
real_T tfinal = 10;
while ((rtmGetErrorStatus(simul_main_M) == (NULL)) && !rtmGetStopRequested(simul_main_M))
{
/* Step the model for base rate */
step_simul(input1, input2, &output1, &output2);
/* Get simulation time */
time = rtmGetT(simul_main_M);
/* Display simulation output */
std::printf("%5.2f : [%5.2f, %5.2f]\n", time, output1, output2);
/* Request simulation to stop */
if (time >= tfinal)
rtmSetStopRequested(simul_main_M, true);
}
/* Terminate the model */
simul_main_terminate();
return 0;
}
Note that the step_simul function signature may be configured in the "Configure Model Functions" dialogue.
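For completeness, building and running the Qt project from a Linux shell might look like the following (the paths are placeholders and the library name may differ in your build):

```
qmake test.pro
make
LD_LIBRARY_PATH=<path_to_lib> ./test
```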
You need to have the Embedded Coder. There are various examples within that product's documentation which, although not Linux specific, should be pretty easy to get to work in Linux.
| common-pile/stackexchange_filtered |
Content/Character Encoding - Why Does HTTP Header(Content-Type) Take Precedence Over Meta Tags(Http-Equiv)?
I have been studying content encoding for some time now, but I'm still learning.
My understanding is, the content/character encoding decides how a web browser renders characters outside of the normal ASCII range (0-127). Basically, there are different standard for interpreting those characters and if the right content encoding is specified, then they interpret correctly. If the wrong character encoding is specified, you may end up displaying characters that don't make sense.
One thing that I found quite surprising is that if the HTTP header Content-Type field and the meta tag http-equiv mention a different encoding, the browser should override the http-equiv meta tag with the HTTP header.
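For reference, the two competing declarations look like this (the charset values are illustrative):

```
HTTP response header:    Content-Type: text/html; charset=ISO-8859-1

In the document itself:  <meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
```

When both are present and disagree, the header value (ISO-8859-1 here) is the one the browser is supposed to honor.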
It seems to me that the person producing the HTML document would be most likely to know the correct content encoding, as it's their content. If they use a tool to create the HTML, it's easy for that tool to automatically include the meta tag. The server, on the other hand, might serve content with many different encoding formats or have a default that's different. Most people producing a HTML document would have control over the meta tag, but they may or may not be able to control the server headers, and the level of technical skill required to do that is higher in many cases.
Content can also be saved locally as .htm or .html, or copied from one server to another. However, the HTTP header information is generally not retained. Thus, if the information is copied, the meta tag would generally go with it. There's a very real chance that data copied from one server would go to another server and be served with the wrong encoding. It's easy to make a file that loads on the web but fails to load properly if saved locally.
I can't seem to find or think of any reason to use the HTTP header, other than as a backup or initial guess of the encoding.
I'm quite curious about the reasoning behind this decision. It seems to me that it makes more logical sense to let the meta tag take precedence, as it would be a more reliable indication of the true encoding. Does anyone know the history of this decision and how it was made?
I'm answering this from my own experience and since I haven't seen anyone else reply to you, I thought I'd share my perspective on it.
I think the reason why the server's encoding gets to override the webpage's encoding is because historically the server would convert any text-based files you upload into a format suitable for the server. This includes rewriting the endianness. This is as opposed to binary files, which didn't get converted.
Because the server converted the text-based file, it would know what the encoding was, because that was what it encoded it into.
When it subsequently serves the text file as a webpage, it has to override the original encoding because it was potentially changed.
| common-pile/stackexchange_filtered |
DX9 GetMessage FPS Drop
My program is a transparent overlay over a game.
Here is the code that sets it to the game window.
SetWindowToTarget is called in a separate thread.
I am very frustrated.
Also, the game feels laggy even when my FPS counter says I'm getting over 200.
void SetWindowToTarget(){
while(true){
tWnd = FindWindow(0, tWindowName);
if (tWnd)
{
GetWindowRect(tWnd, &tSize);
Width = tSize.right - tSize.left;
Height = tSize.bottom - tSize.top;
DWORD dwStyle = GetWindowLong(tWnd, GWL_STYLE);
if(dwStyle & WS_BORDER)
{
tSize.top += 23;
Height -= 23;
}
MoveWindow(hWnd, tSize.left, tSize.top, Width, Height, true);
}
Sleep(5000);
}
}
This code makes my FPS drop down and then back up every 10 seconds or so:
while (GetMessage(&Message, NULL, 0, 0)){
TranslateMessage(&Message);
DispatchMessage(&Message);
}
LRESULT CALLBACK WinProc(HWND hWnd, UINT Message, WPARAM wParam, LPARAM lParam){
Sleep(17);
switch (Message)
{
case WM_PAINT:
Render();
break;
case WM_CREATE:
DwmExtendFrameIntoClientArea(hWnd, &Margin);
break;
case WM_DESTROY:
PostQuitMessage(1);
return 0;
default:
return DefWindowProc(hWnd, Message, wParam, lParam);
break;
}
return 0;
}
render func:
int Render(){
p_Device->Clear(0, 0, D3DCLEAR_TARGET, 0, 1.0f, 0);
p_Device->BeginScene();
if(tWnd == GetForegroundWindow()){
g_vCenterScreen.x = Width / 2.f;
g_vCenterScreen.y = Height / 2.f;
//draw stuff
}
p_Device->EndScene();
p_Device->PresentEx(0, 0, 0, 0, 0);
return 0;
}
You should use PeekMessage instead of GetMessage
I'm not sure it's a good idea to call MoveWindow in a loop in a separate thread. Maybe you can call it only once.
Why? I was told that GetMessage was better for performance.
The low-FPS feel was even worse when I was using just PeekMessage in a while loop. After changing it to GetMessage the FPS got a bit better... but it still isn't great.
GetMessage is a blocking call, while PeekMessage is a non blocking
You can use PeekMessage in a loop with a Sleep(1) on each iteration.
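A sketch of that kind of loop (untested here and Windows-only; it assumes your existing Render() function, with rendering moved out of WM_PAINT):

```cpp
MSG msg = {};
bool running = true;
while (running)
{
    // Drain every pending message without blocking
    while (PeekMessage(&msg, nullptr, 0, 0, PM_REMOVE))
    {
        if (msg.message == WM_QUIT) { running = false; break; }
        TranslateMessage(&msg);
        DispatchMessage(&msg);
    }
    // GetMessage would have blocked above; with PeekMessage we can
    // render a frame on every iteration instead
    Render();
    Sleep(1); // optional: yield the CPU, as suggested above
}
```

Note that the Sleep(17) at the top of your WinProc would still delay every message this loop dispatches, so it is worth removing as well.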
I did that... DWM uses less CPU, but the game feels laggy even though the game's FPS counter is 200+.
Then maybe your problem is related to constant MoveWindow calls
| common-pile/stackexchange_filtered |
Android: CameraBridgeViewBase FPS low
The FPS of CameraBridgeViewBase is relatively low (on a Nexus 4 phone, ~10 FPS). If I use detectors on it, it will be much slower.
Is there a way to accelerate it? What are the alternatives with better FPS for image processing on Android plattform?
| common-pile/stackexchange_filtered |
Position:absolute messing up grid flow
I'm trying to make a grid with CSS from which, when clicked, a div can "break out" using position:absolute, so that it can be moved around above the grid without messing with the aforementioned grid. Here's the jsFiddle. As you can see, when clicked the position is set to absolute, but this destroys the grid structure around it instead of simply floating above it. I've tried using z-index, but it did not work. What am I doing wrong?
Thanks in advance!
Add
vertical-align: middle;
to .icon_wrapper
Demo
| common-pile/stackexchange_filtered |
my .js.erb file is not being called
In my Rails application I am trying to access pagination via an Ajax call. I am triggering a script via an index.js.erb file. The control doesn't go to that file. Please help.
My controller:
def index
ics_per_page=params[:ics_per_page]||5
ics_per_page=ics_per_page.to_i
@ics = Ic.search(params[:root_name],params[:suite_name],params[:case_name],params[:name],'f').paginate(:per_page =>ics_per_page, :page => params[:all_ics])
respond_to do |format|
format.js
format.html # index.html.erb
format.xml { render :xml => @ics }
end
end
My index.js.erb file:
console.log("inside index.js.erb");
$('#listing').html('<%= escape_javascript(render("listing")) %>');
Thanks,
Ramya.
Sure. Could you please let me know how to check this in Firebug?
Could you check again? I think the file index.js.erb is rendered, but the content is not executed because of a wrong content type. Set the content type to specify that it is JavaScript.
format.js { render :content_type => 'text/javascript' }
Hi Alok, what I was able to find out is that even if I don't have index.js.erb in the directory, it's not throwing an error. Ideally it should throw an error, right?
what is the output on Rails log ? Is the call to index action an XHR ?
Hi Alok, after commenting out console.log and giving an alert msg, I am getting:
Rendered ics/_listing.html.haml (185.0ms)
Rendered ics/index.js.erb (191.8ms)
Completed 200 OK in 321ms (Views: 225.9ms | ActiveRecord: 14.7ms)
But the page output is displayed with HTML tags, i.e. the basic HTML format.
How are you calling the index action?
The index action is called on loading the page.
I think you have to call the action through an XML HTTP Request so that the MIME type is set to text/javascript.
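Since the fix hinges on the request being an XHR, one hypothetical way to trigger the index action via Ajax (so the format.js branch runs and index.js.erb is rendered with a JavaScript content type) is a remote link; the path helper and parameter values below are illustrative:

```erb
<%= link_to 'Next page', ics_path(:all_ics => 2), :remote => true %>
```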
| common-pile/stackexchange_filtered |
います and あります usage
I just learned about: います and あります.
I know I should use います for people and moving things and あります for plants and inanimate things.
I have two doubts:
Which one should I use for a dead body? For example: "Is there a dead body inside the room?"
What about robots, or a non-living thing with AI?
Other questions on which verb to use, but not answering this specific question: http://japanese.stackexchange.com/questions/1905/when-is-it-okay-to-use-あります-with-a-living-subject and http://japanese.stackexchange.com/questions/5147/do-viruses-あります-or-います
If you are just starting out, then those rules will often get you by. However, the topic is more complex and depends on one's perspective, feeling, and context. Here is an informative read on the topc: http://leo.aichi-u.ac.jp/~goken/bulletin/pdfs/NO22/03Yamamoto.indd.pdf
@Dono: Thanks for the link. Unfortunately, I am far from understanding a text like that.
Basically it depends on how the speaker feels. However, I think we usually say:
死体があります。
ロボットがいます。 if it looks like it has a mind of its own.
ロボットがあります。 if it is an industrial robot without a mind.
車がいます。 if it is being driven by a human.
車があります。 when we talk about cars in general.
人工知能(AI)があります。 if it doesn't have anything visual, auditory or physical.
コンピュータのソフトウェアがあります。
ゲームのキャラクターがいます。 if it is controlled by computer software.
細菌/ウイルス/コンピュータウイルスがいます。 when we talk about someone's symptoms.
細菌/ウイルス/コンピュータウイルスがあります。
ゾンビがいます。
幽霊がいます。
+1. I never knew that things are this way: "車がいます。 if it is being driven by a human.". Thank you.
According to a quick google search, and agreeing with my thoughts, you should say "死体があります" for dead body, but you would say "死んだ人がいます" (if you like zombies).
For robots and AI, well, I guess います is acceptable, provided it's like a living thing (such as an aibo). I doubt you'd say it for a clever car, unless it's KITT from the Knight Rider… To sum up, it's quite subjective in this case.
If you said "there is a dead fish on the table", would it be います or あります?
例:机に魚がいます。 (でも魚は死んだ。)
| common-pile/stackexchange_filtered |
Knex migration seeding completes with foreign key error
I'm seeding my db from an array that looks like this (words and definitions are in a many to many relationship):
var seeds = [
{
"word": "Click",
"definitions": ["Computer", "Mouse", "Tasto", "Pulsante", "Selezionare"]
}, {
"word": "Galoppo",
"definitions": ["Cavallo", "Andatura", "Trotto", "Ippica", "Passo"]
}, {
"word": "Raggio",
"definitions": ["Sole", "Bicicletta", "Diametro", "Luce", "Laser"]
}, {
.
.
.goes on for 1089 objects
This is what I tried:
exports.seed = function (knex, Promise) {
var promises = seeds.map(function (seed) {
return knex('words').insert({
word: seed.word
}, 'id').then(function (word_id) {
var promises = seed.definitions.map(function (definition) {
return knex('definitions').insert({
definition: definition
}, 'id').catch(function (err) {
if (err.code === 1062)
return knex('definitions').select('id').where({
definition: definition
}).then(function (duplicate) {
return knex('definitions_words').insert({
definition_id: duplicate[0].id,
word_id: word_id
});
});
}).then(function (definition_id) {
return knex('definitions_words').insert({
definition_id: definition_id,
word_id: word_id
});
});
});
return Promise.all(promises);
});
});
return Promise.all(promises);
};
Words are unique in my seeds but definitions may repeat, so I catch the duplication error and grab the id of the duplicate to put that in the junction table. It seems to work fine, the junction table in fact ends up with 1089*5 rows (5445), but I get an error on the cli:
Error: Cannot add or update a child row: a foreign key constraint fails
(`mytable`.`definitions_words`,
CONSTRAINT `definitions_words_definition_id_foreign`
FOREIGN KEY (`definition_id`) REFERENCES `definitions` (`id`))
Although we can't see your migrations (and this is a rather old question) what's usually going on with these foreign key restrictions is that you've defined definition_id in words to reference definitions.id. For this reason, you can't create the word before the definition it references exists.
Without testing it, and without error-checking, I'd imagine you using something more like this:
exports.seed = function (knex, Promise) {
var promises = seeds.map(function (seed) {
// Check for an existing definition. More recently
// you can use `whereNotExists` but you always need
// an id here whatever the case
return knex('definitions')
.select('id')
.where('definition', seed.definition)
.then(function (definition_id) {
if (definition_id.length === 1) return definition_id[0]
return knex('definitions')
          .insert({ definition: seed.definition })
})
.then(function (definition_id) {
// Use the definition once it exists
return knex('words')
.insert({ word: seed.word, definition_id: definition_id })
.then(function (word_id) {
return { word_id: word_id, definition_id: definition_id }
});
})
.then(function (join_ids) {
// Finally, update the join table
return knex('definitions_words')
.insert({
definition_id: join_ids.definition_id,
word_id: join_ids.word_id
})
})
  });
  return Promise.all(promises);
};
find the last added sheet (vba)?
How to get the last sheet created in excel ?
I used GetSheets.Last and it works, but it returns the last sheet in tab order. That's right as far as it goes, but if the most recently created sheet doesn't sit at the end of the tab order (for example, it was moved to the middle), GetSheets.Last doesn't find it.
Is there some function or way to find out which sheet was created last?
Thanks
regards
As far as I know worksheets don't carry such metadata - you can't know which sheet was added first or last, nor sort them by the order they were created in. I could be wrong though.
Is your code creating the sheets? If so, you could use CustomProperties to store some Timestamp metadata. ...that wouldn't help with sheets not created by the macro though.
You can use Worksheets(Worksheets.Count). This will give you the index number of the highest numbered sheet, which should have been the last one created.
@DarrellH until Worksheets(Worksheets.Count) is moved, yes.
OK, since nobody else has mentioned it, I'll bite and ask why you need to know the order that worksheets were added. It sounds like you're trying to treat the worksheets themselves as data, which indicates a design problem.
AFAIK the sheetID in the .zip file of your xlsm is numbered consecutively (workbook.xml). I don't know if there's a possibility to read from it in the same/open file.
When you programmatically add a worksheet, the Worksheets.Add function yields a reference to the just-added Worksheet object: that is normally how VBA code gets a handle on the "last created worksheet".
Dim newSheet As Worksheet
Set newSheet = book.Worksheets.Add
'use newSheet object to refer to the newly added worksheet.
If we're talking about manually added worksheets, things need to get more ...involved.
Assuming you need to track all sheets added to all workbooks, you could have an Excel add-in that handles application-wide events like so:
Private WithEvents app As Excel.Application
Private Sub Workbook_Open()
Set app = ThisWorkbook.Application
End Sub
Private Sub app_WorkbookNewSheet(ByVal Wb As Workbook, ByVal Sh As Object)
If Not TypeOf Sh Is Excel.Worksheet Then Exit Sub
Dim ws As Worksheet
Set ws = Sh
ws.CustomProperties.Add "DateCreated", Now
End Sub
Excluding Application.EnableEvents = False, I'm not 100% convinced that handler would run in every possible case that can create a worksheet, but I guess it's better than nothing.
You can then have a function that gets you the DateCreated custom property given a Worksheet instance:
Public Function GetDateCreated(ByVal ws As worksheet) As Date
Dim p As CustomProperty
For Each p In ws.CustomProperties
If p.Name = "DateCreated" Then
GetDateCreated = p.Value
Exit Function
End If
Next
GetDateCreated = 0 ' unknown
End Function
And then all that's left to do is to write a procedure that can sort the sheets based on their associated DateCreated custom property value.
There is a ThisWorkbook.Workbook_NewSheet event sub as well.
@user10970498 yes, but that would only pick up new sheets in ThisWorkbook.
I guess it depends on whether the 'latest workbook' requirement is more important for one workbook on many computers or many workbooks on one computer. Excel add-ins are not always the easiest to distribute but an XLSM/XLSB carries its subs with itself.
Granted... TBH I agree with Comintern here, this is very likely not something the OP actually needs to be doing - but it's a fun problem to try & solve though :) ...OP doesn't specify whether it's about a workbook, or about any workbook.. I went with the broader option
Suppress system() console output
I'm using system() to open and close an external program with which my code communicates. However, every time I use the system() function, I get the console output I would get if I was calling the program from a normal terminal/shell, e.g. every time I call system("killall [program] &") I get a Terminated message. Is there a way to suppress this type of output?
Redirect their output to /dev/null, as always.
@Jon I'm already trying that, but I still get "Terminated" messages on the console.
You should use execlp instead of system ;)
https://www.securecoding.cert.org/confluence/display/seccode/ENV04-C.+Do+not+call+system()+if+you+do+not+need+a+command+processor
Problem is, with execlp I get stuck waiting for the command to finish; what I was looking for was something that would allow me to just start a second process that runs in parallel with the main program, like I get with system("[program] &")
@joaocandre you should use std::thread. See the accepted answer here, put your execlp in task1. This should do the trick.
Apparently, using task1.join() implies waiting for task1 to finish, so it's not quite what I am looking for.
@joaocandre just don't call join(), the task1 should be executed at the same time without blocking your main thread.
not using join() results in task1 not being launched at all. I've also tried using detach, same result.
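For the record, the usual way to silence everything a system() command prints is to redirect both streams inside the command string itself, e.g. system("killall [program] >/dev/null 2>&1 &") — the & still backgrounds the command, so the call does not block. If a Terminated message still appears, it is typically emitted by whatever shell is supervising the killed process, so the same redirection has to be applied where that process was launched. A minimal shell sketch of the redirection (run_silent is an illustrative name):

```shell
#!/bin/sh
# Run a command with both stdout and stderr discarded.
run_silent() {
    "$@" >/dev/null 2>&1
}

# Neither line below reaches the terminal:
run_silent sh -c 'echo to stdout; echo to stderr >&2'

echo "only this is printed"
```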
Show that if $N$ is a normal subgroup of $G$ which contains all commuters then $G/N$ is abelian.
I am working on my proof for class and I was wondering if this look ok?
Let $N$ be a normal subgroup of $G$. We want to show that $G/N$ is abelian, i.e. $(aN)(bN) = abN = baN = (bN)(aN)$. Since $N$ contains all commutators,
then let $aba^{-1}b^{-1}N = N$ for some $a,b \in G$ then,
\begin{align*}
aba^{-1}b^{-1}N &= N && \text{ Given}\\
ab(ba)^{-1}N &= N && \text{ Definition of inverse} \\
abN(ba)^{-1} &= N && \text{ since N is normal}\\
abN &= N(ba) && \text{ right multiply by ba}\\
abN &= baN && \text{ since N is normal}\\
\end{align*}
Which is what we wanted to show.
We are using Abstract Algebra by Judson and I tried to mimic one of the proofs in the book plus add my reasons behind doing so.
Your proof is good. Also, you might be interested to know that any subgroup that contains the commutators has to be normal.
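To spell out the equivalence behind this (using the convention $[a,b] = a^{-1}b^{-1}ab$):

\begin{align*}
(aN)(bN) = (bN)(aN) &\iff abN = baN\\
&\iff (ba)^{-1}(ab) \in N\\
&\iff a^{-1}b^{-1}ab = [a,b] \in N.
\end{align*}

So requiring every commutator to lie in $N$ is exactly the condition for $G/N$ to be abelian.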
Commutators. No, not "for some $a,b\in G$, rather, "for all $a,b\in G$, $[a,b]N=N$. This says precisely that $G/N$ is abelian.
Good catch, the devil is in the details.
What type of thermal paste for cooling a stepper motor?
My little Nema 11 stepper becomes hot when I use it to hold a position. This is quite normal since it uses 1.3 A of current.
My plan is to use the big aluminium supporting bracket as a heat sink too. This way I will avoid a bulky enclosure box (this stuff has to be mounted on a motorcycle).
I'm thinking of applying a layer of thermal paste between the motor and the surface of the bracket before fixing it with some screws.
After googling, I discovered that there is no agreement about which pastes are electrically conductive and which are not. I got confused...
Which paste composition is more suitable for my application? Does somebody have practical experience with this?
Do you need the full 1.3A to hold the position. A common approach is to reduce the current while holding using PWM.
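To put a number on the hold-current suggestion: the heat comes from I²R losses in the windings, so reducing the hold current pays off quadratically. A quick sketch (the 2 Ω per-phase winding resistance is an assumed, illustrative value — check your motor's datasheet):

```python
def winding_heat_w(current_a, resistance_ohm):
    """Resistive power dissipated in one winding: P = I^2 * R."""
    return current_a ** 2 * resistance_ohm

R = 2.0                        # assumed per-phase resistance, ohms
full = winding_heat_w(1.3, R)  # full 1.3 A hold current
half = winding_heat_w(0.65, R) # hold current reduced by half
print(full, half)              # halving the current quarters the heat
```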
Any type of paste will be better than air, and paste may only give you a tiny advantage over simply mounting it dry, which is far more convenient and less messy. Try it dry first. If it's good enough, you're done. If not, then try some paste. Whether electrically conductive or not should not make any difference when mounting a grounded motor chassis to a ground mounting bracket.
@KevinWhite: thanks for the trick! I will try it in the next days. This could help to reduce the heat and (maybe) the need for a heat sink.
@Neil_UK: seems that I need to buy one of those IR devices to measure temperature and make some tests. The big problem is how to explain that to my wife...
Generalising Cutting lens changes intensity of light of image
We were told that if I cut a (say biconvex) lens in half, the intensity of light which forms the image is now half. I was wondering if there is a general formula: if I cut a lens horizontally (at a variable position, say 2/3 of the aperture), what intensity of light would I receive for the image formed?
The intensity of light in the image is equal to the intensity on the lens. Cut the lens in half, only half of the light will hit the lens and will reach the image. The intensity of the image is proportional to the area of the lens
In terms of reducing intensity, there is no difference between adding an aperture to the lens (in front of it or behind it) or cutting out pieces of the lens. In a sense a cut lens is just an ordinary lens with a different type of aperture. In any case, the intensity is proportional to the area of the aperture/remaining lens.
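To make "proportional to area" concrete for a horizontal cut: for a circular lens of radius $r$, cut along a chord at distance $d$ from the center (keeping the larger piece), the removed piece is a circular segment, so (a sketch, assuming uniform illumination of the aperture):

$$\frac{I_{\text{cut}}}{I_{\text{full}}} = \frac{A_{\text{kept}}}{\pi r^{2}} = 1 - \frac{r^{2}\cos^{-1}(d/r) - d\sqrt{r^{2}-d^{2}}}{\pi r^{2}}.$$

Cutting exactly in half ($d=0$) gives $1 - \frac{r^{2}(\pi/2)}{\pi r^{2}} = \frac{1}{2}$, recovering the factor of one half mentioned in the question.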
As an interesting side note, the focused parts of the image would stay the same shape as before cutting. Defocused parts would have blur spots with the shape of the remaining lens, like in these two images (taken from here: https://en.m.wikipedia.org/wiki/Bokeh), instead of circular as with a whole lens (assuming the lens is circular, of course):
Maybe this helps visualise what's happening here: you're cutting off pieces of the image each object point makes. When the image is sharp, you don't see the shape because its size reduces to a point. But the shape is still there and you've still cut off pieces of it, removing those pieces' contribution to intensity.
forward the value of datetimepicker to another form to display it in a textbox
How can I forward the value of the DateTimePicker named dtpDAILY when clicking the OK button on the form named DAIILY,
and display it in another form named SALES, putting it in the textbox named txtSUMMARY?
You can try this.
First get the open form which is Sales using Application.OpenForms, then set the txtSummary.Text on the value you get from DateTimePicker:
SalesForm frmSales = Application.OpenForms.OfType<SalesForm>().SingleOrDefault();
frmSales.txtSUMMARY.Text = dateTimePicker1.Value.ToLongDateString();
frmSales.Show();
this.Close();
Make sure you set your txtSummary textbox to public on its Access Modifier properties:
Can I recover my saved-over TIF file?
While retouching I saved my file as a tif and then flatten it and saved as a jpeg. I accidentally saved the tif again while it was flattened with no layers. Is there a way to recover my tif file with the layers?
If you are lucky, the application saved the file as a new file, then erased the old version and renamed the new. This leaves the previous version as an erased file on the disk, and it can be recovered using "unerase" utilities. But best ask in SuperUser, you'll get more informed answers. And since this is purely a computer problem, state operating system, disk type, etc.... when you ask the question.
Also, the filesystem might have allocated new space for the new copy and didn't yet reuse the sectors where the old version was. Slim chance though....
@rachandboneman If it does save new/erase old/rename new this is implicit, since the old version is still on the disk when the new one is saved.
vtc b/c This is a question about file recovery, not photography per se.
Btw, I don't know what app you are using but in the one I use (Gimp) flattening is automatic (and not done on the current image) when "exporting " to JPEG.
@Xiota Not if you do "Save", only of you do "Save as...", so in the case at hand it would not have helped (assuming a open XCF/edit/flatten/export to Jpeg/save sequence).
@Xiota Ctrl-E will bluntly re-export the file if you loaded it from a non-XCF format.
@xenoid Seems like they're purposefully trying to make people lose data.
If you are still in the program, you might be able to undo.
If you have some kind of automatic backups or versioning filesystem installed, you might be able to recover from that.
Otherwise, sorry, no.
Can also search the Google machine for file recovery programs.
@xiota Unlikely if the file has been saved over.
Most programs in modern operating systems don't actually overwrite files when they "save over" files.
Good odds if the filesystem is journalled.
Bad odds that anyone who asks the question above knows what journalling is, much less has their file system journalled.
React Native Debugger prevents network requests
I'm working on a React Native app, testing in an android emulator. I've used the standalone React Native Debugger app as well as the debugger that opens in Chrome. In the Chrome window, the Network tab shows no activity, so that's no help. In the standalone debugger, the same is true until you right-click and choose Enable Network Inspect. The problem I'm having is that after I enable network inspecting in the debugger, all network requests fail - the inspector shows their status going from Pending to Canceled after a few seconds. I can see in my server logs that no requests are coming in. It's like debugger itself is somehow blocking the requests.
I've set up adb to run as root. When I run react-native run-android the output includes Running adb -s emulator-5554 reverse tcp:8081 tcp:8081, so I think things are starting up fine. The network requests from the app (login etc.) work fine (a typical URL would be http://<IP_ADDRESS>:2080/api/LoginScreenController/GetIdentityStatus), until I choose "enable network inspect" in the debugger, at which point all network requests fail as described above.
Any suggestions would be appreciated.
Did you find any solution?
I know that I'm too late but perhaps it will save time for somebody.
During an update of react-native, I updated react-native-reanimated from v1 to v2, and after that I faced the same behavior.
Version 2 requires the Hermes engine to be enabled; if it isn't, all the requests will stay in pending status.
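If Hermes turns out to be the issue, note that where it is enabled depends on the React Native version. On older versions (pre-0.70) it was toggled in android/app/build.gradle roughly like this — a sketch, check the docs for your exact version, since newer releases enable Hermes by default:

```groovy
// android/app/build.gradle (React Native < 0.70 style)
project.ext.react = [
    enableHermes: true  // reanimated v2 setups commonly expect this
]
```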
Detection of 301 Redirect
In NGINX is there a way to detect is a site was accessed via a 301 redirect? An NGINX server was setup incorrectly and site B now has some 301 redirects cached in browsers pointing to site A. I would like to redirect these a second time to new site C from both A and B, but, only redirect from A if they were previously redirected from B to A and let users who directly access A get to A without redirect.
I hope this makes sense.
You can't detect that, no.
Browsers do not include HTTP status codes from previous responses in a new request. At most you get the previous URL in the Referer header, but even that is unreliable (many users disable sending that header, as it is seen as a security or privacy breach).
As such, an HTTP server has no way of knowing whether a request was initiated by a redirect, by the user following a link, or by the user typing the address directly.
Is this a Functional Differential Equation? How to solve it?
I ran into the equation below. I'm not familiar with functional derivatives so I'd appreciate if someone could give me an idea of how to solve it and/or a good reference I can use. I appreciate your help!
Let $g=g(v)$, $v=v(x)$ and let $h^{i}=h^{i}(x,v(x),v_{x}(x))$ for $i=1,2,3$. Functions $h^{i}$ are known while function $v(x)$ is unknown. We're looking for a solution of $g$ such that,
$g_{vv}\left(v\right)+h^{1}\left(x,v\left(x\right),v_{x}\left(x\right)\right)g_{v}\left(v\right)+h^{2}\left(x,v\left(x\right),v_{x}\left(x\right)\right)g\left(v\right)+h^{3}\left(x,v\left(x\right),v_{x}\left(x\right)\right)=0$.
Some other conditions are $x\in\left[\underline{x},\infty\right)$, $v\left(\underline{x}\right)>0$ and $v(x)$ is an increasing function of $x$. Also, $\int_{v\left(\underline{x}\right)}^{\infty}g\left(v\right)dv=1$.
Which functions are known here? If $v(x)$ and $h^i(x)$ are known and you need to find $g(v)$, then this is an ordinary differential equation. Just rewrite it so all the functions depend only on $x$, using the known functions. Then your solution $G(x)=g(v(x))$ will be expressed in parametric form
That's a good question I'll edit the text. $v(x)$ is unknown while $h^{i}$ is known.
What do you mean " We're looking for a solution of g "? Are both $v(x)$ and $g(v)$ unknown functions? What other conditions do you have? Like initial/boundary conditions, the required properties for solutions, etc. With two unknown functions and one equation there's quite a bit of freedom
True, I was hoping to find a solution for $g(v)$ as a function of the unknown $v(x)$. The conditions are $x\in\left[\underline{x},\infty\right)$, $v\left(\underline{x}\right)>0$ and $v(x)$ is an increasing function of $x$. Also, $\int_{V\left(\underline{x}\right)}^{\infty}g\left(v\right)dv=1$.
I think if you post an actual example of the problem you are trying to solve and maybe some context (where it came from) the question stands a better chance of being answered
You're probably right. I'll try to write down a simplified version of the original problem.
While waiting for the OP to reveal more details, I'll try to make sense of what is presented, and later edit if needed. Not an answer, obviously.
We have a single equation for two unknown functions $g(v)$ and $v(x)$:
$$g_{vv}\left(v\right)+h_{1}\left(x,v\left(x\right),v_{x}\left(x\right)\right)g_{v}\left(v\right)+h_{2}\left(x,v\left(x\right),v_{x}\left(x\right)\right)g\left(v\right)+h_{3}\left(x,v\left(x\right),v_{x}\left(x\right)\right)=0$$
First, let us rewrite everything in terms of $x$:
$$g(v(x))=G(x)$$
$$\frac{dg}{dv}=\frac{dG}{dx}\frac{dx}{dv}=\frac{G_x}{v_x}$$
$$\frac{d}{dv}\frac{dg}{dv}=\frac{dx}{dv}\frac{d}{dx} \frac{G_x}{v_x}=\frac{1}{v_x} \left(\frac{G_{xx}}{v_x}-\frac{G_x v_{xx}}{v_x^2} \right)$$
Since each function is of the same variable now, I'll use $'$ for derivatives and rewrite the equation as follows:
$$\frac{G''}{v'^2}-\frac{G' v''}{v'^3}+h_1(x,v,v')\frac{G'}{v'}+h_2(x,v,v')G+h_3(x,v,v')=0 $$
Multiplying by $v'^3$ (from the conditions placed on $v(x)$ we know that $v'>0$):
$$G''v'-G' v''+h_1(x,v,v')G'v'^2+h_2(x,v,v')Gv'^3+h_3(x,v,v')v'^3=0 $$
This is a second order nonlinear ODE (with a single variable $x$), for two unknown functions.
We need to find some pair $G(x),v(x)$ which satisfies the equation and the conditions placed on both functions. Then we have $g(v)$ in parametric form.
I'm sure there's a lot of different solutions here.
In a more common case, we would have another ODE with the two functions, then we would be able to find the general solution (provided we even know it for this particular kind of nonlinear equations).
This is similar to a case of a, say, algebraic equation $P(y,x,t)=0$ where we were asked to find $y(x)$. A lot of pairs $x(t),y(x)$ might be solutions.
To the question in the title, I would likely say that this is not a functional differential equation, at least not any kind that I heard of. Usually functional ODEs are known to have terms with changed argument of some kind, but the change is known. For example $y'(x)=y(x-1)$ or $y'(x)=y(2x)$ or something more complicated.
How to write thread safe method common for multiple instances of spring boot application
I have a situation wherein, for a given request, a file is generated as part of an endpoint code flow. At present we have placed the file generation code within a synchronized block, to ensure thread safety. Till now we have deployed only 1 instance of the application in Jenkins. Now we are planning to deploy 3 instances of our application in Jenkins. My problem is: how do I ensure thread safety of my file generation code now?
Your question is really unclear. If you have an implementation already, update your question instead of answering your own question.
Coming back to your question, Jenkins is not a deployment environment but a build tool. If you are spinning the server as part of the build process for test execution purposes, it is highly recommended not to have side effects. Either deploy it remotely or use a docker container to run your application.
Modal Popup and closing with sessionStorage keeping it closed
I'm wanting to make a modal that will show up after a few seconds but then after you close the modal it doesn't show up unless you close the website and start a new session. I've got the modal but I'm struggling to integrate the sessionStorage
<div id="myModal" class="modal">
<div class="modal-content">
<div class="modal-body">
<span class="close">×</span>
<h1>I'm Dummy Body</h1>
</div>
</div>
</div>
<script>
var modal = document.getElementById("myModal");
var span = document.getElementsByClassName("close")[0];
span.onclick = function() {
modal.style.display = "none";
}
window.onclick = function(event) {
if (event.target == modal) {
modal.style.display = "none";
}
}
setTimeout(function(){
modal.style.display = "block";
},6000)
</script>
Are you using Bootstrap, or are you making this modal on your own?
No, I'm using my own modal, from scratch, I have taking references from other places like w3schools.
You can achieve this by setting a flag in sessionStorage when the modal is closed, indicating it has been shown. When the page loads, check if the flag exists in sessionStorage. If not, display the modal after a delay. If the flag is present, do not display the modal. You can go to your dev tool and check application>session storage to see the log there. See here for more information on sessionStorage.
<script>
var modal = document.getElementById("myModal");
var span = document.getElementsByClassName("close")[0];
span.onclick = function() {
modal.style.display = "none";
// Once the modal is closed, set a flag in sessionStorage
sessionStorage.setItem('modalShown', 'true');
}
window.onclick = function(event) {
if (event.target == modal) {
modal.style.display = "none";
// Set flag in sessionStorage when modal is closed by clicking outside
sessionStorage.setItem('modalShown', 'true');
}
}
// Check if the modal has been shown in this session
var modalShown = sessionStorage.getItem('modalShown');
if (!modalShown) {
setTimeout(function(){
modal.style.display = "block";
}, 6000);
}
</script>
Optimization of predictions from sklearn model (e.g. RandomForestRegressor)
Does anyone used any optimization models on fitted sklearn models?
What I'd like to do is fit model based on train data and using this model try to find the best combination of parameters for which model would predict the biggest value.
Some example, simplified code:
import pandas as pd
df = pd.DataFrame({
'temperature': [10, 15, 30, 20, 25, 30],
'working_hours': [10, 12, 12, 10, 30, 15],
'sales': [4, 7, 6, 7.3, 10, 8]
})
from sklearn.ensemble import RandomForestRegressor
model = RandomForestRegressor()
X = df.drop(['sales'], axis=1)
y = df['sales']
model.fit(X, y);
Our baseline is a simple loop and predict all combination of variables:
results = pd.DataFrame(columns=['temperature', 'working_hours', 'sales_predicted'])
import numpy as np
for temp in np.arange(1,100.01,1):
for work_hours in np.arange(1,60.01,1):
results = pd.concat([
results,
pd.DataFrame({
'temperature': temp,
'working_hours': work_hours,
'sales_predicted': model.predict(np.array([temp, work_hours]).reshape(1,-1))
}
)
]
)
print(results.sort_values(by='sales_predicted', ascending=False))
Using that way it's difficult or impossible to:
* do it fast (brute method)
* implement constraint concerning two or more variables dependency
We tried PuLP library and PyOmo library, but both doesn't allow to put model.predict function as an objective function returning error:
TypeError: float() argument must be a string or a number, not 'LpVariable'
Does anyone have any idea how we can get rid of the loop and use something else?
The keywords are black-box optimization or gradient-free optimization. There is too much to say and it's not really something for stackoverflow. All your candidates are not suited for this as all their assumptions are wrong (random examples; not mapped to those candidates: differentiable, continuous, convex). In black-box opt there are lots of approaches, bayesian, surrogate losses and all that stuff... but to be honest: in you use-case grid-search, random-search or some bandit-based random-search is very competitive. Hyperparameter-tuning would be one more keyword to google.
Above focuses on the "vs. brute-force" question. If all you want is constraint-based filtering of your grid-search (loops), this is either easy to filter out (for simple things) in the loop or you might go for sat-solving / constraint-programming techniques, where at least the former has lots of theory in terms of uniform solution-sampling. But this rapidly goes towards research stuff.
Did you find any solutions? I have the same problem
When people talk about optimizing fitted sklearn models, they usually mean maximizing accuracy/performance metrics. So if you are trying to maximize your predicted value, you can definitely improve your code to achieve it more efficiently, like below.
You are collecting all the predictions in a big results dataframe, and then sorting it in ascending order. Instead, you can just search for an increase in your target variable (sales_predicted) on-the-fly, using a simple if logic. So just change your loop into this:
max_sales_predicted = 0
for temp in np.arange(1, 100.01, 1):
for work_hours in np.arange(1, 60.01, 1):
        sales_predicted = model.predict(np.array([temp, work_hours]).reshape(1, -1))[0]
if sales_predicted > max_sales_predicted:
max_sales_predicted = sales_predicted
desired_temp = temp
desired_work_hours = work_hours
So that you can only take into account any specification that produces a predictiong that exceeds your current target, and else, do nothing.
The result of my code is the same as yours, i.e. a max_sales_predicted value of 9.2. Also, desired_temp and desired_work_hours now give you the specification that produces that maximum. Hope this helps.
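A complementary speed-up, independent of the search logic: instead of calling predict once per combination inside a Python loop, build the whole grid as one array and score it in a single vectorized predict call. A sketch using the question's toy data (random_state is added here for reproducibility; it is not in the original):

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

df = pd.DataFrame({
    'temperature': [10, 15, 30, 20, 25, 30],
    'working_hours': [10, 12, 12, 10, 30, 15],
    'sales': [4, 7, 6, 7.3, 10, 8],
})
model = RandomForestRegressor(random_state=0)
model.fit(df[['temperature', 'working_hours']].values, df['sales'])

# All 100 x 60 parameter combinations as one (6000, 2) array ...
temps = np.arange(1, 101, dtype=float)
hours = np.arange(1, 61, dtype=float)
grid = np.array(np.meshgrid(temps, hours)).T.reshape(-1, 2)

# ... scored in a single vectorized predict() call
preds = model.predict(grid)
best_temp, best_hours = grid[np.argmax(preds)]
print(best_temp, best_hours, preds.max())
```

This removes the Python-level loop entirely; constraints between variables can then be expressed as boolean masks over the grid before taking the argmax.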
tcl - insert characters and replace lines
Say I ask Tcl to print the following from within a nested dictionary (containing rows and columns) - Reference: Brad Lanam
set id0 "[dict values [dict get $risedata constraints constraint $c 0]]"
...and I get this
1.1 2.1 3.1 4.1 5.1
Now I wish to change
{1.1 2.1 3.1 4.1 5.1} to get {"1.1,2.1,3.1,4.1,5.1", \}
By adding the
1) ',' in between the values
2) ',' space and ' \' at the end
3) " at the start
I know I have to use join and linsert (I did join and need help with linsert)
1) ATTEMPT: set id0 [join $id0 ","] to get 1.1,2.1,3.1,4.1,5.1
2) QUESTION: How do I convert 1.1,2.1,3.1,4.1,5.1 to {"1.1,2.1,3.1,4.1,5.1", \}
I tried set id0 [linsert [linsert $id0 end , "] 1 "] but it is INCORRECT. Please help!
Ok so after that is done I want to
... write the line ({"1.1,2.1,3.1,4.1,5.1", \}) over (overwrite) line 1 under 'values' which is "1.1,1.2,1.3,1.4,1.5", \ in this file format (called liberty (z.lib) - used for chip design). All other lines (rise_constraint ... etc) should be printed as is.
Snippet of z.lib
rise_constraint (constraint_template_5X5) {
index_1 ("0.01, 0.75, 0.72, 0.9, 0.8");
index_2 ("0.075, 0.025, 0.04, 0.3, 0.8");
index_3 ("0.084, 0.83, 3.99, 8.1, 19.44") ;
values ( \
"1.1,1.2,1.3,1.4,1.5", \
"2.1,2.2,2.3,2.4,2.5", \
"3.1,3.2,3.3,3.4,3.5", \
"4.1,4.2,4.3,4.4,4.5", \
"5.1,5.2,5.3,5.4,5.5", \
);
}
Note: I understand that for writing out a file, one would use open, close, dict, foreach, puts - to overwrite multiple lines (in this case 5 lines of 'values') - as well explained by Brad Lanam in a previous Tcl query.
How do we use a foreach loop to iterate over 5 lines (in my .lib file format) and change them to my dict-returned values?
Greatly appreciate your help on this!! Thank you so much! I'm a graduate student.
Reference: Brad Lanam
I'd use format here:
set id0 [list 1.1 2.1 3.1 4.1 5.1]
set str [format "\"%s\" \\" [join $id0 ,]]
puts $str
"1.1,2.1,3.1,4.1,5.1" \
Brilliant!! That was very slick! Thanks. If you really don't mind, do you know how I can use a foreach loop to replace the 5 old strings (under values) with the formatted strings? How can I read my z.lib file, search for the keyword 'values' and replace the strings effectively. I really appreciate your advice.
Do you know how to also include a ',' (comma) right after the last double-quote in the line (using format)? So "1.1,2.1,3.1,4.1,5.1", \ as opposed to "1.1,2.1,3.1,4.1,5.1"
Thanks so much!
How do you think you will have to alter format's first parameter to add that comma?
It would be set str [format "\"%s\", \\" [join $id0 ,]]
Your link to format helped. Didn't check that before. Might you be able help in the foreach issue I'm running into. I know how to read the file (set fh [open "C:/Tcl/official/original_snippet.lib" r]) but how do I zero in on values of "rise_constraint" (contained amongst 1000's of other lines) and do line by line replacement of values with newly formatted strings? I'd be indebted for your advice. I imagine the format would be something like this
set data [split $fh "\n"]
foreach line $data {
# do some line processing here
}
@edaloke Tcl's format is just like C's sprintf() except not broken or insecure. And it avoids the things that are just never a good idea at all (or meaningless in Tcl, such as the addresses of things).
And for mere thousands of lines (i.e., maybe a megabyte of data?) just reading it all in and using the right search is quite good enough. There's a few searching options, depending on exactly what you are doing.
@edaloke, I got generous and added an answer to your original question
Getting cors error while trying MFE using Module Federation
I am trying to achieve MFE using Module Federation. I went through this link to achieve this.
Everything was going perfectly until I ran the base/host application. It throws a CORS error while connecting to the remoteEntry.js file.
This file can be accessed directly from the URL, but not from the Angular app.
I am getting this error in the browser:
Access to script at 'http://localhost:8090/remoteEntry.js' from origin 'http://localhost:4200' has been blocked by CORS policy: No 'Access-Control-Allow-Origin' header is present on the requested resource.
main.js:1 GET http://localhost:8090/remoteEntry.js net::ERR_FAILED 200 (OK)
In my :8090 Node server, CORS is already there.
app.use(function(req, res, next) {
res.setHeader('Access-Control-Allow-Origin', '*');
res.setHeader('Access-Control-Allow-Methods', 'GET, POST');
res.setHeader('Access-Control-Allow-Headers', 'X-Requested-With,content-type, Authorization');
next();
});
This is added to the app.js file.
Did I miss something?
EDIT :
In the Angular Module Federation app, we add the Module Federation library using ng add @angular-architects/module-federation@16 --project remoteapplazy2 --port 5002. Then in the host app, we add the remote entry path like http://localhost:5002/remoteEntry.js. Now if we run the host app and remote app using ng serve, it works fine. But if we build the remote app using ng build and serve the files with a Node server running on port 8091, we get the error GET http://localhost:5002/remoteEntry.js net::ERR_CONNECTION_REFUSED main.ts:5 Error loading remote entries TypeError: Failed to fetch dynamically imported module: http://localhost:5002/remoteEntry.js. If we run the Node.js server on port 5002 instead, we get the CORS error Access to script at 'http://localhost:5002/remoteEntry.js' from origin 'http://localhost:4200' has been blocked by CORS policy: No 'Access-Control-Allow-Origin' header is present on the requested resource.
Is this really a CORS error, or did I miss something when building the app and running it on the server? Is there any other process to run the code on a server?
I created an HTML file for testing:
<html>
<body>Test
<script src="http://localhost:5002/remoteEntry.js"></script>
</body>
</html>
and ran it. There is no CORS error.
it's sending a preflight request and not getting a response. Sorta strange since a GET should be a simple request... maybe custom headers are being sent? Make sure the backend is able to respond to OPTIONS request.... (import a CORS module?) You could alternatively use a proxy to have the front-end and back-end served from same port. (That way you don't have to loosen your security by sending CORS headers)
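To make that suggestion concrete, here is a minimal sketch (the function names are hypothetical, and the res stub only imitates an Express-style response object) that applies the headers and answers a preflight OPTIONS request directly instead of passing it on:

```javascript
// Hypothetical helper: returns the CORS headers as a plain object so they
// can be inspected before being wired into app.use().
function corsHeaders() {
  return {
    'Access-Control-Allow-Origin': '*',
    'Access-Control-Allow-Methods': 'GET, POST, OPTIONS',
    'Access-Control-Allow-Headers': 'X-Requested-With, Content-Type, Authorization',
  };
}

// Express-style middleware sketch: set every header, then answer a
// preflight OPTIONS request with 204 No Content instead of calling next().
function corsMiddleware(req, res, next) {
  for (const [name, value] of Object.entries(corsHeaders())) {
    res.setHeader(name, value);
  }
  if (req.method === 'OPTIONS') {
    res.statusCode = 204;
    res.end();
    return;
  }
  next();
}

// Quick demonstration with a stubbed response object.
const set = {};
const res = { statusCode: 200, setHeader: (k, v) => { set[k] = v; }, end: () => {} };
corsMiddleware({ method: 'OPTIONS' }, res, () => {});
console.log(res.statusCode, set['Access-Control-Allow-Origin']); // → 204 *
```

If the headers are verifiably present and the error persists, the problem is more likely that the request never reaches this middleware (wrong port, a different server process answering, or an error response without the headers).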
I modified the CORS header like this: res.setHeader('Access-Control-Allow-Methods', 'GET, POST, OPTIONS');. It's still not working.
@browsermator The error message indicates that preflight is not the issue, though.
@Subham Please produce a https://stackoverflow.com/help/minimal-reproducible-example
@jub0bs I have edited the question with some extra comments. I have also added the link from where I copied the codebase. If that's not enough, I'll add some of the code.
Confusion Between Noun vs. Verb in Rest URLs
I have read in several places on the internet that RESTful APIs should focus on nouns, not verbs, in the URL pattern, but now I am seeing multiple APIs that use verbs in the URL.
Here is an example.
POST /v1/payments/authorization/<Authorization-Id>/capture
POST /v1/payments/authorization/<Authorization-Id>/void
POST /v1/payments/authorization/<Authorization-Id>/reauthorize
These are PayPal APIs: PayPal API
Also, on the Wikipedia HATEOAS page, they give an example:
<?xml version="1.0"?>
<account>
<account_number>12345</account_number>
<balance currency="usd">100.00</balance>
<link rel="deposit" href="/account/12345/deposit" />
<link rel="withdraw" href="/account/12345/withdraw" />
<link rel="transfer" href="/account/12345/transfer" />
<link rel="close" href="/account/12345/close" />
</account>
link: Wiki HATEOAS
Can anyone help me get some clarity about this? Why are 'capture', 'void', 'deposit', 'withdraw', and 'close' in the URI when they are all verbs, not nouns?
Or is it okay to use these kinds of words in RESTful API URLs?
It looks like the wiki finally decided to change the verbs to nouns ;)
Some snippets from the REST API Design Rulebook about different resource types:
Document
A document resource is a singular concept that is akin to an object instance or database
record.
Example:
http://api.soccer.restapi.org/leagues/seattle/teams/trebuchet
Collection
A collection resource is a server-managed directory of resources. Clients may propose
new resources to be added to a collection. However, it is up to the collection to choose
to create a new resource, or not.
Example: http://api.soccer.restapi.org/leagues/seattle/teams
Store
A store is a client-managed resource repository. A store resource lets an API client put
resources in, get them back out, and decide when to delete them. On their own, stores
do not create new resources; therefore a store never generates new URIs. Instead, each
stored resource has a URI that was chosen by a client when it was initially put into the
store.
Example: PUT /users/1234/favorites/alonso
Controller
A controller resource models a procedural concept. Controller resources are like executable functions, with parameters and return values; inputs and outputs.
Like a traditional web application’s use of HTML forms, a REST API relies on controller
resources to perform application-specific actions that cannot be logically mapped to
one of the standard methods (create, retrieve, update, and delete, also known as
CRUD).
Controller names typically appear as the last segment in a URI path, with no child
resources to follow them in the hierarchy.
Example: POST /alerts/245743/resend
Based on the definitions in the book, the URIs you've posted probably fall under the Controller resource type, of which the book later states:
Rule: A verb or verb phrase should be used for controller names
Examples:
http://api.college.restapi.org/students/morgan/register
http://api.example.restapi.org/lists/4324/dedupe
http://api.ognom.restapi.org/dbs/reindex
http://api.build.restapi.org/qa/nightly/runTestSuite
Other naming rules, just for completeness
Rule: A singular noun should be used for document names
Rule: A plural noun should be used for collection names
Rule: A plural noun should be used for store names
The authors of this book may have the opinion that RPC is fine. They may even call RPC a "controller resource". But I disagree that this is good advice, since there is no resource at such a URI that can be acted on using the other HTTP verbs. GET /.../reindex makes no sense, which is a strong indicator that the whole URL makes no sense.
@LutzHorn The book did not imply that GET /../reindex should be used. I don't want to quote the everything, but the implication was to use POST. GET should be used for a CRUD retrieve operation, which the book clearly states should not be defined by a controller
So such a controller is not a resource in the REST sense. I vote against this concept.
@LutzHorn Just curious on your opinion.. How would you define an operation in REST terms, that is not a CRUD operation? Also are you saying that "action" operations have no place in REST. Just would like to hear different views
An operation is a resource that is created by POSTing to a collection of operations. POST /.../item/1/ops with details in the body. The response contains the Location header of the resource. A GET retrieves the current state of the resource. Summary: an operation is just like any resource.
@LutzHorn But what you are describing is a CRUD (create) operation. I am asking about operations that "can't logically be mapped to CRUD". The example URI above for controller says "..resource that allows a client to resend an alert to a user". What are your thoughts on this?
Every operation can be logically mapped to CRUD. Even the resend request can.
It sounds like some of the commenters are rejecting the notion of a "Controller" resource type as part of a REST API, because it does not fit within the CRUD model. But is that saying that only CRUD APIs are "good" and anything that goes beyond CRUD is "bad"? I understand the problems that occur with an unconstrained RPC model, but I'm not sure I accept that the CRUD model is the answer for every problem. More elaboration on when it's OK to use the "controller resource" would be useful.
The trick is to make it all nouns (or entities) that operate with the CRUD verbs.
So instead of;
POST /v1/payments/authorization/<Authorization-Id>/capture
POST /v1/payments/authorization/<Authorization-Id>/void
POST /v1/payments/authorization/<Authorization-Id>/reauthorize
Do this;
capture -> POST /v1/payments/authorization/
void -> DELETE /v1/payments/authorization/<Authorization-Id>
reauthorize -> delete first then capture again.
Upvoted, not because it's more "RESTful" this way, but because it's far more elegant.
Reviving an old post,
The ideology behind REST APIs following noun-based naming is that URLs are resource locators, so each URL locates a resource. At times I would agree that it is difficult to represent things as resources. And a 90% or 95% design is still a good design. The problem comes when we don't give it proper thought before accepting that it can't be done.
For example, in the OP's scenario, the resource is the authorization state. The values of the state can be true/false, or authorized/not authorized.
With this we have two resources now,
POST /v1/payments/authorization/<Authorization-Id>/ (authorize)
DELETE /v1/payments/authorization/<Authorization-Id>/ (void)
Now, when we come to reauthorize, technically it is void+authorize. But I would avoid doing this, if I had to do a round trip to achieve it.
Let us think of a scenario: what happens when we call authorize twice? Will I have two authorizations? Or should it be void and then authorize?
If it is the former, I would probably follow an imperfect design, for its performance merits, and do,
POST /v1/payments/authorization/<Authorization-Id>/reauthorize
But if it is the latter, my only set of APIs will be,
PUT /v1/payments/authorization/<Authorization-Id>/ (authorize/reauthorize)
DELETE /v1/payments/authorization/<Authorization-Id>/ (void)
The PUT verb here implies idempotency. So you can call it multiple times, and the effect is the same.
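To make the idempotency point concrete, here is a tiny in-memory sketch (the function names and state model are purely illustrative, not any real API): repeating the PUT leaves the resource in the same state, and repeating the DELETE is equally safe.

```python
# Illustrative in-memory state: authorization id -> state string.
authorizations = {}

def put_authorization(auth_id):
    """PUT /v1/payments/authorization/<auth_id> -- (re)authorize, idempotent."""
    authorizations[auth_id] = "authorized"
    return authorizations[auth_id]

def delete_authorization(auth_id):
    """DELETE /v1/payments/authorization/<auth_id> -- void, also idempotent."""
    authorizations.pop(auth_id, None)

put_authorization("A-123")
put_authorization("A-123")   # same call again: same effect, still one record
print(len(authorizations), authorizations["A-123"])   # -> 1 authorized
delete_authorization("A-123")
delete_authorization("A-123")  # voiding twice is also safe
print(len(authorizations))     # -> 0
```

Contrast this with a naive POST handler that appended a new record on every call: repeating it would create duplicates, which is exactly why the "two authorizations" question above matters for picking the verb.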
Hashtable and String values error
I have a master_string whose values are retrieved from a PHP database. They are:
{"P":[["5"],["22"]],"AS":[["29"],["34"]],"DT":[["995"],["12"]],"AR":[["23"],["121"]],"SE":[["5"],["22"]]}
and after removing special characters, the string is now:
P:5,22,AS:29,34,DT:995,12,AR:23,121,SE:5,22
Now when I try to convert this into a Hashtable, it should give me 2 values for some keys, but it only gives me one. My code for this part is
String input = master_string;
Hashtable<String, int[]> result2 = new Hashtable<String, int[]>();
Pattern pattern = Pattern.compile("([A-Z]+):(\\d+(?:\\d+)*)");
Matcher matcher = pattern.matcher(input);
while (matcher.find())
{
String key = matcher.group(1);
String[] fields = matcher.group(2).split(",");
int[] values = new int[fields.length];
for (int pqr=0; pqr<values.length; pqr++)
{
values[pqr] = Integer.parseInt(fields[pqr]);
}
result2.put(key, values);
}
Each letter group is a key to the values after it, until another key is found. What I found out is that the result2 hashtable only saves the first value and ignores the second. Any idea why this is happening?
P.S: the answer in result2 should be something like:
P = {5, 22}
AS = {29, 34}
DT = {995, 12}
SE = {5, 22}
Your values contain , between the digits, so you need to account for it using
([A-Z]+):(\\d+(?:,\\d+)*)
([A-Z]+): captures one or more uppercase letters, then matches the : character
(\\d+(?:,\\d+)*): \\d+ matches one or more digits, and (?:,\\d+)* matches zero or more occurrences of ,\\d+
Output:
P [5, 22]
AS [29, 34]
DT [995, 12]
AR [23, 121]
SE [5, 22]
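For completeness, here is a self-contained version of the corrected loop (the class name RegexDemo, and using LinkedHashMap instead of Hashtable to keep insertion order for readable output, are just illustrative choices):

```java
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class RegexDemo {
    public static void main(String[] args) {
        String input = "P:5,22,AS:29,34,DT:995,12,AR:23,121,SE:5,22";
        // The comma goes inside the repeated group so "5,22" is captured whole.
        Pattern pattern = Pattern.compile("([A-Z]+):(\\d+(?:,\\d+)*)");
        Matcher matcher = pattern.matcher(input);
        // LinkedHashMap keeps the keys in the order they were found.
        Map<String, int[]> result = new LinkedHashMap<>();
        while (matcher.find()) {
            String[] fields = matcher.group(2).split(",");
            int[] values = new int[fields.length];
            for (int i = 0; i < fields.length; i++) {
                values[i] = Integer.parseInt(fields[i]);
            }
            result.put(matcher.group(1), values);
        }
        for (Map.Entry<String, int[]> e : result.entrySet()) {
            System.out.println(e.getKey() + " " + Arrays.toString(e.getValue()));
        }
    }
}
```

With the original pattern ([A-Z]+):(\\d+(?:\\d+)*), group 2 stops at the first comma, which is exactly why only the first value survived.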
you sir are a lifesaver, kudos to you my brother from neighboring country!
@SaimMahmood thank you for the appreciation, I am glad that I could help, happy coding
and the website you mentioned in your answer is very nice, thanks again!