Unable to submit form using JQuery
So the scenario is: on my page I have a form along with a countdown timer. When the countdown timer reaches 0:00, a statement activates which is supposed to submit the form as it stands.
here is that statement:
if (Todays_Date >= Target_Date) {
clearTimeout(timeoutID);
$('#testForm').submit();
alert("Timeup, Submitting exam....");
return;
}
What actually happens is: it fires, the timer stops (timeout cleared), and the alert pops up. However, the form does not submit and take the user to the action page.
I have copied all of the code into this jsFiddle to layout the basic principle.
The time the countdown targets is passed in the HTML at the bottom; you'll see the countdown() call.
I really am in a pickle and would appreciate someone's help.
note: I know this is not a secure method etc.
many thanks
You have a form element named submit:
<input name="submit" type="submit" value="Complete Test">
Change its name to something like:
<input name="btnSubmit" type="submit" value="Complete Test" />
And your code will work.
When the form contains an element named submit, referring to formElement.submit will refer to the element instead of the actual submit method - this is a very common problem.
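A minimal sketch of the clash (hypothetical markup): once a control named submit exists, the form's native submit method is shadowed by that element, which is what breaks jQuery's .submit() call.

```html
<form id="testForm" action="/grade">
  <!-- This control shadows the form's native submit() method: -->
  <input name="submit" type="submit" value="Complete Test">
</form>
<script>
  var form = document.getElementById('testForm');
  // form.submit is now the <input> element, not a function:
  console.log(typeof form.submit); // "object", not "function"
</script>
```

Renaming the input to something like btnSubmit restores form.submit to the native method.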
Cheers @buymypi (:-D) the trick is to alert the value of submit and in your case, it showed it as input element.
You can use
$("#testForm").trigger('submit');
The way you went binds the submit event to a handler function, which means that once a submit occurs, whatever you specify in that function (which you don't have) will happen.
If you view the jQuery source code, you'll see that calling .submit() runs this exact code as well. :)
How to get Linux kernel modules list from user space C code?
I want to get the list of kernel modules by C code, and later on print their version.
From a script this is simple:
cat /proc/modules
lsmod
and later on, run for all drivers found:
modinfo driver_name
From C code, I can open /proc/modules, and analyze the data there, but is there a simpler way of reading this drivers list?
Your question is about programming, therefore voting to migrate to [SO] - please don't repost there.
You might find some useful API functions in the module.c and module.h files from the Linux kernel.
On Linux, reading /proc and/or /sys is, in many cases, the official way.
From C code, I can open /proc/modules, and analyze the data there, but is there a simpler way of reading this drivers list?
Depends on your definition of simple. The concept in Unix land of everything being a file does make everything simpler in one respect, because:
char buffer[BUFFER_LIMIT];
ssize_t n;
int fd = open("/proc/modules", O_RDONLY);
while ( (n = read(fd, buffer, BUFFER_LIMIT)) > 0 )
{
// parse n bytes of buffer
}
close(fd);
involves the same set of function calls as opening and reading any file.
The alternative mechanism would be for the kernel to allocate some memory in your process' address space pointing to that information (and you could probably do this with a custom system call) but there's really no need - as you've seen, this way works very well not just with C, but with scripts also.
It's probably even easier with fopen()/fgets(), as /proc/modules is line-based.
@nos definitely agreed. Feel free to change the code sample :)
TypeError parsing JSON obj from urllib ipython
I am using an API to request data from a website. The data can be found here and pasted into a JSON viewer. My code, along with the error it returns, is below. I am guessing that this is a quick fix, partially reflecting the fact that this is my first time using urllib.
import pandas as pd
import urllib.request
import json
api_key = '79bf8eb2ded72751cc7cda5fc625a7a7'
url = 'http://maplight.org/services_open_api/map.bill_list_v1.json?apikey=79bf8eb2ded72751cc7cda5fc625a7a7&jurisdiction=us&session=110&include_organizations=1&has_organizations=1'
json_obj = urllib.request.urlopen(url)
data = json.load(json_obj)
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-21-85ab9af07320> in <module>()
8 json_obj = urllib.request.urlopen(url)
9
---> 10 data = json.load(json_obj)
/home/jayaramdas/anaconda3/lib/python3.5/json/__init__.py in load(fp, cls, object_hook, parse_float, parse_int, parse_constant, object_pairs_hook, **kw)
266 cls=cls, object_hook=object_hook,
267 parse_float=parse_float, parse_int=parse_int,
--> 268 parse_constant=parse_constant, object_pairs_hook=object_pairs_hook, **kw)
269
270
/home/jayaramdas/anaconda3/lib/python3.5/json/__init__.py in loads(s, encoding, cls, object_hook, parse_float, parse_int, parse_constant, object_pairs_hook, **kw)
310 if not isinstance(s, str):
311 raise TypeError('the JSON object must be str, not {!r}'.format(
--> 312 s.__class__.__name__))
313 if s.startswith(u'\ufeff'):
314 raise JSONDecodeError("Unexpected UTF-8 BOM (decode using utf-8-sig)",
TypeError: the JSON object must be str, not 'bytes'
Any suggestions, comments, or further questions are appreciated.
It looks like it wants str, not bytes. Did you try json.load(json_obj.decode('utf-8'))?
Thank you for your suggestion! I just tried your suggestion. I received the following error AttributeError: 'HTTPResponse' object has no attribute 'encode'
Oops... json.loads(json_obj.read().decode('utf-8')) maybe?
I just noticed that ^. I tried both and received a corresponding error AttributeError: 'HTTPResponse' object has no attribute 'decode'
Did you notice the .read() in my second suggestion? IIRC, the HTTPResponse objects are file-like which means that to get the bytes data from them, you can call .read().
Thank you for the tip! That seems to have done the trick :)
With requests, it's as simple as requests.get(url).json()
Also if you are trying to put this into a df you can do it all with pandas
@Padraic: I just tried the requests.get(url).json() and got the following error: `NameError: name 'requests' is not defined`
json.load won't guess the encoding, so you typically need to .read the bytes from the object returned and then convert those bytes into a string by using .decode and the appropriate codec. e.g.:
data = json.loads(json_obj.read().decode('utf-8'))
There is an example of this in the official documentation.
Specifically, it says:
Note that urlopen returns a bytes object. This is because there is no way for urlopen to automatically determine the encoding of the byte stream it receives from the http server. In general, a program will decode the returned bytes object to string once it determines or guesses the appropriate encoding.
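To make the read/decode/parse sequence concrete without hitting the network, here is a small sketch; io.BytesIO stands in for the HTTPResponse object returned by urlopen, and the sample JSON payload is invented for illustration.

```python
import io
import json

# urlopen() returns a file-like object whose read() yields bytes;
# BytesIO plays that role here so the example runs offline.
fake_response = io.BytesIO(b'{"bills": [{"jurisdiction": "us", "session": 110}]}')

# read() the bytes, decode() them into a str, then parse with json.loads():
data = json.loads(fake_response.read().decode('utf-8'))
print(data['bills'][0]['session'])  # prints 110
```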
Create a new object extending a trait and using early definition syntax
How to create an object that extends a trait and uses early definition syntax?
Let's say we have two traits:
trait Trait {
val value : String
}
trait Simple
Let's say we have also a simple class:
class Class
We can create a new object of type Class and make it extend a Simple trait:
new Class with Simple
Is it possible to create a new object of type Class that extends the Trait and uses the early definition syntax to set value member? I tried something like:
new Class extends { override val value = "42" } with Trait
But this gives a syntax error:
Error:(12, 17) ';' expected but 'extends' found.
new Class extends { val value = "42" } with Trait
Can you clarify a bit more what you're trying to do? Why can't you just have this set up like a normal class? I suspect you want a lazy val, but can't find anything on the syntax you're referring to.
@Ethan Ooops, I meant early definition syntax. Actually, I have been just learning scala and just wondering, whether one can use that syntax when creating a new object.
Why can't you override the value in the body?
trait T { val value: String }
class C
new C with T { val value = "ok" }
Can you add some context around your answer?
@atymic The only "context around" that I would have been able to produce could have been "The right way to do that is ... ". But I was just afraid it would be just a redundant decoration on top of the implicit context determined by the question. I could have stated that using Class and Trait for class and trait names is a bad practice (readability and eventual collision), but it's about teaching. I like reading answers that explain complex things, but I prefer minimalist answers to minimal questions. Thanks though.
What under \subsubsection?
In a \subsubsection, I'm describing a software component, and I have to describe the classes it is composed of. I'm thinking of adding some "titles" like "Class 1", "Class 2", but what's the best way to do this?
The simple way I know is to put a \noindent \textbf{myTitle}; is there a smarter way?
Those sub-titles don't go in the index, but it is important that they are clearly distinguishable.
Can I offer another suggestion? Maybe you could use a description environment to describe classes instead of going deeper in the sectioning levels.
The standard classes provide the additional sectioning commands \paragraph and \subparagraph below the \subsubsection level. By default, these levels are unnumbered, aren't included in the table of contents and are typeset in "runin" style, i.e., without a vertical space between level title and following text.
For a general overview of sectioning commands, see section 2.7 of lshort.
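A minimal sketch combining both suggestions (class names and descriptions are placeholders):

```latex
\subsubsection{Component X}

% Run-in headings one level below \subsubsection:
\paragraph{Class 1} Parses the input files.
\paragraph{Class 2} Keeps the session state.

% Or, without going deeper in the sectioning levels:
\begin{description}
  \item[Class 1] Parses the input files.
  \item[Class 2] Keeps the session state.
\end{description}
```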
How to tell if an enum property has been set? C#
I have a class with an enum property like so:
public class Foo
{
public Color ColorType {get;set;}
}
public enum Color
{
Red,
Green,
}
Now this class can be initialized like so:
var foo = new Foo();
without the ColorType property ever being set.
Now, I'm trying to create a method that performs actions based on whether that enum was ever set or not. For example, I have a method:
private void checkEnum(Foo foo)
{
if(foo.ColorType !=null)
{
//perform these actions
}else
{
//perform those actions
}
}
However, I get a warning saying that the value will never be null. Upon further research, I found that if the enum is never set it will default to the first value, which would be Red in my case. I was thinking about adding a value to my enum meaning 'not set' and making that the first value, so that if the enum hasn't been set it will have the value 'not set'. Is there a better way of doing this? My proposed method seems like it could get messy.
You should be able to find your answer here: http://stackoverflow.com/questions/4967656/what-is-the-default-value-for-enum-variable
You can use one of two methods: default enum value or a nullable enum.
Default enum value
Since an enum is backed by an integer, and int defaults to zero, the enum will always initialize by default to the value equivalent to zero. Unless you explicitly assign enum values, the first value will always be zero, second will be one, and so on.
public enum Color
{
Undefined,
Red,
Green
}
// ...
Assert.IsTrue(Color.Undefined == 0); // success!
Nullable enum
The other way to handle unassigned enum is to use a nullable field.
public class Foo
{
public Color? Color { get; set; }
}
// ...
var foo = new Foo();
Assert.IsNull(foo.Color); // success!
Good answer but it fails to address the issue of the user being able to set the value to null or undefined and thus you don't know if the user set the enum to this or if the enum has not changed.
@MrUniverse true, you could introduce a boolean flag field that is set the first time Color value is changed. You'd have to take care, however, and make sure the class is not ever deserialized in a way that makes it a false positive.
What if it was an attribute such as [UserModifiedMonitor] that gets set to true when the setter is called? I don't think attributes are ever deserialized with a class.
@MrUniverse I don't know whether you can set an attribute value dynamically at runtime, but even if you could, that would seem to be a pretty violent abuse of the attribute infrastructure, which is about static annotation rather than dynamic state.
Well the static annotation is that you are marking it as a monitored set property. As for the physical limitations, attributes are (for the most part) just objects attached to the property (or other structure). So it will work.
What if your enum does not have Undefined at all?
@Ziggler, as explained, unless you explicitly assign values, the first Enum element will have the value of zero, and will be effectively the default one.
You can make it so that the underlying private field is nullable, but the property is not.
E.g.
class SomeClass
{
private Color? _color; // defaults to null
public Color Color
{
get { return _color ?? Color.Black; }
set { _color = value; }
}
public bool ColorChanged
{
get { return _color != null; }
}
}
That way if color == null you know it hasn't been set yet, and you are also stopping the user from setting it to null (or undefined as other answers specify). If color is null you have 100% certainty that the user has not set it ever.
The only catch is the default value returned by the getter, but you could always throw an exception if it better matches your program.
You can also take it one step further by making it so that the set only sets the field if the given value is not equal to the default value (depending on your use case):
public Color Color
{
get { return _color ?? Color.Black; }
set
{
if (value != Color)
{
_color = value;
}
}
}
You can even do return color ?? Color.Black.
@1blustone Thanks, fixed and did some general improvements to the answer.
You have two real options. The first is to add an undefined value to enum. This will be the default value before the property is initialized.
1)
public enum Color
{
Undefined,
Red,
Green,
}
With your check like:
private void checkEnum(Foo foo)
{
if(foo.ColorType == Color.Undefined)
{
//perform these actions
}else
{
//perform those actions
}
}
2) Alternatively, you can skip the undefined value and just make the property nullable
public class Foo
{
public Color? ColorType {get;set;}
}
public enum Color
{
Red,
Green,
}
And perform your check like:
private void checkEnum(Foo foo)
{
if(!foo.ColorType.HasValue)
{
//perform these actions
}else
{
//perform those actions
}
}
Enums are Value Types, which means they are not references to an object stored somewhere else, and hence they cannot be null. They always have a default value just like int which will default to zero upon creation. I suggest two approaches:
Add another enum entry called e.g., None with value equal to zero. This way your enum value will default to None upon creation. Then you can check if(foo.ColorType != Color.None).
Make your Color property a nullable one like: public Color? ColorType { get; set; }. Now it will default to null and can be assigned the value null. Read more about nullable types here: MSDN - Nullable Types (C#).
Try this :
private void checkEnum(Color color)
{
if(string.IsNullOrEmpty(color.ToString()))
{
//value is not set
}
else
{
//value is set
}
}
enum is a value type so it cannot be null, and the storage is generally an integer. If you want a undefined value for your type, you may have
public enum Color
{
Undefined,
Red,
Green,
}
As you've discovered, enumerations in C# are value types (they are essentially integers) not reference types so they will not default to NULL but rather the lowest integer value in the enumeration. Don't lose sight of this relationship when dealing with enums as it's one of their most useful attributes. Always remember that whether you explicitly state it or not,
public enum Color
{
Red,
Green
}
equates to:
public enum Color
{
Red = 0,
Green = 1
}
Though you may of course give each enumeration member any integer value that you like.
As far as whether or not there is a better way of doing this, it really all depends on what "this" is, though there's nothing wrong with your suggestion of simply using the following enum setup:
public enum Color
{
None = 0,
Red,
Green
}
Generally you want to use enums when you have a decidedly finite and discrete set of possible values that you want to be able to select from by name. For example, say I have a method that takes one of the 4 cardinal directions (North, East, South, West) as a parameter. I decide that I want to number each of the directions in clockwise order, starting with 0 for North.
public enum Direction
{
North = 0,
East,
South,
West
}
Now, instead of having my function take an integer parameter and trusting that I'll remember what each number refers to, I can now have the function take an enumeration member as a parameter and know immediately which direction I'm dealing with. For example:
getNeighbor(1);
reads much easier as:
getNeighbor(Direction.East);
reverse geocode a set of co-ordinates
I have sets of co-ordinates in the following format
(-33.9,18.6)
How do I go about getting the name of the nearest town or Country for those co-ords? I'm guessing it will involve Javascript, but am happy to also use PHP if appropriate?
EDIT: Am trying the Google Reverse Geocoder but having trouble with it. The following code is pretty identical to one of their examples but doesn't seem to be running at all... any ideas why?
<script type="text/javascript" charset="utf-8">
function reverseGeocode(lat,lon){
var geocoder = new GClientGeocoder();
var latlng = new GLatLng(lat, lon);
geocoder.getLocations(latlng, function(addresses) {
alert(addresses);
});
}
</script>
You are calling the function, right?
yep, if I put an alert("test") on the first line of the function it fires off once, but if I put it after the var geocoder line it doesn't
Ok managed to get it working, so here's what I did:
<script src="http://maps.google.com/maps?file=api&v=2&key=INSERTAPIKEYHERE" type="text/javascript"></script>
function reverseGeocode(latitude,longitude){
var geocoder = new GClientGeocoder();
var latlng = new GLatLng(latitude, longitude);
geocoder.getLocations(latlng, function(addresses) {
var address = addresses.Placemark[0].address;
var country = addresses.Placemark[0].AddressDetails.Country.CountryName;
var locality = addresses.Placemark[0].AddressDetails.Country.AdministrativeArea.SubAdministrativeArea.Locality.LocalityName;
alert(address);
});
}
Google has a webservice you can call to do (reverse) geocoding, which is much more lightweight than using the full JavaScript API.
an example with v2: http://maps.google.com/maps/geo?q=51,4&sensor=false&output=json&key=insert_your_api_key_here&callback=parseme
and with v3:
http://maps.google.com/maps/api/geocode/json?latlng=51,4&sensor=false&callback=parseme
in both cases you get a JSON response (a JavaScript object), the difference being that in the now-deprecated v2 you need to provide your API key and you can use a callback (for cross-site ajax goodness), which for crying out loud isn't supported in v3 any more (but you don't need an API key any more).
Thanks, could you maybe show me some sample code for using the webservice? I'm not sure how to go about it
my still ongoing experiment lives at http://futtta.be/checkRoam/
It tries to find your coordinates (browser geolocation, HTML5) and uses those to get the country code. Feel free to view source and copy/paste.
Google maps API has a javascript example.
http://code.google.com/apis/maps/documentation/services.html#ReverseGeocoding
have you any experience with it? I'm trying the following but to no avail
Have you registered for their API key?
Also are there any specific error messages?
Yes, I'm using the google api successfully to get the user's current location, but for some reason can't get reverse geolocation
Why can a convert say mikra bikkurim but not viduy maaser?
Maaser Sheini 5:14 says that a convert can't say viduy maaser because he doesn't inherit a portion of the Land and can't say ואת האדמה אשר נתת לנו. We follow this mishnah.
Bikkurim 1:4 says that a convert can't say mikra bikkurim because he can't say אשר נשבע ה' לאבותינו לתת לנו. We pasken like R' Yehuda, that the Avos are considered ancestors of converts as well, and therefore this is not an issue and a convert can say mikra bikkurim.
Mikra bikkurim also says ויתן לנו את הארץ הזאת. This never enters the discussion, and we seem to assume that it's not a problem.
Why the difference?
(The reverse question, why כאשר נשבעת לאבותינו doesn't disqualify a convert from viduy maaser according to the tanna of the mishnah in Bikkurim, can be answered by just saying that he's already disqualified by ואת האדמה אשר נתת לנו so we don't need to explicitly list another disqualification.)
Some answers, from a chaburah I wrote up a few years back. (I didn't write down the precise locations of the sources - will have to look them up when I have some time.)
Radvaz notes that by מקרא ביכורים the expression is אשר נשבעת לאבותינו לתת לנו, which can be understood as נשבעת לאבותינו (to Avraham) and therefore לתת לנו, it pertains to the whole Jewish people, gerim included. Whereas by וידוי מעשר the expression is אשר נתת לנו, and it wasn't given to the gerim; כאשר נשבעת לאבותינו there refers to זבת חלב ודבש.
Mishneh Lamelech quotes R. Moshe ibn Chaviv, who points out that Yechezkel states that gerim will get a portion in Eretz Yisrael in Moshiach's times; therefore they can well say לתת לנו in future tense. (A logical reason for this is that gerim who joined the Jewish people after Yetzias Mitzrayim, when we were at the highest heights, may not have been totally sincere; not so with those who became gerim during the years of galus.) Whereas אשר נתת לנו is in past tense.
A couple of mefarshim on Rambam bring an idea from R. Shmuel Primo, that the expression ארץ זבת חלב ודבש is never found in Chumash Bereishis (addressed to the Avos), only in Shemos (addressed to the Jews in Egypt) and afterwards. So if in וידוי מעשר the expression כאשר נשבעת לאבותינו refers to ארץ זבת חלב ודבש, then לאבותינו there necessarily means the Jews who were in Egypt, and the ger isn't descended from them.
Yet another answer, from Mirkeves Hamishneh, is that מקרא ביכורים must be said in lashon hakodesh, and there אבותינו can mean "masters" (as in וישימני לאב לפרעה); whereas וידוי מעשר can be said in any language, and in other languages "av" means only "father."
These are great sources, but they only address אשר נשבע לאבותינו לתת לנו, not ויתן לנו את הארץ הזאת or אשר נתת לי ה'.
@Heshy For ויתן לנו את הארץ הזאת, there's probably no issue because it's talking collectively about the Jewish people as a whole (with אשר נשבע, the problem isn't so much לתת לנו, but לאבותינו). For אשר נתת לי, maybe it's just that practically speaking, he does own the land right now (hence was given it by Hashem), he just doesn't own it in perpetuity.
But in viduy maaser, ואת האדמה אשר נתת לנו is also plural. And אשר נתת לי excludes women.
@Heshy True. Will have to think about it some more...
Terminating the readLine() function when encountered with a whitespace
So, I have started using Kotlin to solve some basic problems at websites like CodeChef and Codeforces where I got caught up on a problem.
The question requires us to input 2 integer variables, say x and y, followed by y space-separated integers, say m1, m2, m3 and so on up to mY. But the way the input is given is as shown:
4 3
3 2 3
And my Kotlin code for input is this:
fun main(args: Array<String>)
{
val n = readLine()!!.toInt()
val m = readLine()!!.toInt()
var a:Int
for(i in 1..m) {
a= readLine()!!.toInt()
//Some additional manipulation involving all three variables
}
}
Upon submitting, I receive the following message:
java.lang.NumberFormatException: For input string: "4 3"
at java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
at java.lang.Integer.parseInt(Integer.java:580)
at java.lang.Integer.parseInt(Integer.java:615)
at ProgramKt.main(program.kt:2)
So, in order to make my code work, I have to somehow terminate the readLine() method as soon as it encounters a whitespace. Or is there any other method?
readLine(), as its name implies and as its documentation explains, returns the whole line. So split the line on spaces, and parse each of its elements as an Int.
So what I could do is perform readLine()!!.split(" ") and store it in a list. And then, when required for manipulation, I change the type of the list element(s) using the toInt() method?
Why would you change the string to integers every time you get them from the list? Why not construct a list of Ints directly?
Ah got it I was making a fool out of myself! I had to map those elements to Int type. Thank you for your help!
The answer was quite simple, after some discussion in the comments. What we have to do is use the split function and then map each element to an Int.
fun main(args: Array<String>){
val xandy = readLine()!!.split(" ").map{it.toInt()}
val elems= readLine()!!.split(" ").map{it.toInt()}
var b=1
var ans=0
for(i in 1..xandy[1]){
//Manipulation stuff
}
}
How to connect asp-validation-for to my new error
I have made new error message in my controller like that
if (!viewModel.Appointment.isConfirmed)
{
ModelState.AddModelError("", "You haven't confirmed in checkbox!");
}
How can I connect span with asp-validation-for only to this error?
You can use ModelState.AddModelError in your code with a name for the key:
Controller:
if (!viewModel.Appointment.isConfirmed)
{
ModelState.AddModelError("YourErrorName", "You haven't confirmed in checkbox!");
}
View:
<span> @Html.ValidationMessage("YourErrorName") </span>
Or, if you'd like to use asp-validation-for, check How to customize validation error message given by text-danger?
Also need to specify @addTagHelper *, Microsoft.AspNetCore.Mvc.TagHelpers in the cshtml file
Works perfectly fine, thanks! At first I did an alert using js, but this is a way better solution
input_dim for Dense Layer after LSTM layers Keras
Do I need to specify the input_dim (which means the number of features in one row/sample) after adding the first LSTM layer for the later Dense layers?
I was trying to create an architecture with 2 LSTM layers, 1 feed-forward layer with 200 cells and 1 feed-forward layer with 2 cells. The first LSTM layer outputs for every timestep; the second LSTM layer outputs only for the last time step. So I was wondering if I created that architecture correctly:
model.add(LSTM(units = 200, return_sequences = True, input_shape = (WINDOW_SIZE, 9), batch_size = 206))
model.add(LSTM(units = 200, return_sequences = False))
model.add(Dense(units = 200, input_dim = 9))
model.add(Dense(units = 2, input_dim = 9))
model.add(Dense(units = 1, input_dim = 9))
You do not need to specify the input_dim for the later layers, the model can infer the shape of those input layers from the output shape of the previous layer.
In addition, the input_dim values currently specified don't match the output dimensions of the prior layers. For example, the output dimension of the LSTM layer with return_sequences = False will be (200,). The network should fail to compile with an expected input size error with this code.
You can fix this by correcting or removing the input_dim values from the final three layers.
So the output shape of a layer is related to the cells in it. For example, if I have 50 units, that means my output shape will be (50,), right?
Yes, that's right.
If you find the answer useful, it helps others find it if you upvote the answer. If it completely answers your question, accepting the answer is how you show that. Welcome to Stack Exchange!
What is the relation between layer weight matrix size and layer output size? Is there any relation?
The layer weight matrix is a function of how many units there are in the layer, how many outputs each unit has and how many weights each unit in that layer has. For most conventional layers, this means 1 weight per unit in the layer, 1 output per unit in the layer and they would be equal. The extra information I provided here is only meant to illustrate that it's not always the case, and that you architecture could potentially change the answer.
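The relationship above can be illustrated in plain NumPy (a sketch of a conventional dense layer's forward pass, not Keras itself; the sizes mirror the Dense(units = 2) layer following a 200-unit layer):

```python
import numpy as np

def dense_forward(x, kernel, bias):
    # y = x @ W + b: one output per unit in the layer
    return x @ kernel + bias

n_inputs, n_units = 200, 2              # e.g. Dense(units = 2) after a 200-wide layer
kernel = np.zeros((n_inputs, n_units))  # weight matrix shape: (inputs, units)
bias = np.zeros(n_units)                # one bias per unit

x = np.ones(n_inputs)
y = dense_forward(x, kernel, bias)
print(y.shape)  # prints (2,)
```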
Cannot compare enums in jasmine after angular migration from 6 to 7
I am in the process of migrating an Angular application from v6 to v7. All is well except any test that compares enums. When I run my tests, I get many errors regarding my enums like so
ERROR in src/.../some-thing.component.spec.ts: error TS2345: Argument of type 'PlanDuration.SixMonths' is not assignable to parameter of type 'Expected<PlanDuration.TwelveMonths>'.
An example of test being run looks like this:
export enum PlanDuration {
SixMonths,
TwelveMonths
}
...
it('should toggle plan duration to six months if the event source id is the toggle duration and the event is not checked', () => {
component.selectedPlanDuration = PlanDuration.TwelveMonths;
component.handleToggle(event);
expect(component.selectedPlanDuration).toBe(PlanDuration.SixMonths); // Tests cannot run because of errors here
});
However, if I cast my enum to number, my tests work perfectly! This would be less than ideal to update my specs everywhere like this:
expect(component.selectedPlanDuration).toBe(<number> PlanDuration.SixMonths);
I'm unsure if I missed something in my package.json. I've compared a fresh angular 7 project to my own projects and the versions of angular core, typescript, jasmine and karma between them are the same.
How can I get my tests to compare enums properly? Below is my package.json
"dependencies": {
"@angular/animations": "~7.2.0",
"@angular/common": "~7.2.0",
"@angular/compiler": "~7.2.0",
"@angular/core": "~7.2.0",
"@angular/forms": "~7.2.0",
"@angular/http": "~7.2.0",
"@angular/platform-browser": "~7.2.0",
"@angular/platform-browser-dynamic": "~7.2.0",
"@angular/router": "~7.2.0",
"core-js": "^2.5.4",
"rxjs": "~6.4.0",
"tslib": "^1.9.0",
"zone.js": "~0.8.26",
"@angular/cdk": "^7.0.3",
"@angular/flex-layout": "7.0.0-beta.24",
"@angular/material": "7.3.6",
"hammerjs": "2.0.8",
"intl": "1.2.5",
"jshashes": "1.0.7",
"lodash-es": "4.17.11",
"request-promise-native": "1.0.5",
"stream": "0.0.2",
"timers": "0.1.1",
"url-search-params-polyfill": "5.0.0",
"xml2js": "0.4.19"
},
"devDependencies": {
"@angular-devkit/build-angular": "~0.13.0",
"@angular/cli": "~7.3.7",
"@angular/compiler-cli": "~7.2.0",
"@angular/language-service": "~7.2.0",
"@types/node": "~8.9.4",
"@types/jasmine": "~2.8.8",
"@types/jasminewd2": "~2.0.3",
"codelyzer": "~4.5.0",
"jasmine-core": "~2.99.1",
"jasmine-spec-reporter": "~4.2.1",
"karma": "~4.0.0",
"karma-chrome-launcher": "~2.2.0",
"karma-coverage-istanbul-reporter": "~2.0.1",
"karma-jasmine": "~1.1.2",
"karma-jasmine-html-reporter": "^0.2.2",
"protractor": "~5.4.0",
"ts-node": "~7.0.0",
"tslint": "~5.11.0",
"typescript": "~3.2.2",
"@types/lodash-es": "4.17.1",
"gulp": "3.9.1",
"gulp-stylelint": "7.0.0",
"jasmine-data-provider": "2.2.0",
"karma-cli": "1.0.1",
"karma-junit-reporter": "1.2.0",
"karma-parallel": "0.3.0",
"karma-spec-reporter": "0.0.32",
"lodash": "4.17.11",
"moment": "2.22.2",
"npm": "6.0.0",
"protractor-beautiful-reporter": "1.2.5",
"protractor-jasmine2-screenshot-reporter": "0.5.0",
"stylelint": "9.6.0",
"stylelint-order": "1.0.0",
"tslint-jasmine-noSkipOrFocus": "1.0.9"
}
tsconfig.json:
{
"compileOnSave": false,
"compilerOptions": {
"importHelpers": true,
"preserveConstEnums": true,
"outDir": "./dist/out-tsc",
"baseUrl": "src",
"sourceMap": true,
"declaration": false,
"moduleResolution": "node",
"emitDecoratorMetadata": true,
"experimentalDecorators": true,
"noUnusedLocals": true,
"target": "es5",
"typeRoots": [
"node_modules/@types"
],
"lib": [
"es2016",
"dom"
]
}
}
tsconfig.spec.json:
{
"extends": "../tsconfig.json",
"compilerOptions": {
"outDir": "../out-tsc/spec",
"module": "commonjs",
"target": "es5",
"baseUrl": "",
"types": [
"jasmine",
"node"
]
},
"files": [
"test.ts",
"polyfills.ts"
],
"include": [
"**/*.spec.ts",
"**/*.d.ts"
]
}
Try "export const enum ..." instead, and in your tsconfig, try setting "preserveConstEnums": true. Enums are actually numbers unless otherwise specified and don't need casting, but I've found TypeScript compiles enums in funky ways.
@JonathanSchmold, can you post this as an answer? I can try your suggestion on Monday morning and if this works, I will gladly award you this bounty.
I certainly can :)
I guess this might be a problem with Karma and Jasmine. I've been seeing some issues and had to fall back from the 4th version to 3.99
It's a bug in jasmine's type definition. Check my answer for detail and how to fix.
TL;DR
Quick fix: go to jasmine's type definition file (node_modules/@types/jasmine/index.d.ts) and search for type Expected
// Change this line:
// type Expected<T> = T | ObjectContaining<T> | Any | Spy;
// to:
type Expected<T> = any;
That's it :)
For more details, read on:
I believe this is a bug in jasmine's types definition. I set up a fresh ng7 workspace and try to reproduce your problem. Here's my finding:
There are two jasmine-related .d.ts files in the workspace:
// package.json
...
"@types/jasmine": "~2.8.8",
"@types/jasminewd2": "~2.0.3",
I'm not 100% sure how these two work together, but they declare conflicting types for the same jasmine utils. For example:
// jasminewd2/index.d.ts
declare namespace jasmine {
interface Matchers<T> {
toBe(expected: any, expectationFailOutput?: any): Promise<void>;
...
// jasmine/index.d.ts
declare namespace jasmine {
interface Matchers<T> {
toBe(expected: Expected<T>, expectationFailOutput?: any): boolean;
Now the problem is in jasmine/index.d.ts.
This line toBe(expected: Expected<T>) is simply WRONG. This is a test case; certainly you should be allowed to test against any value. Yet Expected<T> is declared as a needlessly complex type.
The easiest way to fix it is to manually correct it. The solution is already given at the beginning. Cheers.
A side note: I personally think type-checking a test case source file at compile time is overkill and should be disabled for good. Type-checking at design time is fine though.
This quick-fix makes sense, but how would this ever work on a production build server? Maybe a better option (other than submitting a PR) is to override that type definition and use that in the tests. Although this would still require a change in every file that is comparing enums, and the OP does not want to do that.
@Nanotron Declaration merging should help.
I don't have a computer around me, so cannot test it, but I guess it should work: add a patchJasmine.d.ts somewhere that is included in tsconfig.json, then in the file you add this: declare namespace jasmine { type Expected<T> = any; }.
I ran into issues a little while back where Angular was telling me that it could not access MyEnumValue of undefined. After some fiddling, I found that exporting all enums as const, and adding "preserveConstEnums": true to my tsconfig.json made it work just fine.
Enums are always numbers unless otherwise specified and thankfully don't need casting, but TypeScript's compilation of enums can be funky at times, in the same way interfaces are funky.
Edit:
In your component, make sure:
// If you give this a default value, TypeScript will assume
// that the only "valid" type is PlanDuration.TwelveMonths
// Type evaluates to: PlanDuration | number between 0 and 1;
selectedPlanDuration: PlanDuration = PlanDuration.TwelveMonths;
// Type evaluates to: PlanDuration.TwelveMonths | 1;
selectedPlanDuration = PlanDuration.TwelveMonths
I tried your suggestion, along with matching a tsconfig between a vanilla ng7 app and our production app, and could not get this to work. I do feel this is an issue with TS though and will continue to investigate.
try expect(component.selectedPlanDuration.valueOf())
Hey actually, what type annotation did you give to component? I feel like you have it defined as something funky, and that the type assigned to selectedPlanDuration literally is PlanDuration.TwelveMonths. Typescript lets you assign a "type" to a var as a single value of a single type. let q: 3 = 4; // error
I've updated my answer to cover something that could be the problem, but was not provided in your question. It's likely the issue
My component has this member: public selectedPlanDuration: PlanDuration = PlanDuration.SixMonths;. The suggestion of valueOf works though, however, not ideal to fix many specs to contain this method.
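To illustrate why the valueOf() workaround mentioned above type-checks: a numeric enum member is just a number at runtime, and .valueOf() widens it to plain number, which sidesteps the broken Expected<T> alias. Here is a minimal self-contained sketch; this PlanDuration is a stand-in with assumed member values, not the OP's actual enum:

```typescript
// Stand-in enum; the member names mirror the question but are assumptions.
enum PlanDuration {
    SixMonths,     // = 0
    TwelveMonths,  // = 1
}

const selected: PlanDuration = PlanDuration.SixMonths;

// A numeric enum member is a plain number at runtime...
console.log(typeof selected);                      // "number"

// ...and .valueOf() is typed as returning `number`, so a matcher
// comparing against it no longer trips over Expected<PlanDuration>.
console.log(selected.valueOf() === 0);             // true
console.log(selected === PlanDuration.SixMonths);  // true
```

The trade-off, as noted in the comments, is that every spec comparing enums would need this extra call.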
Can you post your full tsconfig file?
@JonathanSchmold, I have posted my tsconfig in my question now
Let us continue this discussion in chat.
I came across the same issue while implementing new tests in an Angular 7 project (without a migration from 6 to 7). If you would rather a workaround than making a change that may affect future updates to Karma/Jasmine, you can swap the comparisons around so the enum is first:
expect(PlanDuration.SixMonths).toBe(component.selectedPlanDuration);
I'm not sure why this got downvoted, but I got here by searching for why the enums in my tests were throwing errors, and this method was the most plausible for me (and I'd imagine most people who come across this post) to implement.
| common-pile/stackexchange_filtered |
A problem on counting
Consider a number system which does not have the digit $7$ but has all other numbers. So the numbers are $1,2,3,4,5,6,8,9,10,11,12,13,14,15,16,18,...$. I want to find the $10^k$-th number, where $k$ is an integer. For example, the $10^{th}$ number is $11$.
My try: I found that the $10^k$-th number is $11^k$ if $k \le 5$. But for $k=6$, $11^6=1771561$, which contains $7$. So I am unable to solve this problem. If anyone can help it would be great. Thanks.
You are effectively working base $9$ - do you know how to convert from one base to another?
@MarkBennet yes I know how to convert from one base to another but I am sorry that I am unable to see the connection. So can you please elaborate
Write your number as $a_0+a_1\cdot9^1+a_2\cdot9^2+a_3\cdot9^3+a_4\cdot9^4+a_5\cdot9^5+ \dots $ with $0\le a_i\lt 9$
Then if $0\le a_i\le 6$ set $b_i=a_i$ and for $7\le a_i\le 8$ set $b_i=a_i+1$. Then the expression you are looking for is $$b_rb_{r-1}\dots b_0$$
I still do not get it,can you please outline the proof
@happymath You have nine symbols. The $a_i$ are standard base $9$ digits $0$-$8$. The $b_i$ avoid the digit $7$. Have you tried converting your number to base $9$ yet?
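Following the hint, here is the conversion worked out for $k=6$, the case the question got stuck on:

```latex
% Step 1: write 10^6 in base 9.
\begin{align*}
10^6 = 1000000
  &= 1\cdot 9^6 + 7\cdot 9^5 + 8\cdot 9^4 + 3\cdot 9^3 + 6\cdot 9^2 + 6\cdot 9 + 1 \\
  &= (1783661)_9 .
\end{align*}
% Step 2: bump every digit that is 7 or 8 up by one (digits 0--6 stay put):
%   1,7,8,3,6,6,1 \mapsto 1,8,9,3,6,6,1,
% so the 10^6-th number in the sequence is 1893661.
%
% Note: the same computation shows the pattern "10^k-th number = 11^k" already
% fails at k = 5, since 10^5 = (162151)_9 gives 162151, while 11^5 = 161051.
```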
Is it possible to rename multiple columns of a CSV to empty columns name when using miller?
I have CSV files with headers like this
MyFirstCol,MySecondCol,MyThirdCol,.....MyLastRealCol,ppp,qqq,rrr
The columns ppp, qqq, etc I want to set to columns with empty headers. (I do not want to delete them!) So I want a resulting CSV with a header like this:
MyFirstCol,MySecondCol,MyThirdCol,.....MyLastRealCol,,,
(Note the empty, but present columns at the end.)
Is there a way to do this with miller?(*) I tried
mlr --csv rename -r '"^(.){3}$",' myFile.csv
but this command folds all the matching columns into one! :-(
(*) I do know how to hack this together with a search-replace command in sed, but I don't like it as a general solution, because sed is not aware of the CSV's column structure. Therefore I am hoping for a solution with miller.
If I understand correctly, just remove the empty columns
mlr --csv remove-empty-columns input.csv >output.csv
If you want to use rename, the command is
mlr --csv rename -r '^.{3}$,' input.csv >output.csv
But please note that in Miller you cannot have a CSV with two or more fields with the same name. And if you have
MyFirstCol,MySecondCol,MyThirdCol,.....MyLastRealCol,,,
the last fields all have the same (empty) field name. So instead you can add a numeric progressive heading, apply a search & replace to the first data row, and at the end remove the numeric heading.
Starting from
field1,field2,ppp,qqq,zzz
1,2,,,
4,7,,,
and running
mlr --csv -N put -S 'if(NR==1){for (k in $*) {$[k] = gsub($[k], "^.{3}$", "");}}' input.csv
you will have
field1,field2,,,
1,2,,,
4,7,,,
Some points:
-N adds and then removes the numeric heading;
if(NR==1) applies the put verb only to the first data row, which here (thanks to -N) is the original header line field1,field2,ppp,qqq,zzz
No, I do not want to remove the empty columns. I want to give the (nearly) empty columns empty headers, but keep all three columns, just with empty column names.
@halloleo If you look ad my answer, there is also the way to have empty column names. Have you tested it?
I just tried exactly your rename command and it folds all the matching (empty) columns into one empty column! - But your answer shows me that I should make it even clearer that I want to keep the empty columns. Will edit question.
@halloleo I have edited once again my answer. Now it should be ok
Very cool. Does exactly what I want to achieve. A bit more complicated than I hoped for, but the workaround via the numeric heading is super smart! Thanks aborruso!
Bash: How to control iteration flow/loops?
For going over some recovered data, I am working on a script that recursively goes through folders & files and finally runs file on them, to check if they are likely fully recovered from a certain backup or not. (recovered files play, and are identified as mp3 or other audio, non-working files as ASCII-Text)
For now I would just be satisfied with having it go over my test folder structure, print all folders & corresponding files. (printing them mainly for testing, but also because I would like to log where the script currently is and how far along it is in the end, to verify what has been processed)
I tried using 2 for loops, one for the folders, then one for the files (so that ideally it would take 1 folder, then list the files in there (or potentially delve into subfolders), show below each folder only the files in that subfolder, and then move on to the next).
Such as:
Folder1
- File 1
- File 2
-- Subfolder
-- File3
-- File4
Folder2
- File5
However this doesn't seem to work in the ways (such as with for loops) that are normally proposed. I got as far as using "find . -type d" for the directories and "find * -type f" (so that it doesn't go into subdirectories). However, when just printing the paths/files in order to check if it ran as I wanted it to, it became obvious that it didn't work.
It always seemed to first print all the directories (first loop) and then all the files (second loop). For keeping track of what it is doing and for making it easier to know what was checked/recovered I would like to do this in a more orderly fashion as explained above.
So is it that I just did something wrong, or is this maybe a general limitation of the for loop in bash?
Another problem that could be related: Although assigning the output of find to an array seemed to work, it wasn't accessible as an array ...
Example for loop:
for folder in '$(find . -type d)' ; do
echo $folder
let foldercounter++
done
Arrays:
folders=("$(find . -type d)")
#As far as I know this should assign the output as an array
#However, it is not really assigned properly somehow as
echo "$folders[1]"
# does not work (quotes necessary for spaces)
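For the array side-problem: a minimal sketch showing why the quoted command substitution yields a single-element array, and one way to populate the array properly (the /tmp/arrdemo paths are illustrative only):

```shell
#!/usr/bin/env bash
# The quoted command substitution stores find's ENTIRE output as one element:
mkdir -p /tmp/arrdemo/a /tmp/arrdemo/b
folders=("$(find /tmp/arrdemo -mindepth 1 -type d)")
echo "quoted form: ${#folders[@]} element(s)"   # prints: quoted form: 1 element(s)

# mapfile (bash 4+) reads one array element per line instead:
mapfile -t folders < <(find /tmp/arrdemo -mindepth 1 -type d | sort)
echo "mapfile form: ${#folders[@]} element(s)"  # prints: mapfile form: 2 element(s)

# Indexing also needs braces: "$folders[1]" expands element 0 plus the
# literal text "[1]"; the correct syntax is "${folders[1]}".
echo "second: ${folders[1]}"                    # prints: second: /tmp/arrdemo/b
```

Note that mapfile still breaks on filenames containing newlines; for fully robust traversal the find -exec approaches below avoid arrays entirely.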
If you want to run file on every file and directory in the current directory, including its subdirectories and so on, you don't need to use a Bash for-loop, because you can just tell find to run file:
find -exec file '{}' ';'
(The -exec ... ';' option runs the command ... on every matched file or directory, replacing the argument {} with the path to the file.)
If you only want to run file on regular files (not directories), you can specify -type f:
find -type f -exec file '{}' ';'
If you (say) want to just print the names of directories, but run the above on regular files, you can use the -or operator to connect one directive that uses -type d and one that uses -type f:
find -type d -print -or -type f -exec file '{}' ';'
Edited to add: If desired, the effect of the above commands can be achieved in pure Bash (plus the file command, of course), by writing a recursive shell function. For example:
function foo () {
local file
for file in "$1"/* ; do
if [[ -d "$file" ]] ; then
echo "$file"
foo "$file"
else
file "$file"
fi
done
}
foo .
This differs from the find command in that it will sort the files more consistently, and perhaps in gritty details such as handling of dot-files and symbolic links, but is broadly the same, so may be used as a starting-point for further adjustments.
(By the way, please ask follow-up questions if none of these is exactly what you want. To be honest, I was pretty confused by some aspects of your question.)
Thanks! Sorry if part of my question was confusing. I discovered -exec during my research, however I found it hard to control as it just runs as part of find, which could of course also be my limited knowledge of bash. As it would be really helpful to see afterwards to know how far it got, also in case of an error, I thought it better to try to 'guide' the script, in that it goes folder by folder. Of course it always possible to sort the output later, but I wanted to avoid that. But you mention some suggestions that I didn't think of and that sound promising.
@step21: I've updated my answer to give a non-find approach, which you may find helpful.
A find ... -exec ... solution @H.-Dirk Schmitt was referring to might look something like:
find . -type f -exec sh -c '
case $(file "$1") in
*Audio file*)
echo "$1 is an audio file"
;;
*ASCII text*)
echo "$1 is an ascii text file"
;;
esac
' _ {} ';'
How to decorate class that relies on a runtime value for creation
I'm brand new to using Simple Injector although I have been using Ninject for a long time, so I am comfortable with DI in general. One thing that attracted me to want to use Simple Injector was the ease of use of Decorators.
I have been able to successfully use decorators with Simple Injector in all normal cases where the dependencies are resolved when the service is requested. However, I am having a hard time figuring out if there is a way to get my decorators applied in a case when the service must be constructed using a runtime value.
In Ninject, I could pass a ConstructorArgument to the kernel.Get<IService> request that could be inherited down the chain of N decorators all the way to the "real" implementing class. I cannot figure out a way to replicate that using Simple Injector.
I have put some very basic code below to illustrate. What I would want to do in the real world would be to pass an IMyClassFactory instance into other classes in my application. Those other classes could then use it to create IMyClass instances using the IRuntimeValue they would provide. The IMyClass instance they got from the IMyClassFactory would be decorated automatically by the registered decorators.
I know I could manually apply my decorator(s) in my IMyClassFactory or any Func<IMyClass> I could come up with, but I would like it to "just work".
I keep going around and around trying to abstract out the MyClass construction, but I can't figure out how to get it to resolve with the IRuntimeValue constructor argument and be decorated.
Am I overlooking an obvious solution?
using System;
using SimpleInjector;
using SimpleInjector.Extensions;
public class MyApp
{
[STAThread]
public static void Main()
{
var container = new Container();
container.Register<IMyClassFactory, MyClassFactory>();
container.RegisterDecorator(typeof (IMyClass), typeof (MyClassDecorator));
container.Register<Func<IRuntimeValue, IMyClass>>(
() => r => container.GetInstance<IMyClassFactory>().Create(r));
container.Register<IMyClass>(() => ?????); // Don't know what to do
container.GetInstance<IMyClass>(); // Expect to get decorated class
}
}
public interface IRuntimeValue
{
}
public interface IMyClass
{
IRuntimeValue RuntimeValue { get; }
}
public interface IMyClassFactory
{
IMyClass Create(IRuntimeValue runtimeValue);
}
public class MyClassFactory : IMyClassFactory
{
public IMyClass Create(IRuntimeValue runtimeValue)
{
return new MyClass(runtimeValue);
}
}
public class MyClass : IMyClass
{
private readonly IRuntimeValue _runtimeValue;
public MyClass(IRuntimeValue runtimeValue)
{
_runtimeValue = runtimeValue;
}
public IRuntimeValue RuntimeValue
{
get
{
return _runtimeValue;
}
}
}
public class MyClassDecorator : IMyClass
{
private readonly IMyClass _inner;
public MyClassDecorator(IMyClass inner)
{
_inner = inner;
}
public IRuntimeValue RuntimeValue
{
get
{
return _inner.RuntimeValue;
}
}
}
Edit 1:
Ok, thanks to Steven for the great answer. It has given me a couple of ideas.
Maybe to make it a little more concrete though (although not my situation, more "classic"). Say I have an ICustomer that I create at runtime by reading a DB or deserializing from disk or something. So I guess that would be considered a "newable" to quote one of the articles Steven linked. I would like to create an instance of ICustomerViewModel so I can display and manipulate my ICustomer. My concrete CustomerViewModel class takes in an ICustomer in its constructor along with another dependency that can be resolved by the container.
So I have an ICustomerViewModelFactory that has a .Create(ICustomer customer) method defined which returns ICustomerViewModel. I could always get this working before I asked this question because in my implementation of ICustomerViewModelFactory I could do this (factory implemented in composition root):
return new CustomerViewModel(customer, container.GetInstance<IDependency>());
My issue was that I wanted my ICustomerViewModel to be decorated by the container and newing it up bypassed that. Now I know how to get around this limitation.
So I guess my follow-up question is: Is my design wrong in the first place? I really feel like the ICustomer should be passed into the constructor of CustomerViewModel because that demonstrates intent that it is required, gets validated, etc. I don't want to add it after the fact.
Simple Injector explicitly lacks support for passing on runtime values through the GetInstance method. Reason for this is that runtime values should not be used when the object graph is constructed. In other words, the constructors of your injectables should not depend on runtime values. There are several problems with doing that. First of all, your injectables might need to live much longer than those runtime values do. But perhaps more importantly, you want to be able to verify and diagnose your container's configuration and that becomes much more troublesome when you start using runtime values in the object graphs.
So in general there are two solutions for this. Either you pass on the runtime value through the method call graph or you create a 'contextual' service that can supply this runtime value when requested.
Passing on the runtime value through the call graph is especially a valid solution when you practice architectures like this and this where you pass on messages through your system or when the runtime value can be an obvious part of the service's contract. In that case it is easy to pass on the runtime value with the message or the method and this runtime value will also pass through any decorator on the way through.
In your case this would mean that the factory creates the IMyService without passing in the IRuntimeValue and your code passes this value on to the IMyService using the method(s) it specifies:
var service = _myServiceFactory.Create();
service.DoYourThing(runtimeValue);
Passing the runtime value through the call graph, however, is not always a good solution, especially when this runtime value should not be part of the contract of the message that is sent. This especially holds for contextual information such as information about the currently logged-in user, the current system time, etc. You don't want to pass this information through; you just want it to be available. We don't want this, because it would burden the consumers with passing the right value every time, while they probably shouldn't even be able to change this information (take, for instance, the user in whose context the request is executed).
In that case you should define service that can be injected and allows retrieving this context. For instance:
public interface IUserContext {
User CurrentUser { get; }
}
public interface ITimeProvider {
DateTime Now { get; }
}
In these cases the current user and the current time aren't injected directly into a constructor, but instead these services are. The component that needs to access the current user can simply call _userContext.CurrentUser and this will be done after the object is constructed (read: not inside the constructor). Thus: in a lazy fashion.
This does mean however that the IRuntimeValue must be set somewhere before MyClass gets invoked. This probably means you need to set it inside the factory. Here's an example:
var container = new Container();
var context = new RuntimeValueContext();
container.RegisterSingle<RuntimeValueContext>(context);
container.Register<IMyClassFactory, MyClassFactory>();
container.RegisterDecorator(typeof(IMyClass), typeof(MyClassDecorator));
container.Register<IMyClass, MyClass>();
public class RuntimeValueContext {
    private readonly ThreadLocal<IRuntimeValue> _runtime = new ThreadLocal<IRuntimeValue>();
public IRuntimeValue RuntimeValue {
get { return _runtime.Value; }
set { _runtime.Value = value; }
}
}
public class MyClassFactory : IMyClassFactory {
private readonly Container _container;
    private readonly RuntimeValueContext _context;
public MyClassFactory(Container container, RuntimeValueContext context) {
_container = container;
_context = context;
}
public IMyClass Create(IRuntimeValue runtimeValue) {
        // Set the context before resolving, so nothing can observe a stale value.
        _context.RuntimeValue = runtimeValue;
        var instance = _container.GetInstance<IMyClass>();
return instance;
}
}
public class MyClass : IMyClass {
private readonly RuntimeValueContext _context;
public MyClass(RuntimeValueContext context) {
_context = context;
}
    public IRuntimeValue RuntimeValue { get { return _context.RuntimeValue; } }
}
You can also let the MyClass accept the IRuntimeValue and make the following registration:
container.Register<IRuntimeValue>(() => context.RuntimeValue);
But this disallows verifying the object graph, since Simple Injector ensures that registrations never return null, while context.RuntimeValue will be null by default. So another option is to do the following:
container.Register<IMyClass>(() => new MyClass(context.RuntimeValue));
This allows the IMyClass registration to be verified, but will during verification still create a new MyClass instance that is injected with a null value. If you have a guard clause in the MyClass constructor, this will fail. This registration however disallows MyClass to be auto-wired by the container. Auto-wiring that class can come in handy when you've got more dependencies to inject into MyClass for instance.
Additionally, if a service depends on a runtime value, you've baked some knowledge into the consumption of the service about the implementation of the service, which is sort of the point that you're trying to avoid. If one implementation requires the number 42 to be passed, is that number relevant to a different implementation? What does 42 mean?
@LasseV.Karlsen I agree with you completely for value types and stuff like that. But I'm trying to get my head around a case in which say the runtime value is ICustomer which was just read in from a DB and I need to pass it (along with some other container-resolved dependencies) into a constructor for say a CustomerViewModel. Steven has given me a lot to think about though whether that is the proper design in the first place though.
This is quite a bit of information for me to digest and to think about (and thank you for the links and helping me fall into the pit of success). I'm going to add an edit for further comment. I actually came up with your second solution on my own originally but was a little scared of it because 1) I have never used the ThreadLocal storage (neat), and 2) I was afraid there might be some other side effects of holding onto a reference to the object in the context. But that may be the way I go.
@jomtois: I'm not very familiar with MVVM, but I think it's best to move the Model out of the ViewModel's constructor and into a property. This makes it very easy for the factory to resolve the ViewModel and set the model using the supplied property.
+1 Great explanation of the problems caused by mixing newables and injectables, and for the link to the blog post "to new or not to new". I think method or setter injection for newables is the answer most of the time, method injection being preferred. It seems like every time I am frustrated by a seeming lack of flexibility of SimpleInjector, I am forced to change my design in a way that turns out to be better and easier to deal with in the end. Kudos to you, Steven! ;)
Find the number of inequivalent two-dimensional complex representations of the group $Z_4$
Find the number of inequivalent two-dimensional complex representations of the group $Z_4$
Any hints will be greatly appreciated. Thank you all
Do you know what the irreducible representations of $\mathbb{Z}_4$ are?
Hint: Using Maschke's Theorem, any such two-dimensional representation of $\mathbb{Z}_4$ is the direct sum of irreducible representations of $\mathbb{Z}_4$. You need to find the irreducible representations of the group (which all turn out to be one-dimensional because the group is abelian). Thus, any two-dimensional representation is the sum of two of these irreducible representations. The question then becomes a combinatorial one: how many ways can you choose two irreducible representations (where order doesn't matter and repetition is allowed)?
Why does no one like abelian plays? Because the characters are all one-dimensional! (laughter ensues)
Very nice pun, +1!
@TimRatigan Even though all the characters are one-dimensional, they can sometimes be complex!
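Putting the hints together, the count can be made explicit:

```latex
% The irreducible complex representations of Z_4 = <g> are the four characters
%   \chi_j : g \mapsto i^j,  j = 0, 1, 2, 3   (i a primitive 4th root of unity).
% Every two-dimensional representation decomposes as \chi_j \oplus \chi_k, and
% two such sums are equivalent iff they use the same multiset {j, k}.
% So we count multisets of size 2 drawn from 4 characters:
\[
\binom{4 + 2 - 1}{2} = \binom{5}{2} = 10 .
\]
```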
Amazon MWS SubmitFeed Updating stock quantity
I'm having problems with a stock quantity update feed using Amazon MWS. My Feed is submitted and processed, but I get errors, however if I submit the same XML via the scratchpad, the inventory updates are accepted and processed.
(merchant id starred out deliberately)
Submission and response below:
<AmazonEnvelope xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noNamespaceSchemaLocation="amzn-envelope.xsd">
<Header>
<DocumentVersion>1.01</DocumentVersion>
<MerchantIdentifier>************</MerchantIdentifier>
</Header>
<MessageType>Inventory</MessageType>
<Message>
<MessageID>1</MessageID>
<OperationType>Update</OperationType>
<Inventory>
<SKU>BUS999904</SKU>
<Quantity>269</Quantity>
</Inventory>
</Message>
<Message>
<MessageID>2</MessageID>
<OperationType>Update</OperationType>
<Inventory>
<SKU>PROBS-HO-01</SKU>
<Quantity>137</Quantity>
</Inventory>
</Message>
And the response:
<?xml version="1.0" encoding="UTF-8"?>
<AmazonEnvelope xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noNamespaceSchemaLocation="amzn-envelope.xsd">
<Header>
<DocumentVersion>1.02</DocumentVersion>
<MerchantIdentifier>M_ONTRACKSCO_1106147</MerchantIdentifier>
</Header>
<MessageType>ProcessingReport</MessageType>
<Message>
<MessageID>1</MessageID>
<ProcessingReport>
<DocumentTransactionID>54774016520</DocumentTransactionID>
<StatusCode>Complete</StatusCode>
<ProcessingSummary>
<MessagesProcessed>2</MessagesProcessed>
<MessagesSuccessful>0</MessagesSuccessful>
<MessagesWithError>2</MessagesWithError>
<MessagesWithWarning>0</MessagesWithWarning>
</ProcessingSummary>
<Result>
<MessageID>1</MessageID>
<ResultCode>Error</ResultCode>
<ResultMessageCode>25</ResultMessageCode>
<ResultDescription>We are unable to process the XML feed because one or more items are invalid. Please re-submit the feed. </ResultDescription>
</Result>
<Result>
<MessageID>2</MessageID>
<ResultCode>Error</ResultCode>
<ResultMessageCode>25</ResultMessageCode>
<ResultDescription>We are unable to process the XML feed because one or more items are invalid. Please re-submit the feed.</ResultDescription>
</Result>
</ProcessingReport>
</Message>
Any help anyone can give, or pointers/examples of valid stock update feeds would be most welcome.
Thanks.
I finally worked it out - I had the incorrect feed type in my post. It should have been set to _POST_INVENTORY_AVAILABILITY_DATA_.
Your XML seems to be missing </AmazonEnvelope> at the end of the feed, but that could easily be an error in pasting it here. Once I added that, I was able to validate your XML against my copy of the XSDs. Other than that, my inventory feed only differs in one way: I have an additional <FulfillmentLatency>1</FulfillmentLatency> following right after each Quanitity, which is not mandatory according to the XSDs.
Hi Hazzit - thanks for your response. You are correct, the closing was missing due to a pasting error. I tried adding the additional field for each item, but no joy. Still getting the same error code 25. It's back to trawling through the XSDs with a fine tooth comb for me....
Amazon MWS Update Inventory Stock Sample Code:
<?php
/**********************************************************
* Update inventory stock through amazon mws api
*
***********************************************************/
$sku1 = '10101-AM';
$quantity1 = '9';
$leadTimeToShip1 = '7';
//amazon mws credentials
$amazonSellerId = 'xxxxxx';
$amazonMWSAuthToken = 'xxxxxx';
$amazonAWSAccessKeyId = 'xxxxxx';
$amazonSecretKey = 'xxxxxx';
$amazonMarketPlaceId = 'xxxxxx';
$param = array();
$param['AWSAccessKeyId'] = $amazonAWSAccessKeyId;
$param['Action'] = 'SubmitFeed';
$param['Merchant'] = $amazonSellerId;
$param['MWSAuthToken'] = $amazonMWSAuthToken;
$param['FeedType'] = '_POST_INVENTORY_AVAILABILITY_DATA_';
$param['SignatureMethod'] = 'HmacSHA256';
$param['SignatureVersion'] = '2';
$param['Timestamp'] = gmdate("Y-m-d\TH:i:s.\\0\\0\\0\\Z", time());
$param['Version'] = '2009-01-01';
$param['MarketplaceIdList.Id.1'] = $amazonMarketPlaceId;
$param['PurgeAndReplace'] = 'false';
$secret = $amazonSecretKey;
$url = array();
foreach ($param as $key => $val) {
$key = str_replace("%7E", "~", rawurlencode($key));
$val = str_replace("%7E", "~", rawurlencode($val));
$url[] = "{$key}={$val}";
}
$amazon_feed = '<?xml version="1.0" encoding="utf-8"?>
<AmazonEnvelope xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noNamespaceSchemaLocation="amzn-envelope.xsd">
<Header>
<DocumentVersion>1.01</DocumentVersion>
<MerchantIdentifier>'.$amazonSellerId.'</MerchantIdentifier>
</Header>
<MessageType>Inventory</MessageType>
<Message>
<MessageID>1</MessageID>
<OperationType>Update</OperationType>
<Inventory>
<SKU>'.$sku1.'</SKU>
<Quantity>'.$quantity1.'</Quantity>
<FulfillmentLatency>'.$leadTimeToShip1.'</FulfillmentLatency>
</Inventory>
</Message>
</AmazonEnvelope>';
//echo $amazon_feed;exit;
sort($url);
$arr = implode('&', $url);
$sign = 'POST' . "\n";
$sign .= 'mws.amazonservices.com' . "\n";
$sign .= '/Feeds/'.$param['Version'].'' . "\n";
$sign .= $arr;
$signature = hash_hmac("sha256", $sign, $secret, true);
$httpHeader = array();
$httpHeader[] = 'Transfer-Encoding: chunked';
$httpHeader[] = 'Content-Type: application/xml';
$httpHeader[] = 'Content-MD5: ' . base64_encode(md5($amazon_feed, true));
//$httpHeader[] = 'x-amazon-user-agent: MyScriptName/1.0';
$httpHeader[] = 'Expect:';
$httpHeader[] = 'Accept:';
$signature = urlencode(base64_encode($signature));
$link = "https://mws.amazonservices.com/Feeds/".$param['Version']."?";
$link .= $arr . "&Signature=" . $signature;
$ch = curl_init($link);
curl_setopt($ch, CURLOPT_HTTPHEADER, $httpHeader);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, FALSE);
curl_setopt($ch, CURLOPT_POST, 1);
curl_setopt($ch, CURLOPT_POSTFIELDS, $amazon_feed);
$response = curl_exec($ch);
$info = curl_getinfo($ch);
$errors = curl_error($ch);
curl_close($ch);
echo '<pre>';
print_r($response); //xml response
?>
How to absolute position after dynamical size block?
I have this blocks, located in the order A, B, C:
Blocks "A" and "C" have position: absolute. I need to display block "C" after block "A", but the height of block "A" may vary, and I cannot use the top property. Here is an example of the code:
.wrapper {
position: relative;
width: 100%;
height: 1000px;
}
.block-1 {
position: absolute;
left: 50%;
width: 300px;
height: 200px;
background: red;
}
.block-2 {
width: 300px;
height: 400px;
background: yellow;
}
.block-3 {
position: absolute;
left: 50%;
width: 300px;
height: 400px;
background: green;
}
https://jsfiddle.net/skriming/xn47v51k/
Can I change the order in the HTML, i.e. block B, then block A, then block C?
No, this is the problem.
Use JS to find block A's height, and set this value as block C's top.
Can I not do it with CSS alone?
Can the absolute position of A and C be changed?
@izorgor Yes of course
Is this what you are after? https://jsfiddle.net/g2nm914y/1/
@HiddenHobbes thx! can you create answer?
No problem @skriming; because the question is closed I won't be able to post an answer.
Just replace your CSS with below one:
.wrapper {
position: relative;
width: 100%;
height: 1000px;
}
.block-1 {
position: relative;
left: 50%;
width: 300px;
height: 200px;
background: red;
}
.block-2 {
float:left;
position:absolute;
top:0;
width: 300px;
height: 400px;
background: yellow;
}
.block-3 {
position: relative;
left: 50%;
width: 300px;
height: 400px;
background: green;
}
Here is the solution but it combines css + jquery
I added float left to the B block
.block-2{float: left}
and top position to the C block with jQuery
var aHeight = $(".block-1").height();
$(".block-3").css("top",aHeight);
If you can't use jQuery for some reason, let me know.
You didn't close last line of code, but here ya go https://jsfiddle.net/ywku3hvg/
Formulas involving traces of products of singular hermitian positive semidefinite $2$ by $2$ matrices
While working on the Atiyah problem on configurations of points, I came across formulas involving products of traces of products of singular hermitian positive semidefinite $2$ by $2$ matrices.
To illustrate the type of formulas I am talking about, here is one of them:
$$\DeclareMathOperator{\tr}{tr} D = \frac{1}{4 \tr(n_{01}) \tr(n_{02}) \tr(n_{12})} \left( \tr(n_{01}n_{02}) \tr(n_{12}) + \tr(n_{01}n_{21}) \tr(n_{02}) + \tr(n_{02}n_{12}) \tr(n_{01}) + \tr(n_{01}n_{02}n_{12}) - \tr(n_{01}n_{20}n_{12}) + \tr(n_{01}n_{20}n_{21}) + \tr(n_{01}n_{21}n_{02}) + \tr(n_{01}) \tr(n_{02}) \tr(n_{12}) \right).$$
In the previous formula, each $n_{ij}$ is a singular positive semidefinite hermitian $2$ by $2$ matrix satisfying the following constraints:
$$\tr(n_{ij}) = \tr(n_{ji})$$
$$n_{ij} + n_{ji} = \tr(n_{ij}) I$$
$$n_{ij} + n_{jk} + n_{ki} = \frac{1}{2}(\tr(n_{ij}) + \tr(n_{jk}) + \tr(n_{ki}))I,$$
where $I$ is the $2$ by $2$ identity matrix, and $i$, $j$ and $k$ are different indices in $\{0,1,2\}$.
I am interested in showing that the lower bound for a formula such as the above is $1$. This is the Atiyah-Sutcliffe conjecture $2$. Kindly note that I do know how to show that for the formula above, and it is not what I am asking for (of course, this is a formula for $n = 3$ and I do not know how to show the lower bound for $n > 4$, which is still open).
I would like to know whether such expressions such as the numerator of the formula above, have been studied before in the literature, whether in the Math or Physics literature. Indeed, I am interested in finding a physical interpretation of the Atiyah-Sutcliffe determinant $D$. I am also interested in knowing whether some tools have been developed to prove lower bounds for expressions such as the above, or perhaps to simplify them.
Not sure if it helps, but a singular positive semidefinite hermitian 2 by 2 matrix can also be written as the outer product of a 2x1 vector with itself, i.e. there exists $u_{ij}\in \mathbb{C}^{2\times 1}$ s.t. $n_{ij}=\bar{u}_{ij} u_{ij}^T$. Then you have for example $tr(n_{ij})=|u_{ij}|_2^2$, $tr(n_{ij} n_{kl})=|u_{ij}^T \bar{u}_{kl}|^2$ and $tr(n_{ij} n_{kl} n_{mo})=(u_{ij}^T \bar{u}_{kl})(u_{kl}^T \bar{u}_{mo})(u_{mo}^T \bar{u}_{ij})$
@user35593, yes I know. I kind of went in the opposite direction. I started from these Weyl spinors, if I may call them that, and got to the $2$ by $2$ matrices.
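The outer-product parametrization discussed in the comments can be sanity-checked numerically. A minimal NumPy sketch (random complex vectors, purely illustrative; each $n$ is written as $\bar{u}u^T$):

```python
import numpy as np

rng = np.random.default_rng(0)

def rank1(u):
    # n = conj(u) u^T: singular, hermitian, positive semidefinite 2x2 matrix
    return np.outer(u.conj(), u)

u = rng.standard_normal(2) + 1j * rng.standard_normal(2)
v = rng.standard_normal(2) + 1j * rng.standard_normal(2)
n1, n2 = rank1(u), rank1(v)

assert np.allclose(n1, n1.conj().T)                     # hermitian
assert np.isclose(np.linalg.det(n1), 0)                 # singular
assert np.isclose(np.trace(n1), np.linalg.norm(u) ** 2) # tr(n) = |u|^2
# tr(n1 n2) = |u^T conj(v)|^2
assert np.isclose(np.trace(n1 @ n2), abs(u @ v.conj()) ** 2)
print("all trace identities hold")
```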
Why is Ubuntu reporting a slightly different size for my Virtualbox disk?
I'm installing Ubuntu in a VM using VirtualBox; I selected the dynamic expansion option (as opposed to fixed) in VirtualBox and set the limit to 85GB. Then, when I was installing Ubuntu, the virtual disk I created for it states it is 91.3GB. Did I make a mistake somewhere or is this difference normal? If it is normal, why does it exist? Thank you.
Yes, this is normal.
This is happening because, unfortunately, in practice, the word "gigabyte" can refer to two different units of digital information storage.
Strictly speaking, these days, a gigabyte (GB) is formally defined as exactly one billion bytes, and similarly, a megabyte (MB) is exactly one million bytes and a kilobyte (kB) is exactly one thousand bytes.
However, using powers of two is more convenient for many purposes, and 2^10 is 1024, which is pretty close to 10^3. So historically, and in many instances today, people say "kilobyte" to mean 1024 bytes, "megabyte" to mean 1024 "kilobytes", and gigabyte to mean 1024 "megabytes."
This is the distinction between "decimal" and "binary" kilobytes/megabytes/gigabytes/terabytes/etc.
These days, we have separate unit names and symbols formally defined for the units that scale by factors of 1024 rather than 1000: A kibibyte (KiB, and informally a "binary kilobyte") is 1024 bytes, a mebibyte (MiB, "binary megabyte") is 1024 kibibytes, a gibibyte (GiB, "binary gigabyte") is 1024 mebibytes, a tebibyte (TiB, "binary terabyte") is 1024 gibibytes, and so forth.
What's happening is that, apparently, VirtualBox is showing you storage capacity in units of gibibytes, while Ubuntu's installer shows it in units of (decimal) gigabytes.
See this article for more details and historical information.
@jay Or to phrase it in arithmetic, 1 GiB = 1,073,741,824 bytes while 1 GB = 1,000,000,000 bytes. Therefore 1 GiB ~= 1.0737 GB. And so 85 GiB * 1.0737 = 91.2645 or ~91.3 GB.
Thanks for that info Eliah, and irrational John, I see how the arithmetic works out there, but I'm still having trouble conceptualizing this.
@Jay A GiB is bigger than a GB. You created a virtual drive with a storage capacity of 85 GiB (even though VirtualBox may have said "GB"). Since a GB is smaller than a GiB, a drive will hold more GB's than it will GiB's. Specifically, a drive that will hold 85 GiB's will hold about 91.3 GB's.
If Ubuntu is giving me 91.3GB and VB is giving me 85GB, then it makes sense to think that Ubuntu is defining a GB as something smaller than what VB is. So if it is between 1024MB and 1000MB equaling 1GB, then the smaller one, the 1000MB, is the way Ubuntu is defining it and 1024MB is the way VB is. By this, I get that VB sees 1024 X 85 = 87040MB, which would result in 87GB for Ubuntu (divided by 1000). What am I doing wrong to not get 91.3GB?
@Jay 1 GB = 1000 MB. But 1 GiB = 1024 MiB (not MB). That is, 1 GB = 1,000 * 1,000 * 1,000 bytes = 1,000,000,000 bytes, while 1 GiB = 1,024 * 1,024 * 1,024 bytes = 1,073,741,824 bytes.
Got it- thanks Eliah and thank you irrational John!
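The conversion worked through in the comments can be reproduced in a couple of lines (Python here just for illustration):

```python
GIB = 1024 ** 3   # 1 GiB = 1,073,741,824 bytes
GB = 1000 ** 3    # 1 GB  = 1,000,000,000 bytes

capacity_gib = 85                        # what VirtualBox shows
capacity_gb = capacity_gib * GIB / GB    # what Ubuntu's installer shows
print(round(capacity_gb, 1))             # 91.3
```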
replace " using String.replace() with regex
I have following string
"content \" content <tag attr1=\"val1\" attr2=\"val2\">content \" content</tag> content \" content"
I want to replace all " characters inside tag definitions with the ' character, using the String.replace() function. The " characters inside tag content must remain in their present form.
"content \" content <tag attr1='val1' attr2='val2'>content \" content</tag> content \" content"
Maybe a regex with some grouping can be used?
You can use replace() and a regex-callback to do this for you:
var str = 'content " content <tag attr1="val1" attr2="val2">content " content</tag> content " content';
function replaceCallback(match) {
return match.replace(/"/g, "'");
}
str = str.replace(/<([^>]*)>/g, replaceCallback);
The regex will match anything in-between < and > characters in your string and pass the matches to the replaceCallback() method which will then, as desired, replace " with ' characters.
Edit: The replaceCallback() was using .replace('"', "'"), but this would only replace the first ". I've updated it to use a regex-replace instead and it now works as-desired.
It seems right. Sorry for another question, but how would the regexp look if I want to replace " only inside the <tag***> definition, but not in tags with other names?
@Nawa You can change the initial-regex to be /<tag([^>]*)>/g instead; that will match only the tags that start as <tag
Sorry once again. My specific case is content where I replace "<" with "[LESS_THEN]" and ">" with "[GREATER_THEN]". After these changes the content looks like "content " content [LESS_THEN]tag attr1="val1" attr2="val2"[GREATER_THEN]content " content[LESS_THEN]/tag[GREATER_THEN] content " content". I can't understand how to describe "[GREATER_THEN]" in your regexp.
var str = "content \" content <tag attr1=\"val1\" attr2=\"val2\">content \" content</tag> content \" content";
var replaced = str.replace(/\<.*?\>/g, function(match, index){
return match.replace(/"/g, "'"); // `match` will be each tag , both opening and closing.
});
The following code (taken from the accepted answer to this question) will also work, without a callback, but with a less easily understandable regex.
var str = 'content " content <tag attr1="val1" attr2="val2">content " content</tag> content " content';
str = str.replace(/"(?=[^<]*>)/g, "'");
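For reference, running that one-liner on the sample string gives the desired result; the lookahead only replaces a quote that can still reach a > without crossing a <, i.e. a quote inside a tag definition:

```javascript
// demonstrate the lookahead-based replacement from the answer above
var str = 'content " content <tag attr1="val1" attr2="val2">content " content</tag> content " content';
var out = str.replace(/"(?=[^<]*>)/g, "'");
console.log(out);
// content " content <tag attr1='val1' attr2='val2'>content " content</tag> content " content
```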
You can't do it with just one regex, you must use a combination. This is the way to go:

1. Match the text within the tags; this is easy.
2. Replace the characters from the output of the previous match with the ones you want using replace; this is also easy, now that you have done the previous match.
Salute you! Whatever you are.
Checking a value before assigning it to variable
Throughout my code, I find myself testing if the output of a function is undefined or blank before I use the output. In this case, getQuery checks if the property of an object is either blank or if the object key is not set at all. In both cases, the output can be ignored. But if the object key holds a value, then I want to use this output as the value for my variable.
var filter = ""
if ( typeof getQuery("filter") != 'undefined' && getQuery("filter") != "" ) filter = getQuery("filter")
if ( filter != "" ) createFilterView( filter )
There must be a more optimized way of doing this without the repetition?
I often set the output of the function to a variable first, in order to not have to perform the function several times. A bit faster, I assume, but also not very elegant.
var getQueryFilter_output = getQuery("filter")
if ( getQueryFilter_output != 'undefined' && getQueryFilter_output != "" ) var filter = getQueryFilter_output
What would be a better way of doing this?
I edited your question slightly, because asking for the best or the most [x] seems primarily opinion-based, which would be off-topic
also: 1) sometimes, you call foo("filter") and sometimes foo(). I think it should either be one or the other? 2) some context would help answer this question. For example: what do you do if the output is ""? As is, this question is somewhat off-topic because it contains pseudo-code or example code instead of real code.
I Definitely wouldn't run functions needlessly; so yes, store the return value in a variable. Also, I'd say you should make sure your functions never return more error/failure values than you are interested in differentiating between. For example, if the distinction between "" or 'undefined' is irrelevant then you could change the function to only ever return "" on failure. That way you only have to do a if (foo != "") check.
@tim: You're right - I messed up when "translating" my actual function and variable names "foo"-ish example names. Reinstated the original names and corrected it now.
@Skarven this is a lot better. But I would still like to see some context. Why does getQuery sometimes return undefined and sometimes ""? And what happens if it does? Do you exit, or do you use the value filter had before this line?
@Tim - thanks, good point. I've updated the original question with a proper context.
Indentation
I'm not quite sure if this is correct, but from what I know, I believe it is.
Indentation in JavaScript does not matter, as your code is commonly minified before it is served - meaning all comments and unnecessary spaces are removed.
That being said, trying to crush your JavaScript into three lines does not make it any faster.
Calling
In your first example, you call the same function three times with the same arguments each time. This is extremely inefficient.
It would be optimal to call it once and save it in a variable.
Therefore, these lines:
var filter = ""
if ( typeof getQuery("filter") != 'undefined' && getQuery("filter") != "" ) filter = getQuery("filter")
if ( filter != "" ) createFilterView( filter )
Can become:
var filter = "";
var ret = getQuery("filter"); // for lack of a better variable name
if(typeof ret != 'undefined' && ret != "") filter = ret;
if(filter != "") createFilterView(filter)
(Yes, you fixed this in the second example)
Conditionals 1
In your first example, you check to see that the return value of getQuery isn't "undefined" and that it isn't "". If that passed, you take the return value and place it in a variable. Then, you go through another conditional to make sure that that variable isn't empty.
Well, since in your first conditional you already check to make sure that the return value isn't empty, why don't you put the body of the second conditional into the first one.
This:
if ( typeof getQuery("filter") != 'undefined' && getQuery("filter") != "" ) filter = getQuery("filter")
if ( filter != "" ) createFilterView( filter )
Becomes:
if(typeof getQuery("filter") != 'undefined' && getQuery("filter") != "") {
filter = getQuery("filter")
createFilterView(filter)
}
Conditionals 2
In the conditional statements in both of your examples, you check that the return value of getQuery is not "". You don't have to specifically specify that.
You can just remove the != "" part and it would work the same way.
To further reduce the size of your conditional, you can have your getQuery function actually return the value undefined (null would be better, however) instead of it returning a string of the word "undefined".
Then, you could reduce your conditional to:
if(getQuery("filter"))
Semicolons
In JavaScript, it is a good practice to close statements with a semicolon.
Example:
createFilterView(filter)
Should be:
createFilterView(filter);
^
|
semicolon
Quotes
This isn't really important, but I just thought I'd point it out.
You are not consistent with your quotes in your code. For some strings, you use single quotes and for others, you use double quotes.
It is best to stick to one.
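Putting the suggestions together, the original three lines condense to a single truthiness check. A sketch, assuming (as recommended above) that getQuery returns "" or undefined on failure; the getQuery and createFilterView bodies below are hypothetical stand-ins:

```javascript
// hypothetical stand-ins for the real functions
function getQuery(name) {
    var params = { filter: "price" };   // sample data
    return params[name] || "";
}
function createFilterView(filter) {
    console.log("filtering on: " + filter);
}

var filter = getQuery("filter");
if (filter) {                  // both "" and undefined are falsy
    createFilterView(filter);
}
```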
Displaying existing content in Rails 5.2 with ActionText
I have upgraded my app to rails 5.2 and inclined to use ActionText since old trix-editor/gem is no longer working.
Now new posts display their "descriptions", but how can I display my old posts' DESCRIPTIONS with the newly installed ActionText?
post.rb has_rich_text :description
posts_controller.rb
...params.require(:post).permit(:description)
_form.html.erb
<%= f.rich_text_area :description %>
show.html.erb
<%= @post.description %>
Descriptions are only fetched from new records in ActionText, but are not displayed from the existing "description" columns for old posts.
I had a similar issue and I couldn't find a clean solution in the rails repo or anywhere. As a workaround, in your case, I would try:
show.html.erb:
<%= @post.try(:description).body || @post[:description] %>
That wont solve the issue, but it would help you to populate old post values.
This answer worked for me. It also has the added bonus of cleaning up your database of the tables used for the normal (non-rich) text content.
"Assuming there's a 'content' in your model and that's what you want to migrate, first, add to your model: "
has_rich_text :content
"then create a migration"
rails g migration MigratePostContentToActionText
class MigratePostContentToActionText < ActiveRecord::Migration[6.0]
include ActionView::Helpers::TextHelper
def change
rename_column :posts, :content, :content_old
Post.reset_column_information # make sure the model picks up the renamed column
Post.all.each do |post|
post.update_attribute(:content, simple_format(post.content_old))
end
remove_column :posts, :content_old
end
end
You can find the original solution I used here
https://github.com/rails/rails/issues/35002#issuecomment-562311492
Console component tries to open in subtab when primaryTabId is null
I am working on a visualforce console footer component that displays solutions. If a tab is open, the solution opens as a subtab. If no primary tab is open, the new page still tries to open in a subtab.
I'm using a simple if else statement to evaluate if primaryTabIdxRt === null || primaryTabIdxRt === undefined, but instead it still follows the else path. The console log clearly shows that primaryTabIdxRt is null, and it keeps going down the wrong path.
Here is the relevant function
<script type="text/javascript">
function openSolution(recURL) {
console.log('OpenSolutionInitial = ' + primaryTabIdxRt);
if (primaryTabIdxRt === null || primaryTabIdxRt === undefined) {
console.log('OpenInPrimaryTab = ' + primaryTabIdxRt);
sforce.console.openPrimaryTab(null,recURL,true);
} else {
console.log('OpenInSubtab = ' + primaryTabIdxRt);
sforce.console.openSubtab(primaryTabIdxRt,recURL,true);
}
}
</script>
The function is called via onClick event
<a href="#" onclick="openSolution('{!instanceURL}/{!s.Id}');">{!s.SolutionName}</a>
Here is the console output when the parent Id is null. It's going down the else path, even though the log shows that the Id is null before the if statement evaluates.
Console Log (id = null):
OpenSolutionInitial = null
OpenInSubtab = null
openSubTab: Invalid ID: null
Interestingly, the solution does open in a primary tab when the parent ID is undefined (which is the value when I first open or refresh the console), so it's just null that isn't evaluating properly.
Console Log (id = undefined):
OpenSolutionInitial = undefined
OpenInPrimaryTab = undefined
Output:
I have tried evaluating == null, i have tried evaluating if(primaryTabIdxRt) and flipping the paths, I renamed primaryTabId to primaryTabxRt in case there was a naming conflict, I tried removing the || and using else if primaryTabxRt === undefined, and none of these had any impact.
Any ideas on what else to try? I have some other functions running to update primaryTabIdxRt when a tab is closed, etc, and it's all working reasonably well, but this piece, which seems straightforward, is not behaving.
Here is the code for the full visualforce page
<apex:page standardController="Solution" recordSetVar="sols">
<apex:variable value="{!LEFT($Api.Partner_Server_URL_260, FIND( '/services', $Api.Partner_Server_URL_260))}" var="instanceURL"/>
<apex:includeScript value="/support/console/39.0/integration.js"/>
<script type="text/javascript">
var primaryTabIdxRt;
</script>
<script type="text/javascript">
function refreshPrimaryTabId() {
sforce.console.getFocusedPrimaryTabId(processTab);
}
var processTab = function processTab(result) {
primaryTabIdxRt = result.id;
console.log('primaryTabIdxRt RefreshTab = '+ primaryTabIdxRt);
}
sforce.console.addEventListener(sforce.console.ConsoleEvent.CLOSE_TAB,
refreshPrimaryTabId);
</script>
<script type="text/javascript">
var eventHandler = function (result) {
primaryTabIdxRt = result.id;
console.log('primaryTabIdxRt onFocusedPT = ' + primaryTabIdxRt);
}
sforce.console.onFocusedPrimaryTab(eventHandler);
</script>
<script type="text/javascript">
function openSolution(recURL) {
console.log('OpenSolutionInitial = ' + primaryTabIdxRt);
if (primaryTabIdxRt === null) {
console.log('OpenInPrimaryTabNull = ' + primaryTabIdxRt);
sforce.console.openPrimaryTab(null,recURL,true);
}
else if (primaryTabIdxRt === undefined) {
console.log('OpenInPrimaryTabUnd = ' + primaryTabIdxRt);
sforce.console.openPrimaryTab(null,recURL,true);
}
else {
console.log('OpenInSubtab = ' + primaryTabIdxRt);
sforce.console.openSubtab(primaryTabIdxRt,recURL,true);
}
}
</script>
<apex:form >
<apex:panelGrid >
<apex:selectList value="{!filterId}" size="1" id="selList">
<apex:actionSupport event="onchange" rerender="list" />
<apex:selectOptions value="{!listViewOptions}"></apex:selectOptions>
</apex:selectList>
</apex:panelGrid>
<apex:dataList var="s" value="{!sols}" id="list" style="padding: 0px;">
<a href="#" onclick="openSolution('{!instanceURL}/{!s.Id}');">{!s.SolutionName}</a> - {!s.Status}
</apex:dataList>
<apex:commandButton onclick="refreshPrimaryTabId();" title="ClickMe" value="Refresh Primary Tab ID" rerender="selList" />
</apex:form>
</apex:page>
UPDATE
I modified the openSolution function to reflect the changes suggested by Santanu.
Here is the updated function. I changed from primaryTabIdxRt === null to primaryTabIdxRt == null, and I added window.location.reload();. For testing I also removed the lines that call openPrimaryTab and openSubtab methods, and am only using alerts to show which path is followed.
As the screenshots of the alerts illustrate, the function still is not following the null path when primaryTabIdxRt is null. So maybe I need to try a different way to evaluate null?
<script type="text/javascript">
function openSolution() {
alert('OpenSolutionInitial = ' + primaryTabIdxRt);
if (primaryTabIdxRt == null) {
alert('primary tab null ' + primaryTabIdxRt);
}
else if (primaryTabIdxRt === undefined) {
alert('primary tab undefined ' + primaryTabIdxRt);
}
else {
alert('open in subtab ' + primaryTabIdxRt);
}
window.location.reload();
}
</script>
<a href="#" onclick="openSolution();">abcdefg</a>
Actions taken
1. open console
2. open console component
3. click 'refresh primary tab id' button
4. click on the link 'abcdefg'
see screenshots of the alerts below - it's following the else / subtab path, instead of the else if / primaryTab path, even though it sees the primary tab as null.
1. initial id is null
2. subtab path is followed
Where are you setting primaryTabIdxRt?
I've got listeners that are setting it when primary tab is opened or closed. That part seems to be working fine as if I open a case the solution opens as a subtab of case, if I open a new case it opens as a subtab of new case, etc, and logs are fine. I'll post the full page when I'm back home in case it's helpful.
I have taken your code done a small change and it is working as expected.
For the sake of my testing, rather than Solutions, I am opening a hardcoded Lead record.
Before clicking on the Click here URL, I am clicking on the 'Refresh Primary Tab Id' button so that it can recognize the tab id.
I think that if your Solution record opens in a subtab every time, it may be because in the Console App configuration you have chosen Solution to open as a subtab.
That's why I have taken a Lead record, which is in no way related to any tab defined in my Console App.
<apex:page standardController="Case">
<apex:includeScript value="/support/console/29.0/integration.js"/>
<script type="text/javascript">
var primaryTabIdxRt;
function refreshPrimaryTabId() {
alert('inside refreshPrimaryTabId');
sforce.console.getFocusedPrimaryTabId(processTab, true);
}
var processTab = function processTab(result) {
primaryTabIdxRt = result.id;
alert('primaryTabIdxRt=' + primaryTabIdxRt);
}
function testOpenPrimaryTab() {
recURL = 'https://cs21.salesforce.com/00Qq0000004iHC2'; //lead id
sforce.console.getFocusedPrimaryTabId(processTab, true);
alert('OpenSolutionInitial = ' + primaryTabIdxRt);
if (primaryTabIdxRt == null || primaryTabIdxRt == 'null') {
alert('OpenInPrimaryTabNull = ' + primaryTabIdxRt);
sforce.console.openPrimaryTab(null,recURL,true);
}
else if (primaryTabIdxRt == undefined) {
alert('OpenInPrimaryTabUnd = ' + primaryTabIdxRt);
sforce.console.openPrimaryTab(null,recURL,true);
}
else {
alert('OpenInSubtab = ' + primaryTabIdxRt);
sforce.console.openSubtab(primaryTabIdxRt,recURL,true);
}
window.location.reload();
}
</script>
<apex:form >
<a href="#" onclick="testOpenPrimaryTab(); return false;">Click here</a>
<apex:commandButton onclick="refreshPrimaryTabId(); return false;" title="ClickMe" value="Refresh Primary Tab ID" rerender="selList" />
</apex:form>
</apex:page>
Update
When there is no primary tab, JavaScript actually receives the string "null". So you have to check like this:
if (primaryTabIdxRt == null || primaryTabIdxRt == 'null')
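The gotcha can be isolated in a tiny helper. A sketch (the "scc-pt-0"-style id below is only a hypothetical example of what a real tab id might look like):

```javascript
// treat undefined, null, and the literal string "null" (what the console
// API hands back when no primary tab is focused) as "no primary tab"
function hasNoPrimaryTab(id) {
    return id === undefined || id === null || id === "null";
}

console.log(hasNoPrimaryTab(undefined));   // true
console.log(hasNoPrimaryTab("null"));      // true
console.log(hasNoPrimaryTab("scc-pt-0"));  // false
```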
Brilliant idea. Really thought that would do it but just checked and solutions set to open as primary tab. I removed the solutions object from the console app entirely and it still is behaving the same. Thx for testing w leads, that was a great approach.
Please take this code and work on that, the way I have tested it is working
thanks @santanu, I have tried to do so, and have added an update w/ the modified function, but its still behaving the same way for me. I stripped out all the functionality from the openSolution method and am just displaying alerts, but if primaryTabIdxRt is null, it does not follow the proper path (tried with == or with ===). I'm Wondering if i can try another way to evaluate null. My javascript is unfortunately very limited, using this exercise to learn!
Please find the updated answer regarding 'null' checking, you have to check "null" String and it is working for me correctly. Please accept and close this question.
checking for the string null in addition appears to have solved it. thanks for working through it - this was driving me crazy.
Rcpp zeros of a quadratic equation in two variables
I am new to Rcpp so I apologize in advance if this question is simple to answer. I searched on the web but couldn't find much help and I am hoping the savviness in this forum can help me!
I have an existing code in R using Rcpp, and I need to add to this code the following. I have a quadratic function in two variables, f(x, y), and I need to find the zeros of it:
f(x, y) = (x + by + c)' W (x + by + c)
where the unknowns are x and y. That is, I am interested in finding the set of pairs (x, y) that satisfy f(x , y)=0.
Note: This is a simulation exercise where I need to find the zeros of this function for different values of a, b, c and W. Therefore, I need to code this in a mechanical way (I cannot just find the solution, for instance, by graphical inspection). Both variables are continuous, and I don't want to work with a grid for (x,y) to see when f(x,y)=0. I need a more general/optimization solution. I don't really know what values (x,y) can take.
So, you need a numerical optimization routine. Try RcppNumerical.
Well, I don't need to minimize the function. I need to find the zeros or roots...
Demerit points for cross-posting rcpp-devel, particularly as you failed to give any more relevant details.
What do the notations stand for in your function? Are x, y scalars? Or vectors?
I cross posted to reach out to more people. I thought rcpp-devel may be a more specific audience. I really need help!
Good question. In the general problem, x and y are vectors, and W is a matrix. Dimensions are such that f(x,y) is scalar. I can simplify the problem and assume everything is scalar, including W. It will give a quadratic form in two scalar variables.
Before diving into the numerical part I think you should define this question in a better way. Here I assume that x, y, and c are vectors, and b is a scalar.
A quick observation is that, if W is positive definite, then f(x, y) = 0 implies that x + by + c = 0. If both x and y are free variables, then the solution is not unique. For example, if (x, y) is a solution, then (x - b, y + 1) (element-wise operations) is also a solution.
If W is indefinite, then the equation also has multiple solutions. I just give a very simple example here. Imagine that W is a 2x2 diagonal matrix with 1 and -1 on the diagonal. Then as long as x + by + c = (t, t)' for any t, the function value is exactly zero.
In short, under my assumption on the notations, the equation has an infinite number of solutions. I believe you need additional restrictions to make it unique.
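The positive definite observation can be checked numerically. A small NumPy sketch under this answer's reading (x, y, c vectors, b scalar; W, b and c generated randomly, purely illustrative): for positive definite W, f vanishes exactly on the affine set x = -b*y - c.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 2
A = rng.standard_normal((d, d))
W = A @ A.T + np.eye(d)        # positive definite by construction
b = 0.7                        # scalar, per this answer's reading
c = rng.standard_normal(d)

def f(x, y):
    r = x + b * y + c
    return r @ W @ r

y = rng.standard_normal(d)
x = -b * y - c                 # any such pair (x, y) is a zero of f
assert abs(f(x, y)) < 1e-12
print("f(x, y) =", f(x, y))
```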
Thank you for your reply. I know the equation has no unique solution. That is why I am interested in finding the set of pairs (x, y) that satisfy f(x , y)=0. W is a non-singular positive semi-definite matrix. There is a lot of math behind the problem, which I cannot write here (derivations are long and complex). I tried to give you the essence of the problem: find the roots of a quadratic form in two variables. This solution should contemplate all possible cases, as you would do with one variable based on the discriminant. I hope this helps.
"W is a non-singular positive semi-definite matrix." -- Doesn't this imply that W is actually positive definite?
| common-pile/stackexchange_filtered |
Android GUI crawler
Anyone knows a good tool for crawling the GUI of an android app? I found this but couldn't figure out how to run it...
Tried Robotium? You can get an ArrayList of Views in an activity and iterate through them.
Hi,
@DevOfZot I'm new to android can you help me out to start from somewhere for using Robotium like a sample code or something.
Can you please specify what do you want to achieve?
Getting started with Robotium: https://code.google.com/p/robotium/wiki/Getting_Started
@dtmilano I want to have a tree structure from the GUI.
Have you check AndroidViewclient/culebra as one of the answers suggests?
@dtmilano Actually I have posted a question on your github(https://github.com/dtmilano/AndroidViewClient/issues/34) and moreover as I said I want to have a GUI tree of the whole application, Can you help me more about that?
Personally, I don't think it would be too hard to make a simple GUI crawler using MonkeyRunner and AndroidViewClient.
Also, you may want to look into uiautomator and UI Testing
Good is a relative term. I have not used Robotium, but it is mentioned in these circles a lot.
EDIT - Added example based on comment request.
Using MonkeyRunner and AndroidViewClient you can make a heirarchy of views. I think AndroidViewClient has a built-in mechanism to do this, but I wrote my own. This code tries to produce a layout similar to that used by the Linux tree command. The space and line variables are used to setup the "prefix" prepended on each line. Of course, this code uses a depth-first, recursive traversal.
def printViewListDepth(view, depth=0):
space = '|' * int(not depth == 0)
space += (' ' * 2 * (depth-1)) + '|' * int(not depth-1 <= 0)
line = '_' * int(not depth == 0) * 2
text = view.getText()
text = text[:10] + int(len(text) > 10) * '...'
print " [*] %s%s%s %s %s" % (
space, line, view.getUniqueId(),
view.getClass().replace('android.widget.', ''), text)
for ch in view.children:
printViewListDepth(ch, depth+1)
You call printViewListDepth as follows, using a ViewClient returned by AndroidViewClient:
printViewListDepth(viewClient.root)
Note that in the above implementation, the class of View is truncated, by removing "android.widget." and the the text of a View is truncated at 10 characters. You can change these to suit your needs.
Edit Crawling the GUI
With AndroidViewClient you can query whether a View is clickable, someView.isClickable(). If it is clickable, you can invoke a touch event on it, someView.touch(). Assuming most button clicks open a different Activity, you will need to come up with a mechanism of getting back to where you came from to do the recursion.
I imagine that it can be done with some effort, but you may want to start with something as simple as invoking a BACK button press, device.press('KEYCODE_BACK', MonkeyDevice.DOWN_AND_UP). You will probably need to handle application-specific special cases as they arise, but this should get you off to a decent start.
@Ehsan Yes, but not at the moment. Will update answer with code example tomorrow.
Yeah, I guess your answer is exactly what I want, but the thing is that I have no idea how I should run these scripts or how to use MonkeyRunner and AndroidViewClient. I read some tutorials but couldn't figure it out. Do you have any idea what I should do?
Great example, and I finally ran it :) But here is my question: your sample just shows the current tree structure of the GUI, right? I thought if I set the depth variable to, say, 1, it would go one layer deep into every clickable action and print the GUI. Can you explain the depth variable more?
The depth variable only controls the indentation of the printed text. This code example does not crawl through the clickable items
So do you have any idea how can I do that? I mean how can I Crawl the clickable items?:)
@Ehsan I updated my answer with a few details on how I think you could go about this. If you are satisfied with my answer, please consider accepting it.
I accepted your answer since it is really good:) and thanks for the good points and do you have any idea where can I read about the properties of ViewClient
@Ehsan Thanks. Here are some links for AndroidViewClient: Blog: http://dtmilano.blogspot.com/ Wiki: https://github.com/dtmilano/AndroidViewClient/wiki Source: https://github.com/dtmilano/AndroidViewClient/blob/master/AndroidViewClient/src/com/dtmilano/android/viewclient.py
You can take a look at Testdroid App Crawler
This is available in Testdroid Cloud as project type and you can easily run it on 15 free devices.
Sounds interesting. Do you have any link that I can use for running?
do you have any idea whether I can get the GUI tree of an application by Testdroid or not?
You can't, there are android tools for that - hierarchy viewer or hierarchy dumper within UIAutomator
| common-pile/stackexchange_filtered |
How to include <glib.h> in android cmake project
I have created an android project on windows with ndk template. I want to include ndk-build project into android studio. This ndk-build project is working fine separately when I run ndk-build command.
My requirement is to convert and use it in android studio so that I can debug the code on android mobile. At this time I am just using share library (so file) and call the required function from adb shell.
I have copied all the source files of my separate ndk project into my android studio project and also added them into native-lib(created by default by android studio) library. But, I am getting an exception on including glib.h.
I am not sure actually how to resolve it.
Please suggest something.
post your CMakeLists.txt or Android.mk and error logs please
Read this:https://developer.gimp.org/api/2.0/glib/glib-compiling.html
I usually use the following to compile:
gcc `pkg-config --cflags --libs glib-2.0 dbus-glib-1` progname.c
Hope this will help you.
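For the CMake side of the original question, the usual desktop pattern is to let pkg-config supply the flags. A sketch with a hypothetical target name (note that for Android you would additionally need GLib built for each target ABI, since the NDK does not ship it):

```cmake
# Hypothetical CMakeLists.txt fragment: pull GLib flags from pkg-config.
find_package(PkgConfig REQUIRED)
pkg_check_modules(GLIB REQUIRED glib-2.0)

target_include_directories(native-lib PRIVATE ${GLIB_INCLUDE_DIRS})
target_link_libraries(native-lib ${GLIB_LIBRARIES})
```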
| common-pile/stackexchange_filtered |
#1052 - Column 'lat' in field list is ambiguous in a mile radius query, can't figure out why
I've got a rather lengthy query I have been working with that is throwing the error '#1052 - Column 'lat' in field list is ambiguous'. I have broken it into parts and each part seems to work fine, but when I run it all at once I get this error. Here is the query:
SELECT lesson_requests_global_2.student_name,
(3959 * ACOS(COS(RADIANS(30.096595)) * COS(RADIANS(lat)) * COS(RADIANS(lng) - RADIANS(- 81.718983)) + SIN(RADIANS(30.096595)) * SIN(RADIANS(lat)))) AS distance,
lesson_requests_vendor.user_purchased
FROM lesson_requests_global_2
INNER JOIN
( SELECT student_name,
MAX(request_date) AS max_request_date
FROM lesson_requests_global_2
WHERE ( 3959 * ACOS(COS(RADIANS(30.096595)) * COS(RADIANS(lat)) * COS(RADIANS(lng) - RADIANS(- 81.718983)) + SIN(RADIANS(30.096595)) * SIN(RADIANS(lat))) ) < 30
GROUP BY student_name ) AS recent_student_lesson_request ON lesson_requests_global_2.student_name = recent_student_lesson_request.student_name
AND lesson_requests_global_2.request_date = recent_student_lesson_request.max_request_date
LEFT JOIN lesson_requests_vendor ON v.user_purchased = lesson_requests_global_2.student_name
WHERE lesson_requests_vendor.user_purchased <> 'bob jones'
AND distance < 30
ORDER BY distance LIMIT 0 , 20
Please note that the long COS/RADIANS expression looks complicated, but it computes a mile-radius (great-circle) distance. I think that somehow 'lat' within those formulas is being treated as ambiguous in the field list?
Thanks in advance for your help!
Have you looked at http://stackoverflow.com/questions/431391/php-mysql-how-to-resolve-ambiguous-column-names-in-join-operation?
please see my comment to DonCallisto's answer. I think the issue here is with the mile radius formula which includes 'lat' in it but lat is not a column in the tables I am pulling. I am only using that part of the code to calculate distance
It's very simple.
You join on the same table you select from, so you'll have two columns with the same name.
If you don't put the table name before your field name, this will produce a SQL error.
You can do something like this:
SELECT .... FROM lesson_requests_global_2 request
INNER JOIN
( SELECT ..... FROM lesson_request_globals_2 .....)
....
WHERE ....
and rename every occurrence of lat to request.lat
request is now an alias for your table name: "virtually" the first one you're selecting from.
thanks for replying. the problem is, the word 'lat' is not from my table, it is part of the Haversine formula so if i understand it correctly it is only used to calculate distance and not part of the table structure. i tried your suggestion above (renaming lat to request.lat and i didn't have any luck. any other ideas?
Sounds like both lesson_requests_global_2 and lesson_requests_vendor have a column called 'lat'. You need to specify which table you want to query it from:
SELECT lesson_requests_global_2.student_name,
(3959 * ACOS(COS(RADIANS(30.096595)) * COS(RADIANS(lesson_requests_global_2.lat)) * COS(RADIANS(lesson_requests_global_2.lng) - RADIANS(- 81.718983)) + SIN(RADIANS(30.096595)) * SIN(RADIANS(lesson_requests_global_2.lat)))) AS distance,
lesson_requests_vendor.user_purchased
FROM lesson_requests_global_2
INNER JOIN
( SELECT student_name,
MAX(request_date) AS max_request_date
FROM lesson_requests_global_2
WHERE ( 3959 * ACOS(COS(RADIANS(30.096595)) * COS(RADIANS(lesson_requests_global_2.lat)) * COS(RADIANS(lesson_requests_global_2.lng) - RADIANS(- 81.718983)) + SIN(RADIANS(30.096595)) * SIN(RADIANS(lesson_requests_global_2.lat))) ) < 30
GROUP BY student_name ) AS recent_student_lesson_request ON lesson_requests_global_2.student_name = recent_student_lesson_request.student_name
AND lesson_requests_global_2.request_date = recent_student_lesson_request.max_request_date
LEFT JOIN lesson_requests_vendor ON v.user_purchased = lesson_requests_global_2.student_name
WHERE lesson_requests_vendor.user_purchased <> 'bob jones'
AND distance < 30
ORDER BY distance LIMIT 0 , 20
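Once every column is qualified, short table aliases keep the query readable. A sketch of the same idea, untested against the real schema (the inner join on the latest request is omitted for brevity; also, since distance is a column alias it is not visible in WHERE, so MySQL needs HAVING or the full expression repeated):

```sql
SELECT g.student_name,
       (3959 * ACOS(COS(RADIANS(30.096595)) * COS(RADIANS(g.lat))
             * COS(RADIANS(g.lng) - RADIANS(-81.718983))
             + SIN(RADIANS(30.096595)) * SIN(RADIANS(g.lat)))) AS distance,
       v.user_purchased
FROM lesson_requests_global_2 AS g
LEFT JOIN lesson_requests_vendor AS v ON v.user_purchased = g.student_name
WHERE v.user_purchased <> 'bob jones'
HAVING distance < 30
ORDER BY distance
LIMIT 0, 20;
```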
| common-pile/stackexchange_filtered |
Blazor display enum value in UI. value not showing
I am using a Blazor web client to display an enum value in the UI; however, when I run the client and a value is assigned, nothing displays. I have taken this code from the GetName method documentation, but it displays nothing on the page.
my razor component
<div>
<p>Nationality: @Enum.GetName(typeof(Nationality), user.Nationality).ToString()</p>
<p>Favourite Medium: @Enum.GetName(typeof(Medium), user.FavouriteMedium).ToString()</p>
</div>
@code {
[Parameter]
public UserViewModel user {get; set;}
}
When the page is loaded, no error is given and the value is shown as blank. I followed this method from the documentation here:
https://learn.microsoft.com/en-us/dotnet/api/system.enum.getname?view=net-6.0
Edit.
The issue here was a view-model conversion point in my API that was not sending the enum value to the user model. Because of this, a 0 was passed where the enum values for these objects start at 1, so nothing was displayed by the component where the @code block was inserted into the HTML.
thanks guys!
Is nothing displayed on the page? Or is the value after Nationality: not displayed?
I found the issue to be that the enum was getting a value of 0 when the first value of the enum was set to 1; therefore no value was being returned from the GetName() method.
Not sure what your code is doing, but here's a dirty demo that shows you how to display an enum in a Blazor component.
@page "/"
<h1>Hello, your @nationality</h1>
<div>
<button class="btn btn-dark" @onclick=ChangeNationality>Change Nationality</button>
</div>
@code {
private Nationality nationality = Nationality.French;
private void ChangeNationality()
=> nationality++;
public enum Nationality
{
French,
English,
Welsh,
Portuguese,
Spanish
}
}
The issue was with a 0 value being passed into the code. I explain this in the answer i posted earlier. =) thanks for your code!
NP - note you don't need to do the @Enum.GetName(typeof(Nationality), user.Nationality).ToString(), @user.Nationality will work.
I found the issue to be that the enum was getting a value of 0 from the model object when the first value of the enum was set to 1; therefore no value was being returned from the GetName() method.
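For anyone hitting the same symptom, it is easy to reproduce in isolation: Enum.GetName returns null when the numeric value has no named member (here 0, because the enum members start at 1), and null renders as nothing. A minimal hedged sketch:

```csharp
using System;

enum Nationality { French = 1, English = 2 }

class Demo
{
    static void Main()
    {
        Nationality unset = default(Nationality);   // numeric value 0, no matching member
        Console.WriteLine(Enum.GetName(typeof(Nationality), unset) ?? "(null)");    // prints "(null)"
        Console.WriteLine(Enum.GetName(typeof(Nationality), Nationality.English));  // prints "English"
    }
}
```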
| common-pile/stackexchange_filtered |
UserNotFound mongodb error
I know there are a lot of articles with the same issue and I have explored every other article, trying everything, and still this error persists.
Here are the steps I followed:
use admin.
created an admin user with role: dbAdminAnyDatabase
use mydb.
created an admin user with different name for mydb
created a readonly user with: db.createUser({user: "nameuser", pwd: "password", roles: [{role: "read", db: "mydb"}]});
setup a mongod.conf file with:
security:
authorization: enabled
net:
port: 12345
bindIp: <IP_ADDRESS> #default value is <IP_ADDRESS>
then run the command: sudo mongod --port 27017 --dbpath ~/data/mongodb --config /etc/mongod.conf
and then on localhost i run the command to check:
mongo -u nameuser -p password localhost:12345/mydb
then the error come as:
connecting to: mongodb://localhost:12345/mydb MongoDB server version: 3.6.3 2020-10-04T03:05:37.054+0530 E QUERY [thread1] Error: Authentication failed. : DB.prototype._authOrThrow@src/mongo/shell/db.js:1608:20 @(auth):6:1 @(auth):1:2 exception: login failed
and on server side the error looks like this:
2020-10-04T02:53:11.835+0530 I NETWORK [listener] connection accepted from <IP_ADDRESS>:60240 #3 (1 connection now open) 2020-10-04T02:53:11.835+0530 I NETWORK [conn3] received client metadata from <IP_ADDRESS>:60240 conn: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "3.6.3" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "18.04" } } 2020-10-04T02:53:11.836+0530 I ACCESS [conn3] SCRAM-SHA-1 authentication failed for nameuser on mydb from client <IP_ADDRESS>:60240 ; UserNotFound: Could not find user nameuser@mydb 2020-10-04T02:53:11.837+0530 I NETWORK [conn3] end connection <IP_ADDRESS>:60240 (0 connections now open)
Now please tell me if I can try anything new. I have tried so many articles from Stack Overflow, but none address the issue; every resolved issue covers a common mistake which I have already checked and am not making. Tell me if you have any suggestions to work this out.
Try running db.getUsers() in each database to see which user is where.
The server says the user doesn't exist. Are you sure step 5 succeeded?
You need to use the
--authenticationDatabase admin
parameter, because even if you give "use mydb" before creating the user, the user is created in the admin database.
So,
mongo -u nameuser -p password --authenticationDatabase admin localhost:12345/mydb
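If it is unclear which database the account actually ended up in, listing the users from the mongo shell settles it. A quick check, using the names from the question:

```
use admin
db.getUsers()    // is "nameuser" listed here?
use mydb
db.getUsers()    // ...or here?
```

Whichever database lists the user is the value to pass to --authenticationDatabase.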
I tried this. There was still an error. I tried making an owner of the same name for that database too, and then tried this command, but there is still the same error.
| common-pile/stackexchange_filtered |
Ubuntu 22.04 struggles to mount exfat flash drive with a .Spotlight-V100 folder from macOS
I have a flash drive which I am using to transfer files between several computers, including computers running Ubuntu 20.04, Ubuntu 22.04, macOS 12.6, and Windows 10. The flash drive was formatted as exFat with a GPT partition table using gparted in Ubuntu 22.04, using exfatprogs which was installed automatically.
The flash drive works fine except when trying to mount it in Ubuntu 22.04 after using it with macOS. Some further experimentation shows that there is something about the contents of the .Spotlight-V100 folder that macOS creates on the drive that Ubuntu 22.04 cannot handle.
When trying to mount the drive when it has a Spotlight folder on it, one of several things happens:
if I try to access it through the Files GUI, it hangs for a while and then the drive completely disappears (it doesn't even register as a device in df or gparted)
if I try to cd and ls, it hangs for a while and then either the device disappears or ls succeeds but throws errors such as "ls: reading directory '.': Input/output error"
Removing the Spotlight folder causes the drive to work just fine. Adding the spotlight folder back, even if I added a copy of the folder using an OS such as Ubuntu 20.04, causes the drive to have issues again. I only ever have these issues with Ubuntu 22.04; all 3 other operating systems I have tried have no issues with the spotlight folder.
Any insight or suggested fixes would be appreciated.
| common-pile/stackexchange_filtered |
One choice in multiple choice
I am trying to use the filter function to get a count of all candidates that have a bachelor's degree or higher. The problem is I can get the count, but if, say, a candidate has both a bachelor's and a master's, I only want one of the degrees. Is there a way to modify this code to make sure candidates that have multiple degrees are counted only once?
FILTER("Fact - Count"."# of Applications" USING (("Candidate Education"."Highest Level Education" IN ('Bachelor''s Degree', 'Higher Degree')) AND ("Candidate Education"."Graduated" = 'Yes')))
Why don't you nest it?
FILTER(FILTER(A USING B) USING C) ?
| common-pile/stackexchange_filtered |
Clean way to convert CURL to Jquery/Javascript [Instagram]
I'm trying to convert the curl command from the instagram api to jquery(ajax/get).
curl -F 'client_id=CLIENT_ID' \
-F 'client_secret=CLIENT_SECRET' \
-F 'grant_type=authorization_code' \
-F 'redirect_uri=AUTHORIZATION_REDIRECT_URI' \
-F 'code=CODE' \
https://api.instagram.com/oauth/access_token
From the doc :
"In order to make this exchange, you simply have to POST this code,
along with some app identification parameters"
var clientId = "my client id";
var clientSecret = "my client secret";
var redirectURI = "http://localhost:3000/instagram";
var myCode = "my instagram code";
var uri = 'https://api.instagram.com/oauth/access_token?client_id=' + clientId + '&client_secret=' + clientSecret + '&grant_type=authorization_code&redirect_uri=' + redirectURI + '&code=' + myCode;
var url = encodeURIComponent(uri);
$.ajax({
type: "POST",
dataType: "json",
url: ' https://api.instagram.com/oauth/access_token?client_id=' + clientId + '&client_secret=' + clientSecret + '&grant_type=authorization_code&redirect_uri=' + redirectURI + '&code=' + myCode,
success: function (result) {
console.log(result);
}
});
Doesn't seem to work for me.
I'm getting a Cross-Origin Request Blocked [CORS] error.
interesting error you have in the browser console Doesn't seems to work for me
Getting Cross Origin Block Request [CORS] error ^^
First of all, in order to send values in the query string, use the .data property of the options object you pass to jQuery.ajax. And if you want to build the URL programmatically, do not forget to URL-encode the values you send via encodeURIComponent.
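Concretely, that means letting jQuery encode the parameters rather than concatenating the query string by hand. A sketch (note that this token-exchange endpoint does not send CORS headers, so calling it from the browser will still be blocked; it is designed to be called server-side):

```javascript
// Illustrative only: in production this exchange belongs on your server.
$.ajax({
  type: "POST",
  url: "https://api.instagram.com/oauth/access_token",
  dataType: "json",
  data: {                       // jQuery URL-encodes each value for you
    client_id: clientId,
    client_secret: clientSecret,
    grant_type: "authorization_code",
    redirect_uri: redirectURI,
    code: myCode
  },
  success: function (result) {
    console.log(result);
  }
});
```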
Try with jsonp. Something like:
function myinstagramfunction (json_object)
{
// I fetch here the json object that contains the info of all the pictures returned
console.log("OK");
}
$.ajax({
type: "GET",
dataType: "jsonp",
url: 'https://api.instagram.com/v1/tags/YOUR_HASHTAG/media/recent?access_token=YOUR_ACCESS_TOKEN_HERE&callback=myinstagramfunction',
success: function (result) {
console.log(result);
}
});
| common-pile/stackexchange_filtered |
Binding XPages check box group to managed bean
How does one go about binding the value of an XPages "Check Box Group" to a Managed Bean so that multiple values can be loaded and saved without binding each check box individually? I'm able to bind text values on the page just fine, but nothing seems to work with the check box group. I have tried using a String with comma separated values, which is how it is stored in a Notes document, as well as a Vector to no avail.
Check this answer: http://stackoverflow.com/a/20481436/785061
See my comment below: on 901 a getter/ setter with a java.util.ArrayList should work. BTW: you say that it's stored in Notes as a comma separated string. That is not correct: by default it should be stored as a multi-value list.
The issue with checkboxes is that value binding to bean fields isn't consistent. Single value -> set string value. Multiple values -> set list value.
At least it was like that a while back.
This is the technique I use to work around the issue:
http://dontpanic82.blogspot.no/2012/06/multi-value-fields-and-beans-in-xpages.html
That's what I thought too. But... just tested this with a basic setup and the values (now) always seem to be send as a java.util.ArrayList , also if only a single value is checked. This was on a 9.0.1 server.
Ahh.. Then IBM might have fixed it. Thanks for testing :)
Thanks guys. We're not on 9.0.1 yet, but good to know that they've changed this.
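For reference, on releases where the control consistently submits a java.util.List, a plain bean property along these lines should be enough to bind the whole group at once. The names here are illustrative, not from the original application:

```java
import java.io.Serializable;
import java.util.ArrayList;
import java.util.List;

// Illustrative managed bean: bind the check box group's value to #{myBean.degrees}.
public class MyBean implements Serializable {
    private static final long serialVersionUID = 1L;
    private List<String> degrees = new ArrayList<String>();

    public List<String> getDegrees() {
        return degrees;
    }

    public void setDegrees(List<String> degrees) {
        // XPages hands the checked values over as a list of strings
        this.degrees = (degrees != null) ? degrees : new ArrayList<String>();
    }
}
```

On the XPage, the check box group's value would then point at the property, e.g. value="#{myBean.degrees}".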
| common-pile/stackexchange_filtered |
segmentation fault while heapifying
I was simply heapifying an array in C. But while running it's giving me segmentation fault(core dumped)... i have no idea where i am trying to access unallocated memory!!
#include<stdio.h>
int n;
int left(i)
{
    return (2*i);
}
int right(i)
{
    return (2*i + 1);
}
void min_heap(int a[],int i)
{
    int l=left(i);
    int r=right(i);
    int min;
    if((l<=n)&&(a[l]<=a[i])&&(a[l]<=a[r]))
    {
        min=a[l];
        a[i]=a[i]+a[l];
        a[l]=a[i]-a[l];
        a[i]=a[i]-a[l];
    }
    else if((r<=n)&&(a[r]<=a[i])&&(a[r]<=a[l]))
    {
        min=a[r];
        a[i]=a[i]+a[r];
        a[r]=a[i]-a[r];
        a[i]=a[i]-a[r];
    }
    min_heap(a,min);
}
int main()
{
    printf("The no is : ");
    scanf("%d",&n);
    int i,a[n+1];
    for(i=1;i<=n;i++)
    {
        scanf("%d",&a[i]);
    }
    for(i=n/2;i>=1;i--)
    {
        min_heap(a,i);
    }
    for(i=1;i<=n;i++)
    {
        printf("%d",a[i]);
    }
    return 0;
}
Not readable code!
I'm not clear what the computations in the if and else block are supposed to be doing; it doesn't look like a straight-forward swap. You check that l <= n but if l == n then r > n and you will have problems as you access a[r]. If your array is of size 10 (n == 10) and the values in your array are 1000000 and upwards, your use of min = a[l]; or min = a[r]; means you'll recurse on an array that's supposed to be vastly bigger than it actually is. And having a global variable n is tacky; pass it as a parameter to the functions.
@JonathanLeffler: those are swaps (but not straightforward). I'd suggest changing them to be plain old swaps using a temporary. Especially since min is already there as a temporary.
I agree that they are swaps, but I'd shoot anyone who sent me code to review that swapped using that many irrelevant operations. It's not only obscure, it's wasteful. Granted, waste is relative, but ...
You call min_heap(a,i) when i == n/2.
In this case, inside min_heap() the call to right() will return in effect:
(2 * (n/2) + 1)
When n is even that will result in a right index of n+1 and accessing a[r] (with r == n+1) is beyond the end of the array you've allocated.
I'm not sure if this is the reason for your segfault; I'd guess there may be other problems.
You should probably just step through a run with a debugger.
Here is some code from More Programming Pearls by Jon Bentley, written as comments in a C file. The full code is not relevant to you; it is generic interface like the interface to bsearch() and qsort(), but this is written in awk.
/*
** See Appendix 2 of Jon Bentley "More Programming Pearls".
** See also Column 14 of Jon Bentley "Programming Pearls, 2nd Edn".
** Note that MPP algorithms are in terms of an array indexed from 1.
** C, of course, indexes arrays from zero.
**
** 1-based identities.
** root = 1
** value(i) = x(i)
** leftchild(i) = 2*i
** rightchild(i) = 2*i+1
** parent(i) = i/2
** null(i) = (i < 1) or (i > n)
**
** 0-based identities.
** root = 0
** value(i) = x(i)
** leftchild(i) = 2*(i+1)-1 = 2*i+1
** rightchild(i) = 2*(i+1)+1-1 = leftchild(i)+1
** parent(i) = (i+1)/2-1
** null(i) = (i < 0) or (i >= n) # NB: i < 0 irrelevant for unsigned numbers
*/
/*
** function swap(i, j t) {
** # x[i] :=: x[j]
** t = x[i]
** x[i] = x[j]
** x[j] = t
** }
**
** function siftup(l, u, i, p) {
** # pre maxheap(l, u-1)
** # post maxheap(l, u)
** i = u
** while (1) {
** # maxheap(l, u) except between i and its parent
** if (i <= l) break
** p = int(i/2) # p = parent(i)
** if (x[p] >= x[i]) break
** swap(p, i)
** i = p
** }
** }
**
** function siftdown(l, u, i, c) {
** # pre maxheap(l+1, u)
** # post maxheap(l,u)
** i = l
** while (1) {
** # maxheap(l, u) except between i and its children
** c = 2*i # c = leftchild(i)
** if (c > u) break;
** if (c + 1 <= u && x[c+1] > x[c]) c++
** if (x[i] >= x[c]) break
** swap(c, i)
** i = c
** }
** }
**
** function hsort( i) {
** # post sorted(1, n)
** for (i = int(n/2); i >= 1; i--)
** siftdown(i, n)
** for (i = n; i >= 2; i--) {
** swap(1, i)
** siftdown(1, i-1)
** }
** }
*/
In the code, the array being sorted is x, indexed from 1 to N.
| common-pile/stackexchange_filtered |
transform an array of objects into an array containing arrays of objects
I have an array of objects like this:
[{...}, {...}, {...}, {...}, {...}]
An object looks like this:
{
id: ...
name: ...
association: {
id: ...
}
}
I'd like to collect objects with the same association id and get an array like this:
[ [ { ... association { id: 1} }, { ... association { id: 1} } ], [ { ... association { id: 2 } } ] ]
How can I do this?
Sounds like you're looking for a function that will return an array of objects that contain an association id that is provided?
I just want to gather the different objects that contain the same value in the association id property
Sounds like you're looking for a function that will return an array of objects that contain an association id that is provided
const data = [{...},{...},{...}]
const getByAssociationID = (source, id) => source.filter(obj => obj.association.id === id)
console.log(getByAssociationID(data, id))
This should group the data as you describe
function groupByAssociation(data) {
    return data.reduce((list, value) => {
        let added = false;
        list.forEach(group => {
            if (group[0].association.id === value.association.id) {
                group.push(value);
                added = true;
            }
        });
        if (!added) {
            list.push([ value ]);
        }
        return list;
    }, []);
}
Use forEach to build an object with keys as association.id, accumulating the values.
const data = [
{
id: 1,
name: "blah",
association: {
id: "a1"
}
},
{
id: 2,
name: "foo",
association: {
id: "a2"
}
},
{
id: 3,
name: "test",
association: {
id: "a2"
}
}
];
const process = data => {
    const obj = {};
    data.forEach(item => {
        const aId = item.association.id;
        const newItem = obj[aId] || [];
        newItem.push(item);
        obj[aId] = newItem;
    });
    return Object.values(obj);
};

console.log(process(data));
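Both answers above work; for what it's worth, a Map keyed on the association id does the grouping in a single pass and keeps insertion order. A sketch over the same shape of data (names are illustrative):

```javascript
function groupByAssociationId(items) {
  const groups = new Map();
  for (const item of items) {
    const key = item.association.id;
    if (!groups.has(key)) groups.set(key, []);
    groups.get(key).push(item); // items sharing an id accumulate in one array
  }
  return [...groups.values()]; // array of arrays, one per association id
}

const sample = [
  { id: 1, name: "a", association: { id: 1 } },
  { id: 2, name: "b", association: { id: 2 } },
  { id: 3, name: "c", association: { id: 1 } },
];
console.log(groupByAssociationId(sample));
```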
| common-pile/stackexchange_filtered |
Hell yeah! - How to express great joy?
I'm looking for some exclamations to be used in comics to express great joy and pride, such as:
"I'm a good boy!",
"Hell yeah!",
"I'm so great!"
or similar. Any suggestion?
好极了！ Great! 那真好极了！ That's really cool! 好棒啊！/太棒了！ Terrific!
@ Drunken Master: Nice! Any suggestion for something to be written on a T-shirt, like "I'm so cool!", "I'm so wonderful!"?
Would you post your thoughts in an answer, so that I can accept it and give you reputation?
@DrunkenMaster Indeed, 好极了 can be seen in many books, can be heard in foreign movies or shows dubbed into Chinese, but I've never heard that in real life, maybe because it sounds a little bookish. 好棒啊！/太棒了！ would be more common.
I believe that, due to the little readiness of Chinese people to openly brag about themselves, the answer should greatly vary with context.
@Stan "好极了" returns ~4.6 million results on Google, Baidu returns ~29.7 million, I think it's quite OK to use it, not "bookish" at all.
@DrunkenMaster not a surprise there're many results on Google. As my last comment mentioned, 好极了 is widely used in some situations (including manga books, of course). But for expressing great joy and pride (by bookish, I meant the character 了 is difficult to be shouted out aloud), I do observe very few samples in real life :) I just wanted to clarify this point.
倍儿棒, I'm not sure how cringeworthy this is now but there you have it.
It depends on what dialect you're writing in. For example, Cantonese (primarily used in Hong Kong) has slang versions that is different from Mandarin.
In Mandarin
I'm a good boy
乖仔 (this is in Cantonese and is a slang). It means, "Good boy!"
我很听话 (this is universal). It literally means, "I listen." But in practice it means, "I'm well behaved."
Hell yeah:
棒死了 (this means "fresh to death!")
The important word is "棒", which means "awesome."
Alternatively, some people use "酷" as a slang. This literally means "Cool" and sounds like "cool" in Chinese.
I'm so great
我好劲
The important word is "劲", which means "strength" or "great".
This is a slang term. But if someone read this, they would laugh.
It depends if you want to use some formal words or not.
informal (but normally stronger than formal words). I think it is OK to use these words in comics:
爽 or 爽翻了
牛B
活活羡死
One of the current trending slang terms for "great" is "碉堡了" or "屌炸天".
For a less extreme expression you can use "给力".
So for example to express "I am so great!": 我碉堡了。
"Hell yeah!": 给力！
It just strikes me to write a Python program to generate some possible translations because there are so many possible ways of saying this.
# the subject I (optional)
Im = ['', '我']
# the adverb 'so'
so = ['超', '超级', '真', '真的', '非常', '好', '很']
# slang (optional)
fucking = ['', '他妈']
# the adjective "great"
great = ['棒', '厉害', '强', '牛', '牛逼', '给力', '伟大', '优秀', '屌']
# 屌 is actually another slang, which is sometimes written as 吊
# the modal particle as ending (optional)
ending = ['啊', '呀', '的']
# 的 is actually not a modal particle
# I don't know how to explain this but it is often used at the end
for subject in Im:
    for adverb in so:
        for slang in fucking:
            for adjective in great:
                for end in ending:
                    print (subject+adverb+slang+adjective+end)
The result looks like this:
......
我超他妈优秀呀
我超他妈优秀的
我超他妈屌啊
我超他妈屌呀
我超他妈屌的
我真棒啊
我真棒呀
我真棒的
......
我很他妈棒的
我很他妈厉害啊
我很他妈厉害呀
我很他妈厉害的
我很他妈强啊
我很他妈强呀
......
However, some of them would seem unnatural, and there are certainly a lot more ways of expressing the same feeling, especially if you dig into the deep mysterious dialects.
This answer is just for reference and fun. :D
Warning, these are Internet slang.
必须的: supposed to be
A: You are so great.
B: 必须的.
没毛病: no shortcoming
A: Let's...
B: 没毛病/必须的
| common-pile/stackexchange_filtered |
how to inject sql in login process
I've received an old application which completely lacks user input sanitization and is vulnerable to SQL injection. To prove the gravity of the situation I need to give the client an example, and what can be better to scare him than the login process? I've tried standard techniques, but the problem with them is that they return multiple rows, and due to the nature of the code it returns an error instead of logging him in. What SQL should I inject so that only a single row is returned and execution reaches the "return $access" line, in order to pass the value of the "access" column to the code calling this login function? The request is made via the POST method and magic quotes are off on the server. Please let me know if you need any other information.
function login($username, $pw)
{
global $dbname, $connection, $sqluser, $sqlpw;
$db = mysql_connect($connection,$sqluser,$sqlpw);
mysql_select_db($dbname);
if(!($dba = mysql_query("select * from users where username = '$username' AND password = '$pw'"))){
printf("%s", sprintf("internal error5 %d:%s\n", mysql_errno(), mysql_error()));
exit();
}
$row = mysql_fetch_array($dba);
$access = $row['access'];
if ($access != ''){
return $access;
} else {
return "error occured";
}
mysql_close ($db);
}
Note: it turns out that magic_quotes_gpc is turned on and the PHP version is 5.2.17.
Thanks
The methods below will work, but here is a youtube video for handling multiple records on SQL injection: http://youtu.be/7H358PrFagc
I watched the video but it does not seem to provide anything on how to handle multiple records. Please correct me if i'm wrong.
Starting with the goal query:
SELECT *
FROM users
WHERE username = '' OR '1'='1'
AND password = '' OR 1=1 LIMIT 1;#'
We get username is ' OR '1'='1 and password is ' OR 1=1 LIMIT 1;#
It depends what values the login function is called with. If there's sanitation before passing it to the function it might actually be safe. However it's better to filter it right before the query so you can see that your built query is safe.
However if you have something like this:
login($_POST['user'], $_POST['pass']);
In that case just put foo' OR 1=1 OR ' in the user field in the login form :)
already tried this but does not work. Also the variables have not been sanitized before passing
do a var_dump() on the $username and $pw to check there's no magic going on. I literally mean magic, it could be that the dreaded magic_quotes_gpc is still enabled on the box for example.
Yes, it turns out that magic_quotes_gpc is enabled. Any way to get past that? :) The PHP version is 5.2.17.
| common-pile/stackexchange_filtered |
Cannot Quit the C# application using Sentinel Value
I have created an application that allows the user to enter numbers and provides the average of the numbers entered. The user is allowed to enter as many numbers as they choose. After each entry I display the current average.
I am not able to quit the application. I need help to find some problem in the code.
The code below is now corrected:
string input = "";
double noentered = 0;
double total=0;
double average = 0;
do
{
    Console.Write("enter a number or Q to quit", input);
    input = Console.ReadLine();
    if (input != "q" && input != "Q")
    {
        noentered++;
        total += Convert.ToInt32(input);
        average = total / noentered;
        Console.WriteLine(" Total: {0} \t Number Entered: {1} \t Average:{2}", total, noentered, average);
    }
} while (input != "q" && input != "Q");
Have you tried stepping thru in debugger and see what happens when you enter q?
Yes I did. But nothing is getting printed in variable "noentered"
If you type "q" or "Q" it exits the screen and Warning shows up "Input String is not in correct format"
Move your if-sentence to check the input before trying to convert it to an int:
string input = "";
double noentered = 0;
double total = 0;
double average = 0;
do
{
    Console.Write("enter a number or Q to quit", input);
    input = Console.ReadLine();
    if (input != "q" && input != "Q")
    {
        total += Convert.ToInt32(input);
        average = total / noentered;
        noentered++;
        Console.WriteLine(" Total: {0} \t Number Entered: {1} \t Average:{2}", total, noentered, average);
    }
} while (input != "q" && input != "Q");
do
{
Console.Write("enter a number or Q to quit", input);
input = Console.ReadLine();
if (input != "q" && input != "Q")
{
noentered++;
total += Convert.ToInt32(input);
average = total / noentered;
Console.WriteLine(" Total: {0} \t Number Entered: {1} \t Average:{2}", total, noentered, average);
}
} while (input != "q" && input != "Q");
This works. Please notice the placement of the if condition: the entered value is checked right after reading. In your code, the value is checked after one pass. Also, you might want to increment the count before calculating the average to get an accurate average value.
| common-pile/stackexchange_filtered |
How to get Scrollbars as needed with a TabPane as Content?
I have a BorderPane inside a TabPane inside a ScrollPane. The ScrollPane.ScrollBarPolicy.AS_NEEDED does work if I remove the TabPane and put the BorderPane as the content of the ScrollPane. How do I get this to work with the TabPane?
Somehow the BorderPane is able to tell the ScrollPane when to display scrollbars, while the TabPane is unable to do so. I looked through the available methods for the TabPane but couldn't find any for this resizing.
Working Example:
import javafx.application.Application;
import javafx.scene.Scene;
import javafx.scene.control.Button;
import javafx.scene.control.ScrollPane;
import javafx.scene.control.Tab;
import javafx.scene.control.TabPane;
import javafx.scene.layout.*;
import javafx.stage.Stage;
public class FXApplication extends Application {
private BorderPane border;
private GridPane inner;
private TabPane tabPane;
@Override
public void start(Stage primaryStage) {
tabPane = new TabPane();
Tab tab = new Tab("test");
tabPane.getTabs().add(tab);
border = new BorderPane();
border.setCenter(innerGrid());
tab.setContent(border);
ScrollPane scp = new ScrollPane();
scp.setFitToHeight(true);
scp.setFitToWidth(true);
scp.setVbarPolicy(ScrollPane.ScrollBarPolicy.AS_NEEDED);
scp.setHbarPolicy(ScrollPane.ScrollBarPolicy.AS_NEEDED);
// scp.setContent(border); // this works
scp.setContent(tabPane); // this doesn't
Scene s = new Scene(scp);
primaryStage.setScene(s);
primaryStage.show();
}
private GridPane innerGrid() {
inner = new GridPane();
for(int i=0; i<11 ;i++) {
ColumnConstraints columnConstraints = new ColumnConstraints();
columnConstraints.setHgrow(Priority.SOMETIMES);
inner.getColumnConstraints().add(columnConstraints);
RowConstraints rowConstraints = new RowConstraints();
rowConstraints.setVgrow(Priority.SOMETIMES);
inner.getRowConstraints().add(rowConstraints);
}
for(int i=0; i<100 ;i++) {
inner.add(new Button("Button " + i), i/10, i%10);
}
return inner;
}
public static void main(String[] args) {
FXApplication.launch(args);
}
}
good starter - but not complete (and with it probably not minimal :) CustomGridPane is missing, but you can probably replace it with some plain content
thx - removed CustomGridPane and made it "close to minimal"
good :) just nit-picking: you removed it from the method signature, but not from instantiation :)
Astonishingly, the exact behavior of AS_NEEDED is unspecified. All we have is the ScrollPaneSkin to look at. The decision whether or not to show the (f.i.) horizontal bar happens in its private method determineHorizontalSBVisible()
private boolean determineHorizontalSBVisible() {
final ScrollPane sp = getSkinnable();
if (Properties.IS_TOUCH_SUPPORTED) {
return (tempVisibility && (nodeWidth > contentWidth));
}
else {
// RT-17395: ScrollBarPolicy might be null. If so, treat it as "AS_NEEDED", which is the default
ScrollBarPolicy hbarPolicy = sp.getHbarPolicy();
return (ScrollBarPolicy.NEVER == hbarPolicy) ? false :
((ScrollBarPolicy.ALWAYS == hbarPolicy) ? true :
((sp.isFitToWidth() && scrollNode != null ? scrollNode.isResizable() : false) ?
(nodeWidth > contentWidth && scrollNode.minWidth(-1) > contentWidth) : (nodeWidth > contentWidth)));
}
}
Here nodeWidth is the actual width of the content node - has been calculated previously, respecting the node's min/max widths - and contentWidth is the width available for laying out the content.
Unreadable code (for me ;)). In the case of resizable content that fits into the ScrollPane's content area, it boils down to returning true if both the content's actual width and its min width are greater than the available width.
The minWidth makes the difference in your context: BorderPane has a min > 0, TabPane has a min == 0, so the method above always returns false.
The other way round: to allow the hbar to become visible with the TabPane, it needs a min width, e.g. by relating it to its pref width:
tabPane.setMinWidth(Region.USE_PREF_SIZE);
| common-pile/stackexchange_filtered |
How can I $_POST information to this page without going to it by submitting a form?
I'm using the following code:
$.ajax({
type:'GET',
url: 'save_desc.php?scrapbook_name=<?php print(addslashes($scrapbook_name)); ?>',
success: function(data){
$("#" + id + "below").html(data);
}
});
How can I change this to sending sensitive information (with "special" characters) by posting it rather than using the $_GET method?
NOTE: I've tried to use addslashes, but this doesn't have any effect when passing strings with wildcard characters.
Don't use addslashes, use json_encode(). addslashes was intended for database work, not JavaScript generation.
Change the type parameter to 'POST', or alternatively use jQuery's post() function:
$.post(
'save_desc.php',
{ scrapbook_name: '<?php print(addslashes($scrapbook_name)); ?>' },
function(data) {
$("#" + id + "below").html(data);
}
);
http://api.jquery.com/jQuery.post/
On top of the post comment from RoccoC5, you can use the jquery serialize function into a variable then use the variable in the post
var PostData = $(myform).serialize();
$.post("myphppage.php",
PostData,
function()
{
//completion code goes here
},
"html")
.error(function() { alert("error with post"); });
http://api.jquery.com/serialize/
http://api.jquery.com/jQuery.post/
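On the json_encode advice above: the reason it works where addslashes does not is that it emits a complete, quoted, escaped literal. The same guarantee in plain JavaScript (illustrative; on the PHP side you would call json_encode($scrapbook_name) when emitting the value):

```javascript
// Produce a string that is safe to paste into generated JavaScript source.
// JSON.stringify quotes and escapes everything, including quotes and newlines,
// which addslashes-style escaping handles incompletely.
function toJsLiteral(value) {
  return JSON.stringify(value);
}

var tricky = "O'Brien said \"hi\"\nsecond line";
console.log(toJsLiteral(tricky)); // a double-quoted, fully escaped literal
```

Parsing the emitted literal back gives the original string unchanged, which is exactly the round trip a generated script relies on.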
| common-pile/stackexchange_filtered |
AngularJS data-ng-repeat does not show any changes when Array was modified
Okay, so I have this custom directive that is responsible for showing Notes for a product that users have left.
<mx-note-manager-two isModalPopup="true" is-show-date-range="{{IsShowNoteDateRangeControl}}" recordType="BOM_Header" recordId="{{UniqKey}}" note-save-success-call="CheckItemLevelData" noteLabel={{translation.BOMNote}} record="{{SummaryGridPartNumber}}"></mx-note-manager-two>
That directive contains this div, responsible for showing all the notes located in $scope.MessageList
<div data-ng-repeat="message in MessageList" ng-class="!$first ? 'padding-top2px padding-left2px' :''">
... <!-- some divs to show data in message -->
</div>
So whenever we open a page that contains this directive, the controller sends an API request, and the response it gets is assigned to $scope.MessageList
MXNoteService.getMxTwoNotes('api/MXNote/GetMxTwoNotes', { params: { recordType: $scope.RecordType, recordId: $scope.RecordId, NoteCategory: 2 } }).then(function (responseData) {
...
$scope.MessageList = responseData.data.responseObject[0].ListOfNotes
...
if ($scope.IsFocus) {
setTimeout(function () {
$('#note-text-area').focus();
$scope.$apply();
}, 0);
}
...
}
And the directive itself
directive('mxNoteManagerTwo', function ($rootScope, MXNoteService) {
return {
restrict: 'E',
templateUrl: function (element, attributes) {
if (!$rootScope.IsNullOrUndefined(attributes.ismodalpopup) && attributes.ismodalpopup === 'true') {
return 'SinglePageApp/GenericControls/MXNoteManager/MXNote2Model.html';
} else {
return 'SinglePageApp/GenericControls/MXNoteManager/MXNote2.html';
}
},
scope: {
recordType: '@',
recordId: '@',
isEdit: '@',
isShowDateRange: '@',
isShowHeader: '@',
isEditDisable: '@',
isShowAttachment: '@',
isAddAttachment: '@',
record: '@',
keyValue: '@',
isShowCamera: '@',
isFocus: '@'
},
link: function (scope, element, attrs) {
//here we assign attributes
//...
MXNoteService.setValues = function (record, recordId, title, moduleName, isEdit) {
scope.Record = record;
scope.RecordId = recordId;
scope.NoteLabel = title ? title : scope.NoteLabel;
scope.RecordType = moduleName ? moduleName : scope.RecordType;
scope.IsEdit = !$rootScope.IsNullOrUndefined(isEdit) ? isEdit : scope.IsEdit;
scope.GetMxTwoNotes(scope.RecordType, scope.RecordId);
}
setTimeout(function () {
scope.GetMxTwoNotes();
}, 500);
},
controller: 'MXNote2Controller',
}
After we make any changes to $scope.MessageList, like deleting all the data and then overriding it with new data, the changes do not appear on the page, even after $scope.$apply() was called.
I've tried multiple solutions from StackOverflow, like adding $timeout, using angular.extend, and calling the digest cycle multiple times after each change to $scope.MessageList.
Your design seems like it might be getting in the way. First you're using a lot of interpolation when binding would be more appropriate. What you're encountering would be typical of attribute binding. Here is roughly what I'd expect:
<mx-note-manager-two isModalPopup="true"
is-show-date-range="IsShowNoteDateRangeControl"
recordType="BOM_Header"
recordId="UniqKey"
note-save-success-call="CheckItemLevelData"
noteLabel="translation.BOMNote"
record="SummaryGridPartNumber"></mx-note-manager-two>
To get your directive to respond accordingly you'd want to define the scope like so:
scope: {
recordType: '@',
recordId: '=',
isEdit: '@',
isShowDateRange: '=',
isShowHeader: '@',
isEditDisable: '@',
isShowAttachment: '@',
isAddAttachment: '@',
record: '=',
keyValue: '@',
isShowCamera: '@',
isFocus: '@',
noteSaveSuccessCall: '&', // notice this was an attribute that seemed to be a callback so use the & binding for that.
},
I left some '@' bindings for what appear to be constants, but you'll have to make the determination if that's appropriate. It's fine to use '@' binding when you think an attribute will take on static values (ie not from variables).
Anywhere you feel like you need to use interpolation (ie {{ someValue }}), I'd use =. If you did that, then you probably don't need MXNoteService.setValues anymore, because if those values changed they would be updated by binding into the directive.
If you need to respond to changes and reload GetMxTwoNotes then you can use a $watch. Like so:
scope.$watch('recordId', (newValue, oldValue) => {
if( oldValue !== newValue ) {
scope.GetMxTwoNotes();
}
});
Something else to consider: if you have several dependent fields (for example, you can't call GetMxTwoNotes() when recordId changes because you also need recordType or some other field), then don't split them up as separate attributes when binding. Have a single object that holds those values and pass that into the scope. That way, when that object changes, you have all the dependent fields you need and can execute immediately in the watch function. This is another reason not to use '@' attribute binding and to use '=' binding: you can pass regular JavaScript objects as properties to the directive. For example:
<mx-note-manager-two is-modal-popup="true" note-params="params"></mx-note-manager-two>
Where params would be an object like so:
params = {
recordId: 5,
recordType: '...',
NoteLabel: '...',
...
}
I'd also consider removing the use of timeout and just making the call in the link function. You shouldn't need to delay loading, and doing it in a timeout function means it's out of the digest cycle. I know you had mentioned using $timeout, which would retain the digest cycle (ie no need to call $apply), but you need to simplify things so you aren't fighting AngularJS.
Without seeing MXNote2Controller it's hard to say what else could be contributing the your Message updates not being seen.
| common-pile/stackexchange_filtered |
Make read-only column in spreadsheet using spreadsheet gem
I have used the spreadsheet gem to generate xlsx, and I want to make some particular columns and rows read-only.
How to do with spreadsheet gem?
I don't know if this is a suitable answer to your question (let me know if it is, and I'll post below), but this can be done with the axlsx gem: http://www.rubydoc.info/github/randym/axlsx/Axlsx/CellProtection. Note that the spreadsheet gem only generates xls files, whereas the axlsx generates xlsx files (which are more modern).
I went through the code and documentation for the spreadsheet gem and apparently (and unfortunately) this behavior is not implemented.
| common-pile/stackexchange_filtered |
Show date and time in time picker component in JSP
Hi, I'm working on a Struts 2 web project. I am using a jQuery time picker on a web page. But now my client requires the date and time to be picked from the system clock and calendar (independent of OS: Windows, Mac, Linux, iOS, etc.). Is there a way to call the system clock and calendar to pick the date and time? Thanks in advance.
Do you want a calendar component or just show the current date/time on your site?
I want to show the calendar component in the browser....
Please add the jQuery or another component you're using as time picker in your site.
Yeah.. I'm trying that too
Just post the code by editing your question, somebody will help you to format it.
You probably have to use JavaScript or some library for the date and time picker. Calling a system object requires native integration with the clock, which does not seem to be available here.
Try this; it seems to be what you are looking for.
taken from jQuery date/time picker
You can make use of HTML5's date type. This shows the popup calendar. http://www.w3schools.com/html/html5_form_input_types.asp
Note that link-only answers are discouraged, SO answers should be the end-point of a search for a solution (vs. yet another stopover of references, which tend to get stale over time). Please consider adding a stand-alone synopsis here, keeping the link as a reference.
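A brief stand-alone synopsis of the linked page: HTML5 adds input types for which the browser supplies its own calendar and clock pickers (support varies by browser). Minimal markup:

```html
<form>
  <input type="date" name="checkin">               <!-- native calendar popup -->
  <input type="time" name="checkin-time">          <!-- native clock picker -->
  <input type="datetime-local" name="appointment"> <!-- combined date and time -->
</form>
```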
In javascript
new Date()
always picks the date from the system clock and calendar
I want to show the calendar to the user to pick the date and time, not get the system date and time values..
@Ram you mean you want the date and time picker from os
did u mean that if user clicks on the web page the OS date-time picker should open?
| common-pile/stackexchange_filtered |
Parsing Gmail Email content in HTML Format (Table) into Google SpreadSheet Columns
I am having two issues Parsing Data from Gmail Email HTML Table into Googlesheet Columns
Issue 1:
Getting Data from the Body
If the email body comes formatted as an HTML table, the method getPlainBody() only captures the paragraph before the table, but nothing else, so there is nothing to parse.
1.1 When I remove the HTML table formatting it works, bringing the data into the body column.
So how can I retrieve the content of the HTML table into a body field?
PLAIN EMAIL - NOT HTML FORMATTING
Rental Unit 404 Rupert Street
Renter Full Name Test
Email Address<EMAIL_ADDRESS>
Phone<PHONE_NUMBER>
Check In Date Wed Jun 19 2019
Check Out Date Sun Jul 19 2020
Additional Comments Testing
How did you hear about Us? Testing
Issue 2
Parsing data into separate columns when I use unformatted text
toString() seems to be deprecated and therefore the code complains about it.
2.1. If I remove the toString(), then the columns are separated, but it displays the object reference instead of the actual data.
I tried to replace toString with getText but it did not work.
So what method should I use to get the text instead of the object reference?
//Displays Menu Option in GoogleSheet
function onOpen(e) {
var ui = SpreadsheetApp.getUi();
ui.createMenu("Parse Customer Requests")
.addItem("Get Emails","getGmailEmails")
.addToUi();
}
//Gets Emails with Rental Tag
function getGmailEmails(){
var label = GmailApp.getUserLabelByName("Rental");
var threads = label.getThreads();
var i;
for(i = threads.length - 1; i >= 0; i --){
var messages = threads[i].getMessages();
var j;
for (j=0; j<messages.length; j++){
var message = messages[j];
extractDetails(message);
GmailApp.markMessageRead(message);
}
threads[i].removeLabel(label);
}
}
//Extract Details from Email
function extractDetails(message){
//var datetime = message.getDate();
//var subjectText = message.getSubject();
//var senderDetails = message.getFrom();
//var bodyContents = message.getPlainBody();
var emailData = {
body: "Null",
date: "Null",
subject: "Null",
rentalUnit: "Null",
renterFullName: "Null",
emailAddress: "Null",
phone: "Null",
checkInDate: "Null",
checkOutDate: "Null",
additionalComments: "Null",
howDidYouHearAboutUs: "Null"
}
var emailKeywords = {
rentalUnit: "Rental Unit",
renterFullName: "Renter Full Name",
emailAddress: "Email Address",
phone: "Phone",
checkInDate: "Check In Date",
checkOutDate: "Check Out Date",
additionalComments: "Additional Comments",
howDidYouHearAboutUs: "How Did you hear about Us?"
}
emailData.date = message.getDate();
emailData.subject = message.getSubject();
emailData.sender = message.getFrom();
emailData.body = message.getPlainBody();
var regExp;
regExp = new RegExp ("(?<=" + emailKeywords.rentalUnit + ").*");
emailData.rentalUnit = emailData.body.match(regExp).toString().trim();
regExp = new RegExp ("(?<=" + emailKeywords.renterFullName + ").*");
emailData.renterFullName = emailData.body.match(regExp).toString().trim();
regExp = new RegExp ("(?<=" + emailKeywords.emailAddress + ").*");
emailData.emailAddress = emailData.body.match(regExp).toString().trim();
regExp = new RegExp ("(?<=" + emailKeywords.phone + ").*");
emailData.phone = emailData.body.match(regExp).toString().trim();
regExp = new RegExp ("(?<=" + emailKeywords.checkInDate + ").*");
emailData.checkInDate = emailData.body.match(regExp).toString().trim();
regExp = new RegExp ("(?<=" + emailKeywords.checkOutDate + ").*");
emailData.checkOutDate = emailData.body.match(regExp).toString().trim();
regExp = new RegExp ("(?<=" + emailKeywords.additionalComments + ").*");
emailData.additionalComments = emailData.body.match(regExp).toString().trim();
regExp = new RegExp ("(?<=" + emailKeywords.howDidYouHearAboutUs + ").*");
emailData.howDidYouHearAboutUs = emailData.body.match(regExp).toString().trim();
var activeSheet = SpreadsheetApp.getActiveSpreadsheet().getActiveSheet();
var emailDataArray = [];
for (var columnName in emailData){
emailDataArray.push(emailData[columnName]);
}
activeSheet.appendRow(emailDataArray);
}
I was wrong: the issue I attributed to a deprecated toString was not the problem; it is working with plain text now. The only remaining issue is how to read from an HTML-formatted table.
I cannot reproduce this behaviour (I'm getting the data from a table with getPlainBody(), but can't you use getBody() instead?
Thanks for your response. I have figured it out. It required a few more changes and ended using getRawContent() instead.
I'm glad your issue was solved. Would you consider posting an answer explaining what the issue was?
I am still cleaning a few things. I will post when done..
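Since the fix ended up being to read the HTML body (getBody()/getRawContent()) instead of getPlainBody(), here is one way to flatten a simple HTML table into "Label Value" lines so the existing keyword regexes still apply. This is plain JavaScript (runnable outside Apps Script) and assumes flat markup with no nested tables:

```javascript
// Convert a simple HTML table into plain "Label Value" lines so the
// existing keyword regexes can run over it. Assumes no nested tables.
function htmlTableToPlainText(html) {
  return html
    .replace(/<\/t[dh]>/gi, ' ')     // cell boundaries become spaces
    .replace(/<\/tr>/gi, '\n')       // row boundaries become newlines
    .replace(/<[^>]+>/g, '')         // strip all remaining tags
    .replace(/&nbsp;/gi, ' ')
    .split('\n')
    .map(function (line) { return line.trim().replace(/\s+/g, ' '); })
    .filter(function (line) { return line.length > 0; })
    .join('\n');
}
```

For example, `<table><tr><td>Phone</td><td>555</td></tr></table>` becomes `Phone 555`.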
| common-pile/stackexchange_filtered |
(51) SSL: no alternative certificate subject name matches target host name
I am trying to generate the Let's Encrypt certificate files, and I am using the following command:
./certbot-auto --config /etc/letsencrypt/configs/milhas.brau.io.conf certonly
The files are generated correctly, but the curl command results in the message:
curl: (51) SSL: no alternative certificate subject name matches target host name 'milhasplus.brau.io'
milhas.brau.io.conf
# the domain we want to get the cert for;
# technically it's possible to have multiple of this lines, but it only worked
# with one domain for me, another one only got one cert, so I would recommend
# separate config files per domain.
domains = milhas.brau.io
# increase key size
rsa-key-size = 2048 # Or 4096
# the current closed beta (as of 2015-Nov-07) is using this server
server = https://acme-v01.api.letsencrypt.org/directory
# this address will receive renewal reminders
email =<EMAIL_ADDRESS>
# turn off the ncurses UI, we want this to be run as a cronjob
text = True
# authenticate by placing a file in the webroot (under .well-known/acme-challenge/)
# and then letting LE fetch it
authenticator = webroot
webroot-path = /var/www/letsencrypt/
NGINX config file
server {
listen 443 ssl default_server;
server_name milhas.brau.io;
ssl_certificate /etc/letsencrypt/live/milhas.brau.io/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/milhas.brau.io/privkey.pem;
location /.well-known/acme-challenge {
root /var/www/letsencrypt;
}
location / {
proxy_pass https://<IP_ADDRESS>:8084/;
}
}
curl result
$ curl -v https://milhasplus.brau.io/autenticacao/docs/termo_uso
* Trying <IP_ADDRESS>...
* TCP_NODELAY set
* Connected to milhasplus.brau.io (<IP_ADDRESS>) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* Cipher selection: PROFILE=SYSTEM
* successfully set certificate verify locations:
* CAfile: /etc/pki/tls/certs/ca-bundle.crt
CApath: none
* TLSv1.2 (OUT), TLS handshake, Client hello (1):
* TLSv1.2 (IN), TLS handshake, Server hello (2):
* TLSv1.2 (IN), TLS handshake, Certificate (11):
* TLSv1.2 (IN), TLS handshake, Server key exchange (12):
* TLSv1.2 (IN), TLS handshake, Server finished (14):
* TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
* TLSv1.2 (OUT), TLS change cipher, Client hello (1):
* TLSv1.2 (OUT), TLS handshake, Finished (20):
* TLSv1.2 (IN), TLS handshake, Finished (20):
* SSL connection using TLSv1.2 / ECDHE-RSA-AES256-GCM-SHA384
* ALPN, server accepted to use http/1.1
* Server certificate:
* subject: CN=milhas.brau.io
* start date: Aug 25 10:28:56 2018 GMT
* expire date: Nov 23 10:28:56 2018 GMT
* subjectAltName does not match milhasplus.brau.io
* SSL: no alternative certificate subject name matches target host name 'milhasplus.brau.io'
* stopped the pause stream!
* Closing connection 0
* TLSv1.2 (OUT), TLS alert, Client hello (1):
curl: (51) SSL: no alternative certificate subject name matches target host name 'milhasplus.brau.io'
Thanks
Looks valid to me, is green in Firefox https://milhas.brau.io
Thanks @DanieleDellafiore. My problem was actually another one. Sorry my mistakes
@BrรกulioFigueiredo what was it? I am having the same with multiple hostnames...
@FrancescoGualazzi these steps worked for me:
1 - I updated Python to version 2.7.5 (in my case the server is CentOS 7)
2 - I set all my files in Nginx only as http.
3 - I used the letsencrypt-auto command for the system to update the Nginx files
| common-pile/stackexchange_filtered |
what does "*&" mean together?
For example:
private:
Node* curr;
public:
Node*& Iterator::getCurr() {
return curr;
}
bool Iterator::operator==(const Iterator& other) {
return curr == other.getCurr();
}
I'm getting this error in the code:
passing โconst Iteratorโ as โthisโ argument of โNode*&
Iterator::getCurr()โ discards qualifiers [-fpermissive]
How should I fix it?
You're returning a reference to a pointer. It's hard to tell from your question whether that's what you want.
I'm sorry that I changed my question a lot. But, I think it's clearer now
You will fix it by first providing a description of the error you encounter. Otherwise we won't be able to help.
ok, I have just provided above
Can you, please, help?
Node*& means โreference to pointer to Nodeโ. You can return it as normal. Accessing it, however, can be done two ways: the โnormalโ way, where the โreference toโ part will just be dropped, and the way preserving the reference. The advantage of the latter way is you can change the underlying curr value:
Node *&curr = iterator.getCurr();
curr = new Node(); // or something like that
// iterator.curr has been changed
don't read them together
if you see something like this:
Foo& foo();
Do you know what the & means?
It is a reference to Foo
Then
Foo* foo();
What about this? This is a pointer to Foo
Then
Foo*& foo();
is reference to "Pointer to Foo"
It's a reference to a pointer to Node.
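To tie the answers together, here is a compilable sketch. The class layout is an assumption (the original Iterator is not fully shown); the point is that Node*& lets callers reseat the stored pointer, and that adding a const overload of getCurr() is one way to silence the "discards qualifiers" error, since operator== receives other as a const Iterator&:

```cpp
struct Node { int value; };

// Sketch of the Iterator in question (assumed shape, not the original class).
struct Iterator {
    Node* curr = nullptr;

    // Non-const: returns a reference to the pointer, so callers may reseat it.
    Node*& getCurr() { return curr; }

    // Const overload: callable on a const Iterator; the pointer itself
    // cannot be reseated through the returned reference.
    Node* const& getCurr() const { return curr; }

    bool operator==(const Iterator& other) const {
        return curr == other.getCurr();  // other is const, so the const overload is chosen
    }
};
```

With the const overload in place, the original operator== compiles unchanged; alternatively, comparing curr == other.curr directly sidesteps the accessor entirely.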
| common-pile/stackexchange_filtered |
IDE for XSLT stylesheets
What is the best IDE for creating and debugging complex XSLT stylesheets?
For debugging, the ability to set breakpoints and step through the source would be great.
I am interested in all options both commercial and free.
Editors worth checking out:
Visual Studio
Altova's StyleVision/XMLSpy
Oxygen
StylusStudio
All have their specific advantages, so just check them out. If you already have Visual Studio, I'd suggest you just getting started with this one.
I use Visual Studio. It lets you set breakpoints (conditional or otherwise) and establish watches on whatever XPath expression you can come up with. It also supports XSLT right out the box and colours it differently than regular XML so developing in it is very easy.
My recommendation is XMLSpy, but isn't free - http://www.altova.com/products/xmlspy/xmlspy.html
If you are looking for something free, you can try NetBeans - http://www.netbeans.org/
I have used both and agree with the above recommendation. If you are dabbling in XML as part of other work (i.e. Java programming) then you can make do quite well with NetBeans. However, if you are doing XML development as your main task then you really should invest in XMLSpy.
I got a recommendation from someone here for Altova XMLSpy, and it was pretty nice for the few days I used it.
I've been happy with using XML Cooktop for light XSL development. It's free and has been fairly reliable for me.
cooktop looks cool, but do you know if it has XSLT 2.0 support?
XML Spy is the best I've used, without question. But it's very expensive.
I think XSLT debugging and single-stepping is really overrated. Granted, when I started using XSLT there were no debuggers, so I might just be identifying with my torturers here. But the same things that make writing XSLT often feel as difficult as driving a car while wearing a straitjacket (e.g., variables don't) also make using a debugger not a whole lot more useful than the insert-print-statement paradigm.
I recommend Liquid Studio.
For XSLT transformations you can even go for their quite functional Community Edition. It lacks many features present in commercial editions like:
Diagram View
Graphical View
Web Services Tools
XPath Tools
Debuggers
Refactoring
XML Schema Documentation and XML Document Generator
Visual Studio Extension
XML Differencing, etc.
BUT still Community Edition is cool enough as an editor for XML (WSDL, XSLT, XQuery, DTD, CSS, XDR, XML Schema, JSON) with:
Autocompletion (based on schema)
Code colouring
Collapsible sections
Spellchecker (grammar and XML validation)
Community Edition also starts as a 15-day trial so you can evaluate those paid features before deciding on an edition.
If you really need XSLT debugging then your option would be XML Editor Edition. Though Community Edition still allows for smooth transformations while being able to work with source, target and style files in the same IDE.
You can find more details here.
Hope this helps.
| common-pile/stackexchange_filtered |
Gradle build fails on new Mac
I am trying to compile my Gradle/Kotlin project on my new Mac and I am getting the same 'Resource not found'-like errors for all my Gradle projects. Any ideas what's wrong? The same project, for example, works in the latest Docker image 'primetoninc/jdk'.
mac:nextlevel2017-kotline robert.rajakone$ ./gradlew build
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by org.gradle.internal.reflect.JavaMethod (file:/Users/robert.rajakone/.gradle/wrapper/dists/gradle-3.5.1-all/42vjebfdws9pjts3l4bouoq0p/gradle-3.5.1/lib/gradle-base-services-3.5.1.jar) to method java.lang.ClassLoader.getPackages()
WARNING: Please consider reporting this to the maintainers of org.gradle.internal.reflect.JavaMethod
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
:compileKotlin FAILED
FAILURE: Build failed with an exception.
* What went wrong:
Could not resolve all dependencies for configuration ':compileClasspath'.
> Could not find org.springframework.boot:spring-boot-starter-web:.
Searched in the following locations:
https://repo1.maven.org/maven2/org/springframework/boot/spring-boot-starter-web//spring-boot-starter-web-.pom
https://repo1.maven.org/maven2/org/springframework/boot/spring-boot-starter-web//spring-boot-starter-web-.jar
https://repo.spring.io/snapshot/org/springframework/boot/spring-boot-starter-web//spring-boot-starter-web-.pom
https://repo.spring.io/snapshot/org/springframework/boot/spring-boot-starter-web//spring-boot-starter-web-.jar
https://repo.spring.io/milestone/org/springframework/boot/spring-boot-starter-web//spring-boot-starter-web-.pom
https://repo.spring.io/milestone/org/springframework/boot/spring-boot-starter-web//spring-boot-starter-web-.jar
Required by:
project :
> Could not find org.springframework.boot:spring-boot-starter-data-jpa:.
Required by:
project :
* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug option to get more log output.
BUILD FAILED
Total time: 4.354 secs
mac:nextlevel2017-kotline robert.rajakone$ uname -a
Darwin mac 16.7.0 Darwin Kernel Version 16.7.0: Thu Jun 15 17:36:27 PDT 2017; root:xnu-3789.70.16~2/RELEASE_X86_64 x86_64
mac:nextlevel2017-kotline robert.rajakone$ ./gradlew -version
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by org.codehaus.groovy.reflection.CachedClass (file:/Users/robert.rajakone/.gradle/wrapper/dists/gradle-3.5.1-all/42vjebfdws9pjts3l4bouoq0p/gradle-3.5.1/lib/groovy-all-2.4.10.jar) to method java.lang.Object.finalize()
WARNING: Please consider reporting this to the maintainers of org.codehaus.groovy.reflection.CachedClass
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
------------------------------------------------------------
Gradle 3.5.1
------------------------------------------------------------
Build time: 2017-06-16 14:36:27 UTC
Revision: d4c3bb4eac74bd0a3c70a0d213709e484193e251
Groovy: 2.4.10
Ant: Apache Ant(TM) version 1.9.6 compiled on June 29 2015
JVM: 9 (Oracle Corporation 9+181)
OS: Mac OS X 10.12.6 x86_64
mac:nextlevel2017-kotline robert.rajakone$
Upgrading to the newest Gradle doesn't help (./gradlew wrapper --gradle-version=4.2.1)
What version of Java are you using? i.e. what is the output of gradle -version
Gradle 3.5.1
java version "9"
Java(TM) SE Runtime Environment (build 9+181)
Java HotSpot(TM) 64-Bit Server VM (build 9+181, mixed mode)
| common-pile/stackexchange_filtered |
Distance between 2 points in a circle
I am given 2 points inside a circle. I have to find a point on the circle (not inside, not outside) so that the sum of the distances between the given 2 points and the point I found is minimal. I only have to find the minimum distance, not the position of the point.
Could you please share what you have tried so far?
It is a minimization problem:
minimize sqrt((x1-x0)^2 + (y1-y0)^2) + sqrt((x2-x0)^2 + (y2-y0)^2)
^ ^
(distance from point1) (distance from point 2)
subject to constraints:
x1 = C1
y1 = C2
x2 = C3
y2 = C4
x0^2 + y0^2 = r^2
(assuming the coordinates are already aligned to the center of the circle as (0,0)).
(C1,C2,C3,C4,r) are given constants, just need to assign them.
After assigning x1,y1,x2,y2 - you are given a minimization problem with 2 variables (x0,y0) and a constraint. The minimization problem can be solved using a Lagrange multiplier.
Let (x1, y1) and (x2, y2) be the points inside the circle, (x, y) be the point on the circle, r be the radius of the circle.
Your problem reduces to a Lagrange conditional extremum problem, namely:
extremum function:
f(x, y) = sqrt((x-x1)^2 + (y-y1)^2) + sqrt((x-x2)^2 + (y-y2)^2)
condition:
g(x, y) = x^2 + y^2 = r^2 (1)
Introduce an auxiliary function (reference):
Λ(x, y, λ) = f(x, y) + λ(g(x, y) - r^2)
Let ∂Λ / ∂x = 0, we have:
(x-x1)/sqrt((x-x1)^2 + (y-y1)^2) + (x-x2)/sqrt((x-x2)^2 + (y-y2)^2) + 2λx = 0 (2)
Let ∂Λ / ∂y = 0, we have:
(y-y1)/sqrt((x-x1)^2 + (y-y1)^2) + (y-y2)/sqrt((x-x2)^2 + (y-y2)^2) + 2λy = 0 (3)
Now we have 3 variables (i.e. x, y and λ) and 3 equations (i.e. (1), (2) and (3)), so it's solvable.
Please note that there should be two solutions (unless the two inside points happen to be the center of the circle). One is the minimal value (which is what you need); the other is the maximal value (which will be ignored).
Just another approach (which is more direct):
for simplicity, assume the circle is at (0,0) with radius r
and suppose two points are P1(x1,y1) and P2(x2,y2)
we can calculate the polar angle of these two points, suppose they are alpha1 and alpha2
Obviously, the point on the circle with the minimum sum of distances to P1 and P2 lies within the circular sector between alpha1 and alpha2.
Meanwhile, the sum of distances from a point on the circle within that sector to P1 and P2 is a unimodal function of the polar angle, so the minimum distance can be found using trichotomy (ternary search).
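The trichotomy idea above can be sketched as follows (a minimal Python illustration of my own; the function name and the shorter-arc handling are assumptions, and ternary search is only valid where the distance sum is in fact unimodal on the searched arc):

```python
import math

def min_total_distance(p1, p2, r, iters=100):
    """Minimize dist(P, p1) + dist(P, p2) over points P on the circle of
    radius r centred at the origin, by ternary search on the polar angle."""
    def total(theta):
        x, y = r * math.cos(theta), r * math.sin(theta)
        return (math.hypot(x - p1[0], y - p1[1])
                + math.hypot(x - p2[0], y - p2[1]))

    # Restrict the search to the arc between the polar angles of p1 and p2,
    # as suggested in the answer above.
    lo = math.atan2(p1[1], p1[0])
    hi = math.atan2(p2[1], p2[0])
    if hi < lo:
        lo, hi = hi, lo
    if hi - lo > math.pi:          # use the shorter of the two arcs
        lo, hi = hi, lo + 2 * math.pi

    for _ in range(iters):         # ternary search (assumes unimodality)
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        if total(m1) < total(m2):
            hi = m2
        else:
            lo = m1
    return total((lo + hi) / 2)

# Symmetric sanity case: p1 and p2 mirror each other about the x-axis,
# so the optimum is at angle 0 and the minimum is 2*sqrt(0.26).
print(min_total_distance((0.5, 0.1), (0.5, -0.1), 1.0))
```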
| common-pile/stackexchange_filtered |
CiviCase Timelines based on specific values of fields
I'm new here, so sorry if I'm asking something too obvious, but I couldn't find an answer to it in the Civi manual. The manual states that "…if one of the activities in your standard timeline is a medical evaluation, you could have 2 or more timelines that outline different schedules of medical treatment and add one of these to the case depending on the outcome of the original medical evaluation." However, I couldn't find anywhere how I set a timeline which is dependent on another field value. Thanks
Edit: Could I do it with CiviRules, maybe? Or any other better option?
Edit 2: Yes, was able to do it with CiviRules
It's maybe unclear but when it says "and add one of these to the case depending on the outcome of the original medical evaluation" it means the case manager would manually add the appropriate timeline after assessing the situation. Not based on the value of a field but on a variety of factors requiring professional evaluation.
| common-pile/stackexchange_filtered |
display divs without blanks - CSS
I'm struggling with some CSS issue. I have a few divs to display, one after another. I'm using the inline block display to display them
I'd like to display divs. The thing is, it appears like that :
I guess it is because of the display style. Because of the shirt div's height, I got some blank spots above the "lone rose" and the "Dim Mak" ones. Is there a way to remove these blanks and to display these divs so they can fill the whole space?
this is the HTML
{% for prod in articles %}
<a href="{{ path('dyma_shop_info_article', {'id': prod.id}) }}" id="link_prod">
<div class="product_detail">
<img src="{{ asset('uploads/'~prod.name~'/'~random(prod.pictures).picturename) }}" alt="" />
<div class="prod_title">
{{prod.name}}
</div>
</div>
</a>
{% endfor %}
and this is the CSS for this div
.product_detail
{
float: left;
position: relative;
width: 30%;
border: 1px solid #666;
padding: 5px;
background-color: rgba(255,255,255,0.7);
text-align: center;
margin: 5px;
box-shadow: 7px 7px 7px #666;
transition: 0.3s;
}
.product_detail:hover
{
box-shadow: 12px 12px 12px #666;
transition: 0.3s;
}
.product_detail img
{
max-width: 100%;
max-height: 400px;
z-index: 99;
}
.prod_title
{
height: 40px;
line-height: 40px;
bottom: 0px;
left: 0px;
right: 0px;
padding: 10px;
margin: 0;
padding: 0;
font-size: 22px;
background-color: #fff;
text-align: center;
}
Please post your HTML and CSS.
did you try to make every div floating?
The probable solution is jQuery masonry, or perhaps CSS multi-column layouts.
possible duplicate of CSS Floating Divs At Variable Heights
By the way, you're not using display: inline-block; in the code you posted.
What is your expected result ?
Yes, my bad, I changed it after @Fuzzyma comment ! Thank you, I'm trying the isotope script right now :)
Yes, Isotope or Masonry can achieve those grids.
| common-pile/stackexchange_filtered |
Matlab precision when specifying fractions
I wanted to create a vector with three values 1/6, 2/3 and 1/6. Obviously Matlab has to convert these rational numbers into real numbers, but I expected that it would maximize the precision available.
It's storing the values as doubles, but they come out as -
b =
0.1667 0.6667 0.1667
This is a huge loss of precision. Isn't double supposed to mean 52 bits of accuracy for the fractional part of the number? Why are the numbers truncated so severely?
what is the purpose of this scenario ? so that i can suggest possible solution
I am solving differential equations using Runge Kutta 4. The loss of precision is horrendous for numerical analysis such as this.
The numbers are only displayed that way. Internally, they use full precision. You can use the format command to change display precision. For example:
format long
will display them as:
0.166666666666667 0.666666666666667 0.166666666666667
So the answer is simple; there is no loss of precision. It's only a display issue.
You can read the documentation on what other formats you can use to display numbers.
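The same display-versus-storage distinction holds in any IEEE-754 environment. As a quick cross-check outside Matlab (a Python sketch of my own, not part of the thread):

```python
x = 1 / 6
print(f"{x:.4f}")    # 0.1667 -- a 4-digit display, like Matlab's default short format
print(repr(x))       # 0.16666666666666666 -- the full stored double precision
assert abs(6 * x - 1) < 1e-15   # the underlying value is not truncated
```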
You cannot store values like 1/2, 1/4 or 1/6 symbolically in a double variable; they are stored as (binary) approximations behind the scenes. If you want to keep these values exact, try storing them as strings; that would work.
Whenever you want to do a mathematical calculation using these strings, convert the value into a number and continue.
I don't see the point. That way, you would convert the same string over and over again, and always get the same answer. That wouldn't serve any purpose.
what is the purpose of this scenario ? so that i can suggest possible solution...
There's no scenario, really. The OP is just confused about the way Matlab displays numbers and thinks that the display precision represents the actual precision used to store the numbers.
| common-pile/stackexchange_filtered |
SwiftUI view is in the middle instead of in the top - NavigationView
I am trying to create a navigation view in SwiftUI - I want to navigate from the login form to the Home Screen. In the preview, the Home Screen looks how it should, but in the live preview it is centered in the middle of the screen. I tried .navigationBarHidden(true); it goes a little bit up, but still not to the top of the screen. Please help.
struct ContentView: View {
@EnvironmentObject var viewModel: AppViewModel
var body: some View {
NavigationView {
if viewModel.signedIn {
NavigationView {
HomeScreen()
}
.navigationBarHidden(true)
}
else {
SignInView()
}
}
.onAppear {
viewModel.signedIn = viewModel.isSignedIn
}
}
}
You need to "push" the view up by using a Spacer() inside a VStack:
var body: some View {
NavigationView {
// Add this
VStack {
if viewModel.signedIn {
NavigationView {
HomeScreen()
}
.navigationBarHidden(true)
}
else {
SignInView()
}
// This will push the view up
Spacer()
}
}
.onAppear {
viewModel.signedIn = viewModel.isSignedIn
}
}
It is still the same - the Home Screen is in the middle of the screen
Add a VStack and Spacer. The spacer consumes the rest of the space below the HomeScreen
NavigationView {
VStack {
HomeScreen()
Spacer()
}
}
If SignInView is also supposed to be on top move the VStack and the Spacer up accordingly.
| common-pile/stackexchange_filtered |
Proving an inequality.
I have stuck in the middle of a problem and I don't know whether or not the following inequality is true.
$$\sum_{i=0}^{[n\,t]} \Bigl(1-\frac{i}{n\,t}\Bigr)^{n-1}>t$$
Assuming that $n$ is a natural number and $t$ is a positive real number and $[x]$ is the floor function.
Please show (1) your work and (2) what $[x]$ means (I presume floor function, but there really is no standard notation for that).
The cases $n=1$ and $n=2$ are easy. For $n>2$ I can prove $\dots>t/2$. Numerical evidence suggests that $\dots>t+1/2$.
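The conjectured bounds are easy to probe numerically; a throwaway Python spot check of my own (for a few sample values, not a proof):

```python
from math import floor

def lhs(n, t):
    """Sum_{i=0}^{floor(n*t)} (1 - i/(n*t))**(n-1)."""
    m = floor(n * t)
    return sum((1 - i / (n * t)) ** (n - 1) for i in range(m + 1))

for n in (2, 3, 5, 10):
    for t in (0.7, 1.0, 2.0, 3.5):
        assert lhs(n, t) > t   # the claimed inequality holds at these samples

print(lhs(2, 1.0))  # 1.5, which meets t + 1/2 exactly at n=2, t=1
```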
| common-pile/stackexchange_filtered |
Convert datetime 64ns to mm/dd/yyyy in pandas
I have a date column of format datetime64[ns]
date
01/04/2020
01/06/2020
01/08/2020
the first row represents apr-2020, then second row is june-2020 and third is aug-2020
In my actual data, the date is stored as "dd/mm/yyyy" in the date column, and the pandas column shows it as "yyyy-mm-dd". I think it's just the representation.
I am trying to convert the date column to "mm/dd/yyyy" and I am using the below code:
df['date'] = pd.to_datetime(df['date'], format='%d/%m/%Y').dt.strftime('%m/%d/%Y')
When I export the data again, the date "04/01/2020" is represented as Jan-2020 instead of Apr-2020. What am I doing wrong here?
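As a sanity check that the dd/mm parsing itself behaves as intended, the same conversion can be reproduced with only the Python standard library (a minimal sketch of my own, independent of pandas and Excel):

```python
from datetime import datetime

raw = "01/04/2020"                           # dd/mm/yyyy, i.e. 1 April 2020
parsed = datetime.strptime(raw, "%d/%m/%Y")  # parse with day first
assert parsed.month == 4                     # April, not January
print(parsed.strftime("%m/%d/%Y"))           # reformat as mm/dd/yyyy: 04/01/2020
```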
I think your solution is OK, but is it possible the input data are mm/dd instead of dd/mm?
Nope, the input data is in dd/mm only... and that's where I am getting confused as to why the date column is not getting converted.
Hmm, weird. Is it possible to add a data sample to see this problem?
added the data for your reference
OK, but here is the problem: if the column is datetime64[ns], then the format 01/04/2020 is not possible.
Also, does this data return the wrong output? It is working well for me.
but it is actually like that only.. my date contains the same values and it is of the format datetime64[ns]
If you export the data into Excel and look at the filter, you will see that all the values belong to January.
Then the problem is Excel parsing it wrongly.
Could be. Then how can I rectify it?
I think the problem is on the Excel side; pandas is working perfectly fine here.
How do I rectify it then? Any option to parse the dates correctly while importing the data?
I don't work with Excel, so I have no idea how to prevent it.
| common-pile/stackexchange_filtered |
SQL CE 3.5 - Select field from another table
I'm working with databases for the first time (SQL CE 3.5) and I'm not certain how I define a relationship between tables where later ( I think) I'll have to use a join to select some value for one field from another table.
_________ __________ __________
MY TABLE| | TABLE A | | TABLE B |
--------- |---------| |---------|
OrderID | | a_Text | | b_Text |
--------- |---------| |---------|
a_Text |
---------
b_Text |
---------
When it is all implemented, whenever I define a value for a_Text in [MY TABLE] I only want to be able to set a value for a_Text as defined in [Table A] (and again for b_Text).
What you want here is a Foreign Key Constraint on your a_Text field which enforces a link between MY_TABLE/TABLE_A for that particular field value.
This is a database relationship and as such should be defined at database-level and if required, at model-level. Most modern day ORM technologies e.g. EntityFramework/NHibernate, do a pretty good job at representing the same relationships at model-level, or at least make it very trivial to do so - EF will do it automatically if you create a context via a database directly.
It's pretty simple to create a relationship using SQLCE through the VS Designer - Walkthrough: Creating a SQL Server Compact Database gives an example of adding a relationship between two tables.
Based on your requirements I wouldn't recommend having your value field (a_Text) in TABLE A as the PK. One of the biggest concerns is if you update the key you need to cascade that change throughout the other referencing tables. It's much more flexible to introduce a surrogate key and make your a_Text field a unique key itself.
My Table Table A Table B
-------- ------- -------
OrderID a_ID a_ID
a_ID a_Text a_Text
b_ID
Whilst this is certainly a model of the relationship I wish to establish I'm yet to find an example where a_Text is not a number type. I want to be able to use nvarchar(12).
@SimonJohnson it's definitely possible to use a foreign key constraint with a text-type value (GUIDs are essentially just large strings and they work just fine).
I tried this (setting the a_Text field to be a PK in [TABLE A]) and it kept failing until I restarted Visual Studio (2008), thanks for the help :)
@SimonJohnson no worries, yeah the VS designer can be quite buggy for SQLCE, it's also quite limited so you might be better off using direct SQL instead. Just a note on what your doing though, in terms of performance I would definitely not recommend using a text field as a PK in a database table. I would recommend using a surrogate key (e.g. auto-increment) and have your a_Text as a unique key on the table.
Its got to be a string value. So in this example Table A may contain three entries, Apple, Orange and Grape. When I create a record on My_Table I only want to be able to select a value from the records that exist on Table_A.
In my actual implementation the possible values in Table_A will be inserted into the database on initial load (just after installation) and can only every be Pass, Fail or Inconclusive with a similar constraint on b_Text.
@SimonJohnson "Its got to be a string value" - no it doesn't :) like I said you could use a surrogate key with a more approprite type as the PK and have the FK constraint on the unique column (not the PK). I will update my answer to show you what I mean. The approach your taking just now will most definitely cause you problems in the future.
What problems can this cause?
@SimonJohnson see update. There are also other concerns such as performance (depending on how big the values for a_Text will be). Do a bit of digging into the subject and you will see there are various reasons why you shouldn't do it. That's not to say you can't do, however, more often than not it causes more problems in the long run.
I can see the advantage of using this system however in this case both Table A and Table B will only every contain three records. The tables will actually be marked as read only after those three records are added just after installation on the users machine. They exist to make a formal connection that this field can only ever contain one of these (the specified values) in those tables.
The user can never add to these two tables or cause more choices through action to become relevant as this is a pretty closed system.
@SimonJohnson your justification seems fair enough. The only concern is if in the future you decide to change those table values in future versions of your application you will essentially lose backwards compatibility.
| common-pile/stackexchange_filtered |
Do tapjoy commercial campaign and get Pro version app?
I have seen some apps like "Get paid to play" and "Appdog" where tapjoy is used. You do commercial campaigns and get paid for it. I was wondering if there was any way to use tapjoy in your app with your own point system. And if you reach a certain number of points, you will be able to get a pro or a paid version for free.
Is there any way of doing this?
Have you looked at Tapjoy's Knowledge Center? There's an article on managing your own virtual currency: Non-Managed Currency. It's entirely up to the developer how they intend to manage their virtual economy and what they offer for purchase. I don't know about converting virtual currency into a tangible good (like a Pro version) but you could certainly offer users some in-game item that could reduce the number of ads they see.
| common-pile/stackexchange_filtered |
php mysql update date type from YYYY-mm-dd to mm-dd-YYYY
I have a MySQL DATE field named dDate that is formatted like this: YYYY-MM-DD. I would like to update all of the records in this field to a different DATE format of MM-DD-YYYY. Can someone show me how to do this via a MySQL query?
I tried this example but it doesn't work
UPDATE address_email SET dDate = DATE_FORMAT(dDate, '%m-%d-%Y')
and this:
UPDATE address_email SET dDate = DATE(dDate, '%m-%d-%Y')
THANKS SO MUCH FOR THE HELP.
@Lior Sorry, that is what I wanted to write. Still the same: it keeps the YYYY-mm-dd format.
@Lior wait is the format YYYY-MM-DD the default/native format ?
Based on the MySQL docs, it looks like MySQL retrieves and displays DATE values in 'YYYY-MM-DD' format only, and I don't think you can change it.
What you really want to do is format the date when you pull it out of the database, not change the default formatting of the DATE field for your table. When you SELECT, do something like this:
SELECT DATE_FORMAT(dDate, '%m-%d-%Y') FROM address_email;
Have a look at the mysql documentation regarding date_format :
http://dev.mysql.com/doc/refman/5.5/en/date-and-time-functions.html#function_date-format
UPDATE vechical
SET PDate = DATE_FORMAT('12-08-2012','%d-%m-%y')
WHERE Cust_ID='21';
Please correct query for this
It is better to format in a SELECT query, as MySQL will not allow changing the stored date format.
| common-pile/stackexchange_filtered |
WCF goes offline automatically
I have hosted a WCF service application on IIS 8 (Windows Server 2012 R2). These services automatically go offline sometimes, and the other applications which use these services throw this error:
The requested service, 'http://<IP_ADDRESS>:8086/Apis/ILQ/Services/ILQInfoService.svc' could not be activated. See the server's diagnostic trace logs for more information.
When I open the Web.config of the WCF application and save it again, the services start working again.
It sounds like the app pool is being shut down for some reason - possibly an error. Did you look in the application event viewer to see if there were any errors logged there?
yes, I checked the event viewer. no errors have been logged. only warnings.
| common-pile/stackexchange_filtered |
Access to exposed header from Angular 8
I'm trying to get the exposed header from the backend with Angular 8. Here you can see the response headers when I POSTed to the API:
**Response headers(6)**
Access-Control-Allow-Credentials: true
Access-Control-Allow-Origin: http://localhost:4200
Access-Control-Expose-Headers: Xtoken, CX-Token, Content-Disposition
Content-Length: 0
Date: Mon, 07 Oct 2019 09:03:34 GMT
Vary: Origin
I want to access Xtoken.
login(login: string, password: string) {
return this.http.post<any>(`${apiUrl.apiUrl}//login`, { login, password }, { observe: 'response' as 'body' })
.pipe(map((res, err) => {
console.log(res.headers.get("Xtoken"));
}));
}
It returns null.
Could anyone help with this?
Is there a key in your header called "Xtoken"?
@NikNik yes there's a key called "Xtoken"
In your header example, it's missing
| common-pile/stackexchange_filtered |
Automatically fixing un-closed HTML tags in Ruby
I'm trying to convert HTML pages to Markdown using the reverse-markdown Ruby gem. Unfortunately it fails with:
/usr/lib/ruby/1.9.1/rexml/parsers/treeparser.rb:95:in `rescue in parse': #<REXML::ParseException: Missing end tag for 'img' (got "td") (REXML::ParseException)
The source contains some IMG, INPUT, etc. tags which end with > instead of />.
I've tried the tidy_ffi gem:
doc = Nokogiri::HTML(TidyFFI::Tidy.new(Nokogiri::HTML(page).to_html,
:numeric_entities => 1,
:output_html => 1,
:merge_divs => 0,
:merge_spans => 0,
:join_styles => 0,
:clean => 1,
:indent => 1,
:wrap => 0,
:drop_empty_paras => 0,
:literal_attributes => 1).clean)
but that made no difference. Any suggestions?
Show some samples of the HTML please.
At which point do you get the error? Show us the relevant code as well, please
Where is the HTML coming from? A markdown processor?
Reverse-markdown actually assumes the markdown processor produces well-formed XHTML. If yours doesn't, you may want to try the html2markdown gem. It parses using Nokogiri, and is likely more robust (disclaimer: I have not used it).
I made a gem that excerpts html: https://www.ruby-toolbox.com/gems/auto_excerpt maybe you can use that or look at the code it uses to do this? Not sure if that answers the question here.
Actually I just noticed you call Nokogiri::HTML twice: Nokogiri::HTML(TidyFFI::Tidy.new(Nokogiri::HTML(page).to_html
I'm not sure if the error you're getting is coming from Nokogiri or TifyFFI though.
I think this is not relevant at all. OP seems to know how to handle HTML.
His use of Nokogiri is legal. He parses the document to have Nokogiri do some fixups, converts it to HTML again, and tries to get TidyFFI to work its magic, which will return HTML again. Finally, he reparses it with Nokogiri into a document. It's unconventional, but that's OK. It just doesn't fix the problem.
Correct. But, I was saying if the error was coming from the call to Nokogiri::HTML() then the first HTML conversion might be the issue, before it gets parsed by TidyFFI.
| common-pile/stackexchange_filtered |
How to shade smooth using BPY
New to bpy and I'd like to write some code that sets the shading of an object to smooth by just clicking a button. I did some research and I have written the following lines for now:
import bpy
def draw(self, context):
layout = self.layout
row = layout.row()
row.operator(" bpy.ops.object.shade_flat()")
row = layout.row()
row.operator(" mesh.object.shade_smooth()")
I assume that I would have to add some other lines in order to register the class and all that right? Anyway, the problem is that the code does not work. Is this the way it is supposed to work at all?
Q: What's the correct way to call this function when pressing a custom button?
Or row.operator("object.shade_smooth")?
@lemon object has no attribute "shade_smooth"
https://blender.stackexchange.com/questions/167862/how-to-create-a-button-on-the-n-panel
Did you used the quotes, as I indicated above?
@lemon the problem is solved, but what if I have a bunch of actions to do in a single click?
For "a bunch of actions" you need to understand the principles, I recommend read the post linked by batFINGER: https://blender.stackexchange.com/questions/167862/how-to-create-a-button-on-the-n-panel
https://blender.stackexchange.com/questions/71190/toggle-subdivision-modifier-and-smooth-shading-with-one-key
| common-pile/stackexchange_filtered |
Simple.Data join without where clause including any primary table columns
I am using Simple.Data and have been trying to find an example that will let me do a join with the only condition in the WHERE clause being from the joined table. All of the examples I have seen always have at least one column in the primary table included in the WHERE. Take for example the following data:
private void TestSetup()
{
var adapter = new InMemoryAdapter();
adapter.SetKeyColumn("Events", "Id");
adapter.SetAutoIncrementColumn("Events", "Id");
adapter.SetKeyColumn("Doors", "Id");
adapter.SetAutoIncrementColumn("Doors", "Id");
adapter.Join.Master("Events", "Id").Detail("Doors", "EventId");
Database.UseMockAdapter(adapter);
db.Events.Insert(Id: 1, Code: "CodeMash2013", Name: "CodeMash 2013");
db.Events.Insert(Id: 2, Code: "SomewhereElse", Name: "Some Other Conf");
db.Doors.Insert(Id: 1, Code: "F7E08AC9-5E75-417D-A7AA-60E88B5B99AD", EventID: 1);
db.Doors.Insert(Id: 2, Code: "0631C802-2748-4C63-A6D9-CE8C803002EB", EventID: 1);
db.Doors.Insert(Id: 3, Code: "281ED88F-677D-49B9-84FA-4FAE022BBC73", EventID: 1);
db.Doors.Insert(Id: 4, Code: "9DF7E964-1ECE-42E3-8211-1F2BF7054A0D", EventID: 2);
db.Doors.Insert(Id: 5, Code: "9418123D-312A-4E8C-8807-59F0A63F43B9", EventID: 2);
}
I am trying to figure out the syntax I need to use in Simple.Data to get something similar to this T-SQL:
SELECT d.Code FROM Doors AS d INNER JOIN Events AS e ON d.EventID = e.Id WHERE e.Code = @EventCode
The final result should be only the three Door rows for EventId 1 when I pass in an event code of "CodeMash2013". Thanks!
First, a general point: since you've got criteria against the joined Events table, the LEFT OUTER is redundant; only rows with matching Event Codes will be returned, which implies only those rows where the join from Doors to Events was successful.
If you've got referential integrity set up in your database, with a foreign key relationship from Doors to Events, then Simple.Data can handle joins automatically. With that in mind, this code will work, both with the InMemoryAdapter and SQL Server:
List<dynamic> actual = db.Doors.FindAll(db.Doors.Events.Code == "CodeMash2013")
.Select(db.Doors.Id, db.Events.Name)
.ToList();
Assert.AreEqual(3, actual.Count);
If you don't have referential integrity set up then you should, but if you can't for some reason, then the following will work with SQL Server, but will trigger a bug in the InMemoryAdapter that I've just fixed but haven't done a release for yet:
dynamic eventAlias;
List<dynamic> actual = db.Doors.All()
.Join(db.Events, out eventAlias)
.On(db.Doors.EventID == eventAlias.Id)
.Select(db.Doors.Id, db.Events.Name)
.Where(eventAlias.Code == eventCode)
.ToList();
Assert.AreEqual(3, actual.Count);
I do have referential integrity set up on both the actual database and my in memory test data. Your first answer gives me exactly the results I wanted. Thanks! I also edited the question since I really wanted an INNER JOIN anyhow, not an OUTER.
Hey Mark, I'm wondering if the bug that you described here relating to the in-memory adapter still exists. I recently tried a very similar query to the second of your examples above (without referential integrity). I am not getting an exception; however, the query isn't returning any results. I did notice that when I removed the where clause, the query works as expected.
UPDATE : This answer applies when using the Simple.Data SQL Server Provider, not the InMemoryAdapter
You could probably use the following for this:
db.Doors.All()
.Select(
db.Doors.Code)
.LeftJoin(db.Events).On(db.Doors.EventID == db.Events.Id)
.Where(db.Events.Code == eventCode);
You might need to experiment between using LeftJoin and OuterJoin depending on your provider. If you're using the ADO provider for example, the two functions both generate LEFT JOIN statements as LEFT JOIN and LEFT OUTER JOIN are synonymous in t-sql.
If you need to use aliases for some reason, the syntax is slightly different.
dynamic EventAlias;
db.Doors.All()
.LeftJoin(db.Events.As("e"), out EventAlias).On(db.Doors.EventID == EventAlias.Id)
.Select(
db.Doors.Code)
.Where(EventAlias.Code == eventCode);
There's no reason why a where clause must contain a field from the primary key table. You can find more examples of Joins here on the Simple.Data doc site. Click Explicit Joins when you get there.
Neither of those queries is returning any results when I run it against my in memory test data (or against a SQL Server test DB) using Simple.Data version 1.0.0-rc3. I can't get tracing to work correctly and show me the query it is executing so I can't troubleshoot it any further at the moment. I am going to keep trying to get a trace of the actual query being executed.
Joe, Tracing won't work because the InMemoryAdapter is in memory. It's part of the Simple.Data.Core package and doesn't touch any SQL at all - you'd have to create the database in SQL Server and point Simple.Data at that to check the SQL it is creating. Simple.Data would use the SQl Server provider at that point which creates the SQL commands sent to a database.
| common-pile/stackexchange_filtered |
Why does frexp/ldexp significand range from [0.5, 1.0)?
Why do the frexp/ldexp functions have a significand that ranges from [0.5, 1.0) when IEEE 745 floating point values actually have a significand that ranges from [1.0, 2)?
The rationale documents accompanying the C89 and C99 standards are silent on this matter. A reasonable guess would be that this normalization was chosen because it was familiar to the people who created C, as floating-point formats on DEC architectures used a mantissa normalized to [0.5,1), rather than [1,2) chosen for the IEEE formats introduced later.
"So why does frexp() put the radix point to the left of the implicit bit, and return a number in [0.5, 1) instead of scientific-notation-like [1, 2)" - "Perhaps the format returned by frexp made sense with the PDP-11's floating-point format(s)"
For any valid floating point value that is not 0 or denormal, the high bit of the mantissa is always 1. IEEE-754 takes advantage of this by not encoding it in the binary value, thus squeezing out one extra bit of precision. For example, double has 53 bits of precision but encodes only 52. So the encoded value is never less than 1; the range is [1.0 .. 2)
But as soon as the actual value is needed, like when you printf() it or calculations need to be done, then this bit needs to be restored from the encoded value. Typically done internally inside the floating point execution unit of the processor. Otherwise the inspiration behind the infamous 80 bit internal format of the Intel x87 FPU design. So the actual range is [0.5 .. 1). The frexp function works with actual values.
I don't follow. What is being restored and what do you mean by "actual values"?
Hmya, you have to understand more about the way floating point values are encoded to make sense of it. It is not terribly intuitive. Key is that you can get 53 bits of precision out of 52 stored bits. Wikipedia has lots of material about it, check the "Double precision" article for example.
I know perfectly well how IEEE 745 floating point values work. I still do not see what the implied bit of the mantissa has to do with the range [0.5, 1.0). That is what you need to explain.
The implied bit changes the range of the encoded mantissa value to [1.0 .. 2). That's the oddball, the one you never observe in a program. [0.5.. 1) is the normal range.
I have to disagree with your assertion that the non-encoded bit somehow implies a normal range of [0.5, 1.0). If floats were 33-bits with a full 24-bit mantissa it would change nothing. It seems to me to be an arbitrary choice, and a less logical one at that.
For normalized floats the mantissa has a range of [1.0, 2.0) and for denormals it has a range of [0.0, 1.0), neither of which is [0.5, 1.0).
| common-pile/stackexchange_filtered |
What would happen if a (democratic) government does not honour a court ruling?
What would happen if a government does not honor a ruling made by a judge?
An example would be US President Trump's Travel Ban, which was recently suspended entirely by a federal judge.
Note: This question is for worldwide politics. (The information I gave is just for an example/context).
Is this question about USA or worldwide?
This question is Worldwide. (The information I gave is just for an example/context)
In Europe definitely the court decision would prevail.
If we take Russia, where I am from, the court decision would definitely prevail over the government, especially regarding such a not-so-important issue. It will prevail automatically: every person on duty will follow the court's decision rather than other regulations. Not following a court decision is a crime.
I want to point out that in modern Russia the judges are heavily dependent on the president in their career, so they are highly unlikely making decisions against the president. But they make the decisions against the government and ministries routinely.
In 1993 there was a case where the Constitutional Court decided that the president violated the Constitution by disbanding the legislature and should step down. This led to a major constitutional crisis involving tanks firing on the legislature building. The president staged a coup and enacted a new constitution, and the head of the Constitutional Court was fired (ironically, he is now the head of the Constitutional Court again, thanks to his principled position at that time).
So, for Russia the answer is: the courts will prevail or there will be a major constitutional crisis involving tanks firing in the capital.
In a democracy where rule of law prevails, the executive has to follow judicial decisions. People have the right to address the judiciary if they feel that the executive doesn't give them the rights the legislative wants them to have.
People getting their rights restricted by government officials acting against the law is not uncommon. The law is complex and a government official aren't legal professionals. Police officer might stretch the limits of when to use how much force, a tax bill can have a calculation mistake or an immigration officer might reject a person without a legally valid reason to do so.
When this happens, the person whose rights got violated has the option to go to court. The court will then decide if this specific action by the government official was lawful or not. When the court finds that they were not, the actions taken by the government official will be reversed. The courts might also rule that the government officials who executed the order get sanctioned and/or that the government needs to pay reparations.
So what would happen if a government organization would repeatedly and systematically act against the law?
Lots of court cases, lots of cost for the tax payer, and in the end all their actions would be annulled by the courts anyway.
| common-pile/stackexchange_filtered |
Why can I tokenize a string passed as a function parameter but not a string retrieved using getline?
The problem I have is that I am able to successfully tokenize the string parameters, which is input into my function menuCall, but using the same method I cannot properly tokenize a string obtained from a file using getline, and I do not understand why. Both strings are separated into tokens properly, but all tokens made from the string parameters have their leading spaces removed, while the leading spaces on tokens made from the string book are not.
Book and Bookstore are classes that I have created. There is more code after this, but it is dependent on the tokenization of these two strings. My function call looks like this:
menuCall("1, BookInput.txt", store);
and the first line in my input file looks like this:
12345, Michael Main, Data Structures, 100.00
void menuCall(string parameters, Bookstore bookstore)
{
ifstream in;
int i, o, option;
int j = 0, p = 0;
string optionToken, bookToken, book;
string *sp, *so;
Book newBook;
sp = new string[5];
so = new string[4];
while (parameters != "") //Check if the string is empty
{
i = parameters.find(",", 0); //Find next comma in the string
if (i > 0) //If a comma is found
{
optionToken = parameters.substr(0, i); //Substring ends at first comma
parameters.erase(0, i + 1); //Delete substring and comma from original string
while (optionToken[0] == ' ') //Check for spaces at beginning of token
{
optionToken.erase(0, 1); //Delete spaces at beginning of token
}
}
if (i < 0) //If no comma is found
{
optionToken = parameters; //Substring ends at the end of the original string
parameters.erase(0, i); //Delete substring from original string
while (optionToken[0] == ' ') //Check for spaces at beginning of token
{
optionToken.erase(0, 1); //Delete spaces at beginning of token
}
}
sp[j] = optionToken; //Token is added to dynamic array
j++; //Keeps track of iterations
}
option = stoi(sp[0]); //Change option number from string to usable int
switch (option) //Switch to determine what menu option is to be performed
{
case(1): //Case 1
in.open(sp[1]); //Open file from extracted token containing file name
while (!in.eof())
{
getline(in, book); //Get string containing book information from input file
while (book != "") //Check if the string is empty
{
o = book.find(",", 0); //Find the next comma in the string
if (o > 0) //If a comma is found
{
bookToken = book.substr(0, o); //Substring ends at first comma
book.erase(0, o + 1); //Delete substring and comma from original string
while (bookToken[0] == ' ') //Check for spaces at beginning of token
{
bookToken.erase(0, 1); //Delete spaces at beginning of token
}
}
if (o < 0) //If no comma is found
{
bookToken = book; //Substring ends at the end of the original string
book.erase(0, o); //Delete substring from original string
while (bookToken[0] == ' ') //Check for spaces at beginning of token
{
bookToken.erase(0, 1); //Delete spaces at beginning of token
}
}
so[p] = bookToken; //Token is added to dynamic array
p++; //Keeps track of iterations
}
}
break;
Use std::getline() and std::istringstream for further parsing.
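A minimal sketch of that suggestion, applied to a comma-separated line like the one in the question (the helper name tokenize is my own):

```cpp
#include <sstream>
#include <string>
#include <vector>

// Split a comma-separated line into tokens, trimming leading spaces.
std::vector<std::string> tokenize(const std::string& line)
{
    std::vector<std::string> tokens;
    std::istringstream iss(line);
    std::string token;
    while (std::getline(iss, token, ',')) {            // read up to each comma
        token.erase(0, token.find_first_not_of(' '));  // npos clears an all-space token
        tokens.push_back(token);
    }
    return tokens;
}
```

Because std::getline with a delimiter consumes the comma for you, there is no index bookkeeping, which sidesteps the signed/unsigned pitfalls of comparing the result of std::string::find with 0.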
@NathanOliver I need a solution that doesn't involve vectors or regex or boost or any of that other stuff. Simple basic homework here.
@πάντα ῥεῖ Please explain a little better where you are going with that, new guy here.
May be this helps a bit to understand what I mean: http://stackoverflow.com/questions/23047052/why-does-reading-a-record-struct-fields-from-stdistream-fail-and-how-can-i-fi
I want to know why it is that I can edit a string passed as a parameter to my function but not edit a string retrieved using getline. I'm looking for an explanation of the logic. Neither one of those links tells me that. Perhaps my question topic is worded poorly.
| common-pile/stackexchange_filtered |
Chance of late deliveries (combining two Poisson processes)
If sent mail is a Poisson distributed process with $\lambda = 100$ and delivery men deliver a number of envelopes per day which also turns out to be Poisson distributed with say $\lambda = 10$, how many delivery men should the post office hire such that at most $5\%$ of mail is expected to be late?
For the sake of the question, assume the delivery men deliver at a rate independent of the amount of mail.
I realise these processes may not actually be Poisson distributed, but this was the best example I could come up with to illustrate the problem of combining these two processes.
This is very much like cascading nuclear decays. This type of question is usually not asked there, but I would write down the (differential) equation and solve it. With the formula, it is a "simple" propagation of the distribution characteristics (simple in the sense of straightforward).
Maybe I am making this too simple, but: let $S \sim \mathcal{Po}(\lambda=100)$ be the sent mail, and the delivered mail $D \sim \mathcal{Po}(10 \cdot N)$, where $N$ is the number of mailmen. By independence, the number $U = S - D$ not delivered on time (zero or negative if all delivered) has a Skellam distribution; there is an implementation in the R package skellam. Then $\DeclareMathOperator{\P}{\mathbb{P}} \P(U \le 0)$ can be calculated directly for various $N$.
But controlling that at most 5% is late is more tricky. The event $U \le 0.05S$ can be written $0.95S \le D$, and its probability can be calculated by conditioning as
$$
\P(0.95S \le D) = \sum_{d=0}^\infty \P(S \le d/0.95)\cdot \P(D=d)
$$
which at least can be evaluated numerically.
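For what it's worth, here is a dependency-free numerical sketch of that conditioning sum (the infinite series is truncated roughly ten standard deviations above the mean of $D$; the function names are my own, and scipy.stats.poisson would do the pmf/cdf more robustly):

```python
import math

def pois_pmf(k, lam):
    """Poisson pmf, computed via logs to avoid overflow for large k."""
    return math.exp(-lam + k * math.log(lam) - math.lgamma(k + 1))

def pois_cdf(k, lam):
    """P(X <= floor(k)) for X ~ Poisson(lam)."""
    return sum(pois_pmf(i, lam) for i in range(int(k) + 1))

def prob_on_time(n_mailmen, lam_sent=100.0, rate=10.0):
    """P(0.95*S <= D): at most 5% of sent mail left undelivered."""
    lam_d = rate * n_mailmen
    hi = int(lam_d + 10 * math.sqrt(lam_d)) + 1  # truncate the infinite sum
    return sum(pois_cdf(d / 0.95, lam_sent) * pois_pmf(d, lam_d)
               for d in range(hi))
```

Increasing n_mailmen until prob_on_time exceeds the desired confidence level then answers the hiring question numerically.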
This is very nice, I was not familiar with the Skellam distribution!
| common-pile/stackexchange_filtered |
Solution of the parabolic equation $u_t=(axu)_{xx}-((bx+c)u)_x$
I am reading a book and it says that the Kolmogorov equation is similar to the parabolic equation $$u_t=(axu)_{xx}-((bx+c)u)_x$$
This may seem like a silly question but how is this a parabolic equation?
Also
If $c\leq0$ and/or $0<c<a$, why does this parabolic equation have a unique solution?
It is not a parabola, it is a parabolic equation.
Equations of the form $u_t = ku_{xx} + bu_x + cu$ are called parabolic partial differential equations. In your example, the coefficient $k$ depends on $x$.
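Expanding the derivatives with the product rule (taking $a$, $b$, $c$ constant, as in the question) makes this explicit:
$$
(axu)_{xx} = axu_{xx} + 2au_x, \qquad ((bx+c)u)_x = (bx+c)u_x + bu,
$$
so the equation reads
$$
u_t = axu_{xx} + (2a - bx - c)u_x - bu.
$$
There is no $u_{tt}$ or mixed $u_{tx}$ term, so the discriminant of the principal (second-order) part vanishes, which is precisely the parabolic case; where $ax>0$ this is a heat-type equation with variable diffusion coefficient $ax$.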
Just like conic sections can be parabolic or hyperbolic (think back to 3-D calculus), so can partial differential equations. The classification scheme is actually quite similar too. Recall a conic is parabolic if it's something like $c = x^2 + x$ (you can complete the square to make it look nicer).
This classification structure extends to pdes, since your primary (spatial) derivative is $u_{xx}$ this provides the classification of parabolic
How would you go about solving the parabolic equation in the question? Is it something to do with Laplace transforms?
| common-pile/stackexchange_filtered |
Make edits outside suggested edit
I couldn't find a duplicate of this bug/feature-request.
I wanted to make an edit to a question to improve the formatting, however I couldn't because there was an edit pending. When I click the edit (1) link, it pops up the review screen but I'm notified:
Thank you for reviewing 20 Suggested Edits today;
Am I at the mercy of waiting for this edit to be approved before I can edit it? I feel like I should be able to edit the question whether it's pending or not.
I'm pretty sure that this can by bypassed by going directly to the post's edit url, /posts/1234/edit. Your edit will go through and Community will reject the pending suggested edit.
@JeremyBanks♦ For some reason this didn't work for me.
This has bothered me a few times as well and I've learned to just smile and wave ;-)
There's no easy fix here; for your edit to go through, pending suggested edits must be reviewed first.
You could suggest making your edit automatically "improve" the pending edit, but what if the pending edit is bad and absolutely not useful? Would you undo the other's changes just to push your edit through? Either way it's still a review.
I think this is one of those cases whereby you should slowly step away from the computer ;-) leave the question open and come back to it later when the review has finished.
I'm glad I'm not the only one. I asked another question about edits and it seems that there should be a way to choose whether the edit was useful in this case.
| common-pile/stackexchange_filtered |
What is this default gray circle
I apologize in advance for such a basic question. I haven't been able to find anything online referring to this. I'm probably just not using the right search terms, but I don't know what this is called.
This is zoomed out showing my model which is a large building, and what appears to be a default sort of terrain or horizon. I can't interact with it. What is it and how do I get rid of it?
This view is a little closer and the scene has a sky box applied
This view is much closer at an angle showing the skybox. You can see the gray circle cutting the skybox off at the horizon.
Can you add a screenshot of what it looks like in wireframe mode?
This might be the far clipping plane of the camera coming into effect.
Clipping planes are two (near and far) planes from the camera's origin, away. Anything near than the near is culled, anything further than the far is culled.
If you're using a very wide angle camera, you might get this sort of round clipping effect on a far plane.
Try setting the far plane's value to a much higher number, to see if that helps/solves the problem.
Select your Main Camera in the Hierarchy, and adjust the Clipping plane values in the Inspector, about half way down... here...
Good thought. I'm familiar with clipping planes and adjusting that has no effect. It is related to something like that though. Its not an object. Something to do with player settings, the camera, or something similar.
Doh. I dunno. Been fighting all sorts of z-battle issues with Unity, and layering issues, last few days. Not this, though.
| common-pile/stackexchange_filtered |
Flash CS4: Actionscript 3.0 How to load images from a file
I currently can load images from a URL, but what is the script for loading an image from a file on my hard drive?
ti.border = true
ti.addEventListener(TextEvent.TEXT_INPUT, onInput);
function onInput(event:TextEvent):void
{
if(ti.text.search('a')!=-1) load_image("http://i54.tinypic.com/anom5d.png", "ottefct");
else if(ti.text.search('b')!=-1) load_image("http://i53.tinypic.com/2dv7dao.png", "rnd");
else if(ti.text.search('c')!=-1) load_image("http://i51.tinypic.com/m8jp7m.png", "ssd");
}
var loaded_images:Dictionary = new Dictionary();
function load_image(url:String, id_name:String)
{
var loader:Loader = new Loader();
loader.name = id_name;
var url_req:URLRequest = new URLRequest(url);
loader.contentLoaderInfo.addEventListener(Event.COMPLETE, onLoadingComplete);
loader.load(url_req);
}
function onLoadingComplete(evt:Event):void
{
var img_name:String = evt.currentTarget.loader.name
var spr_box:Sprite = new Sprite();
spr_box.addChild(evt.currentTarget.loader);
spr_box.mouseChildren = false;
spr_box.doubleClickEnabled = true;
spr_box.addEventListener(MouseEvent.MOUSE_DOWN, drag);
spr_box.addEventListener(MouseEvent.MOUSE_UP, drop);
spr_box.addEventListener(MouseEvent.MOUSE_WHEEL, rotate);
spr_box.addEventListener(MouseEvent.DOUBLE_CLICK , unrotate);
//Shouldn't really hard code this here
spr_box.width = 124;
spr_box.height = 180;
spr_box.x = 430;
spr_box.y = 425;
// - Since this isn't a class, I'd do this instead:
//spr_box.addEventListener(Event.ADDED_TO_STAGE, resize_img);
this.addChild(spr_box);
loaded_images[img_name] = spr_box;
}
//Because this event function lets you control individual image dimensions
/*
function resize_img(evt:Event):void
{
switch (evt.currentTarget.name)
{
case "ImageOne":
evt.currentTarget.width = 250;
evt.currentTarget.height = 250;
break;
default:
evt.currentTarget.width = 180;
evt.currentTarget.height = 124;
break;
}
}
*/
function drag(evt:MouseEvent):void
{
evt.currentTarget.startDrag()
}
function drop(evt:MouseEvent):void
{
evt.currentTarget.stopDrag()
}
function rotate(evt:MouseEvent):void
{
evt.currentTarget.rotation = 90
}
function unrotate(evt:MouseEvent):void
{
evt.currentTarget.rotation = 0
}
To load an image from the client machine's hard drive you have to use the FileReference class. This allows you to open a standard file-select box, let the user select a file, and have that file loaded into Flash as a ByteArray. You can then use Loader.loadBytes to load it as a DisplayObject as usual.
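A rough, untested sketch of that FileReference approach (it assumes Flash Player 10+ for FileReference.load(), and hands the bytes to Loader.loadBytes so the question's existing onLoadingComplete handler can be reused):

```actionscript
var fileRef:FileReference = new FileReference();
fileRef.addEventListener(Event.SELECT, onFileSelected);
fileRef.browse([new FileFilter("Images", "*.png;*.jpg;*.gif")]);

function onFileSelected(evt:Event):void
{
    fileRef.addEventListener(Event.COMPLETE, onFileLoaded);
    fileRef.load(); // loads the file into fileRef.data as a ByteArray
}

function onFileLoaded(evt:Event):void
{
    var loader:Loader = new Loader();
    loader.contentLoaderInfo.addEventListener(Event.COMPLETE, onLoadingComplete);
    loader.loadBytes(fileRef.data); // then handled like any URL-loaded image
}
```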
As shanethehat says, to load a file from the client's hard drive in a solution intended for use on the web, you have to use the FileReference class and have the user pick the file.
So that is probably the way you want to go, but since you ask about "loading an image from a file in my harddrive", if what you want to do only concerns your computer, where you develop and run the swf, you could also use a relative path to the file, or use the file: protocol instead of http:.
| common-pile/stackexchange_filtered |
MySQL aliased subquery cannot be used in where clause
I have a query, given below, that returns all the data asked for. In the next query I need to write, I must restrict the output to rows where the subquery result is greater than 5.
I will show examples, but I cannot understand why I cannot do what I am attempting.
Query without restriction
select t1.BOOK_NUM, t1.BOOK_TITLE,
(select count(t2.CHECK_OUT_DATE)
from checkout t2
where t2.BOOK_NUM = t1.BOOK_NUM) as Times_Checked_Out
from book t1
order by Times_Checked_Out desc, t1.BOOK_TITLE;
Screenshot of output
Attempt at query with restrictions
select t1.BOOK_NUM, t1.BOOK_TITLE,
(select count(t2.CHECK_OUT_DATE)
from checkout t2
where t2.BOOK_NUM = t1.BOOK_NUM) as Times_Checked_Out
from book t1
where Times_Checked_Out > 5
order by Times_Checked_Out desc, t1.BOOK_TITLE;
Error
You can't use a derived column alias in a WHERE clause; you need to use HAVING:
select t1.BOOK_NUM, t1.BOOK_TITLE,
(select count(t2.CHECK_OUT_DATE)
from checkout t2
where t2.BOOK_NUM = t1.BOOK_NUM) as Times_Checked_Out
from book t1
HAVING Times_Checked_Out > 5
order by Times_Checked_Out desc, t1.BOOK_TITLE;
Thank you very much, when the time is up I will mark as the answer.
Can you also help me understand this: I have the output in the screenshot, but instead of the column heading being Times_Checked_Out I want Times Checked Out, and when I use 'text' I don't get any results.
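On the alias with spaces: MySQL accepts spaces in a column alias when the alias is quoted with backticks. Single quotes in the HAVING clause are likely the problem, since 'Times Checked Out' there is a string literal, and comparing it to 5 yields no rows. An untested sketch:

```sql
select t1.BOOK_NUM, t1.BOOK_TITLE,
       (select count(t2.CHECK_OUT_DATE)
          from checkout t2
         where t2.BOOK_NUM = t1.BOOK_NUM) as `Times Checked Out`
  from book t1
having `Times Checked Out` > 5
 order by `Times Checked Out` desc, t1.BOOK_TITLE;
```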
| common-pile/stackexchange_filtered |
CSS Alignment of <li>
Possible Duplicate:
Formatting an <li> element within a <div>
The Facebook and Twitter elements are not aligned. I want them positioned perfectly, one above the other. Please can you help?
Here's my code:
#floating-box {
width: 65px;
height:auto;
background-color: #484848;
margin: 54px 10px 0px 623px;
position:absolute;
z-index:1;
text-align: justify;
border-top: 1px solid #000;
border-left: 1px solid #000;
border-bottom: 1px solid #000;
border-right: 1px solid #484848;
}
.social {
position : relative;
list-style-type : none;
margin-left: 2px;
}
.social li a {
float: left;
padding: 1px 5px 5px 0px;
margin: 0px 0px 3px 0px;
display: inline;
}
The HTML that uses this CSS is:-
<div id="floating-box">
<img src="likeusnow.jpg" />
<ul class="social"><!-- Facebook Like/Share Button -->
<li><a name="fb_share" type="box_count" rel="nofollow" share_url="http://www.mysite.com"></a>
</li>
<li>
<a href="https://twitter.com/share" rel="nofollow" class="twitter-share-button" data-url="http://www.mysite.com" data-lang="en" data-count="vertical">Tweet</a>
</li>
</ul>
</div>
Please do not post the same question twice.
try display: inline-block; instead
thank you, unfortunately, it didn't seem to work. I like your dress btw, very stylish ;)
cheers :) i bought it after losing 4 stone!
Ok I am making a lot of assumptions on this one.
Assumptions
You are using the vertical Facebook share (which has been deprecated, so I'm demonstrating with the Facebook Like Button)
Your are using the similarly styled Twitter share, also with the vertical count.
This is supposed to be a modal dialog that pops up in the middle of the screen.
The image "likeusnow.jpg" is just the text "Like Us Now"
I'd float the <li> elements rather than the <a>'s. Styling the <a>'s will not matter since their content is an <iframe>. It's the fixed width of the div that is getting you into trouble. While the buttons are supposed to be floated left, there is not enough room and the Twitter button is being bumped below the Facebook one.
CSS:
#floating-box {
position:absolute;
background-color: #484848;
margin: 54px 10px 0px 623px;
z-index:1;
text-align: justify;
border: 1px solid #000;
border-right: 1px solid #484848;
padding: 15px;
}
.social {
position: relative;
list-style: none;
margin-left: 2px;
padding: 0;
}
.social li {
float:left;
margin-left: 10px;
}
I made a demo using jsFiddle and inserted placeholder Facebook Like and Twitter Share plugins here.
You were right about the Facebook stuff. It appears, in fact, that the Facebook share plugin was creating an invisible iFrame mask over the entire page and stopping any clicks on links! I can't beleive Facebook can do this to our websites. Thanks so much for taking the time to help me though, it was much appreciated.
| common-pile/stackexchange_filtered |
Magento2 : How to get value of checkbox in observer?
I will refer to previous post: Magento : How to get value of checkbox in observer?
Anyone have example of how to do the exact same for Magento 2?
possible duplicate of https://magento.stackexchange.com/questions/166917/get-post-data-in-checkout-success-observer
Thanks, I've already tried this before asking the question. I'm not getting any results this way.
I have tried with following events:
checkout_submit_all_after,
sales_model_service_quote_submit_success
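For reference, a minimal Magento 2 observer sketch that reads a posted field from the request object instead of the event payload (the vendor/module names and the custom_checkbox field name are hypothetical, and the observer still has to be registered in events.xml for one of the events above):

```php
<?php
namespace Vendor\Module\Observer;

use Magento\Framework\App\RequestInterface;
use Magento\Framework\Event\Observer;
use Magento\Framework\Event\ObserverInterface;

class CheckboxValue implements ObserverInterface
{
    private $request;

    public function __construct(RequestInterface $request)
    {
        $this->request = $request;
    }

    public function execute(Observer $observer)
    {
        // 'custom_checkbox' is a hypothetical form field name
        $checked = (bool) $this->request->getParam('custom_checkbox');
        // ... act on $checked ...
    }
}
```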
Possible duplicate of Get post data, in Checkout success observer
| common-pile/stackexchange_filtered |
receiving ID of Pending Intent from Broadcast Receiver
I need to access the ID of the Pending Intent from the Broadcast Receiver class.
Here is the code of my Main Activity from which I set the Alarm using PendingIntent.
private void setAlarm(Calendar targetCal)
{
Intent alarmintent = new Intent(AddAlarm.this, AlarmReceiver.class);
PendingIntent sender = PendingIntent.getBroadcast(AddAlarm.this, pen, alarmintent, PendingIntent.FLAG_ONE_SHOT); //where pen is the ID
AlarmManager alarmManager = (AlarmManager)getSystemService(ALARM_SERVICE);
alarmManager.setExact(AlarmManager.RTC_WAKEUP, targetCal.getTimeInMillis(), sender);
}
And here is the code of my Broadcast Receiver:
public class AlarmReceiver extends WakefulBroadcastReceiver {
@Override
public void onReceive(final Context context, Intent intent) {
int vibrator = intent.getIntExtra("vibrator", 1);
//PendingIntent sender = PendingIntent.getBroadcast(context, 0, intent, 0);
//intent to call the activity which shows on ringing
Intent myIntent = new Intent(context, Time_Date.class);
myIntent.setFlags(Intent.FLAG_ACTIVITY_NEW_TASK);
context.startActivity(myIntent);
//display that alarm is ringing
Toast.makeText(context, "Alarm Ringing...!!!", Toast.LENGTH_LONG).show();
ComponentName comp = new ComponentName(context.getPackageName(),
AlarmService.class.getName());
startWakefulService(context, (intent.setComponent(comp)));
setResultCode(Activity.RESULT_OK);
}
}
Can I use Intent.putExtra() to pass it, or is there another easy way to get the unique ID into the BroadcastReceiver? Any help will be appreciated.
Yes, just use an Extra in the Intent, and get the Extra in the BroadcastReceiver.
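A short sketch of that suggestion (an Android code fragment, not a standalone program; the "alarm_id" key is just an example name):

```java
// When scheduling: attach the id as an extra on the Intent itself
Intent alarmintent = new Intent(context, AlarmReceiver.class);
alarmintent.putExtra("alarm_id", pen);          // same id used as the request code
PendingIntent sender = PendingIntent.getBroadcast(context, pen, alarmintent,
        PendingIntent.FLAG_ONE_SHOT);

// In AlarmReceiver.onReceive(): read it back from the delivered Intent
int alarmId = intent.getIntExtra("alarm_id", -1);   // -1 if the extra is missing
```

The extra survives because it is stored in the Intent wrapped by the PendingIntent; the request-code argument itself is not delivered to the receiver.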
Thanks, but my doubt was: as the ID is bound along with the PendingIntent, is it possible to access the ID from the BroadcastReceiver without using the intent-extra method?
I don't think that is possible, as you only have access to the Intent in the BroadcastReceiver.
Thanks, but one more doubt. As you can see from my code, I am launching another activity from this BroadcastReceiver, so do I need to pass the value again from the BroadcastReceiver to the launched activity, or can I get that value there from the main activity directly?
I don't think the second argument to PendingIntent.getBroadcast() is meant to be used by the component that eventually receives the intent (at least I have found no way to access it). If you want to pass some data along that is specific to your app, just use an extra.
| common-pile/stackexchange_filtered |
Is there any environment that would allow for electroreception to work on land?
This is not a duplicate. This is not asking about land electroreception on an Earth-like planet; it is asking what the planet would have to be like for electroreception on land to work. A denser, more humid atmosphere, maybe?
The word electrosensory is an adjective. It needs to modify a noun or a nominal group; that is to say, an electrosensory what? (And the fount of all knowledge has examples of non-aquatic organisms which are capable of electroreception, including bees, echidnas and caecilians.)
Bees can sense electric fields thanks to special structures, so either go with that or have all of your creatures give off ridiculous electric currents (humid air won't do, as both air and pure water are bad conductors)
The bee example that AlexP and ProjectApex mention in their comments is interesting. There is a PNAS commentary https://www.pnas.org/content/pnas/113/26/7020.full.pdf that explains that bees may not have a sensory apparatus like sharks and other vertebrates; instead, the accumulation of charge may cause the position of sensory hairs to change. That is not that different from an old-fashioned electrometer.
The Feynman lectures https://www.feynmanlectures.caltech.edu/II_09.html has a discussion of electric fields and currents in the atmosphere.
I think you have to be a little careful about what kind of sensing you want to do, and make a distinction between sensing current (a flow of ions) and sensing the electric field itself.
For the bees - dryer air is likely better, but they also have to get pretty close.
For sharks - it seems substantially more complicated; probably the concentration of salts and ions on the two sides of some sort of membrane changes in response to the electric field (one side being seawater, the other being the sensory organ of the shark). So it seems unlikely that this translates very well to land.
But for world building purposes it seems like you could have some hair like structures that could change position in relation to the electric field.
| common-pile/stackexchange_filtered |
What are some of the main high level approaches to applying ML on kinematic sensor data?
I've just started a project which will involve having to detect certain events in a stream of kinematic sensor data. By searching through the literature, I've found a lot of highly specific papers, but no general reviews.
If I search for computer vision, I'm likely to get hundreds of articles giving overviews of different types of architectures for various vision tasks. They would look something like this:
We mainly use CNNs which work like this ...
For object detection we use one or two stage detectors which look like this...
For video classification we can use 3D CNNs or RNNs...
.... etc
So I'm looking for something similar with regard to kinematic motion sensors. As was pointed out to me on the signal processing SE, "kinematic" could mean a lot of things. So specifically, I'm referring to 1d time series data for:
acceleration/velocity/position
angular velocity / absolute orientation
1D CNNs could help for time series. LSTMs can be used to predict the next value in a time series and for anomaly detection. More info on pattern recognition in time series here - https://stackoverflow.com/questions/11752727/pattern-recognition-in-time-series
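To illustrate the 1D-convolution idea from that comment, here is a dependency-free sketch of the core operation applied to a univariate acceleration trace (the sample values and kernel are made up):

```python
def conv1d(signal, kernel):
    """Valid-mode 1D cross-correlation: the core op of a 1D CNN layer."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

# A simple difference kernel responds to sudden changes in the trace
accel = [0.0, 0.0, 0.1, 1.5, 0.2, 0.0]   # made-up acceleration samples
edges = conv1d(accel, [-1.0, 1.0])        # largest response at the jump
```

A real 1D CNN stacks many such learned kernels with nonlinearities and pooling, but the sliding dot product over the time axis is the same.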
| common-pile/stackexchange_filtered |
Azure Logic Apps - Get Blob Content - Setting Content type
The Azure Logic Apps action "Get Blob Content" doesn't allow us to set the return content-type.
By default, it returns the blob as binary (octet-stream), which is useless in most cases. In general it would be more useful to have text (e.g. JSON, XML, CSV, etc.).
I know the action is in beta. Is that on the short term roadmap?
Have you tried setting the blob to the correct content-type? http://stackoverflow.com/questions/10040403/set-content-type-of-media-files-stored-on-blob
Yes. The blob was /json.
After fiddling much with Logic Apps, I finally understood what was going on.
The JSON output from the HTTP request is the JSON representation of an XML payload:
{
"$content-type": "application/xml",
"$content": "77u/PD94bWwgdm..."
}
So we can decode it, but it is useless really. That is an XML object for Logic App. We can apply xml functions to it, such as xpath.
So I had a blob sitting in Azure Storage with JSON in it.
Fetching the blob got me an octet-stream back that was pretty useless, as I was unable to parse it.
BadRequest. The property 'content' must be of type JSON in the
'ParseJson' action inputs, but was of type 'application/octet-stream'.
So I set up an "Initialize variable" action, content type String, pointing to Get Blob Content -> File Content. The base64 conversion occurs under the hood and I am now able to access my JSON via the variable.
No code required.
JSON OUTPUT...
FLOW, NO CODE...
Enjoy! Healy in Tampa...
Workaround I found is to use the Logic App expression base64ToString.
For instance, create an action of type "Compose" (Data Operations group) with the following code:
"ComposeToString": {
"inputs": "@base64ToString(body('Get_blob_content').$content)",
"runAfter": {
"Get_blob_content": [
"Succeeded"
]
},
"type": "Compose"
}
The output will be the text representation of the blob.
Thanks, that was progress. I was able to parse it, but not able to put the parsed content into a Slack message.
I am getting the results as a 'click to download' link, as it is reading the FTP file as binary. I could not find a way to resolve it.
You would need to know the content-type.
Use @{body('Get_blob_content')['$content']} to get the content part alone.
It is enough to use "Initialize Variable" and take the output of Get Blob Content as type "String". This will automatically parse the content:
| common-pile/stackexchange_filtered |
How to create login field properties depending on which radio button is checked
I have a login page with two radio buttons, "new account" and "existing user". When "new account" is selected, the "User" field auto-populates with the text "New Account" and the "Password" field remains blank. I need to grey out the fields so that they are uneditable when the "new account" radio button is selected, but still pass along the information in the fields, because it is used to gain access to the database. I can disable the fields to get the desired uneditable greyed-out look, but then the information does not get passed along to the database.
I have tried to fix this by creating two hidden fields (which auto populate with the needed information for database access) to take the place of the "user" and "password" field which allows me to disable the visible fields while "new account" radio button is clicked and which still passes along the new user login info that never changes. This works fine until I try to login as an existing user, in which case my two hidden fields do not auto populate with the users input for their existing account information.
There may be a much simpler approach to fixing this problem, but all of my research and trials have not been successful yet. I have been reluctant to ask this question as it seems so simple and frequently used approach for a login page, but all of my searching has not yielded any thing that has worked yet. I appreciate any input or navigation in the right direction.
How are you passing the field values to the database? Can you post a mock-up of your problem in JSFiddle?
...and please note that the values of disabled form fields will (usually) not be sent to the server (by the browser).
Getting disabled textbox values is straightforward eg http://jsfiddle.net/rxVPP/1/ - However as Marc says - if you are relying on the POSTing to return the value you may be out of luck.
I'm pretty sure the correct and painless solution for your problem is to use two different form tags (take care to not nest them) and show/hide the form depending on the selected radio button.
And for the convenience of your user you should copy the username from one form to the other if he has already filled the user field and switches to the other version later.
EDIT
The complete solution:
HTML
<label><input class="formSwitcher" type="radio" name="formSwitch" data-form="#divForm1"> Form 1</label>
<label><input class="formSwitcher" type="radio" name="formSwitch" data-form="#divForm2"> Form 2</label>
<hr>
<div class="hiddenForm" id="divForm1">Put form 1 in here</div>
<div class="hiddenForm" id="divForm2">Put form 2 in here</div>
JS
// if someone clicks on one radio button
$('.formSwitcher').change(function(){
// get the form id we want to show (from the "data-form" attribute)
var formIdToChange = $(this).data('form');
// hide all forms first
$('.hiddenForm').hide();
// show the specific form
$(formIdToChange).show();
});
// initially hide all forms
$('.hiddenForm').hide();
// initially call the change method if one radio is already selected on page load
$('.formSwitcher:checked').change();
Thanks Marc for that idea, the only trouble I have is I am not positive on how to follow through with it. I am able to create the two forms, but when it comes to showing/hiding a form based on radio buttons I get lost. I have found code to do this, but don't think I am using it properly... here is the original code I have started with http://jsfiddle.net/ajdietrich/V53kM/2/ does this next code look capable of creating the show/hide form feature I am looking for? http://jsfiddle.net/ajdietrich/RLTDV/
All a bit complicated mate :) As you already have the almighty jQuery it is pretty easy to set up such a switch, please see my updated answer or see it up and running in this jsfiddle: http://jsfiddle.net/marcgrabow/V53kM/3/
Are you saying that you need to change the existing user fields because they will grant access to the database? I think what you want to do is to change the properties of fields depending on what is selected. For example if you need the auto populate fields to be set when the user clicks new user then write something like
document.form.AutoPopUser.value="required value"
and when the person clicks on the existing user you can do
document.form.AutoPopUser.value=""
or if even having that part exist will mess up the existing user log in then you could delete that entire section by putting it inside a div and creating or destroying it depending on the selected option. I feel like I'm not being very clear but I think what you need is a clever set of onClick functions to get rid of things you dont need and add in the ones you do need. If you could post some code to go with this that would be awesome.
Thanks for your reply champ, I am not having trouble with the auto populate portion. You hit the nail on the head about the clever onclick functions, trouble is I'm not clever enough yet to write that type of code.
| common-pile/stackexchange_filtered |
Remove spaces before newlines
I need to remove all spaces before the newline character throughout a string.
string = """
this is a line \n
this is another \n
"""
output:
string = """
this is a line\n
this is another\n
"""
You can split the string into lines, strip off all whitespaces on the right using rstrip, then add a new line at the end of each line:
''.join([line.rstrip()+'\n' for line in string.splitlines()])
If the string doesn't end with a newline, this will add one (which may or may not be desirable).
@ekhumoro Good observation. One could use this approach on a slice of the splitted lines that excludes the last and treat the last one differently. Depends on OP.
It works but leaves a newline at the end of the string, like what ekhumoro said. So I used rstrip() to remove that newline; works fine.
import re
re.sub(r'\s+\n', '\n', string)
Edit: better version from comments:
re.sub(r'\s+$', '', string, flags=re.M)
This will fail for lines which don't end with a newline. Better to use re.sub(r'\s+$', '', string, flags=re.M).
As you can find here,
To remove all whitespace characters (space, tab, newline, and so on) you can use split then join:
sentence = ''.join(sentence.split())
or a regular expression:
import re
pattern = re.compile(r'\s+')
sentence = re.sub(pattern, '', sentence)
But he doesn't want to replace all whitespace.
What you're talking about is str.replace(), not str.sub().
OP doesn't want to remove all the whitespace in the string, only at the ends of lines. Your solution removes all the spaces, even between words in a sentence.
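As a side-by-side sketch of the two working approaches discussed above (assuming Python 3; note the character class `[ \t]` rather than `\s` in the regex, since `\s` also matches newlines and could otherwise consume the final one):

```python
import re

s = "this is a line \nthis is another \n"

# Approach 1: rstrip each line, then re-append a newline
out1 = ''.join(line.rstrip() + '\n' for line in s.splitlines())

# Approach 2: regex anchored at end-of-line (re.M); [ \t] avoids
# matching the newline characters themselves, unlike \s
out2 = re.sub(r'[ \t]+$', '', s, flags=re.M)

assert out1 == out2 == "this is a line\nthis is another\n"
```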
Issue while using UNION clause with ORDER BY
While placing a UNION between the two statements, it gives an error. The code is given below:
I tried placing brackets starting at the select statement and ending at order by name, but nothing works for me. Can someone please suggest what is wrong with this code?
select TOP(1) name from hack
where len(name) in (select max(len(name)) from hack )
order by name
UNION
select TOP(1) name from hack
where len(name) in (select min(len(name)) from hack )
order by name
Receiving the error mentioned below:
Incorrect syntax near the keyword 'UNION'.
Are you using SQL Server? I suppose you need subqueries to involve ORDER BY.
Yes I am using SQL Server. Adding subquery works for me. Thanks :)
If you remove the first order by name right before the UNION, you will not get the error any more and your query will give the expected result set you need. Please read up about "query execution order" (hint: that is why you can use a column alias in an order by clause).
The where is not necessary:
select h.*
from (select top (1) name
from hack h
order by len(name), name asc
) h
union all
select h.*
from (select top (1) name
from hack h
order by len(name) desc, name
) h;
I tried this query. But what if there are two names having the same length, one starting with 'a' and another with 'm'? In that case the query must show the name starting with 'a', but after executing this query it is showing the name with 'm'. Can you please let me know how I can modify this query for better results?
@user11497433 . . . Just add a second key to the order by.
You need to use subqueries:
select name
from (
select TOP(1) name
from hack
where len(name) in (select max(len(name)) from hack )
order by name
) A
UNION
select name
from (
select TOP(1) name
from hack
where len(name) in (select min(len(name)) from hack )
order by name
) B
Thanks HoneyBadger. This is really helpful. Just wanted to know what makes you nest one more query.
Access table only once:
SELECT a.Name
FROM (
SELECT h.Name
,ROW_NUMBER()OVER(ORDER BY LEN(h.Name) DESC, h.Name) AS [rnMax]
,ROW_NUMBER()OVER(ORDER BY LEN(h.Name) ASC, h.Name) AS [rnMin]
FROM hack h
) a
WHERE (a.rnMin = 1 OR a.rnMax = 1)
GROUP BY a.Name
ORDER BY a.Name
;
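To see the single-scan ROW_NUMBER approach in action, here is a quick sketch using SQLite via Python (window functions need SQLite >= 3.25; the table data is made up, and SQLite's length function is LENGTH rather than SQL Server's LEN):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE hack (name TEXT)")
con.executemany("INSERT INTO hack VALUES (?)",
                [("amy",), ("bob",), ("alexander",), ("maximilian",)])

# Rank each name once by ascending and once by descending length,
# with the name itself as the alphabetical tie-breaker
rows = con.execute("""
    SELECT a.name
    FROM (
        SELECT h.name,
               ROW_NUMBER() OVER (ORDER BY LENGTH(h.name) DESC, h.name) AS rnMax,
               ROW_NUMBER() OVER (ORDER BY LENGTH(h.name) ASC,  h.name) AS rnMin
        FROM hack h
    ) a
    WHERE a.rnMin = 1 OR a.rnMax = 1
    ORDER BY a.name
""").fetchall()

print(rows)  # [('amy',), ('maximilian',)]
```

Note how the alphabetical tie-breaker picks 'amy' over 'bob' among the equal-length shortest names, matching the behaviour asked about in the comments.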
Conditionally select multiple, separated ranges
I have an Excel spreadsheet tool to generate license images (.png files), which are printed using a wax-resin to PVC printer.
I'd like to simultaneously, conditionally select up to eight specific non-contiguous range clusters. If a defined cell has something, select the range cluster.
Something like:
if E4 is not blank, select D3:G18
if L4 is not blank, select K3:M18
if S4 is not blank, select R3:U18
if Y4 is not blank, select X3:Z18
if E24 is not blank, select D23:G38
if L24 is not blank, select K23:M38
if S24 is blank, don't select R23:U38
if Y24 is blank, don't select X23:Z38
I have code that's selecting all of the range clusters, but with no "intelligence".
Sub Select_Licenses()
Range("D3:G18,K3:M18,R3:U18,X3:Z18,D23:G38,K23:M38,R23:U38,X23:Z38").Select
End Sub
You can test each cell in turn using Application.Union() to build the range to select. Eg see BuildRange here: https://stackoverflow.com/a/64778203/478884
You can test each cell in turn using Application.Union() to build the range to select.
Sub Tester()
Dim ws As Worksheet, rng As Range
Set ws = ActiveSheet
If Len(ws.Range("E4").Value) > 0 Then BuildRange rng, ws.Range("D3:G18")
'...
'...
If Len(ws.Range("Y24").Value) > 0 Then BuildRange rng, ws.Range("X23:Z38")
If Not rng Is Nothing Then rng.Select
End Sub
'utility sub for building ranges using Union
Sub BuildRange(ByRef rngTot As Range, rngAdd As Range)
If rngTot Is Nothing Then
Set rngTot = rngAdd
Else
Set rngTot = Application.Union(rngTot, rngAdd)
End If
End Sub
Excellent! As soon as I have the opportunity I'll work on implementing. Thank you!
How do I add a dash after the fifth term in a string of numbers?
I am trying to do some data tidying in R with zip codes; some are 5 digits and others are 9, which I want to change from XXXXXXXXX to XXXXX-XXXX. df is the dataframe containing these zip codes with multiple repetitions of the same zip code.
a <- df$Zip_Code
for (i in a){
if (length(i) > 5){
str_replace(i, '(\d{5})','\1-')
}}
The code runs fine but the value of 'a' doesn't change, so I am wondering what I'm doing wrong.
You could do this in base R or stringr using the below approaches. You do not need a for loop or if statement since R uses vectorization:
Data
zip <- c(12345, 12345, 123456789, 123456789)
substr
Use substr to split the first 5 digits from the last 4 then paste0 to put it all back together
zip[nchar(zip) > 5] <- paste0(substr(zip[nchar(zip) > 5], 1, 5),
"-",
substr(zip[nchar(zip) > 5], 6, 9))
# [1] "12345" "12345" "12345-6789" "12345-6789"
gsub
You could alternatively use gsub to make it more elegant:
zip[nchar(zip) > 5] <- gsub('^(.{5})(.*)$',
'\\1-\\2',
zip[nchar(zip) > 5])
# [1] "12345" "12345" "12345-6789" "12345-6789"
stringr
since you tagged stringr, here is an approach using stringr::str_replace:
zip[nchar(zip) > 5] <- stringr::str_replace(zip[nchar(zip) > 5],
pattern = "(.{5})(.*)",
replacement = "\\1-\\2")
# [1] "12345" "12345" "12345-6789" "12345-6789"
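The same anchored-pattern idea, sketched in Python for comparison (made-up data; in R, remember that backreferences need double escaping, e.g. `'\\1-\\2'`, which is one pitfall in the original str_replace call):

```python
import re

zips = ["12345", "123456789"]
# Anchored pattern: only 9-digit codes match; 5-digit codes pass through unchanged
out = [re.sub(r'^(\d{5})(\d{4})$', r'\1-\2', z) for z in zips]
print(out)  # ['12345', '12345-6789']
```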
Javascript is not getting executed when DOM/HTML is loaded browser cache
I am caching the dynamic content using the ETag. There is JavaScript in a script tag, which in turn is located at the end of the body tag. This script tag makes changes to DOM elements. But when the DOM/HTML is loaded from the browser cache, the JavaScript is not getting executed.
Any work around to make this work?
Also I am observing that caching is working fine with Chrome and Mozilla, but not with Opera. Is there any specific reason/bug in Opera that is causing this issue?
Can you show us some code?
Got an issue. I am populating the js variables from php. So values of js variables are also cached.
e.g.
e^x derivative using first principles
Using the Taylor series expansion for e^x , show that the
derivative of e^x (using first principles) is e^x (include at least three terms of the series to confirm the derivative of e^x)
...but Taylor series use derivatives, so isn't that going to be pretty circular? Besides, and as far as I know, that is not "first principles" ...
No, you do it.$\phantom{}$
Do you have a question about this exercise?
@anomaly No, I do it. You know that $e^x=1+x+x^2/2+o(x^2)$. Therefore, $(e^x)'=\lim_{h\to 0}(e^{x+h}-e^x)/h=\lim_{h\to 0}e^x(e^h-1)/h=\lim_{h\to 0}e^x(1+h/2+o(h^2)/h)= e^x$.
Note that
$$e^x=1+x+\frac{x^2}{2}+\frac{x^3}{3!}+...$$
thus
$$(e^x)'=0+1+2\frac{x}{2}+3\frac{x^2}{3!}+...=1+x+\frac{x^2}{2}+...=e^x$$
Rails 4.2 - Creating an action for a belongs_to model (Thinkster.io Angular + Rails Tutorial)
I'm following the AngularJS + Rails tutorial from here https://thinkster.io/angular-rails/ and have run into a wall at the "Finishing Off Comments" section towards the end (right after it says "To enable adding comments, we can use the same technique we used for adding new posts"). Specifically, the server is throwing a 500 when I hit the /posts/{id}/comments.json endpoint.
The error I get is undefined local variable or method `post' for #<CommentsController:0x5f72b38>.
post.rb:
class Post < ActiveRecord::Base
has_many :comments
def as_json(options = {})
# Make all JSON representations of posts include the comments
super(options.merge(include: :comments))
end
end
comment.rb:
class Comment < ActiveRecord::Base
belongs_to :post
end
postsCtrl.js:
angular.module('flapperNews')
.controller('PostsCtrl', [
'$scope',
'posts',
'post',
function($scope, posts, post) {
$scope.post = post;
$scope.addComment = function(){
if($scope.body === '') { return; }
posts.addComment(post.id, {
body: $scope.body,
author: 'user'
}).success(function(comment) {
$scope.post.comments.push(comment)
});
$scope.body = '';
};
}]);
posts.js:
angular.module('flapperNews')
.factory('posts', [
'$http',
function($http) {
// Service Body
var o = {
posts: []
};
o.getAll = function() {
return $http.get('/posts.json').success(function(data) {
angular.copy(data, o.posts)
});
};
o.create = function(post) {
return $http.post('/posts.json', post).success(function(data) {
o.posts.push(data);
});
};
o.upvote = function(post) {
return $http.put('/posts/' + post.id + '/upvote.json')
.success(function(data) {
post.upvotes += 1;
});
}
o.get = function(id) {
return $http.get('/posts/' + id + '.json').then(function(res) {
return res.data;
});
};
o.addComment = function(id, comment) {
return $http.post('/posts/' + id + '/comments.json', comment);
}
return o;
}]);
And finally comments_controller.rb:
class CommentsController < ApplicationController
def create
comment = post.comments.create(comment_params)
respond_with post, comment
end
def upvote
comment = post.comments.find(params[:id])
comment.increment!(:upvotes)
respond_with post, comment
end
private
def comment_params
params.require(:comment).permit(:body)
end
end
I understand that it's complaining about the reference to post in the create action, but I don't know why Rails isn't just recognizing it as the post the comment belongs to. I'm very new to Rails but I can't see anything I've done differently from the tutorial. Thanks a bunch!
hey man, sorry for bothering you with an off-topic question - did you manage to complete the tutorial? Because I've been stuck on Wiring Everything Up#Loading posts. Could you please, if that's not a problem, provide me with a link to your GitHub, where I can take a look at something? Would be really appreciated. Thx in advance!
Hoo boy I feel stupid now. I added the following line to the top of the create action and it works now: post = Post.find(params[:post_id]) Hope this helps anyone else who gets stuck anyway! The tutorial seems to be missing this part.
What can we accept in thought experiments in relativity?
Although title is more broad, and you are welcome to give examples, I will ask about why we accept certain things as acceptable in Einstein's thought experiments using a specific experiment:
Consider the famous mirror in a train example:
There is a train moving with velocity $v$ relative to earth. The train has height $h$ and there is a mirror in the ceiling and a light source at the bottom directly below the mirror. Alice, inside the train, measures the time it takes for light to go back and forth once, $t_{\text{Alice}}$. Bob, a person on the ground, measures the time it takes for light to go back and forth once, $t_{\text{Bob}}$. The relation between the times is $$\gamma t_{\text{Alice}} = t_{\text{Bob}}$$
for $\gamma = \frac{1}{\sqrt{1-\frac{v^2}{c^2}}}$.
Now a proof without using Lorentz transformations is as follows (based on that from David Morin's Classical Mechanics):
$2h = ct_{\text{Alice}}$ for obvious reasons, and $c^2(\frac{t_{\text{Bob}}}{2})^2 = v^2(\frac{t_{\text{Bob}}}{2})^2 + h^2$ based on the fact that the light will move piecewise straight until it reaches the mirror and back; the equation is based on the Pythagorean theorem.
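Solving the second relation for $t_{\text{Bob}}$ and substituting $t_{\text{Alice}} = 2h/c$ fills in the remaining step (using only the two equations above):

```latex
\left(c^2 - v^2\right)\left(\frac{t_{\text{Bob}}}{2}\right)^2 = h^2
\quad\Rightarrow\quad
t_{\text{Bob}} = \frac{2h}{\sqrt{c^2 - v^2}}
             = \frac{2h/c}{\sqrt{1 - v^2/c^2}}
             = \gamma\, t_{\text{Alice}}
```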
My question is really motivated by simultaneity being frame dependent, so that one person saying they saw something is a bit subjective: a person could say they saw 2 light sources light up at the same time, but another could say they lit up at different times.
Then, based on the above, why do we agree that both Alice and Bob saw the light hit the mirror in the first place? (i.e. in general, what constitutes a lie and what is just frame-dependent truth in relativity?)
Note: I specifically omitted the Lorentz transformations, since they can be accepted as axioms, which is fine, and the equations guarantee that the light hits the mirror; but Einstein probably wasn't basing his arguments directly on some formalism other than his own postulates.
You are wrong in the interpretation of this. The full Lorentz transformations can only be arrived at after obtaining time dilation and length contraction. We do not postulate that the Lorentz transformation is correct. We postulate that the speed of light in vacuum is constant for all observers. Also, we are not asserting that Alice and Bob would say that the light hits the mirror at the same time; we only asserted that both of them see something, and then we try to relate them together.
Also bear in mind that the word "see" used here is a very specialized interpretation of the word, which has nothing to do with light entering anyone's eyes!
I am aware of meaning of "see". Also, postulating Lorentz transformation is equivalent to postulating const speed of light by the notion of invariant time.
Then, based on the above, why do we agree that both Alice and Bob saw the light hit the mirror in the first place? (i.e. in general, what constitutes a lie and what is just frame-dependent truth in relativity?)
There is an assumption involved here that is often (or even usually?) left unstated because it seems so extremely manifestly self-evident. That assumption is that, even though time and space are each frame dependent, localized events are objectively absolute, independent of frame. Changing your reference frame may change when and where any specific event occurred, but it cannot change whether the event occurred.
The Big Bang happened in the reference frame you are using, and changing to another reference frame cannot change that objective fact.
The Earth formed in the reference frame you are using, and changing to another reference frame cannot change that.
You viewed this answer in the reference frame you are using, and changing to another reference frame cannot change that.
Likewise, the light hit the mirror in the thought experiment, and changing your choice of reference frame cannot change that. Changing your reference frame can change the time and location of the event of the light hitting the mirror, but cannot change that the light did hit the mirror.
This principle is an axiom that is so commonly implicit that I'm not sure it even has an official name.
Perfect! This is exactly the answer to my question.
In terms of Lorentz transformations, clearly transformation is continuous (and so is its inverse), thus , it seems relevant to say I think, locally we can focus on one part of event and look at how it did transform than the global picture.E.g there are 2 light bulbs and in frame $S$ they light up at the same time but in $S'$ they didn't. Now the term 'one part of event' is really if we choose a locality around a light bulb that doesn't contain the other, then that part of system transforms in such a way that in both frames light turns on, but not both of them at the same time.
Recommendation: the article Nothing but relativity by Palash B. Pal.
That article stands in the following approach to special relativity: how far can you push while using only the principle of relativity of inertial motion?
So that is an approach where the starting point is as follows: assume there is an equivalence class of coordinate systems such that all members are in inertial motion relative to each other. What is the most general form of a transformation that can be used to transform between any pair of coordinate systems?
(That approach has a long history: Palash B. Pal offers that his contribution is in achieving a more comprehensive and better flowing presentation than in prior treatments.)
In the article it is demonstrated that the principle of relativity of inertial motion alone is sufficient to narrow down to just two possibilities:
Lorentz transformation
Galilean transformation
And of course: the Galilean transformations are a limiting case; with infinite speed of causality the Lorentz transformations simplify down to Galilean transformation.
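Concretely, the limiting case mentioned above reads (standard form of the one-dimensional Lorentz transformation; as $c \to \infty$, $\gamma \to 1$ and $vx/c^2 \to 0$):

```latex
x' = \gamma\,(x - vt), \qquad
t' = \gamma\left(t - \frac{vx}{c^2}\right), \qquad
\gamma = \frac{1}{\sqrt{1 - v^2/c^2}}
\;\xrightarrow{\;c \to \infty\;}\;
x' = x - vt, \qquad t' = t
```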
With the possibilities narrowed down to just two: it is then possible to have experimental result deciding which one to go with.
(Of course, what you then want is several mutually corroborating experimental results, some experiments probing propagation of light, other measuring motion of particles at relativistic velocity.)
More general consideration:
The purpose that thought experiments are useful for is to probe the implications of a theory of physics.
If the thought experiment leads to a self-contradiction then you know something is wrong.
In my opinion thought experiments are very much ill suited for the purpose of introducing some theory of physics, or for the purpose of motivating some theory of physics
Awesome answer! Indeed Palโs paper is an excellent resource.
I think I got the answer to my question: the truth told, or what the world is at a snapshot of time, is well defined through the Lorentz transformation (but the algebra is messy). Then the total history is the truth in a given frame.
But other than that, above answer motivates why the mentioned principle is associated with Lorentz transformation (with support from experiment), so it is helpful.
Thought experiments are simply a way of thinking about physical principles. Whether a thought experiment is valid depends on whether it is self-consistent and leads to an outcome that is consistent with experimental results.
The thought experiment involving a light clock can be considered in a more abstract way without any observers at all. You can simply consider light leaving the origin in some reference frame, traveling directly up the y axis, striking a horizontal mirror and returning back to the origin. In another frame moving inertially with respect to the first at 90 degrees to the y axis, the light follows a diagonal path, and therefore must take longer to complete the round trip. From that you can work out the formula for time dilation.
The more abstract version of the thought experiment contains a number of implicit assumptions. For example, it assumes that the vertical distance from the origin to the mirror is the same in both frames. It assumes that there is no significant delay involved in either frame when the light changes direction upon encountering the mirror. You can of course question all such assumptions, but ultimately what counts is whether they lead to experimentally verifiable conclusions.
In this type of thought experiment, Alice and Bob are only made human observers in order to make the thought experiment more "relatable". There is an assumption (often unstated) that Alice and Bob are perfect objective observers and neither make mistakes nor lie about their observations. Conceptually, you could replace Alice and Bob with very precise and accurate photodetectors and timing devices.
First of all, you should examine the train experiment more carefully. Read the version in Einstein's book Relativity. The train experiment can be translated mechanically into the thought experiment in the 1905 paper. There is no difference.
The anomaly is that Einstein says point M and M' "fallt zwar...zusammen." The English translation has "naturally coincides."
The problem with this term is that it is not defined in the argument. Einstein does use the Euclidean definition of the coincidence of points (which is: ________ [you fill that in]). But there is no definition of a "natural" coincidence of points.
Thus, this term has no place in the argument. Because that is so, we cannot move beyond it to establish the relativity of simultaneity, or special or general relativity, or the standard model.
We can't remove the "naturally" either, since that leaves us with one Cartesian coordinate system, when we are supposed to assume two--a contradiction.
So this term is the pea under the mattress.
Restatements of the train experiment always unconsciously attempt to correct this error--the authors know there is a problem, but are unaware of it. The most glaring example is the 1920 Italian translation of the book. Compare the discussion of points M and M'. The attempted correction is right before your eyes. Do you see it?
So, in the first place, make sure you do actually have a thought experiment and not, as in the case of the relativity of simultaneity, a sleight of hand.
This was a truly pointless objection the first time you made it and it remains pointless this time too. Under a linear transform, the midpoint of a segment naturally transforms to the midpoint of the transform of the segment. It is entirely correct, well understood, and completely non-objectionable. This is not a sleight of hand nor a pea under the mattress. There may be variations in translation, but the math and science are clear
Configure ProFTPD correctly on Ubuntu 16.04
-- My System Info
Ubuntu 16.04
Using SSL through LetsEncrypt module
ProFTPD module Main
domain is a WordPress
I know there are a lot of questions about this issue and I read dozens of documentation pages, but unfortunately none of those solutions worked for me. I installed Virtualmin correctly (I think); I'll write the commands that I ran to make sure it's an OK setup.
-- Commands
sudo apt-get update
sudo apt-get dist-upgrade -y
-> This message A new version of /boot/grub/menu.lst is available but the version installed currently has been locally modified appeared ONCE during setup and I chose Install the package maintainer's version
-> Reboot the system
wget https://software.virtualmin.com/gpl/scripts/install.sh -O /root/virtualmin-install.sh
Then I create the first virtual server for my main domain and subdomains and request LetsEncrypt certificates, and everything works very well.
-- My Issue with FTP, WordPress and FileZilla
1- I tried to login in FileZilla with Host<EMAIL_ADDRESS>but I received the following messages...
Status: Initializing TLS...
Status: Verifying certificate...
Status: TLS connection established.
Status: Logged in
Status: Retrieving directory listing...
Status: Server sent passive reply with unroutable address. Using server address instead.
Command: MLSD
Error: Connection timed out after 20 seconds of inactivity
Error: Failed to retrieve directory listing
I did some searching and saw this command modprobe ip_conntrack_ftp in {https://www.virtualmin.com/documentation/web/faq#toc-ftp-service-isnt-working-Llnjz8K8}, so I ran it and rebooted the system, but the same messages appear..
2- In my main domain I cannot add a new plugin or media file, and an FTP credentials box appears too (I think it's related).
Any Suggestion?
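Regarding the "Server sent passive reply with unroutable address" status: that message usually means the server is advertising its private/internal IP for passive-mode data connections, and a TLS-encrypted control channel prevents the kernel FTP helper (ip_conntrack_ftp) from rewriting it. A common remedy is sketched below; MasqueradeAddress and PassivePorts are real ProFTPD directives, but the address and port range are placeholders to adapt, and this is not a verified fix for this exact setup:

```apache
# /etc/proftpd/proftpd.conf (fragment) -- adapt the address and range
# Advertise the server's public address in PASV replies
MasqueradeAddress 203.0.113.10

# Pin passive-mode data connections to a fixed port range,
# then open that same range (TCP) in the firewall
PassivePorts 49152 50192
```

After changing the config, restart ProFTPD and allow 49152-50192/tcp through the firewall.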
Arm TrustZone on Xilinx zynq zc706, smc #0
Arm TrustZone, zynq-zc706
Hi,
I tried enabling TrustZone on Xilinx Zynq zc706 board. After many attempts, still no success.
Does anyone know if I have to enable somehow that option? I downloaded opensource solution for TrustZone implementation (sierraTEE from openvirtualization.org) but I am not able to boot its kernel.
When the SMC #0 instruction is performed (during boot), the system goes to PREFETCH ABORT. Do I have to change or do something to enable the SMC function (Secure Monitor Call, to change worlds)?
Please, write whatever you know.
Thanks
You have to configure the hardware system prior to simply booting the code. Specifically, there are several clock signals that must be connected in the hardware in order for the NW to function properly. Xilinx released UG1019 that details a lot of the configuration that must be done. Once you have the hardware design created properly, you should be able to create a FSBL project, perform the required configuration operations outlined in UG1019, and then boot the SierraTEE.
I have found this project quite useful; I have tried it myself on a ZedBoard and it works.
https://github.com/tzvisor/ltzvisor
Angular 16: Either AngularFireModule has not been provided in your AppModule (this can be done manually or implictly using provideFirebaseApp)
**Versions info:
@angular/fire 7.6.1
@angular/common": 16.0.0
The error I am receiving:
[screenshot of the error message, matching the title of this question]
I was trying to implement unit testing in my Angular project where I have implemented angular/fire. The code runs perfectly, but while unit testing it asks for AngularFireModule (which has been deprecated in newer versions):
My authentication service:
import { Injectable } from '@angular/core';
import { from, Observable } from 'rxjs';
import {
Auth,
signInWithEmailAndPassword,
authState,
createUserWithEmailAndPassword,
UserCredential,
} from '@angular/fire/auth';
@Injectable({
providedIn: 'root',
})
export class AuthService {
currentUser$ = authState(this.auth);
constructor(private auth: Auth) {}
signUp(email: string, password: string): Observable<UserCredential> {
return from(createUserWithEmailAndPassword(this.auth, email, password));
}
login(email: any, password: any): Observable<any> {
return from(signInWithEmailAndPassword(this.auth, email, password));
}
logout(): Observable<any> {
return from(this.auth.signOut());
}
}
Do you have any update? I'm trying to test an angular app with your same versions but when I try to test a component that inject a service that use authState I receive your same error.
Store output of command in sftp to variable and list
My aim is to create a shell script such that it logs in, filters the list of files available, and selects a file to get. Here I need to run commands like in bash.
My sample code is:
sshpass -p password sftp<EMAIL_ADDRESS><<EOF
cd /home/
var=$(ls -rt)
echo $var
echo "select a folder"
read folder
cd $folder
filen=$(ls -rt)
echo $filen
echo "select a file"
read name
get $name
bye
EOF
You want to control from your program a FTP session, and at the same time keep a dialogue the user going. I wouldn't do this in bash. Have a look at, for instance, Perl with Net::SFTP, example is discussed here, or Ruby.
Here are more SFTP examples for Perl.
After some research and experimentation, I found a way to create batch/interactive sessions with sftp. Posting as a separate answer, as I still believe the easier way to go is with lftp (see other answer). This might be used on a system without lftp.
The initial exec creates FD#3 - pointing to the original stdout - probably the user terminal. Anything sent to stdout will be executed by the sftp in the pipeline.
The pipe is required to allow both processes to run concurrently. Using a here doc will result in sequential execution. The sleep statements are required to allow sftp to complete data retrieval from the remote host.
exec 3>&1
(
echo "cd /home/"
echo "ls"
sleep 3 # Allow time for sftp
echo "select a folder" >&3
read folder
echo "cd $folder"
echo "ls"
sleep 3 # Allow time for sftp
echo "select a file" >&3
read name
echo "get $name"
echo "bye"
) | sshpass -p password sftp<EMAIL_ADDRESS>
That works great with a small correction in your code: the "<<EOF" needs to be removed.
The above approach will not work. Remember that the 'here document' (<<EOF ... EOF) is evaluated as input to the sftp session. Prompts will be displayed, and user input will be requested BEFORE any output (ls in this case) will be available from sftp.
Consider using lftp, which has more flexible construct. In particular, it will let you use variables, create command dynamically, etc.
lftp sftp://user@host <<EOF
cd /home
ls
echo "Select Folder"
shell 'read folder ; echo "cd $folder" >> temp-cmd'
source temp-cmd
ls
echo "Select Folder"
shell 'read file ; echo "get $file" >> temp-cmd'
source temp-cmd
EOF
In theory, you can create similar constructs with pipes and sftp (may be a co-process ?), but this is much harder.
Of course, the other alternative is to create different sftp sessions for listing, but this will be expensive/inefficient.
lftp is not installed, however the below answer worked well
I would suggest you create a file with the pattern of the files you want downloaded; then you can get the files downloaded in one single line:
sftp_connection_string <<< $"ls -lrt"|grep -v '^sftp'|grep -f pattern_file|awk '{print $9}'|sed -e 's/^/get -P /g'|sftp_connection_string
if there are multiple definite folders to be looked into, then:
**Script version**
for fldr in folder1 folder2 folder3;do
sftp_connection_string <<< $"ls -lrt ${fldr}/"|grep -v '^sftp'|grep -f pattern_file|awk '{print $9}'|sed -e "s/^/get -P ${fldr}/g"|sftp_connection_string
done
One-liner
for fldr in folder1 folder2 folder3;do sftp_connection_string <<< $"ls -lrt ${fldr}/"|grep -v '^sftp'|grep -f pattern_file|awk '{print $9}'|sed -e "s/^/get -P ${fldr}\//g"|sftp_connection_string;done
let me know if it works.
How to enable a click event in GWT Java?
How do I implement a click event in Java(GWT) for this image with an overlaid set of map area tags?
<img src="planets.gif" width="145" height="126" alt="Planets" usemap="#planetmap">
<map name="planetmap">
<area shape="rect" coords="0,0,82,126" href="sun.htm" alt="Sun">
<area shape="circle" coords="90,58,3" href="mercur.htm" alt="Mercury">
<area shape="circle" coords="124,58,8" href="venus.htm" alt="Venus">
</map>
Have you tried to follow the tutorial on click events? http://www.gwtproject.org/doc/latest/tutorial/manageevents.html#listening Please make an attempt and ask a specific question about any problems you are having and you are much more likely to get useful help.
It depends on what you are trying to do after the click. If you just need to know what was clicked you should read about events as @jwpfox suggests. If you need some kind of navigation after the click, I would suggest using the GWT History mechanism.
Do you want to catch the click anywhere it's clicked, or specifically on the shapes?
iPhone storage in tmp directory
I have a question from this Stack Overflow question about iPhone storage. As I already tried to answer, we can cache data in the tmp directory. But a comment says that the data can be deleted at the OS's whim. I don't understand exactly the problem that the comment describes.
I want to ask whether the OS deletes the tmp directory manually or automatically. In other words, does the system auto-detect that our tmp directory has to be deleted?
Another question is whether we can control, or be asked to do, something (before the deletion) that can help us to keep the tmp directory.
Another question is: if we cannot do anything, then how often will the OS do that, and under what circumstances?
This blog post explains about almost all directories of iOS application http://kmithi.blogspot.in/2012/08/ios-application-directory-structure.html
The OS will delete the temp directory on restart and at other indeterminate points. If you need to store something somewhere that you don't want backed up then use the cache directory. That will not be deleted and will not be backed up.
Your application will not be running when the temp is deleted nor will you have an opportunity to react to that deletion. This is fairly common behavior on all unix based platforms (OS X does this as well).
One more question: where can I store something that will be backed up and not deleted?
To be backed up and not deleted, it needs to be in the documents directory.
It should be noted that in the iOS 5 world, the cache directory can now be cleaned out, so it's not as reliable as it used to be.
That may be true for the system-wide /tmp folder (which is not accessible to regular apps and not visible on a non-jailbroken phone), but local app ./tmp folders do not get deleted when restarting iOS.
Note that you can use iExplorer for having a look at app local ./tmp directories on non-jailbroken phones.
What do you mean by intermediate points? Also does this mean every app will load some objects from documents dir, some from cache directory and some from tmp directory-- all at once?
I wrote indeterminate points not intermediate points. Different meaning entirely. What that means is that the OS is not locked into removing temp folders at a certain point in its lifecycle. It might remove them at any point and you need to be prepared for that.
In iOS 5, the OS can clean the cache and tmp directories at any time. Only files in the documents directory won't be deleted. Those files will also be backed up to iCloud, so they shouldn't be big.
Here is great article about this issue: iOS 5 caches cleaning
There is a fix for this issue in iOS 5.0.1. You can now specify which files should not be deleted during device cleanup.
| common-pile/stackexchange_filtered |
IF statement not working in RHEL 6 (works in RHEL 5)
I have a simple if statement that works fine in RHEL 5, but for some inexplicable reason, fails in RHEL 6:
if [[ ! $1 =~ "(one|two|three)" ]] ; then
echo -e "\n***Invalid number"
usage
exit 1
else
action=$1
fi
I can use a case statement which works fine or re-write it but more than anything, I'm curious as to what has changed, assuming it is the version of RHEL and not something else?
The regex must not be quoted in newer Bash (starting from Bash version 3.2); try this:
if [[ ! "$1" =~ (one|two|three) ]] ; then
echo -e "\n***Invalid number"
usage
exit 1
else
action="$1"
fi
To be able to use quoted regex you can use:
shopt -s compat31
EDIT: As glen commented below you can use !~ operator also i.e.
[[ "$1" !~ (one|two|three) ]]
You can use !~ instead of ! ... =~
The best way of keeping the script compatible across versions is to use a variable re='(one|two|three)' and then use [[ ! $1 =~ $re ]].
Thanks all ... that's good to know. Although the !~ didn't seem to work. I'll have a play with that.
Wait, d'oh, I missed the quotes around the variable. Face palm.
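Pulling the comments together, the variable-based form can be sketched as a small validator (the `validate_action` wrapper and the echoed strings are illustrative only; the original script's `usage` call is omitted):

```bash
#!/bin/bash
# Keep the regex in a shell variable and leave both sides of =~ unquoted;
# this form behaves consistently across older and newer Bash versions.
validate_action() {
  local re='(one|two|three)'
  if [[ ! $1 =~ $re ]]; then
    echo "invalid"
    return 1
  fi
  echo "valid"
}

validate_action one    # prints "valid"
validate_action four   # prints "invalid"
```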
| common-pile/stackexchange_filtered |
python simplekml change shape of newpoint
I am trying to change the default yellow push-pin place marker to a rectangle or a single pixel.
After running the code below I am still getting the default yellow push-pin place marker.
import simplekml
#from simplekml import Shape,Color
kml = simplekml.Kml()
pt2=kml.newpoint(name="test", coords=[(18.432314,-33.988862)])
# Neither of the lines below works:
pt2.style.iconstyle.icon.shape='rectangle'
pt2.style.iconstyle.shape='rectangle'
pt2.style.iconstyle.color='ffffff00'
kml.save("test.kml")
To change the shape of the icon, change the href property of the iconstyle, which is the URL of the icon image. You can see a list of icons for use in Google Earth here.
In the code posted above, "shape" is not a property of icon, nor is it part of iconstyle. The structure of the icon and iconstyle properties is defined by the KML spec.
Updated code to output KML with a custom icon style:
import simplekml
kml = simplekml.Kml()
pt2 = kml.newpoint(name="test", coords=[(18.432314,-33.988862)])
# use square icon
pt2.style.iconstyle.icon.href = 'http://maps.google.com/mapfiles/kml/shapes/placemark_square.png'
# set color of icon to be cyan (RGB=#00ffff)
pt2.style.iconstyle.color ='ffffff00' # aabbggrr
print("Output: test.kml")
kml.save("test.kml")
| common-pile/stackexchange_filtered |
Importing sql script and running it using node oracledb
I have a query.sql file that I am importing using Node's fs module and trying to run with the node-oracledb module, but it gives me a "missing or invalid option" error:
set echo off
set abc on
select * from table
set echo on
You're trying to use SQL*Plus set commands in another application; they are not SQL statements and will throw "ORA-00922: missing or invalid option". Are you able to pre-process the script file to remove them?
The driver executes SQL statements and PL/SQL blocks, but not scripts. Someone could write a module to parse scripts and execute them with the driver, but it's not trivial and I'm unaware of anyone having done it yet. Do you have to run the script or can you just execute the SQL and PL/SQL within it? Do you have to use Node.js or can you use another tool like SQLcl which works with scripts out of the box https://www.oracle.com/database/technologies/appdev/sqlcl.html?
Well, maybe not quite out-of-the-box, since you also need a Java SDK installed :) If you don't have a JDK, it'll be a lot faster to download SQL*Plus
@AlexPoole Preprocess it in what way, and why do I have to remove them?
@DanMcGhan I have to use Node.js with the node-oracledb driver only. Could you suggest any other npm module that supports executing scripts?
@ChristopherJones, I have to run this script through the code.
You cannot run the script as it is; you at least need to strip out the non-SQL parts (the client-specific SQL*Plus set commands etc.), but if the script contains multiple SQL statements (and/or PL/SQL blocks) then you will have to separate those out - which isn't trivial - and run those statements one by one. It's probably still easier to call out to SQL*Plus.
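A naive pre-processing sketch along those lines — it assumes statements are separated by `;` and that no string literal contains one; the function name is made up, and real scripts with PL/SQL blocks would need a proper parser:

```javascript
// Drop SQL*Plus-only commands (set, spool, prompt -- extend the list as
// needed), then split what remains into individual statements on ";".
function splitSqlScript(script) {
  return script
    .split('\n')
    .filter(line => !/^\s*(set|spool|prompt)\b/i.test(line)) // SQL*Plus commands
    .join('\n')
    .split(';')
    .map(stmt => stmt.trim())
    .filter(stmt => stmt.length > 0);
}

const stmts = splitSqlScript('set echo off\nselect * from emp;\nset echo on');
console.log(stmts); // [ 'select * from emp' ]
```

Each resulting statement could then be passed one by one to node-oracledb's `connection.execute()`.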
It seems you didn't write this script, someone else did. However, they didn't write it in a way that it's executable by Node.js or the driver. So why do you have some constraint that you execute the script using Node.js? One of the following must happen: #1 You write a module that parses the script and makes it executable (I don't recommend this). #2 You get a clean script that would make it easier to execute in Node.js (I doubt you'll get this). #3 You use a tool designed to execute the type of script you were given (this make the most sense).
@DanMcGhan thank you for the suggestion. Could you also recommend a tool to use for this?
There are two free tools that I know of that you could use with the script as it is: #1 SQLPlus - https://www.oracle.com/database/technologies/sqlplus-cloud.html and #2 SQLcl - https://www.oracle.com/database/technologies/appdev/sqlcl.html SQLPlus is a favorite among DBAs while developers would likely favor SQLcl.
| common-pile/stackexchange_filtered |
Evaluating $\int_0^\pi \frac{\cos(nx)}{2+\cos(x)} dx$
What is a good place to start in order to evaluate $$\int_0^\pi \frac{\cos(nx)}{2+\cos(x)} dx?$$
I just want something to set me on the right tracks.
Thanks.
It is enought to notice that the integral is an even function $$\sf I=\frac12 \int_{-\pi}^\pi \frac{\cos(nx)}{2+\cos x}dx=\frac12\Re\int_{-\pi}^\pi \frac{e^{inx}}{2+\frac{e^{ix}+e^{-ix}}{2}}dx$$
Now substitute $\sf e^{ix}=z\Rightarrow dx=\frac{dz}{iz} ,|z|=1$
$$\sf \Rightarrow I=\frac12\Re \oint_{|z|=1} \frac{z^n}{2+\frac{z^2+1}{2z}}\frac{dz}{iz}=\Re\oint_{|z|=1}\frac{z^n}{z^2+4z+1}\frac{dz}{i}$$
$$\sf z^2+4z+1=(z+2)^2-3\Rightarrow z_{1,2}=\pm \sqrt 3-2$$
But only the pole $\sf z_1=\sqrt3-2$ is found inside our countour thus:
$$\sf I=\Re\left(2\pi \lim_{z\to z_1} (z-z_1)\frac{z^n}{(z-z_1)(z-z_2)}\right)=2\pi \frac{z_1^n}{z_1-z_2}=\frac{(\sqrt 3-2)^n}{\sqrt 3}\pi$$
An alternative approach can be found here.
Thanks. However I don't have access to complex analysis in my curriculum. Do you know if it's possible to do it without it ?
What are your available tools?
I have the basics: Integration by part, substitution, as well as theorems on improper integral. I can write what you did up to I being a real part but no I cant use holomorphic functions or line integral so no residue theorem. I basically can't differentiate in C but as long as my variables are real it should be ok.
@AdrickMamode See here then: https://ysharificalc.wordpress.com/2018/09/09/limit-of-integrals-16/
Thanks a lot, I will look into it.
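For completeness, here is a real-variable sketch (an outline, not from the thread) that recovers the same value without contour integration, writing $I_n=\int_0^\pi\frac{\cos(nx)}{2+\cos x}\,dx$:

```latex
% From cos((n+1)x) + cos((n-1)x) = 2 cos(nx) cos(x) and cos x = (2 + cos x) - 2:
I_{n+1} + I_{n-1}
  = \int_0^\pi \frac{2\cos(nx)\cos x}{2+\cos x}\,dx
  = 2\int_0^\pi \cos(nx)\,dx - 4I_n = -4I_n \qquad (n \ge 1).
% So I_{n+1} + 4I_n + I_{n-1} = 0, whose characteristic roots are -2 \pm \sqrt{3}.
% Since |I_n| \le \int_0^\pi \frac{dx}{2+\cos x} = \frac{\pi}{\sqrt{3}} stays bounded,
% the root -2 - \sqrt{3} (modulus > 1) cannot contribute, leaving
I_n = I_0\,(\sqrt{3}-2)^n = \frac{\pi}{\sqrt{3}}\,(\sqrt{3}-2)^n,
% with I_0 computed by the Weierstrass substitution t = \tan(x/2):
I_0 = \int_0^\infty \frac{2\,dt}{3+t^2} = \frac{\pi}{\sqrt{3}}.
```

This agrees with the residue computation in the accepted answer.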
Hint: substitute $u=2\pi-x$. Then
$$\int_0^\pi \cos(nx)/(2+\cos(x))\,dx= -\int_{2\pi}^\pi \cos(nu)/(2+\cos(u))\,du= \int_\pi^{2\pi} \cos(nx)/(2+\cos(x))\,dx$$
Now,
$$\int_0^{2\pi} \cos(nx)/(2+\cos(x))dx$$ is a standard complex analysis computation: You figure out the "right" function, such that, under the parametrisation $z=\cos(x) + i \sin(x)$ you get
$$\int_{|z|=1} F(z) dz =\int_0^{2\pi} \cos(nx)/(2+\cos(x))dx$$
Hi, I don't have access to complex analysis in my curriculum. Do you know if it's possible to do it without it ?
| common-pile/stackexchange_filtered |
I need to calculate a sum over multiple fields in Google Datastore (NoSQL database)
We are building a recommender that compares user interests with the characteristics of places. For example: if the user likes 'outdoor' and the place stores a high value for 'outdoor', then the place is recommended.
My Google Datastore document structure is
{ "_id": 1, "name": "Netherlands", "outdoor": 4, "painting": 5, "history": 6, "drink": 7, "love": 8 }
What I want is to calculate a sum over multiple fields.
With MongoDB, I can implement it like this.
db.characteristics.aggregate(
[
{ $project: { name: 1, score: { $add: [ "$outdoor", "$love" ] } }}
]
)
And the output will be
{ "_id" : 1, "name" : "Netherlands", "score" : 12 }
I can do this easily with a MongoDB query, but I have no idea how to do it with the Google Datastore API.
So if anyone has experience with my problem, please let me know.
That isn't a natively supported feature (virtual fields) of the query engine. You'd just have to write code that manually aggregates the fields after reading the entity.
If you are using App Engine Standard and the Python NDB client library, you can use ComputedProperties in your model, but keep in mind this is essentially just a client-side helper function.
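A minimal client-side sketch of that manual aggregation — the dicts below stand in for entities already fetched with a Datastore client, and the `score` helper is made up for illustration:

```python
# Entities as already fetched from Datastore (stand-in dicts for this sketch).
entities = [
    {"_id": 1, "name": "Netherlands", "outdoor": 4, "painting": 5,
     "history": 6, "drink": 7, "love": 8},
]

def score(entity, fields=("outdoor", "love")):
    """Sum the requested characteristic fields, treating missing ones as 0."""
    return sum(entity.get(f, 0) for f in fields)

results = [{"_id": e["_id"], "name": e["name"], "score": score(e)}
           for e in entities]
print(results)  # [{'_id': 1, 'name': 'Netherlands', 'score': 12}]
```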
| common-pile/stackexchange_filtered |