Xcode UITest sometimes does not find property of XCUIElement
In my UI tests, the frame property of some XCUIElements is found, but not of others.
The accessibility identifiers used below are set in storyboard, and app is initialised in setUp() as XCUIApplication().
Here is the storyboard layout:
The two UI elements used in the test are Text Field and Add Button.
Here is the relevant code:
func test() {
    // given
    let mainViewNavigationBar = app.navigationBars["NavBar"]
    let navBarHeight = mainViewNavigationBar.frame.size.height
    print("navBarHeight: \(navBarHeight)") // is printed out correctly
    let addShoppingItemTextField = app.textFields["TextField"]
    let textFieldHeight = addShoppingItemTextField.frame.size.height // error breakpoint here
    print("textFieldHeight: \(textFieldHeight)")
}
The test stops at an error breakpoint at the second last line with the following message:
No matches found for Find: Descendants matching type TextField from input {(
Application, 0x60000019f070, pid: 13114, label: 'xxx'
)}
I do not understand why the frame property, which should be defined for all XCUIElements, is found in the first case but not in the second.
EDIT
Oletha pointed out below that my constant addShoppingItemTextField is an XCUIElementQuery that should be resolved when I try to read the frame property of the textField.
Indeed, when the program stops at the test error breakpoint and I print its description, I get
Printing description of addShoppingItemTextField:
Query chain:
→Find: Target Application 0x6080000a6ea0
↪︎Find: Descendants matching type TextField
↪︎Find: Elements matching predicate '"TextField" IN identifiers'
But the find fails, although Accessibility is enabled, and the Accessibility Identifier is set to TextField:
I also inserted in the app
print(textField.accessibilityIdentifier!)
in viewDidLoad(), and it printed out TextField correctly.
As a workaround, I set the test to recording and tapped the textField. This created code for accessing the textField. I then replaced let addShoppingItemTextField = app.textFields["TextField"] with the following (the right side was generated by the recording):
let addShoppingItemTextField = app.otherElements.containing(.navigationBar, identifier:"WatchNotOK")
.children(matching: .other).element.children(matching: .other).element
.children(matching: .other).element
And now the code works without errors.
So it seems to me that the query for the accessibility identifier of a textField does not work correctly.
EDIT 2
I give up: without changing anything in the storyboard, the test now stops with the same test error (No matches found for Find: Elements matching predicate '"WatchNotOK" IN identifiers') at the line let navBarHeight = mainViewNavigationBar.frame.size.height, which had worked all the time before.
This indicates to me that Xcode UI tests are broken.
As I can see, you wrote TextField in your test (without a space). Did you try writing Text Field with a space? You wrote it with a space in your question, and it is also with a space in the image you provided.
@lagoman "TextField" is my accessibility id, which is assigned to my UITextField. The TextField mentioned in the log refers to the element type UITextField, which is for some reason displayed in the storyboard as "Text Field". I admit this is confusing, but the accessibility id is completely independent of the UITextField type. Actually, in my real code I used a different, much longer accessibility id; here, I simplified it for better readability.
When you write a subscript like app.textFields["Some text"], you are querying views that have the Text Field trait and the accessibility label Some text. The accessibility label of a UITextField view is by default set to the string displayed in that view, so you could try querying that way. Or, if that doesn't work and you know the position order of that view, you can use app.textFields.element(boundBy: 1) (this finds the 2nd text field).
By Accessibility label you probably mean the accessibility identifier, and the query that I used indeed searched for this identifier (see my edit above). By default, my textField is empty, so I cannot use your workaround, but thanks anyway! I found another workaround, see above.
I contacted Apple, and they found my bug:
The view of my main view controller had its accessibility property set to true. This was wrong; it must be set to false:
The explanation is found in the docs to isAccessibilityElement:
The default value for this property is false unless the receiver is a standard UIKit control, in which case the value is true.
Assistive applications can get information only about objects that are represented by accessibility elements. Therefore, if you implement a custom control or view that should be accessible to users with disabilities, set this property to true. The only exception to this practice is a view that merely serves as a container for other items that should be accessible. Such a view should implement the UIAccessibilityContainer protocol and set this property to false.
As soon as I set accessibility of the main view to false, the UI test succeeded.
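For reference, a minimal sketch of that fix in the main view controller. It assumes the text field outlet is named textField (as in the viewDidLoad snippet mentioned above); only the isAccessibilityElement line is the actual fix.

```swift
override func viewDidLoad() {
    super.viewDidLoad()
    // The main view is only a container for accessible subviews, so it
    // must not be an accessibility element itself; otherwise the UI-test
    // query engine cannot see its children.
    view.isAccessibilityElement = false
    textField.accessibilityIdentifier = "TextField"
}
```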
Thanks for finding the fix, but I still don't see how the docs apply here. Does it mean that by disabling isAccessibilityElement for UI testing purposes, it will no longer be accessible when running the app in a non-testing environment?
Up to now I have used accessibility only for UI testing. But my understanding is that if accessibility is disabled for a custom view, it will no longer be accessible during normal (non-UI-testing) operation. However, views embedded in this custom "container" view can still be accessible, so this should not be a limitation.
In my case, it may be a limitation. I have four UISwitches, each of which has an associated UILabel. I want to test the state of the switches, but I have to turn off their accessibility. Now, if I want them to be accessible at runtime, I can turn on accessibility for the view that contains each switch and its label, but then I have to manually update the accessibility label when the switch state changes. It would be better to keep the accessibility on for the user, so I may have to drop tests for the switches.
@ReinhardMänner: I'm not using a storyboard. I designed the UI using a third-party framework (similar to SwiftUI), and programmatically I am not setting any accessibility enable/disable.
In addition with above answers... I would like to add one point
This may happen because the XCUIElement you are accessing is not available on screen.
Suppose you are executing a test case for the login screen and the simulator launches with the dashboard, not with the login screen. This happened in my case. I logged out and then executed the test case, and the error disappeared.
The problem is not that the frame property is not found on the element, it's that the element itself could not be found.
Every XCUIElement is derived from an XCUIElementQuery. The first attempt to resolve the query is not, as you might expect, when you assign the value of addShoppingItemTextField, but the first time you access a property (other than exists) on addShoppingItemTextField.
Therefore, when you try to access the frame property on the XCUIElement object, the query for finding that element is resolved, but the element is not found - so you get the error saying 'No matches found...' on the line where you access frame. This can be a bit misleading, but the problem you're encountering is that the element could not be found. Try adjusting your query.
How to start VS 2010 from Run dialog
Typing "devenv.exe" in my system's Run dialog starts VS 2008. I also have VS 2010 installed. How can I start VS 2010 from the Run dialog? I'm running Windows XP.
Neither should start on XP; it should complain that it cannot find the program. It looks like the PATH environment variable in your system environment got tinkered with; the VS installer doesn't mess with it. Fix it via Control Panel + System + Advanced + Environment Variables (iirc), then log out and log in to make it effective.
Consider a desktop or quick launch shortcut.
Change the order of the VS 2008 and 2010 directories in your PATH environment variable (if 2008 was installed before 2010, its directory is probably earlier in the PATH, so devenv.exe from 2008 is executed). In Win XP, that would be My Computer -> Properties -> Advanced -> Environment Variables.
nvmlDeviceGetHandleByIndex does not return NVML_SUCCESS
OS: Windows
CUDA version: 11.5
GPU Driver version: 531.79
I have two graphics cards on my PC: GPU 0 is Intel UHD Graphics 770, and GPU 1 is GeForce RTX 3060.
I’m developing an application that uses NVML (NVIDIA Management Library) to interact with NVIDIA GPUs. To ensure NVML works correctly, I tested it using a sample code from the NVIDIA GPU Deployment Kit:
#include <stdio.h>
#include <nvml.h>

const char* convertToComputeModeString(nvmlComputeMode_t mode) {
    switch (mode) {
        case NVML_COMPUTEMODE_DEFAULT:
            return "Default";
        case NVML_COMPUTEMODE_EXCLUSIVE_THREAD:
            return "Exclusive_Thread";
        case NVML_COMPUTEMODE_PROHIBITED:
            return "Prohibited";
        case NVML_COMPUTEMODE_EXCLUSIVE_PROCESS:
            return "Exclusive Process";
        default:
            return "Unknown";
    }
}

int main() {
    nvmlReturn_t result;
    unsigned int device_count, i;

    // Initialize NVML library
    result = nvmlInit();
    if (NVML_SUCCESS != result) {
        printf("Failed to initialize NVML: %s\n", nvmlErrorString(result));
        return 1;
    }

    // Query the number of NVIDIA devices
    result = nvmlDeviceGetCount(&device_count);
    if (NVML_SUCCESS != result) {
        printf("Failed to query device count: %s\n", nvmlErrorString(result));
        nvmlShutdown();
        return 1;
    }
    printf("Found %d device%s\n\n", device_count, device_count != 1 ? "s" : "");

    // List and query details of each device
    printf("Listing devices:\n");
    for (i = 0; i < device_count; i++) {
        nvmlDevice_t device;
        char name[NVML_DEVICE_NAME_BUFFER_SIZE];
        nvmlPciInfo_t pci;
        nvmlComputeMode_t compute_mode;

        // Get handle for the device
        result = nvmlDeviceGetHandleByIndex(i, &device);
        if (NVML_SUCCESS != result) {
            printf("Failed to get handle for device %i: %s\n", i, nvmlErrorString(result));
            nvmlShutdown();
            return 1;
        }
    }

    // Shutdown NVML library
    nvmlShutdown();
    return 0;
}
Upon running this program, I receive the following output:
Found 1 device
Listing devices:
Failed to get handle for device 0: Not Supported
It appears that all functionalities work except for nvmlDeviceGetHandleByIndex(). Any suggestions on resolving this issue would be greatly appreciated.
I've looked online for a similar issue, but I couldn't find anything that was relevant or helpful. I've also gone through the documentation, but it doesn't explain what "Not Supported" means in this particular case.
parallelized data structure
I'm searching for a data structure that supports O(1) concurrent insertions (the thread count is known in advance) and iteration over its elements (insertion order doesn't need to be preserved).
While one thread is iterating over the elements there won't be any insertions.
So basically I'm looking for a concurrent dynamic array...
Currently I'm using a std::vector and a mutex, but I have the feeling there might be something faster.
Thanks in advance!
Could you use a vector for each thread? Have a vector of vector pointers and then just iterate over every element for every vector pointer.
Oof, I didn't think of that, but you are right.
Is this just a set of elements, or do you need it to be an array?
Where can I ask about the materials that make up an indoor piece of furniture/device?
My question would be "What are the glowing hands of a clock made of?", where I ask about clocks with self-luminance (even without the presence of a battery).
Is there an appropriate site on Stack Exchange for such a question?
It should be something phosphorus-based (it "ignites" when in contact with oxygen, and the glow is a byproduct of the oxidation), so I think Chemistry could be a fit, since "Questions relating to observed chemical phenomena" are on topic there.
@Braiam after a bit of research it turns out to be tritium, which is a hydrogen isotope whose beta radiation excites a fluorescent coating to create light. Perhaps Chemistry is suitable for this...
I'm not familiar with either of these sites but:
Chemistry SE does have similar questions like What material is used in coating these aluminum containers for the food industry?
Arts and Crafts SE is appropriate if you're looking to replicate this effect in your own project
sound record tickling / ticks / noises / spikes
I successfully record sound on an Android 2.3.4 device (full settings are: AudioSource.MIC + 44100Hz + AudioFormat.CHANNEL_IN_MONO + AudioFormat.ENCODING_PCM_16BIT).
I also properly write the PCM data to a WAV file.
The problem is that there are those noises which make the recording terrible.
The attached file shows exactly what I mean. Notice the obvious spikes.
I have tested the same recording (the music in the attached file) with some sound recording apps on the market, and they record in a perfect way, without noises at all - how is it done? Is it a setting I should set (I use AudioRecord)? Should I manually filter those noises with some algorithm? What should I be looking for?
If the attached file is not downloading for some reason or link is simply broken just let me know with a comment.
Thanks!
Edit #1:
I'm with Galaxy S2.
This could be a hardware problem. I've noticed on some phones and small cameras that the microphone is susceptible to bumps and knocks when the device is manipulated in space. So, moving your S2 around while recording might be causing the spikes. Have you been able to rule this out?
@JamieTaylor Hi, yes - it's definitely not the case as I record sound while the device is on the table - not moving whatsoever.
And just to emphasize, again - in the exact same conditions I use another recording software and it gives perfect sound. Weird.
I just thought I'd ask, to make sure. Ya know? :-)
I'd do the same, that's ok :)
I have a problem with the file size when recording at a 44100 sample rate, so I tried different sample rates such as 8000, 11025, 22050, 32000, and 44056, but to no avail. Some devices accept the audio configuration but the audio quality is very poor; some devices show errors about an invalid buffer size or invalid audio configuration. I am totally confused about how to record clear audio with a low file size. Please help me.
I think 44100Hz is too much for your phone's capability, and in general for all phones' capabilities: mobile phones are not hi-fi.
Try with 8000Hz.
Then you can change this value until you find an acceptable recording quality.
Hi rosco, I don't think so. I'm with Galaxy S2 that's one. Second, another recording app, as I've said in the original post, records perfectly the same sound I try to record with my own code.
OK, but have you tried? Or are you sure that your app uses the same algorithm as the other recording app? 44100Hz is the number of samples per second (CD quality); you need very good HW and SW to encode and decode it. Just try and verify whether it is better/equal/worse and then decide your next steps :)
I've tried and you're correct. More than that, I've also found out that providing AudioRecord a bigger buffer than AudioRecord.getMinBufferSize() suggests does clear the sound even in 44100Hz. Thanks :)
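The buffer-size trick mentioned above looks roughly like this on Android (a sketch, not a drop-in method; the 4x multiplier is illustrative, not a magic value, and the android.media imports are assumed):

```java
// Sketch (Android): allocate a recording buffer several times larger
// than the reported minimum. Extra headroom helped clear the spikes here.
private AudioRecord createRecorder() {
    int sampleRate = 44100;
    int minBuf = AudioRecord.getMinBufferSize(
            sampleRate,
            AudioFormat.CHANNEL_IN_MONO,
            AudioFormat.ENCODING_PCM_16BIT);

    return new AudioRecord(
            MediaRecorder.AudioSource.MIC,
            sampleRate,
            AudioFormat.CHANNEL_IN_MONO,
            AudioFormat.ENCODING_PCM_16BIT,
            minBuf * 4);  // bigger than the suggested minimum
}
```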
Is there a way to search an array inside multiple data attributes of an html element based on a single value?
I have an html element that uses arrays inside the data attribute
<div data-area='["North America", "United Kingdom"]'>Test</div>
I have multiple data attributes pertaining to different "things"
<div data-area='["North America", "United Kingdom"]' data-job='["Programmer","Scientist"]'>test</div>
How can I go about searching through all the data attribute tags on the entire element based off of one value?
I'm trying to use a selector rather than looping through each data attribute, Is this possible?
Here is an example plus a fiddle of what I've attempted.
// search all data attributes of a div element that contains the given value and apply a hidden class
// this should search every data attribute for "Programmer"
$('[data-*="Programmer"]').addClass('hidden');
http://jsfiddle.net/pWdSP/1/
I can only think of this solution:
$('body *').addClass(function() {
    var data = $(this).data();
    for (var i in data) if (data.hasOwnProperty(i)) {
        if (data[i].indexOf('Programmer') > -1) {
            return;
        }
    }
    return 'hidden';
});
http://jsfiddle.net/pWdSP/4/
It's not optimal since it iterates over all DOM nodes, but I don't see another way to select all the nodes that have a data-* attribute.
If there are more precise criteria for which nodes should be checked, it's highly recommended to change the selector accordingly (instead of body *).
If data are stored only in div tags, you could always add a filter in the selector
@zerkms.. I think he means $('div') instead of $('body *')
@Sushanth --: ah ok. I've added it in the very end anyway )
Missing 1 Required Keyword-Only Argument
When I try to pass a dataframe as a function parameter in Python 3.6, I get the error 'Missing 1 Required Keyword-Only Argument' for the following function, where df is a dataframe and rel_change is an array:
def get_mu(*rel_change, df):
    row_count = len(df.index)
    print("mu count")
    print(row_count)
    mu_sum = 0
    for i in range(0, len(rel_change)):
        mu_sum += rel_change[i]
    mu = mu_sum / row_count
    return mu
Then I access it like
mu = get_mu(g, df)
which gives the error.
I've also tried writing the dataframe access in another function that just calculates row_count, and passing that into mu, but that gives the same error.
What could I be doing wrong?
Try reversing your function arguments and then your function call: def get_mu(df, *rel_change)... and mu = get_mu(df, g)
Arguments following *args are keyword-only arguments and have to be passed by name: mu = get_mu(g, df=df)
You have defined a function with a variable amount of positional arguments, *rel_change, which can only ever be followed by keyword only arguments. In this case, you have to pass df by name like so:
mu = get_mu(g, df=df)
Or redefine get_mu() such that df appears before *rel_change.
You should flip your arguments to be get_mu(df, *rel_change). Don't forget to flip the function call as well: get_mu(df, g). Variable positional arguments (often called star args in reference to the conventional name for the parameter, *args) need to go after the regular positional parameters; any parameters declared after *args become keyword-only and must be passed by name.
For more detail, I strongly recommend the book "Effective Python: 59 Specific Ways to Write Better Python" by Brett Slatkin. Here's an excerpt on that topic following the break:
Item 18: Reduce Visual Noise with Variable Positional Arguments
Accepting optional positional arguments (often called star args in reference to the conventional name for the parameter, *args) can make a function call more clear and remove visual noise.
For example, say you want to log some debug information. With a fixed number of arguments, you would need a function that takes a message and a list of values.
def log(message, values):
    if not values:
        print(message)
    else:
        values_str = ', '.join(str(x) for x in values)
        print('%s: %s' % (message, values_str))

log('My numbers are', [1, 2])
log('My numbers are', [1, 2])
log('Hi there', [])
>>>
My numbers are: 1, 2
Hi there
Having to pass an empty list when you have no values to log is cumbersome and noisy. It’d be better to leave out the second argument entirely. You can do this in Python by prefixing the last positional parameter name with *. The first parameter for the log message is required, whereas any number of subsequent positional arguments are optional. The function body doesn’t need to change, only the callers do.
def log(message, *values): # The only difference
    if not values:
        print(message)
    else:
        values_str = ', '.join(str(x) for x in values)
        print('%s: %s' % (message, values_str))
log('My numbers are', 1, 2)
log('Hi there') # Much better
>>>
My numbers are: 1, 2
Hi there
If you already have a list and want to call a variable argument function like log, you can do this by using the * operator. This instructs Python to pass items from the sequence as positional arguments.
favorites = [7, 33, 99]
log('Favorite colors', *favorites)
>>>
Favorite colors: 7, 33, 99
There are two problems with accepting a variable number of positional arguments.
The first issue is that the variable arguments are always turned into a tuple before they are passed to your function. This means that if the caller of your function uses the * operator on a generator, it will be iterated until it’s exhausted. The resulting tuple will include every value from the generator, which could consume a lot of memory and cause your program to crash.
def my_generator():
    for i in range(10):
        yield i

def my_func(*args):
    print(args)
it = my_generator()
my_func(*it)
>>>
(0, 1, 2, 3, 4, 5, 6, 7, 8, 9)
Functions that accept *args are best for situations where you know the number of inputs in the argument list will be reasonably small. It’s ideal for function calls that pass many literals or variable names together. It’s primarily for the convenience of the programmer and the readability of the code.
The second issue with *args is that you can’t add new positional arguments to your function in the future without migrating every caller. If you try to add a positional argument in the front of the argument list, existing callers will subtly break if they aren’t updated.
def log(sequence, message, *values):
    if not values:
        print('%s: %s' % (sequence, message))
    else:
        values_str = ', '.join(str(x) for x in values)
        print('%s: %s: %s' % (sequence, message, values_str))
log(1, 'Favorites', 7, 33) # New usage is OK
log('Favorite numbers', 7, 33) # Old usage breaks
>>>
1: Favorites: 7, 33
Favorite numbers: 7: 33
The problem here is that the second call to log used 7 as the message parameter because a sequence argument wasn’t given. Bugs like this are hard to track down because the code still runs without raising any exceptions. To avoid this possibility entirely, you should use keyword-only arguments when you want to extend functions that accept *args (see Item 21: “Enforce Clarity with Keyword-Only Arguments”).
Things to Remember
Functions can accept a variable number of positional arguments by using *args in the def statement.
You can use the items from a sequence as the positional arguments for a function with the * operator.
Using the * operator with a generator may cause your program to run out of memory and crash.
Adding new positional parameters to functions that accept *args can introduce hard-to-find bugs.
GridView Unbound Column Error
I'm using ASP.NET MVC with DevExpress. I inserted a gridview in a view (generated automatically with its partial and stuff) and I want to add an unbound column to it.
@{
    var grid = Html.DevExpress().GridView(settings =>
    {
        //configuration code and other columns
        settings.Columns.Add(c =>
        {
            c.FieldName = "ClientFinal";
            c.Caption = "Client Final";
            c.UnboundType = DevExpress.Data.UnboundColumnType.String;
            c.UnboundExpression = "[Prenom] + ' ' + [Nom]";
        });
        //configuration code
    });
}
@grid.Bind(Model).GetHtml()
ClientFinal entity:
public class ClientFinal : XPObject
{
    public ClientFinal(Session session) : base(session) { }
    public override void AfterConstruction() { base.AfterConstruction(); }

    private string nom;
    public string Nom
    {
        get { return nom; }
        set { SetPropertyValue<string>("Nom", ref nom, value); }
    }

    private string prenom;
    public string Prenom
    {
        get { return prenom; }
        set { SetPropertyValue<string>("Prenom", ref prenom, value); }
    }

    //other attributes
}
Result:
I even tried : c.UnboundExpression = "[ClientFinal.Prenom] + ' ' + [ClientFinal.Nom]"; but it didn't work.
Please help.
The code you demonstrated is absolutely correct. Setting the grid column's FieldName property seems unnecessary, but it should not be the cause of the error. Hence, I believe the mistake is in code that you did not show.
The code in question was all shown (if I remove it, there won't be any column with #Err). If there is any other code involved in the operation, please tell me and I will post it. Thank you
I have no idea which code might be the culprit. The error might be in your own code, as well as in the DevExpress libraries. All that I can say for sure is that your code is fine according to the official documentation. If I were you, I would check whether any CLR exception is thrown when opening the problematic page. The exception message might contain information useful to understand its cause.
I tried debugging and adding breakpoints all over but no exception was thrown :/
Finally I solved it; the problem was that the unbound column feature was not well explained in the DevExpress documentation.
First:
c.FieldName = "ClientFinalUnbound";
The FieldName property has to be unique, and it's just a name for the column, not a mapping to an existing field in the entity in question (the one shown in the grid).
Second:
c.UnboundExpression = "[ClientFinal.Prenom] + ' ' + [ClientFinal.Nom]";
In the unbound expression, the fields have to be existing ones (listed as properties in the entity definition, or child elements of properties in the entity definition). So here I'm calling the Prenom (or Nom) property of the ClientFinal property of the RendezVous entity (the one shown in the grid).
Thanks
how to check for correct query and error handle it?
With my API, the endpoint '/api/students' will produce all student names, and when a req.query is input, such as a name, the endpoint becomes '/api/students?name=John' (if the req.query was "John"). I have used a regex to check and control the input so that only characters and spaces are allowed:
const re = /^[a-zA-Z ]*$/;
if (!re.test(name)) {
    // error code
}
This allows me to check that numbers or special characters aren't being input. But if the user accidentally enters the endpoint as '/api/students?firstnames=John', for instance, they will receive all names as per '/api/students', instead of it being handled as an error and returned as a bad request.
If I understand your problem correctly, your issue is with the query param name vs the query param firstnames. In my opinion, you should not let the user decide which query param types (e.g. name, age, etc.) they can input, but rather control that in the UI (this is what you commonly see as "filters" on websites).
It sounds to me like the API is ignoring the unknown query param firstnames, so it responds with all students. You could catch any unwanted query params in the API endpoint implementation, responding with 400 BAD REQUEST once the API receives an unknown query param type such as firstnames.
As to "how to check". I think you are on a pretty good path with your regex for checking the actual values of the query params. Depending on the solution you pick (UI side checking, API side checking, or both) you can do the following to control the query param types:
UI side:
Don't let the user freely type query param types
Take user input for specific query param types such as name
Compose your query string for specific query params with user input respective to the params
API side:
Check query params for validity (they can not be unknown)
Respond with 400 BAD REQUEST if there are any unknown query params in the query instead of ignoring the unknown query params
Yes, but say the user puts a typo in the URL: url/api/students?names=John instead of name. How would we check the query name and compare it in Express, without the server responding with all names by default? Yeah, focusing on the server side, not the UI.
So I'm guessing you are using Express.js for your API implementation. The request query is an object (see https://expressjs.com/en/4x/api.html#req). What you could do is read all object properties of req.query, compare them with an array of known query params, and throw an error if there are any unknowns.
Lua Error: '<eof>' expected
There is an error that I can not get away from. When I run the script, the following error occurs every time:
bios:14: [string "Digger"]:35: '<eof>' expected
write("X:")
x = read()
write("Y:")
y = read()
write("Z:")
z = read()

for zi = 0, z, 1 do
    for xi = 0, x, 1 do
        for yi = 0, y, 1 do
            turtle.dig()
            turtle.forward()
        end
        if xi < x then
            if xi % 2 then
                turtle.turnLeft()
                turtle.dig()
                turtle.forward()
                turtle.turnLeft()
            else
                turtle.turnRight()
                turtle.dig()
                turtle.forward()
                turtle.turnRight()
            end
        end
    end
    if y % 2 == 1 then
        turtle.turnRight()
        for back = 0, y, 1 do
            turtle.forward()
        end
        turtle.turnRight()
    end
end
I have already found out on the internet that this error usually means there is one end too many, but I can find no mistake. Can you tell me what's wrong?
In which environment are you running the script?
Your code is syntactically correct. BTW, if xi % 2 will always be interpreted as if true (in Lua, zero is truthy; only nil and false are falsy).
In Minecraft: Computercraft ^^
How can I differentiate between odd and even in Lua?
if xi % 2 == 1
do you have more code than that? I can't find any error here
@TimoSpielberger Are you still active here?
reading fprintf while the program is running
So I'm writing a program in C that is supposed to run forever (client-server stuff), and I'm writing to a log. I am using fopen and fprintf to write to the file. My problem is that since the program is supposed to run forever, there really is no point where I can put an fclose. And when I Ctrl+C to stop the process and check the log file, it shows nothing. I want to use fprintf because it allows me to have formatted strings; is there a way for me to log my output while still having formatted strings?
What's happening is that the output is being buffered and when the program is killed, the buffer is never flushed to disk. You can either periodically flush the buffer (fflush), or you can try to capture the kill signal. Specifically, Control + C sends the SIGINT signal, so you'd need to register a handler for that.
To minimize potential buffer loss, I would probably use both strategies.
How can I catch a ctrl-c event? (C++)
Edit: As Adam said, you can also just open and close the file when necessary, but depending on how frequently you're writing, you may want to keep the file open in which case my answer would apply.
I am actually writing pretty frequently, which is why I didn't want to go with the 'open and close each time' method, thanks for telling me what is actually going on. I'll try both methods
If you just do periodic flushing, then you can potentially lose data if the program goes down before the next flush. If you try to catch signals, then you can better prevent that, and that's why both is ideal. If you're not particularly concerned with IO performance, you could just fflush after every write. Depending on how aggressively the kernel and physical disk cache, fflush after every write may actually have a very small performance impact.
Each time you want to write a log message, open the file in append mode, print your text to the end of the file, and then close it.
setbuf(), setvbuf(), or fflush() - and trap SIGINT or whatever is appropriate for your system.
I.e.
#include <stdio.h>
#include <unistd.h> /* for sleep() in this example */

int main(void)
{
    FILE *fh;
    char *fn = "main.log";
    int i;

    if ((fh = fopen(fn, "w")) == NULL) {
        fprintf(stderr, "Unable to open file %s\n", fn);
        return 1;
    }
    setbuf(fh, NULL); /* Turns buffering completely off */
    for (i = 1; ; ++i) {
        fprintf(fh, "%d ok ", i);
        if (!(i % 120))
            fprintf(fh, "\n");
        sleep(1);
    }
    fclose(fh);
    return 0;
}
In another console:
$ tail -f main.log
yields:
1 ok 2 ok 3 ok 4 ok 5 ok 6 ok ^C
Or call fflush(fh) when you want to flush the write buffer.
From stdio.h:
int setvbuf(FILE* stream, char* buf, int mode, size_t size);
Controls buffering for stream stream. mode is _IOFBF for full buffering, _IOLBF for line buffering, _IONBF for no buffering. Non-null buf specifies buffer of size size to be used; otherwise, a buffer is allocated. Returns non-zero on error. Call must be before any other operation on stream.
void setbuf(FILE* stream, char* buf);
Controls buffering for stream stream. For null buf, turns off buffering, otherwise equivalent to (void)setvbuf(stream, buf, _IOFBF, BUFSIZ).
int fflush(FILE* stream);
Flushes stream stream and returns zero on success or EOF on error. Effect undefined for input stream. fflush(NULL) flushes all output streams.
I'd go with Corbin's answer, but you asked if there's another way to get formatted output.
There is: you could use sprintf() into a char buffer and then use unbuffered open/write/close I/O.
But, don't do that. Keep using fopen() and co., and catch the SIGINT.
marshal struct to xml with an extra tag
I'm developing a Twilio telephone server in Go and have some structs that define the XML to generate.
For instance:
type Say struct {
XMLName xml.Name `xml:"Say"`
Text string `xml:",chardata"`
}
type Response struct {
XMLName xml.Name `xml:"Response"`
Says []Say `xml:",omitempty"`
}
When the Says array is filled with two Say structs containing 'Something' and 'Something else' this generates:
<Response>
<Say>Something</Say>
<Say>Something else</Say>
</Response>
But after 'Something' is said out loud there is no pause and 'something else' comes right after that.
Twilio provides a Pause tag for that, to make it pause for a second.
So what I want is to have an xml generated like this:
<Response>
<Say>Something</Say>
<Pause></Pause>
<Say>Something else</Say>
<Pause></Pause>
</Response>
But how can this be represented in the go struct? How to squeeze in an extra Pause tag as a sibling of the Say tag?
type Say struct {
XMLName xml.Name `xml:"Say"`
Text string `xml:",chardata"`
???? Pause `xml:Pause,sibling?????`
}
type Response struct {
XMLName xml.Name `xml:"Response"`
Says []Say `xml:",omitempty"`
}
So basically you want to ensure that you have a valid sequence, in which a pause should only be following a say by means of struct tags?
yes thats what I want
Off the top of my head, this is not possible with just struct tags and sounds more like a job for XSD. You could of course write a custom validator using a tokenizer, which fails if the first element encountered is a pause. Whether to use XSD or a custom validator heavily depends on your use case, however.
I found the solution with the use of interfaces:
type Say struct {
XMLName xml.Name `xml:"Say"`
Text string `xml:",chardata"`
}
type Response struct {
XMLName xml.Name `xml:"Response"`
Says []interface{}
}
The lack of an actual type name does not generate a 'Says' tag.
var r Response
r.Says = append(r.Says, Say { Text: "hello"})
How Can I Implement Multiple Routes in HereMaps?
How can I implement multiple routes using the iOS HERE Maps SDK? Can anyone provide an example for multiple routes?
How many routes do you want?
3 routes if available
Using this example I'm able to create one route: https://github.com/heremaps/here-ios-sdk-examples/blob/master/routing-ios-swift/RoutingApp/MapViewController.swift
In the Lite Variant (4.x) of the SDK, set alternativeRoutes in the Route options: https://developer.here.com/documentation/ios-sdk/api_reference/Structs/RouteOptions.html
In the Premium SDK (3.x), set alternative routes in the NMARoutingMode; it's called a bit differently there: resultLimit.
See https://developer.here.com/documentation/ios-premium/api_reference_jazzy/Classes/NMARoutingMode.html#%2Fc:objc(cs)NMARoutingMode(py)resultLimit
Still unclear which product/variant/version you are using, but your example above with one route should be easy to adapt. Just set the alternative routes to 1 or higher (max. 9), and the route result (which is already an array) will contain the routes, if available.
What does match_incoming_request_route option in gRPC transcoding do?
I'm trying to figure out what the match_incoming_request_route option in Envoy does. If I read this doc correctly, it suggests using it to match on a custom route path (the google.api.http annotation). But the custom route works just fine when match_incoming_request_route is set to false. Furthermore, we tried reading the code and found that the flag is only used here.
Found the PR and related issue, which explain the use case. Closing this.
GROUP_CONCAT within CONCAT
The database's hierarchical structure is as follows: each student has a list of fees assigned to them, and each fee has a list of scholarships assigned to it.
Given that structure, the expected output is:
Student Name-Fee->Scholarship1, Scholarship2
Karan-1.Annual Fee->Economic Scholarship,Incapable Scholarship,2.Monthly Fee
But what I am getting is:
Student Name-Fee->Scholarship1, Student Name-Fee->Scholarship2
Karan-1.Annual Fee->Economic Scholarship,1.Annual Fee->Incapable Scholarship,2.Monthly Fee
What is wrong here? I am nesting CONCAT, but I am not getting the expected output:
CONCAT(student.en_ttl,'-',GROUP_CONCAT(DISTINCT fee.id,'.',fee.en_ttl,
COALESCE(CONCAT('->',sch.en_ttl),''))) AS fee
SQL Fiddle
You basically need two levels of GROUP BY, so we will need to use a derived table here. The first subquery will aggregate at the level of fee; the second level will then aggregate those fee details at the level of student.
Also, in newer (and ANSI SQL compliant) versions of MySQL, you need to ensure that any non-aggregated column in the SELECT clause should be in the GROUP BY clause as well.
Query
SELECT
CONCAT(stud_ttl,'-',GROUP_CONCAT(CONCAT(fee_det, COALESCE(CONCAT('->',fee_sch), '')))) AS fee
FROM
(
SELECT student.ttl AS stud_ttl,
CONCAT(fee.id,'.',fee.ttl) AS fee_det,
Group_concat(DISTINCT sch.ttl) AS fee_sch
FROM inv_id
JOIN student
ON student.id = inv_id.std
JOIN inv_lst
ON inv_lst.ftm = inv_id.ftm
JOIN fee
ON fee.id = inv_lst.fee
JOIN sec_fee
ON sec_fee.fee = fee.id
AND sec_fee.cls = student.cls
AND sec_fee.sec = student.sec
LEFT JOIN std_sch
ON std_sch.std = student.id
LEFT JOIN sec_sch
ON sec_sch.sch = std_sch.sch
AND sec_sch.fee = fee.id
LEFT JOIN sch
ON sch.id = sec_sch.sch
GROUP BY student.ttl, fee_det, fee.ttl
) dt
GROUP BY stud_ttl;
Result
| fee |
| -------------------------------------------------------------------- |
| Karan-1.Annual->Economic Scholarship,Incapable Scholarship,2.Monthly |
View on DB Fiddle
Unexpected output in Postman
There is no error; I'm just not getting the output I expect in Postman. For example, it is not showing the product data, and payment_intent is null. Could you please help me fix this?
Here is the output I am getting:
{
"status": "success",
"session": {
"id": "cs_test_a1mZ6W9dAO1n7IgpBkvZFJR3Uoe0TTAv3BV9U76b9JVpogcbOyImkaAvU2",
"object": "checkout.session",
"after_expiration": null,
"allow_promotion_codes": null,
"amount_subtotal": 99700,
"amount_total": 99700,
"automatic_tax": {
"enabled": false,
"status": null
},
"billing_address_collection": null,
"cancel_url": "http://<IP_ADDRESS>:3000/tour/the-mountain-biker",
"client_reference_id": "64482b81fc72db55d5ab256c",
"consent": null,
"consent_collection": null,
"created":<PHONE_NUMBER>,
"currency": "usd",
"currency_conversion": null,
"custom_fields": [],
"custom_text": {
"shipping_address": null,
"submit": null
},
"customer": null,
"customer_creation": "if_required",
"customer_details": {
"address": null,
"email"<EMAIL_ADDRESS> "name": null,
"phone": null,
"tax_exempt": "none",
"tax_ids": null
},
"customer_email"<EMAIL_ADDRESS> "expires_at":<PHONE_NUMBER>,
"invoice": null,
"invoice_creation": {
"enabled": false,
"invoice_data": {
"account_tax_ids": null,
"custom_fields": null,
"description": null,
"footer": null,
"metadata": {},
"rendering_options": null
}
},
"livemode": false,
"locale": null,
"metadata": {},
"mode": "payment",
"payment_intent": null,
"payment_link": null,
"payment_method_collection": "always",
"payment_method_options": {},
"payment_method_types": [
"card"
],
"payment_status": "unpaid",
"phone_number_collection": {
"enabled": false
},
"recovered_from": null,
"setup_intent": null,
"shipping_address_collection": null,
"shipping_cost": null,
"shipping_details": null,
"shipping_options": [],
"status": "open",
"submit_type": null,
"subscription": null,
"success_url": "http://<IP_ADDRESS>:3000/",
"total_details": {
"amount_discount": 0,
"amount_shipping": 0,
"amount_tax": 0
},
"url": "https://checkout.stripe.com/c/pay/cs_test_a1mZ6W9dAO1n7IgpBkvZFJR3Uoe0TTAv3BV9U76b9JVpogcbOyImkaAvU2#fidkdWxOYHwnPyd1blpxYHZxWjA0SzRTa1FWQkNTSE9%2FN3BEXFF3RlJvVXJnVn02bV0yQEh8ZEJrVW01bHFPYHxqUzxtbjFBRkpdc2phMU9oUTA8NVFUU19BQXE0XGRHXHBcMTBGQH1PYUI2NTVmf1xTd0FhSicpJ2N3amhWYHdzYHcnP3F3cGApJ2lkfGpwcVF8dWAnPyd2bGtiaWBabHFgaCcpJ2BrZGdpYFVpZGZgbWppYWB3dic%2FcXdwYHgl"
}
}
Here is my code. By the way, I am using stripe@7.
const stripe = require('stripe')(process.env.STRIPE_SECRET_KEY);
const Tour = require('./../models/tourModel');
const catchAsync = require('./../utils/catchAsync');
const factory = require('./handlerFactory');
const AppError = require('./../utils/appError');
exports.getCheckoutSession = catchAsync(async (req, res, next) => {
//1) Get the currently booked tour
const tour = await Tour.findById(req.params.tourId);
//2) Create checkout session
console.log(tour);
const session = await stripe.checkout.sessions.create({
payment_method_types: ['card'],
success_url: `${req.protocol}://${req.get('host')}/`,
cancel_url: `${req.protocol}://${req.get('host')}/tour/${tour.slug}`,
customer_email: req.user.email,
client_reference_id: req.params.tourId,
line_items: [
{
quantity: 1,
price_data: {
currency: 'usd',
unit_amount: tour.price * 100,
product_data: {
name: `${tour.name} Tour`,
description: tour.summary,
images: [`https://www.natours.dev/img/tours/${tour.imageCover}`],
},
},
},
],
// metadata: {
// name: `${tour.name} Tour`,
// description: tour.summary,
// image: `https://www.natours.dev/img/tours/${tour.imageCover}`,
// },
mode: 'payment',
});
//3) Create session as response
res.status(200).json({
status: 'success',
session,
});
});
I am expecting proper output in the fields below in Postman, as the data is not reflected in the product_data field and payment_intent is null.
Why are you expecting the Payment Intent to exist when you create the Checkout Session? The Checkout Session creates the Payment Intent when the Customer uses the Payment Page to complete the Payment. This property will have the ID of the Payment Intent after it is paid successfully.
As for the product data, it is mentioned in the API Reference Doc that this isn't included by default. It is expandable though. This means you can pass in line_items in the expand parameter when you create the Checkout Session to get the full line_items data in the response.
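As a sketch of that suggestion, the create call from the code above would gain one extra option (everything else stays exactly as in the original code; whether you need the full line_items depends on your use case):

```javascript
const session = await stripe.checkout.sessions.create({
  // ...same payment_method_types, URLs, customer_email,
  //    client_reference_id and line_items as in the code above...
  mode: 'payment',
  expand: ['line_items'], // ask Stripe to include full line_items in the response
});
```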
Is it possible to select only repeated values from a single column using eloquent?
Table
id|p.id|product
1 |8 |chair
2 |8 |table
3 |2 |chair
4 |4 |guitar
5 |8 |glasses
I would like to select the products for p.id 8 and p.id 2; however, I only want the repeated values (chair). This is the opposite of distinct. So far I have
$product =modelname::whereIn('p.id', array(8,2))->select('product')->get();
This obviously selects the two p.id's; however, I'm unaware of how to use aggregate functions such as count when using Eloquent. Any help is much appreciated. (I use Laravel 4.2, by the way.)
What is the exact result you want to get for above data?
The exact result is chair. I only want the repeated products, not the values that are distinct.
If you rename your p.id column to p_id, then using:
$product= modelname::selectRaw('product, count(product) AS aggregate')->whereIn('p_id', array(8,2))->groupBy('product')->having('aggregate','>',1)->get();
you will get as result:
array:1 [▼
0 => {#121 ▼
+"product": "chair"
+"aggregate": 2
}
]
If you want to select the values for p.id 8 and p.id 2, you can use code like
$product = Modelname::whereIn('p.id', array(8,2))->get();
Now the $product variable contains the values as a collection. You can pass the variable to a view, or if you want the count, use count($product).
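For reference, the selectRaw/groupBy/having chain in the accepted approach corresponds to SQL along these lines (the table name products is an illustrative assumption; Eloquent derives the real table name from the model):

```sql
SELECT product, COUNT(product) AS aggregate
FROM products
WHERE p_id IN (8, 2)
GROUP BY product
HAVING aggregate > 1;
```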
Predatory journals and low impact journals
Another page was discussing predatory journals and I thought it was a good segue for further discussion in this group. The discussion on pseudoscience seems to have come to a standstill (I have been adding comments but it does not become active).
The issue of low impact journals versus predatory journals is also not transparent, nor clear within the available guidelines of this SE. I am very concerned that there is a denigration of and bias against "low impact journals" or low quality journals, which tend to be open-access, in favour of "high impact journals", which tend to be inaccessible to the public and also usually too academic for the public. I do not think the bias towards high impact journals should apply to this SE, though the discussion should not exclude those journals. The problem is that if we enforce this standard of "high impact journals", then there will be an extremely limited pool of us in academia able to adequately engage with the discussion and access the breadth of the literature.
This group needs to be transparent and upfront about this bias as to not constantly traumatize newcomers (like I was) for standards that are not obvious and with a tone that implies gullibility and poor education. Poor access to journals should not be the limitation nor the standard for access to this SE.
Having said that, I think predatory journals are definitely off limits and need to be actively discouraged. There need to be far more explicit expectations detailed in the FAQs or help page, and the group needs to have information about predatory journals (in the code of conduct, expected behaviour document, etc.).
Beall's Predatory Journal list
"Poor access to journals should not be the limitation nor the standard for access to this SE." I don't think it is? We even allow news articles, blog posts, etc. as references to back up questions. Or, is your question pertaining to answers? In which case, I don't think the journal should matter, but in case a member does not find the answer correct and has evidence from another journal counteracting it, why can't part of the argument simply be that one journal is more trustworthy than the other?
I don't think there is such a bias. As anywhere on SE, the community tries to enforce correct answers. Answers that are deemed incorrect will receive down votes and comments. That is explained in the tour. Whether that incorrect answer comes from a high impact journal or low impact, really shouldn't matter.
BryanKrause was harsh and made a comment about low impact journals which I think he has deleted now. The only comment that was similar was this "That review is in a journal known for publishing pseudoscience including support for easily discredited fields like homeopathy, mediums contacting dead people, and other junk" - https://psychology.stackexchange.com/questions/23866/evidence-for-this-tapping-therapy-eft-emotional-freedom-technique?noredirect=1&lq=1
If the journal has been catalogued in Medline/Pubmed and is not a predatory journal, I am not sure how readers can tell whether it is a low impact journal or a journal publishing "pseudoscience".
I guess my question is: what is wrong with that comment (other than perhaps 'junk' at the end)? That article might still be okay; you would have to read through it in full to be certain. That comment simply outlines a very good reason to be suspect, and in case contrasted with a well-respected journal and short on time, to simply prefer to more cited journal. Readers don't need to be able to tell. Answers like that are welcome, but might get down voted. This is simply SE's curation system at work.
But, perhaps we should be more explicit about topics which are considered pseudoscience here and for which some pushback may be expected. That goes back to your previous question I still owe you an edit /answer to!
Yes all these topics are intertwined and it would be good to get some consensus from the group which may hopefully clarify things and make things easier for newcomers
The tone and the manner of delivery of that comment could have been softened and be less hostile to the writer. I honestly did not know that, but that statement implied that I should have. If it was worded to, did I know, or a question then it would have been much better
Agreed and agreed. :) I'll try to pick up on your discussion thread later today.
Thanks @StevenJeuris
Have you seen this?
No @ArnonWeinberg. That is a helpful page and has quite a lot of useful detail for the newcomer. Can we add it to the help page for this group? Some other groups have common issues like this added to their Code of Conduct or Expected Behaviour document
@Poidah To clarify, I did not say anything about "low impact" journals being a problem; my critique was about "a journal known for publishing pseudoscience including support for easily discredited fields like homeopathy, mediums contacting dead people, and other junk" - the problem is that articles there appear like they are peer-reviewed journal articles, but the editors clearly have a bias outside of mainstream science. My intent in commenting is to make sure readers, and the author, are aware of the source they are reading.
If it sounded harsh I certainly didn't mean to direct that at you, but rather at the journal. I find those sorts of journals to be far worse than the typical crap science that floats around on social media etc, because not only is it misleading but it also is masquerading as real science. They also make a lot of effort for real scientists to have to push back against.
Totally agree. Those journals do a great job in having a veneer of professionalism and scientific rigor which is highly problematic. Sorry, I think I interpreted your comments as low impact journals, so sorry about my misattribution. But I think clarifying the issue about low impact journals is still important for this group
Yeah, I think the main issue with low impact journals is just that they don't typically have the same cachet that higher impact journals do... the bigger the journal, the more outcry when something bad makes it through their filter. That isn't a reason to exclude them, though, just maybe a reason to prefer more respected journals when possible when preparing the very brief answers that fit the SE format.
I tend to preference open access journals in this forum due to the public nature of this SE and my pro-educational leanings. However, that means a leaning towards low impact journals unfortunately. Having a standard and arguing about the publication standard is an important part of this group compared to other SEs.
@Poidah As a sidenote, quite a lot is open access nowadays through scihub, ResearchGate, etc.
ResearchGate unfortunately requires an email address from an academic institution, which can be limiting to the public, but better than nothing. Are we allowed to put links to scihub here?
jQuery $.ajax not fired on POST
I'm using jQuery ajax to post data, but the $.ajax request is not fired. The button's click handler is fired.
There's a javascript warning in firebug Empty string passed to getElementById().
Here's my code
$('body').on('click','button.btnsubmitads',function(event) {
event.preventDefault();
$.ajax({
type:"post",
url:base_url+"advertising/newads",
dataType:"json",
data:$("#newadsform").serialize(),
contentType: "application/json",
success: function(result) {
if(result.status){
console.log('oke');
} else {
console.log('ga oke');
}
},
error: function() {}
});
return false;
});
here's my form
<form class="newadsform form-horizontal" action="" method="post" id="newadsform">
<div class="control-group">
<label class="control-label">Listing</label>
<div class="controls">
<select name="idlisting" class="span10" id="idlisting">
<?php foreach($listing->result() as $data){
echo "<option value=\"".$data->idlisting."\">".$data->title."</option>";
}?>
</select>
</div>
</div>
<div class="control-group">
<label class="control-label">Title</label>
<div class="controls">
<input type="text" class="span10" name="title" id="title">
</div>
</div>
<div class="control-group">
<input type="hidden" id="inputimageadvs" name="imagename" value="" />
<input type="hidden" id="inputimagelarge" name="imagenamelarge" value="" />
<input type="hidden" id="inputimagethumb" name="imagenamethumb" value="" />
<label class="control-label">Image:</label>
<div class="controls">
<div class="example span10" style="margin:0;">
<div class="alert alert-success">
Please use PNG or JPEG extension only, and File size not more than 1 Mb (1024 Kb)
</div>
<ul id="uploadadv" class="styled"></ul>
</div>
</div>
<div class="clear"></div>
</div>
<div class="form-actions span10" style="margin:0;">
<button class="btnsubmitads btn btn-primary" type="submit" id="btnsubmitads">
<i class="icon-ok"></i> Submit
</button>
<button class="btn" type="reset" id="reset">Reset</button>
</div>
</form>
Perhaps the form newadsform has elements with a blank id. Can you share the form contents?
@ArunPJohny okay, I'll add the form to my question
Also, which jQuery and Firefox versions?
@ArunPJohny I'm using jQuery 1.9.1 and Firefox 24; posting data also does not work on the latest version of Chrome
What does $('button.btnsubmitads').length and $("#newadsform").length return in the console? Should be greater than 0. Also, you should add some console output to your error func so it doesn't fail silently. Unrelated, using return false; is a bit of a misunderstanding of jQuery event handling, especially combined with event.preventDefault(). Good info on that here: http://fuelyourcoding.com/jquery-events-stop-misusing-return-false/
The problem could be that you are using content type application/json, where the request body should be JSON content, but you are sending request parameters instead
@JAAulde both lengths are 1
@ArunPJohny I removed contentType: "application/json", but it's still not working. Could the encoding of my file be the problem? I'm using UTF-8.
if you look at http://jsfiddle.net/arunpjohny/Fz9gz/2/ it is working
@user221915 ok, elements are found. Did you add any output to the error function yet?
@JAAulde I added xhr.status; the result is 0, and xhr.responseText is undefined
@ArunPJohny yeah.. but why does it not work for me? Or must I change the jQuery version?
I used same version as yours 1.9.1
Also, it looks like you are trying to prevent the form from submitting and use manual ajax instead. This is fine. But if that's what you want, you should probably hook into the form submit action instead of the button click action. Specifically: $('body').on('submit', '#newadsform', function (event) { ... }) instead of $('body').on('click','button.btnsubmitads',function(event) { ... }.
Looks like you have an ad blocker installed. I had the same problem the other day. Try changing the URL
url: base_url + "advertising/newads"
to one that does not contain the words "ads" or "advertising".
WiFi not working on MacBook Pro running 12.10
I've seen many other posts like this and found one other where the problem was resolved, but the terminal commands given were for a different pcid than mine. I ran several commands including:
sudo apt-get install linux-headers-generic
which got the response:
linux-headers-generic is already the newest version.
I also ran:
sudo apt-get install --reinstall bmcwl-kernel-source
Which got the error message:
E: unable to locate package bmcwl-kernel-source
I am new to Linux, but this seems to be a pretty common issue with 12.10. My pcid is 14e4:4331
If anyone could help, I would really appreciate it.
You typed:
sudo apt-get install --reinstall bmcwl-kernel-source
It is not bmcwl, it is bcmwl:
sudo apt-get install --reinstall bcmwl-kernel-source
Solaris Nags About "process.max-stack-size" while running "Java Processes"
I have a server running Solaris 10. It reports using 10% of total memory in prstat. I've run the process using Java 1.7.0_80 in 64-bit mode.
The problem is that the system always nags about process.max-stack-size no matter how much I change the stack size of the related project in /etc/project and relaunch the app (screenshot attached).
Should I use any specific tuning to run Java applications on Solaris 10?
Why does the program show no error and run perfectly, while Solaris nags about the stack limitation?
This is not an error message; it is a notice message. The messages may be quieted in syslog by using rctladm -d syslog process.max-stack-size.
I think it's nagging to give the SA an idea that something might be going on with the box.
I also have a suspicion that you may be working in a zone. If so, it may have some resource restrictions imposed on it.
I recommend you talk to your SA.
the server is in global zone
Potentiometer signal as a speed reference for a drive
Over what distance is it reliable/advisable to use a potentiometer signal as a speed reference for a drive ?
I've considered converting it to a 4-20mA, is this necessary ?
There is no "potentiometer signal". Potentiometer is a resistor. You can use it in a voltage divider to get a specific voltage relative to the resistance. The distance you can pull the wires is dependent on how high the voltage, how thick the wires, how noise-protected they are and how much noise you can tolerate.
What are your specs for distance, cable type (shielded twisted pair, STP, preferred), pot resistance, and input instrument impedance? There are three issues: DC drop from the load; common-mode (CM) noise, induced from solenoids, DC motors, etc.; and differential-mode (DM) noise, such as radar or shortwave radio noise and lightning noise. Then there is the question of how much delay is acceptable when the pot changes. If we know anything about these influences in your environment, then we can define specs, and the solution is trivial. Until then it is guesswork, even for experts.
Sounds like you are already using it. If you are using it and you are happy, why change it? But if you are having problems that you think might be fixed by going to current loop, you should say what those problems are in your question. Edit the original question, I mean.
You can use a potentiometer for a speed setpoint. The details depend on the drive setpoint input options. If the drive takes, say, 0-10V and has a 10V output you can connect a pot directly (and the drive manual will show you how exactly to do it, and suggest the range of possible pot element resistances).
If it accepts only 4-20mA you would be best to use an appropriate transmitter, and refer to the transmitter manual for the appropriate pot resistances.
A pot is best for rough settings and fast easy changes, maybe for something like a conveyor. If you need precise settings that are rarely changed, digital input (like from the drive keypad) might be better.
As to the distance, voltage signals are best over relatively short distances, preferably within a cabinet or machine. Say up to a few meters. In ideal conditions you can extend it much, much further, with a shielded cable, but in a typical high EMI environment it's best not to. Current signals can be sent much further- hundreds of meters- with little trouble, however you may still have to take precautions if lightning and similar issues can occur nearby.
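To put a rough number on the loading effect (a back-of-envelope sketch, not from the thread): with supply $V_s$ across a pot of total resistance $R$ and the wiper at fraction $\alpha$, the unloaded output is $\alpha V_s$ and the Thevenin source impedance seen at the wiper is

```latex
R_{th} = \alpha(1-\alpha)\,R \;\le\; \frac{R}{4}
```

A drive input of impedance $R_{in}$ then scales the setpoint by $R_{in}/(R_{in}+R_{th})$; for a $1\,\mathrm{k\Omega}$ pot into a $100\,\mathrm{k\Omega}$ input the worst-case error is about $0.25\,\%$. Cable resistance and induced noise add on top of this, which is part of why buffering or a 4-20 mA loop helps over distance.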
Do you think buffering the voltage with an op-amp would extend the useable range much, by lowering the source impedance?
Generally the reference signal should run over a very short distance due to the high di/dt in the application. While there is no hard rule, the further you extend the signal, the greater the chance of picking up interference.
A 4-20 mA control loop is a much better choice due to its inherent noise immunity.
Wow. Why the downvote? I wish downvoters would explain themselves. I cancelled it out with an upvote.
@mkeith Thanks for the upvote and I agree with you on down voting without comment. But in fairness, I edited my answer because I originally thought the OP was proposing to put the pot on the shaft as a speed indicator. After rereading because I thought that was too far out there, I realized what the OP meant.
Oh, I didn't check the edit history. Makes sense!
React axios API GET request returning too many objects
I am currently building a web application with React and TypeScript. I am trying to fetch from a custom-built API; however, it is returning too many copies of the data. How do I access the whole API response so that I can filter it after storing it in state?
import React, { useEffect, useState } from "react";
import axios from "axios";
interface Invoice {
id: number;
customer_id: number;
customer_name: string;
date: string;
total_invoice: number;
total_margin: number;
region: string;
invoice_lines: [
{
product_id: number;
product_name: string;
unit_price: number;
quantity: number;
total_line: number;
total_margin: number;
}
];
}
function App() {
const [data, setData] = useState<Invoice[]>([]);
useEffect(() => {
axios.get<Invoice[]>("http://localhost:3001/api/invoices").then(res => setData(res.data));
console.log(data);
}, []);
return (
<React.Fragment>
<header>
<h1>Hello</h1>
</header>
<main>
{data.map(d => (
<p key={d.id}>{d.customer_name}</p>
))}
</main>
</React.Fragment>
);
}
export default App;
By too many objects, do you mean there are multiple responses?
I have added the response image
Refer to my answer below for the reason and solution.
Your useEffect hook is missing the dependency array, so it triggers upon every render. Add a dependency array. If you want the API to be fetched once when the component mounts, use an empty dependency array; otherwise add a dependency on some value the effect uses (but not data, as that would cause render looping as well).
useEffect(() => {
axios.get<Invoice[]>("http://localhost:3001/api/invoices")
.then(res => setData(res.data));
}, []);
If you want to log state updates, then separate this into its own effect with a specific dependency.
useEffect(() => {
console.log(data);
}, [data]);
This is happening because you are setting state inside a useEffect hook which is configured to react to every state change. So when your component is mounted, your useEffect is getting fired, which in turn is setting the state, causing your useEffect to fire again, and so on in an infinite loop. If you want your useEffect to execute only once, pass an empty array as a second parameter to your useEffect hook, like so:
useEffect(() => {
  // your function body here
}, []);
This will cause your useEffect to be executed only when your component is mounted, and never again. Refer to https://reactjs.org/docs/hooks-effect.html for more information on how the useEffect hook works.
Equicontinuity of set of polynomials
Let $S = \{ p \in C([0,1]) : p \text{ a polynomial of degree} \leq d,\ \max_{x \in [0,1]} |p(x)| \leq 1 \}$.
I want to show that $S$ is equicontinuous. Using the definition of equicontinuity, we need: for all $\epsilon > 0$ there exists $\delta > 0$ such that for all $x, y \in [0,1]$ and all $p \in S$, $d(x,y) < \delta$ implies $|p(x) - p(y)| < \epsilon$.
However I cannot bound the difference:
$|\sum_{i=0}^{d} c_i x^i - \sum_{i=0}^{d} c_i y^i| = |\sum_{i=0}^{d} c_i (x^i - y^i)|$. We have $|x-y| < \delta$, so $|x^i - y^i|$ is also bounded, but I think I also need a bound for $\max_{i} |c_i|$, which I cannot find. I think that since $p$ is bounded, with $\max_{x \in [0,1]} |p(x)| \leq 1$, we should be able to get an upper bound for $\max_i |c_i|$.
I would like to prove this using only the definition of equicontinuity.
Yes, $S$ is equicontinuous. Let $q(x)=p((x+1)/2)$ and consider the Markov brothers' inequality for $k=1$:
$$\sup_{x\in [0, 1]}|p'(x)|=2\sup_{x\in [-1, 1]}|q'(x)| \le 2d^2 \sup_{x\in [-1, 1]}|q(x)| \leq 2d^2.$$
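A sketch of how this bound finishes the equicontinuity argument (a standard mean value theorem step, filled in here for completeness):

```latex
|p(x) - p(y)| \le \Big(\sup_{t\in [0,1]}|p'(t)|\Big)\,|x-y| \le 2d^2\,|x-y|
\qquad \text{for all } x, y \in [0,1],
```

so for a given $\epsilon > 0$ one may take $\delta = \epsilon/(2d^2)$, uniformly over all $p \in S$.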
P.S. The set $S$ is also closed:
if a sequence $p_n(x)=\sum_{k=0}^da_k(n)x^k$ converges uniformly to $f$ in $[0,1]$ then $\max_{x\in [0,1]} |f(x)| \leq 1$ and $(a_k(n))_n$ is a convergent sequence for $k=0,\dots, d$ (use the fact that $(p_n(k/d))_n$ is convergent for $k=0,\dots,d$ and the Vandermonde determinant).
I hadn't heard of this inequality; I thought something more elementary would be sufficient. Is it also possible to prove that $S$ is closed using this inequality?
@elrond Yes, $S$ is closed, see my P.S.
Resonating glasses
When we collide two glasses, they produce sound as if not from one collision but from multiple, and the frequency changes over time. Is this because they resonate with each other? But how does this resonance happen?
What happens if you hear two frequencies which are similar?
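(For reference, the hint here is about beats: this is standard trigonometry, not something stated in the thread. Superposing two nearby frequencies gives

```latex
\sin(2\pi f_1 t) + \sin(2\pi f_2 t)
  = 2\cos\!\big(\pi (f_1 - f_2)\,t\big)\,\sin\!\big(\pi (f_1 + f_2)\,t\big),
```

so the ear hears a tone near the average frequency whose loudness pulses at the beat frequency $|f_1 - f_2|$.)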
If resonance happened, then why would the frequency change? I think if resonance happened, it should be at one particular frequency. Unless the collision changes the structure, i.e., changes the resonant frequency. Then why are you so sure it is resonance?
"not from one collision but from multiple" Do you get this mostly when you collide them very gently? "frequency changes over time" A glass will vibrate with multiple frequencies ("harmonics"). The higher ones will die down quicker. Could that be what you are hearing?
I believe the question can be rephrased (more clearly) as follows: "when we collide two glass balls (or two steel balls), we hear multiple collisions. The intervals between collisions change from long to short. Why?"
Take a look at this video https://www.youtube.com/watch?v=k1id4a4EU4M (pay attention to time 1:01): when he casually collided the balls, you hear such a phenomenon.
You can reproduce the phenomenon with two cups, two bowls, etc. Just hold them loosely in two hands so they naturally droop under your hands. Now move them close to make gentle collisions.
Why? Because when we collide two very hard objects, the first collision bounces both of them back a little bit so they separate from each other, only to allow the moving hands to push them back in to make the second collision. As they are pushed closer and closer, the intervals between collisions are shorter and shorter.
A vibrating object like a wine glass or a bell is a very complicated system. It is possible for a wine glass, for example, to exhibit multiple resonance modes in which the modes are weakly coupled i.e., energy in one mode is slowly shared with other modes. This means the quality of the perceived sound will shift perceptibly on a timescale of ~seconds.
Note also that the initial strike which sets the glass vibrating is a sharp impulse function which contains many harmonics. This means that many of the fundamental modes of the glass may get excited by the strike.
These two things mean that the decay of the sound emitted by a struck glass will be correspondingly complicated!
QML RotationAnimation is not rotating
I am trying to rotate a rectangle from-to a specified angle, but I'm not sure I understand the docs. My code below runs, and my started and completed slots print the correct angles. But the rectangle has not rotated onscreen. What am I missing?
Rectangle {
width: 100
height: 100
RotationAnimation {
id: rotateCritter
duration: 1000
property real lastAngle: 0
onStarted: {
lastAngle = to;
console.log("Rotating from "+from+" to "+to)
}
onStopped: {
console.log("Done rotating from "+from+" to "+to)
from = lastAngle;
}
}
}
// On click, rotate the critter to the new angle
rotateCritter.to = 45
rotateCritter.start()
Your RotationAnimation is missing a target. Though it is the child of the Rectangle, this relationship does not automatically make the Rectangle the target of the animation; it must be explicit. I have given the Rectangle an id and color, and made this the target of the animation:
Rectangle {
id: critter
width: 100
height: 100
color: "red"
RotationAnimation {
id: rotateCritter
target: critter
duration: 1000
property real lastAngle: 0
onStarted: {
lastAngle = to;
console.log("Rotating from "+from+" to "+to)
}
onStopped: {
console.log("Done rotating from "+from+" to "+to)
from = lastAngle;
}
}
}
Another idea besides using a RotationAnimation object is to just animate on the Rectangle's own rotation property using a Behavior.
Rectangle {
id: rect
width: 100
height: 100
Behavior on rotation {
NumberAnimation {
duration: 1000
}
}
}
rect.rotation = 45 // Animates to 45 degrees
...
rect.rotation = 0 // Animates back to 0 degrees
Storing images in PHP
I have stored several images in my database and now I am trying to retrieve them. The way my script is set up, one user would have a selection of information and images to go with it. The code below shows how an image would display along with the price, car and details. Currently the images are not displaying and I am getting one of those question-mark-in-a-diamond errors. Any ideas?
The images in the database are stored as a BLOB, MIME type image/jpeg
<form method="post" action="booked.php">
<table>
<?php while($row=mysql_fetch_assoc($result)) { ?>
<tr>
Images: <img style="width: 130px; height: 100px" alt="<?php echo $row['car']; ?>" src="image/<?php echo $row['img']; ?>" /></td>
Car: <input type="text" name="car" value="<?php echo $row['car']; ?>" readonly>
Details: <input type="text" name="details" value="<?php echo $row['details']; ?>" readonly>
Price: <input type="text" name="price" value="<?php echo $row['price']; ?>" readonly>
<input type="radio" name="selected_car" value="<?php echo $row['car']; ?>" /><br>
<br /><br /><br />
</tr>
<?php } ?>
</table>
Are you sure, you really have those images in that folder?
yeah because i uploaded them from the folder onto the database
Are you fetching the values for $row correctly?
Try to echo $row['img'] somewhere, then try to get this image via browser.
It seems something is wrong with the path.
Where is the $result query? And I cannot see the closing tag.
Do not store images in the database: it will eat all your resources, PHP for image save/read/cache and MySQL for storage.
look here how to display image from blob: http://stackoverflow.com/questions/1760754/how-to-display-an-image-from-a-mysql-blob
@Nikos my $query is $query = "SELECT car, details, price, img FROM Car "; and form is closed. Sorry didn't select it
try including an opening <td> tag; there is only a closing one.
To display images in html you need to use the img tag. The src of the image data needs to be a url.
You can't just return the data and expect it to display. People typically write a separate script that just returns the image data with the MIME type set using the header() function.
<img src="getimage.php?id=<?php echo $row['id'] ?>">
This supposes that you are actually storing the image data in a binary field in your database table.
Are you storing the file name or the actual file? If it's the actual file you need to write a script to get the data from the DB and present it as output. Something like:
<?php
header('Content-type: image/jpeg'); // tell the browser it is receiving image data
echo $row['img']; // if the BLOB column holds the raw image bytes
If you really want to paste all the image data in the HTML, you need to base64-encode it. See how here:
displaying an image stored in a mysql blob
But I would strongly suggest that you do like gview says: write a separate php script to just get the image. It has to look something like this:
<?php
header('Content-type: image/jpeg');
$id = $_GET["id"];
... get the image from database by $id and echo it ...
?>
... even better - don't store the images as BLOBs. Just store them as files and only store the file names in the database.
Querying ElasticSearch using a JSON file through JAVA API
I have a query in valid JSON format which works well in Kibana or Sense when I use a GET request. I am also able to create this query using XContentBuilder, but I need to send this query to Elasticsearch in its JSON form as-is. Is it possible to store the query in a JSON file and query Elasticsearch using this JSON file?
My query -
{
"min_score":5,
"sort" : [
{
"_geo_distance" : {
"location" : [40.715, -73.988],
"order" : "asc",
"unit" : "km",
"mode" : "min",
"distance_type" : "arc"
}
}
],
"query": {
"bool": {
"must": {
"query_string": {
"query": "hospital",
"analyzer": "english"
}
},
"filter": {
"geo_distance": {
"distance": "50000km",
"location": {
"lat": 40.715,
"lon": -73.988
}
}
}
}
}
}
What I want is to store this query in a JSON file and use this JSON file to send a search request directly without using Query builder.
This is not well supported by the offical ES API: https://discuss.elastic.co/t/search-elasticsearch-with-java-client-using-json-query/74329
Yeah, it was possible in the earlier versions, but the current version of ES does not support it.
You can use a search template, and store this template in the cluster state, see the official documentation about search templates, especially about pre-registered templates.
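For reference, the stored-template flow looks roughly like this (a sketch in Kibana console syntax: endpoint shapes vary across Elasticsearch versions, and the template id `geo_search`, the index name `my-index`, and the parameter name `search_term` are made up for illustration):

```
PUT _scripts/geo_search
{
  "script": {
    "lang": "mustache",
    "source": {
      "query": {
        "query_string": {
          "query": "{{search_term}}",
          "analyzer": "english"
        }
      }
    }
  }
}

GET my-index/_search/template
{
  "id": "geo_search",
  "params": { "search_term": "hospital" }
}
```

The template body itself can live in a JSON file in your project and be registered once at startup, after which searches only need the template id and parameters.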
Yeah, search templates can be used to send this JSON query directly to Elasticsearch.
Entity Framework Attach error: An object with the same key already exists in the ObjectStateManager
I'm a newbie with Entity Framework, and I have looked at the questions with the same title, but I haven't found a satisfying answer yet.
Here is my class:
public class MyUser
{
public string FirstName { get; set; }
public virtual ICollection<ProfileSkillEdu> Skills { get; set; }
}
And in the controller I have:
[HttpPost]
public ActionResult EditProfile(MyUser user, string emailAddress)
{
try
{
if (ModelState.IsValid)
{
_unitOfWork.GetMyUserRepository().Update(user);
_unitOfWork.Save();
return View(user);
}
}
catch (DataException)
{
//Log the error
ModelState.AddModelError("", "Unable to save changes. Try again, and if the problem persists see your system administrator.");
}
return View(user);
}
In my user repository:
public virtual void Update(TEntity entityToUpdate)
{
dbSet.Attach(entityToUpdate);
context.Entry(entityToUpdate).State = EntityState.Modified;
}
I keep getting An object with the same key already exists in the ObjectStateManager. The ObjectStateManager cannot track multiple objects with the same key. at dbSet.Attach(entityToUpdate). I watched the variables and found that if MyUser has only 1 Skill object, it's fine, because when it does Attach the key is unique (the value is 0). But if MyUser has 2 Skill objects, both primary keys have the value 0, hence the error.
Can someone explain why all the Skill objects' primary keys have the value 0? Also, is there a simple solution to this problem? I thought it was supposed to be straightforward, but I have been struggling with this for hours.
Edit:
I wonder if the problem is because of how I use the context.
In the controller definition I have:
public class MyAccountController : Controller
{
IUnitOfWork _unitOfWork;
public ActionResult EditProfile()
{
if (_user == null)
{
MembershipUser currentUser = Membership.GetUser();
if (currentUser == null)
{
return RedirectToAction("Logon", "Account");
}
Guid currentUserId = (Guid)currentUser.ProviderUserKey;
MyUserService svc = new MyUserService();
MyUser user = svc.GetUserLoaded(currentUserId); //this uses include/eager loading to get the Skills too.
if (user == null)
{
return RedirectToAction("Logon", "Account");
}
else
_user = user;
}
return View(_user);
}
}
In my UnitOfWork I have:
public class UnitOfWork:IUnitOfWork
{
private GlobalContext context = new GlobalContext();
private GenericRepository<MyUser> myUserRepository;
private GenericRepository<Skill> skillRepository;
.. and implementation of Save() and Dispose()
}
Explain how you are using your context across your application. Also, is the relationship between MyUser and ProfileSkillEdu well defined?
Most probably PK values of Skill objects are not posted back.
Tsegay, MyUser has an IUnitofWork field, and inside the unit of work there are repositories for each object (so UserRepository, SkillRepository, etc). The repository has the context object in it.
FYI, my context is disposed after every request, so EditProfile and the EditProfile POST use different contexts. Is this why the Id is 0 for the navigation property objects?
Are you certain the context is disposed after every request? Have you set a breakpoint to ensure that? I've seen cases where dependency injection is used and a bug exists that the dispose was never called on the UoW or repository and thus context didn't get disposed.
Yes I debugged it and it was called after EditProfile() so I assume that it's a new context that is used at httpPost EditProfile(). I also found that the way I call GetUserLoaded() returns a detached object, not Unchanged. Will that be a problem?
Angular router not recognized in Visual Studio
I'm doing a PluralSight course about integrating an ASP.NET Core RESTful API with an Angular frontend.
In the sample code I'm modifying I have a class that imports the Angular Router
import { Router } from '@angular/router';
and then injects it through the constructor:
export class CustomersComponent implements OnInit {
...
constructor(private router: Router,
private dataFilter: DataFilterService) { }
Visual Studio marks the word Router in the constructor red, and displays the following error:
VS also offers two possible actions to resolve this:
However, the first option produces another strange state:
And the second option just moves the 'Could not find symbol' to the line where the import is added. Furthermore, neither option is consistent with the code presented in the course.
I'm new to Angular and have no idea what's going on here, and how to resolve this type of error. Any help would be appreciated.
There is a discussion tab provided for each course on Pluralsight. Consider also posting your question there. The author may be able to help you. I don't use Angular in Visual Studio, but as a guess, is RouterModule missing from your Angular module?
I don't think that's it. This sort of thing is happening throughout the project. For example in app.module.ts, in this import:
import { BrowserModule } from '@angular/platform-browser'; the 'BrowserModule' bit is grey, VS claims it's unused. Then in @NgModule({
imports: [
BrowserModule is red and I get the same two 'options' to add import. But I can build and run the thing just fine. So... Yay! ... I guess. Must be some Visual Studio quirk.
OnePlus Bluetooth neckband not connecting to Ubuntu 24.04.1
I recently switched to Ubuntu from Windows. I have a OnePlus Bluetooth Z2 neckband which works completely fine with all other devices but is not connecting to my laptop now. This problem did not occur on Windows, nor on Archcraft, which I tried for some time on the same laptop.
It is also now not even discovering my neckband.
I have already tried some methods from the forum, but nothing has worked yet.
Can't send objects using Sockets
Hello, I'm writing an app in which the client sends the name of a room to the server; the server creates it and then sends back the whole list of rooms. I have a problem with receiving this object from the server. Also, interestingly, when I close the client app and open it again, I have the list of rooms just as it should be. I refresh the room list in the client app but it's always empty; only reopening helps. That's pretty weird and I don't know the cause.
On client side:
The getIs() method returns the is object.
The getOs() method returns the os object.
this.os = new ObjectOutputStream(clientSocket.getOutputStream());
this.is = new ObjectInputStream(clientSocket.getInputStream());
private void createRoom(ActionEvent event) {
String roomName = "CreateRoom ";
roomName += setRoomName();
String response = null;
try {
client.getOs().writeObject(roomName);
response = (String) client.getIs().readObject();
System.out.println(response);
} catch (IOException | ClassNotFoundException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
}
private void refreshRooms() {
String response = null;
try {
client.getOs().writeObject("RefreshRooms");
response = (String) client.getIs().readObject();
System.out.println(response);
rooms = (Rooms) client.getIs().readObject();
System.out.println("Print in client: ");
rooms.printAllRooms();
} catch (IOException e) {
// TODO Auto-generated catch block
e.printStackTrace();
} catch (ClassNotFoundException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
}
Server:
this.oos = new ObjectOutputStream(connection.getOutputStream());
this.ois = new ObjectInputStream(connection.getInputStream());
public void run() {
String inputRequest = null;
try {
while((inputRequest = (String) ois.readObject()) != null) {
System.out.println(inputRequest);
handleRequest(inputRequest);
}
} catch (IOException | ClassNotFoundException e) {
System.out.println("Client has disconnected.");
e.printStackTrace();
}
}
private void handleRequest(String request) {
String response = null;
String[] msg = request.split(" ");
if(msg[0].equals("CreateRoom")) {
try {
oos.writeObject("You want create a room.");
Room newRoom = new Room(msg[1]);
rooms.addRoom(newRoom);
System.out.println("Created room: " + newRoom.getName());
System.out.println("\n Print after creation: ");
rooms.printAllRooms();
} catch (IOException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
}else if (msg[0].equals("RefreshRooms")) {
try {
oos.writeObject("You want list of rooms.");
System.out.println("Print before send.");
rooms.printAllRooms();
oos.writeObject(rooms);
} catch (IOException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
}
}
///EDIT:
So I removed the PrintWriter and BufferedReader objects and now I'm using only object streams. What doesn't work now is:
When I create some rooms one after another and then refresh the room list in the client app, I get all rooms.
But when I create one room, refresh, then create another and refresh, I get only 1 room after the 2nd refresh. So basically, when I refresh, the server always sends me the same object from the first send, and I don't know how to change that.
Also, I'm printing these rooms on the server side and always get all rooms, so room creation is OK.
And your question is? NB You can't mix buffered streams on the same socket like this. Make up your mind. Either you're using buffered readers and writers or object streams. You can't have both.
Does the server get the message?
Well I changed to object streams but I have the same problem.
The server gets the message but always sends the same object, even if I modify it.
You need to investigate ObjectOutputStream.reset(). NB readObject() doesn't return null at end of stream, so looping on it is incorrect. It throws EOFException.
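A minimal, self-contained sketch of the reset() point above (the class and room names here are illustrative, not from the original app): ObjectOutputStream caches every object it has written, so re-sending a mutated rooms object without reset() transmits only a back-reference to the stale first copy.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.util.ArrayList;
import java.util.List;

public class ResetDemo {
    // Returns the list sizes the reader sees for the first and second send.
    public static int[] demo() throws Exception {
        ByteArrayOutputStream wire = new ByteArrayOutputStream();
        ObjectOutputStream oos = new ObjectOutputStream(wire);

        List<String> rooms = new ArrayList<>();
        rooms.add("room1");
        oos.writeObject(rooms);      // first send: fully serialized

        rooms.add("room2");          // the server mutates the same object
        oos.reset();                 // drop the stream's cache; without this,
                                     // the next write is just a back-reference
                                     // to the stale first copy
        oos.writeObject(rooms);      // second send: serialized again, up to date
        oos.flush();

        ObjectInputStream ois = new ObjectInputStream(
                new ByteArrayInputStream(wire.toByteArray()));
        @SuppressWarnings("unchecked")
        List<String> first = (List<String>) ois.readObject();
        @SuppressWarnings("unchecked")
        List<String> second = (List<String>) ois.readObject();
        return new int[] { first.size(), second.size() };
    }

    public static void main(String[] args) throws Exception {
        int[] sizes = demo();
        System.out.println(sizes[0] + " then " + sizes[1]);
    }
}
```

Dropping the `oos.reset()` line reproduces the bug described in the edit: the second read yields the one-room snapshot again.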
You could try to flush the buffered streams:
os.flush()
This will force the stream to actually send the bytes of the serialized object. Without that, the BufferedOutputStream might just wait around and buffer data, as the name says. This is done so that the size of the sent packets does not become too small, which would result in a lot of overhead if you want to send multiple objects.
If you are done, you should close the stream anyway.
I added os.flush() before sending this object but it didn't change anything.
How is flushing "before" supposed to help?
Sorry, I meant after; it looks like: oos.writeObject(rooms); oos.flush();
Class Rooms is a List of Room objects, so I tried to send each element of this List in a for loop, and it works. It isn't the best idea, but I just can't send the whole Rooms object.
If you've removed the BufferedStreams in the EDIT, then the answer is obsolete anyway. Consider cleaning your code up and closing all streams properly first. This here: https://docs.oracle.com/javase/tutorial/essential/exceptions/tryResourceClose.html might be useful.
XTestFakeKeyEvent calls get swallowed
I'm trying to spoof keystrokes; to be a bit more precise: I'm replaying a number of keystrokes which should all get sent at a certain time - sometimes several at the same time (or at least as close together as reasonably possible).
Implementing this using XTestFakeKeyEvent, I've come across a problem. While what I've written so far mostly works as it is intended and sends the events at the correct time, sometimes a number of them will fail. XTestFakeKeyEvent never returns zero (which would indicate failure), but these events never seem to reach the application I'm trying to send them to. I suspect that this might be due to the frequency of calls being too high (sometimes 100+/second) as it looks like it's more prone to fail when there's a large number of keystrokes/second.
A little program to illustrate what I'm doing, incomplete and without error checks for the sake of conciseness:
// #includes ...
struct action {
int time; // Time where this should be executed.
int down; // Keydown or keyup?
int code; // The VK to simulate the event for.
};
Display *display;
int nactions; // actions array length.
struct action *actions; // Array of actions we'll want to "execute".
int main(void)
{
display = XOpenDisplay(NULL);
nactions = get_actions(&actions);
int cur_time;
int cur_i = 0;
struct action *cur_action;
// While there's still actions to execute.
while (cur_i < nactions) {
cur_time = get_time();
cur_action = actions + cur_i;
// For each action that is (over)due.
while ((cur_action = actions + cur_i)->time <= cur_time) {
cur_i++;
XTestFakeKeyEvent(display, cur_action->code,
cur_action->down, CurrentTime);
XFlush(display);
}
// Sleep for 1ms.
nanosleep((struct timespec[]){{0, 1000000L}}, NULL);
}
}
I realize that the code above is very specific to my case, but I suspect that this is a broader problem - which is also why I'm asking this here.
Is there a limit to how often you can/should flush XEvents? Could the application I'm sending this to be the issue, maybe failing to read them quickly enough?
It's been a little while but after some tinkering, it turned out that my delay between key down and key up was simply too low. After setting it to 15ms the application registered the actions as keystrokes properly and (still) with very high accuracy.
I feel a little silly in retrospect, but I do feel like this might be something others could stumble over as well.
MongoDB query array where all elements match criteria
I have a collection of 3 documents:
{"users":[{"id":1,"gender":"male"},{"id:1,"gender":"male"},{"id":1,"gender":"male"}]}
{"users":[{"id":1,"gender":"male"},{"id:1","gender":"male"},{"id":1,"gender":"female"}]}
{"users":[{"id":1,"gender":"female"},{"id":1,"gender":"female"},{"id":1,"gender":"female"}]}
I want to write a query where every gender is male (users.gender:"male") and as a result to get only:
{"users":[{"id":1,"gender":"male"},{"id:1,"gender":"male"},{"id":1,"gender":"male"}]}
So far I was able to get those documents where at least one field matches criteria. So I was getting:
{"users":[{"id":1,"gender":"male"},{"id:1","gender":"male"},{"id":1,"gender":"female"}]}
{"users":[{"id":1,"gender":"female"},{"id":1,"gender":"female"},{"id":1,"gender":"female"}]}
Is there any way to do it?
Please rephrase the question. That is not valid JSON, and the example doesn't make sense.
The documents are not in proper JSON. Can you please edit your question to correct this?
Assuming you have only two type of genders in your documents : male and female, all the documents which do not contain female as the gender type, will automatically be those with every gender type as male. Hence $nin [doc] can be used.
Of course, this is heavily assuming and inferring from your example, which is not clear at all. Please format your question to be correct and concise so that others can benefit. Thanks.
This is actually another way of saying what Abhishek Pathak already said:
You can exclude female from the possible set of results. Assuming your collection is called example you can use the following query:
db.examples.find({"users.gender" : {$nin: ["female"]}})
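For completeness, a more direct way to express "no array element fails the criterion" is to negate an `$elemMatch` that looks for a counterexample. (These are standard MongoDB operators, but this query is not from the answers above; the collection name `examples` follows the answer's assumption.)

```js
// Matches documents where NO element of users has gender != "male".
// Note: this also matches documents whose users array is empty or missing.
db.examples.find({
  users: { $not: { $elemMatch: { gender: { $ne: "male" } } } }
})
```

Unlike the `$nin` approach, this does not rely on gender taking only two values.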
How to query latest records from a table
I have a table similar to this:
ID User_Code ProcessID DateCreated Remarks
1 AAA 1 2020-01-01 08:40 N/A
2 AAA 2 2020-01-01 09:34 R123
3 AAA 2 2020-01-01 10:40 SUCCESS
4 AAA 3 2020-01-01 11:00 N/A
5 BBB 1 2020-01-01 11:01 N/A
6 BBB 1 2020-01-01 11:10 N/A
7 BBB 2 2020-01-01 11:20 SUCCESS
8 BBB 3 2020-01-01 11:30 N/A
9 CCC 1 2020-01-01 11:31 N/A
10 CCC 2 2020-01-01 11:40 R001
11 CCC 2 2020-01-01 11:43 R002
What I want to accomplish is to create a result like this
TransDate Remarks UserCode Process1 Process2 Process3
2020-01-01 SUCCESS AAA OK OK OK
2020-01-01 SUCCESS BBB OK OK OK
2020-01-01 R001 CCC OK OK NOK
Where, if there is a record for a Process for the particular UserCode, the column value should be OK; if there is no record, the value should be NOK. Also, the Remarks column pertains only to Process2, where the value should be the latest Remarks. The problem is that it is not outputting the correct result; instead it is displaying this:
TransDate Remarks UserCode Process1 Process2 Process3
2020-01-01 AAA OK OK OK
2020-01-01 SUCCESS BBB OK OK OK
2020-01-01 R001 CCC OK OK NOK
See below created SQL:
SELECT UserCode , DATE(DateCreated) AS TransDate,
IF (COUNT(
CASE
WHEN PROCESS = 1
THEN 1
ELSE NULL
END
) > 0, "OK", "NOK" ) AS 'Process1',
IF (COUNT(
CASE
WHEN PROCESS = 2
THEN 1
ELSE NULL
END
) > 0, "OK", "NOK" ) AS 'Process2',
IF (COUNT(
CASE
WHEN PROCESS = 3
THEN 1
ELSE NULL
END
) > 0, "OK", "NOK" )AS 'Process3'
FROM MyTable WHERE UserCode = '######'
GROUP BY DATE(DateCreated)
Sorry, I got confused and posted the wrong data. The records should be:
ID User_Code ProcessID DateCreated Remarks
1 AAA 1 2020-01-01 08:40 N/A
2 AAA 2 2020-01-01 09:34 R123
3 AAA 2 2020-01-01 10:40 SUCCESS
4 AAA 3 2020-01-01 11:00 N/A
5 AAA 1 2020-01-02 11:01 N/A
6 AAA 1 2020-01-02 11:10 N/A
7 AAA 2 2020-01-02 11:20 SUCCESS
8 AAA 3 2020-01-02 11:30 N/A
9 AAA 1 2020-01-03 11:31 N/A
10 AAA 2 2020-01-03 11:40 R001
11 AAA 2 2020-01-03 11:43 R002
12 BBB 1 2020-01-03 11:32 N/A
13 BBB 2 2020-01-03 11:38 SUCCESS
14 BBB 3 2020-01-03 11:38 N/A
And the result of the query should be similar to this because the query should be based on a usercode.
TransDate Remarks UserCode Process1 Process2 Process3
2020-01-01 SUCCESS AAA OK OK OK
2020-01-02 SUCCESS AAA OK OK OK
2020-01-03 R002 AAA OK OK NOK
In the above query the Remarks column is missing. Kindly update the query.
As @strawberry has pointed out the remarks for user_code ccc in your desired output contradicts the stated requirement - is this a typo or do you have another rule not yet stated?
Guys, see the edited correct data and expected result.
The easy bit is to use conditional aggregation to transform the rows to columns; the harder bit joins to a subquery which works out the last id per user_code and plucks the remarks.
select min(date(datecreated)) dt,
max(x.remarks),
t.user_Code,
max(case when processid = 1 then 'ok' else 'nok' end) process1,
max(case when processid = 2 then 'ok' else 'nok' end) process2,
max(case when processid = 3 then 'ok' else 'nok' end) process3
from t
left join
(
select t.user_code,remarks
from t
join
(select t1.user_code ucid, max(t1.id) maxid from t t1 where processid = 2 group by t1.user_Code) s
on s.ucid = t.user_code and s.maxid = t.id
) x on x.user_code = t.user_code
group by t.user_code;
note I have assumed that the remarks in your desired output is incorrect given that the latest remark for user_Code ccc is r002
Just to observe that this result differs from the op's desired result.
SELECT DISTINCT
MAX(DATE(DateCreated)) OVER (PARTITION BY User_Code) TransDate,
FIRST_VALUE(CASE ProcessID WHEN 2 THEN Remarks END) OVER (PARTITION BY User_Code ORDER BY CASE ProcessID WHEN 2 THEN DateCreated END DESC) Remarks,
User_Code,
CASE WHEN MAX(CASE WHEN ProcessID = 1 THEN Remarks END) OVER (PARTITION BY User_Code) IS NULL THEN 'NOK' ELSE 'OK' END Process1,
CASE WHEN MAX(CASE WHEN ProcessID = 2 THEN Remarks END) OVER (PARTITION BY User_Code) IS NULL THEN 'NOK' ELSE 'OK' END Process2,
CASE WHEN MAX(CASE WHEN ProcessID = 3 THEN Remarks END) OVER (PARTITION BY User_Code) IS NULL THEN 'NOK' ELSE 'OK' END Process3
FROM test;
TransDate Remarks User_Code Process1 Process2 Process3
2020-01-01 SUCCESS AAA OK OK OK
2020-01-01 SUCCESS BBB OK OK OK
2020-01-01 R002 CCC OK OK NOK
fiddle
@Strawberry I see the the only difference - in Remarks for CCC. But the value shown by OP contradicts to his conditions - to get the value from the last record, for which R002 for 2020-01-01 11:43 is correct whereas shown R001 for 2020-01-01 11:40 is not.
Yep, it's intriguing
@Akina, there was an error in the query result. You are right that the latest Remarks is R002 instead of R001. I have revised the actual data and the required output.
How to add text unit (e.g. cm) inside input tag?
I'm looking to put a measurement unit in my input, but I don't know how.
I want something like this:
[............ cm]
This is my html:
<div class="input">
<label for="height">Height</label>
<input id="height" type="text" value="" name="height">
</div>
If I try to add a span next to the input, it just places it outside the input.
Inputs cannot have contents.
There must be a way to add a unit inside the input though? So the user knows what kind of unit the input is in.
An extra element positioned on top of the input.
If you would like to set cm at the end of your value, you could try to change the value with JavaScript.
Here is an example of how you could do it:
document.getElementById('number').addEventListener('keyup', (e) => {
e.target.value = e.target.value.split('cm').join('');
if (e.target.value.slice(-2) !== 'cm') {
e.target.value = `${e.target.value}cm`;
}
});
input {
text-align: right;
}
<input type="text" id="number" value="cm">
But anyway, that's not the best solution. I would recommend using placeholders, labels or maybe a tooltip to let the user know what kind of unit the input is about.
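A common alternative, sketching the "extra element positioned on top of the input" idea from the comments (the class names here are illustrative, not from the thread): wrap the input and absolutely position a unit label over its right edge, so the "cm" only appears to be inside the field while the value itself stays clean.

```html
<div class="unit-wrap">
  <input id="height" type="text" name="height">
  <span class="unit">cm</span>
</div>

<style>
  .unit-wrap { position: relative; display: inline-block; }
  .unit-wrap input { padding-right: 2.5em; } /* leave room for the unit */
  .unit-wrap .unit {
    position: absolute;
    right: 0.5em;
    top: 50%;
    transform: translateY(-50%);
    pointer-events: none; /* clicks fall through to the input */
  }
</style>
```

Because the unit is a sibling element, the input's value never contains "cm", so no JavaScript cleanup is needed before submitting the form.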
access html tag property inside itself with angular
I have this html code :
<label [class]="text === 'test1' ? 'class1' : 'class2'" text="test1"></label>
<label [class]="text === 'test1' ? 'class1' : 'class2'" text="test2"></label>
I want to access the text property of those labels inside the class tester, but it doesn't seem to be the right syntax, and I cannot find an answer on whether it is possible and how.
Try adding template variables to the labels and using them.
<label #lbl1 [class]="lbl1.text === 'test1' ? 'class1' : 'class2'" text="test1"></label>
Thanks, this is what I was looking for. I've since found it in the docs:
https://angular.io/guide/template-syntax#expression-context
Delphi (XE6) : Array of Byte to TStringList
I have a function which returns a dynamic array of byte
type
TMyEncrypt = Array of Byte;
TMyDecrypt = Array of Byte;
function decrypt(original: TMyEncrypt) : TMyDecrypt;
The content of the returned dynamic array TMyDecrypt is a standard text with CRLF.
How can I load this into a TStringList, with CRLF as separator, without saving it to a temporary file first?
EDIT: the returned array of bytes contains Unicode-encoded characters
You'd need to know the text encoding
Hi David, just added as edit (unicode)
StringList.Text := PChar(decrypt(MyEncrypt)); assuming byte array is in little endian order, and with proper null terminator.
That assumes the decrypted bytes are UTF-16, otherwise such a typecast will not work.
@Remy Since the question states that the data is encoded UTF-16, I don't think there's any need to assume anything
The question says "unicode coded" instead of "UTF-16 coded", and "unicode" does not always mean UTF-16 to everyone. Better to be more explicit about the exact format.
@Remy On Windows that's usually what Unicode means. For instance TEncoding.Unicode.
Decode the byte array to a string, and then assign to the Text property of the string list.
var
Bytes: TBytes;
StringList: TStringList;
....
StringList.Text := TEncoding.Unicode.GetString(Bytes);
Note the use of TBytes which is the standard type used to hold dynamic arrays of bytes. For compatibility reasons it makes sense to use TBytes. That way your data can be processed by other RTL and library code. A fact we immediately take advantage of by using TEncoding.
You could use SetString, as my answer originally suggested:
var
Text: string;
Bytes: TBytes;
StringList: TStringList;
....
SetString(Text, PChar(Bytes), Length(Bytes) div SizeOf(Char));
StringList.Text := Text;
Personally I prefer to use TEncoding because it is very explicit about the encoding being used.
If your text was null terminated then you could use:
StringList.Text := PChar(Bytes);
Again, I'd prefer to be explicit about the encoding. And I might be a little paranoid about my data somehow not being null terminated.
You might find that UTF-8 is a more efficient representation than UTF-16.
Thanks a lot David... Problem is the choice of UTF-8 or UTF-16 is not in my "hands", since I'm not the creator of the files I "parse"
Note that I edited the answer to make it a little cleaner. I prefer TEncoding to the original SetString version.
@AndreasHinderberger: You have to know the encoding of the decrypted bytes in order to decode them correctly into a UTF-16 string for TStringList to consume. If you don't know the byte encoding, you cannot decode the bytes.
@Remy We already did this. I said the same thing, and Andreas supplied the information.
| common-pile/stackexchange_filtered |
How do you set ASP.net html control(label) using javaScript
I have created a JavaScript function to get the type of html control and set it to specified value.
function SetControlValue(ctrl, value) {
if (value == undefined)
return "";
if (document.getElementById(ctrl).type == "text") {
document.getElementById(ctrl).value = value;
}
else if (document.getElementById(ctrl).type == "label") {
//document.getElementById(ctrl).innerText = value;
document.getElementById(ctrl).innerHTML = value;
}
return false;
}
On my ASPX page I have created a label as below
<asp:Label id="lblMessage" class="labels" Font-Size="Medium" runat="server"></asp:Label>
and now calling the function
var don="sample text";
SetControlValue('lblMessage', don)
My question is: why does the SetControlValue() function work on text fields but not on labels? Is there something that I am missing? Thanks.
The reason why it's not working is because ASP changes the ID of your label. You'll need to get the client side id of the label control.
SetControlValue('<%= lblMessage.ClientID %>', value);
As @Tim B James suggested in the comments you can also set the ClientIDMode to Static as follows:
<asp:Label ClientIDMode="static" id="lblMessage" class="labels" Font-Size="Medium" runat="server"></asp:Label>
Edit: Basically you don't need to check for label or span. Instead you can check if your div element has a property called innerHTML or value. I suggest you to change the javascript function to the following and it should work:
function SetControlValue(ctrl, value) {
if (value == undefined) {
return "no value set";
}
var element = document.getElementById(ctrl);
if(!element) {
return "element not found";
}
if(element['value'] !== undefined) {
element['value'] = value;
} else if (element['innerHTML'] !== undefined) {
element['innerHTML'] = value;
}
return false;
}
Fiddle
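The fallback logic above can also be exercised outside the browser by letting plain objects stand in for DOM nodes (a sketch of the same checks, not exact DOM behavior):

```javascript
// Duck-typed version of the answer's function: prefer a 'value'
// property (inputs), fall back to 'innerHTML' (spans, i.e. what
// an asp:Label renders as).
function setControlValue(element, value) {
  if (!element) {
    return "element not found";
  }
  if (element.value !== undefined) {
    element.value = value;
  } else if (element.innerHTML !== undefined) {
    element.innerHTML = value;
  }
  return false;
}
```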
Also, to add to this, you can set the ClientIDMode of your label to Static this will stop asp.net from changing the id.
Well, I tried this and it didn't work; I was getting this error: JavaScript runtime error: Unable to get property 'type' of undefined or null reference
plz check my updated answer! try using nodeName instead!
Try this:
document.getElementById(ctrl).innerHTML = value;
Update:
When an ASP.NET Label control is rendered, it is rendered as a <span> and not as a <label>.
function SetControlValue(msg) {
document.getElementById('<%= Label1.ClientID %>').innerHTML = msg;
var control = document.getElementById('<%= Label1.ClientID %>');
if (control != null && control.nodeName.toLowerCase() == "span") {
alert("its label control");
}
}
This works for me!
NOTE: Please make changes in the function according to your needs!
hi @Angwenyi! plz check the edited answer! and let me know..! it works i have tested it!
A moment i try it out.
I get JavaScript runtime error: Unable to get property 'type' of undefined or null reference on var control = document.getElementById('<%= lblMessage.ClientID %>') I don't know why
That is what I have mentioned: just use 'nodeName' instead of 'type' and it will work!
| common-pile/stackexchange_filtered |
What is the best way to partition, and what is the best file-system for a new Hard Disk?
Possible Duplicate:
What's your recommendation on drive partitioning schemes for a desktop and home server?
Is Btrfs in Maverick considered stable?
Bonus: What is the "best" filesystem for "general" use?
Bonus 2: If btrfs is supposedly "better" than ext3/4, why isn't it the default?
Bonus 3: Will creating a separate boot partition reduce boot time?
I currently have it set up as:
?G Other Stuff
4G Swap
10G root ext4
17G home btrfs
you should ask multiple questions if you find they are sufficiently different. There's no harm in asking two questions at a time, you will get better answers this way. You'll find that both your question have been answered before at the link from Jorge Castro and http://askubuntu.com/questions/4752/is-btrfs-in-maverick-considered-stable. Don't worry if this question is going to get closed for this reason.
There is no best way to partition a drive as such. Of course it depends on your set-up. If you have a complicated or odd server system, you may want to spend a lot of time thinking about this. If your current configuration works, leave it be.
You will not gain a lot by making an effort.
But this is the important bit:
The default, i.e. what the Ubuntu installer chooses for you if you select "Erase entire drive", is heavily tested by millions of users. It will work very well.
The default file system in Ubuntu currently is ext4. This is what that partitioning looks like on one of my virtual machines:
This works extremely well
Bonus 2: If btrfs is supposedly "better" than ext3/4, why isn't it the default?
There are things that are better about btrfs compared to ext4, and vice versa. Keep in mind that ext4 is also on the bleeding edge of file systems. It's brand new and very advanced. Btrfs is not magical. :-)
It may very well become the default soon; we'll see.
Here's the technical bit about btrfs's cool features (this list is copied verbatim from btrfs):
Online volume growth and shrinking
Online block device addition and removal
Online defragmentation
Online balancing (movement of objects between block devices to balance load)
Transparent compression (currently zlib)
Subvolumes (separately-mountable filesystem roots)
Snapshots (writeable, copy-on-write copies of subvolumes)
File cloning (copy-on-write on individual files, or byte ranges thereof)
Object-level (RAID1-like) mirroring, (RAID0-like) striping
Checksums on data and metadata (currently CRC-32C)
In-place conversion (with rollback) from ext3/4 to Btrfs
File system seeding (Btrfs on read-only storage used as a copy-on-write backing for a writeable Btrfs)
User-defined transactions
Block discard support (reclaims space on some virtualization setups and improves wear leveling on SSDs by notifying the underlying device that storage is no longer in use)
(my emphasis)
If you care about and use any of the above, btrfs will be a great boon. It's, after the oracle/sun merger, going to be what ZFS should have always been: Implemented. :)
Bonus 3: Will creating a separate boot partition reduce boot time?
No. Boot time is mostly waiting for the magnetic seek-head in the hard drive; you can gain very little indeed by changing any configuration at all. Partitioning the drive differently will not do much good.
You will possibly hear miracle stories of people decreasing their boot time by putting /boot right at the very beginning of the disk (where the platters are "smaller", so to speak). There will be about a 50% decrease in seek times at the very inner part of the disk compared to the very outside. Making an effort to put /boot there is futile: it will be close to the beginning anyway, no need to bother with that. You should know this though: it makes sense to put the operating system first, and data last.
Wow. Thanks... that's one of the longest, best and most detailed answers I've ever gotten.
| common-pile/stackexchange_filtered |
Implementing a point class in Python
So I'm trying to implement a point class which creates a point and then rotate, scale and translate the point. Here's what I've currently written.
class Point:
'''
Create a Point instance from x and y.
'''
def __init__(self, x, y):
self.x = 0
self.y = 0
'''
Rotate counterclockwise, by a radians, about the origin.
'''
def rotate(self, a):
self.x0 = math.cos(this.a) * self.x - math.sin(this.a) * self.y
self.y0 = math.sin(this.a) * self.x + math.cos(this.a) * self.y
'''
Scale point by factor f, about the origin.
Exceptions
Raise Error if f is not of type float.
'''
def scale(self, f):
self.x0 = f * self.x
self.y0 = f * self.y
'''
Translate point by delta_x and delta_y.
Exceptions
Raise Error if delta_x, delta_y are not of type float.
'''
def translate(self, delta_x, delta_y):
self.x0 = self.x + delta_x
self.y0 = self.y + delta_y
'''
Round and convert to int in string form.
'''
def __str__(self):
return int(round(self.x))
Something in this code is generating an error. Now, I haven't implemented error catching yet, but I do have an Error class at the top
class Error(Exception):
def __init__(self, message):
self.message = message
But how would I catch the error if a certain variable is not of type float?
Here's one of the if statements I'm using:
def __init__(self, x, y):
if not isinstance(x, float):
raise Error ("Parameter \"x\" illegal.")
self.x = x
self.y = y
if not isinstance(y, float):
raise Error ("Parameter \"y\" illegal.")
self.x = x
self.y = y
But that gets me an indentation error. So how can I print out an error message that says exactly which variable is causing the problem?
"Something in this code is generating an error." What is generating the error? (Hint: the error message tells you.)
Should not self.x = 0 be self.x = x and self.y = 0 be self.y = y?
The error says "
Traceback (most recent call last):
File "test_A.py", line 17, in
print Point(0.0,1.0)
TypeError: __str__ returned non-string (type int)
"
Why do you care about the variables are float?
If you want to make sure it is a number you could query if the variable is of a instance of a number type and if not throw a custom error by yourself...
your error is because you cast the return value of str to int and not to str
But __str__ is supposed to round and convert to an int.
Can't you just use a complex number?
@DavidRolfe: so why not just fix __str__ to return a string instead of an integer? __str__ must return a string. You can convert your int to a string if that's what you want to display.
@PeterWood: why would a complex number help here?
I don't think I can use complex numbers. With __str__ it is supposed to convert it into an int in string form, however I'm only using self.x. Would I need to append self.y in there somewhere?
@MartijnPieters A complex number supports rotation, scaling, and translation.
@PeterWood: but that won't let the OP learn about classes, would it. :-)
@MartijnPieters Is that the purpose?
Now the error says AttributeError: Point instance has no attribute 'x' for the __str__ method.
@DavidRolfe: not if you don't want to. What do you expect print(Point()) to produce?
@PeterWood: almost certainly.
It's supposed to produce a single point and then rotate, scale, and translate that point so I can draw different shapes.
Ok so it appears that the problem is with the return int(round(self.x)) because it's saying that x does not exist even though I'm sure I defined x earlier in my program.
Again, the __str__ method is supposed to round and convert to an int in string form but it keeps saying Point instance has no attribute 'x' for the __str__ method. So why does it keep saying that? Is there something important I left out of my __str__ method?
If you want to raise an exception, do it in the Point's initializer:
def __init__(self, x, y):
if not isinstance(x, float) or not isinstance(y, float):
raise Error("Point's coordinates must be floats.")
self.x = x
self.y = y
Or convert the coordinates to float:
def __init__(self, x, y):
self.x = float(x)
self.y = float(y)
I used Error class here, cause you mention you have it defined, but it probably should be more specific Exception
Should self.x = x and self.y = y be inside the indentation because I keep getting an error with that line.
Ok it seems to be working for catching the error but what if I wanted the error message to say Parameter "x" illegal or Parameter "a" illegal and actually name which variable is causing the error.
@DavidRolfe in this case when we talk only about 2 variables x and y, just make the if statement more detailed.
I tried using two if statements and it keeps saying
IndentationError: unindent does not match any outer indentation level
If the variable is not a float, you'll get a TypeError. Pretty much you can 'catch' these errors like this:
try:
pass # your stuff here.
except TypeError as e:
print e # this means a type error has occurred, type is not correct.
Also, this would be worth reading for checking for correct types at the start with assert; https://wiki.python.org/moin/UsingAssertionsEffectively
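Putting the thread's fixes together, here is a minimal corrected sketch (Python 3; it raises the built-in TypeError rather than the question's custom Error class, and the two-coordinate __str__ format is my choice):

```python
import math

class Point:
    def __init__(self, x, y):
        # store the constructor arguments instead of hard-coded zeros
        if not isinstance(x, float) or not isinstance(y, float):
            raise TypeError("Point coordinates must be floats.")
        self.x = x
        self.y = y

    def rotate(self, a):
        # rotate counterclockwise by a radians about the origin;
        # update self.x/self.y in place (no stray `this` or `x0`)
        x, y = self.x, self.y
        self.x = math.cos(a) * x - math.sin(a) * y
        self.y = math.sin(a) * x + math.cos(a) * y

    def scale(self, f):
        # scale about the origin by factor f
        self.x *= f
        self.y *= f

    def translate(self, delta_x, delta_y):
        self.x += delta_x
        self.y += delta_y

    def __str__(self):
        # __str__ must return a string, not an int
        return "(%d, %d)" % (round(self.x), round(self.y))
```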
| common-pile/stackexchange_filtered |
Openlayers 3 select interaction unable to add event condition
So I'm trying to add a select interaction into my maps when I hover over any feature it's clearly highlighted.
import Select from 'ol/interaction/select';
import pointerMove from 'ol/events/condition.js'
this.selectPointerMove = new Select({
condition: pointerMove
});
this.coreMapComponent.map.addInteraction(this.selectPointerMove);
The condition field is throwing an error of -
Type 'typeof events' is not assignable to type 'EventsConditionType'.
Type 'typeof events' provides no match for the signature '(event: MapBrowserEvent): boolean'.
Without a condition it works fine on a mouse click.
I should mention this is in an Angular 6 project using "@types/ol": "^4.6.2", if that's of any relevance.
try import {pointerMove} from 'ol/events/condition';
Yeah sorry should have mentioned I already tried that, same result unfortunately.
Try removing the .js after condition => import pointerMove from 'ol/events/condition'
I've tried that as well no luck unfortunately.
The current OpenLayers version 5.x.x needs some typing updates. Even though you are using OpenLayers 5.x.x, the installed types are from version 4.x.x.
This means that you need a workaround in your code.
Since all typings in version 4.x.x use the default-export style, you can't use named imports like:
import {pointerMove} from 'ol/events/condition';
Solution:
One option you can do is import all as a variable. With this, you will escape the TS error:
import Select from 'ol/interaction/select';
import * as condition from 'ol/events/condition';
this.selectPointerMove = new Select({
condition: (condition as any).pointerMove
});
this.coreMapComponent.map.addInteraction(this.selectPointerMove);
One side effect of that is that you will remove the option to do tree-shaking, but well, you will survive without that.
Hope this helps!
That does remove the TS lint error and prevents anything from erroring in the console, but it doesn't highlight the feature on pointerMove unfortunately, only on a mouse click.
As far as I know, adding interactions only tells the map that you can use (listen/subscribe to) this interaction. I don't think pointermove highlights the feature automatically. You should listen for the event: this.coreMapComponent.map.on("pointermove", function() { /* highlight feature */ })
And perform the highlight in the callback function
| common-pile/stackexchange_filtered |
If I connect 2 android devices via WiFi-Direct - can one share its Internet connection with the other?
I am building an SDK for a client to connect 2 android devices for exchanging data (strings, commands...) via various channels, such as Bluetooth, USB cable, WiFi.
One of the devices is a standard android phone, with a SIM card and data, thus able to access the internet. The other has no SIM card.
Currently using WiFi I am opening a HotSpot on the phone device, and so the non-SIM device can access the internet via the HotSpot.
Now my client wants me to connect via WiFi-Direct, too.
So my question is - once I connect the two devices via WiFi-Direct, will I be able to access the internet on the non-SIM device, using the SIM/data on the "normal" phone?
Thx
No time to try that out?
Hi, if you mean did I try this "manually" then yes - tried and it does not seem to work. But since I am writing code to work with WiFi-Direct, I thought I'd ask if there is something I can do in my program to allow this.
Well, Wi-Fi Direct can create a hotspot independent of the legacy Wi-Fi hotspot, BUT this hotspot doesn't share internet, and any request to an external IP will be dropped.
But for android you can use NetShare app to do that, you can download it from
here.
it works as follows:
• on the client side, NetShare uses the VPN service to catch all internet traffic of the device, sends it to NetShare on the server device, and waits for the response.
• on the server side, NetShare runs a server on a specific port to receive the internet packets sent from the client side in the previous step. It sends these packets to the internet; after receiving the response from the internet, it sends it back to NetShare on the client side, which in turn provides it to the client device.
for more details see the official website
Thanks. Will look into it. Problem is not sure my client would want to install 3rd party apps. Will see...
Not a problem; use the proxy mode, which doesn't need the app installed on the other device, but you will need to set up the proxy on it as explained in the help section of the app.
Thx. I will look into this, and let my client know.
| common-pile/stackexchange_filtered |
ASP.NET MVC: converting virtualpath to actualpath is wrong
I have been using Url.Content inside <% and %> in my views and all seems to be working fine... Then from within my controller I have tried the following, but it always returns the wrong path
XDocument xdoc = XDocument.Load(Url.Content("~/content/xml/faq.xml"));
and
XDocument xdoc = XDocument.Load(VirtualPathUtility.ToAbsolute("~/content/xml/faq.xml"));
Basically the path should be c:\Vs2008\Source\MyAppName.....
but its returning c:\MyAppName.....
So it's invalid.
Any ideas why this is happening? Is there a workaround?
That's correct. I'm not sure why it's adding the C:\ but MyApp...\ is the absolute path.
Have you tried Server.MapPath?
Those two methods are only meant to be used in the context of clients accessing content through your web server. To read a file internally, within the application you need to use Server.MapPath() or a similar method to get the physical path on your disk.
I was having a similar problem linking to some .css and image files. I wrote a blog post on this at http://www.stickfiguresoftware.com/node/46 that may be helpful and even has some sample code that I got to work.
Not sure it's the perfect solution, but it worked out for me.
| common-pile/stackexchange_filtered |
C++ class exposed to QML error in fashion TypeError: Property '...' of object is not a function
I've successfully exposed a C++ class to QML. It is registered and found in Qt Creator. Its purpose is to connect to a database, as shown in the following code:
#ifndef UESQLDATABASE_H
#define UESQLDATABASE_H
#include <QObject>
#include <QtSql/QSqlDatabase>
class UeSqlDatabase : public QObject
{
Q_OBJECT
Q_PROPERTY(bool m_ueConnected READ isConnected WRITE setConnected NOTIFY ueConnectedChanged)
private:
bool m_ueConneted;
inline void setConnected(const bool& ueConnected)
{ this->m_ueConneted=ueConnected; }
public:
explicit UeSqlDatabase(QObject *parent = 0);
Q_INVOKABLE inline const bool& isConnected() const
{ return this->m_ueConneted; }
~UeSqlDatabase();
signals:
void ueConnectedChanged();
public slots:
void ueConnectToDatabase (const QString& ueStrHost, const QString& ueStrDatabase,
const QString& ueStrUsername, const QString& ueStrPassword);
};
#endif // UESQLDATABASE_H
However, when I try to call method isConnected() from the following QML code
import QtQuick 2.0
Rectangle
{
id: ueMenuButton
property string ueText;
width: 192
height: 64
radius: 8
states: [
State
{
name: "ueStateSelected"
PropertyChanges
{
target: gradientStop1
color: "#000000"
}
PropertyChanges
{
target: gradientStop2
color: "#3fe400"
}
}
]
gradient: Gradient
{
GradientStop
{
id: gradientStop1
position: 0
color: "#000000"
}
GradientStop
{
position: 0.741
color: "#363636"
}
GradientStop
{
id: gradientStop2
position: 1
color: "#868686"
}
}
border.color: "#ffffff"
border.width: 2
antialiasing: true
Text
{
id: ueButtonText
color: "#ffffff"
text: qsTr(ueText)
clip: false
z: 0
scale: 1
rotation: 0
font.strikeout: false
anchors.fill: parent
font.bold: true
style: Text.Outline
textFormat: Text.RichText
verticalAlignment: Text.AlignVCenter
horizontalAlignment: Text.AlignHCenter
font.pixelSize: 16
}
MouseArea
{
id: ueClickArea
antialiasing: true
anchors.fill: parent
onClicked:
{
uePosDatabase.ueConnectToDatabase("<IP_ADDRESS>",
"testDb",
"testUser",
"testPassword");
if(uePosDatabase.isConnected()==true)
{
ueMenuButton.state="ueStateSelected";
}
else
{
ueMenuButton.state="base state"
}
}
}
}
I get the following error:
qrc:/UeMenuButton.qml:92: TypeError: Property 'isConnected' of object UeSqlDatabase(0x1772060) is not a function
What am I doing wrong?
You have this error because you have declared the property isConnected in C++ but you're calling it from QML in the wrong way: uePosDatabase.isConnected is the correct way, not uePosDatabase.isConnected().
If you want to call the function isConnected() you should change its name to differentiate it from the property, like getIsConnected(). Given your property declaration, you neither need to call this function directly nor do you need to make it callable from QML with the Q_INVOKABLE macro.
I see this code: Q_INVOKABLE inline const bool& isConnected(), isn't Q_INVOKABLE expected to declare a method?
You must first create the object you want in QML code and then use the function:
your QML Code:
import QtQuick 2.0
import QSqlDatabase 1.0
Rectangle {
id: ueMenuButton
QSqlDatabase {
id: uePosDatabase
}
.
.
.
MouseArea
{
id: ueClickArea
antialiasing: true
anchors.fill: parent
onClicked: {
console.log(uePosDatabase.isConnected())
}
}
}
Be aware that an error some lines earlier in the same .js file can lead to this problem: the following code will not be executed.
| common-pile/stackexchange_filtered |
XOR and II (concatenation) summation symbols
In LaTeX, in math mode, if I want to express summation over a range I can use the following expression \sum_{from}^{to}. I can do the same for the product.
What is the name of the symbol that does this for XOR or concatenation?
If I do:
\oplus_{i=0}^7
I don't get the i=0 and 7 parts below and above the symbol, respectively, but to the right instead, like this:
With summation (\sum_{i=0}^7), that is not the case; they appear below and above in the output:
How can I make XOR, or concatenation (II) larger and with indices below and above the symbol?
You want to use \bigoplus instead of \oplus.
I've never seen concatenation done that way. Addition and XOR are commutative operations so it makes sense to sum over a set (or take the exclusive OR of a set). Concatenation is not like that. I think I would explicitly write out the concatenation. That said, you can use \bigparallel from the stmaryrd package.
\documentclass{article}
\usepackage{amsmath}
\usepackage{stmaryrd}
\newcommand*\concat{\mathbin{\|}}
\begin{document}
\[x_1\concat x_2\concat\dotsb\concat x_n\]
\[\bigparallel_{i=1}^n x_i\]
\end{document}
I saw these guys using concatenation summation: http://www.schneier.com/skein.pdf, pages 17,18.
@axel22: Interesting. I'd never seen that before. Let me update my answer.
I don’t see why the non-commutativity of the concatenation should disqualify it for this notation. In functional programming, a reduction of character strings via concatenation is a well-defined operation, and defined in the same way as a sum. We’re not talking about sets here, we’re talking about (well-ordered) sequences.
@Konrad: Fair enough. One might write $\sum_{x\in S}x$. A similar expression would be meaningless for concatenation. That's all I was saying. In this case, you're right that there's a canonical well order imposed by the indexing.
The usual way to get a larger \oplus symbol that takes limits above and below in display math mode is with \bigoplus. However this symbol might appear too big; a not-so-large symbol can be obtained by
\newcommand{\bigxor}{\mathop{\mathchoice
{\textstyle\bigoplus}{\textstyle\bigoplus}
{\scriptstyle\bigoplus}{\scriptscriptstyle\bigoplus}}}
For a concatenation big symbol one can do a similar thing:
\newcommand{\bigconc}{\mathop{\mathpalette\bigconcinn\relax}}
\newcommand{\bigconcinn}[2]{%
\vcenter{\hbox{$\bigconcchoose#1\bigconcsize|\mkern1mu\bigconcsize|$}}}
\newcommand{\bigconcchoose}[1]{\def\bigconcsize{}%
\ifx#1\displaystyle
\let\bigconcsize\Big
\else
\ifx#1\textstyle
\let\bigconcsize\big
\fi
\fi#1}
Now \bigconc will behave like \sum:
\[ \bigconc_{i=0}^{3} X_{i} \]
You can use the \DeclareMathOperator* command that defines operator with super/subscripts above/below itself:
\documentclass{article}
\usepackage{amsmath}
\DeclareMathOperator*{\OPLUS}{\oplus}
\begin{document}
\[ \OPLUS^a_b \]
\end{document}
| common-pile/stackexchange_filtered |
Why won't lua 5.4 load a module from the current directory by default?
I see multiple questions here that imply loading modules from the current directory should be the default behavior of Lua. See "selected answers" for these two questions:
Why lua require won't search current directory? ("if you used lua test.lua then it shoud work")
Lua: relative import fails from different working directory ("else cwd is . which works by default")
But my experience with lua54 is that it doesn't:
C:\Code\lua\testcwd>type hello.lua
local function hello()
print("Hello!")
end
return {hello=hello}
C:\Code\lua\testcwd>type use_hello.lua
hello = require("hello")
hello.hello()
C:\Code\lua\testcwd>type use_hello2.lua
-- Util needed by requireLocal()
local function get_directory_path(sep)
local file_path = debug.getinfo(2, "S").source:sub(2)
local dir_path = file_path:match("(.*" .. sep .. ")")
if not dir_path then
dir_path = "." .. sep
end
return dir_path
end
-- Enables calling require for current working directory
local function requireLocal()
local separator = package.config:sub(1,1)
local dir_path = get_directory_path(separator)
package.path = package.path .. ";" .. dir_path .. "?.lua"
end
requireLocal()
hello = require("hello")
hello.hello()
C:\Code\lua\testcwd>lua54 use_hello.lua
lua54: use_hello.lua:1: module 'hello' not found:
no field package.preload['hello']
no file 'C:\lua\systree\share\lua\5.4\hello.lua'
no file 'C:\lua\systree\share\lua\5.4\hello\init.lua'
no file 'C:\lua\systree\lib\lua\5.4\hello.dll'
stack traceback:
[C]: in function 'require'
use_hello.lua:1: in main chunk
[C]: in ?
C:\Code\lua\testcwd>lua54 use_hello2.lua
Hello!
C:\Code\lua\testcwd>lua54 -v
Lua 5.4.2 Copyright (C) 1994-2020 Lua.org, PUC-Rio
Why does "use_hello.lua" not simply work? One clearly sees that it does NOT search the current directory. To the best of my knowledge, my PUC Lua 5.4 (Windows 10) configuration is "default". Just to load a module from the current directory, why do I need to perform the "crazy setup" of "use_hello2.lua"? Is that something that changed with Lua 5.4? The readme of 5.4 doesn't seem to say this changed.
What's the value of package.path?
@shingo package.path=C:\lua\systree\share\lua\5.4\?.lua;C:\lua\systree\share\lua\5.4\?\init.lua (before changing it; seems to match the "stack-trace")
This isn't the default value, maybe you have defined environment variables LUA_PATH or LUA_PATH_5_4.
@shingo +1 Idk why, but my LUA_PATH and LUA_CPATH are missing "." If you could write your previous comment as an answer, together with what LUA_PATH and LUA_CPATH should look like in Windows, I would accept it.
Did you try require(... .. ".hello") ?
The problem here is that the default value of package.path is overridden by the LUA_PATH environment variable, and its value does not include ./?.lua.
In the manual, it mentions that if you want to include the default search path in an environment variable, you can append ;; to it. This also applies to LUA_CPATH. for instance:
C:\lua\systree\share\lua\5.4\?.lua;C:\lua\systree\share\lua\5.4\?\init.lua;;
| common-pile/stackexchange_filtered |
How is bias added in a convolutional layer
In a typical neural network, bias is usually added like this
v = activation ( w1*x1+ ... + Wb*b)
However, I am not really sure how it is done in a convolutional layer. My one thought is that it is added with each convolutional operation for a neuron. Is that correct?
There are two ways bias is usually added to a convolutional layer:
Tied bias: where you share one bias per kernel
Untied bias: where you use one bias per kernel and output
Check this blog post comparing the two.
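To make the tied case concrete, here is a small standard-library sketch (the function name is mine): one scalar bias per kernel/output channel, added at every spatial position of that channel's feature map.

```python
def add_tied_bias(feature_maps, biases):
    """Tied bias: one float per output channel.

    feature_maps: one 2-D list (H x W) per output channel, e.g. the
    result of the convolutions; biases: one float per channel.
    Returns new feature maps with the channel's bias added at every
    spatial position of that channel's map.
    """
    return [[[v + b for v in row] for row in fmap]
            for fmap, b in zip(feature_maps, biases)]
```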
Other than that, bias isn't really necessary in CNNs. There are many popular implementations which ignore it altogether (e.g. ResNet-152).
| common-pile/stackexchange_filtered |
Get figure from function and set it as state once user types in input
I have a component that I want to use to update a 'balance' in a database.
To do this, I am pulling the figure in my componentDidMount using axios.get:
componentDidMount() {
axios.get("/api/fetch/fetchEditDebt", {
params: {
id: this.props.match.params.id
}
})
.then((response) => {
this.setState({
balance: response.data.balance,
})
})
}
I then have an input which takes the amount the user wants to add to the balance:
<form method="POST" onSubmit={this.onSubmit}>
<input className="edit-balance-input" type="number" name="add" value={this.state.add} onChange={this.onChange} step="1" />
<button className="edit-balance-button" type="submit">Save</button>
</form>
I then use a function to take the original balance from state, and the 'add' figure from the input state, and add them together:
const calculateUpdatedBalance = () => {
return parseInt(this.state.balance) + parseInt(this.state.add)
}
And this updated figure is then rendered inside of a span so the user can see the new balance:
<div className="edit-balance-balance-container">
<p className="edit-balance-balance-paragraph">Updated balance: </p>
<span className="edit-balance-updated">-£{calculateUpdatedBalance()}</span>
</div>
This all works great, and as expected. The difficulty comes in when I then want to post the updated balance to my database. I tried to post the add state, but unsurprisingly that just updates the balance to the amount the user put into the input.
So how do I access the figure generated by my calculateUpdatedBalance() function? I thought about trying to setState() in the function, but that produces a "too many state updates" error.
Does anyone have any suggestions for how I can get that updated figure, and post that to my database?
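One suggestion (a sketch, not part of the original code): lift the calculation into a pure function of its inputs, so onSubmit can compute the same figure from this.state that the render uses.

```javascript
// Pure helper: both the render and the submit handler can call this
// with the current state values (placement and name are mine).
function calculateUpdatedBalance(balance, add) {
  return parseInt(balance, 10) + parseInt(add, 10);
}

// In onSubmit, one would then post e.g.
//   { balance: calculateUpdatedBalance(this.state.balance, this.state.add) }
// instead of this.state.add.
```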
Here's my full component for reference:
class Add extends Component {
constructor(props) {
super(props)
this.state = {
balance: '',
add: 0,
updatedBalance: '',
fetchInProgress: false
}
this.onChange = this.onChange.bind(this);
}
componentDidMount() {
this.setState({
fetchInProgress: true
})
axios.get("/api/fetch/fetchEditDebt", {
params: {
id: this.props.match.params.id
}
})
.then((response) => {
this.setState({
balance: response.data.balance,
fetchInProgress: false
})
})
.catch((error) => {
this.setState({
fetchInProgress: false
})
if (error.response) {
console.log(error.response.data);
console.log(error.response.status);
console.log(error.response.headers);
} else if (error.request) {
console.log(error.request);
} else {
console.log('Error', error.message);
}
console.log(error.config);
})
}
onChange = (e) => {
this.setState({
[e.target.name]: e.target.value
})
console.log(this.state.add)
}
onSubmit = async(e) => {
e.preventDefault();
console.log(this.props.match.params.id)
await axios.post("/api/edit/editDebtBalance", {
balance: this.state.add,
}, {
params: {
id: this.props.match.params.id
}
})
this.props.history.push('/dashboard');
}
render() {
const calculateUpdatedBalance = () => {
return parseInt(this.state.balance) + parseInt(this.state.add)
}
return (
<section className="edit-balance-section">
<div className="edit-balance-container">
<DashboardReturn />
<div className="edit-balance-content">
<p className="edit-balance-paragraph">How much would you like to add to your balance?</p>
<div className="edit-balance-balance-container">
<p className="edit-balance-balance-paragraph">Current Balance: </p>
<span className="edit-balance-original">-£{this.state.balance}</span>
</div>
<div className="edit-balance-balance-container">
<p className="edit-balance-balance-paragraph">Updated balance: </p>
<span className="edit-balance-updated">-£{calculateUpdatedBalance()}</span>
</div>
<form method="POST" onSubmit={this.onSubmit}>
<input className="edit-balance-input" type="number" name="add" value={this.state.add} onChange={this.onChange} step="1" />
<button className="edit-balance-button" type="submit">Save</button>
</form>
</div>
</div>
</section>
)
}
}
Make calculateUpdatedBalance() a member function so you can call it from within onSubmit() as well as from within render()
Would you be able to provide a code example? I’ve never written a member function before. :-)
In fact you've written 4 in this very component: componentDidMount(), onChange(), onSubmit(), and render().
Thank you - I’ve got it. I moved the function above render and then called it in submit as you suggested. Appreciate the help!
If you make calculateUpdatedBalance() a member method of the Add component, then you can call it from both render() and onSubmit():
calculateUpdatedBalance() {
return parseInt(this.state.balance) + parseInt(this.state.add)
}
onSubmit = async (e) => {
...
await axios.post("/api/edit/editDebtBalance", {
balance: this.calculateUpdatedBalance(),
...
};
render() {
return (
...
<span className="edit-balance-updated">-£{this.calculateUpdatedBalance()}</span>
...
}
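The pattern in the accepted answer can be sketched outside React as plain JavaScript (the class and method names below are illustrative, not part of the original component): a single member method computes the derived value, so the display path and the submit path are guaranteed to agree.

```javascript
// Minimal framework-free sketch: one member method is the single source
// of truth for the derived balance, called by both "render" and "submit".
class BalanceEditor {
  constructor(balance) {
    this.state = { balance, add: 0 };
  }
  onChange(value) {
    this.state.add = value;
  }
  // Single place where the derived value is computed.
  calculateUpdatedBalance() {
    return parseInt(this.state.balance, 10) + parseInt(this.state.add, 10);
  }
  // What an onSubmit handler would send to the server.
  buildPostBody() {
    return { balance: this.calculateUpdatedBalance() };
  }
}

const editor = new BalanceEditor('100');
editor.onChange('25');
console.log(editor.calculateUpdatedBalance()); // 125
console.log(editor.buildPostBody());           // { balance: 125 }
```

Because both call sites go through the same method, there is no need to mirror the computed figure into state at all, which is what caused the "too many state updates" error in the first place.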
How many I2C slaves can an I2C master support?
Is there a maximum number of I2C slaves that an I2C master can drive? What are the physical limiting factors (like current drive, capacitance or something like that)?
The software limiting factor is the size of the address used for the slaves: 7-bit or 10-bit, which support 127 and 1023 devices, respectively. Physically, there are two restrictions. First, the physical size of the bus because the bus is only meant for short runs (the inter IC part). If the bus is too large there are capacitive loading and propagation delay effects that need to be dealt with. Second, some devices can't support the full range of I2C addresses. As examples, the MPU6050 gyroscope only supports two addresses, and some devices reserve specific addresses for special purposes.
You have an off-by-one error. 7 bit addressing supports 128 addresses (0 to 127). 10 bit addressing support 1024 addresses (0 to 1023).
This is not an off-by-one error. It is an "off by 6" error, because there are 7 reserved addresses, not just one.
The addressing scheme is dictated by the devices on the bus, master or slave. Some devices have preset address ranges and reserved addresses. Other devices, many micro controllers for example, have no reserved addresses and can use any address in a given bit scheme.
These numbers are somewhat correct, however a caveat is needed. There are a few reserved addresses such as 1111 XXX and 0000 XXX. This means 7-bit = 2^7 - 16 =112 usable addresses, 10-bit is the full 2^10 1024. 8-bit isn't (shouldn't be a thing) it typically is including the R/W bit on 7-bit. https://www.nxp.com/docs/en/user-guide/UM10204.pdf
@busfault I acknowledge that 8bit addresses are not effectively a thing. I have removed them from my answer. Most microcontrollers can choose to ignore some or all of the reserved addresses and use the full address range. I note in my answer that this is device specific.
The maximum number of nodes is limited by the address space, and also by the total bus capacitance of 400 pF, which restricts practical communication distances to a few meters.
Read more at I²C
Addressing limits the number of devices - some can use 10-bit addressing (fairly rarely used), which limits the number of addresses to 1024. There are a handful of 'reserved' addresses.
I2C (as opposed to "two wire bus" or whatever others want to call similar buses), should follow the NXP (née Philips) standard, UM10204 I2C-bus specification and user manual. That should be your primary reference document, not the various interpretations and subsets that exist elsewhere.
The maximum number of devices will be influenced by the drive capability of the weakest output (which in turn determines the minimum pullup resistor), the wiring and input capacitance, and the operating mode/frequency. See section 7.2 Operating above the maximum allowable bus capacitance if the maximum capacitance must be exceeded:
An i2c bus is limited mainly by the capacitance of the bus (and thus speed), and accessible device addresses. And physical board space.
But there is no real upper limit when you factor in bus buffers, extenders, repeaters, hubs, multiplexers, and switches (or any other name for a device that can switch between multiple busses). These add some i2c overhead, as many can be accessible via the same i2c bus. The PCA9548A, for example, is an 8-channel i2c bus switch.
This single chip alone can theoretically multiply the number of i2c slaves otherwise available by eight (127 * 8). And the PCA9548A can be configured for up to 8 addresses on a single bus, so 8 * 8 * 127 devices (math may be off). And that's just with this one device.
Frankly, there is no theoretical limit if you adjust for capacitance.
I2C specifies 2 address lengths, 7 and 10 bits, which gives a theoretical maximum of 128 and 1024 distinct addresses, respectively.
However, there are a few reserved addresses, such as 0x00 (general call). This further limits the address space.
If you are building a system where you have direct control over the I2C devices, you can use the reserved addresses for your own use, but the system will no longer comply with the I2C standard.
In addition to the addressing, there are the physical bus limitations. Each device on the bus needs to be able to pull the bus low in a certain time span (depending on the bus speed). If the bus has lots of capacitance, devices may not be able to pull SDA low fast enough, and the pull-ups may not bring SDA back up fast enough.
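For a rough sense of scale, the pull-up sizing rule from the NXP spec (UM10204), Rp(max) = t_r / (0.8473 * Cb), can be evaluated at the standard-mode limits:

```python
# Maximum pull-up resistance at the bus limits
# (formula from NXP UM10204: Rp(max) = t_r / (0.8473 * Cb)).
t_r = 1000e-9   # maximum allowed rise time in standard mode (seconds)
Cb = 400e-12    # maximum allowed bus capacitance (farads)

Rp_max = t_r / (0.8473 * Cb)
print(f"max pull-up at 400 pF: {Rp_max:.0f} ohms")  # roughly 2.95 kOhm
```

The more devices and wiring you hang on the bus, the higher Cb gets and the smaller the pull-up must be, until the weakest driver can no longer sink the required current.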
Now, the hardware problems can be overcome with a little bit of driver hardware. I'm working on a project right now that uses I2C to communicate with devices over several tens of meters. The main bus uses 24 V, and each board has a driver that steps it down to 3.3 V.
In a nutshell, the physical limitations of I2C can be overcome. The addressing can be overcome, but only if you have direct control over the devices.
It has been almost three years since you were working on very long i2c buses. Did they work okay?
@wallyk I left that company shortly after posting that answer. I will say that given the right hardware, you can make I2C communicate over longer distances. However, there are other communication protocols that are designed for long distance and would probably be a better choice than I2C.
The primary limitation on the number of slaves a master can drive is generally going to come from electrical factors like bus capacitance, leakage, drive strength, etc. If one could construct slaves with zero parasitic capacitance and zero leakage, and if one could connect them with zero-capacitance board traces, then bus capacitance wouldn't be a factor, but in practice neither assumption is going to hold.
Addressing of devices which "know" about each other, on the other hand, isn't really a problem. It would be trivial to design a peripheral which would allow billions of chips to be connected using one read and one write address. Simply specify that every device must have a unique four-byte ID, and is required to listen to the write address all the time, but must drop out of every transaction whose first four transmitted data bytes don't match their ID. Further specify that devices may only respond to the read address if the last write transaction they heard matched their address.
If one wanted to add the ability to have the master determine the IDs of all connected slaves, one could reserve some special ID ranges for such purposes. For example, one could say that if the first ID byte is FF, then the next four bytes will be a mask and the four after that an ID; a device should stay connected (and ack the last ID byte) if the portion of its ID specified by the mask matches that given in the command. This would allow a master to identify at least one device using 64 transactions, and additional devices using 62 or fewer transactions each. Perhaps not the fastest possible means of device identification, but not bad given a search space of billions of device IDs.
Short answer: It depends
If you have (common) devices with 7-bit addresses: up to 112 devices (128 addresses minus the reserved addresses; 0x00-0x07 and 0x78-0x7F are reserved) (certain limitations apply)
If you have (less common) devices that support 10 bit addressing up to 1024 devices (you can mix 7bit and 10bit devices and reach up to 1136 devices that way)
Now to the limitations:
Most simple devices can only be configured to two to eight different addresses. You can overcome this by custom-ordering devices with different base addresses (but this normally means that you have to order a minimum amount of devices)
There are also hardware limitations (mainly bus capacitance) but this can be solved with special i2c drivers.
If you want to connect many devices over bigger distances I'd suggest to use a fieldbus anyway! I2C is intended for communication inside a device (like a TV set).
I'm using I2C myself with a Raspberry Pi, with external cables of up to 50 cm (even with T-sections, which you should never have in a bus system). It works surprisingly well.
The number of devices connected to the bus is only limited by the total allowed bus capacitance of 400 pF. Because most ICs with an I²C interface use low-power, high-impedance CMOS technology, many ICs can be connected to the I²C bus before that maximum capacitance is reached.
With added multiplexer chips (like TCA9544A) or buffers (like PCA9515B) you can overcome all limits - both bus capacitance and addressing.
You can place 3 devices with identical addresses behind a multiplexer and select only one of them, communicate with it and later select another. Of course software gets more complex.
If you have long wiring you can place a buffer in the middle and overcome the capacitance limit.
No, you will still be limited: 4 channels per multiplexer, per channel you have 4 subchannels, then you have subsubchannels, subsubsubchannels and more. You have 3 address lines per channel switcher: that is 4^(2^3) = 65536 channels. Per channel you have 2^8 - 7 - 8 + 2^10 = 1265 devices / channels (when you also use buffers) (the -7 is for reserved addresses and the -8 for the multiplexer addresses). 1265 * 65536 = 82903040 maximum devices.
Edit: Sorry, I made an error: it is 2^7 - 7 - 8 + 2^10 = 1137 devices / channel => 1137 * 65536 = 74514432 devices. But more may be possible when you use IO expanders to turn some buffers on and off (which is far from practical use, but a theoretical possibility).
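The corrected arithmetic in the comment above is easy to check directly:

```python
# Recomputing the theoretical device count from the comment above.
channels = 4 ** (2 ** 3)                      # claimed channel count: 4^(2^3)
devices_per_channel = 2**7 - 7 - 8 + 2**10    # 7-bit minus reserved/mux, plus 10-bit
total = channels * devices_per_channel

print(channels)             # 65536
print(devices_per_channel)  # 1137
print(total)                # 74514432
```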
How to show Autocomplete for text fields in infopath 2013?
I have an InfoPath form in which I want to implement an autocomplete text box: when the user enters "A", it should show all the related values. This form is only opened in InfoPath Filler; how can I achieve this?
In InfoPath Designer 2013 > open your current template >
right-click on your Textbox >
select TextBox Properties > at the Display tab >
check Enable AutoComplete
Also, check How to Achieve this via Jquery at Using SPServices with jQueryUI’s Autocomplete Function on InfoPath Forms in SharePoint
How do you get a new player engaged in an old campaign?
A group of three players had been going through my campaign for a year and a half, when a fourth player was added around level 15.
The new player entered in medias res: there was a ton of backstory, more than a hundred NPCs, and the rest of the group was running from one important quest to the next.
Unfortunately, I made a huge mistake in the way this character entered the campaign: he was scooped up from the Nine Hells and brought to the group's home world to assist them in a battle that won't take place for at least another 7 levels.
The thing is, neither the player nor his character has any stake in this world. They don't know any people/NPCs there, they don't know the history or the intricacies of the quests and their relationship with the rest of the party is strained at best. Because the character is unaligned, he's not even that interested in helping the party defeat the big bad.
At the moment, while the rest of the players talk to NPCs (both old and new), remember history, important clues and back story and discuss what quests to focus on, the new guy sits quietly in the corner and waits because he has no reason to do any of those things.
What are some good ideas to get this player AND his character engaged in the story and/or just help him have fun?
Note: The player is not at all disinterested - he seems to find the story interesting and wants to play with us. However, I understand why he would be unmotivated given the (in retrospect, bad and irreversible) way he was added.
I was in a similar situation a couple of years ago, in the role of the newly-come player. The party, all around 20th level and with a couple of years of familiarity with the world, had found my new character embedded in magical rock and awakened me deep inside a Drow city. Now, the DM had to work on integrating me with the party and with the world.
There were several things we did to help:
Link to other PCs. Apparently one of the PCs had a side-quest a couple of months earlier where he travelled a few thousand years to the past. The DM used that to link my character to him, with some out-of-character agreement between the players to add our acquaintance to our backstories. It made sure my character wasn't too out of touch, since at least I had one person I knew in this alien world.
One of the first things that happened after I joined the group was that I came into possession of a certain item, a little gem-covered pyramid I swiped "as my retirement fund". This ended up being an artifact with a lot of importance for the rest of the plot, so, again, a lot of NPCs wanted things to do with me.
The Big Bad, or at least some intermediate Big Bad, turned out to be linked to my character's past. This really got me more involved with things.
In addition to these tactics, you should also talk to the player. While it's true that a lot of the job of integrating the character is your job, as GM, it is still also the player's job to fit into the party. Roleplaying is a social activity, and if the player isn't here to socialize, he's doing something wrong. I don't believe in blindly adhering to character concepts at the expense of party dynamics.
Some ideas.
Think outside the box. Don't be blindly adherent to your grand vision of the plot. You haven't told us anything about his motivation or why he got "scooped," but change/add quests so that he/his faction/his whatever (family/race/etc) does have some kind of stake in them instead of expecting him to get on the railroad train heading to Cleveland. Also - if you screwed up with seven levels to go, and can't figure out how to incorporate the new PC fully for some reason, make a change. Have him bail on that character and bring in another; have him take over an existing NPC he likes, etc. Every day you perpetuate the mistake, you're making it again afresh, it is not irreversible in practice. Have Hellboy betray the party and the new player play a paladin who saves their bacon, for instance. He does have to change characters which is sad but can't be too invested in the character yet, apparently... See next point.
If the PCs are interacting with new NPCs, why isn't he? He lives there now... That sounds like a player needs to stop sitting on his hands problem. Whatever these NPCs did to make them important to the rest of the PCs, why is that not starting to take root with the new guy? Why does he not care about any of the quests, even from the point of view of I need to protect these guys or I will get phat magical lewt? This is clearly a player problem, he's not really identifying with whatever Hellboy he is and working proactively to achieve his goals. It's OK if those goals are irrelevant to what the party's doing, in fact it's more interesting that way. He can sweat them to come with him on a quest since he's doing a lot of vice versa. Talk with the player and just say "hey man you need to get more proactive about pursuing your character's goals."
Talk to the other players. Are they not giving a damn? When we get a new PC in the group (just like when a new person joins a work group or whatever IRL) the existing ones can make a big effort to include them, introduce them, say "what do you think..." It sounds like you mainly care about your plot and all your players just care about their own character and no one's doing much to facilitate each others' fun in an interactive way.
No personal interest in the party and no knowledge of the world or the characters? Stop right there! That sounds like a beautiful place to inject doubt into the party. Consider how you would react in the same situation. Taken from here, Earth, and brought to a land of utterly different constructs and dynamics.
You would imprint on whomever first discovered you. You'd travel with them, learn from them, and after following for some time - you would form your own opinions of things.
This is the perfect chance for the players to reaffirm their own assumptions about what side they are on. By this level, PCs are forces of nature who can destroy whole towns, level entire civilizations in some cases, and often they will do exactly that in the course of a campaign.
As an outsider, this might be totally shocking. Perhaps the race of beings the PCs assault are pure evil, but those are still women and children being slaughtered. There is potential here to have a character that plays wonderful counterpoint to these atrocious actions, or lesser ones, and forces the party to explain why they are doing what they are doing.
By expressing their needs to a 3rd party, they reaffirm them. They prove to themselves they are on the right track. And perhaps, they can convince the new member of the need. Or perhaps not. It could be even more fun for this new person to make the party realize how terrible they've become and lead them on a path of atonement, totally throwing the plot for a loop.
He was scooped up out of the Nine Hells by...? A god who wants to assist the party, presumably. Also presumably, a god with some interest in this character. There's scope for inquisitive minds there; perhaps when they go to the temple to inquire, the high priest provides an item that has been waiting a hundred years for the Chosen One who will need it. Change the attitude from 'What have I got to do with these people?' to 'A god thinks I am closely involved with these people; let me find out how.'
I would suggest something a little similar to @CatLord - have the new player write some backstory material about his character. Work with the new guy, providing plot points and campaign setting information for his backstory. Level 15 affords a lot of ground for backstory information, and having an origin in the Nine Hells (whether he was native to there or not) begs for a good story explaining the how and why he came from there.
There are a few purposes for developing this character's backstory.
As any character's background information does, it provides depth to the character. The deeper the character, the more invested and hopefully entertained and interested the player is about his character. This can manifest directly during gameplay, eliminating a lot of "why should my character care about this situation or NPC?" Well, there may be all sorts of reasons related to the character's history as to why the character should care about a present situation.
The backstory provides the GM an opportunity to weave in plot-related points that can be leveraged in the present time as a hook to get the character involved. These hooks may not be the same straight-forward motivators that the other characters in the party have, but there may be side-angles that afford the new character an opportunity to align his interests with the party's interests.
Background information provides a cohesive foundation for the character's behavior, and that may be interesting in itself to the other characters. The other characters may start having more interest in the new guy if the new guy starts making a point of behaving consistently in a certain way. "Why are you always so standoffish with the new people we meet?" "Well, you see this scar here? That came from when I approached this stranger at a bar once, and when he pulled a knife on me for just talking with him, that was only the start of my troubles..."
I have faced similar situation several times in our 4 years long GURPS fantasy campaign.
Half a year after the start, a guy playing a spy joined. His goal was to find his lost brother, and he quickly found out the brother had been kidnapped by a common enemy. The guy left the party later, but he fit in well, especially when he could use his sneaky abilities.
More than two years after the start, a girl joined us with a character: a priestess of a death god. We made a detailed backstory, a common goal, and the voice of her god suggesting she join the party. Since then, I have made sure to include some demons, undead or black magic (the character is one of the best exorcists in the area) in every session she plays. She had no links to PCs (except for a common goal with one of them) or important NPCs, but the desire to rebuild her god's cult and gaining powerful allies in the first few sessions helped her to integrate into the party quickly.
Few months after her, a guy joined as a scout. He left after two sessions - I think he expected something more similar to dungeoncrawling than to our campaign featuring espionage and intrigue (and war too, but he joined during a winter truce in the campaign).
A year after the priestess, a guy joined as a noble, a brother of one of the party's most important allies, who just returned from a long-term diplomatic mission. He quickly became one of the leading characters in the party.
Half a year after that (almost three years after the start), a mage joined the group. The character's biggest goals are training and research, but adventuring with the party is the right challenge for both the player and the character. Getting accustomed to the world and the rules wasn't easy for him, but he has been one of the most active players from the very start.
Half a year ago, a new guy playing a scout/ranger joined. The character simply inherited the noble's agenda (he was linked to a PC of one of the new players), but he has no goals of his own. He is not a very active player and I'm afraid he won't stay for long.
Now I can derive a few rules (they are somewhat similar to what mwyzplk suggested):
Motivation is crucial. The character needs an agenda which is interesting for the player (optimally the player's own idea) and is either common with, or easily joinable to, the agenda of the party or at least one other PC. If the player doesn't recognize his character's agenda as his own, or if it isn't weaved into the tapestry of the party's goals (after more than a year, the party's goals must be complex), the player will likely leave the party, or at least they won't be interested in the game, like your new player. I guess this is where you made the mistake. Being different (as DampeS8N suggested) could help, but it's still a passive goal, so it's not as easy to motivate the player as with an active character-driven goal.
Spotlight is number two. Give the character scenes where they are the best expert in the party - it motivates the player and earns the character the respect of the old party members. The worst situation is when there's already a PC of the same class and higher level, but these spotlight abilities don't have to be written on the character sheet - your devil from Hell can have information that others can't easily learn without him. The artifact mentioned in lisardggy's answer is another good example.
Also, help the player overcome the newcomer shock. There are many details in the game world to learn and many internal jokes in the party. Tell the player as much as possible before they join and keep explaining during the game. A crucial point here is whether the old players are helpful - whether they try to explain the party's internal jokes and other difficult things to the newbie - as the GM can't do all the work by themselves. Our group is excellent in this aspect, but still one player left because he didn't really want to adopt the party's tactics and other customs. If your old players don't try to invite the newbie and the new player is not extroverted enough, this might be the core of the problem.
What to do in your situation? Tell your players to help the newcomer, and sit down with the new player to find some motive he would like. Ask him what he expects and what he would like to do. If it's not reconcilable with his character, then replace the character, but this is unlikely. Then think about how to give him what he wants, and ideally how to join his goals with the party's.
If the player just wants to honor his character's backstory (i.e. he is a "myguyist" of some sort), just state that the archvillain is an important ally (or even a master) of a rival faction in Hells, and let the party discover a plot arranged by another hellish faction. This will teach him to pay attention to mere mortals! Or bring some old friend from Hells, who would just visit him and ask what he found about this plane of existence - tell him, that ignorance is not good, even for a Hellboy like him.
Also, NPCs should be interested in such a character, and the interest doesn't have to be in the form of assassination or exorcism attempts. Some should be interested in him in a positive way - another encouragement to start getting involved in relationships with NPCs.
My recommendation is to come up with a cheat sheet for some of the backstory. I'm assuming he hasn't been in the Nine Hells since his creation/birth, so there must be some things he knows about the mortal world to some extent. Either that, or have the party spend some time sharing a few notes they think will be pertinent to the most imminent threat(s).
How can I split a url in C# into a Key Value pair?
I have a Url:
https://www.facebook.com/video.php?v=10154500552625464
that I need to split into a key value pair with:
Value: https://www.facebook.com/video.php?v=10154500552625464
Key: 10154500552625464
I need to place them in a Dictionary:
static Dictionary<string, string> GetVideoLinks(string Data)
{
    var ret = new Dictionary<string, string>();
    // The named group must be wrapped in parentheses, (?<id>...), and the dot in "video.php" escaped
    Regex pattern = new Regex(@"https:\/\/www\.facebook\.com\/video\.php\?v=(?<id>[0-9]{17})");
    foreach (Match m in pattern.Matches(Data))
        ret[m.Groups["id"].Value] = m.Value;
    return ret;
}
Write code that does this.
This question appears to be off-topic because it shows no attempt to solve the problem.
Downvoted for show me the codes
There is a framework class for that: HttpUtility.ParseQueryString
Please read: Are answers that just contain links elsewhere really “good answers”?
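As an aside, the capture-group fix is the same in any regex flavor (the question's pattern was missing the parentheses around the named group). A quick Python sketch of the intended key/value extraction:

```python
import re

url = "https://www.facebook.com/video.php?v=10154500552625464"

# A named group needs parentheses around it: (?P<id>...) in Python,
# (?<id>...) in .NET.
pattern = re.compile(r"https://www\.facebook\.com/video\.php\?v=(?P<id>[0-9]{17})")

# Map id -> full URL, i.e. the key/value pair the question asks for.
links = {m.group("id"): m.group(0) for m in pattern.finditer(url)}
print(links)
# {'10154500552625464': 'https://www.facebook.com/video.php?v=10154500552625464'}
```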
Spring-HATEOAS without extending ResourceSupport
I'm building a REST API. I have a domain model composed of beans than can't extend ResourceSupport. Which is the best way to expose them as resources using Spring-HATEOAS?
In case that's not possible, which is the best way to include links on the JSON generated by the beans?
You can use the Resource wrapper:
MyModel model = ...
Resource<MyModel> resource = new Resource<>(model);
resource.add(linkTo(...
Thank you! Do you know any tutorial where I can learn more about this?
@Andres There are some on the Spring HATEOAS homepage. The Spring Developer Channel on YouTube also has some excellent videos on building restful services using Spring HATEOAS.
You should separate Resources from your Domain.
Even if they might appear to be similar, Domain model and Resources are profoundly different.
Domain objects are your internal representation. The implementation have constraints depending on how your business logic/persistence is implemented and other design decision. For example they may be JPA entities or may be immutable.
Resources are the representation to the external world.
They might be one-to-one with the Domain or totally different. It is not infrequent to have multiple Resource representations for a single Domain entity.
But first of all, the Resource implementation is meant to be sent/received on the wire, so it has constraints on being marshalled/unmarshalled.
So your application should have separate objects for the domain and the resources.
With Spring HATEOAS the mapping is done using Resource assemblers.
You may have a look at this sample application: https://github.com/opencredo/spring-hateoas-sample and the related post: Implementing HAL hypermedia REST API using Spring HATEOAS
While this doesn't answer the question, this is useful to know!
tumblr api. Got OAuth. How to get v2/user/info? (Php - curl)
I am developing an application where I must connect to ALL major social networks (Facebook, Tumblr, Twitter, Y!Answers, MSN, Youtube to start with, more later).
I have downloaded and used one type of class for each of these, but enough is enough!
I am re-writing a class that will (should) cope with all APIs.
My OAuth is now working with all of them, but, on tumblr, although I have api_key, oauth_token, oauth_token_secret, when trying to curl() to get user info (essentially user/screen name), the reply is always 401, unauthorized.
I probably set something wrong in the curl options, but what?
public function curl_it($method, $url, $params=array())
{
switch ($method)
{
case 'POST':
break;
default:
// GET, DELETE request so convert the parameters to a querystring
if ( ! empty($params))
{
foreach ($this->request_params as $k => $v)
{
// Multipart params haven't been encoded yet.
// Not sure why you would do a multipart GET but anyway, here's the support for it
if ($this->config['multipart'])
$params[] = $this->safe_encode($k) . '=' . $this->safe_encode($v);
else
$params[] = $k . '=' . $v;
}
$qs = implode('&', $params);
$this->url = strlen($qs) > 0 ? $this->url . '?' . $qs : $this->url;
$this->request_params = array();
}
break;
}
// configure curl
$c = curl_init();
curl_setopt_array($c, array(
CURLOPT_USERAGENT => $this->config['user_agent'],
CURLOPT_CONNECTTIMEOUT => $this->config['curl_connecttimeout'],
CURLOPT_TIMEOUT => $this->config['curl_timeout'],
CURLOPT_RETURNTRANSFER => TRUE,
CURLOPT_SSL_VERIFYPEER => $this->config['curl_ssl_verifypeer'],
CURLOPT_FOLLOWLOCATION => $this->config['curl_followlocation'],
CURLOPT_PROXY => $this->config['curl_proxy'],
CURLOPT_ENCODING => $this->config['curl_encoding'],
CURLOPT_URL => $url,
CURLOPT_HEADERFUNCTION => array($this, 'curlHeader'),
CURLOPT_HEADER => FALSE,
CURLINFO_HEADER_OUT => true
)
);
if ($this->config['curl_proxyuserpwd'] !== false)
curl_setopt($c, CURLOPT_PROXYUSERPWD, $this->config['curl_proxyuserpwd']);
if ($this->config['is_streaming'])
{
// process the body
$this->response['content-length'] = 0;
curl_setopt($c, CURLOPT_TIMEOUT, 0);
curl_setopt($c, CURLOPT_WRITEFUNCTION, array($this, 'curlWrite'));
}
switch ($method)
{
case 'GET':
curl_setopt($c, CURLOPT_HTTPGET, TRUE);
break;
case 'POST':
curl_setopt($c, CURLOPT_POST, TRUE);
break;
default:
curl_setopt($c, CURLOPT_CUSTOMREQUEST, $method);
}
if ( ! empty($params) )
{
// if not doing multipart we need to implode the parameters
if ( ! $this->config['multipart'] )
{
foreach ($params as $k => $v)
{
$ps[] = "{$k}={$v}";
}
$params = implode('&', $ps);
}
curl_setopt($c, CURLOPT_POSTFIELDS, $params);
}
else
{
// CURL will set length to -1 when there is no data, which breaks Twitter
$this->headers['Content-Type'] = '';
$this->headers['Content-Length'] = '';
}
// CURL defaults to setting this to Expect: 100-Continue which Twitter rejects
$this->headers['Expect'] = '';
if ( ! empty($this->headers))
{
foreach ($this->headers as $k => $v)
{
$headers[] = trim($k . ': ' . $v);
}
curl_setopt($c, CURLOPT_HTTPHEADER, $headers);
}
if ((isset($this->config['prevent_request'])) && ($this->config['prevent_request'] == true))
return;
// do it!
$response = curl_exec($c);
$code = curl_getinfo($c, CURLINFO_HTTP_CODE);
$info = curl_getinfo($c);
curl_close($c);
// store the response
$this->response['code'] = $code;
$this->response['response'] = $response;
$this->response['info'] = $info;
return $code;
}
Can anybody help?
(Useless to go to tumblr discussion group: there is never anybody there...)
JR
Are you sure Tumblr even supports username? As I remember, the Tumblr API is all about user blogs: you can get a list of all the user's blogs, then read/post to any of them. I don't think they have a notion of username, etc.
The doc refers to the user info, at api.tumblr.com/v2/user/info. The screen name is part of the return message, but you must have oauth credentials (which I have). It also must be a POST. I am not sure WHERE all the parameters have to be! (curl header?)
I wrote client for Tumblr API in php but I use php's oauth extension for that, not working directly with curl. I don't remember anything about username, just user's blogs. Feel free to poke around source on github, project name is Lampcms
It looks like my client is written for Tumblr's version 1 API; I don't know anything about their v2, sorry
Empty string is a file? ( if [ ! -f "" ] )
The script is called isFile.sh and looks like this:
#!/bin/sh
echo $1
echo $2
if [ ! -f $1 ]; then
echo "$1 (arg1) is not a file"
fi
if [ ! -f $2 ]; then
echo "$2 (arg2) is not a file"
fi
First I created a file by doing touch file.exist.
And I ran bash isFile.sh file.exist file.notexist
The output was:
file.exist
file.notexist
file.notexist (arg2) is not a file
Then I ran bash isFile.sh "" file.notexist
The output was:
(# empty line)
file.notexist
file.notexist (arg2) is not a file
Expected output is:
(# empty line)
file.notexist
(arg1) is not a file
file.notexist (arg2) is not a file
Can somebody explain why?
Are you sure about echo$1? I would think it is echo $1 (better cut and paste). And welcome!
@ Volker Siegel, Sure it's echo $1, I edited my question, thanks.
The issue is that [ ! -f $1 ] becomes [ ! -f ] after expansion (and not [ ! -f "" ] as you thought!), so instead of checking whether a given file exists, [ checks whether the -f string is empty or not. It's not empty, but thanks to ! the final exit code is 1, so the echo command is not executed.
That's why you need to quote your variables in POSIX shells.
Related questions:
Why does my shell script choke on whitespace or other special characters?
Security implications of forgetting to quote a variable in bash/POSIX shells
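A minimal demonstration of the two cases (POSIX sh; safe to paste into an interactive shell):

```shell
# Unquoted and empty, $1 vanishes: [ ! -f ] is a one-argument test on "-f",
# which is a non-empty string (true), so ! makes the whole test false.
[ ! -f ] && echo "branch taken" || echo "branch skipped"    # prints "branch skipped"

# Quoted, the empty operand survives: -f "" tests a file named "", which
# does not exist, so the negated test is true.
[ ! -f "" ] && echo "branch taken" || echo "branch skipped" # prints "branch taken"
```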
Thanks for mentioning that I should add quotes to variables, but I still can't get the output right. I still get the same output after I double-quoted my variable in the if condition. Changing $1 to " " or ' ' doesn't work either.
@KitisinKyo, do you mean that bash -c '[ -f "" ] && echo yes' outputs yes for you? What system is that?
@Stéphane Chazelas, Nope, bash -c '[ -f "" ] && echo yes' had no output. I'm using CentOS 7
@KitisinKyo, yet you're saying in the comment above that if [ ! -f "" ]; then echo ...; fi outputs nothing. Try running the script with bash -x to see what happens.
@Stéphane Chazelas, I edited a wrong script, the code worked, I was dumb.
@KitisinKyo If this answer solved the problem please consider accepting it. Or add your own answer if the actual solution differs from this one. That way other users with similar problems may benefit from it.
Unable to change my Date field using JavaScript inside my Edit form
I am working on a SharePoint 2013 on-premises team site. I want to set a column of type Date/Time to today's date, so I added a script inside the Edit form and then tried to set the date value using SPUtility, as follows:
var today = new Date();
var dd = today.getDate();
var mm = today.getMonth()+1; //January is 0!
var yyyy = today.getFullYear();
if(dd<10) {
dd = '0'+dd
}
if(mm<10) {
mm = '0'+mm
}
today = dd + '/' + mm + '/' + yyyy;
alert(today);
SPUtility.GetSPFieldByInternalName('OrderDateCustomerApproved_').SetValue(today);
but I got this error:
Unable to set date, invalid arguments (requires year, month, and day as integers).
throw "Unable to set date, invalid arguments (requires year, month, and day as integers).";
so I tried to do it with a pure JavaScript approach as follows:
$('select[id^="OrderDateCustomerApproved_"]').val(today);
where this did not raise any error, but the field was not populated with today's date!
So can anyone advise on this, please?
My data column is:
A demo code based on my data column for your reference:
<script src="http://code.jquery.com/jquery-1.11.3.min.js" type="text/javascript"></script>
<script type="text/javascript">
$(function(){
var today = new Date();
var dd = today.getDate();
var mm = today.getMonth() + 1;
var yyyy = today.getFullYear(); // getYear() is deprecated and returns the year minus 1900
if(dd<10) {
dd = '0'+dd
}
if(mm<10) {
mm = '0'+mm
}
today = dd + '/' + mm + '/' + yyyy;
$("input[id^='Date_x0020_Customer_x0020_Approv_']").val(today);
})
</script>
Note: you need to change id in the above code to yours.
I use the following piece of code to set value to Date-time field with SPUtility and jQuery
var currentDate = new Date();
$(SPUtility.GetSPFieldByInternalName(internalFieldName).Controls).find('input').first().val((currentDate.getMonth() + 1) + '/' + currentDate.getDate() + '/' + currentDate.getFullYear());
My site collection uses the MM/DD/YYYY format for dates, so I'm setting it in that format. You will have to change the format if needed.
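For the zero-padding itself, a small helper keeps the formatting in one place (plain JavaScript; the dd/MM/yyyy order here is an assumption, so match your site's regional settings):

```javascript
// Zero-pad day and month into dd/MM/yyyy (adjust the order to your locale)
function formatDate(d) {
  const pad = n => String(n).padStart(2, '0');
  return `${pad(d.getDate())}/${pad(d.getMonth() + 1)}/${d.getFullYear()}`;
}
console.log(formatDate(new Date(2016, 0, 5))); // → 05/01/2016
```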
I was working on something similar last week. Maybe this will point you in the right direction.
<script>
'use strict';
(function() {
var timeSinceFieldViewCtx = {};
timeSinceFieldViewCtx.Templates = {};
timeSinceFieldViewCtx.Templates.Fields = {
"Time": { /* Change to your Date Column - Same as below */
"View": timeSinceFieldViewTemplate
}
};
SPClientTemplates.TemplateManager.RegisterTemplateOverrides(timeSinceFieldViewCtx);
})();
function timeSinceFieldViewTemplate(ctx) {
var dateDiff = new Date() - new Date(ctx.CurrentItem.Time); /* Change to your Date Column */
var daysDiff = Math.floor(dateDiff / 1000 / 60 / 60 / 24);
return daysDiff;
}
window.addEventListener('DOMContentLoaded', function() {
var x = document.getElementsByClassName('ms-vb-lastCell');
for( var i =0; i < x.length; ++i ) {
console.log(x[i].innerText + " Type Of: " + typeof Number(x[i].innerText));
if (Number(x[i].innerText) > 30 ) {
x[i].style.color = 'green';
x[i].style.fontWeight='normal';
}
if (Number(x[i].innerText) > 60 ) {
x[i].style.color = 'green';
x[i].style.fontWeight= 'bold';
}
}
}, true);
</script>
Thanks for the code, I will try it now. But why didn't my code, which looks shorter than yours, work?
You're trying to achieve something different from what i did.
I think your code only applies to the "New" or "Edit" form. Do you want to set the date on a new form or in the list itself?
SharePoint adds a format function to date objects, so you can use that to do your formatting as well. And I don't use the ID of the field; I use the title because that makes things easier to read.
$("input[title='Date Customer Approved']").val(today.format('MM/dd/yyyy'));
Github API: How to get list of users who have starred a repo AND cloned it
I'd like to get a list of users who have starred my public repo and a number of fields related to them such as whether or not they have cloned it. Is this possible with the Github API?
The endpoint for stargazers (users who have starred a repo) is documented here in the documentation. You can follow up with additional requests on those users.
Checking whether people have cloned a repo is impossible (you don't even need to be logged in to clone a git / GH repo). You can, however, list the forks of a repo.
The endpoint for stargazers is:
https://api.github.com/repos/:owner/:repo/stargazers
For example, if you want to get the users who starred the repo https://github.com/suhailvs/django-schools/ you can use:
https://api.github.com/repos/suhailvs/django-schools/stargazers
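Put differently, the URL is just a template over owner and repo, so you can script it (the commented curl line assumes jq is installed; unauthenticated requests are rate-limited):

```shell
# Build the stargazers URL for any public repo (no auth required)
owner="suhailvs"
repo="django-schools"
url="https://api.github.com/repos/${owner}/${repo}/stargazers"
echo "$url"
# curl -s "$url" | jq '.[].login'   # would print the stargazers' usernames
```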
Github webhook services are fairly detailed and you can really get a lot of information from them but gleaning if someone cloned your repo is a bridge too far. As far as I'm aware, there's no way to track that (you can track new forks). I'd recommend using a webhook to track new repo watches to accomplish at least some of what you're looking for.
Axibase has a tool which leverages Github webhooks, and notifies you via email or instant message (through the messenger of your choice) when someone stars your repository.
The underlying process is here:
It's a quick setup; the whole process takes less than 10 minutes. Basically you need to create a plaintext file with your email / messenger credentials on your local machine, launch an ATSD sandbox from the terminal, and paste the webhook generated at runtime into the appropriate field on GitHub. The full process is here.
Disclaimer: I work for Axibase
ADFS - Correct way to massively provision relying party trusts for many similar SAML service provider
Let's say I have 200+ sites in the form of:
https://site1.example.com, https://site2.example.com
I have to deploy an identical SAML configuration for all of these sites. Ideally I would just have a single relying party trust set up in ADFS that would match all of these sites. Each of those sites will have a response endpoint of https://siteX.example.com/saml, this is also specified in the SAML request from each SP.
I was looking for a way to set up the relying party trust with a wildcard for what is accepted, but this does not seem possible, from what I can tell.
Now I am wondering if there is another solution, along the lines of a scripting, or cloning in mass. I also need to be able to grow this over time, preferably in an automated way.
With the PowerShell cmdlet Add-AdfsRelyingPartyTrust, I think it should not be difficult to create the relying party trusts for these 200+ sites automatically in a script. Here are the docs for your reference: https://technet.microsoft.com/en-us/itpro/powershell/windows/adfs/add-adfsrelyingpartytrust
Thanks for pointing me in that direction. It frustrates me that I can't figure out how to do this with one relying trust party. I found a single page on microsoft's site that says this is possible, but the article is incomplete and doesn't explain how to do it with SAML 2.0 https://social.technet.microsoft.com/wiki/contents/articles/2305.ad-fs-2-0-how-to-utilize-a-single-relying-party-trust-for-multiple-web-applications-that-share-the-same-identifier.aspx
Yes, I believe it is possible to do this with a single relying party trust. However, as the article shows, it needs some coding on your SP side, so that when a user types the site URL in the browser, they are redirected to the corresponding site endpoint and then to the IdP page when they enter their username.
Difference between CString::GetBufferSetLength and CString::GetBuffer
I wonder what exactly the difference is between GetBufferSetLength(some_size) and GetBuffer(some_size).
The documentation isn't very clear to me and to me it sounds pretty much both functions do more or less the same thing.
This is my use case:
CStringA foo;
char *p = foo.GetBufferSetLength(somelength);
// fill the buffer p with exactly somelength (non null) chars
FillCharBufferWithStuff(p, somelength);
foo.ReleaseBuffer(somelength); // or should I rather use
// ReleaseBufferSetLength(somelength) ?
// now foo contains the somelength characters provided by FillCharBufferWithStuff
If I use GetBuffer instead of GetBufferSetLength it does exactly the same job.
I'd be grateful if somebody could shed a light on this.
The documentation isn't very clear to me and to me it sounds pretty much both functions do more or less the same thing.
Yes, pretty much. The only observable difference is that GetBufferSetLength() updates the stored length (and zero-terminates the buffer) before handing out the pointer:
PXSTR GetBufferSetLength(int nLength)
{
PXSTR pszBuffer = GetBuffer( nLength );
SetLength( nLength );
return( pszBuffer );
}
For reference, here is the SetLength() implementation (with state validation code omitted):
void SetLength(int nLength)
{
GetData()->nDataLength = nLength;
m_pszData[nLength] = 0;
}
That seems like an odd thing to do. Either of the GetBuffer() members puts the class instance into a state where invariants may temporarily be violated. Keeping part of that internal state consistent doesn't seem useful, moreover, since clients can easily track that piece of information.
Functionally, GetBuffer() and GetBufferSetLength() are equivalent in producing a writable buffer of the requested size, and the choice ultimately comes down to a matter of preference. I find GetBufferSetLength() easier to comprehend, but GetBuffer() does less unnecessary work.
For what it's worth the CSimpleStringT implementation pretty consistently uses GetBuffer(), paired with ReleaseBufferSetLength().
The above was written with the documented contract in mind that promotes the following pattern:
Call GetBuffer()/GetBufferSetLength() to crack the insides open
Update the innards as appropriate
Wrap everything up with a call to ReleaseBuffer()/ReleaseBufferSetLength() re-establishing class invariants
This is easy to follow and easy to verify by inspection. That said, if you are desperate to save a function call and willing to sacrifice readability and give up following the documentation, there is a meaningful difference: GetBufferSetLength() allows you to omit the call to ReleaseBuffer()/ReleaseBufferSetLength() provided that you can guarantee to subsequently write exactly nLength characters to the buffer.
This is supported by the (current) implementation but certainly not documented and most certainly far more challenging to comprehend. I will not recommend doing this, but all things considered, this appears to be the motivation for the GetBufferSetLength() class member.
If you posted the function parameters verbatim, you would see the difference.
GetBufferSetLength(int nLength);
GetBuffer(int nMinBufferLength);
And this difference is given in the descriptions.
GetBufferSetLength
nLength
The exact size of the CSimpleStringT character buffer in characters.
GetBuffer
nMinBufferLength
The minimum number of characters that the character buffer can hold. This value does not include space for a null terminator.
GetBufferSetLength() truncates or grows the buffer length if necessary to exactly match the length specified in nLength.
GetBuffer() only grows the buffer length if necessary to match the minimum length specified in nMinBufferLength.
Thanks. So AFAIU for my use case GetBufferSetLength(somelength)/ReleaseBufferSetLength(somelength) is best.
Still the term nMinBufferLength troubles me. If I put nMinBufferLength = N, then I cannot modify the buffer beyond buffer[N -1] anyway....
In your case a possible truncation is done in GetBufferSetLength(), otherwise it will be done in ReleaseBufferSetLength().
Actually no truncation is possible in my case because the CString is empty right before the call of GetBufferSetLength.
While this answer attracted a few "Oh yeah, this sounds plausible!" points I feel compelled to note that it is factually incorrect. The buffer never shrinks and the talk about accommodating an exactly sized buffer isn't reflected in the code implementing the functionality.
@IInspectable The buffer never shrinks - this contradicts to the ReleaseBuffer() manual: Call this method to reallocate or free up the buffer of the string object..
If you find a contradiction between the documentation and the implementation, then the implementation takes precedence. Because the compiler doesn't care about the documentation. And the implementation for either of the ReleaseBuffer() members just calls SetLength(), which doesn't do anything to the buffer. You'll find the implementation of SetLength() in my answer.
How do I burn ubuntu iso image to DVD on Windows 7
I downloaded Ubuntu 12.04 LTS from this site, and I am trying to follow the instructions for burning the ISO image to DVD according to this page, but I don't see the image on the desktop as shown in the first picture on that page. What am I doing wrong? I am on Windows 7.
You probably downloaded it to a different folder. You should try searching for ubuntu-12.04.3-desktop in Start Menu search.
Normally, your download is in your Downloads-folder.
Path: C:\Users\yourUsername\Downloads
If the iso file isn't there, search for it in the search bar:
Click the Windows button on the lower left.
Type ubuntu in the search field.
You should now see the file.
Right-click on it and open its containing folder.
Then, you can follow these easy steps:
Insert a recordable CD, DVD, or Blu‑ray Disc into your disc burner.
Open Computer by clicking the Start button Picture of the Start button, and then clicking Computer.
In Windows Explorer, find the disc image file, and then double-click it.
If you have more than one disc burner, from the Disc burner list in Windows Disc Image Burner, click the burner that you want to use.
(Optional) If you want to verify that the disc image was burned correctly to the disc, select the Verify disc after burning check box.
If the integrity of disc image file is critical (for example, the disc image file contains a firmware update), you should select this
check box.
Click Burn to burn the disc.
Source
The disk image is probably in your Downloads folder. Many web browsers (like Firefox) will download to there by default.
How to filter the values from LazyPagingItems in kotlin
I am using Paging 3 to show a list of items with pagination. I want to add a search view to this page, so I want to filter the values from LazyPagingItems. How can I do this?
var values= viewModel.getvalues.collectAsLazyPagingItems()
LazyColumn(modifier = Modifier.background(Color.Gray)){
var searchText = state.value.text
if(searchText.isNotEmpty()){
/*I want to filter the value based on search text*/
}
items(
items =values,
){ value ->....
You would need to apply filter on your collection based on the searchText value.
filter works on a List, but it is not available on LazyPagingItems
Have you looked at the guide to transforming your Paging data and specifically how it does filtering at the Pager level (rather than at the UI layer, like you are trying to do, and which isn't supported)?
Sending Form without changing url page
Here is the deal:
I managed to get this sendmail.php working by creating 2 HTML pages redirecting to index.
But what I wanted is for an alert to appear on the HTML form page, when $sent is true, saying the message has been sent. But when I change
{echo "<script language=javascript>window.location = '/sent.html';</script>";}
to
{echo "<script language=javascript>window.alert = 'Message Sent';</script>";}
the page redirects to ...url...com/sendmail.php as a blank page.
It is live @ www.aroundgalaxy.pt/NEW
here is the form html
<form action="sendmail.php" method="post">
<div>
<div class="row half">
<div class="6u">
<input type="text" class="text" name="name" placeholder="Nome" />
</div>
<div class="6u">
<input type="text" class="text" name="email" placeholder="E-mail" />
</div>
</div>
<div class="row half">
<div class="12u">
<input type="text" class="text" name="subject" placeholder="Assunto" />
</div>
</div>
<div class="row half">
<div class="12u">
<textarea name="message" placeholder="Messagem"></textarea>
</div>
</div>
<div class="row">
<div class="12u">
<input type="submit" class="button" value="Enviar" />
</div>
</div>
</div>
</form>
and the sendmail.php
<?php
$to = <EMAIL_ADDRESS>;
$email = $_REQUEST['email'];
$name = $_REQUEST['name'] ;
//$site = $_REQUEST['site'] ;
$subject = "Message from: $name";
$message = $_REQUEST['message'] ;
$headers = <EMAIL_ADDRESS>;
$body = "From: $name \n\n Email: $email \n\n Message: $message";
$sent = mail($to, $subject, $body, $headers) ;
if($sent)
{echo "<script language=javascript>window.location = '/sent.html';</script>";}
else
{echo "<script language=javascript>window.location = '/notsent.html';</script>";}
?>
Combine the HTML & PHP into one file then POST the form to itself
@meda placing the php code in the html file?
Why don't you use header('Location: /sent.html') for your redirects?
why dont you use ajax?
@raheelshan any benefits?
Use alert() (a function call), not alert = (an assignment):
{echo "<script language=javascript>window.alert('Message Sent');</script>";}
Thanks, got it working. syntax error hehe
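For anyone else hitting this: the reason the assignment form fails can be shown in plain JavaScript (simulated here with an object, since Node has no window; fakeWindow is just an illustration):

```javascript
// `window.alert = 'Message Sent'` overwrites the alert function with a
// string, while `window.alert('Message Sent')` actually calls it.
const fakeWindow = { alert: msg => msg }; // stand-in for the browser's window
fakeWindow.alert = 'Message Sent';        // the buggy assignment
console.log(typeof fakeWindow.alert);     // → string (no longer callable)
```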
If you want alternative try this:
In a sendmail.php add following:
sendmail.php:
setcookie("msg","Mail Successfully Sent",time()+5,"/");
header("location:htmlpage.php");
and in htmlpage.php
<?php if(isset($_COOKIE['msg'])){?>
<div>
<?php echo $_COOKIE['msg'];setcookie("msg","",time()-5,"/");?>
</div>
<?php }?>
Several inputs from a single input window
Out of curiosity, I was trying to find a way to combine several inputs into a single input window (currently set to string) on the interface. In the current example, these inputs are the 4 parameters necessary for resize-world.
While my result works, I would be interested to hear if someone knows a more elegant solution. I specifically dislike the use of item here.
Interface:
input-window
[0 5 0 5]
Code:
to setup
let worldsizes runresult input-window
resize-world item 0 worldsizes item 1 worldsizes item 2 worldsizes item 3 worldsizes
end
relevant: https://github.com/NetLogo/NetLogo/pull/1139
That led me to this stackoverflow question which also seems quite relevant and interesting: https://stackoverflow.com/questions/21541133/run-a-task-with-a-variable-number-of-arguments-in-netlogo
Using this post as base, I created a procedure that allows a command, both anonymous and regular, to take a single list of arguments as input.
Interface:
input-window
[0 5 0 5]
Code:
to test
let worldsize run-result input-window ;creates a list
apply-command "resize-world" worldsize
end
to apply-command [command inputlist]
; Takes two inputs
; 1: Anonymous command/string containing anonymous command/string containing regular command
; 2: A list of inputs, corresponding in length to the number of inputs the command takes
; Applies the command with the different items of the list as input
; Heavily inspired by Stack Overflow user Bryan Head
set command anonimify-command command length inputlist ; Using non-anonymous commands gave issues, hence they are anonimified
(run listify-command command length inputlist inputlist)
end
to-report listify-command [ command num-args ]
; Takes an anonymous command and the length of a list as input
; Outputs an anonymous command that takes the different items of the list as inputs
; Heavily inspired by Stack Overflow user Bryan Head
if (not is-anonymous-command? command) [error "The first input for listify-command has to be an anonymous command"]
let args (reduce word n-values num-args [ x -> (word " item " x " ?") ])
;show (word runresult (word "[ ? -> (run command " args ") ]"))
report runresult (word "[ ? -> (run command " args ") ]")
end
to-report anonimify-command [ command number-of-inputs]
; Takes two inputs
; 1: Takes an anonymous command, a string containing an anonymous command or a string containing a regular command
; 2: Takes a number that corresponds to the number of inputs the anonimified command should take
; Returns anonymous commands unaltered
; Returns strings containing an anonymous command as an anonymous command
; Returns strings containing a regular command as an anonymous command
if (is-anonymous-command? command) [ ; Anonymous commands get returned
report command
]
if (is-string? command) [ ; Strings are processed
carefully [ ; Using run-result on a string that does not contain an anonymous command causes an error, hence carefully
if (is-anonymous-command? run-result command) [
set command run-result command
]
]
[
let inputs n-values number-of-inputs [ i -> word " x" i]
let inputs-as-string reduce word inputs
let command-string (word "[ [" inputs-as-string " ] -> " command inputs-as-string" ]")
set command run-result command-string
]
report command
]
error "The inputted command must be either an anonymous command, a string containing an anonymous command or a string containing a command"
;If the input is neither anonymous command nor string, an error is displayed
end
Using this procedure with regular commands takes a factor of 100 longer than with anonymous commands. I assume the problem is in the anonimify-command procedure
After another test, the problem seems to be the error suppressed by carefully. After removing carefully and its first command block, a regular command now gives results very similar to an anonymous command with the block in place.
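For comparison, the unpack-a-list-into-arguments idea that apply-command emulates is built in to many other languages; a JavaScript sketch (resizeWorld here is a stand-in written for this example, not NetLogo's resize-world):

```javascript
// Spread a list of inputs into a multi-argument call --
// the same role `apply-command` plays in the NetLogo code above.
function resizeWorld(minX, maxX, minY, maxY) {
  return (maxX - minX + 1) * (maxY - minY + 1); // number of patches
}
const worldsizes = [0, 5, 0, 5];
console.log(resizeWorld(...worldsizes)); // → 36
```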
ModuleFederationPlugin for React and Angular combined
I'm currently investigating the 'micro front end' solution provided by webpack 5 using the ModuleFederationPlugin. I have managed to get it working using React, so essentially I have App1 that pulls a component from App2:
exposes: {
"./Header": "./src/Header",
}
In App1 it is consumed like:
const Header = React.lazy(() => import('reactApp/Header'));
And this works fine.
What I'm trying to do now is inject some Angular. Not sure if this is even possible, but I feel like I've come close. However, all the examples I have found with Angular hook into the router outlet.
To get it working in the most basic way, I expose the module:
exposes: {
'./Module': './projects/mfe1/src/app/flights/flights.module.ts',
}
then import it into App1 the same way as the App2 component:
const FlightsModule = React.lazy(() => import('mfe1/Module'));
(tried with and without .then(m => m.FlightsModule))
and then try to display it using
<React.Suspense fallback='Loading Angular'>
<FlightsModule></FlightsModule>
</React.Suspense>
Then one of the errors is:
Warning: lazy: Expected the result of a dynamic import() call. Instead received: [object Module]
Your code should look like:
const MyComponent = lazy(() => import('./MyComponent'))
printWarning @ react.development.js:220
...
react-dom.development.js:17733 Uncaught Error: Element type is invalid. Received a promise that resolves to: undefined. Lazy element type must resolve to a class or function.
at mountLazyComponent (react-dom.development.js:17733)
I've tried many combinations and googled lots of things, but to no avail.
Is the React and Angular combination even possible?
Any fixes to display the Angular module?
Is there an easier way to display an Angular component / module?
UPDATE
I've created the repo's to run, the readme has the build instructions.
https://github.com/dale-waterworth/micro-front-end-react - localhost:3001
https://github.com/dale-waterworth/micro-front-end-angular - localhost:4201 (ng serve mdmf-profile)
https://github.com/dale-waterworth/micro-front-end-container - localhost:3002
Ensure to run the micro-front-end-container last.
I'm trying to add the component into the container in App.tsx, and this is where I've tried lots of things. Also, in the dev tools network tab you can see that the code is being picked up.
This is possible, but in your current implementation you're trying to use a module as a component. There is loads of info to find here: https://www.angulararchitects.io/aktuelles/the-microfrontend-revolution-part-2-module-federation-with-angular/
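To make the undefined error concrete: React.lazy reads the default key off whatever the dynamic import resolves to. A plain-Node illustration (no React or webpack involved); note that re-wrapping only fixes the lazy() complaint, it does not make an Angular NgModule something React can render:

```javascript
// What import('mfe1/Module') resolves to: a module object with named exports.
const fakeRemote = { FlightsModule: class FlightsModule {} };
console.log(fakeRemote.default);                       // → undefined (what lazy() saw)
const wrapped = { default: fakeRemote.FlightsModule }; // the shape lazy() expects
console.log(typeof wrapped.default);                   // → function
```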
Doesn't really give an example to match this use case. I added another remote component. Same error. I'll add my repo's up soon
@dale have you managed to do this?
no i gave up and haven't tried since...
Point-wise bound implies norm bound
Let $X,Y$ be Banach spaces, and $T,G : X^* \to Y^*$ be bounded operators.
If for every $f\in X^*$ we have the point-wise bound $$Tf(x)\leq Gf(x)$$
for $Tf = T(f) \in Y^*$, and $Gf = G(f) \in Y^*$, then can we conclude that $$\|Tf\|_{Y^*}\leq\|Gf\|_{Y^*}\quad ?$$
If it's not possible, can we get
$$\|Tf\|_{Y^*}\leq K\|Gf\|_{Y^*}\quad ?$$
for some constant $K$ ?
$Tf(x)\leq Gf(x)$ for all $x$ gives, by linearity, $Gf(-x)=-Gf(x)\leq -Tf(x)=Tf(-x)$. Since we also have $Tf(-x)\leq Gf(-x)$ for every $x$, we deduce that $Tf(-x)=Gf(-x)$. This implies that $Tf=Gf$.
Correct answer, wrong question. I'll try to re-frame it and ask it later. Thanks though
String to Map<String,Object>
I have this string
"{id={date=1467991309000, time=1467991309000, timestamp=1467991309, timeSecond=1467991309, inc=-360249353, machine=-705844029, new=false}, id_lista={date=1467991309000, time=1467991309000, timestamp=1467991309, timeSecond=1467991309, inc=-360249354, machine=-705844029, new=false}, id_documento=1297183, estado=1, fecha_ing=1467991309026, fecha_mod=1468010645484, identificador_r=null, titulo=null, pais=null, sector=null, url=null, dato1=null, dato2=null}"
How can I parse this in Java to get something like this Map<String,Object>:
id:{}
id_lista:{}
id_documento:123
estado:1
fecha_ing:1467991309026
etc..
Update:
Finally I cast to JSONArray to get the values.
What do you mean by mapping? You showed us one string and another string, you want to transform one string to another?
No, I need a Map<String,Object> from the string
I think you mean parsing instead of mapping. It looks like JSON format you are trying to parse.
Yeah, something like that, but I need transform the string into Map<String,Object>
It is not clear what you are asking. Is "{id=......" all one string ?
yes @c0der, its all one string
Possible duplicate of http://stackoverflow.com/questions/2591098/how-to-parse-json-in-java
Do you really mean java.lang.Object or do you mean a class of your own making?
You can get into the Java world if you have appropriately defined your class, with the following (Google Gson):
BossesClass hisClass = new Gson().fromJson(bossesString, BossesClass.class);
What you use as the key value (a String) in your map is your decision
It looks to me as if you have an almost JSON-format string.
Depending on what you want to use your map for, maybe you want to look into using org.json.JSONObject instead? (This is especially nice when you have nested information the way you have in your example-string.)
To get a JSONObject from your string you first have to replace all the equal signs with colons.
String jsonString = "your string here".replace("=",":");
Then you can create a JSONObject.
JSONObject jsonObj = new JSONObject(jsonString);
If you anyway want to have the map, there are answers here and here about getting from a JSONObject to a Map.
import java.io.IOException;
import java.util.Iterator;
import java.util.Map;

import com.fasterxml.jackson.core.JsonFactory;
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

public void parse(String json) throws IOException {
    JsonFactory factory = new JsonFactory();
    ObjectMapper mapper = new ObjectMapper(factory);
    JsonNode rootNode = mapper.readTree(json);

    Iterator<Map.Entry<String, JsonNode>> fieldsIterator = rootNode.fields();
    while (fieldsIterator.hasNext()) {
        Map.Entry<String, JsonNode> field = fieldsIterator.next();
        System.out.println("Key: " + field.getKey() + "\tValue: " + field.getValue());
    }
}
How to read command line output
I want to open a batch file in Nodejs and read its output, while it is printing (not everything at once when the batch finished).
For example, like Jenkins / Hudson.
I'm aware that this will only work on Windows. Any complete example of how to do this will be helpful, as I'm pretty new to Nodejs.
...only work on windows? And what do you mean "while it is printing"? You mean as a batch file is updated by some other process?
No, for example if I have the batch ECHO hi TIMEOUT 1 ECHO bye First it will print hi, then the timeout message, and it will be updated as it is counting. Just like the command line does.
I believe you'll be fine using Node's child process to run your command and monitor its output as it prints. This isn't restricted to Windows though, so I may be missing something in what you're asking. Let me explain what you could do, in hopes that it answers the question.
Imagine you have a file that outputs some text, but does so over time and not all at once. Let's name this file print-something.js. (I realize you've spoken about a batch file, but know that the child_process can execute any executable file. So here I'll be running a JavaScript file via Node, but you could execute a batch file in the same way.)
print-something.js
var maxRuns = 5,
numRuns = 0;
function printSomething() {
console.log('some output');
setTimeout(function() {
numRuns++;
if (numRuns < maxRuns) {
printSomething();
}
}, 1000);
}
printSomething();
It's not really important what this file does, but if you study it you'll see it prints "some output" 5 times, but prints each statement with a 1 second gap in-between. Running this from the command line (via node print-something.js) will result in:
some output
some output
some output
some output
some output
So with that we have a file that outputs text in a delayed manner. Turning our attention to the file that reads the output, we have this:
monitor-output.js
var spawn = require('child_process').spawn,
command = spawn('node', ['print-something.js']);
command.stdout.on('data', function(data) {
console.log('stdout: ' + data);
});
command.on('close', function (code) {
console.log('child process exited with code ' + code);
});
This file spawns a process node print-something.js, and then begins to inspect its standard output. Each time it gets data, it prints it to the console. Finally, when the execution has completed, it outputs the exit code. Running node monitor-output.js results in:
stdout: some output
stdout: some output
stdout: some output
stdout: some output
stdout: some output
child process exited with code 0
Most likely you won't be printing the output of this file to the console, but that's what we're doing here just for illustration. Hopefully this gives you an idea on how to monitor output of a file while it runs, and do what you will with it within the child_process.
Wow thanks for the complete answer! I will try this.
Is it normal that transfer learning (VGG16) performs worse on CIFAR-10?
Note: I am not sure this is the right website to ask these kind of questions. Please tell me where I should ask them before downvoting this "because this isn't the right place to ask". Thanks!
I am currently experimenting with deep learning using Keras. I have already tried a model similar to the one found in the Keras examples. This yields the expected results:
80% after 10-15 epochs without data augmentation before overfitting around the 15th epoch and
80% after 50 epochs with data augmentation without any signs of overfitting.
After this I wanted to try transfer learning. I did this by using the VGG16 network without retraining its weights (see code below). This gave very poor results: 63% accuracy after 10 epochs with a very shallow curve (see picture below) which seems to be indicating that it will achieve acceptable results only (if ever) after a very long training time (I would expect 200-300 epochs before it reaches 80%).
Is this normal behavior for this kind of application? Here are a few things I could imagine to be the cause of these bad results:
the CIFAR-10 dataset has images of 32x32 pixels, which might be too small for the VGG16 net
The filters of VGG16 are not good for CIFAR-10, which would be solved by setting the weights to trainable or by starting with random weights (only copying the model and not the weights)
Thanks in advance!
My code:
Note that the inputs are 2 datasets (50000 training images and 10000 testing images) which are labeled images with shape 32x32x3. Each pixel value is a float in the range [0.0, 1.0].
import keras
# load and preprocess data...
# get VGG16 base model and define new input shape
vgg16 = keras.applications.vgg16.VGG16(input_shape=(32, 32, 3),
weights='imagenet',
include_top=False)
# add new dense layers at the top
x = keras.layers.Flatten()(vgg16.output)
x = keras.layers.Dense(1024, activation='relu')(x)
x = keras.layers.Dropout(0.5)(x)
x = keras.layers.Dense(128, activation='relu')(x)
predictions = keras.layers.Dense(10, activation='softmax')(x)
# define and compile model
model = keras.Model(inputs=vgg16.inputs, outputs=predictions)
for layer in vgg16.layers:
layer.trainable = False
model.compile(optimizer='adam',
loss='categorical_crossentropy',
metrics=['accuracy'])
# training and validation
model.fit(x_train, y_train,
batch_size=256,
epochs=10,
validation_data=(x_test, y_test))
model.evaluate(x_test, y_test)
It's not that the VGG16 model doesn't work on that input size; it's that the weights you're using have been pre-trained on a different input size (ImageNet). You need your source and target datasets to have the same input size so the pre-trained weights can transfer. So you could either do pre-training with ImageNet images rescaled to 32x32x3, or pick a target dataset roughly the same size as what the pre-training was done on (often 224x224x3 or similar for ImageNet) and scale to match. I have seen a paper recently where they transferred from ImageNet to CIFAR-10 and CIFAR-100 by upscaling the latter, which worked reasonably well, but that wouldn't be ideal.
http://openaccess.thecvf.com/content_CVPR_2019/papers/Kornblith_Do_Better_ImageNet_Models_Transfer_Better_CVPR_2019_paper.pdf
Second, with a target dataset that has this many training examples, freezing all the transferred layers is unlikely to be a good solution; in fact, setting all the layers to trainable will probably work best.
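The up-scaling option can be illustrated without any framework: nearest-neighbour resizing just repeats each pixel, so 32x32 CIFAR images scale to 224x224 with a factor of 7. A toy sketch of the idea (in a real pipeline you would use something like keras.layers.UpSampling2D or tf.image.resize instead of this function):

```python
def upscale_nearest(image, factor):
    """Nearest-neighbour upscale of a 2D image (list of rows) by an integer factor."""
    out = []
    for row in image:
        # Repeat each pixel `factor` times horizontally...
        stretched = [pixel for pixel in row for _ in range(factor)]
        # ...then repeat the stretched row `factor` times vertically.
        out.extend([stretched] * factor)
    return out

img = [[1, 2],
       [3, 4]]
big = upscale_nearest(img, 2)
# big is 4x4: each pixel repeated in both dimensions
```

A 32-row image upscaled by 7 gives the 224 rows that ImageNet-pretrained weights typically expect.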
I think that the CIFAR-10 dataset has images of 32x32 pixels, which might be too small for the VGG16 net;
the pre-trained network is not really compatible with CIFAR-10 as used here.
Reversing sublist of even numbers
I'm trying to reverse a sublist of even numbers. There seems to be a logical error in the code, but I'm unable to find it.
node *sublist_reverse(node *head)
{
    node *temp=head,*wrking,*wrking_bfr,*node_tobe_ext;
    while(temp!=NULL)
    {
        if(temp->link->data%2==0)
        {
            while(temp->link->data%2==0)
            {
                if(temp->data%2!=0)
                {
                    wrking_bfr=temp;
                    wrking=wrking_bfr->link;
                }
                node_tobe_ext=wrking->link;
                wrking->link=node_tobe_ext->link;
                node_tobe_ext->link=wrking_bfr->link;
                wrking_bfr->link=node_tobe_ext;
                temp=wrking->link;
            }
        }
        else
        {
            temp=temp->link;
        }
    }
    return head;
}
I'm not sure what you really want to achieve; maybe you want to provide an example. One thing, however: if(temp->data%2!=0) can never become true inside a loop where you check exactly the opposite (while(temp->link->data%2==0)).
First, decide for a language, not two. Then, read [ask], because you say something is wrong, but fail to describe that failure! Further, extract a [mcve] from your code as well. As a new user here, please also take the [tour] and read [ask].
I'm trying to reverse a sublist of even numbers.
From the code presented, it appears that what you're trying to do is reverse every maximal sublist of consecutive even numbers, not just one. Moreover, this is apparently in the context of a singly-linked list, as opposed, say, to an array or a doubly-linked list. Furthermore, I infer from your code that node is defined as a structure type, containing at least members data and link, so maybe
struct node {
int data;
struct node *link;
};
typedef struct node node;
With those understandings, it appears that your idea is to scan the list to find the start of a sublist of even numbers, reverse that sublist, then repeat. That yields the nested loop structure presented in your code, and it is a viable way to approach the problem.
Please, someone tell me what is my logical error.
It's unclear what specific issues or misbehaviors you have recognized in your implementation, but here are some that are evident from the code:
Bad Things happen when the outer loop reaches the end of the list, when the function evaluates
if(temp->link->data%2==0)
This is because when temp points to the last node, temp->link is not a valid node pointer.
Bad Things happen when the inner loop reaches the end of the list, too, which occurs when the last element is even. These are the problematic lines:
node_tobe_ext=wrking->link;
wrking->link=node_tobe_ext->link;
When wrking points to the last node, node_tobe_ext is not a valid node pointer.
When the list contains an initial sublist of two or more even numbers, that is not correctly reversed. One can see that this must be the case, because the parity of the first list element is never even checked, and because the function always returns the original head pointer. (If there is an initial sublist of two or more even numbers, then the original head node will not be the head of the final list.)
It is not a simple assignment for beginners like you and me.
Your function is incorrect, at least because the pointer head is never changed within the function, even though a list can start with nodes that contain even numbers. In that case the value of the pointer head must change, but this does not occur in your function; there is no statement in the function where the value of the pointer head would be changed.
I can suggest a more general solution. For starters, the function accepts the pointer to the head node by reference (through a pointer to it) and also has one more parameter that specifies a predicate. If a subsequence of nodes satisfies the predicate, it is reversed. In particular, the predicate can determine, for example, whether the number stored in a node is even.
Here you are.
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

typedef struct node
{
    int data;
    struct node *link;
} node;

int push_front( node **head, int data )
{
    node *new_node = malloc( sizeof( *new_node ) );
    int success = new_node != NULL;

    if ( success )
    {
        new_node->data = data;
        new_node->link = *head;
        *head = new_node;
    }

    return success;
}

FILE * display( const node *head, FILE *fp )
{
    for ( ; head != NULL; head = head->link )
    {
        fprintf( fp, "%d -> ", head->data );
    }
    fputs( "null", fp );

    return fp;
}

void reverse_ranges( node **head, int predicate( int ) )
{
    while ( *head )
    {
        if ( predicate( ( *head )->data ) )
        {
            node **current = head;
            head = &( *head )->link;

            while ( *head != NULL && predicate( ( *head )->data ) )
            {
                node *tmp = *current;
                *current = *head;
                *head = ( *head )->link;
                ( *current )->link = tmp;
            }
        }
        else
        {
            head = &( *head )->link;
        }
    }
}

int even( int data )
{
    return data % 2 == 0;
}

int main(void)
{
    node *head = NULL;

    srand( ( unsigned int )time( NULL ) );

    const int N = 15;

    for ( int i = 0; i < N; i++ )
    {
        push_front( &head, rand() % N );
    }

    fputc( '\n', display( head, stdout ) );

    reverse_ranges( &head, even );

    fputc( '\n', display( head, stdout ) );

    return 0;
}
The program output might look like
8 -> 10 -> 12 -> 7 -> 5 -> 10 -> 4 -> 8 -> 2 -> 6 -> 7 -> 1 -> 0 -> 4 -> 6 -> null
12 -> 10 -> 8 -> 7 -> 5 -> 6 -> 2 -> 8 -> 4 -> 10 -> 7 -> 1 -> 6 -> 4 -> 0 -> null
I think the function that frees all the allocated memory of the list can be written by you yourself.
Communication between kernel-mode and user-mode application
I have built a WFP callout driver which runs in kernel mode.
Now, I'm trying to figure out how to communicate between this driver and my GUI application which runs in user-mode. Any ideas?
Exactly what I want is something like this:
The callout driver detects an incoming connection on port 4444 (this is not part of my question).
The drivers send a message to the user-mode app.
The app shows a notification to the user and asks it if we should accept/block the connection.
The user-mode app sends back the user's response to the callout driver.
Thanks!
I agree with LordDoskias. You need to create a device object and make it available to the Win32 realm. Then you can use CreateFile, ReadFile, WriteFile and the already mentioned DeviceIoControl to send requests.
In order to get notifications from the driver to the application, you can use the so-called inverted call model. You send down some IRPs (via one of the mentioned mechanisms) and do that in an asynchronous manner (or in separate threads). Then, the driver keeps them dangling until it has to notify the user mode component about something and then returns the completed IRP. Alternative methods are to set some event and have the UM request whatever the driver keeps in some kind of queue...
The gist is, there is no direct way that the driver can send some message to the user mode application.
I have read an excellent article about Inverted Call Model. Maybe it's helpful.
Check this API call - DeviceIoControl
Essentially what you would do is register the driver in the object manager, then your GUI application will be able to open it and send different commands and data (there are buffers to do that) and then you have to send some custom made IOCTL code (check with the WDK manual).
If your driver is registered as a minifilter driver,
you can use minifilter communication functions, such as FltSendMessage.
Otherwise, you can use the DeviceIoControl function as it was already suggested by the other users.
How to understand reboot line in last command?
How to understand the reboot line in last?
What does the 22:36 mean in that line? In the other lines I know that column shows when people logged out, but what does it mean here?
It's actually the same thing for reboot too. reboot is a pseudo-user. From man last:
The pseudo user reboot logs in each time the system is rebooted. Thus last reboot will show a log of all the reboots since the log file was created
python-dev install leads to "E: Sub-process /usr/bin/dpkg returned an error code (1)"
I have a problem with these dependencies, and I have tried everything, but I still get these errors.
# apt-get install -f
Reading package lists... Done
Building dependency tree
Reading state information... Done
Correcting dependencies... Done
The following additional packages will be installed:
libpython-all-dev libpython-dev libpython-stdlib libpython2-dev libpython2-stdlib python python-minimal python2 python2-dev python2-minimal
Suggested packages:
python-doc python2-doc
The following NEW packages will be installed:
libpython2-dev libpython2-stdlib python2 python2-dev python2-minimal
The following packages will be upgraded:
libpython-all-dev libpython-dev libpython-stdlib python python-minimal
5 upgraded, 5 newly installed, 0 to remove and 1960 not upgraded.
3 not fully installed or removed.
Need to get 0 B/212 kB of archives.
After this operation, 311 kB disk space will be freed.
Do you want to continue? [Y/n] y
Reading changelogs... Done
(Reading database ... 373142 files and directories currently installed.)
Preparing to unpack .../python-minimal_2.7.15-3_amd64.deb ...
/var/lib/dpkg/info/python-minimal.prerm: 4: /var/lib/dpkg/info/python-minimal.prerm: find: not found
dpkg: warning: subprocess old pre-removal script returned error exit status 127
dpkg: trying script from the new package instead ...
dpkg: error processing archive /var/cache/apt/archives/python-minimal_2.7.15-3_amd64.deb (--unpack):
there is no script in the new version of the package - giving up
dpkg: considering deconfiguration of python-minimal, which would be broken by installation of python2-minimal ...
dpkg: yes, will deconfigure python-minimal (broken by python2-minimal)
Preparing to unpack .../python2-minimal_2.7.15-3_amd64.deb ...
De-configuring python-minimal (2.7.13-2) ...
/var/lib/dpkg/info/python-minimal.prerm: 4: /var/lib/dpkg/info/python-minimal.prerm: find: not found
dpkg: error processing archive /var/cache/apt/archives/python2-minimal_2.7.15-3_amd64.deb (--unpack):
subprocess installed pre-removal script returned error exit status 127
Errors were encountered while processing:
/var/cache/apt/archives/python-minimal_2.7.15-3_amd64.deb
/var/cache/apt/archives/python2-minimal_2.7.15-3_amd64.deb
E: Sub-process /usr/bin/dpkg returned an error code (1)
root@LYNIX:~# dpkg --configure -a
dpkg: dependency problems prevent configuration of python-dev:
python-dev depends on python (= 2.7.15-3); however:
Version of python on system is 2.7.13-2.
python-dev depends on libpython-dev (= 2.7.15-3); however:
Version of libpython-dev:amd64 on system is 2.7.13-2.
python-dev depends on python2-dev (= 2.7.15-3); however:
Package python2-dev is not installed.
dpkg: error processing package python-dev (--configure):
dependency problems - leaving unconfigured
dpkg: dependency problems prevent configuration of python-all-dev:
python-all-dev depends on python (= 2.7.15-3); however:
Version of python on system is 2.7.13-2.
python-all-dev depends on libpython-all-dev (= 2.7.15-3); however:
Version of libpython-all-dev:amd64 on system is 2.7.13-2.
python-all-dev depends on python-dev (= 2.7.15-3); however:
Package python-dev is not configured yet.
dpkg: error processing package python-all-dev (--configure):
dependency problems - leaving unconfigured
dpkg: dependency problems prevent configuration of python-all:
python-all depends on python (= 2.7.15-3); however:
Version of python on system is 2.7.13-2.
dpkg: error processing package python-all (--configure):
dependency problems - leaving unconfigured
Errors were encountered while processing:
python-dev
python-all-dev
python-all
Has anyone an idea on how to fix this?
Welcome to Ask Ubuntu! Your system might be hosed if core utilities like find are missing. Could you please try to reinstall it with sudo apt-get install -f --reinstall ubuntu-minimal coreutils findutils? Apt will then reinstall the corrupted package and then try to fix the other package management issues. Please report back with the results. If my suggestion fails please also include the output of echo "$PATH" and which -a find. Thanks. :-)
Yes, here's what I did:
walt@bat:~(0)$ less /var/lib/dpkg/info/python-minimal.prerm
#! /bin/sh
set -e
find /usr/share/python/ -name '*.py[oc]' -delete
walt@bat:~(0)$ type -a find
find is /usr/bin/find
walt@bat:~(0)$
What this means is that /var/lib/dpkg/info/python-minimal.prerm tries to use the find utility by looking in your $PATH. It's not there.
But, find is /usr/bin/find, so /usr/bin is NOT in your $PATH.
Check your startup files (~/.bashrc, read man bash, the "INVOCATION" section) for where you set PATH=.
Here's part of a $PATH that will work for root:
export PATH=/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
That should get you going, but won't have all your customizations.
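A quick sanity check before re-running the package tools (a sketch; the exact PATH entries may need adjusting for your system):

```shell
# Put the standard tool directories on PATH for this shell session.
export PATH=/usr/sbin:/usr/bin:/sbin:/bin
# The prerm script called plain `find`, so it must resolve now:
command -v find
# If that prints a path such as /usr/bin/find, finish the interrupted
# installation with:
#   sudo dpkg --configure -a && sudo apt-get install -f
```

Once `command -v find` prints a path, the maintainer scripts that failed with "find: not found" should run cleanly.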
@steeldriver Yes, and root's PATH shouldn't include local or games directories. I changed it.
easy_install pyopengl => passed, yet the module doesn't seem available
I have installed PyOpenGL on Mac OS X 10.7.5.
Installation looked like a success, yet when I try to load it, the module isn't there.
sudo easy_install -U PyOpenGL
python
>>> import pyopengl
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: No module named pyopengl
>>>
pip list | grep OpenGL
PyOpenGL (3.1.0a3)
Any leads?
Try import OpenGL
In [1]: import OpenGL
In [2]: OpenGL.version.__version__
Out[2]: '3.1.0a1'
Looks like you need to import from OpenGL.GL and from OpenGL.GLU. Take a look here.
one windows 10 pc with wired keyboard/mouse and one dell windows 7 both connected to a single split screen monitor
Question: will Synergy support a Windows 10 Pro PC, with a wired keyboard and mouse, connected to the router by a wireless AC PCI adapter?
I want to share these with a Dell OptiPlex running Windows 7 Pro that is connected to the router by an Ethernet cable.
Both PCs are plugged into an LG ultrawide monitor by HDMI (1 & 2), which supports PBP, i.e. it can show both sources on screen at the same time.
Many thanks in advance.
It seems that as long as you are able to connect both computers, it should work. Windows 10 and 7 are both compatible according to the FAQ. Otherwise, you should probably ask Synergy.
I personally have a Windows 7 laptop, a MacBook Pro, and a Windows 10 gaming PC. I have used Synergy on all three of these computers for a few years now. I've only recently run into installation problems on Windows 7 with the release of the Synergy Pro 2 beta. You shouldn't have any problems.
ionic plus angular routing giving a blank page
I just started with Angular and Ionic and I have a problem: routing displays a blank page. It's been 2 hours that I've been looking for a solution without finding one. Can someone help me figure this out, please?
This is my app.js code. Am I doing something wrong?
angular.module('starter', ['ionic', 'starter.controllers'])

.run(function($ionicPlatform) {
  $ionicPlatform.ready(function() {
    if (window.cordova && window.cordova.plugins && window.cordova.plugins.Keyboard) {
      cordova.plugins.Keyboard.hideKeyboardAccessoryBar(true);
    }
    if (window.StatusBar) {
      StatusBar.styleLightContent();
    }
  });
})

.config(function($stateProvider, $urlRouterProvider) {
  $stateProvider

  .state('homepage', {
    url: '/',
    views: {
      '': {
        templateUrl: 'templates/home.html',
        controller: '',
      }
    }
  })

  .state('itemPage', {
    url: '/itemPage',
    views: {
      '': {
        templateUrl: 'templates/itemPage.html',
        controller: 'myCtrl',
      }
    }
  })

  .state('cart', {
    url: '/cart',
    view: {
      '': {
        templateUrl: 'templates/cart.html',
        controller: 'myCtrl',
      }
    }
  })

  .state('categories', {
    url: '/categories',
    view: {
      '': {
        templateUrl: 'templates/categories.html',
        controller: 'myCtrl'
      }
    }
  })

  .state('billingDetails', {
    url: '/billingDetails',
    view: {
      '': {
        templateUrl: 'templates/billingDetails',
        controller: 'myCtrl',
      }
    }
  })

  .state('confirmOrder', {
    url: '/confirmOrder',
    view: {
      '': {
        templateUrl: 'templates/confirmOrder',
        controller: 'myCtrl'
      }
    }
  })

  $urlRouterProvider.otherwise('/');
});
just use the spell Ionicus-Fixicus lol couldn't resist.
loool nice one my username is just a trip
So what are you trying to route to? do you change state?
Yes, I tried to change states, but without any results. The home page view is displayed, but for the other pages I get the page-change animation and then everything is blank. Did you notice something that I'm doing wrong?
I don't see anything wrong with your states at a glance; what do your controllers look like?
not finding something?
add your controller as an edit not an answer.
I think you have a problem in your controllers. Take a look at my Angular code here https://github.com/joeLloyd/Scripto5000/tree/master/CordovaApp/CordovaApp/js; it's for an Ionic project I did a month ago. It might help you out.
Thank you for sharing your work, but I still haven't found a way to figure out the issue.
For loop to copy entire row when match found between two sheets
I am trying to get a For loop which copies an entire row from worksheet 1 to worksheet 3 if the cell in column C of ws1 and column AT of ws2 match. I have two issues:
1. It seems to be stuck in the For i = xxxxx loop and does not move to the next k (it only copies one line 25 times).
2. When I use it on a sheet that has 100,000 rows for worksheet 1 and 15,000 rows on worksheet 2, Excel just crashes. Is there a way to manage this?
Sub CopyBetweenWorksheets()
    Application.ScreenUpdating = False

    Dim i As Long, k As Long, ws1 As Worksheet, ws2 As Worksheet, myVar As String, myVar2 As String
    Set ws1 = Worksheets("BOM")
    Set ws2 = Worksheets("APT")
    Set ws3 = Worksheets("Combined")

    'get the last row for w2 and w1
    ii = ws1.Cells.SpecialCells(xlCellTypeLastCell).row
    kk = ws2.Cells.SpecialCells(xlCellTypeLastCell).row

    For k = 2 To kk
        myVar = ws2.Cells(k, 46)
        For i = 688 To ii '688 To ii
            myVar2 = ws1.Cells(i, 3)
            If myVar2 = myVar Then
                ws3.Rows(k).EntireRow.Value = ws1.Rows(i).EntireRow.Value 'copy entire row
                Exit For
            End If
        Next i
    Next k
End Sub
Your code works fine for me. Could it be that you have multiple instances of the same value occurring in ws2 and ws1?
YES! that's it. How do I fix so those instances doesn't stop it from functioning?
I'm not sure, but I would have it check if the specified row has already been copied to ws3.
Like inside your if statement make a second if statement to check if the row from ws1 has been copied before, and if it has, go to next i.
Your code is fine (not to mention the missing Application.ScreenUpdating = True at the end), but it will hang on a large number of rows and columns because of the number of interactions with the application (Excel, in this case).
Each time you request a value from a single cell in Excel, your code will hang for about 4 seconds per 1 million requests. Reading an entire row will hang for 4 seconds per 4,000 requests. If you try writing a single cell, your code will hang for 4 seconds per 175,000 requests, and writing an entire row will hang your code for 4 seconds per 300 requests.
This way, just parsing 15,000 rows of data from one sheet to another will hang your code for about 3.3 minutes, not to mention all the read requests.
So, always keep the number of interactions with any application from VBA to a minimum, even if you have to write much more code.
Here is what your code should look like if you want to handle a lot of data:
Sub CopyBetweenWorksheets2()
    Dim aAPT, aBOM, aCombined As Variant
    Dim lLastRow As Long, lLastColumn As Long
    Dim i As Long, j As Long

    Const APTColRef = 3
    Const BOMColRef = 46
    Const MAXCol = 200

    'Speed up VBA in Excel
    Application.ScreenUpdating = False
    Application.EnableEvents = False
    Application.Calculation = xlCalculationManual

    'Get the last row and column to use with the combined sheet
    lLastRow = WorksheetFunction.Min(APT.Cells.SpecialCells(xlCellTypeLastCell).Row, BOM.Cells.SpecialCells(xlCellTypeLastCell).Row)
    lLastColumn = WorksheetFunction.Min(MAXCol, WorksheetFunction.Max(APT.Cells.SpecialCells(xlCellTypeLastCell).Column, BOM.Cells.SpecialCells(xlCellTypeLastCell).Column))

    'Parse all values to an array, reducing interactions with the application
    aAPT = Range(APT.Cells(1), APT.Cells(lLastRow, lLastColumn))
    aBOM = Range(BOM.Cells(1), BOM.Cells(lLastRow, lLastColumn))

    'Creates a temporary array with the values to parse to the destination sheet
    ReDim aCombined(1 To lLastRow, 1 To lLastColumn)

    'Loop through values and parse the row value if true
    For i = 1 To lLastRow
        If aAPT(i, APTColRef) = aBOM(i, BOMColRef) Then
            For j = 1 To lLastColumn
                aCombined(i, j) = aAPT(i, j)
            Next
        End If
    Next

    'Parse values from the destination array to the combined sheet
    Combined.Range(Combined.Cells(1), Combined.Cells(lLastRow, lLastColumn)) = aCombined

    'Disable tweaks (restore automatic calculation)
    Application.ScreenUpdating = True
    Application.EnableEvents = True
    Application.Calculation = xlCalculationAutomatic
End Sub
!! I named the sheet objects in the VBA itself, so you don't have to declare a new variable, and you also won't have any problems if you rename the sheets later. So, instead of Sheets("APT"), I just used APT (you will have to rename yours too if you want the code to work) !!
Plus, here is my speed code, which I wrote for speed-testing my code. I always keep it at hand and use it in almost every function I write:
Sub Speed()
    Dim i As Long
    Dim dSec As Double
    Dim Timer0#
    Dim TimerS#
    Dim TimerA#
    Dim TimerB#

    dSec = 4 ''Target time in seconds''
    i = 1

WP1:
    Timer0 = Timer
    For n = 1 To i
        SpeedTestA
    Next
    TimerA = Timer
    For n = 1 To i
        SpeedTestB
    Next
    TimerB = Timer

    If TimerB - Timer0 < dSec Then
        If TimerB - Timer0 <> 0 Then
            i = CLng(i * (dSec * 2 / (TimerB - Timer0)))
            GoTo WP1
        Else
            i = i * 100
            GoTo WP1
        End If
    End If

    MsgBox "Code A: " & TimerA - Timer0 & vbNewLine & "Code B: " & TimerB - TimerA & vbNewLine & "Iterations: " & i
End Sub

Sub SpeedTestA() 'First code goes here
End Sub

Sub SpeedTestB() 'Second code goes here
End Sub
Thanks a lot! I do stumble once I run the code on a full-size version of my worksheet: "Not enough memory". I'll figure out how to break this down.
Try changing this line: lLastColumn = WorksheetFunction.Max(MAXCol, APT.Cells.SpecialCells(xlCellTypeLastCell).Column, BOM.Cells.SpecialCells(xlCellTypeLastCell).Column) to lLastColumn = WorksheetFunction.Min(MAXCol, WorksheetFunction.Max(APT.Cells.SpecialCells(xlCellTypeLastCell).Column, BOM.Cells.SpecialCells(xlCellTypeLastCell).Column)). And set the constant MAXCol to the maximum column number you expect your data will reach.
Visual Studio 2002: Microsoft support
Does any one know when Microsoft intends to stop supporting Visual Studio 2002?
... and possibly a few upvotes on answers you use.
Yes they plan to stop support.
Pre-SP1 support is stopped. SP1 end of life plan is here.
It is covered by the Microsoft 5+5 support policy. That means, 5 years mainstream support (ended in 2007), plus 5 years of "extended" support (you pay). And after that, it's special bid only.
SQL Statement to Update Rows using ids from another table
and thanks in advance. I have a table called users which contains an id. I would like to collect only the ids from that table and run an insert using those ids as one of the arguments. I am new to SQL and having some trouble finding help or a solution.
Here is the statement to get the ids
SELECT id FROM public.users
With the ids that are returned, I would like to run an insert or update resembling:
insert into public.history (user_id, password, change_time) values (<ids from prev SELECT>, 'password', now());
Can I generate a loop? Could someone point me in the right direction using purely SQL? I know this can be achieved in PHP, but I'd like to include this in a db init with SQL only.
Yes you can, but how you do it depends on which database you're using: is it MSSQL, MySQL, etc.? Please specify.
This is a mysql db
You can do it with INSERT INTO ... SELECT:
INSERT INTO public.history (user_id, password, change_time)
SELECT id, 'password', now()
FROM public.users
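If you want to experiment with this set-based pattern locally, Python's built-in sqlite3 module is a quick sandbox (SQLite accepts the same INSERT ... SELECT shape; this is only an illustration, not MySQL, and the schema below is a guess at the question's tables):

```python
import sqlite3

# In-memory database just to demonstrate the set-based INSERT ... SELECT
# pattern; table and column names mirror the question.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE users (id INTEGER PRIMARY KEY)")
cur.execute("CREATE TABLE history (user_id INTEGER, password TEXT, change_time TEXT)")
cur.executemany("INSERT INTO users (id) VALUES (?)", [(1,), (2,), (3,)])

# One statement inserts a history row for every user id -- no loop needed.
cur.execute("""
    INSERT INTO history (user_id, password, change_time)
    SELECT id, 'password', datetime('now')
    FROM users
""")
conn.commit()

rows = cur.execute("SELECT user_id, password FROM history ORDER BY user_id").fetchall()
print(rows)  # [(1, 'password'), (2, 'password'), (3, 'password')]
```

The database does the iteration internally, which is exactly why no application-side loop is required.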
Let me attempt to explain what this feature is. Upon creation of a user, I need to reset their password history. This is why I want to run the INSERT, as it resets the history. I need to grab the user ids from the public.users table and pass them into the INSERT statement.
This has nothing to do with your question. If you have another requirement then you can ask a new question.
DisableValidator doesn't work in Swagger-Net
I want to disable the validator, but it doesn't work.
I'm working on an old ASP.NET project that uses VB with the NuGet package Swagger-Net.
This is my swagger config. I tried to use DisableValidator and SetValidatorUrl.
My code with the param:
I don't want Swagger to try to validate the model.
I don't want this :
I don't want this validation
I want to make the call and let my code do validation.
Can you help me ?
Thank you
How can I debug mod_rewrite rules?
This is a case of "ask a question and answer it myself", which I'm posting for the benefit of anyone who has the same problem.
I had some problems debugging a mod_rewrite ruleset in a .htaccess file on a shared server, where I couldn't even access the Apache error logs. I found a neat way to debug them, which is this:
Write a short script that simply prints out its query string variables, e.g. in PHP:
<?='<pre>',htmlentities(print_r($_GET,true)),'</pre>'?>
is all you need.
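If PHP isn't available on your host, the same one-line dump can be written in any CGI language. A hypothetical Python 3 equivalent (assuming your server is set up to run Python CGI scripts; the setup details are up to you):

```python
#!/usr/bin/env python3
# Dump the query string variables, HTML-escaped, so you can see
# exactly what the rewrite rule passed along.
import html
import os
from urllib.parse import parse_qs

query = parse_qs(os.environ.get("QUERY_STRING", ""))
print("Content-Type: text/html")
print()
print("<pre>" + html.escape(repr(query)) + "</pre>")
```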
Let's say you name this script "show.php" and put it in /public_html. Then in your .htaccess file, identify the point in your ruleset that you think might be causing the problem, and insert this rule:
RewriteRule (.*) /show.php?url=$1 [END]
The effect is the same as inserting a PRINT statement in a regular program. It'll let you know that (a) You reached that point in the ruleset, and (b) what the current rewritten URL is.
It's not as flash as a real debugging tool, but it gets the job done.
If you're using Apache <2.3.9, you'll have to use [L] instead of [END]. In that case, something to look out for is that your ruleset should not attempt to rewrite "/show.php" to anything else. If that's a problem, you can fix it by adding this rule at the very top:
RewriteRule ^show.php$ - [L]
...Just remember to remove it when you're done debugging!
More useful techniques are here.
Very helpful insight. For years I've been trying to figure out how to debug mod_rewrite rules without needing to have root access and having to put the rules in httpd.conf. This does it!
You have one minor mistake in your PHP:
<?='<pre>',htmlentities(print_r($_GET),true),'</pre>'?>
In this code, print_r() outputs everything in $_GET to stdout and then returns the value true, which htmlentities() picks up as its first argument. htmlentities() also receives the literal true as its second argument, which is an optional argument that tells htmlentities() whether or not to mess with single- and/or double-quotes.
I think what you intended was:
<?='<pre>',htmlentities(print_r($_GET, true)),'</pre>'?>
This tells print_r() to format everything in $_GET. Passing true as the second argument to print_r() tells it not to output the result to stdout, but instead to put the result in a string and return that string as print_r()'s return value. htmlentities() then receives that string as its one input parameter, and does appropriate substitutions to force the browser to display the string as is rather than allowing the browser to interpret the string. E.G. -
<i>text</i>
would get translated to:
&lt;i&gt;text&lt;/i&gt;
which will cause the browser to display:
<i>text</i>
instead of displaying the word "text" in italics:
text
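The same capture-then-escape pattern, illustrated in Python purely for comparison: pprint.pformat returns the formatted string instead of printing it (like passing true as print_r's second argument), and html.escape then neutralizes any markup (like htmlentities):

```python
import html
import pprint

params = {"url": "<i>text</i>"}   # made-up query data containing markup

formatted = pprint.pformat(params)  # format to a string; nothing is printed yet
escaped = html.escape(formatted)    # <i> becomes &lt;i&gt; so the browser shows it literally
print("<pre>" + escaped + "</pre>")
```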
Egad! You're right! It's fixed now. Thanks for catching that.
Other possibility:
use this online htaccess tester:
http://htaccess.madewithlove.be/
PingFederate : Could not obtain attributes from the IdP Authentication Service
I am getting this exception while trying to invoke PingFederae StartSSO.ping endpoint.
12:49:54,153 DEBUG [IntegrationControllerServlet] GET: https://localhost:9031/idp/startSSO.ping
12:49:54,157 DEBUG [IdpAdapterSupportBase] IdP Adapter Selection disabled, performing legacy adapter selection.
12:49:54,157 DEBUG [HttpServletRespProxy] adding lazy cookie Cookie{PF=F1OpbNzE8iYqMJq6UcG5waLotsmXsBxdLFrhrm8OVFYE; path=/; maxAge=-1; domain=null} replacing Cookie{PF=F1OpbNzE8iYqMJq6UcG5wa; path=/; maxAge=-1; domain=null}
12:49:54,157 DEBUG [InterReqStateMgmtMapImpl] setAttr(oldKey: null, newKey: LotsmXsBxdLFrhrm8OVFYE, name: NUMBER_OF_ATTEMPTS, value: 1)
12:49:54,157 DEBUG [HttpServletRespProxy] flush cookies: adding Cookie{PF=F1OpbNzE8iYqMJq6UcG5waLotsmXsBxdLFrhrm8OVFYE; path=/; maxAge=-1; domain=null}
12:49:54,160 DEBUG [BindingServiceImpl] Not transporting protocol response message because the HTTP response has been committed (this is a normal condition usually due to an adapter or other component redirecting the user or writing its own content to the response).
12:49:54,232 DEBUG [IntegrationControllerServlet] GET: https://localhost:9031/idp/ENvrS/resumeSAML20/idp/startSSO.ping
12:49:54,233 DEBUG [IdpAdapterSupportBase] IdP Adapter Selection disabled, performing legacy adapter selection.
12:49:54,233 DEBUG [InterReqStateMgmtMapImpl] getAttr(key: LotsmXsBxdLFrhrm8OVFYE, name: NUMBER_OF_ATTEMPTS): 1
12:49:54,233 DEBUG [HttpServletRespProxy] adding lazy cookie Cookie{PF=F1OpbNzE8iYqMJq6UcG5waTbQaafveigalePVvdwcdta; path=/; maxAge=-1; domain=null} replacing null
12:49:54,233 DEBUG [InterReqStateMgmtMapImpl] setAttr(oldKey: LotsmXsBxdLFrhrm8OVFYE, newKey: TbQaafveigalePVvdwcdta, name: NUMBER_OF_ATTEMPTS, value: 2)
12:49:54,233 DEBUG [InterReqStateMgmtMapImpl] Object removeAttr(key: TbQaafveigalePVvdwcdta, name: NUMBER_OF_ATTEMPTS): 2
12:49:54,233 DEBUG [TrackingIdSupport] [cross-reference-message] entityid:sbwb-ppc-idp subject:null
12:49:54,233 ERROR [HandleAuthnRequest] Exception occurred during request processing
org.sourceid.websso.profiles.RequestProcessingException: Unexpected Runtime Authn Adapter Integration Problem.
at org.sourceid.websso.profiles.ResumableRequestHandlerBase.resume(ResumableRequestHandlerBase.java:54)
at org.sourceid.websso.profiles.ResumableRequestHandlerBase.resume(ResumableRequestHandlerBase.java:78)
at org.sourceid.saml20.profiles.ProfileProcessManager.resumeHandleRequest(ProfileProcessManager.java:73)
at $ProfileProcessMgmtService_1461cd08008.resumeHandleRequest($ProfileProcessMgmtService_1461cd08008.java)
at org.sourceid.websso.servlet.IntegrationControllerServlet.process(IntegrationControllerServlet.java:63)
at org.sourceid.websso.servlet.EnforcerServletBase.checkProcess(EnforcerServletBase.java:89)
at org.sourceid.websso.servlet.EnforcerServletBase.doGet(EnforcerServletBase.java:138)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:735)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:848)
at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:669)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1448)
at org.sourceid.servlet.filter.NoCacheFilter.doFilter(NoCacheFilter.java:55)
at org.sourceid.servlet.filter.AbstractHttpFilter.doFilter(AbstractHttpFilter.java:53)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419)
at org.sourceid.websso.servlet.ProxyFilter.doFilter(ProxyFilter.java:34)
at org.sourceid.servlet.filter.AbstractHttpFilter.doFilter(AbstractHttpFilter.java:53)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419)
at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:455)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:557)
at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)
at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1075)
at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:384)
at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1009)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
at org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:255)
at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:154)
at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)
at org.eclipse.jetty.server.handler.StatisticsHandler.handle(StatisticsHandler.java:126)
at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)
at org.eclipse.jetty.server.Server.handle(Server.java:368)
at org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:488)
at org.eclipse.jetty.server.AbstractHttpConnection.headerComplete(AbstractHttpConnection.java:932)
at org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.headerComplete(AbstractHttpConnection.java:994)
at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:640)
at org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:235)
at org.eclipse.jetty.server.AsyncHttpConnection.handle(AsyncHttpConnection.java:82)
at org.eclipse.jetty.io.nio.SslConnection.handle(SslConnection.java:196)
at org.eclipse.jetty.io.nio.SelectChannelEndPoint.handle(SelectChannelEndPoint.java:628)
at org.eclipse.jetty.io.nio.SelectChannelEndPoint$1.run(SelectChannelEndPoint.java:52)
at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608)
at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:543)
at java.lang.Thread.run(Thread.java:722)
Caused by: org.sourceid.saml20.adapter.AuthnAdapterException: org.sourceid.saml20.adapter.AuthnAdapterException: Could not obtain attributes from the IdP Authentication Service.
at org.sourceid.saml20.profiles.idp.IdpAdapterSupportBase.lookupAuthN(IdpAdapterSupportBase.java:141)
at org.sourceid.saml20.profiles.idp.HandleAuthnRequest.doResume(HandleAuthnRequest.java:245)
at org.sourceid.saml20.profiles.ResumableRequestHandlerBase.exeResume(ResumableRequestHandlerBase.java:66)
at org.sourceid.websso.profiles.ResumableRequestHandlerBase.resume(ResumableRequestHandlerBase.java:50)
... 43 more
Caused by: org.sourceid.saml20.adapter.AuthnAdapterException: Could not obtain attributes from the IdP Authentication Service.
at com.pingidentity.adapters.opentoken.IdpAuthnAdapter.lookupAuthNHelper(IdpAuthnAdapter.java:159)
at com.pingidentity.adapters.opentoken.IdpAuthnAdapter.lookupAuthN(IdpAuthnAdapter.java:78)
at org.sourceid.websso.authn.AdapterAuthnProcessor.lookupAuthN(AdapterAuthnProcessor.java:96)
at org.sourceid.saml20.profiles.idp.IdpAdapterSupportBase.lookupAuthN(IdpAdapterSupportBase.java:132)
... 46 more
12:49:54,238 DEBUG [HttpServletRespProxy] flush cookies: adding Cookie{PF=F1OpbNzE8iYqMJq6UcG5waTbQaafveigalePVvdwcdta; path=/; maxAge=-1; domain=null}
12:49:54,239 DEBUG [BindingServiceImpl] Not transporting protocol response message because the HTTP response has been committed (this is a normal condition usually due to an adapter or other component redirecting the user or writing its own content to the response).
And I think this exception occurs when PingFederate cannot find the OpenToken generated by the application. But the cookie is present in the browser.
And the PingFederate application shows the error page:
And my IdP Adapter settings look like:
cookie-path=/
use-verbose-error-messages=false
cipher-suite=2
obfuscate-password=true
session-cookie=false
password=Kyx+ElfeRRDkPRYZoVF3BQ==
token-name=opentoken
cookie-domain=.banka.liferay.com
token-notbefore-tolerance=0
token-renewuntil=43200
use-sunjce=false
secure-cookie=false
token-lifetime=300
use-cookie=true
I am struggling to find out the cause of this problem. But with no success.
What could be the cause of this problem? Is it related to PingFederate, or am I missing something in my configuration?
And here is the screenshot of the IdP Adapter:
And here is the summary of the SP Connection:
This is generally an issue with the IdP adapter that you have assigned. I don't see in the log that it even redirects to the adapter at all. Did you DIY your IdP adapter, or modify/use one of ours?
I did it myself. However, I used the default server settings as provided.
Can you post a screenshot of your IdP Adapter Summary Screen, as well as your Connection Summary Screen?
@AndyK.-PingIdentity I have added the screenshots of the IdP adapter and the SP connection summary.
So, when you go to /idp/StartSSO.ping?partner=..., do you get properly redirected to your authentication service at banka.liferay.com:8080?
Yes, I do get properly redirected to banka.liferay.com:8080, and from there I redirect again to the /resume URL.
Last question... I hope! Can you paste the actual text of the opentoken cookie in, please? Not just an image?
Here is the Content of cookie : T1RLAQLE29CYKQCqEn5V9Ih4hjg1UAL5FxD8wpejuXasaDHaWC9aq8vdAACgg_y5insv4_mZk5AHiJW-qIYp0ODiU1pZ2tHylc9V5-fWslFGVZ7SG1Kfez7faK8XRDJTMm6ciEDowCf2NnXlm0I4mKOsPbXAZch9hiSLrEll_FYqdiHQS_i7EIlago4QYIGZ8hhhb8WKyLXJC6uiT7QOvq_RiPBLShhp7HvRpP4KyyBeM12YP_aPJX6mzLlVv11vor7xO2s8EzGXohW73w**
And a quick question: do we need to set the OpenToken when the endpoint /startSSO.ping is invoked, or when PF redirects to the authentication service? Just clearing my thoughts; I have tried both.
The opentoken decodes properly. :-/ So, stepping through... You redirect for authn, you get an opentoken at authn, and it decodes properly, based on the settings you provided... Are you using the Java integration kit, or just the reference adapter?
First, let me revisit my steps: a user logs in to the IdP application, which redirects to StartSSO.ping; then the PF IdP server redirects to the authentication service; and then from the authentication service the resume URL is called. Is this the correct flow?
And yes, I have used the Java integration kit's provided OpenToken adapter.
that's the correct flow... As you return to the resume, PingFed should decrypt the token, consume the attributes, and then redirect you on to the SP side...
Hey @AndyK.-PingIdentity, thanks for listening to my queries. I found a similar kind of error-handling technique in the PingFederate forum: https://www.pingidentity.com/support/solutions/index.cfm/Troubleshooting-Page-Expired-Error
Could it be that you're redirected to the resume URL with the hostname being localhost? In that case your browser won't send a cookie issued to .banka.liferay.com to the server, hence the error.
I didn't get what you are implying. OK, here is the flow: after login through banka.liferay.com, https://localhost:9031/idp/startSSO.ping is invoked through redirection. After that, banka.liferay.com/web/my-bank/home?resume=/idp/bUDlM/resumeSAML20/idp/startSSO.ping&spentity=sbwb-ppc-idp is executed, and then it is redirected to the PingFederate server location https://localhost:9031/idp/bUDlM/resumeSAML20/idp/startSSO.ping. Isn't this the supposed flow of redirection?
What I meant is that your browser won't send a cookie set for .banka.liferay.com to localhost. You'll either want to change the Base URL of PingFederate so that it's on the same cookie domain as the application, or you can opt to send the OpenToken as a query parameter.
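The cookie-domain point can be sketched with a tiny, simplified version of the browser's domain-match rule (an illustration only; real matching follows RFC 6265 and is stricter):

```python
def cookie_sent_to(host, cookie_domain):
    """Simplified check: is a cookie set for `cookie_domain` sent to `host`?

    A domain cookie like ".banka.liferay.com" matches the bare domain and
    any subdomain of it, and nothing else.
    """
    bare = cookie_domain.lstrip(".")
    return host == bare or host.endswith("." + bare)

print(cookie_sent_to("banka.liferay.com", ".banka.liferay.com"))      # True
print(cookie_sent_to("www.banka.liferay.com", ".banka.liferay.com"))  # True
print(cookie_sent_to("localhost", ".banka.liferay.com"))              # False: the cookie never reaches PingFederate
```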
Thanks for the suggestion. I think I will try using the query parameter for now. If it works, I can then correct the mistake in my cookie-based method.
Thanks for the suggestion. At least I got rid of the error logged above. It is not the way I want it (an OpenToken as a query parameter), but I will figure out a way to handle the cookie.
Good call, Mehmet! Completely missed that. [sigh] Forest from the trees, and all that.
How to display a foreign key value from the database in a TextBlock on an update form in WPF (MVVM / Entity Framework)?
OpcUaEndpoint with data
Mode and Description are displayed in the TextBlock, but the foreign key value OpcUaEndpoint is not being displayed.
What am I missing here? Please help.
Xaml markup:
<ItemsControl Name="icName" Margin="10,10,10,10" ItemsSource="{Binding Scans}" Grid.IsSharedSizeScope="True" Grid.ColumnSpan="6" MouseDown="icName_MouseDown">
<ItemsControl.ItemTemplate>
<DataTemplate>
<Border x:Name="Border" Padding="8" BorderThickness="0 0 0 1" BorderBrush="{DynamicResource MaterialDesignDivider}">
<Grid>
<Grid.ColumnDefinitions>
<ColumnDefinition SharedSizeGroup="Checkerz" />
<ColumnDefinition />
</Grid.ColumnDefinitions>
<ToggleButton VerticalAlignment="Center" IsChecked="{Binding IsSelected, UpdateSourceTrigger=PropertyChanged}" Style="{StaticResource MaterialDesignActionLightToggleButton}" Content="{Binding Code}" Grid.Column="0" />
<StackPanel Margin="30 0 0 0" Grid.Column="1">
<TextBlock FontWeight="Bold" Text="{Binding Name}" />
<TextBlock>
<Run Text="Mode: " />
<Run Text="{Binding Mode}" />
<Run Text=", " />
<Run Text="{Binding Description}" />
<Run Text="{Binding Scan.OpcUaEndpointRefID}" />
</TextBlock>
</StackPanel>
</Grid>
</Border>
</DataTemplate>
</ItemsControl.ItemTemplate>
</ItemsControl>
Code behind:
public ControlCenterView()
{
InitializeComponent();
var viewmodel = new ScanListViewModel();
ScanList.DataContext = viewmodel;
}
ViewModel:
public DB.model.OpcUaEndpoint OpcUaEndpoint
{
get { return _opcUaEndpoint; }
set
{
_opcUaEndpoint = value;
OnPropertyChanged("OpcUaEndpoint");
}
}
public IEnumerable<DB.model.OpcUaEndpoint> OpcUaEndpointsValues
{
get { return GetOpcUaEndpoints(); }
}
public IEnumerable<DB.model.Scan> ScanValues
{
get { return GetScans(); }
}
private IList<DB.model.OpcUaEndpoint> GetOpcUaEndpoints()
{
_opcUaEndpointsNames = new List<DB.model.OpcUaEndpoint>();
foreach (var opcpointname in opcUaEndpointsService.GetAll())
{
_opcUaEndpointsNames.Add(opcpointname);
}
return _opcUaEndpointsNames;
}
//OpcUaEndpoint Model
public class OpcUaEndpoint : PlantObject
{
/// <summary>
/// Get or Sets the Identifier.
/// </summary>
[Key]
public int OpcUaEndpointId { get; set; }
/// <summary>
/// name of the OpcUaEndpoints.
/// </summary>
[MaxLength(80)]
public string Name { get; set; }
/// <summary>
/// OpcUaEndpoint.
/// </summary>
[MaxLength(255)]
public string Endpoint { get; set; }
/// <summary>
/// description of the OpcUaEndpoints
/// </summary>
[MaxLength(255)]
public string Description { get; set; }
/// <summary>
///UserName of the OpcUaEndpoints
/// </summary>
[MaxLength(80)]
public string Username { get; set; }
/// <summary>
/// Password of the OpcUaEndpoints
/// </summary>
[MaxLength(512)]
public string Password { get; set; }
/// <summary>
/// RequestTimeout of the OpcUaEndpoints
/// </summary>
public int RequestTimeout { get; set; }
public ICollection<Scan> Scans { get; set; }
[NotMapped]
public bool IsSelected { get; set; }
[NotMapped]
public string SearchData { get; set; }
public override string ToString()
{
return Name;
}
}
//Scan Model
//Its OpcUaEndpointId key maps to the foreign key OpcUaEndpointRefId in the Scan model,
//and Scan is bound to the ItemsControl in XAML.
public class Scan : PlantObject
{
[Key]
public int Id { get; set; }
public int ScanId { get; set; }
/// <summary>
/// name of the scan Mode.
/// </summary>
[MaxLength(80)]
public string Name { get; set; }
[MaxLength(255)]
public string Description { get; set; }
public string Code { get; set; }
/// <summary>
/// Id of the OPC-UA Endpoint for the Scan
/// </summary>
[ForeignKey("OpcUaEndpoint")]
public int OpcUaEndpointRefId { get; set; }
/// <summary>
/// Foreign key to OpcUaEndPoint
///
/// </summary>
public OpcUaEndpoint OpcUaEndpoint { get; set; }
[MaxLength(255)]
public string DrivingTag { get; set; }
[MaxLength(80)]
public string Operator { get; set; }
[MaxLength(80)]
public string Value { get; set; }
[MaxLength(80)]
public string DataMode { get; set; }
[NotMapped]
public bool IsSelected { get; set; }
public bool IsEnable { get; set; }
public DateTime? StartedOn { get; set; }
public DateTime? StoppedOn { get; set; }
[MaxLength(80)]
public string Status { get; set; }
}
Please post the code for your OpcUaEndpoint model
@NeilB Posted, sir. Please find it above.
It's not clear if your intention is to reference to a property in the OpcUaEndpoint or a property in the view model. I think the problem is that binding directly to an instance of OpcUaEndpoint can't convert to string.
Try this:
<Run Text="{Binding OpcUaEndpointRefId}" />
Or This:
<Run Text="{Binding OpcUaEndpoint.OpcUaEndpointID}" />
Not working, sir. I updated my XAML code and Scan model. Please correct me if I'm missing something there.
You are trying to bind directly to the OpcUaEndpoint object in the Scan model. You must either bind to a property of the OpcUaEndpoint object, or a property of the Scan model like OpcUaEndpointRefID. What binding error do you see when the project is run? Do you see any text in the run bound to OpcUaEndpoint?
When I bind OpcUaEndpoint.OpcUaEndpointId, it shows neither an error nor any value. And when I bind OpcUaEndpoint directly, it shows $exception {The runtime refused to evaluate the expression at this time.} System.StackOverflowException.
What can i do sir then?
I want to do the same in a combobox. Can you look into the code and correct me, please?
https://stackoverflow.com/questions/52109553/how-to-bind-data-value-to-combobox-on-while-update-form-in-wpf-mvvm
Sir, please find the hyperlink of the image above at the start of the question (OpcUaEndpoint with data), where OpcUaEndpoint is showing data.
First things first, I can't see where you set your OpcUaEndpoint public property. If you are setting your private field directly, OnPropertyChanged("OpcUaEndpoint") won't be reached and it won't be successfully updated. We can't see what you are binding to that StackPanel; your XAML is missing the best part, so please update it with the full Grid (you can delete private things if you want to).
Show us where you set OpcUaEndpoint and be sure you aren't setting _opcUaEndpoint.
On the other hand, you are setting a whole entity to the Run Text property, which will cause "problems". You should bind it like this: <Run Text="{Binding OpcUaEndpoint.OpcUaEndpointId}" />
Sir, here it is
public DB.model.OpcUaEndpoint OpcUaEndpoint
{
get { return _opcUaEndpoint; }
set
{
_opcUaEndpoint = value;
OnPropertyChanged("OpcUaEndpoint");
}
}
And I updated my XAML code.
I mean, where do you set your OpcUaEndpoint ? Where do you do anything similar to this OpcUaEndpoint = new OpcUaEndpoint() ? Where do you update your OpcUaEndpoint property ?
No sir, I did not set or update the OpcUaEndpoint property anywhere in my code.
Then how do you expect it to be shown? If it has no value, it won't show anything.
Sir, please find the hyperlink of the image above at the start of the question (OpcUaEndpoint with data), where OpcUaEndpoint is showing data.
Hibernate - Formula - invalid Query
I have a question about Hibernate Formula.
Let us assume that I have an entity "garage" in which there are numerous "cars". Now I would like to add a property in the car class that tells me which number of car it is in the garage.
I created this formula on the car-class:
@Formula("(SELECT pos FROM (SELECT rownum AS pos, c.id FROM garage g "
+ "LEFT JOIN car c ON g.id = c.garage_id "
+ "WHERE g.id = GARAGE_ID "
+ "ORDER BY c.date) WHERE id = CAR_ID)")
The entity car has both properties (ID and GARAGE_ID). How can I now incorporate these into the Formula?
Furthermore I have the problem that Hibernate always generates an invalid SQL from it.
Pos becomes query0_.pos. However, this is not valid.
The rule with the SQL fragment inside the @Formula annotation is that any "bare" (unqualified) column reference is going to be qualified by the alias of the entity the formula belongs to.
So, for example, in this @Formula:
@Formula("(select upper(isbn) || ' ' || upper(a.name) from authors a where a.id=author_id)")
The columns which are going to get a qualifier attached to them are isbn and author_id.
So in your example:
you should add a qualifier to pos to suppress this behavior, but
you should leave id and garage_id unqualified since they belong to the car entity.
However, there's something that seems rather confusing in your code. Why would you be joining back to the car table here? That doesn't look quite right.
Thank you for your reply.
The situation described above is only an example. I need the position in the garage in the car class. You may have a nicer idea, or a simple SQL statement :)
Ubuntu 20.04 LTS mouse and keyboard not working on login screen
Everything went well on Ubuntu 18.04 LTS, but once I upgraded to Ubuntu 20.04 LTS, my mouse and keyboard no longer work on the login screen.
In fact, they work for a few seconds at the beginning, but then the screen seems stuck and nothing works anymore; I can only force-restart my computer.
What could I do to fix this?
I had the same problem and I tried several things, and finally, this is what worked for me:
First, you need to enter recovery mode. Enter the GRUB menu by rebooting and then pressing Esc if you boot using UEFI or Shift if you boot using BIOS. You may have to press the key multiple times or hold the key down.
When you see the menu, use the arrow keys to navigate and select Advanced options for Ubuntu.
Select the line containing the latest version of the kernel but with (recovery mode) at the end and press Enter.
After you enter recovery mode you will see the Recovery Menu. Navigate to the line containing root (Drop to root shell prompt). Press Enter, and when you see the prompt "Press Enter for maintenance", press Enter again.
Run the following commands:
sudo apt-get update --fix-missing
sudo apt-get upgrade -y
sudo apt-get install -y xserver-xorg-input-all
Finally to restart run the command:
sudo reboot
When the login page appears, your mouse and keyboard should work normally.
Note: if you get an error Temporarily unable to fetch ....com when running apt-get, press Ctrl+D from the root prompt to go back to recovery menu and select Network in the drop down to access the internet.
Thank you, this worked well for me. I was fortunate to have SSH remote access to my machine, so I simply ran the apt commands you shared without booting into BIOS.
Thanks, it also worked for me. But why do we need this "xserver-xorg-input-all"? What does it do exactly? I thought it was nvidia; I uninstalled it a couple of times.
Some of us noobs were confused to arrive at the GRUB command line rather than the GRUB menu (too many Esc presses). Solution: in the GRUB terminal type "normal", press Return, then Esc; you are then back in the menu.
THANKS A LOT, it works
This worked for me in Ubuntu 22.04. Could someone please explain why this happened?
My god thank you! My PC was basically a brick after a failed Wacom update but you saved it!
The solution worked for me on Ubuntu 22.04, I didn't need to upgrade xserver-xorg-input-all and don't forget to use sudo while executing the upgrade command
Second Bachelors in Maths after Masters in Optics for Theretical Physics Phd
I completed my Masters (a 5-year integrated course) in Optics recently with an average CGPA. I have a couple of project experiences in theoretical physics, but no publications. My masters course had rigorous higher mathematics papers, yet I find that I have very little knowledge and experience in doing even undergraduate math, though I am very much interested in it, maybe because my course was designed to give more importance to experimental physics. Thus, I started improving my math using online lectures and the Mathematics Stack Exchange forum, and I can feel I am able to gain much more than what I could during the course. But many told me not to take a gap by spending time on maths after my masters, as it would affect my chances of PhD admission, as I am already 26.
So is it a good idea to do a second bachelors degree in Mathematics(distant education) before applying for a Phd in theoretical Physics in Europe/Australia ?
Or is there a better way to compensate the time I spend on improving maths myself ?
Note: The reason I was thinking of doing a bachelor's while improving my math is, that way at least I could show that I was doing something while I was away from academia.
So is it a good idea to do a second bachelors degree in Mathematics(distant education) before applying for a Phd in theoretical Physics in Europe/Australia ?
In general, getting another bachelors degree after getting a masters degree is not seen very favorably. I understand your thought process: show the admissions committee you'll do grade-A work in math.
In reality, it's much more likely to signal to the admissions committee you don't have a clear purpose, and make them wonder why you didn't just switch to a masters in math. Furthermore, since you already have an advanced degree, they are less likely to be impressed you went back to undergrad and aced another bachelors degree.
The admissions committee is most likely to weight your academic performance doing your masters degree heavily. If you are truly set on switching fields, I would recommend you get another masters degree in mathematics OR applying for a PhD in mathematics.
In some countries you might not be able to get directly a master's degree in maths after a physics master's. You'll need the undergrad. Unless maybe it is a bad master's, in which case it might not be worthwhile. For a PhD, it never ceases to amaze me how apparently anyone can apply to them without the qualifications needed and get the positions. This is crazy. So yeah, if OP wants to switch to maths then he might have to start from undergrad, maybe third year all the way up. If OP wants to do theoretical physics then the best way is probably a master's in theoretical physics rather than math
I've known several European PhD students who swapped from Physics to Math or Math to Computer Science (or other fields with significant overlap). Many universities will work with students to figure out how to fill in the gaps.
I think maybe you are right. But undergraduate maths is really necessary for theoretical work, isn't it? People often suggest taking a mathematical physics paper and say I'll be fine with it. But I feel I should improve my math separately; I don't know if only I have that feeling.
@ss1729 - I've known several PhDs who switched fields from undergrad to grad school. There is nothing stopping you from getting a second bachelors, but I think you should look at doing a 2nd masters and finding a professor who is sympathetic to your case.
A bachelor's degree is not just a few math courses: it comprises a lot of other things, which is what makes it a 4-year education. Don't get the degree. Just sit in on a few courses, read on your own, and join the master's courses. End up with a masters if you really need it, but as long as you learn what you need, you can do the PhD.
@ChrisRackauckas - Thanks for pointing out that a BS degree is more than just a few extra courses. You've also got to consider how long you want to be in school. 4 yr BS + 2 yr MS = 6 years already. Your time isn't limitless.
Macro Condition (if cell is empty select the cell in previous column)
Hello, I am making a macro for a certain task that checks the date order of each column in descending order. I want a condition for column "AM" (< Range("AM" & i).Value Then) such that if it is empty, the macro must check the previous cell that has a value (e.g. column AK).
Dim lastRowAP As Long
lastRowAP = Cells(Rows.Count, "AP").End(xlUp).Row
For i = 4 To lastRowAP
    If Range("AP" & i).Value = "" And Range("AO" & i).Value < Range("AM" & i).Value Then
        Range("AO" & i).Interior.Color = RGB(255, 0, 0)
        Range("CA" & i).Value = "G"
    End If
Next i
I haven't tried anything as I am new to this. Hope you can help me.
Please try.
Sub Macro()
    Dim lastRowAP As Long
    Dim sValue, i As Long, j As Long
    lastRowAP = Cells(Rows.Count, "AP").End(xlUp).Row
    For i = 4 To lastRowAP
        If Range("AP" & i).Value = "" Then
            sValue = Range("AM" & i).Value
            ' If cell AMi is blank, locate the non-blank cell on the left
            If sValue = "" Then
                For j = Cells(1, "AM").Column - 2 To 1 Step -2
                    sValue = Cells(i, j).Value
                    If sValue <> "" Then Exit For
                Next
            End If
            If sValue <> "" And Range("AO" & i).Value < sValue Then
                Range("AO" & i).Interior.Color = RGB(255, 0, 0)
                Range("CA" & i).Value = "G"
            End If
        End If
    Next i
End Sub
Hello, thank you for answering me, but it is not working. Also, I forgot to mention that it needs to skip 1 previous column and then check the second one from the left.
Are you talking about the logic of locating the non-blank cell? I would suggest you share the details of the logic with sample data.
Yes, but I need to skip 1 column because it has different content: from AM I will skip AL and proceed to column AK, and if column AK is blank again I will skip column AJ and check AI instead, and so on.
Thank you for the help. It is currently working; however, I copied the code and changed the columns to check the next columns for the same purpose, and I get a 'Compile error: Duplicate declaration in current scope' error.
Update the post with the latest code. It's difficult to say anything without seeing the code.
Correlation of location to specific numerical values
I'm trying to calculate the correlation of specific locations (I have latitude and longitude data) to specific numerical values. Does anyone know any particular methods to do this? I have calculated correlations before, but not involving point data or latitude and longitude data.
Reading strings from text file as variable in python
I am pretty new to programming, and have been searching for the answer to this, but I think I have been asking the wrong question in my searches.
Basically, I am trying to interpret strings from a .csv file as variables to use as arguments.
Anyway, I have the following code:
import csv
import libtcod

a = libtcod.Color(0, 176, 240)
b = libtcod.Color(100, 155, 200)
c = libtcod.Color(80, 55, 100)  # 055 is an invalid literal in Python 3

def create_room(x, y, color):
    pass

y = 1
with open('Map.txt') as csvfile:
    readCSV = csv.reader(csvfile, delimiter=',')
    for row in readCSV:
        x = 1
        for column in row:
            create_room(x, y, column)  # column is the string from the file
            x += 1
        y += 1
Which runs through a .csv file and gives the column string for every x and y value in the file.
So, if the .csv file reads 'a,b,c', I want to call create_room(1, 1, a), create_room(2, 1, b), and create_room(3, 1, c).
Instead I think I am calling create_room(1, 1, 'a'), create_room(2, 1, 'b'), and create_room(3, 1, 'c'), which doesn't give me what I want.
This seems like it should be pretty easy to solve, but I either haven't found the answer or possibly found an answer but didn't realize it, heh (again, I am quite new to Python and programming in general).
Any help would be appreciated!
Instead of three variables named a, b, and c, you should create a dictionary. That way you can look up the strings 'a', 'b', and 'c' to get their respective values:
# Initialize a dictionary with the appropriate values
color_lookup = {
    'a': libtcod.Color(0, 176, 240),
    'b': libtcod.Color(100, 155, 200),
    'c': libtcod.Color(80, 55, 100)
}

# Then call create_room like this
create_room(x, y, color_lookup[column])
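A minimal, self-contained sketch of this approach (libtcod is replaced here by a stand-in Color class so the example runs on its own; in the real program the values would stay libtcod.Color(...)):

```python
# Stand-in for libtcod.Color so the sketch is self-contained
class Color:
    def __init__(self, r, g, b):
        self.r, self.g, self.b = r, g, b

# Map the one-letter strings found in the CSV to actual color objects
color_lookup = {
    'a': Color(0, 176, 240),
    'b': Color(100, 155, 200),
    'c': Color(80, 55, 100),
}

rooms = []

def create_room(x, y, color):
    rooms.append((x, y, color))

# Simulate one CSV row "a,b,c" as csv.reader would yield it
row = ['a', 'b', 'c']
y = 1
for x, column in enumerate(row, start=1):
    # column is the *string* 'a'; the dictionary resolves it to the object
    create_room(x, y, color_lookup[column])
```

Each call now receives the Color object itself rather than the one-letter string, which is exactly the difference the question describes.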
Order of an entire function represented as an infinite product
Question- What is the order of growth of the entire function given by the infinite product of $1-(z/n!)$ where $n$ goes from 1 to infinity?
My thoughts- I have already proven that the infinite product converges to an entire function. One can see that the zeros of the infinite product are at $n!$.
So, by Hadamard's factorization theorem, I can deduce that the order $p$ is such that $0\leq p <1$.
I suspect the order of growth is zero, but using inequalities like
$\lvert \log(1+z)\vert < \lvert z\rvert$ for $\lvert z\rvert < 1$ and $e^n < n!$ for $n\geq 6$, all I could get was the order of growth is less than $1$.
I feel like there should be some simple inequality argument that shows us the order is zero.
Can somebody give me a hint?
Thanks.
Claim: $p = 0$
Let $f(z) = \prod_{i=1}^\infty \left(1-\frac{z}{i!}\right)$
I will show that for any $\epsilon >0$, the order of growth $p \leq \epsilon \;$ i.e. I will show $\exists \; A, B >0$ such that $|f(z)| \leq Ae^{B|z|^\epsilon}$.
$\begin{align}|f(z)| = |\prod_{i=1}^\infty (1-\frac{z}{i!})| &= |\prod_{i! \leq |z|} (1-\frac{z}{i!})| \;|\prod_{i! > |z|} (1-\frac{z}{i!})| \\
&\leq \; \prod_{i! \leq |z|} (1+\frac{|z|}{i!}) \; \prod_{i! > |z|} (1+\frac{|z|}{i!}) \end{align}$
Let us fix $z$, and suppose $\;n! \leq |z| < (n+1)!$ and $n\geq 6$ $\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;$ (1)
Then, we use the inequality $e^n < n!$ for $n\geq 6$ to get $e^n < n! \leq |z| \implies n \leq \log|z|$ $\;\;\;\;\;\;\;\;\;$ (2)
Now,
$\begin{align}\prod_{i! \leq |z|} \left(1+\frac{|z|}{i!}\right) &\leq \prod_{i! \leq |z|} \frac{2|z|}{i!} \;\;\;\text{(since } \tfrac{|z|}{i!}\geq 1\text{)} \\ & \leq \prod_{i! \leq |z|} |z| \\
& \leq |z|^n \;\;\;\;\;\;\;\;\;\;\;\text{(since } n! \leq |z|\text{)} \\
& \leq |z|^{\log|z|} \;\;\;\;\;\;\;\; \text{by }\mathbf{(2)} \\
\end{align}$
Note that $|z|^{\log|z|} = e^{(\log|z|)^2}$, and for any $\epsilon > 0$, $\frac{(\log|z|)^2}{|z|^\epsilon} \rightarrow 0$ as $|z| \rightarrow \infty$.
Thus, $\exists \; C > 0, D > 0$ such that
$$(\log|z|)^2 \leq C + D|z|^{\epsilon} \implies |z|^{\log|z|} \leq e^{C +D|z|^{\epsilon}}$$
Also,
$$\begin{align}\prod_{i! > |z|} \left(1 + \frac{|z|}{i!}\right) &= e^{\sum_{i! > |z|}\log\left(1+\frac{|z|}{i!}\right)} \\
&\leq e^{\sum_{i! > |z|} \frac{|z|}{i!}} \\
&= e^{\frac{|z|}{(n+1)!}\sum_{i! > |z|} \frac{(n+1)!}{i!}} \\
& \leq e^{\frac{2|z|}{(n+1)!}} \;\;\; \text{(using }\mathbf{(1)}\text{)} \\
& \leq e^2
\end{align}$$
If $|z| < 6!$, then $|f(z)|$ is bounded.
Thus, for any $\epsilon >0$, $\;$$\exists \; A, B >0$ $\;$ such that $|f(z)| \leq Ae^{B|z|^\epsilon}$.
Hence, $p = 0$
QED
Can't install python avro library
I cannot seem to install certain packages that have avro as a dependency. In fact, I cannot install avro at all.
$ pip install avro==1.10.0
Collecting avro==1.10.0
Using cached avro-1.10.0.tar.gz (67 kB)
WARNING: Requested avro==1.10.0 from https://files.pythonhosted.org/packages/3c/6f/75fb40defc4e2316d5088f635223b57518f59320a13fc12f430a17e4dc48/avro-1.10.0.tar.gz#sha256=bbf9f89fd20b4cf3156f10ec9fbce83579ece3e0403546c305957f9dac0d2f03, but installing version file-.avro-VERSION.txt
WARNING: Discarding https://files.pythonhosted.org/packages/3c/6f/75fb40defc4e2316d5088f635223b57518f59320a13fc12f430a17e4dc48/avro-1.10.0.tar.gz#sha256=bbf9f89fd20b4cf3156f10ec9fbce83579ece3e0403546c305957f9dac0d2f03 (from https://pypi.org/simple/avro/). Requested avro==1.10.0 from https://files.pythonhosted.org/packages/3c/6f/75fb40defc4e2316d5088f635223b57518f59320a13fc12f430a17e4dc48/avro-1.10.0.tar.gz#sha256=bbf9f89fd20b4cf3156f10ec9fbce83579ece3e0403546c305957f9dac0d2f03 has inconsistent version: filename has '1.10.0', but metadata has 'file-.avro-VERSION.txt'
ERROR: Could not find a version that satisfies the requirement avro==1.10.0 (from versions: 1.3.3, 1.4.1, 1.5.0, 1.5.1, 1.5.2, 1.5.3, 1.5.4, 1.6.0, 1.6.1, 1.6.2, 1.6.3, 1.7.0, 1.7.1, 1.7.2, 1.7.3, 1.7.4, 1.7.5, 1.7.6, 1.7.7, 1.8.0, 1.8.1, 1.8.2, 1.9.0, 1.9.1, 1.9.2, 1.10.0, 1.10.1, 1.10.2) ERROR: No matching distribution found for avro==1.10.0
I get the same results for 1.10.1 and 1.10.2
FYI, here are the packages I'm trying to install
confluent-kafka
apache-beam
requests
certifi
I'm using pip==21.2.4; it doesn't work with pip==21.1.3 either.
Try downloading avro from https://pypi.org/project/avro/1.10.0/ and installing it manually using python setup.py install
@Matin, for real? That's so bad though... How come it's not possible to go through the package manager?
That seems so strange. I checked just now that the latest version is 1.10.2. You can try pip install avro without a specific version and see what happens (it should install the compatible version for you), then post the result so we can dig more.
Try upgrading both setuptools and, if you haven't already, pip.
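If that helps, a minimal sequence would look like this. The assumption (hinted at by the "inconsistent version" warning in the question) is that an older pip/setuptools is mis-parsing avro's sdist metadata, so upgrading both first and retrying usually resolves it:

```shell
# Assumption: an older pip/setuptools is mis-handling avro's sdist metadata.
# Upgrade both inside the active environment, then retry the install.
python -m pip install --upgrade pip setuptools
python -m pip install avro==1.10.0
```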
HTML5 datalist tag is not working as expected in Safari
I have a datalist tag which allows my users to have a suggestion box. Now I know that this feature is not supported in Safari. So what can I do to solve this issue?
Here is my code - I'm actually populating my values dynamically from the database:
<select id="select_departure_city" style="border-radius:6px" onchange="this.nextElementSibling.value = $('#select_departure_city option:selected').text().trim()"></select>
<input id="input_departure_city" class="form-control admin-input width-80 height-34p padding-0" name="departure_city" type="text" list="listDepartureCity" />
<datalist id="listDepartureCity" ></datalist>
Before posting this I tried many solutions, but none of them worked for me.
I tried:
HTML5 datalist tag is not populating in Safari
Datalist not working in Safari
GitHub Help1
GitHub Help2
As in these posts, a select tag within the datalist tag is the suggested solution, like this:
<datalist id="languages">
<select>
<option value="JavaScript">JavaScript</option>
<option value="Haskell">Haskell</option>
</select></datalist>
But in my case the option tags are not placed within the select tag after populating dynamically. My code after adding the select is as below:
<datalist id="listDepartureCity">
<select></select>
<option value="JavaScript">JavaScript</option>
<option value="Haskell">Haskell</option>
</datalist>
How can I solve this?
The problem you got in Safari is not about those options not being inside the select tag; in fact, you can easily append options into the select. Safari just doesn't handle dynamically changing datalist options well.
If you need to use a feature that's not widely supported by all browsers, chances are you'll need a polyfill. Here's one for datalist that might work for you: https://github.com/thgreasi/datalist-polyfill
I tried this but it did not work for me, since in our case all the option tags are not placed within the select tag, as I mentioned in the post.
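Since Safari's fallback needs real option elements inside the select, one workable approach when populating dynamically is to build the option markup yourself and write it into the inner select. A small sketch (the id listDepartureCity comes from the question; the helper name and the escaping set are illustrative):

```javascript
// Illustrative helper: turn a list of city names into <option> markup.
function buildOptionsMarkup(cities) {
  return cities
    .map(function (city) {
      // Escape the characters that matter inside an attribute / text node
      var safe = String(city)
        .replace(/&/g, "&amp;")
        .replace(/</g, "&lt;")
        .replace(/>/g, "&gt;")
        .replace(/"/g, "&quot;");
      return '<option value="' + safe + '">' + safe + "</option>";
    })
    .join("");
}

// In the browser (Safari fallback), populate the inner <select>:
// document.querySelector("#listDepartureCity select").innerHTML =
//   buildOptionsMarkup(["London", "Paris"]);
```

After the data arrives (e.g. in an AJAX callback), calling this once re-populates the fallback select, so the options end up inside it rather than after it.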
Database-Centric Vs Tiered Architecture
Consider 3 applications:
One is for a fitness club that has a backend database and offers some services on the website,
another is an e-commerce site, with heavy dependence on Database,
a point of sale application with heavy use of database
I need to choose among tiered architecture and database-centric, as defined in this book.
I can't seem to find resources that discuss exact motivations of choosing one over the other, apart from this page, which states:
using stored procedures that run on database servers, as opposed to
greater reliance on logic running in middle-tier application servers
in a multi-tier architecture. The extent to which business logic
should be placed at the back-end versus another tier is a subject of
ongoing debate. For example, Toon Koppelaars presents a detailed
analysis of alternative Oracle-based architectures that vary in the
placement of business logic, concluding that a database-centric
approach has practical advantages from the standpoint of ease of
development and maintainability.
It seems to me point of sale should follow database-centric, and fitness center application should follow tiered architecture, but I am not sure about the ecommerce. Any pointers are greatly appreciated.
We can create an application with either architecture style: tiered or database-centric. The selection of the architecture must be based on the requirements of the system.
Here are a few of them :
Consistency
Availability
Amount of data the application would be dealing with
Type of data: structured (tables & rows) or unstructured (documents, media files, sensor data)
Amount of traffic the application should withstand
Latency per request
and there are many more to look for as per our application needs.
As I understand it, the database-centric style should only be used for applications that rely heavily on features of the database such as stored procedures and triggers. Modern applications such as e-commerce websites should generally not be database-centric: these applications are ever-evolving and need to be highly scalable to adjust appropriately to future demands.
Having said that, it is not written in stone that we need to use a specific architecture for a specific kind of application. We need to carefully evaluate the requirements and expectations of the users of our application, which will provide the insight needed to pick the appropriate architectural style.
Would like to recommend the following for further reading.
Designing Data-Intensive Applications: The Big Ideas Behind Reliable, Scalable, and Maintainable Systems
web-application-architecture-101
What do bones need to grow outside the body?
In what environment outside of a living organism can bones grow?
If I were to put an arbitrary human bone in a room and attach a bunch of tubes to it, or dunk it in some kind of solution, where would I have to connect the tubes and what stuff would have to be in the tubes/solution to make the bone grow as if it were inside a living organism?
Are you counting being sheathed in velvet like an antler as "inside" the body?
@swbarnes2 Yeah, I forgot to think about antlers, but since they're attached to a body I'm counting them as "inside" the body. I mean I want these bones to grow independently of an animal.
Differences between contentType and dataType in jQuery ajax function
I have the following jQuery callback function and I have a little doubt about it (I don't know jQuery very well):
$("form.readXmlForm").submit(function() {
    // Reference to the form element that triggered the submit
    var form = $(this);
    // Variable holding a reference to the clicked button
    var button = form.children(":first");
    $.ajax({ // The AJAX call is executed
        type: "POST", // Request type: POST
        // URL to which the request is sent
        url: form.attr("action"),
        // XML data sent:
        data: "<?xml version=\"1.0\" encoding=\"UTF-8\" standalone=\"yes\"?><javaBean><foo>bar</foo><fruit>apple</fruit></javaBean>",
        // Media type acceptable in the response:
        contentType: "application/xml",
        dataType: "text",
        success: function(text) {
            MvcUtil.showSuccessResponse(text, button);
        },
        error: function(xhr) {
            MvcUtil.showErrorResponse(xhr.responseText, button);
        }
    });
});
As you can see, this function simply executes an AJAX request to the backend, setting the parameters for the request.
I have set the URL towards which I am sending the request, specified that it is a POST request, and set the data that I am sending to the following string:
"<?xml version=\"1.0\" encoding=\"UTF-8\" standalone=\"yes\"?><javaBean><foo>bar</foo><fruit>apple</fruit></javaBean>"
I have some difficulty understanding the difference between contentType and dataType.
I think contentType specifies the type of data that is acceptable to receive in the HTTP response; is that right?
And dataType? What does it specify? The type of data that I am sending in the HTTP request?
In this case is it "text" because I am sending a textual string that represents XML code?
Do the purposes of content-type and data-type differ between jQuery usage and REST API usage?
From the documentation:
contentType (default: 'application/x-www-form-urlencoded; charset=UTF-8')
Type: String
When sending data to the server, use this content type. Default is "application/x-www-form-urlencoded; charset=UTF-8", which is fine for most cases. If you explicitly pass in a content-type to $.ajax(), then it'll always be sent to the server (even if no data is sent). If no charset is specified, data will be transmitted to the server using the server's default charset; you must decode this appropriately on the server side.
and:
dataType (default: Intelligent Guess (xml, json, script, or html))
Type: String
The type of data that you're expecting back from the server. If none is specified, jQuery will try to infer it based on the MIME type of the response (an XML MIME type will yield XML, in 1.4 JSON will yield a JavaScript object, in 1.4 script will execute the script, and anything else will be returned as a string).
They're essentially the opposite of what you thought they were.
Also, contentType affects the headers; dataType doesn't.
A typical example of an unclear API.
In English:
ContentType: When sending data to the server, use this content type. Default is application/x-www-form-urlencoded; charset=UTF-8, which is fine for most cases.
Accepts: The content type sent in the request header that tells the server what kind of response it will accept in return. Depends on DataType.
DataType: The type of data that you're expecting back from the server. If none is specified, jQuery will try to infer it based on the MIME type of the response. Can be text, xml, html, script, json, jsonp.
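To make the two directions concrete, here is a minimal settings object of the kind you would pass to $.ajax (the URL is a hypothetical endpoint): contentType describes the request body being sent, while dataType describes what is expected back.

```javascript
// Hypothetical endpoint; the other fields mirror the question's call
var ajaxSettings = {
  type: "POST",
  url: "/readXml",
  data: "<javaBean><foo>bar</foo><fruit>apple</fruit></javaBean>",
  contentType: "application/xml", // REQUEST: media type of the data we send
  dataType: "text"                // RESPONSE: type we expect the server to return
};
```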
How to get playlists a Dailymotion video is a member of?
I'm currently working on a project that uses daily motion as its video platform. It will contain several private playlists and videos and I need to be able to fetch video details including what playlist that video is a member of.
When querying the /me/video/<VIDEO_ID> endpoint, there is no field I can add to the fields query to retrieve a list of playlists, and /video/<VIDEO_ID>/playlists, although undocumented, returns a 200 but with an empty list of results.
Using this documentation, there seems to be no way to get a list of playlists a video is a member of. Am I correct?
You are using the right endpoint.
To get the list of playlists a video is a member of, you can use the video connection: /video/<VIDEO_ID>/playlists
This connection should be documented; we'll investigate this issue.
This is not documented, but this also returns a 200 with an empty list property. As I have mentioned, these videos are private; maybe that makes a difference. I tried both /video/<VIDEO_ID>/playlists and /video/<PRIVATE_VIDEO_ID>/playlists; neither works and both return a 200 with an empty list property.
OK, I've figured out the issue: the video/<PRIVATE_VIDEO_ID>/playlists endpoint works, but the playlist must be public.
There is still a bit of an issue: if I want my video objects to contain their playlists, I'll have to query the video endpoint to get the fields and then query its playlists separately.
The scenario would be: I want to display a list of the latest videos and their associated playlists. For each video, I'd have to first query /video/<PRIVATE_VIDEO_ID> with the fields I need (in this case title, private_id, duration, tags) and also query /video/<PRIVATE_VIDEO_ID>/playlists in order to get its playlist titles.
It would be significantly more efficient if I was able to query for the playlists field in the /video/<PRIVATE_VIDEO_ID> endpoint.
Quail or fish in B'midbar 11:31?
In B'Midbar 11:31 it is stated:
וְרוּחַ נָסַע מֵאֵת יי וַיָּגָז שַׂלְוִים מִן־הַיָּם וַיִּטֹּשׁ עַל־הַֽמַּחֲנֶה כְּדֶרֶךְ יֹום כֹּה וּכְדֶרֶךְ יֹום כֹּה סְבִיבֹות הַֽמַּחֲנֶה וּכְאַמָּתַיִם עַל־פְּנֵי הָאָֽרֶץ
"A wind went forth from the Lord and swept quails from the sea and spread them over the camp about one day's journey this way and one day's journey that way, around the camp, about two cubits above the ground."
Are שַׂלְוִים definitively quails? I ask because it states they came מִן־הַיָּם which could mean from the sea, or from the west. Comparing this, the complainers remembered the "fish" from Mitzrayim. Could it be that they ate fish of some sort rather than quail, or are there any sources that insinuate a definite definition of שַׂלְוִים?
FWIW, Rabbi Aryeh Kaplan's commentary cites no opposing view.
Interesting question, but is wind more likely to carry fish or, say, birds and locusts?
It is more likely to carry birds and locusts, for sure - the only thing is, this being an obvious display of HaShem's might, could it not be out of the ordinary? I notice that Shemot 14:21 speaks of the בְּרוּחַ קָדִים parting the sea, so it isn't out of the question for the wind to do something seemingly impossible, right?
msh210, I'm sorry, I don't understand exactly what you mean - do you mean that there is no opposing view and therefore it shouldn't be questioned, or that there has been no opposing view to contradict this possible explanation?
@Kovesh It is conceivable, but I would expect that an identification as fish would have been more obvious if that was the case. Anyway, the Talmud identifies it as a type of bird (Yoma 75b).
Thank you - that reference in the Talmud is helpful. I also found Tehillim 78:27 which rules out the idea of fish.
@Kovesh, I meant that that commentary says the verse refers to quail and cites no other possible translation, whereas it often does cite multiple translations of names of types of animals and plants. That doesn't mean you won't find any other translation (for example, he also says the plague of tz'fardea was frogs without citing any dissenting opinion, which does exist), but it makes it less likely IMO.
To flesh out Fred's comment: in Yoma 75b, R. Yehoshua ben Korchah states that the verb שטוח ("spreading out") in the next verse also implies שחוט, "slaughtering," indicating that שליו is something that requires shechitah - thus excluding fish (and locusts). Although Rebbi disagrees with that exegesis, he doesn't seem to argue with the basic fact of what שליו is.
(A few lines further down, the Gemara definitively identifies it as an avian species. First of all, it says that one variety of שליו is called פסיוני, which in Bava Kamma 55a is listed in a group of birds that are kilayim; second, it describes the שליו proper as כציפורתא, "like a small bird.")
Also, in Chullin 105a the verse describing the aftermath, "While the meat was still between their teeth..." (11:33), is used as a source for the rule that you can't eat dairy while there is still meat stuck between your teeth. Which again implies that the meat they ate was just that, something to which the laws of basar bechalav would apply - again excluding fish.
Basar bechalav doesn't apply (deoraita) to birds.
@DoubleAA: neither does the whole issue of eating meat and milk one after the other; mideoraisa it's only if they're cooked together that they're forbidden. So the whole thing is only an asmachta.
The Tur (יו''ד ס' פ''ט) says that if you ate 'basar', even 'chaya' and עוף, you must wait 6 hours. We derive the law of waiting between meat and milk from the episode of the שליו, that the שליו 'meat' was still between their teeth (Num. 11:33): "הַבָּשָׂר, עוֹדֶנּוּ בֵּין שִׁנֵּיהֶם"
Perhaps this means that the Tur holds שליו not to be quail since a quail would be considered עוף.
Are you making a diyuk in the word even? Can you elaborate on that?
@DoubleAA, yes, when I saw it I thought it could be an implication, although I agree it's not explicit.
"only protocol 3 supported" error when trying to connect to Postgresl Database through Django
Windows 10 x64
Python 3.6
psycopg2==2.8.3
PostgreSQL <IP_ADDRESS>-1.1C(x64)
Django 2.2.1
When trying to execute a query against the database I get the error "only protocol 3 supported".
I run:
python manage.py makemigrations
I got a big pile of errors with the
django.db.utils.InterfaceError: only protocol 3 supported
in the end.
psql
\list
The DATABASES = {} section in settings.py is set up with valid credentials.
I can even connect to db using them
How can I solve this problem?
I think you should read this
@Goran I installed the latest version of Postgres and everything works fine.
Azure sql server cannot create database
I am trying to create a database in Azure SQL Server. However, when I try to create it, it prompts "Gen4 family is not available in this region" and cannot create the database. What is the problem?
As the message clearly states, the Gen4 family is not available in the region you have selected.
Your region may only support Gen5 hardware. Choose Gen5, or if you are using the CLI then provide:
--family Gen5
OR
Change to a region that has Gen4 hardware.
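If you are using the Azure CLI, the hardware family is selected with the --family option. A sketch (the resource group, server, and database names are placeholders) might look like:

```shell
# Placeholders: myResourceGroup, myserver, mydb are illustrative names.
az sql db create \
  --resource-group myResourceGroup \
  --server myserver \
  --name mydb \
  --edition GeneralPurpose \
  --family Gen5 \
  --capacity 2
```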