Q: How do you map PubSub value of a GraphQL subscription in Apollo? I'm making a GraphQL backend in Apollo, and I'd like to use subscriptions. I followed Apollo's docs, and I've gotten basic subscriptions working using graphql-subscriptions. This package also comes with filtering via the built-in withFilter function. However, I don't want to emit all of the data published via PubSub because some of this data is used for filtering purposes only.
Ex.
// GraphQL Schema
type Subscription {
mySubscription(filter: MySubscriptionFilter!): ID!
}
// Publishing the event
pubsub.publish("MY_SUBSCRIPTION", { id: "2092330", username: "asdf", roles: [ "USER", "MODERATOR" ] });
// Handling the event for a subscription
const resolvers = {
Subscription: {
mySubscription: {
subscribe: withFilter(
() => pubsub.asyncIterator("MY_SUBSCRIPTION"),
(payload, variables) => {
return customFiltering(payload, variables);
}
)
}
}
}
This returns an object with the type: { id, username, roles }. However, the username and roles fields are only used for filtering. I ultimately need to return an object of type { mySubscription: id }, because that's what my GraphQL schema says.
Is there a way to do something like this?
// Handling the event for a subscription
const resolvers = {
Subscription: {
mySubscription: {
subscribe: withFilter(
() => pubsub.asyncIterator("MY_SUBSCRIPTION"),
(payload, variables) => {
return customFiltering(payload, variables);
}
).map(x => {
return { mySubscription: x.id }
}) // Map function where x is the payload from the pubsub
}
}
}
A: Whoops, it looks like I overlooked the resolve function in a subscription.
From the graphql-subscriptions GitHub page:
Payload Manipulation
You can also manipulate the published payload, by adding resolve methods to your subscription:
const SOMETHING_UPDATED = 'something_updated';
export const resolvers = {
Subscription: {
somethingChanged: {
resolve: (payload, args, context, info) => {
// Manipulate and return the new value
return payload.somethingChanged;
},
subscribe: () => pubsub.asyncIterator(SOMETHING_UPDATED),
},
},
}
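Putting the two pieces together for the original question: withFilter decides whether an event is delivered, and the resolve method decides what shape the client receives. The following standalone sketch (no graphql-subscriptions dependency; the function and sample data are illustrative) models that two-step pipeline over plain arrays:

```javascript
// filterFn plays the role of withFilter's predicate, resolveFn plays
// the role of the subscription's resolve method.
function deliver(events, filterFn, resolveFn) {
  return events.filter(filterFn).map(resolveFn);
}

const published = [
  { id: "2092330", username: "asdf", roles: ["USER", "MODERATOR"] },
  { id: "2092331", username: "qwer", roles: ["USER"] },
];

// Only events whose roles include MODERATOR pass the filter; resolve
// then keeps just the id, matching the ID! return type in the schema.
const delivered = deliver(
  published,
  (e) => e.roles.includes("MODERATOR"),
  (e) => e.id
);
console.log(delivered); // [ '2092330' ]
```

The filter-only fields (username, roles) never reach the subscriber; only the resolved value does, which is exactly what the resolve method in the answer above achieves.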
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/72636926",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Can I use a Visual Studio 6 compiled C++ static library in Visual Studio 2008? Is it possible to use a C++ static library (.lib) compiled using Visual Studio 6 in Visual Studio 2008?
A: It really depends. Does the lib expose only 'extern "C"' functions where memory is either managed by straight Win32 methods (CoTaskMemAlloc, etc) or the caller never frees memory allocated by the callee or vice-versa? Do you only rely on basic libraries that haven't changed much since VS 6? If so, you should be fine.
There are 2 basic things to watch for. Changes to global variables used by 3rd-party libraries, and changes to the structure of structs, classes, etc defined by those 3rd-party libraries. For example, the CRT memory allocator has probably changed its hidden allocation management structures between the 2 versions, so having one version of the library allocate a piece of memory and having another free it will probably cause a crash.
As another example, if you expose C++ classes through the interface and they rely on MS runtime libraries like MFC, there's a chance that the class layout has changed between VS 6 and VS 2008. That means that accessing a member/field on the class could go to the wrong thing and cause unpredictable results. You're probably hosed if the .lib uses MFC in any capacity. MFC defines and internally uses tons of globals, and any access to MFC globals by the operations in the .lib could cause failures if the MFC infrastructure has changed in the hosting environment (it has changed a lot since VS 6, BTW).
I haven't explored exactly what changes were made in the MFC headers, but I've seen unpredictable behavior between MFC/ATL-based class binaries compiled in different VS versions.
On top of those issues, there's a risk for functions like strtok() that rely on static global variables defined in the run-time libraries. I'm not sure, but I'm concerned those static variables may not get initialized properly if you use a client expecting the single-threaded CRT on a thread created on the multi-threaded CRT. Look at the documentation for _beginthread() for more info.
A: I can't think why not - as long as you keep to the usual CRT memory boundaries (i.e. if you allocate memory inside a library function, always free it from inside the library, by calling a function in the lib to do the freeing).
This approach works fine for DLLs compiled with all kinds of compilers; statically linked libs should be OK too.
A: Yes. There should be no issues with this at all. As gbjbaanb mentioned, you need to mind your memory, but VS2008 will still work with it. As long as you are not trying to mix CLR, (managed) code with it. I'd recommend against that if at all possible. But, if you are talking about raw C or C++ code, sure, it'll work.
What exactly are you planning on using? (What is in this library?) Have you tried it already, but are having issues, or are you just checking before you waste a bunch of time trying to get something to work that just won't?
A: Sure it'll work.
Are you asking where in VS2008 to code the references?
If so, go to proj props -> Linker -> Input on Configuration properties on the property pages. Look for "additional dependencies" and code the .LIB there.
Go to proj props -> Linker -> General and code the libs path in "Additional Library Directories".
That should do it!!
A: There are cases where the answer is no. When we moved from VS6 to VS2k5 we had to rebuild all our libraries, as the memory model had changed and the CRT functions were different.
A: There were a handful of breaking changes between VC6, VS2003, VS2005 and VS2008. Visual C++ (in VS2005) stopped supporting the single-threaded, statically linked CRT library. Some breaking changes are enumerated here and here. Those changes will impact your use of VC6-built libs in later versions.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/723416",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: Rally Rest API - unable to update test set results I am able to pull all the test cases present in a particular test set and modify or update all the test results.
However, the test set is not reflecting the verdict (pass/fail) after updating the test cases in that test set.
If I navigate to one of the test case detail pages, I am able to see the updated test case result.
This is the test set status after updating the test cases.
But when one of those test cases is opened, I am able to see the updated test case result.
code:
QueryRequest testSetRequest = new QueryRequest("TestSet");
testSetRequest.setFetch(new Fetch(new String[] {"Name", "TestCases", "FormattedID"}));
testSetRequest.setQueryFilter(new QueryFilter("FormattedID", "=", "TS346"));
QueryResponse testSetQueryResponse = restApi.query(testSetRequest);
if(testSetQueryResponse.wasSuccessful()){
System.out.println("Successful: " + testSetQueryResponse.wasSuccessful());
System.out.println("Size: " + testSetQueryResponse.getTotalResultCount());
for (int i=0; i<testSetQueryResponse.getResults().size();i++){
JsonObject testSetJsonObject = testSetQueryResponse.getResults().get(i).getAsJsonObject();
System.out.println("Name: " + testSetJsonObject.get("Name") + " ref: " + testSetJsonObject.get("_ref").getAsString() + " Test Cases: " + testSetJsonObject.get("TestCases").getAsJsonObject().get("_ref"));
int numberOfTestCases = testSetJsonObject.get("TestCases").getAsJsonObject().get("Count").getAsInt();
System.out.println(numberOfTestCases);
if(numberOfTestCases>0){
QueryRequest testCaseRequest = new QueryRequest(testSetJsonObject.getAsJsonObject("TestCases"));
testCaseRequest.setFetch(new Fetch("FormattedID"));
//load the collection
JsonArray testCases = restApi.query(testCaseRequest).getResults();
for (int j=0;j<numberOfTestCases;j++){
System.out.println(testCases.get(j).getAsJsonObject().get("FormattedID").getAsString());
String s= testCases.get(j).getAsJsonObject().get("FormattedID").getAsString();
testCaseRequest = new QueryRequest("TestCase");
testCaseRequest.setFetch(new Fetch("FormattedID","Name"));
testCaseRequest.setQueryFilter(new QueryFilter("FormattedID", "=", s));
QueryResponse testCaseQueryResponse = restApi.query(testCaseRequest);
String testCaseRef = testCaseQueryResponse.getResults().get(0).getAsJsonObject().get("_ref").getAsString();
try{
//Add a Test Case Result
System.out.println("Creating Test Case Result...");
JsonObject newTestCaseResult = new JsonObject();
newTestCaseResult.addProperty("Verdict", "Pass");
newTestCaseResult.addProperty("Date", "2013-11-29T18:00:00.000Z");
newTestCaseResult.addProperty("Notes", "Automated Selenium Test Runs");
newTestCaseResult.addProperty("Build", "208");
newTestCaseResult.addProperty("TestCase", testCaseRef);
CreateRequest createRequest = new CreateRequest("testcaseresult", newTestCaseResult);
CreateResponse createResponse = restApi.create(createRequest);
if(createResponse.wasSuccessful()){
System.out.println(String.format("Created %s", createResponse.getObject().get("_ref").getAsString()));
//Read Test Case
String ref = Ref.getRelativeRef(createResponse.getObject().get("_ref").getAsString());
System.out.println(String.format("\nReading Test Case Result %s...", ref));
GetRequest getRequest = new GetRequest(ref);
getRequest.setFetch(new Fetch("Date", "Verdict"));
GetResponse getResponse = restApi.get(getRequest);
JsonObject obj = getResponse.getObject();
System.out.println(String.format("Read Test Case Result. Date = %s, Verdict = %s", obj.get("Date").getAsString(), obj.get("Verdict").getAsString()));
} else {
String[] createErrors;
createErrors = createResponse.getErrors();
System.out.println("Error occurred creating Test Case: ");
for (int k=0; k<createErrors.length; k++) {
System.out.println(createErrors[k]);
}
}
}
finally{
}
}
}
}
}
else {
String[] createErrors;
createErrors = testSetQueryResponse.getErrors();
System.out.println("Error occurred creating Test Case: ");
for (int i=0; i<createErrors.length;i++) {
System.out.println(createErrors[i]);
}
}
So, any idea how to update the test set results?
A: You just need to add the line below when creating the new test case result, so the result is linked back to the test set (testsetref is the _ref of the test set):
newTestCaseResult.addProperty("TestSet", testsetref);
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/20281308",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: How to listen for a specific value in DynamoDB I'm using AWS DynamoDB and I want the app to listen to a value inserted in a JSON item, so that when this value is changed a function is triggered in the app.
Does somebody know how to make this work?
A: You have to set up DynamoDB Streams. A Lambda function attached to the stream can analyze the DB changes, pick out those related to the specific item, and then perform other actions specific to your application.
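To sketch what that looks like, here is a minimal handler in Python. Everything specific here is an assumption for illustration: the watched attribute name (status), and the record layout following the DynamoDB Streams event format; the "trigger a function in the app" step is left as a placeholder where you would notify the client (e.g. via SNS or a WebSocket push):

```python
def handler(event, context=None):
    """Hypothetical DynamoDB Streams handler: collect items whose
    watched attribute ("status" here, an assumed name) changed."""
    changed = []
    for record in event.get("Records", []):
        if record.get("eventName") != "MODIFY":
            continue  # only modifications can change an existing value
    # Stream records carry the item before and after the change
        images = record.get("dynamodb", {})
        old = images.get("OldImage", {}).get("status", {}).get("S")
        new = images.get("NewImage", {}).get("status", {}).get("S")
        if old != new:
            changed.append(new)  # here you would notify the app instead
    return changed

# Synthetic stream event for demonstration
sample_event = {
    "Records": [
        {
            "eventName": "MODIFY",
            "dynamodb": {
                "OldImage": {"status": {"S": "pending"}},
                "NewImage": {"status": {"S": "done"}},
            },
        },
        {"eventName": "INSERT", "dynamodb": {}},
    ]
}
print(handler(sample_event))  # ['done']
```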
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/70139550",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Program to check if N^2 number of elements can be converted to a N*N symmetric matrix? I was solving some problems on matrices the other day when this question hit me. Is there any way we can check if N^2 number of elements can be arranged in such a way that they form a symmetric matrix?
For instance, if N=3, then N^2=9
Let the elements be : 1 2 3 1 2 3 1 2 3.
The above elements can be arranged to form a symmetric matrix like:-
1 2 3
2 3 1
3 1 2
Similarly, 9 1s can be used to form a matrix as follows:-
1 1 1
1 1 1
1 1 1
But the elements 1 2 3 4 5 6 7 8 9, can in no way be arranged to form a symmetric matrix.
I thought about this question a lot but could not come up with a solution. Could someone please help me?
A: In an N×N symmetric matrix, every entry above the main diagonal has an equal counterpart below the main diagonal. This means that, aside from the N elements on the main diagonal, all elements come in equal pairs. (Elements on the main diagonal can also come in equal pairs, but they're not required to; the matrix's symmetry isn't affected by, for example, whether a22 = a33 or not.)
So, you can simply count how often each distinct value occurs, and see how many of the values occur an odd number of times. If there are N or fewer distinct values that occur an odd number of times, then the main diagonal of an N×N matrix can accommodate the unpaired values, so a symmetric matrix is possible; otherwise, not.
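The counting argument above translates directly into a few lines of Python (a sketch; the function name is mine):

```python
from collections import Counter

def can_form_symmetric(values, n):
    """True iff the n*n values can be arranged into an n-by-n symmetric
    matrix: off-diagonal entries pair up across the diagonal, so at most
    n values may be left over with an odd count (they go on the diagonal)."""
    assert len(values) == n * n
    odd_counts = sum(1 for c in Counter(values).values() if c % 2 == 1)
    return odd_counts <= n

print(can_form_symmetric([1, 2, 3, 1, 2, 3, 1, 2, 3], 3))  # True
print(can_form_symmetric([1, 2, 3, 4, 5, 6, 7, 8, 9], 3))  # False
print(can_form_symmetric([1] * 9, 3))                       # True
```

The three calls reproduce the examples from the question: three copies of 1 2 3 work, nine distinct values cannot (9 odd counts > 3 diagonal slots), and nine 1s trivially work.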
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/60910948",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Slow loading of images using Glide on RecyclerView I have a RecyclerView with a grid layout; six cards are displayed on the screen at a time, a JSON response with 10 objects is fetched, and I'm using Glide for image loading.
Image loading was not up to the mark.
So I searched more regarding Glide and found a method for requesting a thumbnail:
Glide
.with(Context)
.load(url)
.thumbnail(0.25f)
.transition(DrawableTransitionOptions.withCrossFade())
.into(ImageView);
But it still wasn't much help.
Then I used another method, calling a RequestBuilder.
It did help, but not to the desired level.
Can anyone suggest what else I could do to decrease the loading time of images and make the user experience better?
A: Use these options (in Kotlin):
GlideApp.with(mContext)
.apply(getSquareRequestOptions(true))
.load(url)
.thumbnail(0.5f)
.into(layout.bannerAdapterImg)
Where getSquareRequestOptions is -
fun getSquareRequestOptions(isCenterCrop:Boolean=true): RequestOptions {
return RequestOptions().also {
it.placeholder(R.drawable.ic_placeholder)
it.error(R.drawable.ic_err_image)
it.override(200, 200) // override size as you need
it.diskCacheStrategy(DiskCacheStrategy.ALL) //If your images are always same
it.format(DecodeFormat.PREFER_RGB_565) // the decode format - this will not use alpha at all
if(isCenterCrop)
it.centerCrop()
else
it.fitCenter()
}
}
*For Java code, just port the getSquareRequestOptions function to Java.
This is the best Glide can do. If it still takes time, then compress the images on the server side.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/58451416",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: How to change the pitch of a .wav file in Android? Can somebody tell me how to change the pitch of a wave file in Android?
A: Android does not have such functions built in, and the process is not at all trivial. If you would like to try and code it yourself, I suggest looking at such algorithms as PSOLA, WSOLA and Phase Vocoder for pitch alteration. The book DAFX by Udo Zölzer discusses many of these in quite good detail and most of it is fairly straightforward. Phase Vocoder, I believe, works the fastest, but also takes more DSP and mathematical knowledge to understand. PSOLA is perhaps the least mathematically complicated. I personally prefer WSOLA and Enhanced WSOLA (EWSOLA), but those take quite a bit of processing power.
For correlation techniques (if you use WSOLA) I suggest doing it in the frequency domain (Google FFT-based correlation). It is much quicker.
If most of this had just gone over your head, you might want to reconsider doing this altogether, but I by no means try to discourage you. = )
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/2025903",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: MS Access: gathering data from a subdatasheet I'm trying to find a query which shows only the records where a subdatasheet column IS NULL.
So far I have:
SELECT EmployeeId,FirstName,LastName,Salary
FROM Employee
WHERE ServiceDate IS NULL
Within the EmployeeId there is an expandable subdatasheet and I'm trying to figure out how to call upon the "ServiceDate" which is within the subdatasheet to display which employee hasn't got a service.
I hope this makes sense.
EDIT:
Here are the two tables:
http://i.stack.imgur.com/3LWSh.jpg
http://i.stack.imgur.com/8zSVS.jpg
Result I'm after:
EmployeeId FirstName LastName Salary
E003 Ken Moore $59,000.00
A: Try something on these lines:
SELECT Employee.EmployeeId,Employee.FirstName,Employee.LastName,Employee.Salary
FROM Employee
LEFT JOIN Services
ON Employee.EmployeeId = Services.EmployeeId
WHERE Services.EmployeeId IS NULL
Do not forget that MS Access has a Find Unmatched query wizard.
You might like to look at:
Fundamental Microsoft Jet SQL for Access 2000
Intermediate Microsoft Jet SQL for Access 2000
Advanced Microsoft Jet SQL for Access 2000
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/25014819",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Sizes of tensors must match except in dimension 2. Got 16 and 32 (The offending index is 0) I'm getting the following error:
Sizes of tensors must match except in dimension 2. Got 16 and 32 (The offending index is 0)
in the following code:
x1=self.avg_pool(l1)
print('x1:', x1.shape)
x2=self.avg_pool(l2)
print('x2:', x2.shape)
x3=self.avg_pool(l3)
print('x3:', x3.shape)
x4=self.avg_pool(l4)
print('x4:', x4.shape)
x = self.aspp(x)
print('x:', x.shape)
x=torch.cat((x4,x3,x2,x1,x),dim=1)
cout1=x
I'm getting shapes of x1,x2,x3,x4,x as
x1: torch.Size([5, 256, 32, 32])
x2: torch.Size([5, 512, 32, 32])
x3: torch.Size([5, 1024, 32, 32])
x4: torch.Size([5, 2048, 32, 32])
x: torch.Size([5, 256, 16, 16])
A: You are concatenating on dim=1; that means the tensors are joined one after the other along dim=1, so every other dimension must agree. The size after concatenation along dim=1 would be 256+512+1024+2048+256 = 4096 channels, provided the shapes of the tensors match in all other dimensions. Here they don't: x is 16x16 spatially while x1..x4 are 32x32, which is what the error reports. The size of tensor x should be (5, 256, 32, 32).
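The shape rule can be checked with plain arithmetic, no torch required. A sketch (the function name is mine, and the error message only approximates torch's wording):

```python
def concat_shape(shapes, dim):
    """Result shape of concatenating tensors of the given shapes along
    dim, raising the same kind of complaint torch.cat does when any
    other dimension disagrees."""
    result = list(shapes[0])
    for shape in shapes[1:]:
        for d, (a, b) in enumerate(zip(result, shape)):
            if d != dim and a != b:
                raise ValueError(
                    f"Sizes of tensors must match except in dimension {dim}. "
                    f"Got {a} and {b}"
                )
        result[dim] += shape[dim]  # channel counts simply add up
    return tuple(result)

# The shapes from the question: x has 16x16 spatial size, the rest 32x32
shapes = [(5, 2048, 32, 32), (5, 1024, 32, 32), (5, 512, 32, 32),
          (5, 256, 32, 32), (5, 256, 16, 16)]
try:
    concat_shape(shapes, dim=1)
except ValueError as e:
    print(e)  # complains about the 32 vs 16 mismatch

# Once x is pooled/upsampled to 32x32, the concat succeeds:
shapes[-1] = (5, 256, 32, 32)
print(concat_shape(shapes, dim=1))  # (5, 4096, 32, 32)
```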
A: Inputs (values of l1, l2, l3, l4) are not provided, so I just guessed them and tried to mimic your code. The snippet below works fine.
import torch
import torch.nn as nn
import torch.nn.functional as F
class Pyramid_Pooling(nn.Module):
def __init__(self, levels, inChans, outChans):
super(Pyramid_Pooling, self).__init__()
self.inChans = inChans
self.outChans = outChans
assert len(levels) == 4
self.pool_4 = nn.AvgPool2d((levels[3], levels[3]))
self.pool_3 = nn.AvgPool2d((levels[2], levels[2]))
self.pool_2 = nn.AvgPool2d((levels[1], levels[1]))
self.pool_1 = nn.AvgPool2d((levels[0], levels[0]))
# lower the number of channels to desired size
self.bottleneck_pyramid = nn.Conv2d(
self.inChans, self.outChans, kernel_size=1
)
def forward(self, en1, en2, en3, en4):
pooled_out_1 = self.pool_1(en1)
print(pooled_out_1.shape)
pooled_out_2 = self.pool_2(en2)
print(pooled_out_2.shape)
pooled_out_3 = self.pool_3(en3)
print(pooled_out_3.shape)
pooled_out_4 = self.pool_4(en4)
print(pooled_out_4.shape)
cat = torch.cat((pooled_out_1, pooled_out_2, pooled_out_3, pooled_out_4), 1)
out = self.bottleneck_pyramid(cat)
return out
I tried to guess your inputs, and the issue might be somewhere in the input dimensions. If you want to output 32 x 32, then the inputs should be like below. I also added a 1x1 conv to lower the channels to the desired output.
x_train_0 = torch.randn((3, 256, 32, 32), device = torch.device('cuda'))
x_train_1 = torch.randn((3, 512, 64, 64), device = torch.device('cuda'))
x_train_2 = torch.randn((3, 1024, 128, 128), device = torch.device('cuda'))
x_train_3 = torch.randn((3, 2048, 256, 256), device = torch.device('cuda'))
inChans = x_train_0.shape[1] + x_train_1.shape[1] + x_train_2.shape[1] + x_train_3.shape[1]
outChans = 512
# kernel_size = [1,2,4,8]
pyramid_pooling = Pyramid_Pooling([1, 2, 4, 8], inChans, outChans)
pyramid_pooling.to(torch.device('cuda'))
out = pyramid_pooling(x_train_0, x_train_1, x_train_2, x_train_3)
print(f"Output shape: {out.shape}")
Output will look like this:
torch.Size([3, 256, 32, 32])
torch.Size([3, 512, 32, 32])
torch.Size([3, 1024, 32, 32])
torch.Size([3, 2048, 32, 32])
Output shape: torch.Size([3, 512, 32, 32])
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/69420157",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: mod_rewrite rule for "www to non www" doesn't redirect sub-pages I made a thorough search before asking this question here. Please hear me out:
I am trying to redirect my blog from www to non-www, and it doesn't redirect any sub-pages. I have an http > https redirect in place as well, and it works perfectly for both the domain and the sub-pages. Here are the rules I have in my .htaccess:
# BEGIN WordPress
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteBase /
RewriteRule ^index\.php$ - [L]
RewriteRule ^^rcp-pep-ipn //?rcp-pep-listener=IPN [QSA,L]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /index.php [L]
RewriteBase /
RewriteCond %{HTTPS} off [OR]
RewriteCond %{HTTP_HOST} ^www\. [NC]
RewriteCond %{HTTP_HOST} ^(?:www\.)?(.+)$ [NC]
RewriteRule ^ https://%1%{REQUEST_URI} [L,NE,R=301]
</IfModule>
# END WordPress
I'd really appreciate an explanation if I am doing anything wrong here. I have literally pulled my hair out, since I have used the exact same code (from the second RewriteBase /) for all other sites and it worked flawlessly.
A: You should move those protocol-checking conditions to the beginning. You have some problems within the rules too (note the doubled ^ anchor and the doubled slash in the rcp-pep-ipn rule). Try:
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteCond %{HTTPS} off [OR]
RewriteCond %{HTTP_HOST} ^www\.(.*) [NC]
RewriteRule ^(.*)$ https://%1/$1 [R=301,NE,L]
RewriteRule ^rcp-pep-ipn /?rcp-pep-listener=IPN [QSA,L]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /index.php [L]
</IfModule>
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/51213679",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Search files in a shared Google Drive I have a problem.
Earlier, the file I searched for was in my own Google Drive, and the script worked.
function checkInFiles(){
var nameJSON = 'jobToCAF';
var files = DriveApp.searchFiles('title contains "' + nameJSON + '"');
while (files.hasNext()) {
var file = files.next();
Logger.log(file.getName());
}
}
Script searches for files named like 0080076042_jobToCAF.JSON.
The difference is that now the file is located in a shared Google Drive, and the search doesn't work.
Can you help?
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/69103989",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Different Java behavior for generics and arrays Why does Java allow an inconsistent type to be entered into a generic object reference but not into an array?
For Eg:
When initializing array:
int[] a = {1, 2, 3};
And, if I enter:
int[] a = {1, 2, "3"}; //Error for incompatible types
While for generics,
import java.util.ArrayList;
public class Test {
private static ArrayList tricky(ArrayList list) {
list.add(12345);
return list;
}
public static void main(String[] args) {
int i = 0;
ArrayList<String> list = new ArrayList<>();
list.add("String is King");
Test.tricky(list);
}
}
The above code will let you add any type to the list object, resulting in a run time exception in some cases.
Why is there such behavior? Kindly give a proper explanation.
A: The method's parameter has no generic type, so all classes are allowed.
You may google 'type erasure' for more information.
If you add the generic type to your method you will get a compiler error:
private static ArrayList<String> tricky(ArrayList<String> list) { // ...
By the way, you do not need to return the list because you modify the same instance.
A: Here's why:
The reason you can get away with compiling this for arrays is because there is a runtime exception (ArrayStoreException) that will prevent you from putting the wrong type of object into an array. If you send a Dog array into the method that takes an Animal array, and you add only Dogs (including Dog subtypes, of course) into the array now referenced by Animal, no problem. But if you DO try to add a Cat to the object that is actually a Dog array, you'll get the exception.
But there IS no equivalent exception for generics, because of type erasure! In other words, at runtime the JVM KNOWS the type of arrays, but does NOT know the type of a collection. All the generic type information is removed during compilation, so by the time it gets to the JVM, there is simply no way to recognize the disaster of putting a Cat into an ArrayList and vice versa (and it becomes exactly like the problems you have when you use legacy, non-type safe code).
Courtesy : SCJP Study guide by Kathy Sierra and Bert Bates
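That contrast is easy to reproduce. The class below is a self-contained sketch (the class and method names are mine): a raw-typed alias smuggles an Integer into a List<String> with no runtime complaint, while an array rejects the equivalent mistake immediately with ArrayStoreException:

```java
import java.util.ArrayList;
import java.util.List;

public class ErasureDemo {

    @SuppressWarnings({"rawtypes", "unchecked"})
    static boolean rawAddSucceeds() {
        List<String> strings = new ArrayList<>();
        strings.add("String is King");
        List raw = strings;  // raw alias: same object, erased type at runtime
        raw.add(12345);      // compiles with a warning, succeeds at runtime
        return strings.size() == 2;  // an Integer now sits in a List<String>
    }

    static boolean arrayStoreThrows() {
        Object[] objs = new String[2];  // arrays remember their element type
        try {
            objs[0] = Integer.valueOf(12345);  // checked at runtime
            return false;
        } catch (ArrayStoreException e) {
            return true;  // the JVM caught the bad store immediately
        }
    }

    public static void main(String[] args) {
        System.out.println(rawAddSucceeds());   // true
        System.out.println(arrayStoreThrows()); // true
    }
}
```

The exception would only surface later, when someone reads the smuggled element back as a String and the compiler-inserted cast fails with ClassCastException.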
A: When you declare your ArrayList like ArrayList list = ... you do not declare the type of object your list will contain. By default, since every type has Object as a superclass, it is an ArrayList<Object>.
As good practice, you should declare the type as ArrayList<SomeType> and thereby avoid adding inconsistent elements (according to the type).
A: Because you haven't defined the generic type of your list, it defaults to List<Object>, which accepts anything that extends Object.
Thanks to auto-boxing, a primitive int is converted to an Integer, which extends Object, when it is added to your list.
Your array only allows ints, so Strings are not allowed.
A: This is because in your method parameter you did not specify a particular type for ArrayList so by default it can accept all type of objects.
import java.util.ArrayList;
public class Test {
//Specify which type of objects you want to store in Arraylist
private static ArrayList tricky(ArrayList<String> list) {
list.add(12345); //This will give compile time error now
return list;
}
public static void main(String[] args) {
int i = 0;
ArrayList<String> list = new ArrayList();
list.add("String is King");
Test.tricky(list);
}
}
A: When you use the tricky method to insert data into your ArrayList collection, the value doesn't match the specified type (String), but it is still accepted because generics were designed to stay compatible with older legacy code.
If not for this, i.e. if generics behaved the same way as arrays, then all pre-generics Java code would have been broken and would have had to be rewritten.
Remember one thing about generics: all your type specifications are compile time restrictions, so when you use the tricky method to insert data through your list reference, the compiler treats it as a list to which ANYTHING apart from primitives can be added.
It would only be rejected if you had written this:
...
public class Test {
private static ArrayList tricky(ArrayList<String> list) {
list.add(12345); //Error, couldn't add Integer to String
return list;
}
...
}
I have written a documented post on this, Read here.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/42975464",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: I can't fetch array elements in the view section that have been passed from the controller using the foreach() method I passed the array that was fetched from the model into the view section. The values are fetched in the controller part as well as in the view part. I used the foreach() method to iterate over the array elements as shown below.
Controller Part
function view_savings()
{
$user = $this->session->userdata('uname');
$report['savings'] = $this->money_m->get_savings($user);
$this->load->view('showsavings',$report);
}
View Part
<?php
foreach($savings as $vs)
{
echo $vs->username;
echo $vs->stype;
echo $vs->inst_name;
echo $vs->acc_name;
echo $vs->smonth;
echo $vs->syear;
}
?>
The array values are not displayed from the $savings array. Is there any problem with my code? Please help me.
A: Before calling the view, add:
print_r($report['savings']);
With the above code, are you getting any result?
A: Just try this:
<?php
if(isset($savings) && count($savings) > 0)
{
foreach($savings as $vs)
{
echo $vs['username'];
echo $vs['stype'];
echo $vs['inst_name'];
echo $vs['acc_name'];
echo $vs['smonth'];
echo $vs['syear'];
}
}
?>
Good Luck ['}
A: Use this in your view
<?php
foreach($savings->result() as $vs)
{
echo $vs->username;
echo $vs->stype;
echo $vs->inst_name;
echo $vs->acc_name;
echo $vs->smonth;
echo $vs->syear;
}
?>
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/31875662",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Change ASP.Net Label text colour if it contains '-' I'm trying to change the text/font colour of my ASP.net label if it contains a '-' symbol.
This is for a percentage-change label, so negative numbers need to be green and positive red.
I keep getting TypeError: document.getElementById(....) is null
I know window.onload isn't best practice; this is just to get it tested quickly.
Can anyone advise what I've done wrong? I'm going round in circles.
window.onload = fillDays;
function fillDays() {
var change = document.getElementById("<%=lblPercentageDifferenceToFillReqCurrentVsPreviousMonth %>").value;
if (change.indexOf(char) = '-') {
document.getElementById("<%=lblPercentageDifferenceToFillReqCurrentVsPreviousMonth %>").style.color = "green";
}
else {
document.getElementById("<%=lblPercentageDifferenceToFillReqCurrentVsPreviousMonth %>").style.color = "red";
}
console.log("fillDays")
};
A: You have to use ClientID
var change = document.getElementById("<%= lblPercentageDifferenceToFillReqCurrentVsPreviousMonth.ClientID %>").value;
Assuming that it is an actual Control like
<asp:TextBox ID="lblPercentageDifferenceToFillReqCurrentVsPreviousMonth" runat="server"></asp:TextBox>
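One more note: beyond ClientID, the posted condition itself has a bug. change.indexOf(char) = '-' assigns rather than compares, and char is never defined. Also, an asp:Label renders as a <span>, which has no value property, so read its text instead. A framework-free sketch of the corrected check (the helper name is mine):

```javascript
// Decide the colour for a percentage-change string: values containing
// '-' (negative change) go green, everything else red, per the question.
function colourFor(text) {
  return text.indexOf("-") !== -1 ? "green" : "red";
}

console.log(colourFor("-3.2%")); // green
console.log(colourFor("4.1%"));  // red

// In the page you would then do something like (ClientID as in the answer):
// var el = document.getElementById(
//   "<%= lblPercentageDifferenceToFillReqCurrentVsPreviousMonth.ClientID %>");
// el.style.color = colourFor(el.textContent);
```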
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/51964225",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Value is POSTed definitely but isset() says NO See the code given below. I have been fighting with this code for the last 5 hours trying to understand why isset() evaluates the condition as false when the value is POSTed exactly as it should be.
If I uncomment lines 4, 5, 6, 7 and 8 and take out the rest of the code from lines 10 to 28, I can see the POSTed value.
Can anyone help with any guidance or suggestions? I will be thankful.
<?php
include 'dbconnection.php';
include 'functions.php';
// var_dump($_POST); what happens when you uncomment this line?
//sec_session_start();
// $email = $_POST['logemail'];
// $password = $_POST['p'];
// echo $password;
// echo $email;
// Our custom secure way of starting a php session.
if(isset($_POST['logemail'], $_POST['p'])) {
$email = $_POST['logemail'];
$password = $_POST['p']; // The hashed password.
if(login($email, $password, $mysqli) === true) {
// Login success
//$url = 'mwq';
//echo '<META HTTP-EQUIV=Refresh CONTENT="0; URL='.$url.'">';
echo $password;
echo $email;
} else {
// Login failed
header('Location: login.php?error=1');
}
} else {
// The correct POST variables were not sent to this page.
echo 'Invalid Request Data Not POSTED';
}
?>
A: Without seeing the form I can only theorize what happens:
* You started with a clean slate: the form shows, you enter the username and password, and submit it.
* The code passes the condition and then performs the login.
* The result of login(...) !== true for whatever reason (wrong password or a bug in the code).
* The page redirects to login.php?error=1.
Now, the $_POST is empty and $_REQUEST contains array('error' => 1). Perhaps login.php shouldn't redirect to itself but rather redirect back to the page where you show the form.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/12697319",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: Retrieve a picture stored in a folder (ZK, Tomcat) I'm new to ZK programming. I have stored pictures from my app in a folder outside the server (Tomcat) and I want to retrieve them. The folder is c:\temps; supposing the picture name is myPic.jpg, when I write pics.setSrc("c:\temps\myPic.jpg"); it doesn't work for me. What can I do to fix it? The storage works well, but retrieving the picture only works in the Eclipse IDE, not in other browsers.
@Wire
private org.zkoss.zul.Image pics;
private static final String SAVE_PATH = "C:\\temps\\"; // backslashes must be escaped in Java string literals
private ProfileDao pd = new JpaProfileDao();
private ListModel<profile> profileModel;
@Override
public void doAfterCompose(Component comp) throws Exception {
super.doAfterCompose(comp);
profileModel = new ListModelList<Profile>(pd.findAll());
prfileSelect.setModel(profileModel);
String src="C://tmp//testimg.jpg";
pics.setSrc(src);
Clients.showNotification(pics.getSrc());
}
in the browser I have these problem
Not allowed to load local resource:file:///C://temps//testimg.jpg
A: You're trying to access a file on the server's disc directly; the browser is not allowed to load local resources, so use an alias in Tomcat:
<Context crossContext="true" docBase="here_the_path_in_disc" path="project_name/resource_name" reloadable="true"/>
A: I haven't worked with ZK for a while, but I'm pretty sure you need to give it a URL relative to your webapp root directory. So something like "/images/testimg.jpg" or a canonical URL like "https://www.google.dk/images/srpr/logo11w.png", like you would if you were writing HTML.
A: Best approach will be to use alternativedocroot on Glassfish or aliases option on Tomcat - aliases
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/23954543",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Carrierwave video thumbnail I am using the carrierwave gem (http://github.com/jnicklas/carrierwave) for ruby on rails.
How do I go about creating thumbnails for video uploads?
In my previous implementation with Paperclip, I was using FFMPEG.
How / where should this be done in Carrierwave?
There aren't too many useful resources on Google on this. Perhaps, someone with more experience can provide feedback.
Has anyone attempted to do this. Do share!
A: I have got it working as mentioned
It seems that the processing via ffmpeg needs to be done before model.save
A: It's not actually implemented in Carrierwave, so you need to code it yourself with a process action in your uploader class.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/4143549",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: How can I iterate over a file more than once? Currently, this code is giving me all lines from searchfile once. But with my newbie understanding, it should print out all lines from searchfile for every infile line!
searchfile = open('.\sometext.txt', 'r')
infile = open('.\somefile', 'r')
for line1 in infile:
for line2 in searchfile:
print line2
searchfile.close()
infile.close()
I tried to use searchfile.readlines() to create a list to print all infile lines for all searchfile lines, but it still does not work. Does anyone have a clue?
A: I suppose searchfile is a file you opened earlier, e. g. searchfile = open('.\someotherfile', 'r').
In this case, your construction doesn't work, because a file is an iterable which can be iterated over only once and then it is exhausted.
You have two options here:
*
*Reopen the file on every outer loop run
*Read the file's contents into a list and iterate over this list as often as you need to.
What happens in your code?
At the start of the nested for loops, both your files are open and can be read from.
Whenever the first inner loop run is over, searchfile is at its end. When the outer loop now comes to process its second entry, the inner loop is like an empty loop, as it just cannot produce more entries.
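A minimal sketch of option 2, using made-up demo file names so the example is self-contained:

```python
# Stand-in demo files (the asker's real files are sometext.txt / somefile).
with open("demo_search.txt", "w") as f:
    f.write("alpha\nbeta\n")
with open("demo_in.txt", "w") as f:
    f.write("1\n2\n3\n")

# Option 2: read the search file ONCE into a list; a list, unlike a file
# object, can be iterated over any number of times.
with open("demo_search.txt") as f:
    search_lines = f.readlines()

pairs = []
with open("demo_in.txt") as infile:
    for line1 in infile:
        for line2 in search_lines:      # fresh iteration on every outer pass
            pairs.append((line1.strip(), line2.strip()))

print(len(pairs))   # 3 outer lines x 2 search lines = 6
```

Option 1 (reopening the file) works too, but it re-reads the file from disk on every outer pass.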
A: The in and with keywords
You don't need two nested for loops!
Instead use the more pythonic in keyword like so:
with open("./search_file.txt", mode="r") as search_file:
lines_to_search = search_file.readlines()
with open("./file_to_search.txt", mode="r") as file_to_search:
for line_number, line in enumerate(file_to_search, start=1):
if line in lines_to_search:
print(f"Match at line {line_number}: {line}")
Pro tip: Open your files using the with statement to automatically close them.
A: You need to reset searchfile's current position to the beginning on every infile iteration. You can use the seek function for this.
searchfile = open('.\sometext.txt', 'r')
infile = open('.\somefile', 'r')
for line1 in infile:
searchfile.seek(0,0)
for line2 in searchfile:
print line2
searchfile.close()
infile.close()
A: We need a bit more detail about the objects in this code. But you probably want to do:
infile = open('.\somefile', 'r')
for line1 in infile:
for line2 in line1:
print line2
searchfile.close()
infile.close()
If your infile is a list of lists - That are the cases where a nested for loop would make sense.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/59047840",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: Object Detection on Cloud using React-Native Expo Complete React Native noobie here. I am trying to make an app in React Native Expo that will send an image to a cloud platform (still haven't decided on which one). The cloud will then recognize the objects in the image, put them in a square, and then return the image to the phone. I've been researching for days and I found APIs like Google Vision, and I found that Google has a module for multiple object detection. This is exactly what I need, but I don't know how to implement it. I've been looking online and found a project that seems similar, but it lacks object detection; it has image labeling instead, which is not what I need.
*
*Are there any 3rd party APIs I can use in my project that can be used in i.e. Firebase?
*How do I connect my app with i.e. Firebase? Can someone give me an example?
*The picture needs to be manipulated on the cloud thus the squares need to be added on the cloud. Is this possible? I've checked out Google Vision but it seems to just return a JSON file (might be wrong)?
*What would be the best cloud service that already has the API I need?
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/65389770",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: Position of an element in webpage I am trying to get the (x,y) co-ordinates of an element in javascript. I am using offset of jquery. My code is:
var offsets = $('#11a').offset();
var top = offsets.top;
var left = offsets.left;
console.log("Coordinates of re-ranker (top,left): " +top + "," +left)
My element is:
<p id= '11a'>
The value in console.log is different from the pixel values of 11a, which are something like 60px x 23px. The value I am getting in console.log is totally different and also a decimal. So what values am I getting in offsets? Are they different from pixel values?
A: .offset returns pixel values relative to the document. It is entirely possible for this method to return float values as not all sizes are pixel integers, for example:
Consider this HTML:
<div style="position: absolute; left: 33%;"></div>
The following command (for me):
console.log($("div").offset().left); // Outputs 276.203125
jsFiddle of the above.
However, it is important to note the yellow box on the linked page:
jQuery does not support getting the offset coordinates of hidden
elements or accounting for borders, margins, or padding set on the
body element.
While it is possible to get the coordinates of elements with
visibility:hidden set, display:none is excluded from the rendering
tree and thus has a position that is undefined
A: It gets you the offsets of the element relative to the document!
You can learn more here: http://api.jquery.com/offset/
A: You can get the css values via
$("#11a").css("top").replace("px", ""); // You have to remove the 'px' it returns
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/21110108",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: location based services - how to improve the location with RSSI, TA and ARFCN? I get LAC,CellId from my device. With these values I can know the Lat/Lng and, by triangulation, the area where the device is.
I also get ARFCN, TA and RSSI. Is it possible to improve the location by using these values ?
A: You need access to a cell-id to lat/lng database from the mobile network operator to get the lat/long of your device. Another way of getting this is by using the location APIs provided by Android, iPhone, etc.
A: Signal strength could possibly give you an indication of how far from a tower you are, thus improving accuracy.
What platform are you using?
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/10872623",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Call servlet from JSP inside tag How can I call a servlet from JSP inside an <a> tag and pass it a parameter?
<a href="???" />
I can write <a href="/servletName" /> but how do I pass it a parameter?
A: Yes, you can do that in two ways:
*
*create a function with an ajax call to your servlet.
*point your href to the servlet link (as JB Nizat mentioned)
for the first method, you can follow the below way (if you use jquery) :
function callServer(){
$.ajax({
url: 'ServletName',
type: 'POST',
data: 'parameter1='+parameter1,
cache: false,
success: function (data) {
//console.log("SERVLET DATA: " + data);
if (typeof (data) !== 'undefined' && data !== '' && data !== null) {
var response = JSON.parse(data);
console.log(response);
}
}, error: function (data) {
}
});
}
and call this function in your tag like below:
<a href="javascript:callServer();"> </a>
or the much better way like :
<a href="#" onclick="callServer();"> </a>
you can select the better approach !
A: Using an anchor tag without Ajax, you can try this:
<a href="servletName?paramName1=value1¶mName2=value2">click me to send parameter to servlet</a>
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/30125037",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-1"
}
|
Q: Update Field based on another table's Field values I want to update a field named "outofdate" (type datetime, e.g. 2015-01-14 10:03:11) based on another field named "lastmodification".
I want to add 10 days to the "outofdate" field where outofdate < NOW() (the current date).
My code is not working:
Update *
`mytable` set outofdate = lastmodification + 84500*10
WHERE outofdate < NOW( ) LIMIT 0,100
thx in advance!
A: Update *? That is not valid syntax. I think the rest is basically ok:
Update mytable
set outofdate = lastmodification + interval 10 day
WHERE outofdate < NOW()
LIMIT 100;
Note that the number of seconds in a day is not 84,500 (it is 86,400). Also, for date/time data types, use date_add() or interval addition.
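A quick sanity check on that aside (Python used purely as a calculator):

```python
# A day is 24 hours x 60 minutes x 60 seconds:
seconds_per_day = 24 * 60 * 60
print(seconds_per_day)   # 86400, not 84500
```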
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/28395557",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: How to replace url hyphen(-) with underscore(_) in CodeIgniter 4 $routes->setTranslateURIDashes(true);
I have tried this method in the routes.php file but it's not working. Please help.
A: Add to app/config/Routes.php:
$routes->setTranslateURIDashes(true);
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/61710765",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Firebase authentication - onVerificationCompleted and onCodeSent both called I have a question which is the same as Firebase authentication - I get onVerificationCompleted callback 3s after getting onCodeSent. However, the above question doesn't help with my problem.
I have the following flow in my app.
Signup (Activity)
-> onVerificationCompleted: I redirect user to CreateAccount activity.
-> onCodeSent: I redirect user to OTP screen -> on OTP verified user will be redirected to CreateAccount activity.
Issue/Problem:
I'm getting two callbacks: initially the user is redirected to CreateAccount via onVerificationCompleted, and after a few seconds I get the onCodeSent callback, so the app automatically opens the OTP screen.
*
*How do I avoid the two callbacks?
*How do I prevent the OTP screen when onVerificationCompleted is called?
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/71456899",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Correct way to parse JSON payload with multiple root objects I'm using AFNetworking for my network requests. Assume that I have the following JSON payload:
{
"car": [{
"name": "Blue BMW",
"owner_id": "123"
}],
"owners": [{
"id": "123",
"name": "John"
}]
}
What is the correct way to parse that JSON structure, considering that the object graph is not persisted using CoreData? Iterating through the owners object to find the owner details for each car would be highly inefficient, so what is a better approach?
A:
Iterating through the owners object to find the owner details for each car would be highly inefficient, so what is a better approach?
O(n^2) is perfectly fine for reasonably small n. On a modern iOS device, you'd have to get to on the order of 10k objects before you'd even see a performance hit, which is likely far larger than what you're being sent back as JSON.
As others mentioned before in comments, and as the old saying goes, "We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil".
Just code it. If your app is slow, profile it in instruments. Only then can you really know what the bottlenecks in your application are (humans are generally very bad at guessing a priori)
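If profiling ever does show the owner lookup to be the bottleneck, the standard fix is a one-pass index keyed by owner id, which makes the join linear instead of quadratic. A sketch (Python used only for illustration; the payload mirrors the question's JSON):

```python
import json

payload = json.loads("""
{
  "car":    [{"name": "Blue BMW", "owner_id": "123"}],
  "owners": [{"id": "123", "name": "John"}]
}
""")

# Build the index once: a single pass over owners...
owners_by_id = {o["id"]: o for o in payload["owners"]}

# ...then each car resolves its owner in O(1) instead of scanning the list.
cars = [
    {"car": c["name"], "owner": owners_by_id[c["owner_id"]]["name"]}
    for c in payload["car"]
]
print(cars)   # [{'car': 'Blue BMW', 'owner': 'John'}]
```

The same dictionary-of-owners idea translates directly to an NSDictionary keyed by id in Objective-C.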
A: Use NSJSONSerialization class to parse JSON.
Use + (id)JSONObjectWithData:(NSData *)data options:(NSJSONReadingOptions)opt error:(NSError **)error
to create a Foundation object from given JSON data.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/17248750",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-5"
}
|
Q: Sorting a linked list in C I have a list of this struct and I want to sort the list by the value
of draw_number. The list has a pointer front that points to the first struct
in the list and a pointer rear that points to the last struct. I found many examples of sorting a list, but I can't adapt them to my code. The struct is:
typedef struct itm{
int draw_number;
char date[11];
char temi[6];
struct itm *next;
}item;
A: Simplest will be a bubble-sort-style pass. Note that the whole payload must be swapped, not just draw_number, otherwise the date and temi fields end up attached to the wrong draw (strcpy comes from <string.h>):
item* sort(item *start){
    item *node1, *node2;
    int tmp_num;
    char tmp_date[11], tmp_temi[6];
    for(node1 = start; node1 != NULL; node1 = node1->next){
        for(node2 = start; node2 != NULL; node2 = node2->next){
            if(node2->draw_number > node1->draw_number){
                /* swap the payload of the two nodes, not the links */
                tmp_num = node1->draw_number;
                node1->draw_number = node2->draw_number;
                node2->draw_number = tmp_num;
                strcpy(tmp_date, node1->date); strcpy(node1->date, node2->date); strcpy(node2->date, tmp_date);
                strcpy(tmp_temi, node1->temi); strcpy(node1->temi, node2->temi); strcpy(node2->temi, tmp_temi);
            }
        }
    }
    return start;
}
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/17629898",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-5"
}
|
Q: using current session for drupal_http_request() So I'm using drupal_http_request() to fetch another page. The thing is, that page is within a logged-in area, so the function returns the login page instead of the proper page that my current session would see. Is there a way to configure drupal_http_request() so that it uses the current session to fetch the page, or to pass the session data in?
A: When you want to inject the session into the HTTP request, you must mimic the standard behavior. Depending on how your session works, this means either adding the session cookie or the session get parameter.
drupal_http_request() allows you to specify headers. You can, for example, build the cookie header for your session and then send it with the request.
To see how the cookie header is built, analyse the request headers your browser sends to your Drupal site; you can do that with Firebug. Look for the Cookie: header in the request headers. Note that it can differ depending on server configuration.
You then can add the cookie information to the $headers parameter:
$headers = array('Cookie' => 'your sessionid cookie data');
drupal_http_request($url, $headers);
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/7149856",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Remove specific things from given line in bash I have the below as output:
2017-04-10 12:23:00.307411 IP 124.108.16.209.443 > 192.168.180.3.44526: tcp 1209
2017-04-10 12:23:00.836184 IP 192.168.180.3.43095 > www.facebook.com.443: tcp 303
2017-04-10 12:23:09.948709 IP www.facebook.com.443 > 192.168.180.3.47172: tcp 38
2017-04-10 12:23:09.986789 IP 192.168.180.31.47172 > www.facebook.com.443: tcp 0
and I want output like this,
2017-04-10 12:23:00 IP 192.168.180.3 > www.facebook.com
2017-04-10 12:23:09 IP 192.168.180.31 > www.facebook.com
and importantly I want to remove lines which start with alphabetic characters after the word IP, and also delete lines which do not start with 192.168.180 after the word IP. Basically, from the example above I only want the 2nd and 4th lines as output.
A:
to remove lines which starts with alphabets after IP word and also
delete that line which do not starts with 192.168.180 after IP word
awk approach:
awk '$4!~/^[[:alpha:]]/ && $4~/^192\.168\.180/' file
space is a default field separator in awk.
$4!~/^[[:alpha:]]/:
$4 - fourth field
!~ - not matches
/^[[:alpha:]]/ - regular expression, means "starts with alphabetic characters"
&& - boolean "and" operator.
boolean1 && boolean2 - True if both boolean1 and boolean2 are true.
$4~/^192\.168\.180/ - matches a line if the fourth field starts with 192.168.180
Additional approach:
To strip the unneeded parts of certain columns use the following approach (the opening brace must stay on the same line as the pattern, otherwise awk treats the condition and the action as two separate rules):
awk -v p=".[^.]+$" '$4!~/^[[:alpha:]]/ && $4~/^192\.168\.180/{
$7=$8="";for(i=2;i<=6;i+=2)gsub(p,"",$i);print}' file
The output:
2017-04-10 12:23:00 IP 192.168.180.3 > www.facebook.com
2017-04-10 12:23:09 IP 192.168.180.31 > www.facebook.com
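For readers who prefer a scripting language over awk, here is a rough Python equivalent of the same filtering and stripping logic; it assumes the exact whitespace-separated field layout shown in the question:

```python
import re

# Sample tcpdump lines from the question.
lines = [
    "2017-04-10 12:23:00.307411 IP 124.108.16.209.443 > 192.168.180.3.44526: tcp 1209",
    "2017-04-10 12:23:00.836184 IP 192.168.180.3.43095 > www.facebook.com.443: tcp 303",
    "2017-04-10 12:23:09.948709 IP www.facebook.com.443 > 192.168.180.3.47172: tcp 38",
    "2017-04-10 12:23:09.986789 IP 192.168.180.31.47172 > www.facebook.com.443: tcp 0",
]

def strip_last(s):
    # Drop the trailing ".port" / ".fraction" component, like gsub(/.[^.]+$/,"") in awk.
    return re.sub(r"\.[^.]+$", "", s)

out = []
for line in lines:
    date, time, ip_word, src, arrow, dst = line.split()[:6]
    if src.startswith("192.168.180."):        # keep only lines whose source is our subnet
        out.append(f"{date} {strip_last(time)} {ip_word} {strip_last(src)} {arrow} {strip_last(dst)}")

for line in out:
    print(line)
```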
A: "I want to remove lines which start with alphabets after the IP word" means (essentially) the same thing as "keep lines which start with numerals", and
"delete lines which do not start with 192.168.180 after the IP word" means to keep lines which have IP 192.168.180 in them, which basically makes the first requirement obsolete. man grep:
DESCRIPTION
grep searches the named input FILEs for lines containing a match to the
given PATTERN. - - By default, grep prints the matching lines.
Try:
grep "IP 192.168.180" file
2017-04-10 12:23:00.836184 IP 192.168.180.3.43095 > www.facebook.com.443: tcp 303
2017-04-10 12:23:09.986789 IP 192.168.180.31.47172 > www.facebook.com.443: tcp 0
A: You can try sed; something like this (this is just the syntax):
sed -e '/pattern here/ { N; d; }'
N and d are the commands you can use.
This deletes 1 line after a pattern (including the line with the pattern):
pattern = your string
sed -e '/pattern/,+1d' file.txt
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/43317325",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Minimum value that is not zero In AppleScript, I need to find the integer value nearest to zero but not zero. The numbers are all zero or greater. At present, I have three integers.
I guess I could write a loop, but is there an easier way?
examples:
{0,3,4} find 3.
{1,0,0} find 1
{4,10,2} find 2
{0,0,0} find nothing or 0
A: You need to write a loop because, at some point, every item in the list needs to be evaluated, so there's no getting around that (assuming an iterative method; you could, of course, write a recursive algorithm that doesn't contain an explicit loop—I'll illustrate both below).
1. Iteration
The iterative method keeps track of the lowest, non-zero number encountered as we work our way, one-by-one, through each number in the list. When we reach the end of the list, the tracked value will be the result we're after:
on minimumPositiveNumber from L
local L
if L = {} then return null
set |ξ| to 0
repeat with x in L
set x to x's contents
if (x < |ξ| and x ≠ 0) ¬
or |ξ| = 0 then ¬
set |ξ| to x
end repeat
|ξ|
end minimumPositiveNumber
get the minimumPositiveNumber from {10, 2, 0, 2, 4} --> 2
2. Recursion
The recursive method compares the first item in the list with the lowest, non-zero value in the rest of the list, keeping the lowest, non-zero value:
on minimumPositiveNumber from L
local L
if L = {} then return 0
set x to the first item of L
set y to minimumPositiveNumber from the rest of L
if (y < x and y ≠ 0) or x = 0 then return y
x
end minimumPositiveNumber
get the minimumPositiveNumber from {10, 2, 0, 2, 4} --> 2
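The loop is unavoidable in AppleScript, but for comparison, a language with a filtered minimum makes this a one-liner; for example, in Python:

```python
def minimum_positive(numbers):
    # min over the non-zero entries; default=0 covers the all-zero / empty case
    return min((n for n in numbers if n != 0), default=0)

print(minimum_positive([0, 3, 4]))   # 3
print(minimum_positive([1, 0, 0]))   # 1
print(minimum_positive([4, 10, 2]))  # 2
print(minimum_positive([0, 0, 0]))   # 0
```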
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/55838252",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: How can i download some of content instead of all content of web page when i click on Save as HTML in browser When I click on the Save as HTML button it downloads all content of the web page, but I need to download only some specific content out of all of it.
function download(filename, contents) {
const anchor = document.createElement('a');
anchor.setAttribute('href', 'data:text/plain;charset=utf-8,' + encodeURIComponent(contents));
anchor.setAttribute('download', filename);
anchor.style.display = 'none';
document.body.appendChild(anchor);
anchor.click();
document.body.removeChild(anchor);
}
const btn = document.querySelector('#save-btn');
btn.addEventListener('click', (evt) => {
const container = document.createElement('div');
const html = document.createElement('html');
html.innerHTML = document.documentElement.innerHTML;
container.appendChild(html);
download('index.html', container.innerHTML);
});
<p>Hello World</p>
<button id="save-btn">Save as HTML</button>
<div>
<p>Hello World-1</p>
<p>Hello World-2</p>
<p>Hello World-3</p>
</div>
In the HTML example above, when I click on the Save as HTML button I want to download only a portion of the file, like the div content, instead of all of it.
In short, when I open the downloaded file it should display only this content:
<div>
<p>Hello World-1</p>
<p>Hello World-2</p>
<p>Hello World-3</p>
</div>
A: To only download the HTML of a specific element, change the logic to select that element instead of the entire body, like this:
document.querySelector('#save-btn').addEventListener('click', e => {
e.preventDefault();
let html = document.querySelector('div').outerHTML; // update this selector in your local version
download('index.html', html);
});
function download(filename, text) {
var element = document.createElement('a');
element.setAttribute('href', 'data:text/plain;charset=utf-8,' + encodeURIComponent(text));
element.setAttribute('download', filename);
element.style.display = 'none';
document.body.appendChild(element);
element.click();
document.body.removeChild(element);
}
Working example
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/67501059",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: Unable to import the package in jupyter notebook I am unable to import the package in the conda env inside jupyter notebook.
I can import the package via terminal but it's not loading in the notebook.
Can someone help me to debug this?
Update: Tried to add site-packages to path as well as per the suggestions by @Azhar
A: I had the same problem. The solution for me was:
*
*In jupyter print %pip install lasio
*Reset the kernel and start again
That's all. By the way, via conda it didn't work.
Good luck!
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/73678066",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: How to decrease deletion time in IBM db2 We are using IBM db2, trying to delete 200000 data from different tables, using stored procedure invoked through java code.
It's taking 75+ hours to delete; please suggest ways to optimize the deletion time.
We can't stop logging for the deletion.
We can't use the truncate feature.
A: Something like this is one way
/*
* Example of deleting X rows of data at a time from a table to avoid e.g. transaction log full on row organized tables
*
*/
BEGIN
DECLARE DONE BOOLEAN DEFAULT FALSE;
--
DECLARE CONTINUE HANDLER FOR SQLSTATE '02000' BEGIN SET DONE = TRUE; END;
--
WHILE NOT DONE
DO
DELETE FROM (SELECT * FROM MY_TABLE WHERE COL1 = 'A' FETCH FIRST 20000 ROWS ONLY);
COMMIT;
END WHILE;
END
You could also consider using MDC tables to allow "rollout" deletion, or other physical design optimizations
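The same batching idea can also be driven from client code. Here is a sketch using SQLite purely as a stand-in (the sqlite3 module is obviously not Db2, and Db2's SQL differs as shown above): delete matching rows in fixed-size batches keyed by rowid, committing after each batch so log usage stays bounded:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE my_table (id INTEGER PRIMARY KEY, col1 TEXT)")
# 10,000 rows, half of them matching col1 = 'A'.
conn.executemany(
    "INSERT INTO my_table (col1) VALUES (?)",
    [("A",) if i % 2 == 0 else ("B",) for i in range(10_000)],
)
conn.commit()

BATCH = 2000
while True:
    # Delete one batch of matching rows, keyed by rowid via a LIMITed subselect.
    cur = conn.execute(
        "DELETE FROM my_table WHERE rowid IN "
        "(SELECT rowid FROM my_table WHERE col1 = 'A' LIMIT ?)",
        (BATCH,),
    )
    conn.commit()                 # commit per batch: bounded log usage
    if cur.rowcount < BATCH:      # fewer than a full batch means we are done
        break

remaining = conn.execute("SELECT COUNT(*) FROM my_table").fetchone()[0]
print(remaining)   # 5000 'B' rows left
```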
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/65352558",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-1"
}
|
Q: How can I define a Java 9 module inside both a source and source-test directories? I'm having a really hard time with some Java modules. I'm working on an old Java project which has been "migrated" to Java 11. Nowhere in the project is there any module-info.java file defined.
I'm getting compilation errors in Eclipse/VS Code, which look like:
The package org.w3c.dom is accessible from more than one module: <unnamed>, java.xml
I don't fully understand why it's causing the problem, but I added a module-info.java definition to the root of the module.
module com.company.app {
requires java.xml;
}
And that compilation error went away. I now have visibility errors everywhere and many, many more than before.
I've started to fix the visibility errors with exports and imports entries as needed, but now I have a problem.
In one of the projects, there is a source and a separate source-test folder. I've defined a module definition in the source folder.
The code in the source-test folder is separate, but has the same package structure. The following code:
import static org.junit.Assert.assertNotNull;
import org.junit.Test;
import org.junit.experimental.categories.Category;
The import org cannot be resolved. (in the line of the import static).
The type org.junit.Test is not accessible (in the corresponding line)
The type org.junit.experimental.categories.Category is not accessible (once again, in the corresponding line.)
I don't want to add the junit dependency to the main project code, since it's a testing dependency. However, if I define another module-info.java module inside the source-test folder, it complains about the build path containing a duplicate entry 'module-info.java'.
How can the dependencies and modules be correctly defined?
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/55973047",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: Why do I have to write c - '0' instead of just c? Hey, I can't understand why my code doesn't work when I put just ++ndigit[c] (instead of ++ndigit[c - '0']); with ++nchar[c] it's OK.
If you have any tutorial I'll be really interested!
#include <stdio.h>
int main()
{
int c, i, y, ns;
int ndigit[10];
int nchar[26];
ns = 0;
for(i = 0; i >= 0 && i<= 9; ++i) {
ndigit[i] = 0;
}
for(y = 'a'; y <= 'z'; ++y) {
nchar[y] = 0;
}
while((c = getchar()) != EOF) {
if(c == ' ' || c == '\t') {
++ns;
}
if(c >= 'a' && c <= 'z') {
++nchar[c];
}
if(c >= '0' && c <= '9') {
++ndigit[c];
//++ndigit[c-'0'];
}
if(c == '\n') {
printf("chiffres: ");
for(i=0;i<10;++i) {
printf("%d:%d ", i, ndigit[i]);
}
printf("lettres: ");
for(y='a';y<='z';++y) {
printf("%d:%d ", y, nchar[y]);
}
printf("space: %d\n", ns);
}
}
}
A: When c holds the character '0', the value of c is the ASCII value of '0', which is 48.
Since c is then at least 48 (and up to 57 for '9') while the array size is only 10, your code invokes undefined behavior, because you are trying to access an index that doesn't exist.
Remember that '0' denotes a character, so assigning it to an int variable gives you the ASCII value of that character. If you want the plain integer 0, use c = 0 directly; to convert a digit character to its numeric value, subtract '0'.
A: Because the character '4' (for example) is usually not equal to the integer 4, i.e. '4' != 4.
Using the most common character encoding scheme, ASCII, the character '4' has the value 52 and the character '0' has the value 48. That means if you do e.g. '4' - '0' you in practice do 52 - 48 and get the result 4 as an integer.
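The arithmetic is easy to check in any language; in Python, for instance, ord() exposes the character codes directly (values shown assume ASCII):

```python
# '0'..'9' are consecutive code points, so digit - '0' yields the digit's numeric value.
print(ord('4'))              # 52
print(ord('0'))              # 48
print(ord('4') - ord('0'))   # 4
```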
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/43468998",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Share project between Windows Store Application And WCF Service Is it possible to create and "share" a class library project between an Windows Store Application and WCF Service project ?
I created Windows Store Application project and WCF Service project. Now, both projects should refer a Class library project.
If I create a regular Class Library project, the WCF Service project can refer it.
But when I add the reference to the Windows Store Application Visual Studio 2012 says :
Unable to add the reference to project XXX
So I decided to replace the regular Class Library project by a Class Library(Windows Store apps).
Now the Windows app project can refer the Class Library(Windows Store apps).
But when I add the reference to the WCF Service Visual Studio says :
Unable to add the reference to project XXX
A: To share code between the two projects create a Portable Class Library (PCL) project and reference it in the two projects.
Make sure you choose the right .Net framework (for compatibility with WCF project).
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/18720423",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: How to use the find command in Java I want to know whether there is find functionality in Java.
Like in Linux we use the below command to find a file :
find / -iname <filename> or find . -iname <filename>
Is there a similar way to find a file in Java? I have a directory structure and need to find certain files in some sub directories as well as sub-sub directories.
Eg: I have a package abc/test/java
This contains futher directories say
abc/test/java/1/3 , abc/test/java/imp/1, abc/test/java/tester/pro etc.
So basically the abc/test/java package is common and it has a lot of directories inside it which contain a lot .java files.
I need a way to obtain the absolute path of all these .java files.
A: You can use unix4j
Unix4jCommandBuilder unix4j = Unix4j.builder();
List<String> testClasses = unix4j.find("./src/test/java/", "*.java").toStringList();
for(String path: testClasses){
System.out.println(path);
}
pom.xml dependency:
<dependency>
<groupId>org.unix4j</groupId>
<artifactId>unix4j-command</artifactId>
<version>0.3</version>
</dependency>
Gradle dependency:
compile 'org.unix4j:unix4j-command:0.2'
A: You probably do not have to reinvent the wheel, because a library named Finder already implements the functionality of the Unix find command: https://commons.apache.org/sandbox/commons-finder/
A: Here's a java 8 snippet to get you started if you want to roll your own. You might want to read up on the caveats of Files.list though.
import java.io.IOException;
import java.nio.file.*;
import java.util.function.Predicate;
import java.util.stream.Stream;

public class Find {
public static void main(String[] args) throws IOException {
Path path = Paths.get("/tmp");
Stream<Path> matches = listFiles(path).filter(matchesGlob("**/that"));
matches.forEach(System.out::println);
}
private static Predicate<Path> matchesGlob(String glob) {
FileSystem fileSystem = FileSystems.getDefault();
PathMatcher pathMatcher = fileSystem.getPathMatcher("glob:" + glob);
return pathMatcher::matches;
}
public static Stream<Path> listFiles(Path path){
try {
return Files.isDirectory(path) ? Files.list(path).flatMap(Find::listFiles) : Stream.of(path);
} catch (IOException e) {
throw new RuntimeException(e);
}
}
}
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/30974148",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Linkedin authentication request error so I'm implementing the option to login with your linkedin account, but I find that sometimes you will get a request error saying:
Request Error
We’re sorry, there was a problem with your request. Please make sure you have cookies enabled and try again.
Or follow this link to return to the home page.
So I did some digging and I found that this error pops up if you don't have a certain cookie from linkedin called JSESSIONID. This is only created when you go to linkedin.com, but not my extension authentication page. Anyone have an explanation and a solution?
Thanks
A: Here is a work around:
link to an approved solution
It provides a Java implementation, and they point out that the issue is more about the version of the library you are using.
Hopefully it helps.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/25776169",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: How to combine several lists and create a dataframe in pandas I have a list that contains something like this:
[] [['a' 'b' 5] ['c' 'd' 2]] []
I don't understand how to combine these several lists into one list and exclude the empty lists. Finally, I need to get a DataFrame with 3 columns.
A: You can do something like this-
from functools import reduce
nested_list = [[], ['a','b',5],['c', 'd', 2], []]
merged_list = reduce((lambda x, y:x+y), nested_list)
This solution applies to nested lists that are a single level deep ([[a,b,c],[x,y,z]]).
If you can describe what type of list you want merged, I can provide a solution for that. For now, I have assumed it's just a single-level nested list.
A: Assuming the desired output will be looked like this:
col1 col2 col3
NaN ['a' 'b' 5] NaN
NaN ['c' 'd' 2] NaN
And currently you are having the following list at your hands:
>>>a_list
[[], [['a' 'b' 5], ['c' 'd' 2]], []]
Then you can do the following to create the DataFrame:
>>>import pandas as pd
>>>import numpy as np
>>>df = pd.DataFrame(columns=['col1','col2','col3'])
>>>a_list = [[], [['a' 'b' 5], ['c' 'd' 2]], []]
>>>for i in range(len(df.columns.tolist())):
... try:
... df[df.columns[i]] = a_list[i]
... except:
... df[df.columns[i]] = np.nan
>>>df
col1 col2 col3
0 NaN [a, b, 5] NaN
1 NaN [c, d, 2] NaN
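Putting the two answers together, one concise route (the column names below are placeholders) is to flatten the outer list one level, dropping the empty sublists, and pass the rows straight to the DataFrame constructor:

```python
import pandas as pd

nested = [[], [['a', 'b', 5], ['c', 'd', 2]], []]

# keep only non-empty groups and flatten one level
rows = [row for group in nested if group for row in group]

# each inner triple becomes one row with three columns
df = pd.DataFrame(rows, columns=['col1', 'col2', 'col3'])
```

This yields two rows, ('a', 'b', 5) and ('c', 'd', 2), with no NaN placeholders.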
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/44108211",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-1"
}
|
Q: Looking to convert a console app to an ASP.NET web form I currently have this console app that extracts words from an image using Microsoft's Optical Character Recognition (OCR). The results currently print to the console; I want to create the same thing but for a web page form. What's the best way to do that? Can I use a list box or a label to show the results instead of Console.WriteLine?
here's the code for the console app
static class Program{
// Add your Computer Vision subscription key and endpoint to your environment variables.
const string subscriptionKey = ("********"); const string endpoint = ("https://wesam.cognitiveservices.azure.com/");
// the OCR method endpoint
static string uriBase = endpoint + "vision/v2.1/ocr";
static async Task Main()
{
// Get the path and filename to process from the user.
Console.WriteLine("Optical Character Recognition:");
Console.Write("Enter the path to an image with text you wish to read: ");
string imageFilePath = @"C:\Users\alabe\Downloads\Syria.jpg";
if (File.Exists(imageFilePath))
{
// Call the REST API method.
Console.WriteLine("\nWait a moment for the results to appear.\n");
await MakeOCRRequest(imageFilePath);
}
else
{
Console.WriteLine("\nInvalid file path");
}
Console.WriteLine("\nPress Enter to exit...");
Console.ReadLine();
}
/// <summary>
/// Gets the text visible in the specified image file by using
/// the Computer Vision REST API.
/// </summary>
/// <param name="imageFilePath">The image file with printed text.</param>
static async Task MakeOCRRequest(string imageFilePath)
{
try
{
HttpClient client = new HttpClient();
// Request headers.
client.DefaultRequestHeaders.Add(
"Ocp-Apim-Subscription-Key", subscriptionKey);
// Request parameters.
// The language parameter doesn't specify a language, so the
// method detects it automatically.
// The detectOrientation parameter is set to true, so the method detects and
// and corrects text orientation before detecting text.
string requestParameters = "language=unk&detectOrientation=true";
// Assemble the URI for the REST API method.
string uri = uriBase + "?" + requestParameters;
HttpResponseMessage response;
// Read the contents of the specified local image
// into a byte array.
byte[] byteData = GetImageAsByteArray(imageFilePath);
// Add the byte array as an octet stream to the request body.
using (ByteArrayContent content = new ByteArrayContent(byteData))
{
// This example uses the "application/octet-stream" content type.
// The other content types you can use are "application/json"
// and "multipart/form-data".
content.Headers.ContentType =
new MediaTypeHeaderValue("application/octet-stream");
// Asynchronously call the REST API method.
response = await client.PostAsync(uri, content);
}
// Asynchronously get the JSON response.
string contentString = await response.Content.ReadAsStringAsync();
// Display the JSON response.
Console.WriteLine("\nResponse:\n\n{0}\n",
JToken.Parse(contentString).ToString());
Rootobject r = JsonConvert.DeserializeObject<Rootobject>(contentString);
foreach (Region region in r.regions)
{
foreach (Line line in region.lines)
{
foreach (Word word in line.words)
{
Console.WriteLine(word.text);
Console.Write("");
}
Console.WriteLine();
}
}
}
catch (Exception e)
{
Console.WriteLine("\n" + e.Message);
}
}
/// <summary>
/// Returns the contents of the specified file as a byte array.
/// </summary>
/// <param name="imageFilePath">The image file to read.</param>
/// <returns>The byte array of the image data.</returns>
static byte[] GetImageAsByteArray(string imageFilePath)
{
// Open a read-only file stream for the specified file.
using (FileStream fileStream =
new FileStream(imageFilePath, FileMode.Open, FileAccess.Read))
{
// Read the file's contents into a byte array.
BinaryReader binaryReader = new BinaryReader(fileStream);
return binaryReader.ReadBytes((int)fileStream.Length);
}
}
}
A: Instead of chasing down Web Forms, try using Blazor and .Net Core. You still get to re-use your existing C# code. Quick tutorial here: Blazor Tutorial
Starting a new Blazor app (provided you have .Net core installed) is as simple as running one of the following commands from powershell in a folder of your choosing.
//serverside (renders views on server and syncs to client)
dotnet new blazorserver
//clientside (c# is transpiled to webassembly and runs purely in client browser - still finicky but YMMV)
dotnet new blazorwasm
You could use that existing working project as a springboard to learn a new technology, and it gets you up and running fast.
I have followed the following article for getting file uploads, and you could customize the example to get the image bytes: blazor-inputfile
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/65130444",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-1"
}
|
Q: Combine JSON decoding pipeline, when response can be an array or an object at runtime I have an API that may return a JSON array of JSON objects or a single JSON object.
How do I write a combine publisher pipeline that handles this case?
I normally hard-code the type of the JSON-response when I add my combine decode operator to my pipeline:
.decode(type: [MyArrayType].self, decoder: JSONDecoder())
.decode(type: MyObjectType.self, decoder: JSONDecoder())
A: It's definitely possible to update the publisher chain and to try to decode an array of [SomeDecodable], and failing that, fallback on decoding the SomeDecodable itself.
The pattern would be to wrap it in a FlatMap, and to deal with a failure inside of it.
So, let's say you have some upstream publisher with an output of Data and failure of Error, and you're trying to decode some generic type T: Decodable, this could be a way to approach:
upstream
.flatMap { data in
Just(data)
// attempt to decode as [T]
.decode(type: [T].self, decoder: JSONDecoder())
// if successful, publish each element one-by-one
.flatMap { arr -> AnyPublisher<T, Error> in
Publishers.Sequence(sequence: arr).eraseToAnyPublisher()
}
// if error, attempt to decode as T, possibly failing
.tryCatch { _ in Just(data).decode(type: T.self, decoder: JSONDecoder()) }
}
// this will be an AnyPublisher<T, Error>
.eraseToAnyPublisher()
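The shape of the fallback, independent of Combine, is just "try the array, else wrap the single object". A tiny illustration of that normalization step (shown in Python purely to make the idea concrete; the Combine chain above is the one to use in Swift):

```python
import json

def decode_one_or_many(payload: str) -> list:
    """Parse JSON that may be an array of objects or a single object,
    always returning a list of objects."""
    data = json.loads(payload)
    # if it's already a list, publish elements one by one;
    # otherwise fall back to wrapping the single object
    return data if isinstance(data, list) else [data]
```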
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/63237595",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: How to import normalised json data from several files into a pandas dataframe? I have json datafiles in several directories that I want to import into Pandas to do some data analysis. The format of the json depends on the type defined in the directory name. For example,
dir1_typeA/
file1
file2
...
dir1_typeB/
file1
file2
...
dir2_typeB/
file1
...
dir2_typeA/
file1
file2
Each file contains a complex nested JSON string that will become a row of the DataFrame. I will have two data frames, one for TypeA and one for TypeB. Later on I will append them if needed.
So, far I've got all the files paths I need with os.walk and am trying to go through
import os
import json
from glob import glob
import pandas as pd
from pandas.io.json import json_normalize

PATH = 'dir/filepath'
files = [y for x in os.walk(PATH) for y in glob(os.path.join(x[0], 'file*'))]
for file in files:
    with open(file, 'r') as f:
        data = f.read()
    data_json = json_normalize(json.loads(data))
    type = file.split('/')[3]
    data_json['type'] = type
    # append to data frame for typeA and typeB
    if 'typeA' in type:
        pass  # append to typeA dataframe
    else:
        pass  # append to typeB dataframe
There is one added issue: files inside a directory may have slightly different fields. For example, file1 may have a few more fields than file2 in dir1_typeA. So, I need to accommodate that dynamic nature in the data frame for each type as well.
How do I create these two dataframes?
A: I think you should concatenate the files together before you read them into pandas; here is how you'd do it in bash (you could also do it in Python):
cat `find *typeA` > typeA
cat `find *typeB` > typeB
Then you can import it into pandas using io.json.json_normalize:
import json
with open('typeA') as f:
data = [json.loads(l) for l in f.readlines()]
dfA = pd.io.json.json_normalize(data)
dfA
# that this.first this.second
# 0 otherthing thing thing
# 1 otherthing thing thing
# 2 otherthing thing thing
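If shelling out to cat is not an option, the same split can be done in pure Python. The sketch below (function name assumed; the root path is wherever your dir*_type* folders live) reads one JSON object per file and groups rows by the type embedded in the parent directory name; json_normalize then handles files whose schemas differ slightly, filling missing fields with NaN:

```python
import json
import os
from glob import glob

import pandas as pd

def load_frames(root):
    """Read one JSON object per file under root and split rows by type."""
    records = {'typeA': [], 'typeB': []}
    paths = [p for d in os.walk(root) for p in glob(os.path.join(d[0], 'file*'))]
    for path in paths:
        with open(path) as f:
            record = json.load(f)
        # directory names look like dir1_typeA, dir2_typeB, ...
        parent = os.path.basename(os.path.dirname(path))
        kind = 'typeA' if 'typeA' in parent else 'typeB'
        records[kind].append(record)
    # json_normalize flattens nested objects; fields absent from some
    # files simply come out as NaN in those rows
    # (pandas >= 1.0; in older versions use pd.io.json.json_normalize)
    return pd.json_normalize(records['typeA']), pd.json_normalize(records['typeB'])
```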
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/39632248",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: accessing images I am new to Symfony.
I created a layout page in which I have this :
<img src="images/header.jpg" width="790" height="228" alt="" />
but the image isn't displayed in the page when accessed from the browser.
I put the file header.jpg in the web/images/ folder.
I thought I could learn Symfony in a week while working on a small project. Is that possible?
A: Use slash at the beginning like
<img src="/images/header.jpg" width="790" height="228" alt="" />
You can also use image_tag (which is better for routing)
image_tag('/images/header.jpg', array('alt' => __("My image")))
In the array with parameters you can add all HTML attributes like width, height, alt etc.
P.S. It's not easy to learn Symfony. You will need much more time.
A: If you don't want a fully PHP generated image tag but just want the correct path to your image, you can do :
<img src="<?php echo image_path('header.jpg'); ?>" width="790" height="228" alt="" />
Notice that the path passed to image_path excludes the /images part as this is automatically determined and created for you by Symfony, all you need to supply is the path to the file relative to the image directory.
You can also get to your image directory a little more crudely using
sfConfig::get('sf_web_dir')."/images/path/to/your/image.jpg"
It should be noted that using image_tag has a much larger performance cost attached to it than using image_path as noted on thirtyseven's blog
Hope that helps :)
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/2071317",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Distributed mutex There are hundreds of servers in our cloud. There is a script which can be invoked at any time by any of these servers. I have to make sure that at any given time only one server is running the script. While one server has acquired the lock on the script, if another server tries to execute it, it should just write to a log file and exit. There can be multiple such scripts, each having a separate mutex lock. That means I want a solution which accommodates multiple servers and multiple scripts. I am looking for a very simple solution. Please point me to any available tool or suggest popular ways of implementing this.
A: Do you mean you want to implement something yourself like a single server which controls the locks for each script?
All your other servers would have to ask it for 'permission' to run the script and then inform it when they are done, probably with some timeout check mechanism also. You would need to think about having some high availability mechanism to ensure your 'lock controller' server does not become a single point of failure for the entire system. Also, you may want to check whether you will need to queue requests rather than just exiting - even if it is not a requirement now, if it is likely to become one it might be easier to design for it from the start.
Some common approaches are listed in the answers to these question here - the questions are a bit old but I think still relevant:
Distributed Lock Service
What are some good ways to do intermachine locking?
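The lock-controller idea above can be sketched as a small class. This is an in-process illustration only (all names are hypothetical); a real deployment would expose the same acquire/release protocol over the network, e.g. via Redis or ZooKeeper, with the timeout guarding against crashed holders:

```python
import threading
import time

class LockController:
    """Grants at most one holder per script name; stale locks expire."""
    def __init__(self, timeout_seconds=300):
        self._guard = threading.Lock()
        self._holders = {}  # script name -> (server id, acquired-at time)
        self._timeout = timeout_seconds

    def try_acquire(self, script, server):
        with self._guard:
            holder = self._holders.get(script)
            # expire stale locks so a crashed holder cannot block forever
            if holder and time.monotonic() - holder[1] > self._timeout:
                holder = None
            if holder is None:
                self._holders[script] = (server, time.monotonic())
                return True
            return False  # another server holds it: caller logs and exits

    def release(self, script, server):
        with self._guard:
            if self._holders.get(script, (None,))[0] == server:
                del self._holders[script]
```

Each script name is a separate mutex, so the same controller serves many scripts across many servers.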
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/18526843",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
}
|
Q: Why is my SPARQL query so slow? I'm trying to fetch a number of queries from the database of the EU plenary debates through a SPARQL interface (Interface here, schema here). As I do that I would like to retrieve the names of the speaker, their home country and their partyname.
This takes me 5 minutes to complete for each agenda item, which seems slow. Am I making any obvious mistakes in my query that is slowing it down?
SELECT ?text (SAMPLE(?speaker) AS ?speaker) (SAMPLE(?given) AS ?given) (SAMPLE(?surname) AS ?surname) (SAMPLE(?acronym) AS ?country) (SAMPLE(?partyLabel) AS ?partyLabel) (SAMPLE(?type) AS ?type)
WHERE {
<http://purl.org/linkedpolitics/eu/plenary/2010-12-16_AgendaItem_4> dcterms:hasPart ?speech.
?speech lpv:speaker ?speaker.
?speaker foaf:givenName ?given.
?speaker foaf:familyName ?surname.
?speaker lpv:countryOfRepresentation ?country.
?country lpv:acronym ?acronym.
?speech lpv:translatedText ?text.
?speaker lpv:politicalFunction ?func.
?func lpv:institution ?institution.
?institution rdfs:label ?partyLabel.
?institution rdf:type ?type.
FILTER(langMatches(lang(?text), "en"))
} GROUP BY ?text
Note, changing ?speech lpv:translatedText ?text. to ?speech lpv:text ?text. reduces the query time to 30 seconds.
A: There doesn't look to be anything particularly wrong with your SPARQL query and you have made no obvious mistakes (other than some syntax validity issues which I discuss later)
The problem appears to be that the SPARQL service you are using uses a triple store that doesn't cope well with queries containing large numbers of joins. When experimenting with your query, moving the triple patterns around produced a stack overflow in the SPARQL service!
I would suggest downloading the data yourself from http://linkedpolitics.ops.few.vu.nl/home - there are links under point 3 of the About the Data section from which you can download the data yourself. You can then load it into the triple store of your choice and run your query against that instead.
For example I downloaded the data and put it into Apache Jena Fuseki (disclaimer - I work on the Apache Jena project) and was able to run the query almost instantaneously after I fixed the query to be proper valid SPARQL.
Making the Query valid SPARQL
The query as given is not strictly valid SPARQL so you'll need to correct it in order to run it elsewhere.
Firstly the various prefixes used are not defined by the query because the service you are using inserts them automatically, to run this query against another triple store you'll need to add the following to the start of the query:
PREFIX dcterms: <http://purl.org/dc/terms/>
PREFIX foaf: <http://xmlns.com/foaf/0.1/>
PREFIX lpv: <http://purl.org/linkedpolitics/vocabulary/>
PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
It is also not legal to perform a variable assignment where the variable name given is already in scope e.g. (SAMPLE(?speaker) AS ?speaker) so those need to change:
(SAMPLE(?speaker) AS ?speaker1)
Which results in the following valid and portable SPARQL query:
PREFIX dcterms: <http://purl.org/dc/terms/>
PREFIX foaf: <http://xmlns.com/foaf/0.1/>
PREFIX lpv: <http://purl.org/linkedpolitics/vocabulary/>
PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
SELECT ?text (SAMPLE(?speaker) AS ?speaker1) (SAMPLE(?given) AS ?given1) (SAMPLE(?surname) AS ?surname1) (SAMPLE(?acronym) AS ?country1) (SAMPLE(?partyLabel) AS ?partyLabel1) (SAMPLE(?type) AS ?type1)
WHERE {
<http://purl.org/linkedpolitics/eu/plenary/2010-12-16_AgendaItem_4> dcterms:hasPart ?speech.
?speech lpv:speaker ?speaker.
?speaker foaf:givenName ?given.
?speaker foaf:familyName ?surname.
?speaker lpv:countryOfRepresentation ?country.
?country lpv:acronym ?acronym.
?speech lpv:translatedText ?text.
?speaker lpv:politicalFunction ?func.
?func lpv:institution ?institution.
?institution rdfs:label ?partyLabel.
?institution rdf:type ?type.
FILTER(langMatches(lang(?text), "en"))
} GROUP BY ?text
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/33540751",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: Joomla! K2 - How to fetch Author Item Count for user page? I have several authors on my website and each of them has a page of his own with his image, description and a list of all his items. I would like to add an "Item Counter" to their pages, so the page will look like this:
author name, number of posts -> author description -> author items.
The only thing I'm missing here is the number of posts by the author.
Thank you so much for the answers!
A: You will need to slightly modify the K2 view (we did this for one of our clients). You will need to create a query in the view that resembles the following:
SELECT count(*) FROM #__k2_items WHERE authorid='id';
Now you should pass the result of that query to the template (using the assignRef function on the $this object). That's it!
Note: I'm not sure about the field name authorid; I'm sure it's something else, but I can't remember it off the top of my head.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/24439788",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Not able to update password and userAccountControl in AD using Spring LDAP I am new to Spring LDAP and Active Directory and am facing an issue while updating the password for a newly created user in AD.
Using Spring LDAP I first created the user in AD successfully, then tried to update the password and userAccountControl of the user, at which point I get the exception below.
We have been trying for the last week and are not able to resolve it. Any help/direction is highly appreciated.
I have been through many blogs and tried what is mentioned in the two posts below, but I am still blocked and getting the same exception:
How do I resolve "WILL_NOT_PERFORM" MS AD reply when trying to change password in scala w/ the unboundid LDAP SDK?
Adding a user with a password in Active Directory LDAP
Stacktrace:
16:43:56,991 INFO [stdout] (http-localhost-127.0.0.1-8080-1) INFO [http-localhost-127.0.0.1-8080-1] (HelperDao.java:26) - HelperDao.getNextUserId(): entry
16:43:57,007 INFO [stdout] (http-localhost-127.0.0.1-8080-1) Hibernate: SELECT LTRIM(TO_CHAR( IP_USER_XDUSERID_SEQ.nextval, '000000000000000000000000000')) ID from dual
16:43:57,164 INFO [stdout] (http-localhost-127.0.0.1-8080-1) INFO [http-localhost-127.0.0.1-8080-1] (HelperDao.java:30) - HelperDao.getNextUserId(): exit
16:47:17,051 INFO [stdout] (http-localhost-127.0.0.1-8080-1) 16:47:17.051 [http-localhost-127.0.0.1-8080-1] ERROR com.st.liotroevo.web.dao.UserADRepository - catching
16:47:17,051 INFO [stdout] (http-localhost-127.0.0.1-8080-1) javax.naming.OperationNotSupportedException: [LDAP: error code 53 - 0000200D: SvcErr: DSID-031A0FC0, problem 5003 (WILL_NOT_PERFORM), data 0
16:47:17,067 INFO [stdout] (http-localhost-127.0.0.1-8080-1)
16:47:17,067 INFO [stdout] (http-localhost-127.0.0.1-8080-1) at com.sun.jndi.ldap.LdapCtx.mapErrorCode(LdapCtx.java:3160) ~[?:1.7.0_45]
16:47:17,067 INFO [stdout] (http-localhost-127.0.0.1-8080-1) at com.sun.jndi.ldap.LdapCtx.processReturnCode(LdapCtx.java:3033) ~[?:1.7.0_45]
16:47:17,067 INFO [stdout] (http-localhost-127.0.0.1-8080-1) at com.sun.jndi.ldap.LdapCtx.processReturnCode(LdapCtx.java:2840) ~[?:1.7.0_45]
16:47:17,067 INFO [stdout] (http-localhost-127.0.0.1-8080-1) at com.sun.jndi.ldap.LdapCtx.c_modifyAttributes(LdapCtx.java:1478) ~[?:1.7.0_45]
16:47:17,067 INFO [stdout] (http-localhost-127.0.0.1-8080-1) at com.sun.jndi.toolkit.ctx.ComponentDirContext.p_modifyAttributes(ComponentDirContext.java:273) ~[?:?]
16:47:17,067 INFO [stdout] (http-localhost-127.0.0.1-8080-1) at com.sun.jndi.toolkit.ctx.PartialCompositeDirContext.modifyAttributes(PartialCompositeDirContext.java:190) ~[?:?]
16:47:17,099 INFO [stdout] (http-localhost-127.0.0.1-8080-1) at com.sun.jndi.toolkit.ctx.PartialCompositeDirContext.modifyAttributes(PartialCompositeDirContext.java:179) ~[?:?]
16:47:17,099 INFO [stdout] (http-localhost-127.0.0.1-8080-1) at javax.naming.directory.InitialDirContext.modifyAttributes(InitialDirContext.java:167) ~[?:1.7.0_45]
16:47:17,099 INFO [stdout] (http-localhost-127.0.0.1-8080-1) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.7.0_45]
16:47:17,099 INFO [stdout] (http-localhost-127.0.0.1-8080-1) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) ~[?:1.7.0_45]
16:47:17,099 INFO [stdout] (http-localhost-127.0.0.1-8080-1) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.7.0_45]
16:47:17,099 INFO [stdout] (http-localhost-127.0.0.1-8080-1) at java.lang.reflect.Method.invoke(Method.java:606) ~[?:1.7.0_45]
16:47:17,099 INFO [stdout] (http-localhost-127.0.0.1-8080-1) at org.springframework.transaction.compensating.support.CompensatingTransactionUtils.performOperation(CompensatingTransactionUtils.java:69) ~[spring-ldap-core-2.0.2.RELEASE.jar:2.0.2.RELEASE]
16:47:17,099 INFO [stdout] (http-localhost-127.0.0.1-8080-1) at org.springframework.ldap.transaction.compensating.manager.TransactionAwareDirContextInvocationHandler.invoke(TransactionAwareDirContextInvocationHandler.java:85) ~[spring-ldap-core-2.0.2.RELEASE.jar:2.0.2.RELEASE]
16:47:17,099 INFO [stdout] (http-localhost-127.0.0.1-8080-1) at com.sun.proxy.$Proxy69.modifyAttributes(Unknown Source) ~[?:?]
16:47:17,099 INFO [stdout] (http-localhost-127.0.0.1-8080-1) at com.st.liotroevo.web.dao.UserADRepository.update(UserADRepository.java:104) [classes:?]
16:47:17,099 INFO [stdout] (http-localhost-127.0.0.1-8080-1) at com.st.liotroevo.web.service.UserService.updateUser(UserService.java:92) [classes:?]
16:47:17,099 INFO [stdout] (http-localhost-127.0.0.1-8080-1) at com.st.liotroevo.web.service.serviceImpl.IPRegistrationServiceImpl.createUser(IPRegistrationServiceImpl.java:72) [classes:?]
16:47:17,099 INFO [stdout] (http-localhost-127.0.0.1-8080-1) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.7.0_45]
16:47:17,115 INFO [stdout] (http-localhost-127.0.0.1-8080-1) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) ~[?:1.7.0_45]
16:47:17,115 INFO [stdout] (http-localhost-127.0.0.1-8080-1) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.7.0_45]
16:47:17,115 INFO [stdout] (http-localhost-127.0.0.1-8080-1) at java.lang.reflect.Method.invoke(Method.java:606) ~[?:1.7.0_45]
16:47:17,115 INFO [stdout] (http-localhost-127.0.0.1-8080-1) at org.jboss.ws.common.invocation.AbstractInvocationHandlerJSE.invoke(AbstractInvocationHandlerJSE.java:111) [jbossws-common-2.0.2.GA.jar!/:2.0.2.GA]
16:47:17,115 INFO [stdout] (http-localhost-127.0.0.1-8080-1) at org.jboss.wsf.stack.cxf.JBossWSInvoker._invokeInternal(JBossWSInvoker.java:181) [jbossws-cxf-server-4.0.2.GA.jar!/:4.0.2.GA]
16:47:17,115 INFO [stdout] (http-localhost-127.0.0.1-8080-1) at org.jboss.wsf.stack.cxf.JBossWSInvoker.invoke(JBossWSInvoker.java:127) [jbossws-cxf-server-4.0.2.GA.jar!/:4.0.2.GA]
16:47:17,115 INFO [stdout] (http-localhost-127.0.0.1-8080-1) at org.apache.cxf.interceptor.ServiceInvokerInterceptor$1.run(ServiceInvokerInterceptor.java:58) [cxf-rt-core-2.4.6.jar!/:2.4.6]
16:47:17,115 INFO [stdout] (http-localhost-127.0.0.1-8080-1) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) [?:1.7.0_45]
16:47:17,115 INFO [stdout] (http-localhost-127.0.0.1-8080-1) at java.util.concurrent.FutureTask.run(FutureTask.java:262) [?:1.7.0_45]
Below is the code snippet:
ldap.url=ldaps://url:636
ldap.userDn=CN=IP User,OU=AdminAccounts,DC=stp-qa,DC=st,DC=com
ldap.password=dummyPass
ldap.base=OU=ST,OU=People,DC=stp-qa,DC=st,DC=com
ldap.clean=false
@Entry(objectClasses = { "top", "person", "organizationalPerson","user","st-individualpassportuser"})
public final class User {
@Id
private Name dn;
@Attribute(name = "mail")
private String email;
@Attribute(name = "cn")
@DnAttribute(value="cn",index=0)
private String fullName;
@Attribute(name = "givenName")
private String firstName;
@Attribute(name = "sn")
private String lastName;
@Attribute(name = "st-AccValidationStatus")
private String accountStatus;
@Attribute(name = "st-entryStatus")
private String validationStatus;
@Attribute(name = "whenCreated")
private String creationDate;
@Attribute(name = "st-ValidatedOn")
private String validationDate;
@Attribute(name = "st-ValidatedBy")
private String validatedBy;
@Attribute(name = "st-currentLogon")
private String lastLogon;
@Attribute(name = "st-loginRedirectURL")
private String loginRedirectUrl;
@Attribute(name = "st-jvCompany")
private String jvCode;
@Attribute(name = "sAMAccountName")
private String samAccount;
@Attribute(name = "st-userSpecifedCompany")
private String employerName;
@Attribute(name = "postalCode")
private String zipCode;
@Attribute(name="st-xduserid")
private String xdUserId;
@Attribute(name="st-Logincount")
private String loginCount;
@Attribute(name="unicodePwd")
private byte[] unicodePassword;
@Attribute(name="userAccountControl")
private String userAccountControl;
@Attribute(name="st-AccLastValidated")
private String userAccLastValidated;
@Attribute(name="st-secretQuestion")
private String userSecretQuestion;
@Attribute(name="st-secretAnswer")
private String userAnswerToSecretQuestion;
}
Java Class for Password computation:
/**
* Add unicode Password to userObject.
* AD does not allow setting the password/userAccountControl during creation of the user by design, so we need to update the user after creation in AD with the password and userAccountControl.
* @param password
*/
private void addPasswordToUserProfile(String password) {
String newQuotedPassword = "\"" + password + "\"";
try {
byte[] newUnicodePassword = newQuotedPassword.getBytes("UTF-16LE");
int UF_NORMAL_ACCOUNT = 0x0200;
int UF_PASSWORD_EXPIRED = 0x800000;
adUserBean.setUserAccountControl(Integer.toString(UF_NORMAL_ACCOUNT + UF_PASSWORD_EXPIRED));
adUserBean.setUnicodePassword(newUnicodePassword);
} catch (UnsupportedEncodingException e) {
logger.catching(e);
}
}
Repository.Java
@Repository
public class UserADRepository {
@Autowired
private LdapTemplate ldapTemplate;
public User create(User user) {
ldapTemplate.create(user);
return user;
}
public User findByFullName(String fullName) {
return ldapTemplate.findOne(
LdapQueryBuilder.query().where("cn").is(fullName), User.class);
}
/**
* Find user in LDAP based on User SamAccountName
* @param samAccount
* @return
*/
public User findBySamAccountName(String samAccount) {
User usr = null;
try {
usr = ldapTemplate.findOne(
LdapQueryBuilder.query().where("sAMAccountName")
.is(samAccount), User.class);
} catch (EmptyResultDataAccessException emptyException) {
return usr;
}
return usr;
}
/**
* Find user in LDAP based on User DN (distinguisedName)
* @param dn
* @return
*/
public User findByDn(Name dn) {
User usr = null;
try {
usr = ldapTemplate.findByDn(dn, User.class);
} catch (NameNotFoundException e) {
return usr;
}
return usr;
}
/**
* Update user in AD
* @param User
*/
public void update(User User) {
ldapTemplate.update(User);
}
public void delete(User User) {
ldapTemplate.delete(User);
}
Thanks in advance for kind help or direction to resolve this issue.
A: For updating the AD password, use a separate method; it seems that LdapTemplate.update() does not define the correct ModificationItem for the password.
public void setPassword(Person person){
String relativeDn = getRelativeDistinguishedName(person.getDistinguishedName());
LdapNameBuilder ldapNameBuilder = LdapNameBuilder.newInstance(relativeDn);
Name dn = ldapNameBuilder.build();
DirContextOperations context = ldapTemplate.lookupContext(dn);
Attribute attr = new BasicAttribute("unicodepwd", encodePassword(person.getPassword()));
ModificationItem item = new ModificationItem(DirContext.REPLACE_ATTRIBUTE, attr);
ldapTemplate.modifyAttributes(dn, new ModificationItem[] {item});
}
A: In addition to aalmero's answer, it seems that the Spring LDAP repository can't save unicodePwd.
But you can use LdapTemplate for it:
UserAd userAd = new UserAd();
// set your stuff
userAdRepository.save(userAd);
ModificationItem[] mods = new ModificationItem[1];
mods[0] = new ModificationItem(DirContext.REPLACE_ATTRIBUTE, new BasicAttribute("unicodePwd", encodePassword("password-respecting-policies")));
ldapTemplate.modifyAttributes(userAd.getDn(), mods);
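Both answers call an encodePassword helper that is never shown. For AD's unicodePwd attribute, the value must be the password wrapped in double quotes and encoded as UTF-16LE (and it must be sent over a secure LDAPS connection), so a plausible sketch of that helper (name assumed) is:

```java
import java.nio.charset.StandardCharsets;

public class AdPasswordEncoder {
    // AD expects unicodePwd as the double-quoted password in UTF-16LE bytes.
    public static byte[] encodePassword(String password) {
        String quoted = "\"" + password + "\"";
        return quoted.getBytes(StandardCharsets.UTF_16LE);
    }
}
```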
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/28964782",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Getting the following error message TypeError: not all arguments converted during string formatting I am trying to read a circle with its coordinates and radius from the user and return its details, but on executing this code I am getting an error.
import math
class Circle():
    def __init__(self, centre, radius):
        self.centre = centre
        self.radius = radius
        self.area1 = math.pi*radius*radius

    def __str__(self):
        return ("A Circle which has centre at " + "(" + "%0.6f" % (self.centre)+")" + "and its radius " + "%0.6f" % (self.radius))

if __name__ == "__main__":
    print("CHECKING THE FUNCTIONALITY OF CIRCLE CLASS")
    x = float(input("Enter x coordinate of first circle's centre: "))
    y = float(input("Enter y coordinate of the first circle's centre: "))
    r = float(input("Enter first circle's radius: "))
    pointx1 = x
    pointy1 = y
    radius1 = r
    centre_1 = (pointx1,pointy1)
    first_circle = Circle(centre_1, r)
    print(first_circle)
Error message says:
File "C:/Users/Deepak/Desktop/part2.py", line 9, in __str__
return ("A Circle which has centre at " + "(" + "%0.6f" % (self.centre)+")"
+ "and its radius " + "%0.6f" % (self.radius))
TypeError: not all arguments converted during string formatting
I can't understand what went wrong here. I need help urgently. Any assistance would be appreciated.
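For what it's worth, the exception comes from "%0.6f" % (self.centre): self.centre is an (x, y) tuple, so a single %0.6f cannot format it. One possible fix is to unpack the tuple and give each coordinate its own format specifier:

```python
import math

class Circle:
    def __init__(self, centre, radius):
        self.centre = centre  # an (x, y) tuple
        self.radius = radius
        self.area1 = math.pi * radius * radius

    def __str__(self):
        # unpack the tuple so each coordinate gets its own %0.6f
        return ("A Circle which has centre at (%0.6f, %0.6f) and its radius %0.6f"
                % (self.centre[0], self.centre[1], self.radius))

print(Circle((1.0, 2.0), 3.0))
# A Circle which has centre at (1.000000, 2.000000) and its radius 3.000000
```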
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/47074700",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: XNA model parts are overlying others I am trying to import into XNA an .fbx model exported with Blender.
Here is my drawing code
public void Draw()
{
Matrix[] modelTransforms = new Matrix[Model.Bones.Count];
Model.CopyAbsoluteBoneTransformsTo(modelTransforms);
foreach (ModelMesh mesh in Model.Meshes)
{
foreach (BasicEffect be in mesh.Effects)
{
be.EnableDefaultLighting();
be.World = GameCamera.World * Translation * modelTransforms[mesh.ParentBone.Index];
be.View = GameCamera.View;
be.Projection = GameCamera.Projection;
}
mesh.Draw();
}
}
The problem is that when I start the game some model parts are overlying others instead of being behind. I've tried to download other models from internet but they have the same problem.
A: This line:
be.World = GameCamera.World * Translation * modelTransforms[mesh.ParentBone.Index];
is usually arranged the other way around, and the order in which you multiply matrices changes the result. Try this:
be.World = modelTransforms[mesh.ParentBone.Index] * GameCamera.World * Translation;
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/12341347",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Access unmanaged (external) Azure Databricks Hive table via JDBC I am using Azure Databricks with Databricks Runtime 5.2 and Spark 2.4.0. I have setup external Hive tables in two different ways:
- a Databricks Delta table where the data is stored in Azure Data Lake Storage (ADLS) Gen 2, the table was created using a location setting, which points to a mounted directory in ADLS Gen 2.
- a regular DataFrame, saved as a table to ADLS Gen 2, not using the mount this time but instead the OAuth2 credentials I've set on the cluster level using spark.sparkContext.hadoopConfiguration
Both the mount point and the direct access (hadoopConfiguration) have been configured using OAuth2 credentials and an Azure AD Service Principal, which has the necessary access rights to Data Lake.
Both tables show up correctly in Databricks UI and can be queried.
Both tables are also visible in a BI tool (Looker), where I have successfully configured a JDBC connection to my Databricks instance. After this the differences begin:
1) table configured using the mount point does not allow me to run a DESCRIBE operation in the BI tool, not to mention a query. Everything fails with error "com.databricks.backend.daemon.data.common.InvalidMountException: Error while using path /mnt/xxx/yyy/zzz for resolving path '/yyy/zzz' within mount at '/mnt/xxx'."
2) table configured without the mount point allows me to run a DESCRIBE operation, but a query fails with error "java.util.concurrent.ExecutionException: java.io.IOException: There is no primary group for UGI (Basic token) (auth:SIMPLE)".
JDBC connection and querying from the BI tool to a managed table in Databricks works fine.
As far as I know, there isn't anything I could configure differently when creating the external tables, configuring the mounting point or the OAuth2 credentials. It seems to me that when using JDBC, the mount is not visible at all, so the request to the underlying datasource (ADLS Gen 2) can not succeed. On the other hand, the second scenario (number 2 above) is a bit more puzzling and in my mind seems like something somewhere under the hood, deep, and I have no idea about what to do with that.
One peculiar thing is also my username which shows up in scenario 2. I don't know where that comes from, as it is not involved when setting up the ADLS Gen 2 access using the Service Principal.
A: I had a similar issue and I solved it by adding this parameter in my Databricks cluster :
spark.hadoop.hive.server2.enable.doAs false
See : https://docs.databricks.com/user-guide/faq/access-blobstore-odbc.html
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/55042885",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: Unable to load properties with Java Spring I am a complete newbie to Spring. I am trying to figure out how to access properties from props files injected into my app through Spring.
I wrote a simple test provided below. I run it by passing location of the properties file through environment variables provided at JRE options
$ mvn test -DSPRING_CONFIG_NAME=my_spring \
-DSPRING_CONFIG_LOCATION=file:///Users/desilets/Documents/conf
Here is the content of the my_spring.properties file
$ cat /Users/desilets/Documents/conf/my_spring.properties
my.spring.greeting=hello world
When I run the test, it fails. Yet the output indicates that the environment variables were well received:
SPRING_CONFIG_NAME=my_spring
SPRING_CONFIG_LOCATION=file:///Users/desilets/Documents/conf
greeting=null
What am I doing wrong?
Thx.
---- Code for the test ---
import org.junit.Assert;
import org.junit.Test;
import org.springframework.beans.factory.annotation.Value;
public class AccessPropertiesTest {
@Value("${my.spring.greeting}")
String greeting;
@Test
public void test__LoadProperties() throws Exception {
System.out.println("SPRING_CONFIG_NAME="+
System.getProperty("SPRING_CONFIG_NAME"));
System.out.println("SPRING_CONFIG_LOCATION="+
System.getProperty("SPRING_CONFIG_LOCATION"));
System.out.println("greeting="+greeting);
Assert.assertEquals(
"The property my.spring.greeting was not read correctly",
greeting, "hello world");
}
}
A: If it's a Spring project, there would be two locations for properties:
src/main/resources
src/test/resources
If you run tests it will pick from src/test/resources.
A: @RunWith(SpringRunner.class)
@DataJpaTest
public class AccessPropertiesTest {
@Value("${my.spring.greeting}")
String greeting;
.....
}
refer https://www.baeldung.com/spring-boot-testing
add: my.spring.greeting=anyValue into the application.properties or application.yml file
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/62936107",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Getting "java.lang.ArrayIndexOutOfBoundsException" exception while trying to find duplicate number from an array Getting java.lang.ArrayIndexOutOfBoundsException while trying to find duplicate number in an array in Java.
Here is the code:
public class FindDuplicateNumberInArray {
public static void main(String[] args) {
int arr[] = { 11, 24, 65, 1, 111, 25, 58, 95, 24, 37 };
Arrays.sort(arr);
String sortedArray = Arrays.toString(arr);
System.out.println(sortedArray);
for (int i = 1; i < arr.length; i++) {
if (arr[i] == arr[i + 1]) {
System.out.println("Duplicate element from the given array is = " + arr[i]);
}
}
}
}
A: public class FindDuplicateNumberInArray {
public static void main(String[] args) {
int arr[] = { 11, 24, 65, 1, 111, 25, 58, 95, 24, 37 };
Arrays.sort(arr);
String sortedArray = Arrays.toString(arr);
System.out.println(sortedArray);
// for (int i = 1; i < arr.length; i++) {
// if (arr[i] == arr[i - 1]) {
for (int i = 0; i < arr.length-1; i++) {
if (arr[i] == arr[i + 1]) {
System.out.println("Duplicate element from the given array is = " + arr[i]);
}
}
}
}
To check for a duplicate number, you are running the for loop up to the last element (say the nth position), but your if condition compares the last element with the (n+1)th element, which doesn't exist. You also need to check the 1st element, so start at i = 0.
Or you can just change the if (arr[i] == arr[i + 1]) condition to if (arr[i] == arr[i - 1])
A: You have a very basic IndexOutOfBounds-Exception there. When accessing arrays, you have to provide an index. If that index is greater than array.length - 1, which is the last accessible index, you get an out of bounds exception. The same is true for lists.
Because you compare the current (i) value to the next one (i + 1), you run out of bounds, because you count to i < arr.length. This means when i == arr.length - 1 you still add 1 to i, which is equal to arr.length, which is more than arr.length - 1.
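As a side note, the neighbour-comparison (and its off-by-one risk) can be avoided altogether with a HashSet. This is a sketch of that alternative, not part of either answer above:

```java
import java.util.HashSet;
import java.util.Set;

public class FindDuplicates {

    // Set.add() returns false when the element is already present,
    // which is exactly the "seen before" signal we need.
    static Set<Integer> duplicates(int[] arr) {
        Set<Integer> seen = new HashSet<>();
        Set<Integer> dups = new HashSet<>();
        for (int n : arr) {
            if (!seen.add(n)) {
                dups.add(n);
            }
        }
        return dups;
    }

    public static void main(String[] args) {
        int[] arr = { 11, 24, 65, 1, 111, 25, 58, 95, 24, 37 };
        System.out.println("Duplicate elements: " + duplicates(arr));
    }
}
```

No sorting is needed, so the original array order is left untouched.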
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/57555328",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: PHP If statement responding with all code rather than printing answer I'm fairly new to programming. I'm trying to write what should be a fairly basic if, elseif piece of code for my database, but when it runs it just prints the source code from the first if statement all the way to the end. I've been going over it for days and can't work out where I'm going wrong.
<!DOCTYPE html>
<html>
<body>
<?php
$row = "A1 Header";
$compulsary = FALSE;
$mutable = TRUE;
$included = FALSE;
if ($compulsary == FALSE and $mutable == TRUE) {
echo "<textarea style=background-color:yellow; name=\"message\">Please Enter</textarea><br>";
}
elseif ($compulsary == FALSE and $mutable == FALSE){
echo "'"$row"'";
}
elseif ($compulsary == True and $mutable == True) {
echo "<textarea style=background-color:yellow; name=\"message\">Please Enter</textarea><br>";
}
else {
echo "'"$row"'";
}
?>
</body>
</html>
A: try this
<!DOCTYPE html>
<html>
<body>
<?php
$row = "A1 Header";
$compulsary = FALSE;
$mutable = TRUE;
$included = FALSE;
if ($compulsary == FALSE and $mutable == TRUE) {
echo "<textarea style=background-color:yellow; name=\"message\">Please Enter</textarea><br>";
} elseif ($compulsary == FALSE and $mutable == FALSE) {
echo "'".$row."'";
} elseif ($compulsary == True and $mutable == True) {
echo "<textarea style=background-color:yellow; name=\"message\">Please Enter</textarea><br>";
} else {
echo "'".$row."'";
}
?>
</body>
</html>
A: You can do like this:
<!DOCTYPE html>
<html>
<body>
<?php
$row = "A1 Header";
$compulsary = FALSE;
$mutable = TRUE;
$included = FALSE;
if ($compulsary == FALSE and $mutable == TRUE) {
echo "<textarea style='background-color:yellow;' name='\"message\"'>Please Enter</textarea><br>";
}
elseif ($compulsary == FALSE and $mutable == FALSE){
echo $row;
}
elseif ($compulsary == TRUE and $mutable == TRUE) {
echo "<textarea style='background-color:yellow;' name='\"message\"'>Please Enter</textarea><br>";
}
else {
echo $row;
}
?>
</body>
</html>
A: I think you have a syntax error. Try this:
echo "'\"$row\"'";
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/36403559",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-1"
}
|
Q: Get a sum of complex Json structure with linq I have a complex json structure (at least for me) that looks like
{
"Assets": [{
"Name": "asset1",
"Code": "SS-15",
"Items": [{
"Name": "Item1",
"KGs": 255,
"Cartons": 1222,
"Containers": 3
}, {
"Name": "Item2",
"KGs": 150,
"Cartons": 2322,
"Containers": 5
}
]
}, {
"Name": "asset2",
"Code": "SA-23",
"Items": [{
"Name": "Item1",
"KGs": 88,
"Cartons": 40,
"Containers": 1
}, {
"Name": "Item2",
"KGs": 960,
"Cartons": 710,
"Containers": 31
}
]
}
]}
I need to summarize globally how many KGs, Cartons and Containers are for each type of item, something like this:
[{
"unit": "KGs",
"Item1": 343,
"Item2": 1110
}, {
"unit": "Cartons",
"Item1": 1262,
"Item2": 3032
}, {
"unit": "Containers",
"Item1": 4,
"Item2": 36
}]
I have been using LINQ and so far I have something like:
object.SelectMany(x => x.Items.GroupBy(k => k.Name, m => m.KGs)).GroupBy(g => g.Key);
It kind of looks like what I am looking for, but it is not giving me the info I need.
Note: I am deserializing the json to a class in my project.
A: I don't know if you're going to laugh at this or what, because it's so far away from what you were expecting, but the main problem I see in this code is that you're trying to transpose items in a collection (Item1, Item2) into properties of an object ("Item1": 4, "Item2": 36). That forces me to think about dynamic creation of objects using the ExpandoObject class:
(You have to install and import MoreLinq in order to use DistinctBy)
var unitsNames = o["Assets"].SelectMany(a => a["Items"]).DistinctBy(i=>i["Name"]).Select(i=>i["Name"].ToString());
var allUnits=o["Assets"].SelectMany(a=>a["Items"]);
var kgs=GetExpandoObject(unitsNames, allUnits,"KGs");
var cartons = GetExpandoObject(unitsNames, allUnits, "Cartons");
var containers = GetExpandoObject(unitsNames, allUnits, "Containers");
List<ExpandoObject> res = new List<ExpandoObject>()
{
kgs as ExpandoObject,cartons as ExpandoObject,containers as ExpandoObject
};
string jsonString = JsonConvert.SerializeObject(res);
...
private static IDictionary<string, object> GetExpandoObject(IEnumerable<string> unitsNames, IEnumerable<JToken> allUnits, string concept)
{
var eo = new ExpandoObject() as IDictionary<string, Object>;
eo.Add("unit", concept);
foreach (var u in unitsNames)
{
var sum = allUnits.Where(un => un["Name"].ToString() == u).Sum(_ => (int)_[concept]);
eo.Add(u, sum);
}
return eo;
}
This is the result:
[
{"unit":"KGs","Item1":343,"Item2":1110},
{"unit":"Cartons","Item1":1262,"Item2":3032},
{"unit":"Containers","Item1":4,"Item2":36}
]
I'm not saying this is not possible to do with a single Linq query, but I didn't see it that clear, perhaps someone smarter and/or with more available time might achieve it
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/40911139",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Can I convert a tuple to a parameter list 'inline'? traceback.format_exception() takes three arguments.
sys.exc_info() returns a tuple of three elements that are the required arguments for traceback.format_exception()
Is there any way of avoiding the two line "conversion":
a,b,c = sys.exc_info()
error_info = traceback.format_exception(a,b,c)
Clearly
error_info = traceback.format_exception(sys.exc_info())
doesn't work, because format_exception() takes three arguments, not one tuple (facepalm!)
Is there some tidy way of doing this in one statement?
A: You can use the * operator to unpack arguments from a list or tuple:
error_info = traceback.format_exception(*sys.exc_info())
Here's the example from the docs:
>>> range(3, 6) # normal call with separate arguments
[3, 4, 5]
>>> args = [3, 6]
>>> range(*args) # call with arguments unpacked from a list
[3, 4, 5]
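To see the unpacking in action with the original use case, here is a self-contained sketch that forces an exception so sys.exc_info() has something to report:

```python
import sys
import traceback

try:
    1 / 0  # deliberately raise so sys.exc_info() is populated
except ZeroDivisionError:
    # * unpacks the (type, value, traceback) tuple into the three
    # positional arguments traceback.format_exception() expects
    error_info = traceback.format_exception(*sys.exc_info())

print("".join(error_info))
```

In Python 3.10+ you can also pass the exception object directly, e.g. traceback.format_exception(exc).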
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/26297360",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Marginal effects for de-meaned polynomials in mixed models In the mixed model (or REWB) framework it is common to model within changes by subtracting the cluster mean (demeaning) from a time varying x-variable, see eg. (Bell, Fairbrother & Jones, 2018). This estimator is basically the same as a fixed effects (FE) estimator (shown below using the sleepstudy data).
The issue arises when trying to model polynomials using the same principle. The equality between the estimators break when we enter our demeaned variable as a polynomial. We can restore this equality by first squaring the variable and then demeaning (see. re_poly_fixed).
dt <- lme4::sleepstudy
dt$days_squared <- dt$Days * dt$Days
dt <- cbind(dt, datawizard::demean(dt, select = c("Days", "days_squared"), group = "Subject"))
re <- lme4::lmer(Reaction ~ Days_within + (1 | Subject), data = dt, REML = FALSE)
fe <- fixest::feols(Reaction ~ Days | Subject, data = dt)
re_poly <- lme4::lmer(Reaction ~ poly(Days_within, 2, raw = TRUE) + (1 | Subject),
data = dt, REML = FALSE)
fe_poly <- fixest::feols(Reaction ~ poly(Days, 2, raw = TRUE) | Subject, data = dt)
re_poly_fixed <- lme4::lmer(Reaction ~ Days_within + days_squared_within + (1 | Subject),
data = dt, REML = FALSE)
models <-
list("re" = re, "fe" = fe, "re_poly" = re_poly, "fe_poly" = fe_poly, "re_poly_fixed" = re_poly_fixed)
modelsummary::modelsummary(models)
The main issue with this strategy is that for postestimation, especially with packages that calculate marginal effects (e.g. marginaleffects in R or margins in STATA), the variable needs to be entered as a polynomial term for the calculations to consider both x and x^2 (that is, using poly() or I() in R, or factor notation c.x##c.x in STATA). The difference can be seen in the two calls below, where the FE call returns one effect for "Days" and the manual call returns two separate terms.
(me_fe <- summary(marginaleffects::marginaleffects(fe_poly)))
(me_re <- summary(marginaleffects::marginaleffects(re_poly_fixed)))
I may be missing something obvious here, but is it possible to retain the equality between the estimators in FE and the Mixed model setups with polynomials, while still being able to use common packages for marginal effects?
A: The problem is that when a transformed variable is hardcoded, the marginaleffects package does not know that it should manipulate both the transformed and the original at the same time to compute the slope. One solution is to de-mean inside the formula with I(). You should be aware that this may make the model fitting less efficient.
Here’s an example where I pre-compute the within-group means using data.table, but you could achieve the same result with dplyr::group_by():
library(lme4)
library(data.table)
library(modelsummary)
library(marginaleffects)
dt <- data.table(lme4::sleepstudy)
dt[, `:=`(Days_mean = mean(Days),
Days_within = Days - mean(Days)),
by = "Subject"]
re_poly <- lmer(
Reaction ~ poly(Days_within, 2, raw = TRUE) + (1 | Subject),
data = dt, REML = FALSE)
re_poly_2 <- lmer(
Reaction ~ poly(I(Days - Days_mean), 2, raw = TRUE) + (1 | Subject),
data = dt, REML = FALSE)
models <- list(re_poly, re_poly_2)
modelsummary(models, output = "markdown")
|                                           | Model 1 | Model 2 |
|-------------------------------------------|---------|---------|
| (Intercept)                               | 295.727 | 295.727 |
|                                           | (9.173) | (9.173) |
| poly(Days_within, 2, raw = TRUE)1         | 10.467  |         |
|                                           | (0.799) |         |
| poly(Days_within, 2, raw = TRUE)2         | 0.337   |         |
|                                           | (0.316) |         |
| poly(I(Days - Days_mean), 2, raw = TRUE)1 |         | 10.467  |
|                                           |         | (0.799) |
| poly(I(Days - Days_mean), 2, raw = TRUE)2 |         | 0.337   |
|                                           |         | (0.316) |
| SD (Intercept Subject)                    | 36.021  | 36.021  |
| SD (Observations)                         | 30.787  | 30.787  |
| Num.Obs.                                  | 180     | 180     |
| R2 Marg.                                  | 0.290   | 0.290   |
| R2 Cond.                                  | 0.700   | 0.700   |
| AIC                                       | 1795.8  | 1795.8  |
| BIC                                       | 1811.8  | 1811.8  |
| ICC                                       | 0.6     | 0.6     |
| RMSE                                      | 29.32   | 29.32   |
The estimated average marginal effects are – as expected – different:
marginaleffects(re_poly) |> summary()
#> Term Effect Std. Error z value Pr(>|z|) 2.5 % 97.5 %
#> 1 Days_within 10.47 0.7989 13.1 < 2.22e-16 8.902 12.03
#>
#> Model type: lmerMod
#> Prediction type: response
marginaleffects(re_poly_2) |> summary()
#> Term Effect Std. Error z value Pr(>|z|) 2.5 % 97.5 %
#> 1 Days 10.47 0.7989 13.1 < 2.22e-16 8.902 12.03
#>
#> Model type: lmerMod
#> Prediction type: response
A: The following answer is not exactly what I asked for in the question. But at least it is a decent workaround for anyone having similar problems.
library(lme4)
library(data.table)
library(fixest)
library(marginaleffects)
dt <- data.table(lme4::sleepstudy)
dt[, `:=`(Days_mean = mean(Days),
Days_within = Days - mean(Days),
Days2 = Days^2,
Days2_within = Days^2 - mean(Days^2)),
by = "Subject"]
fe_poly <- fixest::feols(
Reaction ~ poly(Days, 2, raw = TRUE) | Subject, data = dt)
re_poly_fixed <- lme4::lmer(
Reaction ~ Days_within + Days2_within + (1 | Subject), data = dt, REML = FALSE)
modelsummary(list(fe_poly, re_poly_fixed), output = "markdown")
We start with the two models previously described. We can manually calculate the AME or marginal effects at other values and get confidence intervals using multcomp::glht(). The approach is relatively similar to that of lincom in STATA. I have written a wrapper that returns the values in a data.table:
lincom <- function(model, linhyp) {
t <- summary(multcomp::glht(model, linfct = c(linhyp)))
ci <- confint(t)
dt <- data.table::data.table(
"estimate" = t[["test"]]$coefficients,
"se" = t[["test"]]$sigma,
"ll" = ci[["confint"]][2],
"ul" = ci[["confint"]][3],
"t" = t[["test"]]$tstat,
"p" = t[["test"]]$pvalues,
"id" = rownames(t[["linfct"]])[1])
return(dt)
}
This can likely be improved or adapted to other similar needs. We can calculate the AME by taking the partial derivative of b1*Days + b2*Days^2, which gives b1 + 2 * b2 * Days, evaluated at mean(Days); in coefficient names: Days + 2 * Days2 * mean(Days).
marginaleffects(fe_poly) |> summary()
Term Effect Std. Error z value Pr(>|z|) 2.5 % 97.5 %
1 Days 10.47 1.554 6.734 1.6532e-11 7.421 13.51
Model type: fixest
Prediction type: response
By adding this formula to the lincom function, we get similar results:
names(fe_poly$coefficients) <- c("Days", "Days2")
mean(dt$Days) # Mean = 4.5
lincom(fe_poly, "Days + 2 * Days2 * 4.5 = 0")
estimate se ll ul t p id
1: 10.46729 1.554498 7.397306 13.53727 6.733549 2.817051e-10 Days + 2 * Days2 * 4.5
lincom(re_poly_fixed, "Days_within + 2 * Days2_within * 4.5 = 0")
estimate se ll ul t p id
1: 10.46729 0.798932 8.901408 12.03316 13.1016 0 Days_within + 2 * Days2_within * 4.5
It is possible to check other ranges of values and to add other variables from the model using the formula. This can be done using lapply or a loop and the output can then be combined using a simple rbind. This should make it relatively easy to present/plot results.
EDIT
Like Vincent pointed out below there is also marginaleffects::deltamethod. This looks to be a better, more robust option that provides similar results (with the same syntax):
mfx1 <- marginaleffects::deltamethod(
fe_poly, "Days + 2 * Days2 * 4.5 = 0")
mfx2 <- marginaleffects::deltamethod(
re_poly_fixed, "Days_within + 2 * Days2_within * 4.5 = 0")
rbind(mfx1, mfx2)
term estimate std.error statistic p.value conf.low conf.high
1 Days + 2 * Days2 * 4.5 = 0 10.46729 1.554498 6.733549 1.655739e-11 7.420527 13.51405
2 Days_within + 2 * Days2_within * 4.5 = 0 10.46729 0.798932 13.101597 3.224003e-39 8.901408 12.03316
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/73303108",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Is it possible to modify or mock the Inversify container used by a Typescript class in a Jasmine unit test? I have a Typescript class that uses InversifyJS and Inversify Inject Decorators to inject a service into a private property. Functionally this is fine but I'm having issues figuring out how to unit test it. I've created a simplified version of my problem below.
In the Jasmine unit test, how can I swap out the injected RealDataService with a FakeDataService? If the property wasn't private I could create the component and assign a fake service but I am wondering if this is possible by using the IOC Container.
I initially followed this example in the InversifyJS recipes page but quickly realised that the container they created isn't used in any class under test. Also, most of the code examples that I can see in the InversifyJS docs don't cover how to unit test it.
Here is a simplified version of the problem:
myComponent.ts
import { lazyInject, Types } from "./ioc";
import { IDataService } from "./dataService";
export default class MyComponent {
@lazyInject(Types.IDataService)
private myDataService!: IDataService;
getSomething(): string {
return this.myDataService.get();
}
}
dataService.ts
import { injectable } from "inversify";
export interface IDataService {
get(): string;
}
@injectable()
export class RealDataService implements IDataService {
get(): string {
return "I am real!";
}
}
IOC Configuration
import "reflect-metadata";
import { Container, ContainerModule, interfaces, BindingScopeEnum } from "inversify";
import getDecorators from "inversify-inject-decorators";
import { IDataService, RealDataService } from "./dataService";
const Types = {
IDataService: Symbol.for("IDataService")
};
const iocContainerModule = new ContainerModule((bind: interfaces.Bind) => {
bind<IDataService>(Types.IDataService).to(RealDataService);
});
const iocContainer = new Container();
iocContainer.load(iocContainerModule);
const { lazyInject } = getDecorators(iocContainer);
export { lazyInject, Types };
Unit Tests
import { Container } from "inversify";
import { Types } from "./ioc";
import MyComponent from "./myComponent";
import { IDataService } from "./dataService";
class FakeDataService implements IDataService {
get(): string {
return "I am fake!";
}
}
describe("My Component", () => {
let iocContainer!: Container;
let myComponent!: MyComponent;
beforeEach(() => {
iocContainer = new Container();
iocContainer.bind(Types.IDataService).to(FakeDataService);
// How do I make myComponent use this iocContainer?
// Is it even possible?
myComponent = new MyComponent();
});
it("should use the mocked service", () => {
const val = myComponent.getSomething();
expect(val).toBe("I am fake!");
});
});
A: I was able to solve this by importing a container from a different file. Using this method, you would write a different container for every combination of dependencies you want to inject into a test. For brevity, assume the code example with ninja warriors given by the Inversify docs.
// src/inversify.prod-config.ts
import "reflect-metadata";
import { Container } from "inversify";
import { TYPES } from "./types";
import { Warrior, Weapon, ThrowableWeapon } from "./interfaces";
import { Ninja, Katana, Shuriken } from "./entities";
const myContainer = new Container();
myContainer.bind<Warrior>(TYPES.Warrior).to(Ninja);
myContainer.bind<Weapon>(TYPES.Weapon).to(Katana);
myContainer.bind<ThrowableWeapon>(TYPES.ThrowableWeapon).to(Shuriken);
export { myContainer };
// test/fixtures/inversify.unit-config.ts
import "reflect-metadata";
import {Container, inject, injectable} from "inversify";
import { TYPES } from "../../src/types";
import { Warrior, Weapon, ThrowableWeapon } from "../../src/interfaces";
// instead of importing the injectable classes from src,
// import mocked injectables from a set of text fixtures.
// For brevity, I defined mocks inline here, but you would
// likely want these in their own files.
@injectable()
class TestKatana implements Weapon {
public hit() {
return "TEST cut!";
}
}
@injectable()
class TestShuriken implements ThrowableWeapon {
public throw() {
return "TEST hit!";
}
}
@injectable()
class TestNinja implements Warrior {
private _katana: Weapon;
private _shuriken: ThrowableWeapon;
public constructor(
@inject(TYPES.Weapon) katana: Weapon,
@inject(TYPES.ThrowableWeapon) shuriken: ThrowableWeapon
) {
this._katana = katana;
this._shuriken = shuriken;
}
public fight() { return this._katana.hit(); }
public sneak() { return this._shuriken.throw(); }
}
const myContainer = new Container();
myContainer.bind<Warrior>(TYPES.Warrior).to(TestNinja);
myContainer.bind<Weapon>(TYPES.Weapon).to(TestKatana);
myContainer.bind<ThrowableWeapon>(TYPES.ThrowableWeapon).to(TestShuriken);
export { myContainer };
// test/unit/example.test.ts
// Disclaimer: this is a Jest test, but a port to jasmine should look similar.
import {myContainer} from "../fixtures/inversify.unit-config";
import {Warrior} from "../../../src/interfaces";
import {TYPES} from "../../../src/types";
describe('test', () => {
let ninja;
beforeEach(() => {
ninja = myContainer.get<Warrior>(TYPES.Warrior);
});
test('should pass', () => {
expect(ninja.fight()).toEqual("TEST cut!");
expect(ninja.sneak()).toEqual("TEST hit!");
});
});
A: Try exporting the container from your IOC configuration, ioc.ts, like this
export { iocContainer, lazyInject, Types };
Then you can rebind the IDataService Symbol to your mocked FakeDataService in the unit test
import { Types, iocContainer } from "../tmp/ioc";
import MyComponent from "../tmp/myComponent";
import { IDataService } from "../tmp/dataService";
import { injectable } from "inversify";
@injectable() // Added
class FakeDataService implements IDataService {
get(): string {
return "I am fake!";
}
}
describe("My Component", () => {
let myComponent!: MyComponent;
beforeAll(() => {
// Rebind the service
iocContainer.rebind<IDataService>(Types.IDataService).to(FakeDataService);
// Alternatively you could do it like this with the same end result:
iocContainer.unbind(Types.IDataService);
iocContainer.bind<IDataService>(Types.IDataService).to(FakeDataService);
myComponent = new MyComponent();
});
it("should use the mocked service", () => {
const val = myComponent.getSomething();
expect(val).toBe("I am fake!");
});
});
I tried it myself and it works fine. I found this via the inversify.js container API docs
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/54830746",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
}
|
Q: Input Value is returning as undefined I am trying to retrieve the value of the input box and print it to the div. It keeps coming back as undefined and I don't know why. Here is my code:
<body>
<input type="text" id="name">
<button type="button" id="btn">Click me</button>
<div class="square"></div>
<script>
const btn = document.querySelector("#btn");
let divEl = document.querySelector(".square");
btn.addEventListener('click', () => {
let nameEl = document.querySelector("#name").val;
console.log(nameEl);
});
</script>
</body>
A: Change your
let nameEl = document.querySelector("#name").val to
let nameEl = document.querySelector("#name").value
Everything then should work fine.
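To see why undefined comes back: .val() is a jQuery method, while a plain DOM element only has a .value property, so .val is simply a property that does not exist on the element. Sketched here with a plain object standing in for the element (since there is no DOM outside the browser):

```javascript
// fakeInput stands in for document.querySelector("#name");
// a raw DOM input exposes .value, not .val (that's jQuery's method)
const fakeInput = { value: "hello" };

console.log(fakeInput.val);   // undefined, the same symptom as in the question
console.log(fakeInput.value); // "hello"
```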
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/62700885",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: PHP loop to echo all Joomla article id's With my very basic PHP knowledge I'm trying to make a module for Joomla V1.5. I am not quite into all the Joomla classes and methods but perhaps you can help me out.
What I'm trying to do is create a PHP loop which echoes all the article IDs (and some HTML) from a certain category.
Normally I would do this by calling on the content table from the Joomla db but to make the code a bit more tidy I want to use the Joomla classes for this.
Can anyone point me the right direction which classes and methods to use for this?
A: There are no classes for handling the selection of the articles.
So it comes down to using a query and looping through the result set:
$catId = 59; // the category ID
$query = "SELECT * FROM #__content WHERE catid ='" . $catId . "'"; // prepare query
$db = &JFactory::getDBO(); // get database object
$db->setQuery($query); // apply query
$articles = $db->loadObjectList(); // execute query, return result list
foreach($articles as $article){ // loop through articles
echo 'ID:' . $article->id . ' Title: ' . $article->title;
}
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/7298967",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: How to fix a misconfiguration with Phusion Passenger and a VirtualHost directive? I have done this, can anyone tell me why my Rails application isn't loading?
hack ~ # cd /www ; rails mysite.com ; cd /etc/apache2/sites-available
hack sites-available # cat default
<VirtualHost *:80>
ServerName mysite.com
ServerAlias dev.mysite.com
DocumentRoot /www/htdocs/mysitecom
ErrorLog "|/usr/sbin/rotatelogs /www/logs/mysite.com/error_combined_log 7862400"
CustomLog "|/usr/sbin/rotatelogs /www/logs/mysite.com/access_combined_log 7862400" combined
ServerSignature email
RailsBaseURI /
<Directory /www/htdocs/mysite.com>
Allow from all
Options -MultiViews
</Directory>
</VirtualHost>
hack sites-available #
A: *
*You've initialized your app at /www/mysite.com but pointed your DocumentRoot at a different directory, /www/htdocs/mysitecom (and I'm assuming you meant rails new mysite.com).
*DocumentRoot should point to your app's public dir.
Change DocumentRoot to /www/mysite.com/public or wherever your app's public folder actually lives.
Make sure passenger is enabled (and quit using root):
hack $ sudo a2enmod passenger
hack $ sudo /etc/init.d/apache2 restart
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/10793629",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Send SMS to real device without SIM card with appium Is it possible to simulate sending SMS to a real device without SIM Card with appium?
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/72745204",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: paypal payment integration from asp.net page can anyone detail the particulars that are required to send data collected from a asp.net web page using vb.net that has a bunch of text boxes for first name, last name, address... item price, quantity, total price... to paypal for processing and settlement?
I have never worked with PayPal, so I am assuming PayPal will get this info and generate a bill to send the user.
Thanks in advance
A: The following article should explain much of the process to you. For further reading you can also check out the PayPal developer documentation.
Update:
Here is an updated example for current version of ASP.NET (4.5 at the time of writing)
A: Integrate PayPal into website vb.net
*
*Open "http://www.catalog.update.microsoft.com/search.aspx?q=kb3140245" and install the Windows update (needed for the NuGet package)
*Open cmd and enter reg add "HKLM\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Client" /v DisabledByDefault /t REG_DWORD /d 0 /f /reg:32 (registry update to enable TLS 1.2)
*Register nuGet update in solution of the project.
Goto : https://devblogs.microsoft.com/nuget/deprecating-tls-1-0-and-1-1-on-nuget-org/
*NuGet clients and PowerShell reg.
*Install Paypal nuGet package in solution
*Add Paypal SDK reference in the project.
*Add configurations for Paypal.
*NuGet manager package JSON uninstall.
*Tools -> nuget manager-> package manager console paste Install-Package Newtonsoft.Json -Version 6.0.1, enter.
*Add in vb file of startupwizard the code in paypal1.
https://drive.google.com/file/d/1HXChrl0XWR_sE_rZCtGAvgWu24C55KEV/view?usp=sharing
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/2842352",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-1"
}
|
Q: How to fix the assertion error in this problem? I'm having a problem fixing the error below. I'm returning correct outputs, then the assertion error occurred. How to fix it? What should I do?
Example Usage:
from vector import Vector
v1 = Vector(1, 1, 0) # Initialize vector <1,1,0>
v2 = Vector(2, 1, 3) # Initialize vector <2,1,3>
print(v1) # Should print "<1,1,0>"
print(v2) # Should print "<2,1,3>"
v3 = v1 + v2
print (v3) # v3 is <3,2,3>
v4 = v1 * 3
print (v4) # v4 is <3,3,0>
v5 = v2 * 2
print(v5) # v5 is <4,2,6>
c_p = v1.cross_product(v2)
print(c_p) # returns <3,-3,-1>
d_p = v1.dot_product(v2)
print(d_p) # returns 3
m = v1.magnitude() # returns 1.4142135623730951
print (m)
This is my code:
import math

class Vector:
    def __init__(self, x=0, y=0, z=0):
        self.x = x
        self.y = y
        self.z = z

    def __str__(self):
        return '<{}, {}, {}>'.format(self.x, self.y, self.z)

    def __add__(self, other):
        return Vector(self.x + other.x, self.y + other.y, self.z + other.z)

    def __mul__(self, other=int()):
        return Vector(self.x * other, self.y * other, self.z * other)

    def cross_product(self, other):
        c = (self.y * other.z - self.z * other.y,
             self.z * other.x - self.x * other.z,
             self.x * other.y - self.y * other.x)
        return c

    def dot_product(self, other):
        d = self.x * other.x + self.y * other.y + self.z * other.z
        return d

    def magnitude(self):
        e = math.sqrt(self.x ** 2 + self.y ** 2 + self.z ** 2)
        return e
Outputs from my code:
<1, 1, 0>
<2, 1, 3>
<3, 2, 3>
<3, 3, 0>
<4, 2, 6>
(3, -3, -1)
3
1.4142135623730951
Here is the error. I am guessing this could be due to whitespace? I tried resolving it, though.
, line 12, in test_vector
self.assertEquals(str(v1), "<2,1,-2>")
AssertionError: '<2, 1, -2>' != '<2,1,-2>'
- <2, 1, -2>
? - -
+ <2,1,-2>
A: The assertion expects "<2,1,-2>" with no spaces after the commas, but your __str__ inserts spaces. Remove them from the format string:
class Vector:
    def __init__(self, x=0, y=0, z=0):
        self.x = x
        self.y = y
        self.z = z

    def __str__(self):
        return '<{},{},{}>'.format(self.x, self.y, self.z)
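With the spaces removed, str() produces exactly what the assertion expects; a quick check (reusing only the fields from the question's class):

```python
class Vector:
    def __init__(self, x=0, y=0, z=0):
        self.x = x
        self.y = y
        self.z = z

    def __str__(self):
        # no spaces after the commas, matching the test's "<2,1,-2>"
        return '<{},{},{}>'.format(self.x, self.y, self.z)

v1 = Vector(2, 1, -2)
print(str(v1))  # <2,1,-2>
```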
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/71722461",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Mongo Nested array field selection My MongoDB document looks like this, and I need to select fields from the nested structure:
object :
{
"_id": {
"$oid": "5de775b53ec85e73da2b6d8a"
},
"vpg_id": 2,
"year": 2019,
"am_data": {
"822": {
"am_name": "Unmanaged ",
"no_of_mnths": 12,
"total_invoice": 14476.15,
"total_bv_invoice": 1840,
"opp_won_onetime_amt": 0,
"one_time_quota": 0,
"recurring_quota": 200,
"opp_won_rec_amt": 0,
"avg_total_invoice": 1206.3458333333333,
"avg_total_bv_invoice": 153.33333333333334,
"avg_opp_won_onetime_amt": 0,
"avg_one_time_quota": 0,
"avg_opp_won_rec_amt": 0,
"avg_recurring_quota": 16.666666666666668
},
"2155": {
"am_name": "Daniel Schiralli",
"no_of_mnths": 12,
"total_invoice": 396814.66000000003,
"total_bv_invoice": 577693.3200000001,
"opp_won_onetime_amt": 4792.5,
"one_time_quota": 14400,
"recurring_quota": 4800,
"opp_won_rec_amt": 345,
"avg_total_invoice": 33067.888333333336,
"avg_total_bv_invoice": 48141.11000000001,
"avg_opp_won_onetime_amt": 399.375,
"avg_one_time_quota": 1200,
"avg_opp_won_rec_amt": 28.75,
"avg_recurring_quota": 400
}
}
}
I want to select only no_of_mnths and am_name from all am_data entries.
The keys 822 and 2155 are dynamic; they will change, so I cannot reference them directly in the query. How can I approach getting this data?
Any help?
A: You can use $objectToArray operator to get rid of the dynamic keys.
db.getCollection('Test').aggregate([
{ $project: {"keys": { "$objectToArray": "$$ROOT.am_data" }} },
{ $unwind : "$keys"},
{ $project: {"am_name":"$keys.v.am_name", "no_of_mnths":"$keys.v.no_of_mnths" } }
])
Result:
[{
"_id" : ObjectId("5de775b53ec85e73da2b6d8a"),
"am_name" : "Unmanaged ",
"no_of_mnths" : 12
},
{
"_id" : ObjectId("5de775b53ec85e73da2b6d8a"),
"am_name" : "Daniel Schiralli",
"no_of_mnths" : 12
}]
A: The key should not change. Based on your description you need to adjust the schema: instead of an am_data object, it should be an array.
{
"_id": {
"$oid": "5de775b53ec85e73da2b6d8a"
},
"vpg_id": 2,
"year": 2019,
"am_data": [
{
"id": "822",
"am_name": "Unmanaged ",
"no_of_mnths": 12,
"total_invoice": 14476.15,
"total_bv_invoice": 1840,
"opp_won_onetime_amt": 0,
"one_time_quota": 0,
"recurring_quota": 200,
"opp_won_rec_amt": 0,
"avg_total_invoice": 1206.3458333333333,
"avg_total_bv_invoice": 153.33333333333334,
"avg_opp_won_onetime_amt": 0,
"avg_one_time_quota": 0,
"avg_opp_won_rec_amt": 0,
"avg_recurring_quota": 16.666666666666668
},
{
"id": "2155",
"am_name": "Daniel Schiralli",
"no_of_mnths": 12,
"total_invoice": 396814.66000000003,
"total_bv_invoice": 577693.3200000001,
"opp_won_onetime_amt": 4792.5,
"one_time_quota": 14400,
"recurring_quota": 4800,
"opp_won_rec_amt": 345,
"avg_total_invoice": 33067.888333333336,
"avg_total_bv_invoice": 48141.11000000001,
"avg_opp_won_onetime_amt": 399.375,
"avg_one_time_quota": 1200,
"avg_opp_won_rec_amt": 28.75,
"avg_recurring_quota": 400
}
]
}
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/59206998",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Jquery checking that passwords match with php + Json I have a form that I am validating with JS and PHP. Everything is going well so far apart from when I try to check if the passwords match.
Here is the form:
<div>
<label for="passlength">Password, valid: 0-9</label>
<input type="text" name="passlength" value="<?=@$_REQUEST['passlength']?>" id="passlength" />
<span id="validatePasslength"><?php if ($error) { echo $error['msg']; } ?></span>
</div>
<div>
<label for="passlength">Password2, valid: 0-9</label>
<input type="text" name="passlength2" value="<?=@$_REQUEST['passlength2']?>" id="passlength2" />
<span id="validatePasslength2"><?php if ($error) { echo $error['msg']; } ?></span>
</div>
This is the Javascript:
var r = $('#passlength').val();
var validatePasslength2 = $('#validatePasslength2');
$('#passlength2').keyup(function () {
var t = this;
if (this.value != this.lastValue) {
if (this.timer) clearTimeout(this.timer);
validatePasslength2.removeClass('error').html('<img src="../../images/layout/busy.gif" height="16" width="16" /> checking availability...');
this.timer = setTimeout(function () {
$.ajax({
url: 'ajax-validation.php',
data: 'action=check_passlength2&passlength=' + r + '&passlength2=' + t.value,
dataType: 'json',
type: 'post',
success: function (j) {
validatePasslength2.html(j.msg);
}
});
}, 200);
this.lastValue = this.value;
}
});
Here is the php:
//Check for passlength
if (@$_REQUEST['action'] == 'check_passlength' && isset($_SERVER['HTTP_X_REQUESTED_WITH'])) {
// means it was requested via Ajax
echo json_encode(check_passlength($_REQUEST['passlength']));
exit; // only print out the json version of the response
}
function check_passlength($password) {
// global $taken_usernames, $usersql;
$resp = array();
// $password = trim($password);
if (!preg_match('/^[0-9]{1,30}$/', $password)) {
$resp = array("ok" => false, "msg" => "0-9 Only");
} else if (preg_match('/^[0-9]{1,2}$/', $password)) {
$resp = array("ok" => false, "msg" => "Password too short");
} else if (preg_match('/^[0-9]{6,30}$/', $password)) {
$resp = array("ok" => false, "msg" => "Password too long");
} else {
$resp = array("ok" => true, "msg" => "Password ok");
}
return $resp;
}
//Check for passlength2
if (@$_REQUEST['action'] == 'check_passlength2' && isset($_SERVER['HTTP_X_REQUESTED_WITH'])) {
// means it was requested via Ajax
echo json_encode(check_passlength2($_REQUEST['passlength'],$_REQUEST['passlength2']));
exit; // only print out the json version of the response
}
function check_passlength2($password,$password2) {
// global $taken_usernames, $usersql;
$resp = array();
// $password = trim($password);
if (!preg_match('/^[0-9]{1,30}$/', $password2)) {
$resp = array("ok" => false, "msg" => "0-9 Only");
} else if (preg_match('/^[0-9]{1,2}$/', $password2)) {
$resp = array("ok" => false, "msg" => "Password too short");
} else if (preg_match('/^[0-9]{6,30}$/', $password2)) {
$resp = array("ok" => false, "msg" => "Password too long");
} else if ($password !== $password2) {
$resp = array("ok" => false, "msg" => "Passwords do not match");
} else {
$resp = array("ok" => true, "msg" => "Password ok");
}
return $resp;
}
I am pretty sure it is an issue with the JavaScript, because if I set var r = 1234; it works. Any ideas?
A: You just want to see if the passwords match, and are between a min and max length? Isn't the above overkill? Am I missing something?
You could use js alone to check the length of the first password field, then in the onblur event of the second field, check to see if field1==field2.
Minor thing I noticed, the label for the second field has the wrong "for" attribute.
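One detail worth noting in the question's script: var r = $('#passlength').val() runs once at page load, so r is almost certainly empty by the time the user types into the second field; reading the first field inside the handler avoids that. A plain-JS sketch of the client-side check this answer suggests (names and length limits are illustrative, mirroring the PHP rules):

```javascript
// Mirrors the PHP rules: digits only, 3-5 characters, and both fields equal.
function checkPasswords(pass1, pass2) {
  if (!/^[0-9]+$/.test(pass2)) return { ok: false, msg: '0-9 Only' };
  if (pass2.length <= 2) return { ok: false, msg: 'Password too short' };
  if (pass2.length >= 6) return { ok: false, msg: 'Password too long' };
  if (pass1 !== pass2) return { ok: false, msg: 'Passwords do not match' };
  return { ok: true, msg: 'Password ok' };
}

// Wire it to the blur event of the second field, reading the first
// field's current value at that moment rather than at page load:
// $('#passlength2').on('blur', function () {
//   var result = checkPasswords($('#passlength').val(), this.value);
//   $('#validatePasslength2').text(result.msg);
// });
```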
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/2408318",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: How to convert list of objects from one type to an other I have two objects.
class User{
public int id{get;set;}
public string name{get;set;}
}
class UserProtectedDetails{
public string name{get;set;}
}
How can I convert List<User> to List<UserProtectedDetails>?
I know how to do it in reflection, but is there anyway to do it in Linq or other .NET way?
A:
is there anyway to do it in linq or other .net way?
Sure:
List<User> list = ...; // Make your user list
List<UserProtectedDetails> details = list
    .Select(u => new UserProtectedDetails { name = u.name })
    .ToList();
EDIT: (in response to a comment) If you would like to avoid the {name = u.name} part, you need either (1) a function that makes a mapping for you, or (2) a constructor of UserProtectedDetails that takes a User parameter:
UserProtectedDetails(User u) {
name = u.name;
}
One way or the other, the name = u.name assignment needs to be made somewhere.
A: Well, it could be as easy as
var userList = new List<User>();
var userProtectedDetailsList = userList.Select(u =>
new UserProtectedDetails { name = u.name }
)
.ToList();
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/12018504",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: ORA-00928: missing SELECT keyword: update using with I am getting ORA-00928: missing SELECT keyword error when using "update" with "with".
This is giving the error.
with wr_double as
(select...)
update work_request r
set r.name = r.name || '_old'
where exists
(select 1 from wr_double wd
where wd.name = r.name and wd.wr_id = r.id)
But this works fine
with wr_double as
(select...)
select * from work_request r
where exists
(select 1 from wr_double wd
where wd.name = r.name and wd.wr_id = r.id)
Also, if I place my sub-query from the with in the body of the update it works fine.
update work_request r
set r.name = r.name || '_old'
where exists
(select 1 from
(select
wr.name,
wr.id as wr_id,
dup_wr.count,
d.id as d_id,
d.create_date
from
(select...) wd
where wd.name = r.name and wd.wr_id = r.id)
Can I not use "with" in this way with an "update"?
A: You have to write it the following way, because the CTE is part of the SELECT, not the UPDATE:
update work_request
set name = name || '_old'
where exists (
    with wr_double as
    (select...)
    select 1 from wr_double wd
    where wd.name = work_request.name and wd.wr_id = work_request.id
);
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/55363941",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: Visual Studio Community Won't Load Default Files I'm trying to learn C++, and when I opened a new Win32 console project it gave me an empty project when it should have given me files like stdafx.h or the name.cpp file. Is there any way to fix this?
A: Did you try updating your workloads? After installing VS, if you didn't choose the right workloads for certain project types such as Win32 console projects, go into your Programs folder from the Control Panel and right-click on Visual Studio. Choose 'Change', not 'Uninstall'. The page you are shown lists your workloads. Read them and decide which one supports that type of project. Check that workload; then, in the right sidebar, you will see custom options. Not all the ones you need may be checked automatically. Make sure any that refer to the build process are checked. Then click to proceed and VS will install the updates.
For Win32 programs, you need to add the workload called "Desktop development with C++" and then on the Installation Details pane on the right check all boxes that mention a "build".
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/49552511",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: How to connect Odoo with postgres (which is running on other server) I want to connect Odoo 10 with PostgreSQL, which is running on another server. Is it possible? If yes, please help me by posting a step-by-step procedure or a tutorial link (because I'm a beginner). For now I'm using Odoo 10 and PostgreSQL 9.6, on two different virtual machines (one for Odoo and the other for PostgreSQL).
Here, I'm using two virtual machines 216.200.116.8 (for odoo) & 216.200.116.174 (for postgresql). I'm able to access postgresql from 216.200.116.8 remotely.
Here is my /etc/odoo.conf
Here is my /etc/postgresql/9.6/main/postgresql.conf
Here is my /etc/postgresql/9.6/main/pg_hba.conf
After everything is configured I'm running odoo server from /opt/odoo/odoo10.0/odoo-bin
I've encountered this error! The detailed error says:
Odoo ver : 10 & postgresql ver : 9.6
Database User :postgres
password for postgres: passwd
Help needed! Thanks in Advance
A: If I am not mistaken, you must change the database user: Odoo doesn't allow "postgres" as the user, so you must create an "odoo" user to connect the application to the database, like this:
db_user: odoo
db_password: yourpassword
Then in the postgresql.conf file you must change the listen_addresses line so PostgreSQL accepts connections from other hosts:
listen_addresses = '*'
I can't remember if something must be changed in pg_hba.conf.
A: You need to change the pg_hba.conf file.
In the IPv4 section, replace the first line with:
host all all 0.0.0.0/0 trust
After restarting PostgreSQL, you can connect.
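Putting the two answers together, a minimal sketch of the changes (user, password, and addresses are placeholders; on the database VM you would also create the role, e.g. with createuser -d odoo):

```
# /etc/odoo.conf on 216.200.116.8
db_host = 216.200.116.174
db_port = 5432
db_user = odoo
db_password = yourpassword

# /etc/postgresql/9.6/main/postgresql.conf on 216.200.116.174
listen_addresses = '*'

# /etc/postgresql/9.6/main/pg_hba.conf on 216.200.116.174
# (md5 is a safer choice than trust for a remote client)
host    all    all    216.200.116.8/32    md5
```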
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/42789112",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
}
|
Q: LESS support on older android browser(2.* specific) I am converting a website to responsive using media queries and less. I am compiling less on client side using js, but it doesn't support older Android browsers(2.X). Is there a fix for this?
A: Returning to css is a good idea in this case: precompile the final version. You can use SimpleLESS or similar compiler to do it. Reducing client-side resources is healthy, especially for mobile/responsive UIs.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/22628523",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Is it possible to simulate a browser version update? I'm developing an add-on and I want to test how code that gets executed during the onInstalled event behaves after a browser version update.
Is there a way to simulate the browser update event, without waiting when an actual newer version of the browser will be out?
A: I couldn't find any way to simulate a browser update event. Here I use Edge as an example. As a workaround, you can download the Edge Canary version to test with, since it updates every day.
Besides, if your device is Microsoft AD domain-joined, you can also configure a policy to roll back the Edge version, then update it when you test.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/71082064",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: How does WPF-Grid implement the SharedSizeGroup-behaviour? I'm trying to figure out how Grid handles size-sharing in its columns and rows. I'm looking at the Grid code with Reflector but can't find any hits. If I'm not mistaken, the cols/rows sharing a size should first get a desired size and then be measured again with the max size found, so they end up the same size instead of just being clipped in the arrange pass. But I can't find any code for size-sharing at all with Reflector. Could someone explain roughly how size sharing should be implemented in a custom panel class, with respect to measure and arrange?
A: Look at System.Windows.Controls.DefinitionBase.
Its values (taken from the shared scope, if used) are then used in Grid.SetFinalSize.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/5745547",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
}
|
Q: Recursive looping over array of objects using reduce I have an array of objects with children. The goal is to remove every item from items arrays.
Is it possible to do this without using forEach and map loops? How can reduce be used in this case?
The problem is that some objects have items at one level while others have a children array with items inside. Sample here:
{
"label": "child1",
"children": [
{
"label": "child2",
"items": [
"item1",
"item2"
]
},
{
"label": "child3",
"items": [
"item1",
"item2",
"item3"
]
}
]
}
As a result, I want to see a mutated array of objects with empty items arrays.
Here's an object to be mutated:
[
{
"label": "parent",
"children": [
{
"label": "child1",
"children": [
{
"label": "child2",
"items": [
"item1",
"item2"
]
},
{
"label": "child3",
"items": [
"item1",
"item2",
"item3"
]
}
]
},
{
"label": "child4",
"items": []
},
{
"label": "child5",
"items": ["item1","item2"]
}
]
}
]
And here is my incomplete solution:
function flattenDeep(arr) {
return arr.reduce(
(acc, val) =>
Array.isArray(val)
? acc.concat(flattenDeep(val.children))
: acc.concat(val.children),
[]
);
}
A: Here's a way to empty all items arrays.
The idea is to define a reducer function that you can use recursively.
const reducer = (reduced, element) => {
// empty items array
if (element.items) {
element.items.length = 0;
}
// if element has children, recursively empty items array from it
if (element.children) {
element.children = element.children.reduce(reducer, []);
}
return reduced.concat(element); // or: [...reduced, element]
};
document.querySelector("pre").textContent =
JSON.stringify(getObj().reduce(reducer, []), null, " ");
// to keep relevant code on top of the snippet
function getObj() {
return [
{
"label": "parent",
"children": [
{
"label": "child1",
"children": [
{
"label": "child2",
"items": [
"item1",
"item2"
]
},
{
"label": "child3",
"items": [
"item1",
"item2",
"item3"
]
}
]
},
{
"label": "child4",
"items": []
},
{
"label": "child5",
"items": ["item1","item2"]
}
]
}
];
}
<pre></pre>
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/54052614",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: SelectFeature for features in layer underneath top layer I'm using the "OpenLayers.Control.SelectFeature" to hover over many features in a vector layer. However, when i add another layer on top, the hover 'highlight' ability is lost because the top layer is blocking it. Does anyone know if there is some "allow passthrough" feature or something.
Placing the top layer below is not an option as it needs to be on top.
EDIT:
If you load up my code you'll see that it works fine until you press the "move up" button which will bring the layer on top of the other layer and things will not work anymore.
HERE is my code:
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<meta name="viewport" content="initial-scale=1.0, user-scalable=no" />
<title>Open Layers TEST</title>
<link rel="stylesheet" href="http://openlayers.org/dev/theme/default/style.css" type="text/css"/>
<style type="text/css">
body {
font-family: "Lucida Grande", Verdana, Geneva, Lucida, Arial, Helvetica, sans-serif;
font-size: 80%;
color: #222;
background: #fff;
}
html, body
{
margin: 20px;
padding: 20px;
height: 100%;
width: 100%;
}
.smallmap {
width: 600px;
height: 500px;
border: 1px solid #ccc;
padding: 20px;
}
#controlToggle li {
list-style: none;
}
</style>
</head>
<body onload="init()">
<p><button onclick="MoveLayer('UP')">Move Up</button><button onclick="MoveLayer('DOWN')">Move Down</button></p>
<div id="map" class="smallmap"></div>
<script type="text/javascript" src="http://openlayers.org/dev/OpenLayers.js"></script>
<script type="text/javascript">
var map, selectControl, vectors2, vectors1;
OpenLayers.Feature.Vector.style['default']['strokeWidth'] = '2';
function init() {
map = new OpenLayers.Map('map');
var wmsLayer = new OpenLayers.Layer.WMS(
"OpenLayers WMS",
"http://vmap0.tiles.osgeo.org/wms/vmap0",
{ layers: 'basic' }
);
vectors1 = new OpenLayers.Layer.Vector("B&W(Vector1 - Results)", {
rendererOptions: { zIndexing: true },
styleMap: new OpenLayers.StyleMap({
"default": new OpenLayers.Style({
strokeColor: '#ff3',
strokeOpacity: .9,
strokeWidth: 2,
fillColor: '#33f',
fillOpacity: .2,
graphicZIndex: 10,
cursor: "pointer"
}),
"select": new OpenLayers.Style({
strokeColor: '#33f',
strokeOpacity: .9,
strokeWidth: 2,
fillColor: '#ff3',
fillOpacity: .2,
graphicZIndex: 12,
cursor: "pointer"
})
})
});
vectors2 = new OpenLayers.Layer.Vector("Y&B(Vector2 - Region)", {
rendererOptions: { zIndexing: true },
styleMap: new OpenLayers.StyleMap({
"default": new OpenLayers.Style({
strokeColor: '#000',
strokeOpacity: .5,
strokeWidth: 2,
fillColor: '#fff',
fillOpacity: .9,
graphicZIndex: 10,
cursor: "pointer"
}),
"select": new OpenLayers.Style({
strokeColor: '#fff',
strokeOpacity: .5,
strokeWidth: 2,
fillColor: '#000',
fillOpacity: .2,
graphicZIndex: 12,
cursor: "pointer"
})
})
});
map.addLayers([wmsLayer, vectors1, vectors2]);
map.addControl(new OpenLayers.Control.LayerSwitcher());
selectControl = new OpenLayers.Control.SelectFeature(
[vectors2],
{
hover: true,
highlightOnly: true
});
// selectControl.handlers['feature'].stopDown = false;
// selectControl.handlers['feature'].stopUp = false;
map.addControl(selectControl);
selectControl.activate();
var feature1 = new OpenLayers.Feature.Vector(
OpenLayers.Geometry.fromWKT(
"POLYGON((28.828125 0.3515625, 132.1875 -13.0078125, -1.40625 -59.4140625, 28.828125 0.3515625))"
)
);
vectors1.addFeatures([feature1]);
var feature2 = new OpenLayers.Feature.Vector(
OpenLayers.Geometry.fromWKT(
"POLYGON((-120.828125 -50.3515625, -80.1875 -80.0078125, -40.40625 -20.4140625, -120.828125 -50.3515625))"
)
);
var feature3 = new OpenLayers.Feature.Vector(
OpenLayers.Geometry.fromWKT(
"POLYGON((-52.734375 43.9453125, 82.265625 -65.7421875, 72.421875 41.8359375, 27.421875 67.1484375, -52.734375 43.9453125))"
)
);
vectors2.addFeatures([feature2, feature3]);
vectors1.events.fallThrough = true;
map.zoomToMaxExtent();
}
function MoveLayer(direction) {
if (direction == "UP") {
map.raiseLayer(vectors1, 1);
} else {
map.raiseLayer(vectors1, -1);
}
map.resetLayersZIndex();
// vectors1.setZIndex(9999);
}
</script>
</body>
</html>
A: Here is one approach that might work: add vectors1 to the SelectFeature control as well, so features can still be highlighted after you click Move Up. Then add a handler to apply a style to the features you want:
function style_feature(feature) {
var hoverStyle = new OpenLayers.Style({
//add style here
});
//todo: add logic to check which feature you want and style accordingly
this.layer.drawFeature(feature, hoverStyle);
};
selectControl = new OpenLayers.Control.SelectFeature(
[vectors2,vectors1],
{
hover: true,
highlightOnly: true,
highlight: style_feature
});
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/4379832",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: visual basic for applications classes Compile error: Expected: end of Statement I am trying to rewrite some C++ into an excel macro, but can't seem to even get line 1 of any tutorial on classes in VBA to work.
I have tried the following:
Public Class gamepath
End Class
Sub Whatever()
End Sub
Then, when I run the Whatever() Macro I expect it to compile but I get the error:
Compile error:
Expected: end of Statement
And it highlights the word gamepath
I am not skilled enough in VB to know why this error occurs, and the error is too vague for my searches to pull up anything I can use. Can anyone tell me why this won't compile?
A: The code you're using looks like VB.NET, not VBA. The syntax is similar, but not the same. In VBA you don't script a class inline; you insert a special type of code module that contains the class's code. Sub Whatever resides in a standard module.
Insert a class module and name it "GameClass" (classes are typically proper-cased, not lower-cased). Add your methods and any properties (here is a good overview of property getters/setters) in that module.
Then you can instantiate your GameClass and call its methods from elsewhere.
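A minimal sketch of what that might look like (the property name and path are illustrative):

```vba
' --- In a class module named GameClass ---
Private m_path As String

Public Property Get Path() As String
    Path = m_path
End Property

Public Property Let Path(ByVal value As String)
    m_path = value
End Property

' --- In a standard code module ---
Sub Whatever()
    Dim gp As GameClass
    Set gp = New GameClass         ' instantiate the class
    gp.Path = "C:\Games\MyGame"    ' set a property
    Debug.Print gp.Path            ' call into the class
End Sub
```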
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/57479266",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Memory still leaking even though all arrays are being deleted C++ My program includes some 1D and some 2D arrays. Because of style guidelines I have to use those and can't use things like std::vector etc. We also have to create new arrays on the heap and delete them afterwards.
My code looks like this:
void wordsfrequency(int words, int length, char **input) { //passing in the amount of words in the
int many, occurrence; // input[] array, the strlen of the
cout << "How many words do you want to check? \n"; // input[] array, and the input[] 2d array
cin >> many;
char **temp;
temp = new char*[many+1]; // creating a 2d array and initializing it
for (int i = 0; i < many; i++) {
temp[i] = new char[13];
for (int j = 0; j < 13; j++) {
temp[i][j] = i*many+13;
}
}
for (int i = 0; i < many; i++) {
cout << "Please enter a word \n";
cin >> temp[i];
}
for (int i = 0; i < many; i++) { // counting how many times a word from
occurrence = 0; // temp[] occurs in input[]
for (int j = 0; j < words; j++) {
if (strcmp(input[j], temp[i]) == 0) {
occurrence += 1;
}
if (j == words-1) {
cout << temp[i] << " occurs " << occurrence << " times \n";
}
}
}
for (int i = 0; i < many; i++) { // looping through the array and deleting
for (int j = 0; j < 13; j++) { // all the allocated memory
delete[] temp[j];
}
delete[] temp[i];
}
delete [] temp;
}
This is my main function where I also delete the input array at the bottom.
int main() {
int words, length;
string filename;
ifstream fin;
cout << "Please enter a filename \n";
cin >> filename;
fin.open(filename.c_str(), ios::in);
cout << "How many words will you enter? \n";
cin >> words;
char **input;
input = new char*[1024];
for (int i = 0; i < 1024; i++) {
input[i] = new char[13];
for (int j = 0; j < 13; j++) {
input[i][j] = i*1024+13;
}
}
fin.getline(*input, 1024, '\0');
length = strlen(*input);
char *token = strtok(*input, " ");
for (int i = 0; i < words; i++) {
input[i] = token;
token = strtok(NULL, " ");
}
fin.close();
wordsfrequency(words, length, input);
for (int i = 0; i < 1024; i++) {
for (int j = 0; j < 13; j++) {
delete[] input[j];
}
delete[] input[i];
}
delete[] input;
return 0;
}
The code works, it actually counts the occurrences correctly, but it leaks a bit of memory. I have tried a crap load of commenting things out or tweaking with for loop parameters, but I can't get it to not leak any memory. On top of that, I am getting an Invalid free() error with valgrind.
Invalid free() / delete / delete[] / realloc()
==6809== at 0x4C2BB8F: operator delete[](void*) (vg_replace_malloc.c:651)
==6809== by 0x4021D7: main
==6809== Address 0x5a299d0 is 0 bytes inside a block of size 13 free'd
==6809== at 0x4C2BB8F: operator delete[](void*) (vg_replace_malloc.c:651)
==6809== by 0x40216E: main
==6809== Block was alloc'd at
==6809== at 0x4C2AC38: operator new[](unsigned long) (vg_replace_malloc.c:433)
==6809== by 0x401E02: main
Any help would be greatly appreciated!
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/66432355",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: R - mutate with regex in a loop I have a data frame in which every column consists of a number followed by text, e.g. 533 234r/r.
The following code to get rid of the text works well:
my_data <- my_data %>%
mutate(column1 = str_extract(column1, '.+?(?=[a-z])'))
I would like to do it for multiple columns:
col_names <- names(my_data)
for (i in 1:length(col_names)) {
my_data <- my_data%>%
mutate(col_names[i] = str_extract(col_names[i], '.+?(?=[a-z])'))
}
But it returns an error:
Error: unexpected '=' in:
" my_data <- my_data %>%
mutate(col_names[i] ="
I think mutate_all() wouldn't work either, because str_extract() requires the column name as an argument.
A: If we are using strings, then convert to symbol and evaluate (!!) while we do the assignment with (:=)
library(dplyr)
library(stringr)
col_names <- names(my_data)
for (i in seq_along(col_names)) {
my_data <- my_data %>%
mutate(!! col_names[i] :=
str_extract(!!rlang::sym(col_names[i]), '.+?(?=[a-z])'))
}
In tidyverse, we could do this with across instead of looping with a for loop (dplyr version >= 1.0)
my_data <- my_data %>%
mutate(across(everything(), ~ str_extract(., '.+?(?=[a-z])')))
If the dplyr version is old, use mutate_all
my_data <- my_data %>%
mutate_all(~ str_extract(., '.+?(?=[a-z])'))
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/64959733",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: How to modify a certain system call in minix 3.2.1? I'm new to MINIX 3.2.1 and I'd like to change a certain system call and its output. For example, when I type mkdir Newdirectory, I want to see on the screen New dir -> myNewDirectory 755 (755 stands for the access rights). How could I achieve this?
A: First of all you need to find the correct file to modify. For your example, you can modify the mkdir command by changing/adding code in the usr/src/servers/vfs/open.c file. If you look at the open.c file you'll see that there is a do_mkdir function there. You can use:
printf("New dir -> %s", fullpath);
do_mkdir already has the name of the new directory in the fullpath array, so you don't have to make a variable yourself. As for the access rights, you can use the S_IRWXU/S_IRWXG/S_IRWXO masks to see them (for more information visit http://pubs.opengroup.org/onlinepubs/7908799/xsh/sysstat.h.html). For example, you can store the access rights in integer variables:
if(bits & S_IRUSR) x = x + 4;
if(bits & S_IWUSR) x = x + 2;
if(bits & S_IXUSR) x = x + 1;
Just do the same for the group and other rights and there you go.
Keep in mind that you'll need to compile the file in order to apply the changes. Go to the usr/src/releasetools directory and run make hdboot in the terminal. Restart and you'll see the changes.
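The bit arithmetic above simply folds the three permission flags into one octal digit. As a quick sanity check of that logic (sketched here in Python with the standard stat constants, since the Minix C names match POSIX):

```python
import stat

def octal_digit(bits, r, w, x):
    """Fold three permission flags into one octal digit (0-7)."""
    d = 0
    if bits & r:
        d += 4
    if bits & w:
        d += 2
    if bits & x:
        d += 1
    return d

# A 755 mode should decode to the digits 7, 5, 5
mode = 0o755
user = octal_digit(mode, stat.S_IRUSR, stat.S_IWUSR, stat.S_IXUSR)
group = octal_digit(mode, stat.S_IRGRP, stat.S_IWGRP, stat.S_IXGRP)
other = octal_digit(mode, stat.S_IROTH, stat.S_IWOTH, stat.S_IXOTH)
print(user, group, other)  # 7 5 5
```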
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/36669816",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: What data structure to use for regexped result in Python? I parsed a log file and now I have a regexped result for every status:
'/status1'
'/status2'
'/status3'
For every result I need to keep some info : num1, num2, num3
What data structure to use in Python that I could use array of statuses in format:
/status1, num1, num2, num3
/status2, num1, num2, num3
/status3, num1, num2, num3
so that I can do some calculations later with these nums for every status
A: There are no "arrays" in Python; when we talk of an "array" in a Python context, we usually mean a NumPy array (which is not what you want here). But Python has lists that can hold objects of different types, so it is feasible to have e.g.:
[[str, int, int, int],[str, int, int, int],...]
which might be what you need.
Not sure if it helps, I'd rather add it in a comment, but my account is not allowed to add comments yet.
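For a concrete sketch (the status names and numbers below are illustrative, not from the question): a list of tuples holds the parsed rows, and a dict keyed by status makes the later per-status calculations easy:

```python
# One tuple per regexped line: (status, num1, num2, num3)
rows = [
    ("/status1", 10, 20, 30),
    ("/status2", 11, 21, 31),
    ("/status3", 12, 22, 32),
]

# A dict keyed by status makes later lookups straightforward
by_status = {status: (n1, n2, n3) for status, n1, n2, n3 in rows}

# Example calculation: sum of num1 across all statuses
total_num1 = sum(nums[0] for nums in by_status.values())
print(total_num1)  # 33
```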
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/48127434",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-1"
}
|
Q: Redirect add to cart button woocommerce Is it possible to redirect the add to cart button to a different link depending on the product quantity?
The button works via AJAX; which hook should I use?
I tried to use this code in the theme's functions file, but it doesn't work:
function check_quantity() {
$adult_number=$_POST['adult_number'];
$child_number=$_POST['child_number'];
$infant_number=$_POST['infant_number'];
$total = $adult_number + $child_number + $infant_number;
if ($total > 4){
wp_redirect( get_home_url().'/viaggi-gruppo/');
exit;
}
}
add_filter('woocommerce_add_to_cart', 'check_quantity');
How can I do?
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/70058567",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Getting a specific number of JSON items with jQuery from tumblr feed I'm trying to populate a page with X entries from my tumblr feed, and I'm wondering how I can only pull that X number from the JSON object it returns.
Here's my code, pulled from another Stack Overflow post and modified:
//Tumblr retrieval
$.getJSON("http://tumblr-address/api/read/json?callback=?",
function(data) {
$.each(data.posts, function(i,posts){
var title = this["regular-title"];
var type = this.type;
var date = this.date;
var url = this["url-with-slug"];
$('#sideRail ol').prepend('<li><p><a href=' +url +'>' + title + '</a></p><p>' + date + '</p></li>');
});
});
I've tried using a while loop with a counter, but it just repeats everything X times before moving on to the next item in the list.
Thanks for any help.
A: Use the Array.slice method on the post array. For example, to retrieve 10 items:
$.getJSON("http://tumblr-address/api/read/json?callback=?",
function(data) {
$.each(data.posts.slice(0, 10), function(i,posts){
// ...
A: You can use the num query parameter:
$.getJSON("http://tumblr-address/api/read/json?num=20", ...
And I don't think you need to have a blank callback parameter. You're not doing JSONP.
A: Old post, but updated info can't hurt: yes, the old API allowed the num= parameter to specify a limit on returned items; the new API version 2 uses 'limit=' instead, but defaults to 20 if left out.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/2908561",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: Recover commit lost during commit amend I've made a mistake I guess
Yesterday I created a new branch (feature/crud-suppliers), then yesterday and today I worked on that branch.
30 minutes ago, after I finished my changes, I added the modified files and did a git commit --amend --no-edit; then I remembered that I didn't commit anything before, so I wanted to add a message to the commit.
I thought that with git rebase -i HEAD~2 I could go in and change the message, but the commit wasn't there, so I just pressed Ctrl+X to exit and noticed that it completed the rebase.
After that my edits disappeared. A few minutes ago I pushed everything to check if I could find the edits on GitHub, but no luck.
I tried git reflog but checking the hash before the rebase didn't show my edits.
this is my git reflog:
637b687 (HEAD, master) HEAD@{0}: checkout: moving from feature/crud-suppliers to 637b687
a55e9d9 (origin/feature/crud-suppliers, feature/crud-suppliers) HEAD@{1}: checkout: moving from 3f5931ac661a4d4ee983fe0a173ae309a874be83 to feature/crud-suppliers
3f5931a HEAD@{2}: checkout: moving from 8dd9857224adf665df1d5d981c067d6068c3bea6 to 3f5931a
8dd9857 HEAD@{3}: checkout: moving from feature/crud-suppliers to 8dd9857
a55e9d9 (origin/feature/crud-suppliers, feature/crud-suppliers) HEAD@{4}: checkout: moving from feature/crud-products to feature/crud-suppliers
9ed4250 (origin/feature/crud-products, feature/crud-products) HEAD@{5}: checkout: moving from develop to feature/crud-products
069daa3 (origin/develop, develop) HEAD@{6}: checkout: moving from feature/crud-suppliers to develop
a55e9d9 (origin/feature/crud-suppliers, feature/crud-suppliers) HEAD@{7}: rebase -i (finish): returning to refs/heads/feature/crud-suppliers
a55e9d9 (origin/feature/crud-suppliers, feature/crud-suppliers) HEAD@{8}: rebase -i (start): checkout HEAD~2
a55e9d9 (origin/feature/crud-suppliers, feature/crud-suppliers) HEAD@{9}: checkout: moving from 3f5931ac661a4d4ee983fe0a173ae309a874be83 to feature/crud-suppliers
3f5931a HEAD@{10}: checkout: moving from a55e9d98dc253dfb72461e7f4ef07dc815df0400 to 3f5931a
a55e9d9 (origin/feature/crud-suppliers, feature/crud-suppliers) HEAD@{11}: checkout: moving from feature/crud-suppliers to a55e9d9
a55e9d9 (origin/feature/crud-suppliers, feature/crud-suppliers) HEAD@{12}: rebase -i (finish): returning to refs/heads/feature/crud-suppliers
a55e9d9 (origin/feature/crud-suppliers, feature/crud-suppliers) HEAD@{13}: rebase -i (start): checkout HEAD~2
a55e9d9 (origin/feature/crud-suppliers, feature/crud-suppliers) HEAD@{14}: rebase -i (finish): returning to refs/heads/feature/crud-suppliers
a55e9d9 (origin/feature/crud-suppliers, feature/crud-suppliers) HEAD@{15}: rebase -i (pick): CRUD employees
3f5931a HEAD@{16}: rebase -i (pick): Added new ways to retreive company informations
8dd9857 HEAD@{17}: rebase -i (pick): Created company user views
faedafc HEAD@{18}: rebase -i (pick): Changed email link to reset password
9b54992 HEAD@{19}: rebase -i (start): checkout HEAD~2
b69bfb0 HEAD@{20}: commit (amend): Merge pull request #11 from alebuffoli/feature/crud-employees
069daa3 (origin/develop, develop) HEAD@{21}: checkout: moving from feature/crud-products to feature/crud-suppliers
Update
The method suggested below worked, but as a matter of fact, before receiving any reply on this question I was able to recover all my lost changes with the local history function of my editor (PyCharm), so I thought I'd mention it in case you are in a similar situation and cannot recover the changes with the methods below.
A: Git also keeps a log for individual branches: run
git reflog feature/crud-suppliers
to view only the actions that moved that branch.
By default, git rebase completely drops merge commits. If your changes were stored in the commit Merge pull request #11 from ..., then running git rebase HEAD~2 would discard that commit.
You can use -r|--rebase-merges to keep them.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/64055403",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: How to stop adding two of the same ID to a list? (C# Multithreading) I've got an issue which is probably easily solvable, but for some reason I can't wrap my head around...
I have a single list, which contains a class which contains some information. One of these is an ID, which starts from 0 and increases by one with each submission.
When running multiple threads, they submit different variations of the same ID. This should not be possible, as the code checks whether the item can be added just before I literally call List<T>.Add.
Any suggestions on how I can avoid this?
Main method:
public static bool AddToList(List<ExampleItem> itemList, List<Xxx> xxx, ExampleItem newItem)
{
ExampleItem lastItem = itemList[itemList.Count - 1];
// We must validate the old item one more time before we progress. This is to prevent duplicates.
if(Validation.ValidateBlockIntegrity(newItem, lastItem))
{
itemList.Add(newItem);
return true;
}
else
return false;
}
Validation method:
public static bool ValidateBlockIntegrity(ExampleItem newItem, ExampleItem lastItem)
{
// We check to see if the ID is correct
if (lastItem.id != newItem.id - 1)
{
Console.WriteLine("ERROR: Invalid ID. It has been rejected.");
return false;
}
// If we made it this far, the item is valid.
return true;
}
A: Thanks to the suggestions from @mjwills and whoever deleted their answer, I was able to figure out a good method.
I'm now using a ConcurrentDictionary<long, ExampleClass>, which means I can both index and add without risking duplicated IDs - exactly what I needed.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/51759317",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Extend an Attribute Here is my situations, I use this attribute MyAttrAttribute all over my codebase, I would now like to give it the property such that any place that I use [MyAttr], it also applies a third party attribute [TheirAttr].
Of course, I could do a find a replace across my code to extend the attribute, but is there a way to modify MyAttrAttribute to apply the third party attribute as well?
A: You could derive MyAttrAttribute from TheirAttrAttribute, and then Attribute.GetCustomAttribute method should work with both types:
public static Attribute GetCustomAttribute(
Assembly element,
Type attributeType
)
....
attributeType
Type: System.Type
The type, or a base type, of the custom attribute to search for.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/31528169",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: kafka jdbc sink connector: recreates all other connectors when I create or update one connector I have about 500 connectors, and every time I create a new one, I have to wait a long time while all the previous ones are recreated. It is too slow.
The sink.properties is as follows
{
"name": "saas-saas_order-worder_fee_price-sink",
"config": {
"connector.class": "io.confluent.connect.jdbc.JdbcSinkConnector",
"errors.log.include.messages": "true",
"tasks.max": "1",
"topics": "saasosvc-worder_fee_price",
"transforms": "unwrap",
"auto.evolve": "true",
"name": "saas-saas_order-worder_fee_price-sink",
"transforms.unwrap.type": "io.debezium.transforms.UnwrapFromEnvelope",
"auto.create": "true",
"connection.url": "jdbc:postgresql://postgres:5432/saas?user=postgresuser&password=postgrespw",
"errors.log.enable": "true",
"insert.mode": "upsert",
"pk.mode": "record_value",
"pk.fields": "id"
}
}
some logs are as follows
> 2019-06-12 05:44:42,242 INFO ||
> WorkerSourceTask{id=saas-com_dyrs_mtsp_furnituresaleservice-source-0}
> flushing 0 outstanding messages for offset commit
> [org.apache.kafka.connect.runtime.WorkerSourceTask]
> 2019-06-12 05:44:45,950 INFO || WorkerSinkTask{id=saas-appconstruction-appconstruction_requestrecord-sink-0}
> Committing offsets asynchronously using sequence number 20376:
> {appssvc-appconstruction_requestrecord-0=OffsetAndMetadata{offset=2340,
> leaderEpoch=null, metadata=''}}
> [org.apache.kafka.connect.runtime.WorkerSinkTask]
> 2019-06-12 05:45:18,242 INFO || Connector saas-saas_order-worder_fee_price-sink config updated
> [org.apache.kafka.connect.runtime.distributed.DistributedHerder]
> 2019-06-12 05:45:18,745 INFO || Rebalance started [org.apache.kafka.connect.runtime.distributed.DistributedHerder]
> 2019-06-12 05:45:18,747 INFO || Stopping connector saas-order_base-product_vs_install-sink
> [org.apache.kafka.connect.runtime.Worker]
> 2019-06-12 05:45:18,747 INFO || Stopping connector saas-dyrs_complainservice-complain_type_record-sink
> [org.apache.kafka.connect.runtime.Worker]
> 2019-06-12 05:45:18,748 INFO || Stopping connector saas-appcustomerself-appcustomerself_appversionuser-sink
> [org.apache.kafka.connect.runtime.Worker]
> 2019-06-12 05:45:18,748 INFO || Stopping connector saas-com_dyrs_mtsp_changeservice-replace_product-sink
> [org.apache.kafka.connect.runtime.Worker]
> 2019-06-12 05:45:18,748 INFO || Stopping connector saas-com_dyrs_mtsp_designservice-version_info-sink
> [org.apache.kafka.connect.runtime.Worker]
> 2019-06-12 05:45:18,748 INFO || Stopping connector saas-dyrs_settlementservice-source
> [org.apache.kafka.connect.runtime.Worker]
> 2019-06-12 05:45:18,748 INFO || Stopping connector saas-appcustomerself-appcustomerself_advice-sink
> [org.apache.kafka.connect.runtime.Worker]
> 2019-06-12 05:45:18,748 INFO || Stopping connector saas-im-im_accesstoken-sink
> [org.apache.kafka.connect.runtime.Worker]
> 2019-06-12 05:45:18,761 INFO || 172.31.206.219 - - [12/Jun/2019:05:45:18 +0000] "POST /connectors/ HTTP/1.1" 201 604 549
> [org.apache.kafka.connect.runtime.rest.RestServer]
> 2019-06-12 05:45:18,775 INFO || Stopped connector saas-im-im_accesstoken-sink
> [org.apache.kafka.connect.runtime.Worker]
> 2019-06-12 05:45:18,776 INFO || Stopping connector saas-saas_order-dd_order-sink
> [org.apache.kafka.connect.runtime.Worker]
> 2019-06-12 05:45:18,778 INFO || Stopped connector saas-dyrs_settlementservice-source
> [org.apache.kafka.connect.runtime.Worker]
> 2019-06-12 05:45:18,780 INFO || Stopping connector saas-dyrs_settlementservice-balance_info_detail-sink
> [org.apache.kafka.connect.runtime.Worker]
> 2019-06-12 05:45:18,781 INFO || Stopped connector saas-com_dyrs_mtsp_designservice-version_info-sink
> [org.apache.kafka.connect.runtime.Worker]
> 2019-06-12 05:45:18,781 INFO || Stopping connector saas-dyrs_authorityservice-pro_city_area-sink
> [org.apache.kafka.connect.runtime.Worker]
> 2019-06-12 05:45:18,781 INFO || Stopped connector saas-dyrs_complainservice-complain_type_record-sink
> [org.apache.kafka.connect.runtime.Worker]
> 2019-06-12 05:45:18,781 INFO || Stopping connector saas-constructionconfig-constructionconfig_checkmanagedetailstandard-sink
> [org.apache.kafka.connect.runtime.Worker]
> 2019-06-12 05:45:18,799 INFO || Stopped connector saas-saas_order-dd_order-sink
> [org.apache.kafka.connect.runtime.Worker]
> 2019-06-12 05:45:18,799 INFO || Stopping connector saas-saas_order-goods-sink [org.apache.kafka.connect.runtime.Worker]
> 2019-06-12 05:45:18,857 INFO || Stopped connector saas-saas_order-goods-sink [org.apache.kafka.connect.runtime.Worker]
> 2019-06-12 05:45:18,857 INFO || Stopping connector saas-com_dyrs_mtsp_businessopportunityservice-business_process_operator-sink
> [org.apache.kafka.connect.runtime.Worker]
> 2019-06-12 05:45:18,860 INFO || Stopped connector saas-appcustomerself-appcustomerself_appversionuser-sink
> [org.apache.kafka.connect.runtime.Worker]
> 2019-06-12 05:45:18,860 INFO || Stopping connector saas-com_dyrs_mtsp_changeservice-custom_wood_info-sink
> [org.apache.kafka.connect.runtime.Worker]
> 2019-06-12 05:45:18,862 INFO || Stopped connector saas-order_base-product_vs_install-sink
> [org.apache.kafka.connect.runtime.Worker]
> 2019-06-12 05:45:18,863 INFO || Stopping connector saas-com_dyrs_mtsp_quotelistservice-pre_quote_detail_tab_info-sink
> [org.apache.kafka.connect.runtime.Worker]
> 2019-06-12 05:45:18,865 INFO || Stopped connector saas-com_dyrs_mtsp_changeservice-replace_product-sink
> [org.apache.kafka.connect.runtime.Worker]
> 2019-06-12 05:45:18,868 INFO || Stopping connector saas-com_dyrs_mtsp_quoteservice-program_template_tab-sink
> [org.apache.kafka.connect.runtime.Worker]
> 2019-06-12 05:45:18,870 INFO || Stopped connector saas-dyrs_settlementservice-balance_info_detail-sink
> [org.apache.kafka.connect.runtime.Worker]
> 2019-06-12 05:45:18,870 INFO || Stopping connector saas-dyrs_settlementservice-balance_config-sink
> [org.apache.kafka.connect.runtime.Worker]
> 2019-06-12 05:45:18,866 INFO || Stopped connector saas-appcustomerself-appcustomerself_advice-sink
> [org.apache.kafka.connect.runtime.Worker]
> 2019-06-12 05:45:18,873 INFO || Stopping connector saas-dyrs_authorityservice-account_login_fail-sink
> [org.apache.kafka.connect.runtime.Worker]
> 2019-06-12 05:45:18,872 INFO || Stopped connector saas-dyrs_authorityservice-pro_city_area-sink
> [org.apache.kafka.connect.runtime.Worker]
> 2019-06-12 05:45:18,873 INFO || Stopping connector saas-com_dyrs_mtsp_dataexpansionservice-dynamic_tabpk-sink
> [org.apache.kafka.connect.runtime.Worker]
> 2019-06-12 05:45:18,875 INFO || Stopped connector saas-constructionconfig-constructionconfig_checkmanagedetailstandard-sink
> [org.apache.kafka.connect.runtime.Worker]
> 2019-06-12 05:45:18,875 INFO || Stopping connector saas-com_dyrs_mtsp_promotionservice-offer_content-sink
> [org.apache.kafka.connect.runtime.Worker]
> 2019-06-12 05:45:18,884 INFO || Stopped connector saas-com_dyrs_mtsp_businessopportunityservice-business_process_operator-sink
> [org.apache.kafka.connect.runtime.Worker]
> 2019-06-12 05:45:18,885 INFO || Stopping connector saas-com_dyrs_mtsp_businessopportunityservice-personal_clue_limit-sink
> [org.apache.kafka.connect.runtime.Worker]
> 2019-06-12 05:45:18,891 INFO || Stopped connector saas-com_dyrs_mtsp_changeservice-custom_wood_info-sink
> [org.apache.kafka.connect.runtime.Worker]
> 2019-06-12 05:45:18,891 INFO || Stopping connector saas-dyrs_settlementservice-balance_operation-sink
> [org.apache.kafka.connect.runtime.Worker]
> 2019-06-12 05:45:18,902 INFO || Stopped connector saas-com_dyrs_mtsp_quotelistservice-pre_quote_detail_tab_info-sink
> [org.apache.kafka.connect.runtime.Worker]
> 2019-06-12 05:45:18,902 INFO || Stopping connector saas-order_base-unit_vs_unit-sink
> [org.apache.kafka.connect.runtime.Worker]
> 2019-06-12 05:45:18,904 INFO || Stopped connector saas-com_dyrs_mtsp_dataexpansionservice-dynamic_tabpk-sink
> [org.apache.kafka.connect.runtime.Worker]
> 2019-06-12 05:45:18,905 INFO || Stopped connector saas-com_dyrs_mtsp_quoteservice-program_template_tab-sink
> [org.apache.kafka.connect.runtime.Worker]
> 2019-06-12 05:45:18,907 INFO || Stopping connector saas-com_dyrs_mtsp_fieldassociationservice-source
> [org.apache.kafka.connect.runtime.Worker]
> 2019-06-12 05:45:18,904 INFO || Stopped connector saas-dyrs_authorityservice-account_login_fail-sink
> [org.apache.kafka.connect.runtime.Worker]
> 2019-06-12 05:45:18,907 INFO || Stopping connector saas-customerself-customerself_msgsendrecorddetail-sink
> [org.apache.kafka.connect.runtime.Worker]
> 2019-06-12 05:45:18,908 INFO || Stopping connector saas-finereport-constructionprocess_constructioninfoprocesscheck-sink
> [org.apache.kafka.connect.runtime.Worker]
> 2019-06-12 05:45:18,913 INFO || Stopped connector saas-dyrs_settlementservice-balance_config-sink
> [org.apache.kafka.connect.runtime.Worker]
> 2019-06-12 05:45:19,134 INFO || Stopped connector saas-order_base-unit_vs_unit-sink
> [org.apache.kafka.connect.runtime.Worker]
> 2019-06-12 05:45:19,136 INFO || Stopping connector saas-com_dyrs_mtsp_furnituresaleservice-fur_sale_contract_info-sink
> [org.apache.kafka.connect.runtime.Worker]
> 2019-06-12 05:45:19,139 INFO || Stopping connector saas-com_dyrs_mtsp_businessopportunityservice-call_record-sink
> [org.apache.kafka.connect.runtime.Worker]
> 2019-06-12 05:45:19,154 INFO || Stopped connector saas-com_dyrs_mtsp_furnituresaleservice-fur_sale_contract_info-sink
> [org.apache.kafka.connect.runtime.Worker]
> 2019-06-12 05:45:19,154 INFO || Stopping connector saas-com_dyrs_mtsp_fieldassociationservice-pre_filed-sink
> [org.apache.kafka.connect.runtime.Worker]
> 2019-06-12 05:45:19,166 INFO || Stopped connector saas-com_dyrs_mtsp_promotionservice-offer_content-sink
> [org.apache.kafka.connect.runtime.Worker]
> 2019-06-12 05:45:19,166 INFO || Stopping connector saas-runmonitoring-runmonitoring_browsinghistory-sink
> [org.apache.kafka.connect.runtime.Worker]
> 2019-06-12 05:45:19,167 INFO || Stopped connector saas-com_dyrs_mtsp_fieldassociationservice-pre_filed-sink
> [org.apache.kafka.connect.runtime.Worker]
Thanks
A: "Stop the world" rebalances are a known issue with Kafka Connect. The good news is that with KIP-415 which is due in Apache Kafka 2.3 there is a new incremental rebalance feature which should make things much better.
In the meantime the only other option is to partition your Kafka Connect workers and have separate clusters, splitting the 500 connectors up over them (e.g. by function, type, or other arbitrary factor).
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/56556007",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: ADF - How to copy an Excel Sheet with Multiple Sheets into separate .csv files I currently have an Excel file that has multiple worksheets (over 11). This Excel file currently lives in a remote file server. I am trying to use Azure Data FactoryV2 to copy the Excel file and split each worksheet as its own .csv file within an ADLS Gen2 folder. The reason for this is because not every tab has the same schema and I want to only select the valid ones later.
I currently have an ADF dataset pointing to the Excel dataset correctly and have created a parameter for the sheet name using @dataset.SheetName. I am not sure where to go next. After creating a new pipeline I have tried nesting a Copy Activity inside a ForEach activity, however, it asks for the SheetName value.
How do I construct this pipeline to grab the names of the worksheets existing in the Excel file and then iterate a copy activity for each sheet? I cannot assume I will know the sheet names or how many sheets there will be. I would prefer to avoid creating multiple datasets for the Excel file if possible.
Any insight would be appreciated.
A: Getting the list of Excel sheet names in ADF is not supported yet, and you can vote for it here.
*
*So you can use an Azure Function to get the sheet names:
import pandas
xl = pandas.ExcelFile('data.xlsx')
# see all sheet names
print(xl.sheet_names)
*Then use an Array-type variable in ADF to receive and traverse this list.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/67541195",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: upload custom wordpress theme to website instead of the current one Good evening.
The task is to replace the current old WordPress theme with a new theme that has a new design.
The new theme is ready and tested on localhost on my laptop.
What should I do to upload it correctly?
A: Compress the theme into a zip file and just upload it via the WordPress dashboard:
Appearance -> Themes -> then click on "Add New" -> then click on "Upload Theme"
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/69871709",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-3"
}
|
Q: How to Round and Format Mail Merge Number I have an amount that fluctuates from 1 million to over a billion and want to show the result as $1.5 million or $1.5 billion using the field codes in Word 2013 for a mail merge. (ie. 1,500,000 should display $1.5 million and 1,500,000,000 should display as $1.5 billion.)
I have this so far:
{=int({MERGEFIELD AreaSales})/100000000 \# $,0.0}
Which gives me close to what I'm looking for $1.5 but without accounting for an amount in the millions or billions and the proper label. Thanks in advance!
A: I don't understand exactly what you're asking - you should always provide examples of what you want to have. I'm assuming what you mean is you want to see the word "million" or "billion", as appropriate. This can be done using a separate IF field:
{ IF { MERGEFIELD AreaSales } > 999999.99 "{ IF { MERGEFIELD AreaSales } < 1000000000 "million" "billion" }" "" }
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/37450742",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Embedding password in and Internal Console App I'm about to release a console application onto one of our servers to be run by the task scheduler on a daily basis. Based on certain conditions, the application will email selected users through Office 365. In order to email users, of course, I'll need to use the credentials for an email account.
Given that this is an internal application on one of our servers, is embedding the username and password in the code safe? If not, what is the best practice to securely get around this?
If it helps, this is my code for the email function (written in C#):
String userName = "my.email@organization.ca";
String password = "myPassword";
MailMessage msg = new MailMessage();
msg.To.Add(new MailAddress(user.getEmail()));
msg.From = new MailAddress(userName);
msg.Subject = "My Subject";
msg.Body = "My message";
msg.IsBodyHtml = true;
SmtpClient client = new SmtpClient();
client.Host = "mail.office365.com";
client.Credentials = new System.Net.NetworkCredential(userName, password);
client.Port = 587;
client.EnableSsl = true;
client.Send(msg);
Edit: I should add that only administrators have access to this specific server.
A:
is embedding the username and password in the code safe
Of course not. Storing credentials without decent protection is never safe. Every user having access to the code or the executable can extract the credentials.
If I were you, I would store those credentials in an encrypted file on the machine it runs in a place where only the service user can access it. Encrypt the file with a device-specific key, so even if other users can obtain the file, it is useless to them since they can't decrypt it.
A: It's not safe in that anyone with access to that code or the application can get the password to the account with ease.
It's about risk vs reward. If a malicious player has access to the code or server then surely you have bigger issues to worry about than someone being able to access a mail account that can easily be recovered or shut down.
Is it worth going through extra trouble to prevent this outcome? It's up to you.
Arguably the configuration details should at least be in App.config so that they can be easily changed if a password expires, otherwise you'd have to recompile and redeploy the entire solution.
Ideally you should look into the various means available to you to perform encryption of your configuration file if you want to be as safe as possible. A simple search for c# encrypt configuration has enough material to get you started.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/47635106",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: WoW Addon (Lua) Variables and methods I've been messing with this for hours and I can't seem to get this to work
for i=1, 10 do
local frame = "MyFrame"..i
frame:EnableMouseWheel(true)
end
and the error I get is
attempt to call method 'EnableMouseWheel' (a nil value)
but if I do
MyFrame1:EnableMouseWheel(true)
there's no problem whatsoever and it works
is there any way to use a variable as a frame name for the method?
A: This will work:
local vars = getfenv()
for i=1, 10 do
local frame = "MyFrame"..i
vars[frame]:EnableMouseWheel(true)
end
Although you appear to be looking for the solution to the wrong problem. Why not store them in an array to begin with?
A: If you want to convert a string name into a variable name you need to access the global object as a table:
_G["MyFrame1"]
I don't know what version of Lua Warcraft uses. If it's a really old version that doesn't have _G then you probably need to use the getglobal function instead:
getglobal("MyFrame1")
That said, this is usually an antipattern. If you are the one that originally defined the MyFrame variables, it's normally better to use an array instead:
MyFrames = {
MyFrame1,
MyFrame2,
}
since this lets you avoid the string manipulation
local frame = MyFrames[i]
frame:EnableMouseWheel(true)
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/15399281",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: How can I parse an XML file given its path in jQuery? So I am trying to parse an XML file that a user chose through a file chooser. The problem I am having is that on my input change event, the jQuery is not being called.
$('input[type=file]').change(function(e){
path = $(this).val();
$.ajax({
type: "GET",
url: path,
dataType: "xml",
success: parseXml
});
});
function parseXml(xml)
{
head = xml;
alert('I reached here');
}
A: Do:
$(document).ready(function() {
$("input[type='file']").blur(function(){
var path = $(this).val();
alert(path);
$.ajax({
type: "GET",
url: path,
dataType: "xml",
success: function(response) {
parseXml(response);
}
});
});
});
function parseXml(xml) {
//parse here
}
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/8889226",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Typescript: Convert For loop to Promise and wait to resolve all ISSUE: save.emit() runs before all the iterations are completed in the "for" loop below, which results in incorrect values from the addUpdate call inside the loop.
I am trying to convert a for loop to promise and then wait for each of those promise to resolve so that I can emit changes and close a popup.
Below, is a test code in which I want to print "console.log("Print Before")" first for each iteration and then at the end print "console.log("Print After")" once all iterations are done.
Any help on the syntax for this is really appreciated.
convertForLoopToPromiseAndWait(someParameterOfTypeObject) {
for (var test of someParameterOfTypeObject) {
var testVariable = test.setValue;
if (testVariable) {
dataService.addUpdateEndpointCall();
console.log("Print Before");
}
}
console.log("Print After");
save.emit();
}
async addUpdateEndpointCall() {
const promise1 = this.dataService.addCall().take(1).toPromise();
const promise2 = this.dataService.deleteCall().take(1).toPromise();
await Promise.all([promise1, promise2])
.then(_ => this.save.emit());
}
A: Convert convertForLoopToPromiseAndWait to an async method; then you can use await after the for keyword and before dataService.addUpdateEndpointCall():
async convertForLoopToPromiseAndWait(someParameterOfTypeObject) {
for await (var test of someParameterOfTypeObject) {
var testVariable = test.setValue;
if (testVariable) {
await dataService.addUpdateEndpointCall();
console.log("Print Before");
}
}
console.log("Print After");
await save.emit();
}
A: Another way is to make a list of promises and wait for them to resolve:
const promises = [];
for await (your iterating condition) {
promises.push(dataService.addUpdateEndpointCall());
}
then use
await Promise.all(promises).then(() => this.save.emit());
A: I think that you have made a mistake here:
const promise1 = await this.dataService.addCall().take(1).toPromise();
const promise2 = await this.dataService.deleteCall().take(1).toPromise();
You await promises. The results put in the variables promise1 and promise2 will then not be promises.
Don't you mean the following?
async addUpdateEndpointCall() {
const promise1 = this.dataService.addCall().take(1).toPromise();
const promise2 = this.dataService.deleteCall().take(1).toPromise();
await Promise.all([promise1, promise2])
.then(_ => this.save.emit());
}
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/71312652",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: porting the piece of code I have following pieces of code:
code1:
lis = ["a", "s", "d"]
string.join(lis)
code2:
lis = ["a", "s", "d"]
' '.join(lis)
Results:
In both cases the result is 'a s d'
Now, there should be certain cases (if I'm correct) when the default value of the separator 'sep' differs from ' '. I would really like to know when such cases occur.
I have following doubts:
*
*Is there any difference between the above two codes, more specifically in the 'join'
statement in case of python2.x.
*If 'yes', then how do I perform the task of 'code1' in Python 3.x, because in Python 3.x the string module does not have 'join'
thanks in advance..
A: I had to look it up - large parts of the string module are obsolete (replaced by real methods on real str objects); because of this, you should probably use ' '.join even in Python 2. But no, there is no difference - string.join defaults to joining by single spaces (i.e. ' '.join is equivalent).
A: There is no significant difference. In all cases where you could use the first form the second will work, and the second will continue to work on Python 3.
However, some people find that writing a method call on a literal string looks jarring, if you are one of these then there are a few other options to consider:
' '.join(iterable) # Maybe looks a bit odd or unfamiliar?
You can use a name instead of a literal string:
SPACE = ' '
...
SPACE.join(iterable) # Perhaps a bit more legible?
Or you can write it in a similar style to string.join(), but be aware the arguments are the other way round:
str.join(' ', iterable)
Finally, the advanced option is to store the bound method under a name. e.g.
concatenate_lines = '\n'.join
...
print(concatenate_lines(iterable))
Any of these will work, just choose whichever you think reads best.
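A quick sanity check (Python 3) confirming that all of these spellings produce the same string:

```python
words = ["a", "s", "d"]

# Method call on a literal string.
joined_literal = ' '.join(words)

# Named constant instead of a literal.
SPACE = ' '
joined_named = SPACE.join(words)

# str.join with the separator as the first argument.
joined_unbound = str.join(' ', words)

# Bound method stored under a descriptive name.
concatenate = ' '.join
joined_bound = concatenate(words)

print(joined_literal)  # → a s d
```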
A: The two statements are equivalent.
string.join(words[, sep])
Concatenate a list or tuple of words
with intervening occurrences of sep.
The default value for sep is a single
space character.
While for the second case:
str.join(iterable)
Return a string which is the concatenation of
the strings in the iterable iterable.
The separator between elements is the
string providing this method.
Since the second version still exists in Python 3, you can use that one without any problems.
Sources:
*
*Python 2.7.1 documentation of string.join() (static method)
*Python 2.7.1 documentation of join() (string method)
A: string.join accepts a list or a tuple as a parameter, whereas " ".join accepts any iterable.
If you want to pass a list or a tuple, the two variants are equal. In Python 3 only the second variant exists, AFAIK.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/4848862",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: AttributeError : 'tuple' has no attribute 'to' I am writing this Image Classifier and I have defined the loaders, but I am getting this error and I have no clue about it.
I have defined the train loader; for a better explanation, I tried this:
for ina,lab in train_loader:
print(type(ina))
print(type(lab))
and I got
<class 'torch.Tensor'>
<class 'tuple'>
Now, For training of the model, I did
def train_model(model,optimizer,n_epochs,criterion):
start_time = time.time()
for epoch in range(1,n_epochs-1):
epoch_time = time.time()
epoch_loss = 0
correct = 0
total = 0
print( "Epoch {}/{}".format(epoch,n_epochs))
model.train()
for inputs,labels in train_loader:
inputs = inputs.to(device)
labels = labels.to(device)
optimizer.zero_grad()
output = model(inputs)
loss = criterion(output,labels)
loss.backward()
optimizer.step()
epoch_loss +=loss.item()
_,pred =torch.max(output,1)
correct += (pred.cpu()==labels.cpu()).sum().item()
total +=labels.shape[0]
acc = correct/total
and I got the error:
Epoch 1/15
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-36-fea243b3636a> in <module>
----> 1 train_model(model=arch, optimizer=optim, n_epochs=15, criterion=criterion)
<ipython-input-34-b53149a4bac0> in train_model(model, optimizer, n_epochs, criterion)
12 for inputs,labels in train_loader:
13 inputs = inputs.to(device)
---> 14 labels = labels.to(device)
15 optimizer.zero_grad()
16 output = model(inputs)
AttributeError: 'tuple' object has no attribute 'to'
If you want anything more, please tell me!
Thanks
Edit: The label looks like this.
This was an Image Classification between Bee and Wasp. It also contains insects and non insects
('wasp', 'wasp', 'insect', 'insect', 'wasp', 'insect', 'insect', 'wasp', 'wasp', 'bee', 'insect', 'insect', 'other', 'bee', 'other', 'wasp', 'other', 'wasp', 'bee', 'bee', 'wasp', 'wasp', 'wasp', 'wasp', 'bee', 'wasp', 'wasp', 'other', 'bee', 'wasp', 'bee', 'bee')
('wasp', 'wasp', 'insect', 'bee', 'other', 'wasp', 'insect', 'wasp', 'insect', 'insect', 'insect', 'wasp', 'wasp', 'insect', 'wasp', 'wasp', 'wasp', 'bee', 'wasp', 'wasp', 'insect', 'insect', 'wasp', 'wasp', 'bee', 'wasp', 'insect', 'bee', 'bee', 'insect', 'insect', 'other')
A: It literally means that the tuple class in Python doesn't have a method called to. Since you're trying to put your labels onto your device, just do labels = torch.tensor(labels).to(device).
If you don't want to do this, you can change the way the DataLoader works by making it return your labels as a PyTorch tensor rather than a tuple.
Edit
Since the labels seem to be strings, I would convert them to one-hot encoded vectors first:
>>> import torch
>>> labels_unique = set(labels)
>>> keys = {key: value for key, value in zip(labels_unique, range(len(labels_unique)))}
>>> labels_onehot = torch.zeros(size=(len(labels), len(keys)))
>>> for idx, label in enumerate(labels):
...     labels_onehot[idx][keys[label]] = 1
...
>>> labels_onehot = labels_onehot.to(device)
I'm shooting a bit in the dark here because I don't know the details exactly, but yeah strings won't work with tensors.
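For illustration, here is a minimal pure-Python sketch of the first step (torch omitted, and the batch below is a made-up example): mapping the string labels to integer class indices before wrapping them in a tensor:

```python
# Hypothetical batch of string labels, like the tuples the DataLoader returns.
labels = ('wasp', 'bee', 'insect', 'other', 'bee')

# Build a stable label -> index mapping (sorted so the order is reproducible).
keys = {name: idx for idx, name in enumerate(sorted(set(labels)))}

# Integer class indices; these are what you would pass to torch.tensor(...)
# before calling .to(device).
indices = [keys[name] for name in labels]
print(indices)  # → [3, 0, 1, 2, 0]
```

In practice you would build `keys` from the full set of classes in the dataset, not from a single batch, so the mapping stays consistent across batches.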
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/63825841",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: regex: combine capturing group with OR condition Let's say I have a string :
s = "id_john, num847, id_000, num___"
I know how to retrieve either of 2 patterns with |:
re.findall("id_[a-z]+|num[0-9]+", s)
#### ['id_john', 'num847'] # OK
I know how to capture a portion only of a match with parenthesis:
re.findall("id_([a-z]+)", s)
#### ['john']
But I fail when i try to combine those two features, this is my desired output:
#### ['john', '847']
Thanks for your help.. (I work with python)
A: No need for lookaheads or complex patterns.
Consider this:
>>> re.findall('id_([a-z]+)|num([0-9]+)', s)
[('john', ''), ('', '847')]
When the first pattern matches, the first group will contain the match, and the second group will be empty. When the second pattern matches, the first group is empty, and the second group contains the match.
Since one of the two groups will always be empty, joining them couldn't hurt.
>>> [a+b for a,b in re.findall('id_([a-z]+)|num([0-9]+)', s)]
['john', '847']
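The same idea can be spelled with re.finditer instead of joining the tuples: exactly one of the two groups is non-None per match, so `or` picks the populated one.

```python
import re

s = "id_john, num847, id_000, num___"

# For each match exactly one alternative matched; its group holds the text
# and the other group is None, so `a or b` selects the populated group.
result = [m.group(1) or m.group(2)
          for m in re.finditer(r'id_([a-z]+)|num([0-9]+)', s)]
print(result)  # → ['john', '847']
```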
A: You may use this code in Python with lookaheads:
>>> s = "id_john, num847, id_000, num___"
>>> print re.findall(r'(?:id_(?=[a-z]+\b)|num(?=\d+\b))([a-z\d]+)', s)
['john', '847']
RegEx Details:
*
*(?:: Start non-capture group
*
*id_(?=[a-z]+\b): Match id_ with a lookahead assertion to make sure we have [a-z]+ characters ahead followed by word boundary
*|: OR
*num(?=\d+\b): Match num with a lookahead assertion to make sure we have digits ahead followed by word boundary
*): End non-capture group
*([a-z\d]+): Match 1+ characters with lowercase letters or digits
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/50144074",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: How to find the Autocorrelation Matrix of a 1D array/vector I have a 1D array of size n which represents a signal in the time domain. I need to find the autocorrelation matrix of this signal using Python, then I'll compute the eigenvectors and eigenvalues of this matrix.
What I've tried is to use the Toeplitz method from scipy.linalg as follows:
res = scipy.linalg.toeplitz(c=np.asarray(signal),r=np.asarray(signal))
eigenValues,eigenVectors = numpy.linalg.eig(res)
I'm not sure if that's correct, because on the Matlab forums I saw a quite different solution (Matlab solution).
A: Terminology about correlations is confusing, so let me take care in defining what it sounds like you want to compute.
Autocorrelation matrix of a random signal
"Autocorrelation matrix" is usually understood as a characterization of random vectors: for a random vector (X[1], ..., X[N]) where each element is a real-valued random variable, the autocorrelation matrix is an NxN symmetric matrix R_XX whose (i,j)th element is
R_XX[i,j] = E[X[i] ⋅ X[j]]
and E[⋅] denotes expectation.
To reasonably estimate an autocorrelation matrix, you need multiple observations of random vector X to estimate the expectations. But it sounds like you have only one 1D array x. If we nevertheless apply the above formula, expectations simplify away to
R_XX[i,j] = E[X[i] ⋅ X[j]] ~= x[i] ⋅ x[j].
In other words, the matrix degenerates to the outer product np.outer(x, x), a rank-1 matrix with one nonzero eigenvalue. But this is an awful estimate of R_XX and doesn't reveal new insight about the signal.
Autocorrelation for a WSS signal
In signal processing, a common modeling assumption is that a signal is "wide-sense stationary" (WSS), meaning that any time shift of the signal has the same statistics. This assumption in particular means that the expectations above can be estimated from a single observation of the signal:
R_XX[i,j] = E[X[i] ⋅ X[j]] ~= sum_n (x[i + n] ⋅ x[j + n])
where the sum over n is over all samples. For simplicity, imagine in this description that x is a signal that goes on infinitely. In practice on a finite-length signal, something has to be done at the signal edges, but I'll gloss over this. Equivalently by the change of variable m = i + n we have
R_XX[i,j] = E[X[i] ⋅ X[j]] ~= sum_m (x[m] ⋅ x[j - i + m]),
with i and j only appearing together as a difference (j - i) in the right-hand side. So this autocorrelation is usually indexed in terms of the "lag" k = j - i,
R_xx[k] = sum_m (x[m] ⋅ x[m + k]).
Note that this results in a 1D array rather than a matrix. You can compute it for instance with scipy.signal.correlate(x, x) in Python or xcorr(x, x) in Matlab. Again, I'm glossing over boundary handling considerations at the signal edges. Please follow these links to read about the options that these implementations provide.
You can relate the 1D correlation array R_xx[k] with the matrix R_XX[i,j] by
R_XX[i,j] ~= R_xx[j - i]
which like you said is Toeplitz.
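To make the relation R_XX[i,j] ≈ R_xx[j - i] concrete, here is a toy pure-Python sketch (numpy/scipy deliberately omitted; in practice use scipy.signal.correlate as above). Boundary handling is the crude variant where lags that run off the end simply sum fewer terms:

```python
x = [1.0, 2.0, 3.0]  # toy signal
n = len(x)

def r_xx(k):
    """Estimate R_xx[k] = sum_m x[m] * x[m + k] over the valid m, for k >= 0."""
    return sum(x[m] * x[m + k] for m in range(n - k))

# For a real signal R_xx[-k] == R_xx[k], so index the matrix entries by |j - i|.
# The result is symmetric and Toeplitz, like scipy.linalg.toeplitz would build.
R = [[r_xx(abs(j - i)) for j in range(n)] for i in range(n)]
print(R)  # → [[14.0, 8.0, 3.0], [8.0, 14.0, 8.0], [3.0, 8.0, 14.0]]
```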
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/63825355",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Adding a table to a particular Div in the codebehind? Just for testing, I made a page with nothing but the form and div tags and, in the codebehind, I created a table, added rows to it, then used the following code to add the table to the page:
Page.Controls.Add(dtStatuses);
What I'd like to do now, though, is add this table to a particular div. For instance, if I have a div tag with an ID of "testDiv", how can I add the above table (dtStatuses) to that div, with just code in the codebehind?
A: Just set runat="server" to the div
<div id="testDiv" runat="server"></div>
and add the control like this
testDiv.Controls.Add(dtStatuses);
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/18616426",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: What lucene analyzer can be used to handle Japanese text? Which lucene analyzer can be used to handle Japanese text properly? It should be able to handle Kanji, Hiragana, Katakana, Romaji, and any of their combination.
A: You should probably look at the CJK package that is in the contrib area of Lucene. There is an analyzer and a tokenizer specifically for dealing with Chinese, Japanese, and Korean.
A: I found lucene-gosen while doing a search for my own purposes.
Their example looks fairly decent, but I guess it's the kind of thing that needs extensive testing. I'm also worried about their backwards-compatibility policy (or rather, the complete lack of one.)
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/1625000",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
}
|
Q: webpack configuration: pixi.js imported in app.js generates an over 2MB distribution file Creating a pixi.js component based on pixi.js and es6 modules.
In my app.js I am importing everything from pixi.js:
import * as PIXI from 'pixi.js';
I am also transpiling the code with babel:
rules: [
{
test: /\.js$/,
exclude: /(node_modules)/,
use: {
loader: 'babel-loader',
options: {
presets: ['@babel/preset-env']
}
}
}
]
which as a result generates a large file. If I exclude pixi.js from app.js, the file is only 700KB.
A: pixi.js (unminified) is 1.3MB, so what do you expect? If you want a smaller filesize you have to use a minification plugin for webpack, like uglify.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/52427275",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: How to get canonical filename on a case-insensitive filesystem? Imagine I have a file foO/bar.txt.
On a case-insensitive filesystem, I'm able to open the file as FOO/BaR.tXt.
Now I would like to detect the "canonical" filename (foO/bar.txt), so I could warn my users, that they should use the correct spelling if they want their save-files to be usable on systems with case-sensitive filesystems.
(that is: my users can insert relative paths via a text-input; on Windows they sometimes use non-canonical cases; when the project is then opened on a case-sensitive system, the relative paths are broken)
The entire code is in plain old C, and should work cross-platform (Linux, macOS, Windows; the latter two being the obvious candidates for case-insensitive filesystems...)
I tried using glob() (using the filename as the pattern), hoping that it would return the canonicalized filename, but alas! it does not. also the Windows equivalent FindFirstFile() will happily return the queried filename, rather than return the filename as found on disk.
Any idea for a simple solution that involves only stdlib?
(ideally without manually reading the content of the directory and then checking whether there's an exact match...)
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/73463994",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|