Q: How to get and convert back a user's birthdate to/from an NSTimeInterval? Suppose my birthdate is (month-day-year) 10-05-1932 (as an NSDate). How do I convert this to an NSTimeInterval, and how do I convert that NSTimeInterval back to an NSDate?
I tried using different methods of NSDate but haven't succeeded yet.
What I'm doing:
NSString *strdate = @"10-05-1932";
NSDateFormatter *df = [[NSDateFormatter alloc] init];
[df setDateFormat:@"MM-dd-yyyy"];
NSDate *date = [df dateFromString:strdate];
NSLog(@"%.f", -[date timeIntervalSinceNow]);
This logs 2616605071 (as of 4th September 2015 at 16:21) – when I checked it with a site like this one, it gave me the wrong date.
A: NSTimeInterval timeInterval = date.timeIntervalSinceReferenceDate;
NSDate *anotherDate = [NSDate dateWithTimeIntervalSinceReferenceDate: timeInterval];
Try the code below. date and anotherDate will be identical.
NSCalendar *calendar = [NSCalendar currentCalendar];
calendar.timeZone = [NSTimeZone timeZoneForSecondsFromGMT:0];
NSDateComponents *components = [[NSDateComponents alloc] init];
[components setDay:10];
[components setMonth:5];
[components setYear:1932];
NSDate *date = [calendar dateFromComponents:components];
NSLog(@"%@", date);
NSTimeInterval timeInterval = [date timeIntervalSinceReferenceDate];
NSDate *anotherDate = [NSDate dateWithTimeIntervalSinceReferenceDate:timeInterval];
NSLog(@"%@", anotherDate);
UPDATE:
It's incorrect because the timestamp (time interval) you get from that website is a UNIX timestamp. It's also incorrect because you use timeIntervalSinceNow, which will change every time you call the method, because it's relative to the current time. If you want a date/time interval that's compatible with that website, use:
NSTimeInterval timeInterval = date.timeIntervalSince1970;
NSDate *anotherDate = [NSDate dateWithTimeIntervalSince1970:timeInterval];
You can copy the timeInterval from the code above (-1188000000), paste it on the website, and it will give you the correct date.
Internally, NSDate stores a time interval relative to its reference date (Jan 1st, 2001). The website you mentioned uses UNIX timestamps, which are relative to Jan 1st, 1970.
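The gap between those two epochs is a fixed 978,307,200 seconds; here is a quick sketch in Python (not Objective-C — just to illustrate the arithmetic behind the two reference dates):

```python
from datetime import datetime, timezone

# NSDate's reference date (Jan 1, 2001 UTC) vs. the UNIX epoch (Jan 1, 1970 UTC):
# the difference between the two is a constant number of seconds.
unix_epoch = datetime(1970, 1, 1, tzinfo=timezone.utc)
ns_reference = datetime(2001, 1, 1, tzinfo=timezone.utc)
offset = int((ns_reference - unix_epoch).total_seconds())
print(offset)  # 978307200
```

So converting between the two conventions is just adding or subtracting this constant.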
A: This just worked!
NSString *strdate = @"10-05-1932";
NSDateFormatter *df = [[NSDateFormatter alloc] init];
[df setDateFormat:@"MM-dd-yyyy"];
NSDate *date = [df dateFromString:strdate];
NSLog(@"%@",date);
NSTimeInterval interval = [date timeIntervalSince1970];
NSLog(@"%f", interval);
NSDate *date2 = [NSDate dateWithTimeIntervalSince1970:interval];
NSLog(@"%@",date2);
Thanks to @sikhapol
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/32398053",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: How do I retrieve disk information in C#? I would like to access information on the logical drives on my computer using C#. How should I accomplish this? Thanks!
A: If you want to get information for a single/specific drive on your local machine, you can do it as follows using the DriveInfo class:
// A path on the C drive; useful when you need to find a drive root from a file path.
string path = "C:\\Windows";
// Find its root directory, i.e. "C:\\"
string rootDir = Directory.GetDirectoryRoot(path);
// Get all information for the drive, i.e. C
DriveInfo driveInfo = new DriveInfo(rootDir); // you can also pass a drive path here directly, e.g. DriveInfo("C:\\")
long availableFreeSpace = driveInfo.AvailableFreeSpace;
string driveFormat = driveInfo.DriveFormat;
string name = driveInfo.Name;
long totalSize = driveInfo.TotalSize;
A: For most information, you can use the DriveInfo class.
using System;
using System.IO;

class Info {
    public static void Main() {
        DriveInfo[] drives = DriveInfo.GetDrives();
        foreach (DriveInfo drive in drives) {
            // There are more attributes you can use.
            // Check the MSDN link for a complete example.
            Console.WriteLine(drive.Name);
            if (drive.IsReady) Console.WriteLine(drive.TotalSize);
        }
    }
}
A: What about mounted volumes, where you have no drive letter?
foreach( ManagementObject volume in
new ManagementObjectSearcher("Select * from Win32_Volume" ).Get())
{
if( volume["FreeSpace"] != null )
{
Console.WriteLine("{0} = {1} out of {2}",
volume["Name"],
ulong.Parse(volume["FreeSpace"].ToString()).ToString("#,##0"),
ulong.Parse(volume["Capacity"].ToString()).ToString("#,##0"));
}
}
A: Use the System.IO.DriveInfo class:
http://msdn.microsoft.com/en-us/library/system.io.driveinfo.aspx
A: Check the DriveInfo Class and see if it contains all the info that you need.
A: In ASP.NET Core 3.1, if you want code that works both on Windows and on Linux, you can get your drives as follows:
var drives = DriveInfo
.GetDrives()
.Where(d => d.DriveType == DriveType.Fixed)
.Where(d => d.IsReady)
.ToArray();
If you don't apply both Where filters, you are going to get many extra drives if you run the code on Linux (e.g. "/dev", "/sys", "/etc/hosts", etc.).
This is especially useful when developing an app to run in a Linux Docker container.
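(As a cross-language aside, Python's standard library exposes a similar per-volume query; this is a minimal sketch for comparison, not part of the C# answer:)

```python
import shutil

# Total/used/free bytes for the filesystem containing the given path,
# analogous to DriveInfo.TotalSize / AvailableFreeSpace.
usage = shutil.disk_usage("/")
print(usage.total, usage.used, usage.free)
```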
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/412632",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "50"
}
|
Q: Nesting flexbox layout causes scrollable area to misbehave I've read various questions about scrolling within flexbox but I seem to have a slightly more complicated version of what I've seen. I have an editor that uses flexbox and contains a toolbar with many components that is scrollable, and this works fine on its own. However, on the live site it's wrapped in a parent which is display:flex (for reasons I won't go into but this editor isn't the only component on the page), and as soon as the editor is put within this flex parent its scrollable area becomes full width, pushing the whole page wider than it should be.
In this snippet, all the 'Thumb Example' elements are supposed to scroll within their parent .image-thumbs-container, respecting the page's overall width: 800px limit, but instead they lay out inline and push their parent wider than that. However, if you turn off the display: flex on the .product-page element, the scrolling works. I built this simplified example of just the editor to demonstrate the problem, and it worked fine; it took me a while to realise it was a parent element that was causing the error.
[Edit] This snippet may not work as expected within StackOverflow, please see this identical Pen: https://codepen.io/neekfenwick/pen/NWbpqZg
body {
background-color: lightgrey;
}
.page-container {
width: 800px;
background-color: white;
}
.product-page {
display: flex; /* Disable me to make scrolling work */
flex-wrap: wrap;
}
.uploads-container {
text-align: left;
white-space: nowrap;
padding: 5px;
border: 1px solid black;
display: flex;
box-sizing: border-box;
}
.uploads-scroller {
overflow-x: scroll;
flex: 1 1 auto;
overflow-y: hidden;
}
.image-thumbs-container {
border: initial;
}
.image-thumb {
display: inline-block;
width: 100px;
border: solid 2px grey;
border-radius: 5px;
margin-right: 2px;
vertical-align: top;
}
<!DOCTYPE html>
<html lang="en">
<head>
<link rel="stylesheet" href="mock.css">
<title>Mockup</title>
</head>
<body>
<div class="page-container">
<div class="product-page">
<div class="unrelated-content">Page contains other content required to layout by flexbox.</div>
<div class="editor">
<div class="uploads-panel">
<div class="uploads-container">
<div class="uploads-file-container">File Upload<br>Widget Goes<br>Here</div>
<div class="uploads-scroller">
<div class="image-thumbs-container">
<div class="image-thumb">Thumb<br>Example</div>
<div class="image-thumb">Thumb<br>Example</div>
<div class="image-thumb">Thumb<br>Example</div>
<div class="image-thumb">Thumb<br>Example</div>
<div class="image-thumb">Thumb<br>Example</div>
<div class="image-thumb">Thumb<br>Example</div>
<div class="image-thumb">Thumb<br>Example</div>
<div class="image-thumb">Thumb<br>Example</div>
<div class="image-thumb">Thumb<br>Example</div>
<div class="image-thumb">Thumb<br>Example</div>
<div class="image-thumb">Thumb<br>Example</div>
<div class="image-thumb">Thumb<br>Example</div>
<div class="image-thumb">Thumb<br>Example</div>
</div>
</div>
</div>
</div>
<div class="workspace">
<h2>Some complicated workspace content goes here.</h2>
</div>
</div>
<div class="unrelated-content">Page contains other content required to layout by flexbox.</div>
</div>
</div>
</body>
</html>
So, please bear in mind that I don't want to alter anything above the .editor div; a lot of our site relies on the .product-page CSS. I can consider it, but is it possible to fix this scrolling problem by only modifying elements from the .editor div down?
A: You can fix this one by just setting 100% width on .editor.
Since the parent already has flex-wrap: wrap, this should work out just fine for you. The content below the editor will just wrap to below it.
.editor { /* <-- add this */
width: 100%;
}
body {
background-color: lightgrey;
}
.page-container {
width: 800px;
background-color: white;
}
.product-page {
display: flex; /* Disable me to make scrolling work */
flex-wrap: wrap;
}
.uploads-container {
text-align: left;
white-space: nowrap;
padding: 5px;
border: 1px solid black;
display: flex;
box-sizing: border-box;
}
.uploads-scroller {
overflow-x: scroll;
flex: 1 1 auto;
overflow-y: hidden;
}
.image-thumbs-container {
border: initial;
}
.image-thumb {
display: inline-block;
width: 100px;
border: solid 2px grey;
border-radius: 5px;
margin-right: 2px;
vertical-align: top;
}
<!DOCTYPE html>
<html lang="en">
<head>
<link rel="stylesheet" href="mock.css">
<title>Mockup</title>
</head>
<body>
<div class="page-container">
<div class="product-page">
<div class="unrelated-content">Page contains other content required to layout by flexbox.</div>
<div class="editor">
<div class="uploads-panel">
<div class="uploads-container">
<div class="uploads-file-container">File Upload<br>Widget Goes<br>Here</div>
<div class="uploads-scroller">
<div class="image-thumbs-container">
<div class="image-thumb">Thumb<br>Example</div>
<div class="image-thumb">Thumb<br>Example</div>
<div class="image-thumb">Thumb<br>Example</div>
<div class="image-thumb">Thumb<br>Example</div>
<div class="image-thumb">Thumb<br>Example</div>
<div class="image-thumb">Thumb<br>Example</div>
<div class="image-thumb">Thumb<br>Example</div>
<div class="image-thumb">Thumb<br>Example</div>
<div class="image-thumb">Thumb<br>Example</div>
<div class="image-thumb">Thumb<br>Example</div>
<div class="image-thumb">Thumb<br>Example</div>
<div class="image-thumb">Thumb<br>Example</div>
<div class="image-thumb">Thumb<br>Example</div>
</div>
</div>
</div>
</div>
<div class="workspace">
<h2>Some complicated workspace content goes here.</h2>
</div>
</div>
<div class="unrelated-content">Page contains other content required to layout by flexbox.</div>
</div>
</div>
</body>
</html>
A: Add width: 100% to your .editor element.
Or, just in case you really need to, add max-width: 800px to either of the following elements: .editor or .uploads-container
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/66202625",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: How can I DRY out this Ruby Code How can I DRY out the following Ruby Code:
x = 'a random string to be formated'
x = x.split('^')[0] if x.include?('^')
x = x.split('$')[0] if x.include?('$')
x = x.split('*')[0] if x.include?('*')
I'm looking for the amazingly elegant ruby one liner but I'm having a hard time finding it.
It should probably be somewhat readable though.
Thanks
A: I think this might be what you're looking for
x.split(/\^|\$|\*/)
A: This works for me:
x = "a random string to be formatted"
['^', '$', '*'].each { |token|
x = x.split(token)[0] if x.include?(token)
}
A: Based on the code you provided
x = 'a random string to be formated'
%w(^ $ *).each do |symbol|
x = x.split(symbol)[0] if x.include?(symbol)
end
A: You're looking for this regex:
string.match(/^([^^$*]+)[$^*]?/).captures[0]
It returns all characters up to the first occurrence of $, ^, or *, or to the end of the string.
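For comparison, the same "take everything before the first marker" behaviour can be sketched with Python's re module (a cross-language illustration, not part of the Ruby answers):

```python
import re

# Split on any of ^, $ or * (inside a character class only ^ needs escaping)
# and keep the first piece; strings without markers come back whole.
first = lambda s: re.split(r'[\^$*]', s)[0]
print(first('a random^string'))   # a random
print(first('no markers here'))   # no markers here
```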
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/15648460",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Is there a magic method that gets called when I write **object_identifier? Consider the use of the double-star notation in python:
instance = NotDictionary(name="", description="")
print( **instance )
Is there any way I can control the resulting value of the expression
**instance
In python?
What methods or what parent must my object have, in order to be treated as a 'expandable' dict in this instance?
For instance, in the same way I can control addition between two objects via the __add__ and __radd__ methods?
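For context: in a call, ** unpacking uses the mapping protocol — Python calls keys() and then __getitem__ for each key — so there is no single dunder analogous to __add__. (Note that print(**instance) itself would raise a TypeError, since print doesn't accept arbitrary keywords; the sketch below therefore passes the object to a function that does. Class and field names are illustrative.)

```python
class NotDictionary:
    """** unpacking treats instances as mappings via keys()/__getitem__."""
    def __init__(self, **fields):
        self._fields = fields

    def keys(self):
        return self._fields.keys()

    def __getitem__(self, key):
        return self._fields[key]

def show(**kwargs):
    return kwargs

instance = NotDictionary(name="a", description="b")
print(show(**instance))  # {'name': 'a', 'description': 'b'}
```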
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/32596270",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: error when defining function inside another function in fortran For some use I need to define a function inside another function inside a fortran module.
A sample code for easy comprehension is
module func
implicit none
contains
real function f(x,y)
real x,y,g
real function g(r)
real r
g=r
end function g
f=x*g(y)
end function f
end module func
use func
implicit none
write(*,*) f(1.0,1.0)
end
This gives lots of errors in gfortran, like "unexpected data declaration", "expected END FUNCTION f, not g", etc.
What is the correct way of defining a function inside another function in fortran?
A: You use an internal subprogram, see below. Note internal subprograms themselves can not contain internal subprograms.
ian@eris:~/work/stack$ cat contained.f90
Module func
Implicit None
Contains
Real Function f(x,y)
! Interface explicit so don't need to declare g
Real x,y
f=x*g(y)
Contains
Real Function g(r)
Real r
g=r
End Function g
End Function f
End Module func
Program testit
Use func
Implicit None
Write(*,*) f(1.0,1.0)
End Program testit
ian@eris:~/work/stack$ gfortran-8 -std=f2008 -Wall -Wextra -fcheck=all -O -g contained.f90
ian@eris:~/work/stack$ ./a.out
1.00000000
ian@eris:~/work/stack$
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/60995301",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: connecting Oracle 11g with Java 8 (Eclipse) I can't import java.sql.*; because it says that this package is not accessible, although I put ojdbc6 (and also tried ojdbc8) on the classpath, but it doesn't work.
[I mention that I have Oracle 11g, JDK 10 & Eclipse 4]
A: How did you add your ojdbc driver? A typical way is shown below:
First, right-click your project, choose "Build Path", and then "Configure Build Path".
Then, in the "Libraries" tab, click "Add External JARs" and browse to your ojdbc driver file to load it.
This is the typical way to load your ojdbc driver. Try this first and see whether it helps.
A: If your IDE (Eclipse) gives error:
The package java.sql is not accessible
Or if compiling using javac gives error:
test\Test.java:3: error: package java.sql is not visible
import java.sql.*;
^
(package java.sql is declared in module java.sql, but module Test does not read it)
1 error
Then it is because your Java 9+ project has a module-info.java file, i.e. your project is modular.
You need to add the following line to the module-info.java file:
requires java.sql;
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/61313068",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: UITableView Content Offset Increasing Unexpectedly I have a UITableView embedded in a navigation controller. When I segue from the table view to the next screen, I store the offset so that if I press the 'back' button, the offset on the original view remains the same and it doesn't scroll up to the top.
var tableViewContentOffset = CGPointMake(0.0, 0.0)

override func viewWillAppear(animated: Bool) {
    super.viewWillAppear(animated)
    tableView.contentOffset = tableViewContentOffset
}

override func viewWillDisappear(animated: Bool) {
    super.viewWillDisappear(animated)
    tableViewContentOffset = tableView.contentOffset
}
This works perfectly the first time the view loads; however, each time I move to the next screen and then return, the offset's y is increased by -64.0.
What is causing this?
First time loaded:
After Segueing and returning once:
A: You can set automaticallyAdjustsScrollViewInsets to false.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/34984425",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: nginx not serving correct react build content I have a React app. When I run my app via the dev server (localhost:3000), it is served correctly with the correct styling etc.
However, when I serve it via nginx (via Docker), I get a completely different UI. Does anyone know why this may be?
I don't think it is a Docker issue. I am using webpack, if that is something I may need to look at.
My nginx config:
server {
listen 80 default_server;
listen [::]:80 default_server;
root /usr/share/nginx/html;
location = / {
try_files $uri /index.html;
}
}
Dockerfile:
FROM nginx:1.16-alpine
COPY nginx/default.conf /etc/nginx/conf.d/default.conf
COPY public/ /usr/share/nginx/html/
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/74577930",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Azure Functions scaling and concurrency using Queue triggers and functionAppScaleLimit on the Consumption Plan I have an Azure Function app on the Linux Consumption Plan that has two queue triggers. Both queue triggers have the batchSize parameter set to 1 because they can both use about 500 MB of memory each and I don't want to exceed the 1.5GB memory limit, so they should only be allowed to pick up one message at a time.
If I want to allow both of these queue triggers to run concurrently, but don't want them to scale beyond that, is setting the functionAppScaleLimit to 2 enough to achieve that?
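As an aside on where these two settings live (a sketch based on the Azure Functions docs: batchSize belongs to the queue extension in host.json, while functionAppScaleLimit is a setting on the function app resource itself):

```json
{
  "version": "2.0",
  "extensions": {
    "queues": {
      "batchSize": 1
    }
  }
}
```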
Edit: added new examples, thank you @Hury Shen for providing the framework for these examples
Please see @Hury Shen's answer below for more details. I've tested three queue trigger scenarios. All use the following legend:
QueueTrigger with no functionAppScaleLimit
QueueTrigger with functionAppScaleLimit set to 2
QueueTrigger with functionAppScaleLimit set to 1
For now, I think I'm going to stick with the last example, but in the future I think I can safely set my functionAppScaleLimit to 2 or 3 if I upgrade to the premium plan. I also am going to test two queue triggers that listen to different storage queues with a functionAppScaleLimit of 2, but I suspect the safest thing for me to do is to create separate Azure Function apps for each queue trigger in that scenario.
Edit 2: add examples for two queue triggers within one function app
Here are the results when using two queue triggers within one Azure Function that are listening on two different storage queues. This is the legend for both queue triggers:
Both queue triggers running concurrently with functionAppScaleLimit set to 2
Both queue triggers running concurrently with functionAppScaleLimit set to 1
In the example where two queue triggers run concurrently with functionAppScaleLimit set to 2, it looks like the scale limit is not working. Can someone from Microsoft please explain? There is no warning in the official documentation (https://learn.microsoft.com/en-us/azure/azure-functions/functions-scale#limit-scale-out) that this setting is in preview, yet we can clearly see the Azure Function scaling out to 4 instances when the limit is set to 2. In the following example, the limit is respected, but the behaviour is not what I want, and we still see the waiting that is present in @Hury Shen's answer.
Conclusion
To limit concurrency and control scaling in Azure Functions with queue triggers, you must limit your Azure Function to use one queue trigger per function app and use the batchSize and functionAppScaleLimit settings. You will encounter race conditions and waiting that may lead to timeouts if you use more than one queue trigger.
A: Yes, you just need to set functionAppScaleLimit to 2, but there are some mechanisms of the consumption plan you need to know about. I tested it on my side with batchSize as 1 and functionAppScaleLimit set to 2 (I set WEBSITE_MAX_DYNAMIC_APPLICATION_SCALE_OUT to 2 in the "Application settings" of the function app instead of setting functionAppScaleLimit; they are the same). I tested with the code below:
import logging
import azure.functions as func
import time
def main(msg: func.QueueMessage) -> None:
    logging.info('=========sleep start')
    time.sleep(30)
    logging.info('=========sleep end')
    logging.info('Python queue trigger function processed a queue item: %s',
                 msg.get_body().decode('utf-8'))
Then I added messages to the queue. I sent 10 messages (111, 222, 333, 444, 555, 666, 777, 888, 999, 000), one by one. The function was triggered successfully, and after a few minutes we can see the logs in "Monitor". Clicking one of the log entries in "Monitor", the logs show as:
I drew 4 red boxes on the right of the screenshot above and named the four logs "s1" through "s4" (steps 1-4), then summarized the logs in Excel for your reference:
I made the cells from "s2" to "s4" yellow because this period of time corresponds to the execution time of the function task.
According to the Excel screenshot, we can infer the following points:
1. The maximum number of instances only ever reaches 2, because no line of the Excel table contains more than two yellow cells. So the function cannot scale beyond 2 instances, as you mentioned in your question.
2. You want both of these queue triggers to run concurrently; that can happen, but instances scale out by the consumption plan's own mechanism. In simple terms, when one function instance has been triggered by a message and hasn't completed its task, and another message comes in, there is no guarantee that a second instance is used. The second message might wait on the first instance. We cannot control whether another instance is enabled or not.
===============================Update==============================
As I'm not so clear about your description, I'm not sure how many storage queues you want to listen to and how many function apps and QueueTrigger functions you created on your side. I summarize my test results below for your reference:
1. For your question about whether the Maximum Burst you described on the premium plan would behave differently: I think if we choose the premium plan, the instances will also scale out with the same mechanism as the consumption plan.
2. If you have two storage queues to listen to, of course we should create two QueueTrigger functions, one listening to each storage queue.
3. If you just have one storage queue to listen to, I tested three cases (I set the max scale instances to 2 in all three cases):
A) Create one QueueTrigger function in one function app listening to one storage queue. This is what I tested in my original answer; the Excel table shows that the instances scale out by the consumption plan's mechanism and we cannot control it.
B) Create two QueueTrigger functions in one function app listening to the same storage queue. The result is almost the same as case A; we cannot control how many instances are used to deal with the messages.
C) Create two function apps, with a QueueTrigger function in each, listening to the same storage queue. The result is also similar to cases A and B; the difference is that the max instances can scale to 4 because I created two function apps (each of which can scale to 2 instances).
So in a word, I think all three cases are similar. Even if we choose case C and create two function apps with one QueueTrigger function in each of them, we still cannot make sure the second message is dealt with immediately; it may still be routed to the first instance and wait for the first instance to finish dealing with the first message.
So the answer to your current question in this post, "is setting the functionAppScaleLimit to 2 enough to achieve that?", is: if you want to guarantee both instances run concurrently, we can't make sure of it. If you just want at most two instances dealing with the messages, the answer is yes.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/64795612",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: Lambda referencing itself Suppose I have a global function pointer like so:
void (*Callback)(My_special_t*);
If I want to assign it a lambda I do so like so:
Callback = [](My_special_t* instance) {
//Useful stuff
};
What I really would like to do is something like this:
Callback = [](My_special_t* instance) {
//Useful stuff
Callback = /* Somehow get the current lambda? */
};
So my question is this:
Is it possible to reference a lambda object from inside itself... and if so, how?
A: I know lambdas are a very cool feature, and because of their coolness they get overused.
Forcing a lambda here is what creates the problem.
Just define a function, and the problem is resolved.
void myNiceFunction(My_special_t *instance) {
instance->doStuff();
… … …
if (instance->next) {
myNiceFunction(instance->next);
}
}
It is better since it is self-documenting (if a good name is provided) and it is testable (a test can reach this function directly).
A: You can do it with std::function, like this:
#include <functional>
#include <iostream>
using std::cout;
using std::function;
struct My_special_t
{
};
int main()
{
    function<void(My_special_t*)> Callback;

    auto otherCallback = [](My_special_t* instance)
    {
        cout << "otherCallback " << static_cast<void*>(instance) << "\n";
    };

    Callback = [&Callback, &otherCallback](My_special_t* instance)
    {
        cout << "first callback " << static_cast<void*>(instance) << "\n";
        Callback = otherCallback;
    };

    My_special_t special;
    Callback(&special);
    Callback(&special);
}
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/49833691",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: Installing JSON::XS on Mac OSX I hope you're all ready for a long adventure through my frustration:
I'm using Perl 5.10, on Mac OSX : Snow Leopard, with XCode 3.2.6.
What I am trying to do is use the module JSON::XS in a program.
This is the first time that I've ever done something with Perl, so I look up some simple example programs and try them out, and they all work. Then I go to use JSON::XS. I get:
Can't locate JSON/XS.pm in @INC (@INC contains: /Library/Perl/Updates/5.10.0 /System/Library/Perl/5.10.0/darwin-thread-multi-2level /System/Library/Perl/5.10.0 /Library/Perl/5.10.0/darwin-thread-multi-2level /Library/Perl/5.10.0 /Network/Library/Perl/5.10.0/darwin-thread-multi-2level /Network/Library/Perl/5.10.0 /Network/Library/Perl /System/Library/Perl/Extras/5.10.0/darwin-thread-multi-2level /System/Library/Perl/Extras/5.10.0 .) at part1.pl line 5.
BEGIN failed--compilation aborted at part1.pl line 5.
I do some Googling, and figure that it needs to be installed. Also in my Googling I find heavy recommendations to use cpanm. So I try installing cpanm first using cpan (which I already have).
I get a very long printout, that basically says NO, NOT OK, "chances to succeed are limited", and FAILED over and over and over again. It's too long to include in its entirety, but the bits I found interesting are:
Warning (usually harmless): 'YAML' not installed, will not store persistent state
Running make test
Can't test without successful make
Running make install
Make had returned bad status, install seems impossible
Could not read '/Users/danielgierl/.cpan/build/ExtUtils-ParseXS-3.15-VSmBrZ/META.yml'. Falling back to other methods to determine prerequisites
I look into installing YAML, because sections of output like the above (which I got repeatedly) make me think that's the problem. Essentially it also failed, and among other reasons it gave me warnings like the above.
Another section of the output says:
make: *** No rule to make target `/System/Library/Perl/5.10.0/darwin-thread-multi-2level/CORE/config.h', needed by `Makefile'. Stop.
I look this up online, and though I understand why it says what it does, I see no way that I could fix this, since these are Makefiles gotten from cpan or elsewhere.
Another section:
Running install for module 'version'
Running make for J/JP/JPEACOCK/version-0.9901.tar.gz
Has already been unwrapped into directory /Users/danielgierl/.cpan/build/version-0.9901-U8IMPt
Could not make: Unknown error
Warning (usually harmless): 'YAML' not installed, will not store persistent state
Running make test
Can't test without successful make
Running make install
Make had returned bad status, install seems impossible
In general, lots of unknown errors. I am also troubled by the section where it tells me:
Testing if you have a C compiler
ld: library not found for -lbundle1.o
collect2: ld returned 1 exit status
lipo: can't open input file: /var/folders/1C/1CnzBuv+F5y+8M5YPm6I4k+++TI/-Tmp-//ccTWRqov.out (No such file or directory)
error building /var/folders/1C/1CnzBuv+F5y+8M5YPm6I4k+++TI/-Tmp-/compilet.bundle from /var/folders/1C/1CnzBuv+F5y+8M5YPm6I4k+++TI/-Tmp-/compilet.o at /System/Library/Perl/5.10.0/ExtUtils/CBuilder/Base.pm line 213.
I cannot determine if you have a C compiler
so I will install a perl-only implementation
You can force installation of the XS version with
perl Makefile.PL --xs
I know for a fact that I have a C compiler, since gcc -v tells me that it is version 4.2.1. On the plus side, it does tell me that my kit looks good (whatever that is). If I try running the command they recommend, I get told that there is no such file or directory.
After some more Googling, I keep seeing XCode re-re-referenced for Mac users. I already had a version of XCode, but because it might have problems, I uninstall it, then get version 3.2.6 (my OS is too old for the newer versions). The install fails for unknown reasons, and it tells me to contact the software company. However, before it fails, it does install the UNIX tools, including gcc (which I later got the latest version for), so I don't think that's the problem.
All in all, this has consumed some 6 hours of my time today, and I'm stressed and physically ill (though I was this morning before starting this futile venture, but it's been exacerbated), and I have no idea where I am supposed to go next.
I'm hoping that it's all been some stupid error that'll be remedied in 10 seconds by an experienced user, but I fear the worst. In general I've done every bit of advice I've found, from running with --sudo to sacrificing a goat to the sun gods. I'd appreciate any help.
A: You are best off leaving the system's perl alone. Instead, use perlbrew to install your own perls.
You can install cpanm without using cpan using
curl -L http://cpanmin.us | perl - App::cpanminus
for your local perl.
In addition, you might have to install the command line tools for XCode.
If you can't, you may want to look into installing build tools from MacPorts and putting those in your path ahead of the system paths, but after your local perl because you don't necessarily want to mess with a perl that MacPorts might install either.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/12718104",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
}
|
Q: Does the Netsh.exe command overwrite existing firewall rules? I'm using a netsh command that adds a firewall rule on a Windows 2012 R2 server.
My command is like this:
Netsh.exe advfirewall firewall add rule name="name" protocol="TCP" localport=1234 dir=in enable=yes action=allow
What happens if I already have a rule with another name and the same localport and protocol?
Will my command overwrite that rule, or will it create another one on the same port?
I've tried to find documentation, but I've found nothing about this.
A: There is a PowerShell module called NetSecurity.
You can run a statement in PowerShell that tells you whether the rule already exists.
You can use the Get-NetFirewallRule cmdlet to discover which firewall rules are already defined.
https://learn.microsoft.com/en-us/powershell/module/netsecurity/?view=win10-ps
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/49729193",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: How to deploy a multi-page react application in IIS server? I have created a multi-page application using reactjs and have managed the routing using react-router-dom. I tested it using npm start and it is working fine. The pages are properly redirecting. But when I run "npm run build" and add it to the IIS server, only the home page is coming. No matter what link I click on, I am getting redirected to the home page.
I tried modifying the web.config file.
`
<?xml version="1.0"?>
<configuration>
<system.webServer>
<rewrite>
<rules>
<rule name="React Routes" stopProcessing="true">
<match url=".*" />
<conditions logicalGrouping="MatchAll">
<add input="{REQUEST_FILENAME}" matchType="IsFile" negate="true" />
<add input="{REQUEST_FILENAME}" matchType="IsDirectory" negate="true" />
<add input="{REQUEST_URI}" pattern="^/(api)" negate="true" />
</conditions>
<action type="Rewrite" url="/index.html" />
</rule>
</rules>
</rewrite>
</system.webServer>
</configuration>
`
But it seems the index.html is not updated with the rest of the pages other than the homepage. My user routes are `
<BrowserRouter>
<Routes>
<Route exact path="/" element={<HomepageAtm />} />
<Route
exact
path="/New-Registration"
element={<RegistrationAtm />}
/>
<Route exact path="/View" element={<VIEW_ATM />} />
<Route exact path="/List_View" element={<LIST_VIEW />} />
<Route exact path="/Airport_View" element={<AIRPORT_VIEW />} />
<Route exact path="/Map_View" element={<MAP_VIEW />} />
<Route exact path="/History_Data" element={<History_Data />} />
<Route exact path="/ContactUs" element={<Contact />} />
<Route exact path="/Terminalarea_View" element={<TERMINALAREA_VIEW />} />
</Routes>
</BrowserRouter>
`
I also tried using HashRouter but the results were the same. Except for the home page, no other pages are loading.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/74219446",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: How to get nearest location based on user location from array of lat, long: react, leaflet I have a react app that shows bike stations. I want to display the nearest stations on the dashboard. Is there any easier way to do that with React?
A: You can loop through all markers and find which has the shortest distance: map.distance(USER_LATLNG, BIKE_STATION_LATLNG)
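The approach in the answer is language-agnostic: compute each station's distance to the user and take the one with the minimum. Here is a minimal sketch in Python for illustration (the station list and coordinates are made up; in a Leaflet app you would use map.distance instead of the haversine helper):

```python
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two lat/lng points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371000 * asin(sqrt(a))

# Hypothetical station data; in the React app this would come from state/props
stations = [
    {"name": "Central", "lat": 60.1699, "lng": 24.9384},
    {"name": "Harbour", "lat": 60.1600, "lng": 24.9500},
]
user = (60.1650, 24.9400)

# The nearest station is simply the one minimizing the distance function
nearest = min(stations, key=lambda s: haversine_m(user[0], user[1], s["lat"], s["lng"]))
```

For a dashboard showing the N nearest stations, sort by the same key and take the first N entries.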
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/71310635",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-1"
}
|
Q: how to write to .txt file in UWP vb.net I am trying to have a .txt file containing a single line with a number on it. I need this txt file to be editable by the program and to be installed with it.
The .txt file will be completely overwritten when the number needs to be replaced.
How can I do this? All the other tutorials refer to C# or JS; I need VB.NET.
A: Read the Create, write, and read a file topic about interacting with files in UWP.
To write a string to a StorageFile, use the FileIO.WriteTextAsync or FileIO.AppendTextAsync functions:
Dim file As StorageFile = Await ApplicationData.Current.LocalFolder.CreateFileAsync("file.txt", CreationCollisionOption.ReplaceExisting)
Dim strData As String = "sample text to write"
Await FileIO.WriteTextAsync(file, strData)
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/41705000",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-1"
}
|
Q: How to get distinct dates (yyyy-mm-dd) using AutoFixture I have a test case where I need 4 distinct dates to build my objects.
Everything I found seems to say that AutoFixture always generates unique elements, but the thing is, when it generates dates, it does so considering everything down to ticks. The result is that when I call .ToShortDateString() on the result, I may end up with duplicated results.
I know I could loop until I get only distinct values but it doesn't feel right.
For now, what I have is:
string[] dates;
do
{
dates = _fixture.CreateMany<DateTime>(4).Select(d => d.ToShortDateString()).ToArray();
} while (dates.Distinct().Count() != 4);
A: As mentioned by @MarkSeeman in this post about numbers
Currently, AutoFixture endeavours to create unique numbers, but it doesn't guarantee it. For instance, you can exhaust the range, which is most likely to happen for byte values [...]
If it's important for a test case that numbers are unique, I would recommend making this explicit in the test case itself. You can combine Generator with Distinct for this
So for this specific situation, I now use
string[] dates = new Generator<DateTime>(_fixture)
.Select(x => x.ToShortDateString())
.Distinct()
.Take(4).ToArray();
A: You can generate unique integers (let's say, days) and then add them to some minimum date:
var minDate = _fixture.Create<DateTime>().Date;
var dates = _fixture.CreateMany<int>(4).Select(i => minDate.AddDays(i)).ToArray();
But I'm not sure that AutoFixture guarantees that all generated values will be unique (see this issue for example)
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/58773017",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: Avoiding clashing of events by comparing their time and venues This might be easy for some of you, but for me it's a big hitch:
What I want to do is make sure two events don't clash by comparing their dates and venues.
Here is what I have done so far:
<?php
require_once (LIB_PATH.DS.'database.php');
require_once ('../includes/initialize.php');
class Event extends DatabaseObject {
protected static $table_name="event_tbl";
protected static $db_fields = array('id', 'visible', 'event_title', 'start_date', 'end_date', 'start_time', 'end_time', 'venue', 'event_type', 'event_description', 'event_program', 'reminder_date', 'reminder_time');
public $id;
public $event_title;
public $start_date;
public $end_date;
public $start_time;
public $end_time;
public $venue;
public $event_type;
public $event_description;
public $event_program;
public $visible;
public $reminder_date;
public $reminder_time;
public function avoid_clash($time1, $time2, $database_venue, $time3, $time4, $venues) {
$timeStart = strtotime("{$time1}");
$timeEnd = strtotime("{$time2}");
$time = strtotime("{$time3}") - strtotime("{$time4}");
if ($result =($time > $timeStart && $time < $timeEnd) && $database_venue==$venues) {
$session->message('Warning: This event clashes with another!');
redirect_to('event_management.php');
}
}
}
/*function process_avoid_clash($time1, $time2, $time3, $time4, $venues) {
$event = self::find_from_event();
$row = fetch_array($event);
foreach ($row as $rows) {
$time1 = $rows['start_date']." ".$rows['start_time'];
$time2 = $rows['end_date']." ".$rows['end_time'];
$database_venue = $rows['venue'];
if ($time1 && $time2 && $database_venue) {
avoid_clash($time1, $time2, $time3, $time4, $venues);
} else {
die("find_from_event failed!");
}
}
}*/
?>
Then this is how I test it, using a form which, when submitted, will alert the user if two events clash:
<?php
require_once ('../includes/initialize.php');
if(!$session->is_logged_in()) {
redirect_to("login.php");
}
?>
<?php
if (isset($_POST['submit'])) {
$event = new Event();
$even = Event::find_public();
$time3 = $_POST['start_date']." ".$_POST['start_time'];
$time4 = $_POST['end_date']." ".$_POST['end_time'];
$venues = $_POST['venue'];
foreach($even as $events):
$time1 = $events->start_date." ".$events->start_time;
$time2 = $events->end_date." ".$events->end_time;
$database_venue = $events->venue;
if ($time1 && $time2 && $database_venue) {
$event->avoid_clash($time1, $time2, $database_venue, $time3, $time4, $venues);
}
endforeach;
$event-> event_title = $_POST['event_title'];
$event-> start_date = $_POST['start_date'];
$event-> end_date = $_POST['end_date'];
$event-> start_time = $_POST['start_time'];
$event-> end_time = $_POST['end_time'];
$event-> venue = $_POST['venue'];
$event-> event_type = $_POST['event_type'];
$event-> event_description = $_POST['event_description'];
$event-> event_program = $_POST['event_program'];
$event-> visible = $_POST['visible'];
$event-> reminder_date = $_POST['reminder_date'];
$event-> reminder_time = $_POST['reminder_time'];
$event->create();
$session->message('Event successfully created!');
redirect_to('event_management.php');
}
?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=windows-1252" />
<title>New Event</title>
<link rel="stylesheet" type="text/css" href="../anytime/anytime.css"/>
<script src="../anytime/jquery.min.js"></script>
<script src="../anytime/jquery-migrate-1.0.0.js"></script>
<script src="../anytime/anytime.js"></script>
<link href="../assets/css/test_project.css" rel="stylesheet" type="text/css" />
<link href="../assets/css/links.css" rel="stylesheet" type="text/css" media="screen" />
</head>
<body>
<div id="container">
<div id="header">
<div class="logo"></div>
<div class="menu">
<div id="navcontainer">
<ul id="navlist">
</ul>
</div>
<br class="clear"/>
</div>
<br class="clear"/>
</div>
<div id="wrapper">
<div id="sidebar1">
<p><?php echo output_message($message); ?></p>
<form id="form1" name="form1" method="post" action="new_event.php">
<table width="578" height="322" border="0" cellpadding="0">
<tr>
<td colspan="3" align="right" valign="top">Event Title:</td>
<td colspan="5" align="left" valign="top"><input type="text" name="event_title" /></td>
</tr>
<tr>
<td width="3" align="center" valign="top"> </td>
<td width="89" align="center" valign="top"><input name="start_date" type="text" size="14" id="start_date" value="START DATE"/></td>
<td width="60" align="center" valign="top"><input name="start_time" type="text" size="8" id="start_time" value="TIME"/></td>
<script>
AnyTime.picker("start_date",
{format: "%Y-%c-%e"});
$("#start_time").AnyTime_picker(
{ format: "%H:%i", LabelTitle: "Time",
labelHour: "Hour", labelMinute: "Minute" });
</script>
<td width="54" align="center" valign="top"><strong>TO</strong></td>
<td width="71" align="center" valign="top"><input name="end_date" type="text" size="14" id="end_date" value="END DATE"/></td>
<td width="74" align="center" valign="top"><input name="end_time" type="text" size="8" id="end_time" value="TIME"/></td>
<script>
AnyTime.picker("end_date",
{format: "%Y-%c-%e"});
$("#end_time").AnyTime_picker(
{ format: "%H:%i", LabelTitle: "Time",
labelHour: "Hour", labelMinute: "Minute" });
</script>
<td width="67" align="center" valign="top"> </td>
<td width="142" align="center" valign="top"> </td>
</tr>
<tr>
<td colspan="3" align="right" valign="top">Venue:</td>
<td colspan="5" align="left" valign="top"><input type="text" name="venue" /></td>
</tr>
<tr>
<td colspan="3" align="right" valign="top">Event Type:</td>
<td colspan="5" align="left" valign="top"><input type="text" name="event_type" /></td>
</tr>
<tr>
<td height="59" colspan="3" align="right" valign="top">Event Description:</td>
<td colspan="5" align="left" valign="top"><textarea name="event_description"></textarea></td>
</tr>
<tr>
<td height="51" colspan="3" align="right" valign="top">Event Program: </td>
<td colspan="5" align="left" valign="top"><textarea name="event_program"></textarea></td>
</tr>
<tr>
<td height="24" colspan="3" align="right" valign="top">Visible:</td>
<td colspan="5" align="left" valign="top"><input name="visible" type="radio" value="0" />
No
<input name="visible" type="radio" value="1" />
Yes</td>
</tr>
<tr>
<td height="40" colspan="3" align="right" valign="top">Reminder:</td>
<td align="left" valign="top">
<input name="reminder_date" type="text" size="10" id="reminder_date" /> </td>
<td align="left" valign="top">
<input name="reminder_time" type="text" size="10" id="reminder_time"/></td>
<script>
AnyTime.picker("reminder_date",
{format: "%Y-%c-%e"});
$("#reminder_time").AnyTime_picker(
{ format: "%H:%i", LabelTitle: "Time",
labelHour: "Hour", labelMinute: "Minute" });
</script>
<td align="left" valign="top"> </td>
<td align="left" valign="top"> </td>
</tr>
<tr>
<td height="40" colspan="3" align="right" valign="top"><input type="submit" name="submit" value="Create Event" /></td>
<td height="40" align="left" valign="top"> <input name="New" type="button" id="New" value="Cancel" onclick="location.href='event_management.php'"/></td>
</tr>
</table>
</form>
</div>
<?php include_layout_template('footer.php'); ?>
Any information or suggestion is welcomed.
A: My suggestion would be to push this problem down to your database. PostgreSQL, for example, has great support for this kind of problem as documented in this blog post:
On 7th of December, Tom Lane committed patch by Jeff Davis that adds general exclusion constraints:
Log Message:
Add exclusion constraints, which generalize the concept of uniqueness to
support any indexable commutative operator, not just equality. Two rows
violate the exclusion constraint if "row1.col OP row2.col" is TRUE for
each of the columns in the constraint.... This could be used, for example, for making sure that a given room is only reserved by one user at any given point in time.
as well as a handy "Temporal" extension (see here for a slide-deck from PgCon 2012 discussing some of its features).
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/19723588",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Using dplyr to summarise a running total of distinct factors I'm trying to generate a species saturation curve for a camera trapping survey. I have thousands of observations and do most of my manipulations in dplyr.
I have three field sites, with observation records of different animal species from a number of weeks of trapping. In some weeks there are no animals; in other weeks there may be more than one species. I want to generate a separate figure for each site to compare how quickly new species are encountered over the sequential weeks of the study. These observations of new species should eventually saturate once the total species diversity has been captured in the area. Some field sites are likely to saturate faster than others.
The problem is that I have not come across a way of counting the number of distinct species to provide a running total by time. A simple dummy dataset is below.
field_site<-c(rep("A",4),rep("B",4),rep("C",4))
week<-c(1,2,2,3,2,3,4,4,1,2,3,4)
animal<-c("dog","dog","cat","rabbit","dog","dog","dog","rabbit","cat","cat","rabbit","dog")
df<-as.data.frame(cbind(field_site,week,animal),head=TRUE)
I can easily generate the number of unique species within each week grouping, e.g.
tbl_df(df)%>%
group_by(field_site,week) %>%
summarise(no_of_sp=n_distinct(animal))
But this is not sensitive to the fact that some species are encountered again in subsequent weeks. What I really need is a running count of the different species that counts the unique species per site from week 1 going down through the rows, assuming that the data is sorted by increasing time from the start of the survey.
The cumulative total of species encountered over the course of the study by week in the example for field Site A would be: week 1 = 1 species, week 2 = 2 species, week 3 = 3 species, week 4 = still 3 species.
For site B cumulative total of species would be: week 1 = 0 species, week 2 = 1 species, week 3 = 1 species,week 4 = 1 species, etc...
Any advice would be greatly appreciated.
cheers in advance!
A: I'm making two assumptions:
*
*Site B, week 4 = 2 species, both "dog" and "rabbit"; and
*All sites share the same weeks, so if at least on site has week 4, then all sites should include it. This only drives the mt (empty) variable, feel free to update this variable.
I first suggest an "empty" data.frame to ensure sites have the requisite week numbers populated:
mt <- expand.grid(field_site = unique(df$field_site),
week = unique(df$week))
The use of tidyr helps:
library(tidyr)
df %>%
mutate(fake = TRUE) %>%
# ensure all species are "represented" on each row
spread(animal, fake) %>%
# ensure all weeks are shown, even if no species
full_join(mt, by = c("field_site", "week")) %>%
# ensure the presence of a species persists at a site
arrange(week) %>%
group_by(field_site) %>%
mutate_if(is.logical, funs(cummax(!is.na(.)))) %>%
ungroup() %>%
# helps to contain variable number of species columns in one place
nest(-field_site, -week, .key = "species") %>%
group_by(field_site, week) %>%
# could also use purrr::map in place of sapply
mutate(n = sapply(species, sum)) %>%
ungroup() %>%
select(-species) %>%
arrange(field_site, week)
# # A tibble: 12 × 3
# field_site week n
# <fctr> <fctr> <int>
# 1 A 1 1
# 2 A 2 2
# 3 A 3 3
# 4 A 4 3
# 5 B 1 0
# 6 B 2 1
# 7 B 3 1
# 8 B 4 2
# 9 C 1 1
# 10 C 2 1
# 11 C 3 2
# 12 C 4 3
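For readers who use pandas rather than dplyr, the same cumulative distinct count can be sketched in Python (using the question's dummy data; this is an illustration of the logic, not the original R solution):

```python
import pandas as pd

# The question's dummy data
df = pd.DataFrame({
    "field_site": ["A"] * 4 + ["B"] * 4 + ["C"] * 4,
    "week": [1, 2, 2, 3, 2, 3, 4, 4, 1, 2, 3, 4],
    "animal": ["dog", "dog", "cat", "rabbit", "dog", "dog",
               "dog", "rabbit", "cat", "cat", "rabbit", "dog"],
})

# For each (site, week) pair, count the distinct species seen up to
# and including that week; every site gets a row for every week
weeks = sorted(df["week"].unique())
rows = [
    {"field_site": site, "week": w,
     "n": grp.loc[grp["week"] <= w, "animal"].nunique()}
    for site, grp in df.groupby("field_site")
    for w in weeks
]
running = pd.DataFrame(rows)
```

Site A saturates at 3 species by week 3, site B only reaches 2 by week 4, matching the per-site running totals described in the question.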
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/42778960",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Extract PNG from a PDF file using PyPDF2 and Pillow: not enough image data Using the code I found in this post, which corrects this example code, I'm trying to extract images on all pages of a PDF file. Now I'm getting an error for PNG images (works for JPG) at the second line of this piece (Image.frombytes):
if xObject[obj]['/Filter'] == '/FlateDecode':
img = Image.frombytes(mode, size, data)
img.save(imagename + ".png")
number += 1
This yields ValueError: not enough image data, which seems to occur because data cannot be correctly decoded.
A: The code is incorrect because PDF files do not embed full PNG images (as opposed to JPEG). An image with the FlateDecode filter contains only raw image data that has been compressed with the Flate (zlib/deflate) method.
You have to decompress the data to get the raw image data, convert it to RGB (based on the colorspace defined on the PDF image object), and then, using the other properties defined on the PDF image object (Width, Height, etc.), you can construct a PNG image.
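A minimal sketch of that reconstruction, assuming the stream bytes are plain Flate-compressed RGB pixel data with no predictor (real PDFs may also use /DecodeParms predictors, indexed palettes, or CMYK, which need extra handling). The width, height, and mode values stand in for the /Width, /Height, and /ColorSpace entries of the image object:

```python
import io
import zlib
from PIL import Image

# Stand-ins for values read from the PDF image dictionary
width, height, mode = 4, 4, "RGB"

# Fake what a /FlateDecode stream holds: zlib-compressed raw pixel bytes
raw = bytes(range(width * height * 3))
stream_data = zlib.compress(raw)

pixels = zlib.decompress(stream_data)             # undo the Flate filter
img = Image.frombytes(mode, (width, height), pixels)

buf = io.BytesIO()
img.save(buf, format="PNG")                       # Pillow writes a real PNG
```

If Image.frombytes still raises "not enough image data" after decompression, the decompressed length does not match width * height * channels, which usually points to a predictor or a different colorspace than assumed.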
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/71738571",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: cannot read property of undefined when trying to read from array object I am writing a jasmine test in angular 8 and getting the error "cannot read property of undefined". It's trying to read a property of an array object. I have tried initialising it, but that doesn't seem to work.
I am trying to initialise component.myData
Test
fit('should update when acceptSection is called ', () => {
let updateSpy: jasmine.Spy;
setupComponent();
component.myData = [] = ['reviewWindowExpiry'];
updateSpy = spyOn(component, 'update').withArgs(3).and.returnValue(true);
component.acceptSection(true);
expect(updateSpy).toHaveBeenCalled();
});
Component
myData: any;
getNextSectionContent(contentIndex: number) {
// Get scroll height of agreement
this.agreementScrollHeight = this.scroll.nativeElement.scrollHeight;
// Calculate remaining days left on next agreement
this.hoursUntil = null; // reset this value
this.daysUntil = differenceInDays(this.myData[contentIndex].reviewWindowExpiry, this.todaysDate);
if (this.daysUntil < 1) {
this.hoursUntil = differenceInHours(this.myData[contentIndex].reviewWindowExpiry, this.todaysDate);
this.daysUntil = null;
}
if (this.myData[contentIndex] !== undefined) {
this.agreementData = this.myData[contentIndex].data;
this.scroll.nativeElement.scrollTop -= this.agreementScrollHeight;
} else {
this.agreementData = 'NO MORE AGREEMENTS!!';
}
}
A: You are passing .withArgs(3) to the update function.
Does that set or influence the value of contentIndex? (e.g. maybe it loops the array 3 times).
It looks like an array entry is being retrieved for an index that is not in the array, as you only add one item to it.
Does this give you a value?
//contentIndex value is 0
this.myData[0].reviewWindowExpiry
Also, I think Date.UTC would return a function; try
component.myData = [{'reviewWindowExpiry': Date.now()}];
To get the numeric millisecond value.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/60525480",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: How to change literals of type xsd:date to xsd:dateTime using Python? I have a graph which uses Literals of datatype xsd:date to save dates. However I want to use an .owl version of that graph in a reasoner, and the reasoner only accepts the xsd:dateTime format. Is there any way to change the datatype of my date literals?
I was thinking of using rdflib to get all the date nodes of my graph as such:
for birthday in g.objects(None,URIRef(ns+'hasBirthday')):
and then converting the birthday to xsd:dateTime somehow. But I can't figure out how to do the conversion.
If there was a way to do this in the ontology file, it would also help.
A: You could create a new xsd:dateTime literal based on the original xsd:date literal.
Here is an example if you want to replace the original triples in the graph with new triples with the converted literal as the object:
from rdflib import Literal, URIRef
from datetime import datetime
for s, p, o in g.triples((None, URIRef('hasBirthday', base=ns), None)):
d = o.toPython()
g.add((s, p, Literal(datetime(d.year, d.month, d.day))))
g.remove((s, p, o))
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/72411111",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: How to change StatusBar font color in wxWidgets? I want to display an error message in red on the status bar if a user action results in an error. I have tried setting the foreground color to red, but it still displays the message in the default black font. How do I make the font color red on the status bar? I'm using wxWidgets 2.8 on Red Hat 5.5.
Thanks!
A: Found the answer on the wxWidgets forum:
this->StatusBar->SetForegroundColour(wxColour(wxT("RED")));
wxStaticText* txt = new wxStaticText( this->StatusBar, wxID_ANY,wxT("Validation failed"), wxPoint(10, 5), wxDefaultSize, 0 );
txt->Show(true);
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/3773281",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Is it possible to develop Windows Phone 8 applications using the WP8 SDK on Windows 7 Is there any workaround so developers who do not have Windows 8 can develop for Windows Phone using Windows Phone SDK 8.0?
A: Unfortunately, it is currently not possible to develop for Windows Phone 8 on Windows 7. In the system requirements of Windows Phone 8 SDK it states you need to have Windows 8 to develop Windows Phone 8 apps.
Directly from MSDN:
Windows Phone SDK 8.0 requires 64-bit Windows 8 Pro or higher. You can't develop Windows Phone 8 apps on Windows 7, on Windows Server 2008, or on Windows Server 2012. The Windows Phone 8 Emulator has special hardware, software, and configuration requirements. For more info, see System requirements for Windows Phone Emulator.
This is mainly because of the Hyper-V emulator that is in Windows 8. If you buy Windows 8, be sure to get the Pro (64-bit) version, because the normal version has no Hyper-V. Your BIOS also has to support virtualization to run the emulator in Hyper-V.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/15292172",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: Pandas: Re-indexing after picking those that meet a condition I'm trying to learn more about re-indexing.
For background context, I have a data frame called sleep_cycle.
In this data frame, the columns are: Name, Age, Average sleep time.
I want to pick out only those whose names begin with the letter 'B'.
I then want to re-index these 'B' people, so that I have a new data frame that has the same columns but only contains those whose names begin with B.
Here was my attempt to do it:
info = list(sleep_cycle.columns) #this is just to set a list of the existing columns
b_names = [name for name in sleep_cycle['name'] if name[0] == 'B']
b_sleep_cycle = sleep_cycle.reindex(b_names, columns = info) #re-index with the 'B' people, and set columns to the list I saved earlier.
Results: Re-indexing was successful; it managed to pick only those beginning with the letter 'B', and the columns remained the same. Great! The problem was: all the data had been replaced with NaN.
Can someone help me with this one? What am I doing wrong? It would be best appreciated if you could suggest a solution that is only in one line of code.
A: Based on your description (example data and expected output would be better), this would work:
sleep_cycle[sleep_cycle['name'].str.startswith('B')]
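A runnable sketch of that one-liner with made-up data (the column name 'name' is taken from the question; the rows are hypothetical). Adding reset_index(drop=True) afterwards gives the filtered frame a fresh 0-based index, which seems to be what the question means by re-indexing:

```python
import pandas as pd

# Hypothetical stand-in for the sleep_cycle frame
sleep_cycle = pd.DataFrame({
    "name": ["Ben", "Alice", "Bob", "Cara"],
    "age": [30, 25, 41, 19],
    "avg_sleep": [7.5, 6.0, 8.2, 9.1],
})

# Boolean mask keeps rows whose name starts with 'B', all columns intact;
# reset_index(drop=True) renumbers the surviving rows 0..n-1
b_people = sleep_cycle[sleep_cycle["name"].str.startswith("B")].reset_index(drop=True)
```

Note that str.startswith is called with parentheses; indexing it with square brackets raises a TypeError.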
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/70409701",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: setsockopt() equivalent for non-socket file descriptors? Is anyone aware of an equivalent to setsockopt() that works on non-socket based file descriptors?
Specifically, consider this block of code:
int on = 1;
setsockopt(socketfd, SOL_SOCKET, SO_NOSIGPIPE, &on, sizeof(int));
All fine and dandy, and now we can avoid SIGPIPE and refer to EPIPE when writing instead. But this only works on socket file descriptors opened with accept(), socket(), etc.
I'm trying to gain similar functionality for a file descriptor opened by a pipe() call, which setsockopt() promptly rejects as being a non-socket file descriptor.
Is there an equivalent to the above (setsockopt()) for descriptors opened by pipe() or open()?
A: There is no equivalent, but you could use socketpair to create a Unix socket instead.
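For illustration, here is the same substitution sketched in Python, where socket.socketpair is available on POSIX systems. SO_NOSIGPIPE itself exists only on BSD/macOS, so a portable option (SO_SNDBUF) is used to show that socket options apply to socketpair descriptors but not to pipe descriptors:

```python
import os
import socket

# socketpair() yields two connected sockets usable like the two ends of a pipe
a, b = socket.socketpair()
a.sendall(b"hello")
data = b.recv(5)

# Because they are real sockets, setsockopt works on them...
a.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 65536)

# ...whereas a pipe() descriptor has no socket layer to configure,
# so there is nothing analogous to setsockopt for it
r, w = os.pipe()
os.write(w, b"hi")
msg = os.read(r, 2)
```

On Linux, writing to a closed socketpair peer can also be done with send(..., MSG_NOSIGNAL) to get EPIPE instead of SIGPIPE, which is the usual portable alternative to SO_NOSIGPIPE.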
A: If your ultimate goal is to avoid signals altogether, you can use init and cleanup functions as described here
Also see good example here:
Automatically executed functions when loading shared libraries
Just write an init function that handles the signals you wish to avoid and every program that loads your library will automatically have your handlers
I must say this is a strange necessity, though. Can you be more specific about the actual problem you're trying to solve?
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/38289467",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: deprecation errors when running rspec tests after updating to rails 5.0.0 I just upgraded an application from rails 4 to rails 5.0.0, however now when running my rspec tests I'm getting these deprecation warnings (they don't show up when running my server):
DEPRECATION WARNING: alias_method_chain is deprecated. Please, use Module#prepend instead. From module, you can access the original method using super. (called from <top (required)> at /Users/Documents/app/config/environment.rb:5)
DEPRECATION WARNING: alias_method_chain is deprecated. Please, use Module#prepend instead. From module, you can access the original method using super. (called from <top (required)> at /Users/Documents/app/config/environment.rb:5)
DEPRECATION WARNING: after_filter is deprecated and will be removed in Rails 5.1. Use after_action instead. (called from <top (required)> at /Users/Documents/app/config/environment.rb:5)
The alias_method_chain warning does show up twice. I'm not using either alias_method_chain or after_filter anywhere in my code.
The gems I'm using for the test environment:
group :development do
gem 'dotenv-rails', '2.1.1'
gem 'byebug', '9.0.5'
gem 'bullet', '5.2.0'
gem 'bundler-audit', '0.5.0'
gem 'spring', '1.7.2'
gem 'web-console', '3.3.1'
gem 'guard-rspec', '4.7.3'
end
group :test do
gem 'capybara', '~> 2.1'
gem 'poltergeist', '1.10.0'
gem 'formulaic', '0.3.0'
gem 'rspec-rails', '3.5.1'
gem 'rspec-mocks', '3.5.0'
gem 'shoulda-matchers', '3.1.1'
gem 'timecop-console', '0.1.0'
gem 'database_cleaner', '1.5.3'
gem 'simplecov', '0.12.0'
gem 'rails-controller-testing', '0.1.1'
end
group :development, :test do
gem 'pry-rails', '0.3.4'
gem 'factory_girl', '4.7.0'
gem 'faker', '1.6.6'
gem 'jasmine', '2.4.0'
gem 'jasmine-ajax', '0.0.2'
end
I'm not pinning versions in my Gemfile; I just listed the current versions I'm using here. Any clues as to what's causing the deprecation warnings?
EDIT:
I found where the warnings are coming from, I'm using wicked_pdf, here's the issue
It seems it's been fixed, but I'm getting the deprecation warnings still even though I'm using the last version
A: The warnings were caused by the wicked_pdf gem, updating to version 1.1.0 solved the issue
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/38922455",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: Why cannot marshal struct with auto layout I encountered an odd behaviour when marshalling a struct with auto layout kind.
For example: let's take a simple code:
[StructLayout(LayoutKind.Auto)]
public struct StructAutoLayout
{
byte B1;
long Long1;
byte B2;
long Long2;
byte B3;
}
public static void Main()
{
Console.WriteLine("Sizeof struct is {0}", Marshal.SizeOf<StructAutoLayout>());
}
it throws an exception:
Unhandled Exception: System.ArgumentException: Type
'StructAutoLayout' cannot be marshaled as an unmanaged
structure; no meaningful size or offset can be computed.
So it means that compiler doesn't know struct size at compile time? I was sure that this attribute reorders struct fields and then compiles it, but it doesn't.
A: It doesn't make any sense. Marshalling is used for interop - and when doing interop, the two sides have to agree exactly on the structure of the struct.
When you use auto layout, you defer the decision about the structure layout to the compiler. Even different versions of the same compiler can result in different layouts - that's a problem. For example, one compiler might use this:
public struct StructAutoLayout
{
byte B1;
long Long1;
byte B2;
long Long2;
byte B3;
}
while another might do something like this:
public struct StructAutoLayout
{
byte B1;
byte B2;
byte B3;
byte _padding;
long Long1;
long Long2;
}
When dealing with native/unmanaged code, there's pretty much no meta-data involved - just pointers and values. The other side has no way of knowing how the structure is actually laid out, it expects a fixed layout you both agreed upon in advance.
.NET has a tendency to make you spoiled - almost everything just works. This is not the case when interoping with something like C++ - if you just guess your way around, you'll most likely end up with a solution that usually works, but once in a while crashes your whole application. When doing anything with unmanaged / native code, make sure you understand perfectly what you're doing - unmanaged interop is just fragile that way.
Now, the Marshal class is designed specifically for unmanaged interop. If you read the documentation for Marshal.SizeOf, it specifically says
Returns the size of an unmanaged type in bytes.
And of course,
You can use this method when you do not have a structure. The layout must be sequential or explicit.
The size returned is the size of the unmanaged type. The unmanaged and managed sizes of an object can differ. For character types, the size is affected by the CharSet value applied to that class.
If the type can't possibly be marshalled, what should Marshal.SizeOf return? That doesn't even make sense :)
Asking for the size of a type or an instance doesn't make any sense in a managed environment. "Real size in memory" is an implementation detail as far as you are concerned - it's not a part of the contract, and it's not something to rely on. If the runtime / compiler wanted, it could make every byte 77 bytes long, and it wouldn't break any contract whatsoever as long as it only stores values from 0 to 255 exactly.
If you used a struct with an explicit (or sequential) layout instead, you would have a definite contract for how the unmanaged type is laid out, and Marshal.SizeOf would work. However, even then, it will only return the size of the unmanaged type, not of the managed one - that can still differ. And again, both can be different on different systems (for example, IntPtr will be four bytes on a 32-bit system and eight bytes on a 64-bit system when running as a 64-bit application).
Another important point is that there are multiple levels of "compilation" in a .NET application. The first level, using a C# compiler, is only the tip of the iceberg - and it's not the part that handles reordering fields in auto-layout structs. It simply marks the struct as "auto layout", and it's done. The actual layout is decided when you run the application, by the CLI (the specification is not clear on whether the JIT compiler handles that, but I would assume so). But that has nothing to do with Marshal.SizeOf or even sizeof - both of those are still handled at runtime. Forget everything you know from C++ - C# (and even C++/CLI) is an entirely different beast.
If you need to profile managed memory, use a memory profiler (like CLRProfiler). But do understand that you're still profiling memory in a very specific environment - different systems or .NET versions can give you different results. And in fact, nothing says that two instances of the same structure must be the same size.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/31857624",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Unable to cast object of type 'System.DBNull' to type 'System.String'. I have a problem at hand here that I can't seem to figure out. I have a Button which fires an OnClick event. In this event I'm calling a stored procedure on the SQL server. Before creating a new record I have a check in place to see if the record already exists. If it does, I output an error. This part works fine, but when the record doesn't exist I get this error in ASP.NET:
"Unable to cast object of type 'System.DBNull' to type 'System.String'."
Any help would be really appreciated
protected void createloctype_Click(object sender, EventArgs e)
{
con.Open();
cmd = new SqlCommand("sp_CREATE_LOCATION_TYPE", con);
cmd.CommandType = CommandType.StoredProcedure;
cmd.Parameters.Add("@lt_code", SqlDbType.VarChar, 2).Value = loctype.Text;
cmd.Parameters.Add("@lt_description", SqlDbType.VarChar, 50).Value = loctypedescription.Text;
cmd.Parameters.Add("@lt_putaway_zone", SqlDbType.VarChar, 5).Value = putaway_zone_txt.Text;
cmd.Parameters.Add("@lt_rule1", SqlDbType.VarChar, 5).Value = rule1_txt.Text;
cmd.Parameters.Add("@lt_rule2", SqlDbType.VarChar, 5).Value = rule2_txt.Text;
cmd.Parameters.Add("@lt_rule3", SqlDbType.VarChar, 5).Value = rule3_txt.Text;
cmd.Parameters.Add("@lt_rule4", SqlDbType.VarChar, 5).Value = rule4_txt.Text;
cmd.Parameters.Add("@lt_rule5", SqlDbType.VarChar, 5).Value = rule5_txt.Text;
cmd.Parameters.Add("@lt_misc1", SqlDbType.VarChar, 500).Value = misc1_txt.Text;
cmd.Parameters.Add("@lt_misc2", SqlDbType.VarChar, 500).Value = misc2_txt.Text;
cmd.Parameters.Add("@lt_misc3", SqlDbType.VarChar, 500).Value = misc3_txt.Text;
cmd.Parameters.Add("@lt_misc4", SqlDbType.VarChar, 500).Value = misc4_txt.Text;
cmd.Parameters.Add("@lt_misc5", SqlDbType.VarChar, 500).Value = misc5_txt.Text;
cmd.Parameters.Add("@lt_user", SqlDbType.VarChar, 10).Value = user;
cmd.Parameters.Add("@error", SqlDbType.VarChar, 250);
cmd.Parameters["@error"].Direction = ParameterDirection.Output;
cmd.ExecuteNonQuery();
message = (string)cmd.Parameters["@error"].Value;
error_message.Text = message;
con.Close();
}
/* STORED PROCEDURE */
USE [1_WMS]
GO
ALTER PROCEDURE [dbo].sp_CREATE_LOCATION_TYPE
@LT_CODE VARCHAR(5)
, @LT_DESCRIPTION VARCHAR(250)
, @LT_PUTAWAY_ZONE VARCHAR(5)
, @LT_RULE1 VARCHAR(5)
, @LT_RULE2 VARCHAR(5)
, @LT_RULE3 VARCHAR(5)
, @LT_RULE4 VARCHAR(5)
, @LT_RULE5 VARCHAR(5)
, @LT_MISC1 VARCHAR(20)
, @LT_MISC2 VARCHAR(20)
, @LT_MISC3 VARCHAR(20)
, @LT_MISC4 VARCHAR(20)
, @LT_MISC5 VARCHAR(20)
, @LT_USER VARCHAR(20)
, @ERROR VARCHAR(250) OUT
AS
DECLARE
@LT_DATE VARCHAR(10)
, @LT_TIME VARCHAR(8)
, @LT_ROW_COUNT INT
SET @LT_DATE = CONVERT(VARCHAR(10), GETDATE(),101)
SET @LT_TIME = CONVERT(VARCHAR(8), GETDATE(),114)
/* CHECK FOR EXISTING */
SELECT @LT_ROW_COUNT = COUNT(*)
FROM LOC_TYPE(NOLOCK)
WHERE lt_code = @LT_CODE
/* */
IF @LT_ROW_COUNT = 0
BEGIN
INSERT INTO LOC_TYPE
(
lt_code , lt_description , lt_putaway_zone , lt_rule_1
, lt_rule_2 , lt_rule_3 , lt_rule_4 , lt_rule_5
, lt_misc1 , lt_misc2 , lt_misc3 , lt_misc4
, lt_misc5 , lt_created_date , lt_created_time , lt_created_by
, lt_modify_date , lt_modify_time , lt_modify_by
)
VALUES
(
@LT_CODE , @LT_DESCRIPTION , @LT_PUTAWAY_ZONE , @LT_RULE1
, @LT_RULE2 , @LT_RULE3 , @LT_RULE4 , @LT_RULE5
, @LT_MISC1 , @LT_MISC2 , @LT_MISC3 , @LT_MISC4
, @LT_MISC5 , @LT_DATE , @LT_TIME , @LT_USER
, @LT_DATE , @LT_TIME , @LT_USER
)
END
ELSE IF @LT_ROW_COUNT = 1
BEGIN
SET @ERROR = 'LOCATION TYPE ALREADY EXISTS'
EXEC sp_CREATE_ERROR_MESSAGE
'CRT_LTY' , @ERROR , @LT_CODE , @LT_PUTAWAY_ZONE
, @LT_RULE1 , @LT_RULE2 , @LT_RULE2 , @LT_RULE3
, @LT_RULE4 , @LT_RULE5 , '' , ''
, @LT_USER
END
GO
A: The output parameter comes back as DBNull because the stored procedure only assigns @ERROR in the duplicate-record branch; when the insert succeeds, @ERROR is never set. Check the value before casting:
var result = cmd.Parameters["@error"].Value;
if (result == DBNull.Value)
    message = string.Empty;
else
    message = (string)result;
or, as a one-liner,
var result = cmd.Parameters["@error"].Value;
message = (result == DBNull.Value) ? string.Empty : result.ToString();
or, guarding against both null and DBNull,
var result = cmd.Parameters["@error"].Value;
message = ((result == null) || (result == DBNull.Value)) ? string.Empty : result.ToString();
A: A shorter alternative:
message = cmd.Parameters["@error"].Value as string ?? "";
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/12545599",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Node Express: axios shows Request failed with status code 404, even if the request was completed (failed to redirect) I am using axios to send a post / put request to an Express route.
put
const handleSubmit = async (e) =>
{
e.preventDefault();
const data = new FormData(e.currentTarget);
const body = {
title: data.get('title'),
description: data.get('description'),
};
await axios.put(`${process.env.NEXT_PUBLIC_DR_HOST}/view/${_id}`, body)
};
//example link: http://localhost:3000/view/61e038051b755034d31d49a2
put route
router.put("/:id", async (req, res) =>
{
const { id } = req.params;
await Declaration.findByIdAndUpdate(id, req.body)
res.redirect("/")
})
post
const handleSubmit = async (e) =>
{
e.preventDefault();
const data = new FormData(e.currentTarget);
const body = {
title: data.get('title'),
description: data.get('description'),
};
await axios.post(process.env.NEXT_PUBLIC_DR_HOST, body)
};
post route
router.post('/', async (req, res) =>
{
const declaration = new Declaration({ ...req.body })
await declaration.save();
res.redirect("/")
})
Now, when submitting the PUT form, it displays this error: Unhandled Runtime Error
Error: Request failed with status code 404 - but the request was actually completed and the model saved, even though the redirect never happened and the error page remained. When submitting the POST form there is no error and, similar to the first, the request completes but no redirect happens and the form page stays the same. So how can I fix this?
A: Using axios, navigate to the URL the redirect resolved to yourself:
const res = await axios.post(process.env.NEXT_PUBLIC_DR_HOST, body);
window.location = res.request.responseURL;
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/70918064",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: How to check the length of each child in a list and conditionally apply a class to the children of another I've been searching and searching for the solution to this, but I'm quite new to javascript and I must be missing something obvious.
What I'm trying to do is check each nth-child li by number to see whether they're empty or not, and, if they do contain something, add a class to the corresponding nth-child of another element.
This is what I've ended up at:
if ($('#availList ul li:nth-child(1)').text().length > 0){
$('map area:nth-child(1)').addClass("areaSold");
}
While firebug doesn't have any syntax errors, it doesn't seem to matter if the li is empty or not, because the class will always be applied.
I must be doing something thoroughly dense.
UPDATE:
jsfiddle.net/filmcryptic/ed8LF/1
This actually works as intended... Which means that something is messing up on the site:
Gah.
UPDATE 2:
It works! Thanks blender_noob!
A: I have created a jsFiddle below. Based on my understanding of your question, you want to add a class to the li if it contains a certain text. Please let me know if this answers your question. Thanks
$(function(){
$('#availList li').each(function(i,val){
if($(this).text() == "Area Sold"){
$('#mapArea li:nth-child('+(i+1)+')').addClass('areaSold');
}else{
$('#mapArea li:nth-child('+(i+1)+')').addClass('notSold');
}
});
});
Check sample here:
http://jsfiddle.net/tBLhN/
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/17801760",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: React-sound doesn't work on mobile I use the npm package 'react-sound' - it's fine and easy to configure. But! It doesn't work on mobile devices. I tested that on Android phones and an iPad.
Maybe someone had the same problem, or idea how to resolve this?
A: By default, a restriction on mobile prevents you from playing multiple sounds. To avoid this, you need to set the ignoreMobileRestrictions property to true when setting up soundManager2.
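A minimal sketch of that setup (assuming your page exposes the soundManager2 global that react-sound uses; where exactly you call this depends on how your app bootstraps):

```javascript
// Hypothetical setup fragment - run before the first <Sound> renders.
// ignoreMobileRestrictions lifts the mobile limitation mentioned above.
window.soundManager.setup({
  ignoreMobileRestrictions: true,
});
```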
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/39204894",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
}
|
Q: Safe Calls in Kotlin with Array is confusing There is an array, notes: Array<KeyValueNote>?, and I use Kotlin 1.0.5-2 in the following code.
I want
if (notes != null) {
for (note in notes) {
// Put the note to the payload Json object only if the note is non-null.
payloadJson.put(note.key, note.value)
}
}
But there are several alternatives
// Alternative 1.
notes?.let {
it.takeWhile { it != null /** Inspection will note The condition 'it != null' is always true' in here**/ }.forEach { payloadJson.put(it.key, it.value) }
}
// Alternative 2.
notes?.takeWhile { it != null /** Inspection will note The condition 'it != null' is always true' in here**/ }?.forEach { payloadJson.put(it.key, it.value) }
// Alternative 3.
notes?.filterNotNull()?.forEach { payloadJson.put(it.key, it.value) }
My Question
*
*You can see there is an inspection note The condition 'it != null' is always true in Alternatives 1 & 2 - is the inspection right? Because I want to ensure only the non-null items in notes can be put into the payloadJson.
*In Alternative 3, you can see there is a safe call in filterNotNull()?. - is the ? needed here? When I reviewed the source code, the result of filterNotNull() can't be null, but when I remove the ?, the compilation fails.
A: Inspection is right. You declare your notes variable to be nullable array of not nullable items.
notes: Array<KeyValueNote>? // Can be null, cannot contain nulls.
notes: Array<KeyValueNote?> // Cannot be null, can contain nulls.
With this in mind, filterNotNull()?. is necessary for this array because it is nullable. You can find more information on Kotlin null safety in Kotlin documentation.
A: The type of notes is Array<KeyValueNote>?, which means that the elements of the array can not be null, but the array itself can. Thus, your code in the "I want" section is correct. A shorter alternative for it would be:
notes?.forEach { payloadJson.put(it.key, it.value) }
About your alternatives:
*
*Alternative 1: Never use let like this. It should be a safe call ?. (like in Alternative 2), nothing else. My heart bleeds when I see let in those situations :(
*Alternative 2: takeWhile and filter are obviously not the same thing. I guess you wanted filterNotNull, like in the Alternative 3
*Alternative 3: Since the elements of the array can NOT be null (because of their type), filterNotNull is equivalent to toList since it just copies the content
A: I guess you're confused by the it parameter being used in different scopes. The first alternative can be rewritten as:
notes?.let { notesSafe:Array<KeyValueNote> -> // notesSafe is not null here
notesSafe
.takeWhile { item: KeyValueNote -> item != null } // item is already non-null by its type definition
.forEach { payloadJson.put(it.key, it.value) }
}
The second alternative is pretty much the same, and the compiler note about item: KeyValueNote is true for the same reason: a val notes: Array<KeyValueNote>? cannot hold null values - but notes itself could be null.
The 3rd alternative has a safe call to filterNotNull, which returns the source collection with null values removed. However, as mentioned, Array<KeyValueNote> cannot have null values in it, hence the filterNotNull is not required.
In conclusion the expression can be written as:
notes?.forEach { payloadJson.put(it.key, it.value) }
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/40826449",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: jsTree Clear Tree, Rebuild Tree jsTree appears to be a beautiful lightweight jquery Tree building package, but after working with it for a few days it really left a sour taste in my mouth for the lack of clear and useful documentation online.
I'm including the following methods below in case any future devs are trying to figure out how to accomplish this and don't feel like spending hours or days stumbling through it. I'm sure there's a more elegant or Best Practice way to code this, but I couldn't find any examples that didn't involve building the ajax method into the jstree function itself (very much not an option in my case). in my example below, I'm free to call an Ajax method in makeTree() to get my data, build the JSON as I see fit and insert it into the tree manually.
Anyway, below is some sample code I created to Build a tree on page load, Empty the tree of all its nodes on emptyTree(), and rebuild the tree from JSON on makeTree(). This is obviously an incomplete reference, but if you just need a quick and dirty way to do what I'm doing with this package, you might appreciate this.
$(document).ready(function(){
$("#myJSTree").jstree({
'core' : {
'check_callback' : true
}
});
});
function emptyTree()
{
var myTree = $("#myJSTree").jstree(true);
$("#myJSTree .jstree-leaf, .jstree-anchor").each(function(){
myTree.delete_node($(this).attr('id'));
});
}
function makeTree()
{
var myTree = $("#myJSTree").jstree(true);
// This JSON is just an example, you can create it how you need it, I'll include the sample Valid JSON at the bottom for reference.
var nodesJSON = { "text": "Passed Testing", "icon": "images/success.png", "state": { "opened": true }, "children": [{ "text": "Passed Testing", "icon": "images/success.png", "state": { "opened": true }},{ "text": "Passed Testing", "icon": "images/success.png", "state": { "opened": true }}]};
myTree.create_node("#", nodesJSON, "last", function() {}, true);
}
// Adding the HTML that gets built into the tree at page load in my example
<div id="myJSTree">
<ul>
<li>Node 1</li>
<li class="jstree-open">
<ul>
<li>SubNode 1</li>
<li>SubNode 2</li>
</ul>
</li>
<li>Node 2</li>
</ul>
</div>
// Valid JSON format that took me a while to find on the jstree website
// I didn't write these; they're copied and pasted.
// sample JSON 1
{
id : "string" // will be autogenerated if omitted
text : "string" // node text
icon : "string" // string for custom
state : {
opened : boolean // is the node open
disabled : boolean // is the node disabled
selected : boolean // is the node selected
},
children : [] // array of strings or objects
li_attr : {} // attributes for the generated LI node
a_attr : {} // attributes for the generated A node
}
// Sample JSON 2
{
id : "string" // required
parent : "string" // required
text : "string" // node text
icon : "string" // string for custom
state : {
opened : boolean // is the node open
disabled : boolean // is the node disabled
selected : boolean // is the node selected
},
li_attr : {} // attributes for the generated LI node
a_attr : {} // attributes for the generated A node
}
Please feel free to share any insights you have if you have a better way to accomplish this, but this package's documentation is not super intuitive to me. I've combed through their webpage, it was painful.
A: To clear all nodes in the tree, you can call
myTree.delete_node(myTree.get_node("#").children);
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/37314713",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: Set color to a QTableView row
void MyWindow::initializeModelBySQL(QSqlQueryModel *model, QTableView *table, QString sql){
    model = new QSqlQueryModel(this);
    model->setQuery(sql);
}
With this method I can set a QSqlQueryModel on my QTableViews.
But how can I set the color of a row based on a cell value?
A: Your best bet is to define a custom model (QAbstractTableModel subclass). You probably want to have a QSqlQueryModel as a member in this custom class.
If it's a read-only model, you need to implement at least these methods:
int rowCount(const QModelIndex &parent) const;
int columnCount(const QModelIndex &parent) const;
QVariant data(const QModelIndex &index, int role) const;
and for well behaved models also
QVariant headerData(int section, Qt::Orientation orientation, int role) const;
If you need the model to be able to edit/submit data, things get a bit more involved and you will also need to implement these methods:
Qt::ItemFlags flags(const QModelIndex &index) const;
bool setData(const QModelIndex &index, const QVariant &value, int role=Qt::EditRole);
bool insertRows(int position, int rows, const QModelIndex &index=QModelIndex());
bool removeRows(int position, int rows, const QModelIndex &index=QModelIndex());
What will actually change a row appearance lies in the return value of this method:
QVariant data(const QModelIndex &index, int role) const;
A dumb example:
QVariant MyCustomModel::data(const QModelIndex &index, int role) const
{
if ( !index.isValid() )
return QVariant();
int row = index.row();
int col = index.column();
switch ( role )
{
case Qt::BackgroundRole:
{
if(somecondition){
// background for this row,col is blue
return QVariant(QBrush (QColor(Qt::blue)));
}
// otherwise background is white
return QVariant(QBrush (QColor(Qt::white)));
}
case Qt::DisplayRole:
{
// return actual content for row,col here, ie. text, numbers
}
case Qt::TextAlignmentRole:
{
if (1==col)
return QVariant ( Qt::AlignVCenter | Qt::AlignLeft );
if (2==col)
return QVariant ( Qt::AlignVCenter | Qt::AlignTrailing );
return QVariant ( Qt::AlignVCenter | Qt::AlignHCenter );
}
    }
    return QVariant();
}
A: The view draws the background based on the Qt::BackgroundRole role of the cell which is the QBrush value returned by QAbstractItemModel::data(index, role) for that role.
You can subclass the QSqlQueryModel to redefine data() to return your calculated color, or if you have Qt > 4.8, you can use a QIdentityProxyModel:
class MyModel : public QIdentityProxyModel
{
QColor calculateColorForRow(int row) const {
...
}
QVariant data(const QModelIndex &index, int role) const
{
if (role == Qt::BackgroundRole) {
int row = index.row();
QColor color = calculateColorForRow(row);
return QBrush(color);
}
return QIdentityProxyModel::data(index, role);
}
};
And use that model in the view, with the sql model set as source with QIdentityProxyModel::setSourceModel.
OR
You can keep the model unchanged and modify the background with a delegate set on the view with QAbstractItemView::setItemDelegate:
class BackgroundColorDelegate : public QStyledItemDelegate {
public:
BackgroundColorDelegate(QObject *parent = 0)
: QStyledItemDelegate(parent)
{
}
QColor calculateColorForRow(int row) const;
void initStyleOption(QStyleOptionViewItem *option,
const QModelIndex &index) const
{
QStyledItemDelegate::initStyleOption(option, index);
QStyleOptionViewItemV4 *optionV4 =
qstyleoption_cast<QStyleOptionViewItemV4*>(option);
optionV4->backgroundBrush = QBrush(calculateColorForRow(index.row()));
}
};
As the last method is not always obvious to translate from C++ code, here is the equivalent in python:
def initStyleOption(self, option, index):
super(BackgroundColorDelegate,self).initStyleOption(option, index)
option.backgroundBrush = calculateColorForRow(index.row())
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/10219739",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13"
}
|
Q: Thymeleaf Form Request method 'POST' not supported, HTTP Error = Method Not Allowed, status=405 I am new to Thymeleaf, and here I made a form using Thymeleaf but am running into this issue.
I read many, many articles but didn't find any solution.
Here's my Controller Class ->
@Controller
public class SavingUser{
@Autowired
private UserRepository userRepository;
@PostMapping("/registerUser")
public ModelAndView user(@ModelAttribute Customer customer, ModelMap model){
System.out.println("User in registration page..");
userRepository.save(customer);
model.addAttribute("saveUser", customer);
return new ModelAndView("index");
}
}
And here's my HTML Form -
<div id="form">
<form action="registerUser" th:action="@{/registerUser}" th:object="${saveUser}" method="POST">
<br />
<input type="hidden" name="${_csrf.parameterName}" value="${_csrf.token}"/>
<label for="name">Your Name:</label><br />
<input type="text" th:field="*{name}" placeholder="" /><br />
<label for="suburb">Your Suburb</label><br />
<input type="text" th:field="*{suburb}" placeholder="" /><br />
<input class="submit" type="submit" value="Submit" />
<br /><br />
</form>
</div>
I tried not putting action="" at all, but it still didn't work.
Any solution, thanks in advance.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/68767187",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: How to modify or remove an older commit in Git? For the past few days I have kept adding and committing files locally. My branch is now 16 commits ahead of 'origin/master'.
I wanted to push them to my git repo, but because one of the files is very large my push command fails. Is there any way to remove that commit from the history, or will I have to do a hard reset on the head?
A: If you just want to skip a commit, do git rebase -i master and select drop for the commit to be skipped. If you just want to remove a single file from it, select edit and amend the commit to remove the file.
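The "edit and amend" step can be sketched end-to-end in a throwaway repository (the file names and the demo repo here are made up for illustration):

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q demo && cd demo
git config user.email demo@example.com && git config user.name demo

# A commit that accidentally includes a large file
echo "code" > app.txt
dd if=/dev/zero of=huge.bin bs=1024 count=64 2>/dev/null
git add . && git commit -qm "add app and huge file"

# Stop tracking the big file (it stays on disk), ignore it,
# then rewrite the commit without it
git rm --cached -q huge.bin
echo "huge.bin" > .gitignore
git add .gitignore
git commit -q --amend -m "add app"

git ls-files   # app.txt and .gitignore remain; huge.bin is gone
```

For a commit further back in history, `git rebase -i` with `edit` on that commit drops you at the same amend step.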
A: *
*You can use the squash
*BFG
*Move HEAD back to previous commit (link)
squash
# edit all the commits up to the given sha-1
git rebase -i <sha-1>
but one of the file being very large my push command fails...
How to remove big files from the repository
You can use git filter-branch or BFG.
https://rtyley.github.io/bfg-repo-cleaner/
BFG Repo-Cleaner
an alternative to git-filter-branch.
The BFG is a simpler, faster alternative to git-filter-branch for cleansing bad data out of your Git repository history:
* Removing Crazy Big Files
* Removing Passwords, Credentials & other Private data
Examples (from the official site)
In all these examples bfg is an alias for java -jar bfg.jar.
# Delete all files named 'id_rsa' or 'id_dsa' :
bfg --delete-files id_{dsa,rsa} my-repo.git
A: If you just want to remove the file then why not just do that by deleting the file and then committing the change?
Now, in your latest commit the file doesn't exist.
You should now be able to push your branch to the remote without any problem.
A: What you can do is use the git rebase option and squash all the commits.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/43278008",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Is there a tool for previewing both LaTeX and MetaPost files? I'm trying to find a tool for Windows that I can use to preview .tex and .mp files as they are saved. Does such a tool exist? I have TeXworks installed which has a handy build button (this displays the result in another window), but I'd rather use Emacs. Also, TeXworks does not compile .mp files.
Perhaps there's a more elegant way to preview files, other than the approach I'm thinking of.
A: The solution is to use pdflatex and Sumatra PDF, since this viewer auto-reloads the file.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/3894944",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: VBA (Excel) 2010: specific script how-to, dealing with range comparisons I have 3 columns that follow these rules:
*
*Any cell in Col A may be empty, otherwise it contains a string.
*If a row in Col A is empty, that row in Cols B and C will be empty.
*If a row in Col A is populated, that row in Col B will be populated with an integer, and that row in Col C may be empty or have a "1".
I need a script that, for each cell in Col A,
*
*Checks the string in that cell and looks for all identical strings in Col A.
*For each row containing said matching string, checks Col C for a "1"
*For each row where both of the above are true, sum the values in Col B and replace each of them with that sum.
So for example, this:
A B C
x 1
x 2
z 2 1
y 1 1
y 2 1
y 1
z 2 1
z 1 1
Should become this:
A B C
x 1
x 2
z 5 1
y 3 1
y 3 1
y 1
z 5 1
z 5 1
So 3 different strings are found in Col A, (x y and z). Duplicates of x are found but the values in col B are not summed because there was no "1" in Col C. Dupes of y are found, but only those with a "1" to the right are summed. All dupes of z found have a "1", so all are summed.
What's the best way to do this? Please let me know if I need to clarify something about this question (I know it's convoluted, but I spent a lot of time trying to make it as clear as possible haha).
A: A simple formula should do the job, assuming the data are found in A:C:
=IF(ISBLANK(A2),"",IF(ISBLANK(C2),B2, SUMIFS(B:B,A:A,A2,C:C,1)))
The outer IF displays nothing when column A is empty.
The inner IF displays column B when column C is empty.
The SUMIFS adds up column B where column A is the same (as the current row) and column C is 1.
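As a cross-check of the rule (sum B across rows sharing a value in A, but only for rows flagged with 1 in C), here is a small pandas sketch of the same aggregation - not part of the Excel solution, just a way to verify the expected output from the question:

```python
import pandas as pd

# The sample data from the question; None marks an empty cell in column C.
df = pd.DataFrame({
    "A": ["x", "x", "z", "y", "y", "y", "z", "z"],
    "B": [1, 2, 2, 1, 2, 1, 2, 1],
    "C": [None, None, 1, 1, 1, None, 1, 1],
})

flagged = df["C"] == 1                               # rows with a "1" in column C
sums = df[flagged].groupby("A")["B"].transform("sum")  # per-group sum of B
df.loc[flagged, "B"] = sums                          # replace B with the group sum

print(df["B"].tolist())  # [1, 2, 5, 3, 3, 1, 5, 5]
```

This matches the "Should become this" table: the unflagged x and y rows keep their original B values, while the flagged y rows become 3 and the flagged z rows become 5.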
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/28054092",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Converting Unix timestamp to hours All,
I have a database field which computes the time spent from the time the quiz started to the current time. This was done at some point in time (recorded as the current time), and now I have a Unix-timestamp-style value for it.
I.e. suppose the start time was 5/5/2011 1pm and the current time was populated at 5/5/2011 2pm. The difference is calculated and stored as a Unix timestamp.
Now I don't know when this was actually done, and I have to get back the hours spent taking the quiz. So here a Unix timestamp needs to be converted back to hours, in this case returning 1 hour. Can anyone please help me understand how to do this?
A: You have the seconds, so just do something like
SELECT secondsField / 3600 as 'hours' FROM tableName
A: Here is an example of finding the difference between two timestamps in days.
//Get current timestamp.
$current_time = time();
//Convert user's create time to timestamp.
$create_time = strtotime('2011-09-01 22:12:55');
echo "Current Date \t".date('Y-m-d H:i:s', $current_time)."\n";
echo "Create Date \t".date('Y-m-d H:i:s', $create_time)."\n";
echo "Current Time \t ".$current_time." \n";
echo "Create Time \t ".$create_time." \n";
$time_diff = $current_time - $create_time;
$day_diff = $time_diff / (3600 * 24);
echo "Difference \t ".($day_diff)."\n";
echo "Quote Index \t ".intval(floor($day_diff))."\n";
echo "Func Calc \t ".get_passed_days($create_time)."\n";
function get_passed_days($create_time)
{
return intval(floor((time() - $create_time) / 86400));
}
To convert to hours, instead of 86400 put 3600 instead.
Hope this helps.
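The same arithmetic can be sketched in a couple of lines of Python - the stored value is just a seconds count, so dividing by 3600 yields hours, and by 86400 yields whole days (mirroring the get_passed_days() helper above, but for a fixed duration rather than "now"):

```python
def seconds_to_hours(seconds: int) -> float:
    """Convert a stored duration in seconds to hours."""
    return seconds / 3600

def seconds_to_whole_days(seconds: int) -> int:
    """Whole days contained in a duration given in seconds."""
    return seconds // 86400

print(seconds_to_hours(3600))        # 1.0
print(seconds_to_whole_days(90000))  # 1
```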
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/5904612",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: Get access token facebook-sdk I'm trying to figure out how to log in to a Facebook account and get an access token so I can post into my groups.
To create a GraphAPI instance, I need to look up the access_token and put it into the code.
I'm curious whether it is possible to use a username and password to get the access_token automatically, so the program can be used by more users.
oauth_access_token = 'ACCESS TOKEN WHICH I HAVE TO GET FROM https://developers.facebook.com/tools/explorer/..../'
def post_to_1_group(user,password,post):
graph = facebook.GraphAPI(oauth_access_token)
groups = graph.get_object("me/groups")
group_id = groups['data'][1]['id'] # select the first group I'm in
graph.put_object(group_id, "feed", message=post)
This code works, but it doesn't use the user and password arguments. I have to look up my access_token every time I want to run the program.
Is it possible?
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/29705475",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Creating a loop in JS for a web application I'm trying to loop this function at least ten times but can't seem to get it to work in my web application.
I am learning JS at school, and this is as far as I've come:
$(function fade_in_pictures() {
for (int i = 0; i < 10; ++i) {
$('.wi2').delay(2000).fadeIn(0);
$('.wi1').delay(2000).fadeOut(0);
$('.wi1').delay(4000).fadeIn(0);
$('.wi2').delay(4000).fadeOut(0);
$('.wi2').delay(2000).fadeIn(0);
$('.wi1').delay(2000).fadeOut(0);
}
});
ERROR: Parsing error: Unexpected token i
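The parse error comes from the loop header: `int` is C/Java syntax, and JavaScript declares loop counters with `let` (or `var`). A minimal corrected sketch of the loop header:

```javascript
// `int i` is not valid JavaScript - declare the counter with `let`.
let iterations = 0;
for (let i = 0; i < 10; ++i) {
  iterations++;
  // the jQuery fade/delay calls from the question would go here
}
console.log(iterations); // 10
```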
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/56213648",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Resource leak not closed I'm using Eclipse, and in the function below I open a Scanner and then close it at the end, but Eclipse keeps saying it is not closed: "Resource leak: 'scanner' is not closed".
I can do it with try-with-resources and the warning disappears, but I want to understand why the way I'm trying here doesn't work.
private void follow(String userID) {
if(!(new File("../users/"+userID+"/")).exists()){
System.out.println("User especificado nao existe");
return;
}
File list = new File("../users/"+userID+"/followers.txt");
try {
list.createNewFile();
Scanner scanner = new Scanner(list);
while(scanner.hasNextLine()){
String line = scanner.nextLine();
if(line.equals(clientID)){
System.out.println("Cliente ja segue este user");
return;
}
}
PrintWriter out = new PrintWriter(new FileWriter(list, true));
out.append(clientID+"\n");
out.close();
scanner.close();
} catch (IOException e) {
e.printStackTrace();
}
}
A: #createNewFile() can throw an IOException before you get to the close() statement, keeping it from being executed. The early return inside the while loop also leaves the method without ever reaching scanner.close() - exactly the leak Eclipse is warning about.
A: Put out.close() in a finally block:
finally {
    out.close();
}
What finally does is ensure that any code within it gets executed even if an exception is thrown.
A: Scanner implements AutoCloseable; you may use try-with-resources
https://docs.oracle.com/javase/tutorial/essential/exceptions/tryResourceClose.html
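Putting the try-with-resources suggestion together with the question's logic, here is a self-contained sketch (the class, file, and client id are made up for the demo). Both resources are closed on every exit path, including the early return that defeats the manual close() in the question:

```java
import java.io.File;
import java.io.FileWriter;
import java.io.IOException;
import java.io.PrintWriter;
import java.util.Scanner;

public class FollowExample {
    // Returns true if the id was appended, false if it was already present.
    static boolean follow(File list, String clientId) throws IOException {
        list.createNewFile();
        // try-with-resources closes the Scanner even on the early return
        // or if an exception is thrown mid-loop.
        try (Scanner scanner = new Scanner(list)) {
            while (scanner.hasNextLine()) {
                if (scanner.nextLine().equals(clientId)) {
                    return false; // already a follower; Scanner still closed
                }
            }
        }
        try (PrintWriter out = new PrintWriter(new FileWriter(list, true))) {
            out.append(clientId + "\n");
        }
        return true;
    }

    public static void main(String[] args) throws IOException {
        File list = File.createTempFile("followers", ".txt");
        System.out.println(follow(list, "client42")); // true: appended
        System.out.println(follow(list, "client42")); // false: already present
    }
}
```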
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/66431637",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Loading a shared library in a same directory with the executable file In Windows, an executable (.exe) which needs a shared library (.dll) can be run when the .exe and .dll files are in the same directory.
In Linux, even though the executable and the shared library (.so) are in the same directory, Linux always looks for it in the absolute directory where the .so was first built, then fails to run the executable.
Setting LD_LIBRARY_PATH or an RPATH before running the executable is an ad hoc solution, but I want to do it without setting the environment variable, and make it behave like Windows.
How can I do it? I added "-rpath=$ORIGIN" to CMakeLists.txt but it still fails.
Just for experimenting, I made a simple program and another shared library and tried dlopen, and it works as I wanted. However, I don't use dlopen in this case.
A: I found the solution: add the rpath option to the CMakeLists.txt of the executable, not of the shared library.
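For reference, a sketch of what that can look like in the executable's CMakeLists.txt (the target name my_app is a placeholder; note that $ORIGIN is expanded by the dynamic linker at run time, not by CMake):

```cmake
# Embed an rpath of $ORIGIN in the executable itself, so the loader
# searches the executable's own directory for .so files (as Windows does).
set_target_properties(my_app PROPERTIES
    BUILD_RPATH   "$ORIGIN"
    INSTALL_RPATH "$ORIGIN"
)
```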
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/54569976",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: How to DRY a Bootstrap view I'm looking for a better way to effectively DRY my Rails views that use Bootstrap 3.0
Here's the HAML code of the view:
%h1 Profiles
.visible-xs
%table.table.table-striped
%tbody
- @profiles.each do |profile|
%tr
%td
= profile.name
%td{style: "text-align: right"}
=link_to "<i class='fa fa-eye fa-small'></i>".html_safe, setup_profile_path(profile), class: "btn"
=link_to "<i class='fa fa-edit fa-small'></i>".html_safe, edit_setup_profile_path(profile), class: "btn"
.hidden-xs
%table.table
%tr
%th Name
%th Last Updated
%th Actions
- @profiles.each do |profile|
%tr
%td= profile.name
%td= time_ago_in_words profile.updated_at
%td
=link_to "<i class='fa fa-eye fa-small'></i> Details".html_safe, setup_profile_path(profile), class: "btn btn-sm"
=link_to "<i class='fa fa-edit fa-small'></i> Change".html_safe, edit_setup_profile_path(profile), class: "btn btn-sm"
=link_to "<i class='fa fa-trash-o fa-small'></i> Remove".html_safe, setup_profile_path(profile, method: :destroy), class: "btn btn-sm"
As you can see, I have an additional column (presently) and may add two or three more for the larger viewports (hence a table layout) as the app develops, but for the mobile devices, I just show the Profile name and buttons to get to more details (which is all that's really practical on small screens).
I tried using rows and cols of the grid system, but it's a lot of work to get a usable interface at each viewport size, and the template code looks like a big ball of mud when it's all in one place. Separating into XS and "everything else" is so far the cleanest implementation, but it means two blocks and two loops through the same data just to present it. Before I go and create 20 more of these, is there a DRYer way to implement mobile (xs) vs. tablet/desktop (sm, md, lg, ...) viewports that is still very clear to glance at and understand?
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/23426714",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: change image in a div with Onclick in React I have several divs, each with an image inside, and an onClick on the wrapping div which executes two functions, clickHandler() and toggleCollapse2(). The idea is that the image in the div represents a styled tick box, and the onClick replaces the image of the empty box with a ticked one. The onClick was originally on the image, but since I put it on the wrapping div, I cannot change the source of the pictures when the div is clicked. I have put the div in a dropdown item because I have about 20 buttons with ticked icons and text, and I'm trying to rewrite the code in React, but unfortunately I am still new to React, so I'll be thankful for any help. The idea is to use this code instead of writing all the buttons with the onClick and the functions one by one. Moreover, the ticked icons with text must be in 3 columns.
The problem is to make the checkbox-yellow replace the checkbox_empty image when clicking on the div.
//in the constructor
//constructor(props) {
// super(props);
//this.state = {
buttons :
[
{id:1, name:"check1", isVisited: false, value:"name1"},
{id:2, name:"check2", isVisited: false, value:"name2"},
{id:3, name:"check3", isVisited: false, value:"name3"},
{id:4, name:"check4", isVisited: true, value:"name4"},
{id:5, name:"check5", isVisited: false, value:"name5"},
{id:6, name:"check6", isVisited: false, value:"name6"},
{id:7, name:"check7", isVisited: true, value:"name7"},
{id:8, name:"check8", isVisited: false,value:"name8"},
{id:9, name:"check9", isVisited: false,value:"name9"},
{id:10, name:"check10", isVisited: false, value:"name10"},
{id:11, name:"check11", isVisited: false, value:"name11"},
{id:12, name:"check12", isVisited: false, value:"name12"},
{id:13, name:"check11", isVisited: false, value:"name13"},
{id:14, name:"check12", isVisited: false, value:"name14"}
],
//Outside the constructor:
dropdownButton(startIndex,endIndex){
let uiButtons = [];
this.state.buttons.slice(startIndex,endIndex).map((button)=>{
uiButtons.push(
<DropdownItem>
<div key={button.id} className="checkbox-business" onClick={() => {
this.clickHandler();
this.toggleCollapse2(button.name, button.isVisited)}} >
<img style={{maxWidth: '20px'}} src={button.isVisited === true ? checkbox_yellow : checkbox_empty}/>
<div>
{button.value}
</div>
</div>
</DropdownItem>
);
}
);
return uiButtons;
}
toggleCollapse2 = (sectionName) => {
this.setState({check1: false, check2: false, check3: false, check4: false})
let obj = {};
obj[sectionName] = !this.state[sectionName];
this.setState(state => (obj));
}
//in the render I have this code:/because I want the buttons distributed equally in 3 columns/
let startIndexFirstColumn = 0;
let endIndexFirstColumn = 0;
let startIndexSecondColumn = 0;
let endIndexSecondColumn = 0;
let startIndexThirdColumn = 0;
let endIndexThirdColumn = 0;
if ( this.state.buttons.length % 3 === 2) {
endIndexFirstColumn = (this.state.buttons.length / 3 + 1);
startIndexSecondColumn = (this.state.buttons.length / 3 + 1);
endIndexSecondColumn = ((this.state.buttons.length / 3) * 2 + 1);
startIndexThirdColumn = ((this.state.buttons.length / 3) * 2 + 1);
endIndexThirdColumn = (this.state.buttons.length / 3 * 3);
}
//and in the return part I have 3 similar divs,containing:
<div className="box-business">
{this.dropdownButton(startIndexFirstColumn,endIndexFirstColumn)}
</div>
//with different indexes for each column
A: I need to make the img source show the checkbox_yellow image instead of the checkbox_empty image when the div containing the image and text is clicked, and change it back when the div is clicked again.
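Since each button already carries an isVisited flag in state, the click handler only needs to flip that flag immutably; the src expression from the question then updates by itself. A sketch (the helper name toggleVisited is hypothetical):

```javascript
// Flip isVisited for the button with the given id, returning a new array
// (React state must not be mutated in place).
function toggleVisited(buttons, id) {
  return buttons.map((button) =>
    button.id === id
      ? { ...button, isVisited: !button.isVisited }
      : button
  );
}

// In the component, the wrapping div's onClick would become something like:
//   onClick={() => this.setState(state => ({
//     buttons: toggleVisited(state.buttons, button.id)
//   }))}
// so the question's src={button.isVisited ? checkbox_yellow : checkbox_empty}
// shows the ticked icon after a click and the empty one after another click.
```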
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/58302737",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Python - Split List into a Dictionary I am trying to split a big list into a dictionary. Given a list with, say, 363 elements, I want to divide them such that each key in the dictionary will have a list of 10 values. So a list of 363 elements will have 37 keys (rounded up to the nearest 10). The keys will start from 1 and go up to 37. I wrote a function for the rounding up, but I don't know how to split the list while making sure the sequence is maintained, so that if all the lists (which will be the values of the keys) are joined they will mirror the original list.
This is my rounding up function:
def roundup(list_length, atatime=10):
return ((atatime-(list_length%atatime))+list_length)/atatime
Any ideas or suggestions will be really helpful.
A: If you want to round up and slice appropriately:
l = list(range(363))
at_a_time = 10
n = len(l)
d, r = divmod(n, at_a_time)
# if there is a remainder, add 1 to the keys, 363 -> 37 keys 360 -> 36 keys
num_keys = d + 1 if r else d
# get correct slice size based on amount of keys
sli = n // num_keys
# create "sli" sized chunks
vals = (l[i:i+sli] for i in range(0, n, sli+1))
# make dict from 1 to num_keys inclusive and slices
dct = dict(zip(range(1,num_keys+1),vals))
print(dct)
{1: [0, 1, 2, 3, 4, 5, 6, 7, 8], 2: [10, 11, 12, 13, 14, 15, 16, 17, 18], 3: [20, 21, 22, 23, 24, 25, 26, 27, 28], 4: [30, 31, 32, 33, 34, 35, 36, 37, 38], 5: [40, 41, 42, 43, 44, 45, 46, 47, 48], 6: [50, 51, 52, 53, 54, 55, 56, 57, 58], 7: [60, 61, 62, 63, 64, 65, 66, 67, 68], 8: [70, 71, 72, 73, 74, 75, 76, 77, 78], 9: [80, 81, 82, 83, 84, 85, 86, 87, 88], 10: [90, 91, 92, 93, 94, 95, 96, 97, 98], 11: [100, 101, 102, 103, 104, 105, 106, 107, 108], 12: [110, 111, 112, 113, 114, 115, 116, 117, 118], 13: [120, 121, 122, 123, 124, 125, 126, 127, 128], 14: [130, 131, 132, 133, 134, 135, 136, 137, 138], 15: [140, 141, 142, 143, 144, 145, 146, 147, 148], 16: [150, 151, 152, 153, 154, 155, 156, 157, 158], 17: [160, 161, 162, 163, 164, 165, 166, 167, 168], 18: [170, 171, 172, 173, 174, 175, 176, 177, 178], 19: [180, 181, 182, 183, 184, 185, 186, 187, 188], 20: [190, 191, 192, 193, 194, 195, 196, 197, 198], 21: [200, 201, 202, 203, 204, 205, 206, 207, 208], 22: [210, 211, 212, 213, 214, 215, 216, 217, 218], 23: [220, 221, 222, 223, 224, 225, 226, 227, 228], 24: [230, 231, 232, 233, 234, 235, 236, 237, 238], 25: [240, 241, 242, 243, 244, 245, 246, 247, 248], 26: [250, 251, 252, 253, 254, 255, 256, 257, 258], 27: [260, 261, 262, 263, 264, 265, 266, 267, 268], 28: [270, 271, 272, 273, 274, 275, 276, 277, 278], 29: [280, 281, 282, 283, 284, 285, 286, 287, 288], 30: [290, 291, 292, 293, 294, 295, 296, 297, 298], 31: [300, 301, 302, 303, 304, 305, 306, 307, 308], 32: [310, 311, 312, 313, 314, 315, 316, 317, 318], 33: [320, 321, 322, 323, 324, 325, 326, 327, 328], 34: [330, 331, 332, 333, 334, 335, 336, 337, 338], 35: [340, 341, 342, 343, 344, 345, 346, 347, 348], 36: [350, 351, 352, 353, 354, 355, 356, 357, 358], 37: [360, 361, 362]}
If you don't want the list sliced evenly and the remainder to be added to the last key just use your atatime for each slice (10 in the example code):
vals = (l[i:i+at_a_time] for i in range(0, n, sli+1))
We can also use num_keys = (n + at_a_time - 1) // at_a_time to round up: any value with a remainder when divided by at_a_time is rounded up by the added at_a_time - 1, while any evenly divisible number is not.
You can do it all in a dict comprehension making some changes to the code but I think you will hopefully learn more from explicit code.
A: The following dict comprehension slices the list into the required number of slices.
In other words, idx+1 is the key, whereas lst[atatime*idx:atatime*(idx+1)] is the value of a dictionary defined using dict-comprehension. The value is a slice of atatime length. We iterate over the (rounded upwards) size of the list divided by the chunk-size using an xrange iterator.
from math import ceil
{idx+1: lst[atatime*idx:atatime*(idx+1)] for idx in xrange(int(ceil(float(len(lst))/atatime)))}
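For Python 3, where xrange is gone, the same dict-comprehension idea can be wrapped in a small function; this version keeps every element, so the concatenated values mirror the original list:

```python
from math import ceil

def split_into_dict(lst, at_a_time=10):
    """Keys 1..ceil(len(lst)/at_a_time); each value is the next chunk of lst."""
    num_keys = ceil(len(lst) / at_a_time)
    return {idx + 1: lst[at_a_time * idx: at_a_time * (idx + 1)]
            for idx in range(num_keys)}

d = split_into_dict(list(range(363)))
print(len(d))   # 37
print(d[37])    # [360, 361, 362]
```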
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/28521731",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Enriching Gensim W2V with spatial Information (such as geo-location) Does anyone have experience with, or an idea of, the best way to train a W2V model while enriching it with geo-location context (using the Gensim library)?
*
*I have a dataset of scripted conversations from different English-speaking countries.
*I would like to train the model to understand the relation between words, but also to consider the location in which the conversations took place.
*So when I'm "questioning" the model, I can give it the context of a certain country and potentially improve its relevancy.
What I have in mind is to inject a geo-location ID into every phrase, as a (fake) word.
Example -
p1 [us, the, lion, king, is, a, great, movie, us]
p2 [uk, king, charles, ascended, the, throne, uk]
The desired result should be something along the lines of:
vec("us") + vec("king") --> vec("lion")
vec("uk") + vec("king") --> vec("charles")
Does anyone have a more structured idea to do that, while still sticking to the Gensim library?
A: I've not seen any proven techniques for your need.
But, it is a bit similar to how people try to track the drift in word meanings over different eras. There's been some published work like HistWords from Stanford on that task.
I have also in past answers suggested people working on the eras-drift task try probabilistically replacing words whose sense may vary with alternate, context-labeled tokens. That is, if king is one of the words that you expect to vary based on your geography-contexts, expand your training corpus to sometimes replace king in UK contexts with king_UK, and in US contexts with king_US. (In some cases, you might even repeat your texts to do this.) Then, at the end of training, you'll have separate (but close) vectors for all of king, king_UK, & king_US – and the subtle difference between them may be reflective of what you're trying to study/capture.
You can see other discussion of related ideas in previous answers:
https://stackoverflow.com/a/57400356/130288
https://stackoverflow.com/a/59095246/130288
I'm not sure how well this approach might work, nor (if it does) optimal ways to transform the corpus to capture all the geography-flavored meaning-shifts.
I suspect the extreme approach of transforming every word in a UK-context to its UK-specific token, & same for other contexts, would work less well than only sometimes transforming the tokens – because a total transformation would mean each region's tokens only get trained with each other, never with shared (non-regionalized) words that help 'anchor' variant-meanings in the same shared overall context. But, that hunch would need to be tested.
(This simple "replace-some-tokens" strategy has the advantage that it can be done entirely via corpus preprocessing, with no change to the algorithms. If willing/able to perform big changes to the library, another approach could be more fasttext-like: treat every instance of king as a sum of both a generic king_en vector and a region king_UK (etc) vector. Then every usage example would update both.)
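A corpus-preprocessing sketch of that probabilistic replacement strategy (the word set, region tags, and 50% rate here are illustrative assumptions, not a tested recipe):

```python
import random

# Illustrative set of words whose sense is expected to vary by region.
REGION_SENSITIVE = {"king", "football"}

def regionalize(tokens, region, p=0.5, rng=random):
    """Sometimes replace a region-sensitive token with a region-tagged variant,
    so both the shared token (king) and the variant (king_UK) get trained."""
    return [t + "_" + region if t in REGION_SENSITIVE and rng.random() < p else t
            for t in tokens]

# With p=1.0 every sensitive token is tagged; in practice a smaller p keeps
# the shared token trained in the same contexts as its regional variants.
print(regionalize(["the", "king", "ascended", "the", "throne"], "UK", p=1.0))
# ['the', 'king_UK', 'ascended', 'the', 'throne']
```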
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/75106776",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Unable to change the link of a button using Javascript Hello, I am a total newbie (I don't code) and I made a WordPress website in two languages (French and English) using the Polylang plugin.
But I am facing the following problem: my website has a button linking to a Facebook page in English, and when the website is switched to French, I would like the button to link to another Facebook page (that will be in French).
From what I searched so far, I understand that this would be possible using javascript.
And I tried many variations of the following code with no result:
<script>
jQuery(document).ready(function(){
$('html[lang=|fr] .fa-facebook').attr('href', 'https://myfacebookpage.com');
});
</script>
"fa-facebook" is the css class of the social media button.
What am I doing wrong and how can I fix that please ?
Thank you !
Edit : here is the html code of the french version : jsfiddle.net/yup5zxng
A: I also tried this, with no result:
<script>
jQuery(document).ready(function(){
$('html:lang(fr-FR) .fa-facebook').attr('href', 'https://myfacebookpage.com');
});
</script>
A: Basically, what you should do is check the language of the <html> tag on document ready and then loop through all the Facebook links and update their href. Something like this:
$(document).ready(function() {
const documentLanguage = $("html").attr("lang");
const facebookLinks = $(".fa-facebook");
facebookLinks.each(function(index, link) {
if (documentLanguage === "fr-FR") {
$(link).parent().attr("href", "https://facebook.com/my-french-site/");
} else if (documentLanguage === "en-EN") {
$(link).parent().attr("href", "https://facebook.com/my-english-site/");
}
});
});
Here is a live sample: https://jsfiddle.net/rhernando/kwndv609/3
EDIT
I forgot about the tree structure of your DOM: the class element is an <i> tag, so you have to point to its parent element. The updated code should work.
A: you have a typo in the attribute selector:
| should be before =, not after:
$('html[lang|=fr] .fa-facebook')
More about attribute selectors here: https://developer.mozilla.org/en-US/docs/Web/CSS/Attribute_selectors
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/59695232",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Use of M2M Table and relationship to get specific data in sqlalchemy I have Table
# File : MyRelations.py
ACC_ADD_TABLE = Table('acc_add_rel', METADATA,
Column('acc_id', ForeignKey('acc.id'),
nullable=False),
Column('add_id', ForeignKey('address.id'),
nullable=False),
PrimaryKeyConstraint('add_id', 'acc_id'),
)
# File : Address.py
class Address(Base):
id = Column(Integer, primary_key=True,)
type = Column(String(length=10), nullable=False)
# File : Account.py
class Account(Base):
id = Column(Integer, primary_key=True,)
addresses = relationship('Address',
secondary=ACC_ADD_TABLE
)
# default_address = relationship('Address',
# secondary=ACC_ADD_TABLE,
# primaryjoin=and_("ACC_ADD_TABLE.add_id==Address.id",
# "ACC_ADD_TABLE.acc_id==Account.id",
# "Address.type='default'")
# )
As per the example, I want to access all the default addresses in an account. I could use declared_attr or write a function, but is there any way to combine the Table and the class attributes in a single and_ operation?
Note: Address.py and Account.py are different files, and due to a cyclic dependency I can't import either model in the other.
Thanks for your help.
A: This works without requiring an import:
default_address = relationship('Address',
secondary=ACC_ADD_TABLE,
primaryjoin="acc.c.id==acc_add_rel.c.acc_id",
secondaryjoin="and_(address.c.id==acc_add_rel.c.add_id, address.c.type=='default')",
#uselist = False,
)
If you are certain that there is only one default address, you might use uselist=False for convenience, so the relationship returns a scalar instead of a list.
Sometimes I prefer the other structure for such situations though: add a column to the Account table: default_address_id and build 1-[0..1] relationship based on this column, still checking that the referenced Address is also part of Account.addresses M-N relationship.
On a side note, a typo: in your (commented) code you should use == instead of = in "Address.type='default'". This does not solve the problem, though.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/8922118",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: Responsive WYSIWYG editor in jquerymobile website and mobile app I'm struggling, but I couldn't find a single rich text editor which is compatible with jQuery Mobile and Cordova.
I have a container div and I want to put the editor in it; the width and height of the editor should change when the container div's width and height change. I have tried many editors, like TinyMCE, but they are not responsive, and I don't want to use the Bootstrap library. Please help.
A: This question is old, but it was not answered very well, and I ran across this same problem earlier today. jHtmlArea can be used on a responsive page with a couple of small modifications to its source code.
The original documentation for jHtmlArea can be found here:
https://pietschsoft.com/post/2009/07/22/jhtmlarea-the-all-new-html-wysiwyg-editor-for-jquery
The original source code has migrated to GitHub: https://github.com/crpietschmann/jHtmlArea
To make jHtmlArea responsive, simply replace the init function code in jHtmlArea-0.8.js with the code below:
init: function (elem, options) {
if (elem.nodeName.toLowerCase() === "textarea") {
var opts = $.extend({}, jHtmlArea.defaultOptions, options);
elem.jhtmlareaObject = this;
var textarea = this.textarea = $(elem);
var container = this.container = $("<div/>").addClass("jHtmlArea").insertAfter(textarea);
var toolbar = this.toolbar = $("<div/>").addClass("ToolBar").appendTo(container);
priv.initToolBar.call(this, opts);
var iframe = this.iframe = $("<iframe/>").height(textarea.height());
iframe.width('inherit');
var htmlarea = this.htmlarea = $("<div/>").width('100%').append(iframe);
container.append(htmlarea).append(textarea.hide());
priv.initEditor.call(this, opts);
priv.attachEditorEvents.call(this);
// Fix total height to match TextArea
iframe.height(iframe.height() - toolbar.height());
// toolbar.width(textarea.width());
if (opts.loaded) { opts.loaded.call(this); }
}
},
Hopefully this will save someone else some time.
A: jHtmlArea is a nice WYSIWYG editor
and is very simple to use:
// Turn all <textarea/> tags into WYSIWYG editors
$(function() {
$("textarea").htmlarea();
});
You can download the library from here
http://pietschsoft.com/demo/jHtmlArea/
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/22685250",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: found button but unable to click on it I have a method which fills the Facebook sign-up form and should then press the "Create an account" button. It seems that it finds the button, but for some unclear reason it is unable to click on it.
The button's code is:
<button type="submit" class="_6j mvm _6wk _6wl _58mi _3ma _6o _6v" name="websubmit" id="u_0_s">Create Account</button>
and my method is:
def submit_new_account_form(self, **credentials):
firstname = self.driver.find_element_by_css_selector(self.__first_name_field_css)
lastname = self.driver.find_element_by_css_selector(self.__last_name_field_css)
number_or_email = self.driver.find_element_by_css_selector(self.__mobile_number_or_email_field_css)
newpass = self.driver.find_element_by_id(self.__new_password_field_id)
maleradio = self.driver.find_element_by_css_selector(self.__male_radio_css)
femaleradio = self.driver.find_element_by_css_selector(self.__female_radio_css)
submit_button = self.driver.find_element_by_id(self.__create_account_button_id)
if submit_button:
print ("submit button found")
if maleradio:
print("maleradio found")
if femaleradio:
print ("femaleradio found")
#firstname.clear()
if credentials['first_name']:
firstname.send_keys(credentials['first_name'])
#lastname.clear()
if credentials['last_name']:
lastname.send_keys(credentials['last_name'])
#number_or_email.clear()
if credentials['phone_or_email']:
number_or_email.send_keys(credentials['phone_or_email'])
re_enter_email_field = WebDriverWait(self.driver, 10).until(
expected_conditions.presence_of_element_located((By.CSS_SELECTOR,self.__re_enter_new_email_field_css)))
re_enter_email = self.driver.find_element_by_css_selector(self.__re_enter_new_email_field_css).send_keys(
credentials['phone_or_email'])
#newpass.clear()
if credentials['newpass']:
newpass.send_keys(credentials['newpass'])
if credentials['sex'] == 'male':
maleradio.click()
if credentials['sex'] == 'female':
femaleradio.click()
submit_button.click()
if submit_button.click():
print('submit button clicked')
Each time I run the script, "submit button clicked" never appears and this error occurs:
selenium.common.exceptions.ElementNotVisibleException: Message: element not visible
A: <button type="submit" class="_6j mvm _6wk _6wl _58mi _3ma _6o _6v"
name="websubmit" id="u_0_s">Create Account</button>
From the above, I can see the id for the submit button is 'u_0_s'.
Can you confirm that you are passing the id correctly for submit? If not, please correct it to 'u_0_s', try again, and let me know if it works.
Also, check whether the Submit button is visible while your test case is being executed. You can add code to maximize the browser window for this.
A: You can use this xpath: "//button[contains(text(), 'Create an account')]", it will solve your problem
A: I have added an explicit wait
WebDriverWait(self.driver,10).until(expected_conditions.visibility_of_element_located((By.ID, self.__create_account_button_id)))
And after that everything works well.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/46724944",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: apache beam running locally but unable to read data from csv in Python import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions
p = beam.Pipeline(options=PipelineOptions())
lines = p | beam.io.ReadFromText('file:///C:/Users/Lenovo/Desktop/dataflow.csv')
Apache Beam is running locally but is unable to read data from the csv.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/52790021",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Parallel processing data analysis - Is there a benefit to having more splits than processor cores? I am using a naive Bayesian classifier to predict some test data in R. The test data has >1,000,000,000 records, and takes far too long to process with one processor. The computer I am using has (only) four processors in total, three of which I can free-up to run my task (I could use all four, but prefer to keep one for other work I need to do).
Using the foreach and doSNOW packages, and following this tutorial, I have things set up and running. My question is:
I have the dataset split into three parts, one part per processor. Is there a benefit to splitting the dataset into, say, 6, 9, or 12 parts? In other words, what is the trade-off between more splits versus just having one big block of records for each processor core to run?
I haven't provided any data here because I think this question is more theoretical. But if data are needed, please let me know.
A: Broadly speaking, the advantage of splitting it up into more parts is that you can optimize your processor use.
If the dataset is split into 3 parts, one per processor, and they take the following time:
Split A - 10 min
Split B - 20 min
Split C - 12 min
You can see immediately that two of your processors are going to be idle for a significant amount of time needed to do the full analysis.
Instead, if you have 12 splits, each taking between 3 and 6 minutes to run, then processor A can pick up another chunk of the job after it finishes with the first one, instead of idling until the longest-running split finishes.
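To make the trade-off concrete, here is a small greedy-scheduling sketch (the chunk times are made up for illustration; a free processor always takes the next waiting chunk):

```python
import heapq

def makespan(chunk_times, n_workers):
    """Wall-clock time to finish when each free worker takes the next chunk."""
    finish = [0.0] * n_workers      # finish time of each worker, kept as a min-heap
    for t in chunk_times:
        heapq.heappush(finish, heapq.heappop(finish) + t)
    return max(finish)

# Three big splits: one processor is pinned to the 20-minute chunk.
print(makespan([10, 20, 12], 3))                            # 20.0
# The same 42 minutes of work in 12 smaller chunks balances better.
print(makespan([3, 4, 3, 4, 3, 4, 3, 4, 3, 4, 3, 4], 3))    # 15.0
```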
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/48590763",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Speed up a Laravel query where a field from relationship matches at least all elements of a given array I'm using the query below to get all results matching at least all elements of an array:
$colors_array = ['red', 'green', 'blue'];
Foo::with('colors')
->whereHas(
'colors',
function ($query) use ($colors_array) {
$query->distinct()->whereIn('name', $colors_array);
},
'=',
count($colors_array)
);
But this query is very slow, is there a better syntax to speed it up ?
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/64770925",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: CMD.exe consumes all the CPU, blocking other CMD.exe to execute We have some Batch scripts (.bat) in Windows to execute the “backups” and “archive log” for the databases. These scripts are called from Tivoli periodically.
For each executed script, the process creates a sub-session to load the DB2cmd environment, execute the db2 commands, and exit.
daily.bat
call db2cmd hourly.cmd
The content of the script is this:
db2_job_saveddaily.cmd
db2 -fE:\DB2\scripts\tmp\db2_job_savedbhourly.db2 -zE:\DB2\scripts\tmp\db2_job_savedbhourly.log
exit
The content of the db2 file is (however, it is not important because it is executed correctly)
db2_job_saveddaily.db2
archive log for database ICMNLSDB
We are facing a problem with these scripts, and I think it is related to the exit. At one execution, the script froze and started to consume the whole CPU (see attached image). After this behavior, we cannot execute any other DB2 command from the CLP.
We kill all the CMD.exe and db2bp.exe processes, but the error persists.
There is nothing in the db2diag.log file, and the only solution is to restart the machine.
Probably the CMD.exe process loses communication with db2bp.exe, and the exit cannot be executed. I would like to understand the origin of this problem and learn how to execute db2 processes in Windows correctly.
A: Our friend @AngocA seems to check into SO often but hasn't been checking this dangling question even though he did something to close it. Let's at least put his answer in here so folk know it's CLOSED by user. :) Courtesy of tonight's Point Pimp. :-D
"The problem was in another db2cmd session where there was an
infinite loop. This created a scenario where the new db2cmd session
blocked because the first session used the whole CPU. – AngocA"
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/8229015",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: How to Group Documents by a Field and Calculate Count in mongodb using java How can I write the below aggregation query in Java? Can anyone please help me out with this?
My documents in mongodb:
{ "_id" : ObjectId("56d6b5849d6e45832c36482a"), "name" : "abc", "count" : 100 }
{ "_id" : ObjectId("56d6b5899d6e45832c36482b"), "name" : "abc", "count" : 200 }
{ "_id" : ObjectId("56d6b5949d6e45832c36482c"), "name" : "xyz", "count" : 50 }
My query:
db.orders.aggregate([
{$group:{_id:"$name",total:{$sum:"$count"}}}
])
o/p:
{ "_id" : "abc", "total" : 300 }
{ "_id" : "xyz", "total" : 50 }
A: db.getCollection("orders").aggregate(Arrays.asList(
        new Document("$group",
                new Document("_id", "$name")
                        .append("total", new Document("$sum", "$count")))));
Hope this helps.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/35740592",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Stored procedure select VS select from external connection I am trying to find the pros and cons of using stored procedures instead of SQL queries from an external connection, but I am unable to find any direct comparison.
*
*What is the benefit of using stored procedures instead of SQL queries from an external connection?
*Is there any execution speed difference between them for small volume and big volume outputs?
*Is there any benefits for the database management as well?
A:
What is the benefit of using stored procedures instead of SQL queries from an external connection?
*
*Stored procedures can be complex. Very complex. They can do things that a single SQL query cannot do. (Execute Block aside.)
*They have their own set of grants, so they can do things that the current user cannot do at all.
*The Firebird optimizer is not that bad, but obviously complex queries require more time for optimization, and the result may still be suboptimal. Using an imperative language, the programmer can split a complex query into a set of simpler ones, making Data Access Paths more predictable.
Is there any execution speed difference between them for small volume and big volume outputs?
No.
Is there any benefits for the database management as well?
It depends on what you call "database management" and what benefits you have in mind. Most likely - no.
A:
What is the benefit of using stored procedures instead of SQL queries from an external connection?
One benefit, in terms of execution, is that stored procedures store their query plan, whereas dynamic SQL query plans are not stored and must be calculated each time the query is executed.
Is there any execution speed difference between them for small volume and big volume outputs?
Once the query plan is calculated, no, there is no speed difference.
Is there any benefits for the database management as well?
This is very subjective! In the past I worked at a place where ALL database access went through stored procedures so that they could lock down access to just the SPs. Other places I've worked didn't use stored procs at all because they are generally outside source control and problematic for developers who aren't SQL gurus. Also, business logic spread across multiple systems can become a real problem.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/70522863",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-2"
}
|
Q: Hibernate gives a strange ClassCast exception (using Transformers) This code:
@Override
public List<FactCodeDto> getAllFactsWithoutParentsAsFactDto() {
String completeQuery = FactCodeQueries.SELECT_DTO_FROM_FACT_WITH_NO_PARENTS;
Query query = createHibernateQueryForUnmappedTypeFactDto(completeQuery);
List<FactCodeDto> factDtoList = query.list(); //line 133
return factDtoList;
}
calling this method:
private Query createHibernateQueryForUnmappedTypeFactDto(String sqlQuery) throws HibernateException {
return FactCodeQueries.addScalars(createSQLQuery(sqlQuery)).setResultTransformer(Transformers.aliasToBean(FactCodeDto.class));
}
gives me a ClassCastException -> part of the trace:
Caused by: java.lang.ClassCastException: org.bamboomy.cjr.dto.FactCodeDto cannot be cast to java.util.Map
at org.hibernate.property.access.internal.PropertyAccessMapImpl$SetterImpl.set(PropertyAccessMapImpl.java:102)
at org.hibernate.transform.AliasToBeanResultTransformer.transformTuple(AliasToBeanResultTransformer.java:78)
at org.hibernate.hql.internal.HolderInstantiator.instantiate(HolderInstantiator.java:75)
at org.hibernate.loader.custom.CustomLoader.getResultList(CustomLoader.java:435)
at org.hibernate.loader.Loader.listIgnoreQueryCache(Loader.java:2423)
at org.hibernate.loader.Loader.list(Loader.java:2418)
at org.hibernate.loader.custom.CustomLoader.list(CustomLoader.java:336)
at org.hibernate.internal.SessionImpl.listCustomQuery(SessionImpl.java:1898)
at org.hibernate.internal.AbstractSessionImpl.list(AbstractSessionImpl.java:318)
at org.hibernate.internal.SQLQueryImpl.list(SQLQueryImpl.java:125)
at org.bamboomy.cjr.dao.factcode.FactCodeDAOImpl.getAllFactsWithoutParentsAsFactDto(FactCodeDAOImpl.java:133)
Which is pretty strange because, indeed, if you look up the source code of Hibernate it tries to do this:
@Override
@SuppressWarnings("unchecked")
public void set(Object target, Object value, SessionFactoryImplementor factory) {
( (Map) target ).put( propertyName, value ); //line 102
}
Which doesn't make any sense...
target is of type Class and this code tries to cast it to Map,
why does it try to do that???
any pointers are more than welcome...
I'm using Hibernate 5 (and am upgrading from 3)...
edit: I also use Spring (4.2.1.RELEASE; also upgrading) which calls these methods upon deploy, any debugging pointers are most welcome as well...
edit 2: (the whole FactCodeDto class, as requested)
package org.bamboomy.cjr.dto;
import org.bamboomy.cjr.model.FactCode;
import org.bamboomy.cjr.model.FactCodeType;
import org.bamboomy.cjr.utility.FullDateUtil;
import org.bamboomy.cjr.utility.Locales;
import lombok.Getter;
import lombok.Setter;
import lombok.ToString;
import org.springframework.util.Assert;
import java.util.*;
@Getter
@Setter
@ToString
public class FactCodeDto extends TreeNodeValue {
private String cdFact;
private String cdFactSuffix;
private Boolean isSupplementCode;
private Boolean isTitleCode;
private Boolean mustBeFollowed;
private Date activeFrom;
private Date activeTo;
private Boolean isCode;
private Long idFact;
private Long idParent;
private String type;
Map<Locale, String> description = new HashMap<Locale, String>(3);
public FactCodeDto() {
}
public FactCodeDto(String prefix, String suffix) {
super();
this.cdFact = prefix;
this.cdFactSuffix = suffix;
}
public FactCodeDto(String cdFact, String cdFactSuffix, Boolean isSupplementCode, Boolean mustBeFollowed) {
super();
this.cdFact = cdFact;
this.cdFactSuffix = cdFactSuffix;
this.isSupplementCode = isSupplementCode;
this.mustBeFollowed = mustBeFollowed;
}
public FactCodeDto(String cdFact, String cdFactSuffix, Boolean isSupplementCode, Boolean mustBeFollowed, Long idFact, Long idParent, Boolean isCode, Boolean isTitleCode, Date from, Date to, Map<Locale, String> descriptions,String type) {
super();
this.cdFact = cdFact;
this.cdFactSuffix = cdFactSuffix;
this.isSupplementCode = isSupplementCode;
this.mustBeFollowed = mustBeFollowed;
this.idFact = idFact;
this.idParent = idParent;
this.isCode = isCode;
this.isTitleCode = isTitleCode;
this.activeFrom = from;
this.activeTo = to;
if (descriptions != null) {
this.description = descriptions;
}
this.type = type;
}
public FactCodeDto(FactCode fc) {
this(fc.getPrefix(), fc.getSuffix(), fc.isSupplementCode(), fc.isHasMandatorySupplCodes(), fc.getId(), fc.getParent(), fc.isActualCode(), fc.isTitleCode(), fc.getActiveFrom(), fc.getActiveTo(), fc.getAllDesc(),fc.getType().getCode());
}
public String formatCode() {
return FactCode.formatCode(cdFact, cdFactSuffix);
}
public boolean isActive() {
Date now = new Date(System.currentTimeMillis());
return FullDateUtil.isBetweenDates(now, this.activeFrom, this.activeTo);
}
public void setDescFr(String s) {
description.put(Locales.FRENCH, s);
}
public void setDescNl(String s) {
description.put(Locales.DUTCH, s);
}
public void setDescDe(String s) {
description.put(Locales.GERMAN, s);
}
/**
* public String toString() {
* StringBuilder sb = new StringBuilder();
* sb.append(getIdFact() + ": ")
* .append(getIdParent() + ": ")
* .append(" " + cdFact + cdFactSuffix + ": " + (isSupplementCode ? "NO Principal " : " Principal "))
* .append((mustBeFollowed ? " Must Be Followed " : "NOT Must Be Followed "));
* return sb.toString();
* }
*/
public Map<Locale, String> getDescription() {
return description;
}
@Override
public int hashCode() {
final int prime = 31;
int result = 1;
String fullCode = formatCode();
result = prime * result + ((fullCode == null) ? 0 : fullCode.hashCode());
return result;
}
@Override
public boolean equals(Object obj) {
if (this == obj) {
return true;
}
if (obj == null) {
return false;
}
if (getClass() != obj.getClass()) {
return false;
}
FactCodeDto other = (FactCodeDto) obj;
return formatCode().equals(other.formatCode());
}
@Override
public boolean isChildOf(TreeNodeValue value) {
Assert.notNull(value);
boolean isChild = false;
if (value instanceof FactCodeDto) {
if (this.getIdParent() != null) {
isChild = this.getIdParent().equals(((FactCodeDto) value).getIdFact());
}
}
return isChild;
}
@Override
public boolean isBrotherOf(TreeNodeValue value) {
Assert.notNull(value);
boolean isBrother = false;
if (value instanceof FactCodeDto) {
if (this.getIdParent() != null) {
isBrother = this.getIdParent().equals(((FactCodeDto) value).getIdParent());
}
}
return isBrother;
}
@Override
public boolean isParentOf(TreeNodeValue value) {
Assert.notNull(value);
boolean isParent = false;
if (value instanceof FactCodeDto) {
isParent = this.getIdFact().equals(((FactCodeDto) value).getIdParent());
}
return isParent;
}
@Override
public int compareTo(TreeNodeValue to) {
if (to instanceof FactCodeDto) {
return formatCode().compareTo(((FactCodeDto) to).formatCode());
} else return 1;
}
public String getCode() {
return formatCode();
}
}
A: I found that AliasToBean has changed in Hibernate 5. For me, adding a getter for my field fixed the problem.
A: This exception occurs when the setters and getters are not mapped correctly to the column names.
Make sure you have the correct getters and setters for the query (correct names and correct data types).
Read more about it here:
http://javahonk.com/java-lang-classcastexception-com-wfs-otc-datamodels-imagineexpirymodel-cannot-cast-java-util-map/
A: I did some investigation on this question. The problem is that Hibernate converts aliases for column names to upper case: cdFact becomes CDFACT.
Read a deeper explanation and a workaround here:
mapping Hibernate query results to custom class?
A: In the end it wasn't so hard to find a solution,
I just created my own (custom) ResultTransformer and specified that in the setResultTransformer method:
private Query createHibernateQueryForUnmappedTypeFactDto(String sqlQuery) throws HibernateException {
return FactCodeQueries.addScalars(createSQLQuery(sqlQuery)).setResultTransformer(new FactCodeDtoResultTransformer());
//return FactCodeQueries.addScalars(createSQLQuery(sqlQuery)).setResultTransformer(Transformers.aliasToBean(FactCodeDto.class));
}
the code of the custom result transformer:
package org.bamboomy.cjr.dao.factcode;
import org.bamboomy.cjr.dto.FactCodeDto;
import java.util.Date;
import java.util.List;
/**
* Created by a162299 on 3-11-2015.
*/
public class FactCodeDtoResultTransformer implements org.hibernate.transform.ResultTransformer {
@Override
public Object transformTuple(Object[] objects, String[] strings) {
FactCodeDto result = new FactCodeDto();
for (int i = 0; i < objects.length; i++) {
setField(result, strings[i], objects[i]);
}
return result;
}
private void setField(FactCodeDto result, String string, Object object) {
if (string.equalsIgnoreCase("cdFact")) {
result.setCdFact((String) object);
} else if (string.equalsIgnoreCase("cdFactSuffix")) {
result.setCdFactSuffix((String) object);
} else if (string.equalsIgnoreCase("isSupplementCode")) {
result.setIsSupplementCode((Boolean) object);
} else if (string.equalsIgnoreCase("isTitleCode")) {
result.setIsTitleCode((Boolean) object);
} else if (string.equalsIgnoreCase("mustBeFollowed")) {
result.setMustBeFollowed((Boolean) object);
} else if (string.equalsIgnoreCase("activeFrom")) {
result.setActiveFrom((Date) object);
} else if (string.equalsIgnoreCase("activeTo")) {
result.setActiveTo((Date) object);
} else if (string.equalsIgnoreCase("descFr")) {
result.setDescFr((String) object);
} else if (string.equalsIgnoreCase("descNl")) {
result.setDescNl((String) object);
} else if (string.equalsIgnoreCase("descDe")) {
result.setDescDe((String) object);
} else if (string.equalsIgnoreCase("type")) {
result.setType((String) object);
} else if (string.equalsIgnoreCase("idFact")) {
result.setIdFact((Long) object);
} else if (string.equalsIgnoreCase("idParent")) {
result.setIdParent((Long) object);
} else if (string.equalsIgnoreCase("isCode")) {
result.setIsCode((Boolean) object);
} else {
throw new RuntimeException("unknown field");
}
}
@Override
public List transformList(List list) {
return list;
}
}
In Hibernate 3 you could set aliases on queries, but you can't do that anymore in Hibernate 5 (correct me if I'm wrong). Hence aliasToBean is something you can only use when you are actually using aliases, which I wasn't, hence the exception.
A: In my case:
=> I wrote a SQL query and tried to map the result to a list of a class
=> used "Transformers.aliasToBean"
=> got the error "cannot be cast to java.util.Map"
Solution:
=> just put \" before and after the query aliases
ex:
"select first_name as \"firstName\" from test"
The problem is that Hibernate converts aliases for column names to upper case or lower case; quoting them preserves the exact spelling.
A: I solved it by defining my own custom transformer as given below (note: NestedSetter and ClassUtils used here are helper classes from the fluent-hibernate library, not Hibernate itself, so they must be on the classpath) -
import org.hibernate.transform.BasicTransformerAdapter;
public class FluentHibernateResultTransformer extends BasicTransformerAdapter {
private static final long serialVersionUID = 6825154815776629666L;
private final Class<?> resultClass;
private NestedSetter[] setters;
public FluentHibernateResultTransformer(Class<?> resultClass) {
this.resultClass = resultClass;
}
@Override
public Object transformTuple(Object[] tuple, String[] aliases) {
createCachedSetters(resultClass, aliases);
Object result = ClassUtils.newInstance(resultClass);
for (int i = 0; i < aliases.length; i++) {
setters[i].set(result, tuple[i]);
}
return result;
}
private void createCachedSetters(Class<?> resultClass, String[] aliases) {
if (setters == null) {
setters = createSetters(resultClass, aliases);
}
}
private static NestedSetter[] createSetters(Class<?> resultClass, String[] aliases) {
NestedSetter[] result = new NestedSetter[aliases.length];
for (int i = 0; i < aliases.length; i++) {
result[i] = NestedSetter.create(resultClass, aliases[i]);
}
return result;
}
}
And used this way inside the repository method -
@Override
public List<WalletVO> getWalletRelatedData(WalletRequest walletRequest,
Set<String> requiredVariablesSet) throws GenericBusinessException {
String query = getWalletQuery(requiredVariablesSet);
try {
if (query != null && !query.isEmpty()) {
SQLQuery sqlQuery = mEntityManager.unwrap(Session.class).createSQLQuery(query);
return sqlQuery.setResultTransformer(new FluentHibernateResultTransformer(WalletVO.class))
.list();
}
} catch (Exception ex) {
exceptionThrower.throwDatabaseException(null, false);
}
return Collections.emptyList();
}
It worked perfectly !!!
A: Try putting column names and field names both in capital letters.
A: This exception occurs when the class that you specified in the AliasToBeanResultTransformer does not have a getter for the corresponding columns, although the exception details from Hibernate are misleading.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/33433345",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "15"
}
|
Q: RxJava: Observable and default thread I have the following code:
Observable.create(new ObservableOnSubscribe<String>() {
@Override
public void subscribe(@NonNull final ObservableEmitter<String> s) throws Exception {
Thread thread = new Thread(new Runnable() {
@Override
public void run() {
s.onNext("1");
s.onComplete();
}
});
thread.setName("background-thread-1");
thread.start();
}
}).map(new Function<String, String>() {
@Override
public String apply(@NonNull String s) throws Exception {
String threadName = Thread.currentThread().getName();
logger.logDebug("map: thread=" + threadName);
return "map-" + s;
}
}).subscribe(new Observer<String>() {
@Override
public void onSubscribe(Disposable d) {}
@Override
public void onNext(String s) {
String threadName = Thread.currentThread().getName();
logger.logDebug("onNext: thread=" + threadName + ", value=" + s);
}
@Override
public void onError(Throwable e) {}
@Override
public void onComplete() {
String threadName = Thread.currentThread().getName();
logger.logDebug("onComplete: thread=" + threadName);
}
});
And here's the output:
map: thread=background-thread-1
onNext: thread=background-thread-1, value=map-1
onComplete: thread=background-thread-1
Important detail: I'm calling the subscribe method from another thread (main thread in Android).
So it looks like the Observable class is synchronous by default and performs everything (operators like map plus notifying subscribers) on the same thread that emits the events (s.onNext), right? I wonder... is this intended behaviour, or did I just misunderstand something? Actually I was expecting that at least the onNext and onComplete callbacks would be called on the caller's thread, not on the one emitting events. Do I understand correctly that in this particular case the actual caller's thread doesn't matter? At least when events are generated asynchronously.
Another concern: what if I receive some Observable as a parameter from some external source (i.e. I don't create it on my own)... there is no way for me as its user to check whether it is synchronous or asynchronous, and I just have to explicitly specify where I want to receive callbacks via the subscribeOn and observeOn methods, right?
Thanks!
A: RxJava is unopinionated about concurrency. It will produce values on the subscribing thread if you do not use any mechanism like observeOn/subscribeOn. Please don't use low-level constructs like Thread in operators; you could break the contract.
Because you start your own Thread, onNext is called from that thread ('background-thread-1'). The subscription itself happens on the calling (UI) thread, but every operator down the chain, and the subscriber's onNext, will be called from 'background-thread-1'.
If you want to produce values on another thread, use subscribeOn. If you want to switch back to the main thread, use observeOn somewhere in the chain, most likely right before subscribing.
Example:
Observable.just(1,2,3) // creation of observable happens on Computational-Threads
.subscribeOn(Schedulers.computation()) // subscribeOn happens only once in chain. Nearest to source wins
.map(integer -> integer) // map happens on Computational-Threads
.observeOn(AndroidSchedulers.mainThread()) // Will switch every onNext to Main-Thread
.subscribe(integer -> {
// called from mainThread
});
Here is a good explanation:
http://tomstechnicalblog.blogspot.de/2016/02/rxjava-understanding-observeon-and.html
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/43436640",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10"
}
|
Q: How to use a for loop to display an image depending on a position in an index? This is inside a react-native project:
I have a progress bar that has 10 positions, and each position has a possibility of three states - incomplete, active, complete. There is an image that corresponds to each of those states. I was thinking I could use a for loop that would look for the index number of the page, which I'll store in an object, and depending on the relationship between the position in the loop and the index number, that would determine which image gets displayed. Something like this:
if the position in the loop > index number, display the incomplete image
if the position in the loop = index number, display the active image
if the position in the loop < index number, display the complete image
Beyond this, I'm not even sure on where to start.
A: In your class/component:
state = {
indexNum: 4, // arbitrary value
}
displayStatus(item) {
if(item.id > this.state.indexNum){ // Incomplete
return <View style={styles.progressPoint}><Text>I</Text></View>;
}
else if(item.id == this.state.indexNum){ // Active
return <View style={styles.progressPoint}><Text>A</Text></View>;
}
else if(item.id < this.state.indexNum){ // Complete : you can use only 'else' here
return <View style={styles.progressPoint}><Text>C</Text></View>;
}
}
In render() of your class/component:
// Positions/Pages - these will serve as basis for .map - you can add more than 'id'
const positions = [{"id": 1},{"id": 2},{"id": 3},{"id": 4},{"id": 5},
{"id": 6},{"id": 7},{"id": 8},{"id": 9},{"id": 10}];
return (
<View style={styles.container}>
{positions.map((item) => (
this.displayStatus(item)
))}
</View>
);
Here's an Expo Snack of the above (based on the scope of your question) to get you started.
You can store the index number of the page in state and update this state at completion of each progress position. Note: if you are not using redux, you may need to pass/handle state (index number) on each page individually (depending on your navigation or components structure).
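The three comparisons in displayStatus can also be factored into a small pure helper, which makes the mapping easy to unit test on its own. This is a sketch: the function names are invented, and positions are assumed to be 1-based as in the answer's id values.

```javascript
// Hypothetical helper: maps a progress position and the active index to one
// of the three states from the question.
function statusFor(position, activeIndex) {
  if (position > activeIndex) return 'incomplete';
  if (position === activeIndex) return 'active';
  return 'complete'; // position < activeIndex
}

// Build the state of all ten positions in one go.
function progressStatuses(activeIndex, count = 10) {
  return Array.from({ length: count }, (_, i) => statusFor(i + 1, activeIndex));
}

console.log(progressStatuses(4));
// positions 1-3 are 'complete', 4 is 'active', 5-10 are 'incomplete'
```

Each entry can then be mapped to the corresponding image source instead of a status string.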
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/49784029",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: CORS Configuration Flask, Angular and NGINX I'm trying to upload an angular application to a digital ocean droplet using nginx. This app consumes an API in flask. In local everything works pretty well, but on the VPS is not working. It says something about Cross-origin request blocked: Same origin policy does not allow reading of remote resources at http://127.0.0.1:5001/api/talk/
Here's my nginx configuration:
server {
server_name x.domain.com;
location / {
proxy_pass http://127.0.0.1:4200;
proxy_set_header Host x.domain.com;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Protocol $scheme;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_set_header Origin "";
add_header "Access-Control-Allow-Origin" "*";
add_header "Access-Control-Allow-Methods" "GET, POST, OPTIONS, HEAD";
add_header "Access-Control-Allow-Headers" "Authorization, Origin, X-Requested-With, Content-Type, Accept";
}
}
server {
location / {
proxy_pass http://127.0.0.1:5001;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Protocol $scheme;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_set_header Origin "";
add_header "Access-Control-Allow-Origin" "*";
add_header "Access-Control-Allow-Methods" "GET, POST, OPTIONS, HEAD";
add_header "Access-Control-Allow-Headers" "Authorization, Origin, X-Requested-With, Content-Type, Accept";
}
location /api/talk/ {
proxy_pass http://127.0.0.1:5001;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Protocol $scheme;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_set_header Origin "";
}
location /api/response/ {
proxy_pass http://127.0.0.1:5001;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Protocol $scheme;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_set_header Origin "";
}
}
Here's my flask configuration:
app = Flask(__name__)
cors = CORS(app, resources={r"/api/*": {"origins": "*"}})
chat = Chat()
@app.route('/api/talk/', methods=['POST'])
@cross_origin(origin='localhost')
def index():
print(request.data.decode("utf-8"))
chat.send_message(request.data)
return request.data
@app.route('/api/response/', methods=['GET'])
@cross_origin(origin='localhost')
def get():
response = chat.response
return response
def options(self):
return {'Allow': 'POST'}, 200, \
{'Access-Control-Allow-Origin': '*',
'Access-Control-Allow-Methods': 'POST,GET'}
if __name__ == '__main__':
app.run(host='127.0.0.1', port=5001, debug=True)
And angular consumes the flask API here:
export class Message {
constructor(public content: string, public sentBy: string){ }
}
@Injectable({
providedIn: "root",
})
export class MessageService {
conversation = new BehaviorSubject<Message[]>([]);
readonly SERVER_URL = "http://127.0.0.1:5001";
public httpClient = axios.create();
constructor(){}
update(msg: Message) {
this.conversation.next([msg]);
}
converse(msg: string) {
const userMessage = new Message(msg, 'user');
this.update(userMessage);
this.sendMessage(userMessage.content)
}
sendMessage(msg: string) {
console.log(msg)
const config = {
headers: {
'Content-Type': 'text/plain',
'Access-Control-Allow-Headers': 'Content-Type',
'Access-Control-Allow-Methods': 'GET POST',
'Access-Control-Allow-Origin': '*'
},
};
this.httpClient.post(this.SERVER_URL+'/api/talk/', btoa(msg), config)
.then(function (response) {
})
.catch(function (error) {
console.log(error);
});
}
async getResponse() {
await this.delay(900);
let msg = ''
return this.httpClient.get(this.SERVER_URL+'/api/response/')
.then(response => {
msg = response.data
const botMessage = new Message(msg, 'chucho');
this.update(botMessage);
return msg
})
.catch(function (error) {
console.log(error);
});
}
delay(ms: number) {
return new Promise( resolve => setTimeout(resolve, ms) );
}
}
My angular App runs on the 4200 port and the Flask API is on the 5001 port.
Hope you can help me.
A: Try enabling CORS like this,
but first install the latest flask-cors by running:
pip install -U flask-cors
from flask import Flask
from flask_cors import CORS, cross_origin
app = Flask(__name__)
cors = CORS(app) # This will enable CORS for all routes
@app.route("/")
@cross_origin()
def helloWorld():
return "Helloworld!"
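If the error persists after enabling CORS in Flask, check that the browser's preflight OPTIONS request is actually being answered. A hedged sketch of handling it directly in nginx (the location path and header values are illustrative, not taken from the config above):

```nginx
location /api/ {
    # Answer the CORS preflight in nginx so it never has to reach Flask.
    if ($request_method = OPTIONS) {
        add_header "Access-Control-Allow-Origin"  "*";
        add_header "Access-Control-Allow-Methods" "GET, POST, OPTIONS";
        add_header "Access-Control-Allow-Headers" "Content-Type, Authorization";
        return 204;
    }
    proxy_pass http://127.0.0.1:5001;
}
```

Also note that a browser on another machine cannot reach http://127.0.0.1:5001 at all; the Angular SERVER_URL would need to point at the public domain that nginx serves.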
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/63495626",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Difference between SomeValue {get;} = true; vs SomeValue => true; in Properties In C# you could declare a property in two ways that seem very similar:
public string SomeProperty => "SomeValue";
vs
public string SomeProperty { get; } = "SomeValue";
Is there a difference between these two? (Ignoring the fact that "SomeValue" is not a very interesting value, it could be the result of a method call or whatever else would express a difference between the two forms).
A: In your example there is no functional difference because you're always returning a constant value. However if the value could change, e.g.
public string SomeProperty => DateTime.Now.ToString();
vs
public string SomeProperty { get; } = DateTime.Now.ToString();
The first would execute the expression each time the property was called. The second would return the same value every time the property was accessed since the value is set at initialization.
In pre-C#6 syntax the equivalent code for each would be
public string SomeProperty
{
get { return DateTime.Now.ToString();}
}
vs
private string _SomeProperty = DateTime.Now.ToString();
public string SomeProperty
{
get {return _SomeProperty;}
}
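The evaluation difference can also be made visible with a call counter. This is a hypothetical sketch (the class and member names are invented):

```csharp
// Hypothetical demo: a counter shows when each property body runs.
class Demo
{
    private static int _calls;
    private static string Next() => $"call #{++_calls}";

    // Expression-bodied property: Next() runs on every access.
    public string Expression => Next();

    // Auto-property with initializer: Next() runs once, when the
    // instance is constructed, and the result is stored.
    public string Initialized { get; } = Next();
}

// var d = new Demo();   // Initialized is evaluated here, once
// d.Expression          // evaluated now
// d.Expression          // evaluated again
// d.Initialized         // same stored value every time
```

Reading Expression twice yields two different strings, while Initialized keeps returning the value captured at construction.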
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/39260137",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: OrientDB: Select unique vertices but retain the edges I am trying to do the following weird thing. I have a group of edges that point towards a group of vertices, but there is some repetition - multiple edges point to the same vertices.
Given a SELECT command that gives me the list of edges, I want to:
*
*SELECT the unique vertices from all of the 'out' vertices
*Return, along with the @rid of the unique vertices, a list of all the edges that pointed towards them.
E.g. the result should be a list of vertices with (vertex rid, [edge 1, edge 2, edge 3]).
Another way to think about this is I want to GROUP BY outgoing vertex but somehow retain in a field the @rid's of all the edges I grouped.
Thanks!
A: You can try this:
With this you get, for every vertex, its outgoing edges:
select $a.@rid, $a.outE() from 'your class'
let $a = (select from 'your class' where $parent.current.@rid = @rid)
If you want the incoming edges instead, change $a.outE() to $a.inE(), like below:
select $a.@rid, $a.inE() from 'your class'
let $a = (select from 'your class' where $parent.current.@rid = @rid)
Hope it helps.
Regards.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/40027603",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Use if/else statement without else branch I don't know how to use if/else in this case. When score > 10, stop the insert. Otherwise, continue the insert as normal. But what is the syntax to do that?
CREATE TRIGGER invalidScore ON dbo.dbo_score
AFTER INSERT
AS
DECLARE @score DECIMAL;
SET @score = (SELECT s.score FROM Inserted s);
IF(@score > 10)
BEGIN
RETURN 'score must be less than 10'
ROLLBACK TRAN
END
ELSE
BEGIN
END
A: First, creating these types of SQL objects should use begin..end blocks. Second, you can omit the else statement.
CREATE TRIGGER invalidScore ON dbo.dbo_score
AFTER INSERT
AS
BEGIN
DECLARE @score DECIMAL;
SET @score = (SELECT s.score FROM Inserted s);
IF(@score > 10)
BEGIN
RETURN 'score must be less than 10'
ROLLBACK TRAN
END
END
A: There are 3 things you need to change for this trigger to work:
*
*Remove the else section - its optional.
*Handle the fact that Inserted may have multiple rows.
*Throw the error rather than using the return statement so you can handle it in the client. And throw it after rolling back the transaction in progress.
Corrected trigger follows:
create trigger invalidScore on dbo.dbo_score
after insert
as
begin
if exists (select 1 from Inserted S where S.Score > 10) begin
rollback tran;
throw 51000, 'score must be less than 10', 1;
end
end
A: 'Else' is an optional section; you can remove it. But I would suggest you consider using a check constraint for scenarios like this rather than adding a trigger check on the score column.
e.g.
CREATE TABLE dbo.dbo_score(
Score int CHECK (score < 10)
);
A CHECK constraint is faster, simpler, more portable, needs less code, and is less error-prone.
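As a hedged, runnable illustration of the CHECK-constraint approach (using SQLite from Python rather than the asker's SQL Server, so the dialect differs; the table name is adapted from the question):

```python
import sqlite3

# In-memory database with the score rule expressed as a CHECK constraint.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE dbo_score (score INTEGER CHECK (score < 10))")

conn.execute("INSERT INTO dbo_score (score) VALUES (7)")  # passes the check

try:
    conn.execute("INSERT INTO dbo_score (score) VALUES (11)")  # violates it
    rejected = False
except sqlite3.IntegrityError:
    rejected = True

print(rejected)  # True: the invalid row never reaches the table
```

No trigger is needed; the engine enforces the rule on every insert and update.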
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/59024580",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Getting Warning on Play Store for using WRITE_CALL_LOG permission in android app I'm working on a project based on cloud backup which saves and restore user's call log. It was working fine for the previous version but now getting following warning. I added a description before taking permission from the user but still getting the warning.
Your app is requesting the following permission which is used by less
than 1% of functionally similar apps: WRITE_CALL_LOG
Users prefer apps that request fewer permissions and requesting
unnecessary permissions can affect your app's visibility on the Play
Store. If these permissions aren't necessary, you may be able to use
alternative methods in your app and request fewer permissions. If they
are, we recommend providing an explanation to users of why you need
the permissions. Learn more
Note: This guidance is based on a comparison with functionally similar
apps, which change over time as new apps get published and existing
apps change behavior. Therefore the warning may change even if you
don't change your permission usage.
A: It's a warning. If you need that permission (and it seems your app does), then you're fine. If you didn't really need it, you should remove it. Google isn't going to scan your description to see if you explain it; that level of AI isn't really possible yet. So you'll continue to get the warning.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/51035148",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: panels expanded in viewport In my west region panel there is something like the task panel here:
http://dev.sencha.com/deploy/dev/examples/tasks/tasks.html
The data is loaded from two different elements (the contentEl targets) containing only links.
The first "task" group is always expanded to the full height of the document, though there is much less data in it.
Here is the code:
new Ext.Panel({
region: 'west',
title: 'דוחות',
id: 'w',
header: false,
width: 190,
split: true,
layout: 'fit',
collapseMode: 'mini',
//minWidth: 100,
baseCls:'x-plain',
margins: '0 1 0 0',
items: [ new Ext.Panel({
id:'wp',
frame:true,
title: 'דוחות לעובדים',
collapsible:true,
contentEl: 'workerRep',
//titleCollapse: true
}),
new Ext.Panel({
frame:true,
id:'mp',
title: 'דוחות למכונות',
collapsible:true,
contentEl:'machRep',
layout: 'fit',
//titleCollapse: true
})
]
});
What could be the problem?
A: Ext.layout.FitLayout (which is what layout: 'fit' stands for) is for situations when you only have one item in a container, because it tries to 'fit' this one component to the full size of the container.
From manual:
This is a base class for layouts that contain a single item that automatically expands to fill the layout's container.
If you have more than one item in a container, use a different layout like Ext.layout.ContainerLayout (the default one), Ext.layout.VBoxLayout, or perhaps Ext.layout.TableLayout.
A: Found the answer; the problem was here:
baseCls:'x-plain',
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/4531801",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: How to append to array in for loop? How can I append my results to this array so I can use them later?
var animalQuestion = [(orderlist: Int, questionid:String, question: String, description: String)]()
This is my array which I declared in my class above.
let stmt = try db.prepare("SELECT * FROM allquestion_other where allquestion_other.name= 'animalChoice' and allquestion_other.id NOT IN (select answerresult.questionId FROM answerresult where answerresult.friendUserId='\(friendUserId)')")
for row in stmt {
let orderlist = row[4]
let questionid = row[0]
let question = row[6]
let description = row[7]
animalQuestion.append(orderlist: orderlist, questionid: questionid, question: question, description: description)
}
When i'm running this code , i'm getting the error "Cannot call value of non-function type '[(orderlist: Int, questionid: String, question: String, description: String)]'"
row[4], row[0], row[6], and row[7] return the values which I need to append to my array.
A: Try to use:
animalQuestion.append((orderlist: orderlist, questionid: questionid, question: question, description: description))
And do not forget:
let orderlist = row[4] as! Int
Your code is incorrect because:
1) To append new objects to an array you must use the array method
animalQuestion.append(<Array object>)
2) In your case the Array object is a tuple. So first you need to create a tuple, something like:
let orderTuple = (orderlist, questionid, question, description)
and after that append orderTuple to the animalQuestion array like:
animalQuestion.append(orderTuple)
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/47788432",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-1"
}
|
Q: Sl4A Selecting running application from notification window I have a SL4A program I have written. I have one issue before i'm ready to publish it.
For some reason when the app is running, if I home screen out of the app, I see it running in the notification area, but when I select it nothing happens. However if I click my icon from apps area it will bring the app back up.
Any suggestions?
A: Notifications created by SL4A do nothing; they have no callback and can only alert users. Unfortunately there isn't really any way around this: BeanShell, JRuby and Rhino can make Java API calls (e.g., to add the 'open my app when clicked' part) but can't use Contexts (which notifications require), and you could make your own version of the API facade, but then users would be required to install your specific version of x (e.g., Python) for Android.
Otherwise all I could think of would be to get tricky with Intents or something and include an activity in the /src of your app to show the notification, though that would likely require learning Java / Android programming meaning you may as well follow through and write the entire app natively.
Sorry, but there really isn't an easy way to do this.
A: You stated you wanted to publish it so I am assuming you were implying that in the long run you will be compiling it into a standalone apk?
If so, which package are/will you be using? py4a's method, python27, kivy? From my experiences when you compile to an apk with python27, there is no notification window at the top at all but if you were to compile it using py4a's method it should create a workable notification item for you. Please see the following link for further information: http://code.google.com/p/android-scripting/wiki/SharingScripts
Otherwise ProfSmiles' answer is correct, but it appears to be a much more complex solution than using the py4a method.
You can also see the python27 project if you would like a more embedded approach, although as mentioned previously it does not have a notification setup by default like py4a.
Kivy's implementation also looks promising but I am unfamiliar with it, it might also be worth looking at further: https://github.com/kivy/python-for-android
A: Well, it seems that you can see the notifications started by the SL4A "com.googlecode.android_scripting" package with the following command (this is more like a hack):
dumpsys statusbar | grep "pkg=com.googlecode.android_scripting"
Every notification initiated by SL4A will have an "id". For example the "id=1" is the notification started by SL4A when the server is running. The one you click to stop the server.
With this in mind you can actually list every notification started by your package and block until the id of your notification disappears.
If so then your next notifications should have an id of 2 or above. Note that this can change if SL4A is stopped or crashes. The next time you may get "id=2" for the (RPC) server notification and then "id=3" and above for your app notifications, until you restart your device and the RPC server notification goes back to "id=1". Knowing this means that you need to keep searching for new notifications within a loop.
For example in bash and using adb:
while read Info; do echo "$Info" | grep 'pkg=com.googlecode.android_scripting'; done < <(adb shell dumpsys statusbar)
You'll get something like this:
1: StatusBarNotification(pkg=com.googlecode.android_scripting id=2 tag=null score=0 notn=Notification(pri=0 contentView=com.googlecode.android_scripting/0x109008f vibrate=null sound=null defaults=0x0 flags=0x62 kind=[null]) user=UserHandle{0}) # SL4A RPC Notification
7: StatusBarNotification(pkg=com.googlecode.android_scripting id=3 tag=null score=0 notn=Notification(pri=0 contentView=com.googlecode.android_scripting/0x109008f vibrate=null sound=null defaults=0x0 flags=0x10 kind=[null]) user=UserHandle{0}) # My Notification
Let's play with this!
Running:
while read Info; do echo "$Info" | grep "pkg=com.googlecode.android_scripting" | awk '{print $3}' | cut -s -d '=' -f2 ; done < <(adb shell dumpsys statusbar)
Will get you for example:
# Without Using Cut
id=2 # SL4A Notificaion
id=3 # My Notification
Or:
# Using Cut
2 # SL4A Notification
3 # My Notification
Let's get to the action! (An ugly solution)
# Start ADB USB Serial Connection
adb devices
# Activate Wireless ADB (Needs Root) - Not Needed
# adb shell setprop service.adb.tcp.port 5555
# stop adbd
# start adbd
Or:
# Start ADB Wireless
adb connect 192.168.1.3
NotifyCount=0
NotifyList=()
while read Notify; do
DumpNotify=`echo "$Notify" | grep "pkg=com.googlecode.android_scripting" | awk '{print $3}' | cut -s -d '=' -f2`
if [ ! -z "$DumpNotify" ] ; then
NotifyList[$NotifyCount]="$DumpNotify"
((NotifyCount++))
fi
done < <(adb shell dumpsys statusbar)
SL4ARPCNotification="2"
MyScriptNotification="3"
if [[ ${NotifyList[*]} != *"$MyScriptNotification"* ]] ; then
adb shell am start -a android.intent.action.MUSIC_PLAYER
fi
This would be better split into 2 functions with arguments for the MyNotification and SL4ARPCNotification variables. That way you can verify from anywhere in the code and divide the job: FunctionX for listing the notifications and FunctionY for comparing the results.
This can easily be done in Python or other interpreters. You need to remember that there's always a notification from SL4A itself. By using threading in Python you can continuously search for new or old notifications without the need to block your program waiting for a change, and thus you can continue running your script normally.
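As the answer notes, this can be done in Python; a minimal sketch of the parsing step (the function name is illustrative, and the sample lines are abbreviated from the dumpsys output shown above; in a real script the dump would come from running adb):

```python
import re

def sl4a_notification_ids(dump: str):
    """Extract notification ids for the SL4A package from `dumpsys statusbar` output."""
    ids = []
    for line in dump.splitlines():
        # Only consider notifications belonging to the SL4A package.
        if "pkg=com.googlecode.android_scripting" in line:
            m = re.search(r"\bid=(\d+)", line)
            if m:
                ids.append(int(m.group(1)))
    return ids

# Abbreviated sample of `adb shell dumpsys statusbar` output:
sample = (
    "1: StatusBarNotification(pkg=com.googlecode.android_scripting id=2 tag=null ...)\n"
    "7: StatusBarNotification(pkg=com.googlecode.android_scripting id=3 tag=null ...)\n"
    "2: StatusBarNotification(pkg=com.android.systemui id=9 tag=null ...)\n"
)
print(sl4a_notification_ids(sample))  # → [2, 3]
```

A loop or a background thread can then poll this function and react when your notification's id disappears from the list.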
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/12169546",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Values of the synthetic control created by Synth package in R I'm using the Synth package in R. I properly created my dataprep.out and synth(dataprep.out). Then I created my path.plot and it looks fine.
Now I want to find the values estimated by Synth for my outcome variable for each time period. Where can I find this info, please?
Thanks in advance
I cannot find the values of the outcome variable created by Synth throughout my time periods. The plot created by path.plot shows the trajectory of the synthetic estimator, but not the values estimated.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/75440536",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: File Upload Error in Asp.net VB I am trying to add a FileUpload control to my aspx page so the user can add pictures, but when I implement the code-behind in VB the FileUpload control is not recognized.
I have this on aspx page inside a formview:
<InsertItemTemplate>
<div id="TaskScreenError">
Upload a Screenshot of Error:
<asp:FileUpload ID="ErrorScreen" runat="server" />
</div>
</InsertItemTemplate>
And I have the following code on my VB, but it says ErrorScreen is not declared.
Dim filereceived As String = ErrorScreen.PostedFile.FileName
' validate the file to ensure it is an image
Select Case Right(filereceived, 4)
Case ".jpg", ".tif", ".bmp", ".gif"
Case Else
lblErrMsg.Text = "Image is in a format we don't accept, please use jpg, tif, bmp or gif."
Exit Sub
End Select
...
It might be something really stupid but I can't figure out what the problem is.
Please help.
Cheers
A: Since your FileUpload control is inside the InsertItemTemplate, you cannot access the FileUpload control directly. You have to do something like this:
Dim fileUpload As FileUpload = TryCast(YOURFORMVIEWID.FindControl("ErrorScreen"), FileUpload)
If fileUpload Is Nothing Then
' Handle if the FileUpload can't be found
Else
Dim filereceived = fileUpload.PostedFile.FileName
' Continue your code here...
End If
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/9771320",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: image rotation going too wide off of my screen how to keep static or shorten width? Hello guys, I was wondering if anyone has a quick fix for my image rotating way too far off my screen, which causes my app to have scroll bars and is not appealing to the eye. Maybe the image could just spin in place to make my formatting issues easier.
here is my code
JS
<head>
<script>
var looper;
var degrees = 0;
function rotateAnimation(el,speed){
var elem = document.getElementById(el);
if(navigator.userAgent.match("Chrome")){
elem.style.WebkitTransform = "rotate("+degrees+"deg)";
} else if(navigator.userAgent.match("Firefox")){
elem.style.MozTransform = "rotate("+degrees+"deg)";
} else if(navigator.userAgent.match("MSIE")){
elem.style.msTransform = "rotate("+degrees+"deg)";
} else if(navigator.userAgent.match("Opera")){
elem.style.OTransform = "rotate("+degrees+"deg)";
} else {
elem.style.transform = "rotate("+degrees+"deg)";
}
looper = setTimeout('rotateAnimation(\''+el+'\','+speed+')',speed);
degrees++;
if(degrees > 359){
degrees = 1;
}
document.getElementById("status").innerHTML = "rotate("+degrees+"deg)";
}
</script>
</head>
HTML
<body>
<div data-role="page" id="pageone">
<img id="img1" src="http://s8.postimg.org/h719p5x85/transimage.png" alt="cog1">
<script>rotateAnimation("img1",15);</script>
</div>
</body>
A: As markE said set the transform-origin to the center of the image, so something like this:
elem.style.transformOrigin = "50% 50%";
elem.style.transform = "rotate("+degrees+"deg)";
You can use the ms- and webkit-prefixed versions of these properties in your code too, for cross compatibility.
Slightly unrelated, I suggest using:
degrees = degrees%360;
instead of your if statement where you wrap from 359 to 1 degrees.
This is because it will work in more general situations, so it is less likely to break. For example, if you changed the step not by +1 but by +10 or -10 it would still wrap correctly between 0 and 360 (for negative steps, note that JavaScript's % preserves the sign of its left operand, so use ((degrees % 360) + 360) % 360 to stay non-negative).
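A quick Python sketch of the wrap-around (Python's % always returns a value in [0, 360) for a positive modulus, so no extra normalization is needed there):

```python
def wrap_degrees(deg: int) -> int:
    # Normalize any angle into [0, 360); works for steps of +1, +10, -10, ...
    return deg % 360

print(wrap_degrees(359 + 1))   # → 0
print(wrap_degrees(350 + 20))  # → 10
print(wrap_degrees(0 - 10))    # → 350
```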
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/34245087",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Choosing marker size in Matplotlib I am doing a scatter plot with square markers in matplotlib like this one:
I want to achieve something like this:
Which means I have to adjust the marker size and the figure size/ratio in such a way that there are no white space between markers. Also there should be a marker per index unit (x and y are both integers) so if y goes from 60 to 100, there should be 40 markers in y direction. At the moment I am tuning it manually. Any idea on what is the best way to achieve this?
A: I found two ways to go about this:
The first is based on this answer. Basically, you determine the number of pixels between the adjacent data-points and use it to set the marker size. The marker size in scatter is given as area.
fig = plt.figure()
ax = fig.add_subplot(111, aspect='equal')
# initialize a plot to determine the distance between the data points in pixel:
x = [1, 2, 3, 4, 2, 3, 3]
y = [0, 0, 0, 0, 1, 1, 2]
s = 0.0
points = ax.scatter(x,y,s=s,marker='s')
ax.axis([min(x)-1., max(x)+1., min(y)-1., max(y)+1.])
# retrieve the pixel information:
xy_pixels = ax.transData.transform(np.vstack([x,y]).T)
xpix, ypix = xy_pixels.T
# In matplotlib, 0,0 is the lower left corner, whereas it's usually the upper
# left for most image software, so we'll flip the y-coords
width, height = fig.canvas.get_width_height()
ypix = height - ypix
# this assumes that your data-points are equally spaced
s1 = xpix[1]-xpix[0]
points = ax.scatter(x,y,s=s1**2.,marker='s',edgecolors='none')
ax.axis([min(x)-1., max(x)+1., min(y)-1., max(y)+1.])
fig.savefig('test.png', dpi=fig.dpi)
The downside of this first approach is that the symbols overlap. I wasn't able to find the flaw in the approach. I could manually tweak s1 to
s1 = xpix[1]-xpix[0] - 13.
to give better results, but I couldn't determine the logic behind the 13.
Hence, a second approach based on this answer. Here, individual squares are drawn on the plot and sized accordingly. In a way it's a manual scatter plot (a loop is used to construct the figure), so depending on the data-set it could take a while.
This approach uses patches instead of scatter, so be sure to include
from matplotlib.patches import Rectangle
Again, with the same data-points:
x = [1, 2, 3, 4, 2, 3, 3]
y = [0, 0, 0, 0, 1, 1, 2]
z = ['b', 'g', 'r', 'c', 'm', 'y', 'k'] # in your case, this is data
dx = [x[1]-x[0]]*len(x) # assuming equally spaced data-points
# you can use the colormap like this in your case:
# cmap = plt.cm.hot
fig = plt.figure()
ax = fig.add_subplot(111, aspect='equal')
ax.axis([min(x)-1., max(x)+1., min(y)-1., max(y)+1.])
for x, y, c, h in zip(x, y, z, dx):
ax.add_artist(Rectangle(xy=(x-h/2., y-h/2.),
color=c, # or, in your case: color=cmap(c)
width=h, height=h)) # Gives a square of area h*h
fig.savefig('test.png')
One comment on the Rectangle: The coordinates are the lower left corner, hence x-h/2.
This approach gives connected rectangles. If I looked closely at the output here, they still seemed to overlap by one pixel - again, I'm not sure this can be helped.
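A likely contributor to the overlap in the first approach (and possibly the mysterious 13) is units: scatter's s is an area in points², while the transform math above yields pixels, and 1 point = dpi/72 pixels. A minimal sketch of the conversion, pure arithmetic with no matplotlib dependency (the function name is just for illustration):

```python
def marker_area_points2(spacing_pixels: float, dpi: float) -> float:
    # matplotlib's scatter takes s as an area in points^2;
    # 1 point = 1/72 inch, so 1 point = dpi / 72 pixels.
    side_points = spacing_pixels * 72.0 / dpi
    return side_points ** 2

# e.g. markers spaced 80 px apart on a 100-dpi canvas:
print(round(marker_area_points2(80, 100), 1))  # → 3317.8
```

Passing the converted area (rather than the raw pixel spacing squared) to scatter should bring the two approaches closer together.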
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/16819193",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10"
}
|
Q: How to sort search results by relevance in javascript I'm building a custom search. As of now, if I enter "The R" I get the result list with The Fellowship of the Ring first, because the phrase "the ring" is in its .text. I want The Return of the King to be first. Is there a way I can give more relevance to the .name field, or sort the match array based on the .name field and the input text?
HTML
<section class="container-fluid px-0 justify-content-center">
<div class="row no-gutters">
<div class="col d-flex justify-content-center search">
<form class="form-inline position-relative">
<input id="search" class="form-control form-control-search" type="text" placeholder="Search..." aria-label="Search">
</form>
<div id="match-list" class="d-none"></div>
</div>
</div>
</section>
JAVASCRIPT
const searchIndex = async searchText => {
const res = await fetch('/data/index.json');
const index = await res.json();
matchList.classList.remove("d-none");
// Get matches to current text input
let matches = index.filter(index => {
const regex = new RegExp(`${searchText}`, 'gi');
return index.name.match(regex) || index.text.match(regex);
});
// Clear when input or matches are empty
if (searchText.length === 0) {
clearSearch();
}
outputHtml(matches);
};
function clearSearch(){
matches = [];
matchList.classList.add("d-none");
}
// Show results in HTML
const outputHtml = matches => {
if (matches.length > 0) {
const html = matches.map(function(match){
return `<a href="${match.url}">
<div class="media mb-2">
<div class="component-icon-slot my-auto" style="background-image: url('/img/${match.url}/icon.png')"></div>
<div class="media-body pl-2">
<h3 class="mt-0 mb-0">${match.name}</h3>
<b>${match.type}</b><br/>
<i>Found in <b>${match.product}</b></i><br/>
${match.text}
</div>
</div></a>`
}).join('');
matchList.innerHTML = html;
}
};
index.JSON
[
{
"name": "The Fellowship of the Rings",
"type": "book",
"text": "Bilbo reveals that he intends to leave the Shire for one last adventure, and he leaves his inheritance, including the Ring, to his nephew Frodo. Gandalf investigates...",
"url": "books/the-fellowship-of-the-rings",
"product": "Books"
},
{
"name": "The Two Towers",
"type": "book",
"text": "Awakening from a dream of Gandalf fighting the Balrog in Moria, Frodo Baggins and Samwise Gamgee find themselves lost in the Emyn Muil near Mordor and discover they are being tracked by Gollum, a former bearer of the One Ring.",
"url": "books/the-two-towers",
"product": "Books"
},
{
"name": "The Return of the King",
"type": "book",
"text": "Gandalf flies in with eagles to rescue the Hobbits, who awaken in Minas Tirith and are reunited with the surviving Fellowship.",
"url": "books/the-return-of-the-king",
"product": "Books"
}
]
A: You could map your data to include relevance points:
const index = await res.json();
const searchTextLowercased = searchText.toLowerCase();
const rankedIndex = index.map(entry => {
let points = 0;
if (entry.name.toLowerCase().includes(searchTextLowercased)) {
points += 2;
}
if (entry.text.toLowerCase().includes(searchTextLowercased)) {
points += 1;
}
return {...entry, points};
}).sort((a, b) => b.points - a.points);
This way, you have ranked results in rankedIndex const.
Keep in mind that your code probably needs some refactoring, because you're fetching data on each search. I'm assuming your searchIndex() is called with every key press or something like that.
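The scoring scheme itself is language-agnostic; here is a minimal sketch of the same idea in Python (the rank helper and the trimmed sample entries are hypothetical, adapted from the JSON above):

```python
def rank(entries, search_text):
    """Score name matches above text matches, then sort by score (descending)."""
    q = search_text.lower()
    scored = []
    for e in entries:
        points = 0
        if q in e["name"].lower():
            points += 2  # a hit in the name counts double
        if q in e["text"].lower():
            points += 1
        scored.append({**e, "points": points})
    # sorted() is stable, so equal scores keep their original order
    return sorted(scored, key=lambda e: e["points"], reverse=True)

books = [
    {"name": "The Return of the King", "text": "Gandalf flies in with eagles..."},
    {"name": "The Fellowship of the Rings", "text": "...the Ring, to his nephew Frodo..."},
]
print([b["name"] for b in rank(books, "ring")])
# → ['The Fellowship of the Rings', 'The Return of the King']
```

Note that plain substring matching still scores any name containing the query, so very short queries like "the r" may need extra rules (e.g. prefix or word-boundary matching) to behave as the question expects.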
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/61857573",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: ttk Button: relief of the button is not taking effect I have a button defined as below:
button_style = ttk.Style()
button_style.configure("button_style.TButton", relief=tk.RAISED,
width = 20, padding=6, font=('Helvetica', 12) )
self.button = ttk.Button (self.myContainer1, text="Browse", style='button_style.TButton',
command = self.browse_file)
self.button.grid(row = 0,column = 3,padx=5, pady=10, ipadx=5, ipady=5)
For some reason the relief effect of the button is not applied. Could somebody help me please?
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/44025234",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: how to change the size of the file from kilobyte into gigabyte using php I have here code that saves a file; the size is in kilobytes, but I want to handle a file whose size is in gigabytes, and I do not know how to change it.
I need help from you because I'm a beginner in PHP and I don't know how to change the size of the file, and I want to save a video in my database.
here is my code:
<?php
function output_file($file, $name, $mime_type='') {
if(!is_readable($file)) die('File not found or inaccessible!');
$size = filesize($file);
$name = rawurldecode($name);
$known_mime_types=array(
"pdf" => "application/pdf",
"txt" => "text/plain",
"html" => "text/html",
"htm" => "text/html",
"exe" => "application/octet-stream",
"zip" => "application/zip",
"doc" => "application/msword",
"xls" => "application/vnd.ms-excel",
"ppt" => "application/vnd.ms-powerpoint",
"gif" => "image/gif",
"png" => "image/png",
"jpeg"=> "image/jpg",
"jpg" => "image/jpg",
"php" => "text/plain"
);
if($mime_type=='') {
$file_extension =
strtolower(substr(strrchr($file,"."),1));
if(array_key_exists($file_extension, $known_mime_types)) {
$mime_type=$known_mime_types[$file_extension];
} else {
$mime_type="application/force-download";
};
};
@ob_end_clean();
if(ini_get('zlib.output_compression'))
ini_set('zlib.output_compression', 'Off');
header('Content-Type: ' . $mime_type);
header('Content-Disposition: attachment; filename="'.$name.'"');
header("Content-Transfer-Encoding: binary");
header('Accept-Ranges: bytes');
header("Cache-control: private");
header('Pragma: private');
header("Expires: Mon, 26 Jul 1997 05:00:00 GMT");
if(isset($_SERVER['HTTP_RANGE'])) {
list($a, $range) = explode("=",$_SERVER['HTTP_RANGE'],2);
list($range) = explode(",",$range,2);
list($range, $range_end) = explode("-", $range);
$range=intval($range);
if(!$range_end) {
$range_end=$size-1;
} else {
$range_end=intval($range_end);
}
$new_length = $range_end-$range+1;
header("HTTP/1.1 206 Partial Content");
header("Content-Length: $new_length");
header("Content-Range: bytes $range-$range_end/$size");
} else {
$new_length=$size;
header("Content-Length: ".$size);
}
$chunksize = 1*(1024*1024);
$bytes_send = 0;
if ($file = fopen($file, 'r')) {
if(isset($_SERVER['HTTP_RANGE']))
fseek($file, $range);
while(!feof($file) &&
(!connection_aborted()) &&
($bytes_send<$new_length)) {
$buffer = fread($file, $chunksize);
print($buffer);
flush();
$bytes_send += strlen($buffer);
}
fclose($file);
} else
die('Error - can not open file.');
die();
}
set_time_limit(0);
$file_path='files/'.$_REQUEST['filename'];
output_file($file_path, ''.$_REQUEST['filename'].'',
'text/plain');
?>
A: To convert kilobytes to gigabytes, divide by 1048576 (1024 * 1024). Did you need something more complicated than that?
$sizeInGB = $sizeInKB / 1048576;
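For reference, the same conversion as a runnable sketch (shown in Python; 1 GB = 1024 * 1024 KB = 1048576 KB):

```python
KB_PER_GB = 1024 * 1024  # 1048576

def kb_to_gb(size_in_kb: float) -> float:
    # Divide a kilobyte count by 1048576 to get gigabytes.
    return size_in_kb / KB_PER_GB

print(kb_to_gb(2097152))  # → 2.0
```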
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/28879583",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-1"
}
|
Q: MVC3 gives a Blank chart My LoggedUserHome.Controller has this action for chart
public ActionResult GetGender()
{
var AllGender = new List<string>(from c in db.tbl_Profile select c.sex).ToList();
var Groping = AllGender
.GroupBy(i => i)
.Select(i => new { sex = i.Key, CountGet = i.Count() }).ToArray(); //get a count for each
var key = new Chart(width: 300, height: 300)
.AddSeries(
chartType: "pie",
legend: "Gender Popularity",
xValue: Groping, xField: "sex",
yValues: Groping, yFields: "CountGet")
.Write("gif");
return null;
}
and in my view I have given
<img src="/LoggedUserHome/GetGender"/>
Thanks to Nexuzz's ToArray() suggestion I managed to populate the chart by reading the database. Thanks a lot Nexuzz.
A: Simple solution is to try to replace your
<img src="/LoggedUserHome/GetGender"/>
in the view with
@{
var d = new PrimeTrekkerEntities1();
var AllGender = new List<string>(from c in d.tbl_Profile select c.sex).ToList();
var Groping = AllGender
.GroupBy(i => i)
.Select(i => new { sex = i.Key, Count = i.Count() }); //get a count for each
var key = new Chart(width: 600, height: 400)
.AddSeries(
chartType: "pie",
legend: "Gender Popularity",
xValue: Groping, xField: "sex")
.Write("gif");
}
And "yes" you don't need action GetGender in this case.
More complex solution would be to leave
<img src="/LoggedUserHome/GetGender"/>
in the view, but make GetGender() action return chart image URL. So, what you can do in GetGender() is somehow render chart into image file and return response that contains the image's path.
update:
I've updated your example a bit, so it displays data in the chart. Here is what I've got:
@{
var AllGender = new List<string>() { "Male", "Female", "Male"};
var Groping = AllGender
.GroupBy(i => i)
.Select(i => new { sex = i.Key, Count = i.Count() }).ToArray(); //get a count for each
var key = new Chart(width: 600, height: 400)
.AddSeries(
chartType: "pie",
legend: "Gender Popularity",
xValue: Groping, xField: "sex",
yValues: Groping, yFields: "count")
.Write("gif");
}
There are two main differences with your example:
*
*Groping is now an array. As far as I understand Chart expects data to be enumerated
*I added yValues and yFields parameters, so chart knows what values to use on Y axis
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/11921187",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Xamarin Android load local html file with routing How can I load local html file with routing in Android Webview?
The url looks like this:
file:///android_asset/Content/index.html/#/first-view
file:///android_asset/Content/index.html/#/second-view
I tried with this code but it's not working:
AppWebView.LoadUrl("file:///android_asset/Content/index.html/#/first-view");
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/51279392",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: Calling base component's methods/features Having Base.vue
<template><div>{{ name }}</div></template>
<script>
export default {
data() {
return {name: 'Example Component'};
}
};
</script>
And Extended.vue:
<script>
import Base from './Base.vue';
export default {
extends: Base,
data() {
return {name: 'Extended Example Component'};
}
};
</script>
Is it possible to reuse data from base one instead of hardcoding Extended Example Component? Some kind of super from OOP? Looking for generic solution involving stuff like mount, methods, computed, etc.
UPDATE
When using OOP we use such approach (Python example):
class Base:
def __init__(self):
self.name = 'Example Component'
class Extended(Base):
def __init__(self):
super().__init__()
self.name = f'Extended {self.name}'
This way we reuse self.name field.
A: extends and mixins merge options in a specified way, this is the official way to inherit component functionality. They don't provide full control over the inheritance.
For static data that may change between component definitions (Base and Extended) custom options are commonly used:
export default {
myName: 'Component',
data() {
return { name: this.$options.myName };
}
}
and
export default {
extends: Base,
myName: `Extended ${Base.options.myName}`
}
Notice that name is an existing Vue option and shouldn't be used for arbitrary values.
Any reusable functions can be extracted from component definition to separate exports.
For existing components that cannot be modified data and other base component properties can be accessed where possible:
data() {
let name;
name = this.$data.name;
// or
name = Base.options.data.call(this).name;
return { name: `Extended ${name}` };
}
This approach is not future-proof and not compatible with the composition API, as there's no clear distinction between data, etc., and this is unavailable in the setup function. An instance can be accessed with getCurrentInstance, but it's an internal API and can change without notice.
A: The data object is a kind of state for a single component, hence the data object inside a component can only be accessed in that component and not from outside or other components.
For your specific requirement, you can use Vuex. It helps you access the state in multiple components. Simply define a state in the vuex store and then access the state in the component.
If you don't want to use Vuex, then another simple solution is to store the data in localStorage and then access the data in multiple components.
A: You can access your data from a child component. You can do that with this.$parent.nameOfTheDataYouHave.
This cheatsheet can help you with the basics of the component anatomy and lifecycle hooks (mounted, created, etc.).
And last this is the proper way of two-way binding data (parent-child).
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/67403046",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: I get "IndexError: list index out of range" when installing Pandas in python So, I installed pandas with cmd:
but when I try to import it I get this error:
Traceback (most recent call last):
  File "C:/Users/Uros/Desktop/fasda.py", line 1, in <module>
    import pandas
ModuleNotFoundError: No module named 'pandas'
and when I try to install it in setting I get this error:
A: As the second screenshot shows, you need to install pandas for the Python interpreter that you use, like this:
C:\Users\Uros\untitled\Scripts\python.exe -m pip install -U pandas
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/59142961",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Generate TIE cells with Yosys? I am using Yosys to synthesise my RTL design which includes a couple of literal constants, such as tied output ports as in the following code:
module my_module (
input a,
input b,
output c,
output d);
assign c = a & b;
assign d = 1'b1;
endmodule
In this case, output d will obviously always be a logical one. The flow I am using includes the abc -liberty my_stdcells.lib call to map the combinatorial logic to the standard cells provided by the library, followed by the clean and write_verilog calls.
The cell library I am using also provides TIELO and TIEHI cells, but the synthesised Verilog netlist doesn't include any instances of those cells but instead still shows literal constants like in the example above.
I could probably write a script to post-process the synthesised netlist to replace these literals with TIE* cell instances from the library, but I am wondering if I could get Yosys to do that for me somehow, resulting in something like
TIEHI tiehi_d_inst(.Y(d));
for the assign d = 1'b1 line in the code above.
A: The command you are looking for is hilomap. For example, to map to TIEHI and TIELO cells with Y outputs use something like:
hilomap -hicell TIEHI Y -locell TIELO Y
This will create an individual TIEHI/TIELO cell for each constant bit in the design. Use the option -singleton to only create single TIEHI/TIELO cells with a higher fan-out.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/33336463",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: Python Pandas match exact pattern I'm trying to match a pattern with a column of a dataframe
pattern='pgk'
df = pd.DataFrame([['merged_pgk', 10], ['merged_Pgk', 3], ['merged_pgk_stim', 12], ['merged_Scp1', 5]], columns=['condition','count'])
I want the row where pattern is in the column condition.
df[df['condition'].str.lower().str.contains(pattern)]
I tried this, but the problem is that it returns the pgk rows (what I want) but also the pgk_stim row (what I don't want).
A: Try np.where with str.contains and case=False to ignore the case, and $ to only match pgk at the end of the string.
import numpy as np

df['check'] = np.where(df['condition'].str.contains('pgk$',case=False), True,False)
print(df)
condition count check
0 merged_pgk 10 True
1 merged_Pgk 3 True
2 merged_pgk_stim 12 False
3 merged_Scp1 5 False
A: you can use endswith function
df[df['condition'].str.lower().str.endswith(pattern)]
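The effect of the $ anchor can be checked with the standard re module alone, since pandas' str.contains applies the same regex semantics element-wise (re.search) when regex matching is enabled:

```python
import re

values = ["merged_pgk", "merged_Pgk", "merged_pgk_stim", "merged_Scp1"]

# Unanchored: matches anywhere in the string, so 'merged_pgk_stim' also hits.
unanchored = [bool(re.search("pgk", v, re.IGNORECASE)) for v in values]
# Anchored with $: only matches when 'pgk' is at the very end.
anchored = [bool(re.search("pgk$", v, re.IGNORECASE)) for v in values]

print(unanchored)  # → [True, True, True, False]
print(anchored)    # → [True, True, False, False]
```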
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/62590794",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: How to find duplicate items in list<>? I have:
List<string> list = new List<string>() { "a", "a", "b", "b", "r", "t" };
How can I get only "a","b"?
I tried to do like this:
List<string> list = new List<string>() { "a", "a", "b", "b", "r", "t" };
List<string> test_list = new List<string>();
test_list = list.Distinct().ToList();
Now test_list has {"a", "b", "r", "t"}
And then:
test_list = test_list.Except(list).ToList();
That was my fail point, because Except() deleted all the elements.
Could you help me with solution?
A: Try this
var duplicates = list.GroupBy(a => a).SelectMany(ab => ab.Skip(1).Take(1)).ToList();
A: A simple approach is using Enumerable.GroupBy:
var dups = list.GroupBy(s => s)
.Where(g => g.Count() > 1)
.Select(g => g.Key);
A: List<string> list = new List<string>() { "a", "a", "b", "b", "r", "t" };
var dups = list.GroupBy(x => x)
.Where(x => x.Count() > 1)
.Select(x => x.Key)
.ToList();
A: var list = new List<string> { "a", "a", "b", "b", "r", "t" };
var distinct = new HashSet<string>();
var duplicates = new HashSet<string>();
foreach (var s in list)
if (!distinct.Add(s))
duplicates.Add(s);
// distinct == { "a", "b", "r", "t" }
// duplicates == { "a", "b" }
A: var duplicates = list.GroupBy(s => s).SelectMany(g => g.Skip(1).Take(1)).ToList();
A: var duplicates = list.GroupBy(a => a).SelectMany(ab => ab.Skip(1).Take(1)).ToList();
It will be more efficient than the one using Where(g => g.Count() > 1) and will return only one element from every group.
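The group-and-count idea carries over to other languages; as a comparison sketch (not part of the C# answers), the same logic in Python using collections.Counter:

```python
from collections import Counter

def duplicates(items):
    # Keep each value that occurs more than once, exactly once in the result.
    # Counter preserves first-seen order (Python 3.7+).
    return [value for value, count in Counter(items).items() if count > 1]

print(duplicates(["a", "a", "b", "b", "r", "t"]))  # → ['a', 'b']
```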
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/15866780",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
}
|
Q: WPF GradientBrush? How many types of gradient brushes are available, like LinearGradientBrush and SolidColorBrush?
And when we create a GradientStop, how does the offset work?
LinearGradientBrush LGB = new LinearGradientBrush();
LGB.StartPoint = new Point(0, 0);
LGB.EndPoint = new Point(0, 1);
LGB.GradientStops.Add(new GradientStop(Color.FromRgb(255,251,255) , 0));
LGB.GradientStops.Add(new GradientStop(Color.FromRgb(206,207,222), 1));
LGB.GradientStops.Add(new GradientStop(Color.FromRgb(0, 247, 0), 2));
rect.Fill = LGB;
Why is the third one "Color.FromRgb(0, 247, 0)" not showing?
Please suggest where I am wrong.
A: The GradientStop.Offset property is a value which ranges from 0.0 to 1.0. From the MSDN documentation:
A value of 0.0 specifies that the stop is positioned at the beginning of the gradient vector, while a value of 1.0 specifies that the stop is positioned at the end of the gradient vector.
Change your second stop's offset to 0.5 and your third's to 1.0 and it should work.
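For intuition, here is a toy sketch (plain JavaScript, not WPF code) of how a gradient engine samples stops along the 0-to-1 gradient vector. Positions are clamped to that range, so a stop placed at offset 2 sits past the end of the vector and never produces a visible band of its own:

```javascript
// Toy 1-D gradient sampler: stops are { offset, value } pairs whose offsets
// are meant to lie in [0, 1]; sample positions are clamped to that range.
function sampleGradient(stops, t) {
  const sorted = [...stops].sort((a, b) => a.offset - b.offset);
  const pos = Math.min(Math.max(t, 0), 1);   // gradient vector spans 0..1
  let prev = sorted[0];
  let next = sorted[sorted.length - 1];
  for (const stop of sorted) {
    if (stop.offset <= pos) prev = stop;
    if (stop.offset >= pos) { next = stop; break; }
  }
  if (next.offset === prev.offset) return prev.value;
  const f = (pos - prev.offset) / (next.offset - prev.offset);
  return prev.value + f * (next.value - prev.value);
}

// With stops at 0, 0.5 and 1.0, a sample at 0.75 interpolates as expected:
const stops = [
  { offset: 0, value: 0 },
  { offset: 0.5, value: 100 },
  { offset: 1, value: 200 },
];
console.log(sampleGradient(stops, 0.75)); // → 150
```

This is only a model of the offset semantics, not of how WPF actually renders, but it shows why redistributing the three stops to 0, 0.5 and 1.0 makes all three colors appear.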
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/1420043",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: GWT application generating IE insecure item warning Our service runs over HTTPS and we're currently experimenting with running a compiled GWT-application within it, only client side, no RPC:s.
It is included within an IFRAME, which seems to be recommended (here for example: http://developerlife.com/tutorials/?p=231 under the heading HTTPS and HTTP).
When doing certain operations within the GWT-app, IE generates an insecure item warning.
http://bagonca.com/insecure_item.png
You may ask yourself why I don't use some nifty Firefox plugin to see what request might be over http. Or why I don't use HTTPWatch in Internet Explorer for the same reason. I have. There are no insecure requests that I can find, anywhere.
What I have read about on the other hand is that Internet Explorer throws this warning for iframes without the src attribute set. And that a potential fix is using src="javascript:false" for any iframe that is populated dynamically.
As I've said, the whole app is included via an IFRAME, and within it GWT itself generates a hidden IFRAME that looks like below.
<iframe tabIndex="-1" id="gwt-app" src="javascript:''" style="border-bottom: medium none; position: absolute; border-left: medium none; width: 0px; height: 0px; border-top: medium none; border-right: medium none;">
I've tried hard coding the src attribute above to a blank page that actually exists and is called with HTTPS on the same domain. I've tried the javascript:false; approach. No luck. The app works like a charm, but IE throws the useless, and false warning.
The warning turns up when I do certain actions within the app, not when it is loaded. Actually when dragging and dropping appointments within the http://code.google.com/p/gwt-calendar/ component.
Has anyone tangled with a similar issue before? Any clues?
A:
Any clues?
I'm not sure in this case, but I did some experiments with iframes (on a somewhat similar topic) about a year ago. I would assume that gwt-calendar tries to communicate with the host page via javascript's parent reference. AFAIR, that's not allowed when the host page isn't loaded from the same origin (including protocol).
A: There are other snippets of Javascript that can also cause a problem. Please see:
http://blog.httpwatch.com/2009/09/17/even-more-problems-with-the-ie-8-mixed-content-warning/
Also, have a look through the pile of comments on:
http://blog.httpwatch.com/2009/04/23/fixing-the-ie-8-warning-do-you-want-to-view-only-the-webpage-content-that-was-delivered-securely/
Some of the commenters have found and fixed other causes of the warning too.
A: This can happen if you have your app running over HTTPS and are fetching images or some other resource over plain HTTP. Check if you have image or css paths hardcoded to http://.
For example, if your app is running at https://example.com and you wish to load an image foo.jpg, the html you should be using is:
<img src="https://example.com/images/foo.jpg"/>
or (ideally)
<img src="images/foo.jpg"/>
and not
<img src="http://example.com/images/foo.jpg"/>
Note that the third example fetches the foo.jpg image over http instead of https. Hence it would cause the issue which you are facing.
To avoid such problems, the best practice is to use ImageResources or relative URLs.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/4286517",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: Error retrieving data from a remote database in PHP
How can I solve the problem shown in the table? This happened while I was trying to retrieve data from my database. In my Java code, the JSON array that I try to call is null.
db_connect.php
<?php
class DB_CONNECT{
// constructor
function __construct() {
// connecting to database
$this->connect();
}
// destructor
function __destruct() {
// closing db connection
$this->close();
}
/**
* Function to connect with database
*/
function connect() {
// import database connection variables
require_once __DIR__ . '/db_config.php';
// Connecting to mysql database
$con = mysql_connect(DB_SERVER, DB_USER, DB_PASSWORD) or die(mysql_error());
// Selecting database
$db = mysql_select_db(DB_DATABASE) or die(mysql_error());
// returning connection cursor
return $con;
}
/**
* Function to close db connection
*/
function close() {
// closing db connection
mysql_close();
}
}
?>
retEqp.php
<?php
/*
* Following code will list all the products
*/
// array for JSON response
$response = array();
// include db connect class
require_once __DIR__ . '/db_connect.php';
// connecting to db
$db = new DB_CONNECT();
// get all products from products table
$result = mysql_query("SELECT * FROM facilities_equipments WHERE item_Type='Equipment'") or die(mysql_error());
// check for empty result
if (mysql_num_rows($result) > 0) {
// looping through all results
// products node
$response["equipments"] = array();
while ($row = mysql_fetch_array($result)) {
// temp user array
$equipment = array();
$equipment["item_ID"] = $row["item_ID"];
$equipment["item_Name"] = $row["item_Name"];
// push single product into final response array
array_push($response["equipments"], $equipment);
}
// success
$response["success"] = 1;
// echoing JSON response
echo json_encode($response);
} else {
// no products found
$response["success"] = 0;
$response["message"] = "No products found";
// echo no users JSON
echo json_encode($response);
}
?>
A: Replace your code
$con = mysql_connect(DB_SERVER, DB_USER, DB_PASSWORD) or die(mysql_error());
with
$con = mysqli_connect(DB_SERVER, DB_USER, DB_PASSWORD) or die(mysqli_connect_error());
Turn off all deprecated warnings, including those from mysql_*:
error_reporting(E_ALL ^ E_DEPRECATED);
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/28913471",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Find Hours Between Startdate and Enddate of different table
TimeSheet timeSheet = new TimeSheet();
TimeSheetData timesheetdata = new TimeSheetData();
timeSheet.StartDate = thisWeekStart;
timeSheet.EndDate = thisWeekEnd;
timesheetdata.Date = dt;
string HourData = item.Value;
string TaskID = item.ID.Split(':')[0];
timesheetdata.TaskID = Guid.Parse(TaskID);
timesheetdata.TimeSheet = timeSheet;
timesheetdata.Hours = Convert.ToDouble(HourData);
db.TimesheetData.Add(timesheetdata);
db.SaveChanges();
var TMS = db.TimeSheets.Where(t => t.StartDate == thisWeekStart
&& t.EndDate == thisWeekEnd).ToList();
My tables are:
*
*Timesheets (Guid, Startdate, Enddate)
*TimeSheetDatas (Guid, hours, Date, Task GUID, Timesheet Guid)
I want to find hours (TimesheetDatas) between StartDate and EndDate (Timesheets).
Hours find = var TMS
A: You didn't provide many details, but do you want something like this:
var query = from ts in db.TimeSheets
join tsd in db.TimesheetDatas on ts.Guid equals tsd.TimesheetGuid
where ts.StartDate > thisWeekStart && ts.StartDate < thisWeekEnd
select tsd.hours;
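The shape of that query — join the child rows to their parent sheet, filter the parents by date range, project out the hours — can be illustrated outside of LINQ too. A toy JavaScript model, with dates simplified to numbers and property names invented for the sketch:

```javascript
// Join TimeSheetDatas to TimeSheets on the timesheet GUID, keep only the
// sheets that fall inside the requested week, and project out the hours.
function hoursForWeek(timeSheets, timeSheetDatas, weekStart, weekEnd) {
  const inWeek = new Set(
    timeSheets
      .filter((ts) => ts.startDate >= weekStart && ts.endDate <= weekEnd)
      .map((ts) => ts.guid),
  );
  return timeSheetDatas
    .filter((tsd) => inWeek.has(tsd.timesheetGuid))
    .map((tsd) => tsd.hours);
}

const sheets = [
  { guid: "A", startDate: 1, endDate: 7 },
  { guid: "B", startDate: 8, endDate: 14 },
];
const entries = [
  { timesheetGuid: "A", hours: 3 },
  { timesheetGuid: "B", hours: 5 },
  { timesheetGuid: "A", hours: 2 },
];
console.log(hoursForWeek(sheets, entries, 1, 7)); // → [3, 2]
```

The Set plays the role of the join key lookup that the LINQ `join ... equals ...` clause performs.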
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/47588250",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-1"
}
|
Q: How to know if email was sent from intent In my app, I'm creating an intent to send an email... it looks like this
final Intent emailIntent = new Intent(Intent.ACTION_SEND)
.putExtra(Intent.EXTRA_EMAIL, new String[]{mBuilder.mEmail})
.putExtra(Intent.EXTRA_SUBJECT, mBuilder.mSubject)
.putExtra(Intent.EXTRA_TEXT, Html.fromHtml(getBody()))
.putExtra(Intent.EXTRA_STREAM, zipUri)
.setType("application/zip");
mBuilder.mContext.startActivity(Intent.createChooser(
emailIntent, mBuilder.mContext.getString(R.string.send_using)));
I want to know if it's possible to know if the email was actually sent or not, so I can perform some actions after that happens.
If so, please explain me how and if possible add a code snippet, please.
Thanks in advance.
A:
I want to know if it's possible to know if the email was actually sent or not
No.
First, there is no requirement that the user chooses an email client for this startActivity() request.
Second, there is nothing in the ACTION_SEND protocol that lets the app offering to share the content know whether or not the user did anything with that content.
A: Maybe you could try startActivityForResult() and see whether the result code changes depending on what the user did.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/40563941",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Asp.net - Starting server with Kestrel on OS X I have not worked with Asp.net before. I'm trying to get an MVC Project running on my machine. I'm running OS X 10.9.3.
I tried following the instructions in this tutorial: http://docs.asp.net/en/latest/tutorials/your-first-mac-aspnet.html
But when I run $ dnx . kestrel in the project's directory I get the following error:
System.InvalidOperationException: Unable to resolve project 'XYZ' from /Users/jeff/Sites/XYZ/XYZ.com
at Microsoft.Framework.Runtime.ApplicationHostContext..ctor (IServiceProvider serviceProvider, System.String projectDirectory, System.String packagesDirectory, System.String configuration, System.Runtime.Versioning.FrameworkName targetFramework, ICache cache, ICacheContextAccessor cacheContextAccessor, INamedCacheDependencyProvider namedCacheDependencyProvider, IAssemblyLoadContextFactory loadContextFactory, Boolean skipLockFileValidation) [0x00000] in <filename unknown>:0
at Microsoft.Framework.Runtime.DefaultHost.Initialize (Microsoft.Framework.Runtime.DefaultHostOptions options, IServiceProvider hostServices) [0x00000] in <filename unknown>:0
at Microsoft.Framework.Runtime.DefaultHost..ctor (Microsoft.Framework.Runtime.DefaultHostOptions options, IServiceProvider hostServices) [0x00000] in <filename unknown>:0
at Microsoft.Framework.ApplicationHost.Program.Main (System.String[] args) [0x00000] in <filename unknown>:0
at (wrapper managed-to-native) System.Reflection.MonoMethod:InternalInvoke (System.Reflection.MonoMethod,object,object[],System.Exception&)
at System.Reflection.MonoMethod.Invoke (System.Object obj, BindingFlags invokeAttr, System.Reflection.Binder binder, System.Object[] parameters, System.Globalization.CultureInfo culture) [0x00000] in <filename unknown>:0
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw () [0x00000] in <filename unknown>:0
at Microsoft.Framework.Runtime.Common.EntryPointExecutor.Execute (System.Reflection.Assembly assembly, System.String[] args, IServiceProvider serviceProvider) [0x00000] in <filename unknown>:0
at dnx.host.Bootstrapper.RunAsync (System.Collections.Generic.List`1 args, IRuntimeEnvironment env, System.Runtime.Versioning.FrameworkName targetFramework) [0x00000] in <filename unknown>:0
What can I try to fix this?
A: It means that dnx cannot find the Startup.cs file. Try running dnx . kestrel inside the folder which contains Startup.cs.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/32209117",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Can someone explain this css margin behaviour? A friend of mine had a problem with this simple html/ css task: http://jsfiddle.net/kaHzY/6/
The problem was that the .headline's margin-top was pushing down the #main div, although it should directly follow the nav without any space between.
Adding a div .inner with a padding of 1px solved this issue. The .headline still had the margin but was not pushing down the #main div anymore.
I know this behaviour but I can't explain it. How can you explain that, especially to someone who is learning css? Why does it behave like that?
Thanks
A: This is classic collapsing margin behavior.
The people who wrote the CSS spec thought this was a good idea in order to prevent excessive white space from being created by margins. Without this behavior, it would be a lot more work to control margin/whitespace between block elements.
References:
CSS2 Spec - http://www.w3.org/TR/CSS2/box.html#collapsing-margins
Andy Budd Article - http://andybudd.com/archives/2003/11/no_margin_for_error/
Eric Meyer Article - http://www.complexspiral.com/publications/uncollapsing-margins/
Why Collapsing Margins Are a Good Idea
Collapsing margins are a feature of the CSS Box Model, which forms the basic working unit for the Visual Formatting Model, which allows us to have a rational environment in which to develop web pages.
In designing the CSS specification, the authors had to balance how rules would be written, for example: margin: 1.0em, and how these rules would work in a CSS formatting engine that needs to lay out blocks of content from top-to-bottom and left-to-right (at least in Western European languages).
Following the train-of-thought from Eric Meyer's article cited above, suppose we have a series of paragraphs with margins styled according to:
p { margin: 1.00em; }
For those of us who are used to seeing printed pages with regular spacing between paragraphs, one would expect the space between any two adjacent paragraphs to be 1.00em. One would also expect the first paragraph to have a 1.00em margin before it and the last paragraph to have a 1.00em margin after it. This is a reasonable, and perhaps simplified, expectation of how the simple CSS rule for p should behave.
However, for the programmers building the CSS engine that interprets the simple p rule, there is a lot of ambiguity that needs to be resolved. If one is expecting the printed page interpretation of the CSS rule, then this naturally leads to the collapsing margin behavior. So the programmers come up with a more complicated rule like: if there are two adjacent p tags with top and bottom margins, merge the margins together.
Now this begs the question, how do you "merge margins together"? min or max of the adjacent top and bottom margins, average of the two? margin of first element always? And what if you have other block elements other than p's? and if you add a border or background? what next?
Finally, how do you compute all of these margin settings in such a way so that you don't have to iterate through the entire set of HTML elements several times (which would make complicated pages load slowly especially in early generations of browsers)?
In my opinion, collapsing margins offered a solution to a complicated problem that balanced the ease of writing CSS rules for margins, the user expectation of how printed pages are laid out in our typographic heritage, and finally, provided a procedure that could be implemented within the programming environment that browsers exist in.
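To make the "merge margins together" question above concrete: CSS 2.1 resolves adjoining vertical margins to the largest positive margin plus the most negative negative margin. A toy calculation (obviously not browser code):

```javascript
// Collapsed margin between two adjacent blocks, per CSS 2.1 §8.3.1:
// the maximum of the positive adjoining margins, minus the maximum
// absolute value of the negative adjoining margins.
function collapsedMargin(marginBottom, marginTop) {
  const margins = [marginBottom, marginTop];
  const maxPositive = Math.max(0, ...margins);
  const minNegative = Math.min(0, ...margins);
  return maxPositive + minNegative;
}

console.log(collapsedMargin(16, 16)); // → 16 (not 32)
console.log(collapsedMargin(20, 10)); // → 20
console.log(collapsedMargin(20, -5)); // → 15
```

So two adjacent paragraphs with `margin: 1.00em` end up exactly 1.00em apart, which matches the typographic expectation described above.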
A: I achieved your desired layout by:
*
*Resetting the default margins and padding on the p tags:
p{
margin: 0px auto;
padding: 0px;
}
Make sure you add this to the top of your CSS.
*Then changed the .headline and .text classes to use padding instead of margin; using your same values.
*Removed the #main .inner CSS entirely.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/17155254",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: Overcome VBA InputBox Character Limit The current function I use to collect text, InputBox, apparently can't accept more than 255 characters, and I need to be able to collect more than that. Is there a parameter or different function I can use to increase this limit?
A: To be pedantic, the Inputbox will let you type up to 255 characters, but it will only return 254 characters.
Beyond that, yes, you'll need to create a simple form with a textbox. Then just make a little "helper function" something like:
Function getBigInput(prompt As String) As String
frmBigInputBox.Caption = prompt
frmBigInputBox.Show
getBigInput = frmBigInputBox.txtStuff.Text
End Function
or something like that...
A: Thanks BradC for that info. My final code was roughly as follows. I have a button that calls the form that I created and positions it, as I was having some issues with the form being in the wrong spot every time after the first time I used it.
Sub InsertNotesAttempt()
NoteEntryForm.Show
With NoteEntryForm
.Top = 125
.Left = 125
End With
End Sub
The userform was a TextBox and two CommandButtons(Cancel and Ok). The code for the buttons was as follows:
Private Sub CancelButton_Click()
Unload NoteEntryForm
End Sub
Private Sub OkButton_Click()
Dim UserNotes As String
UserNotes = NotesInput.Text
Application.ScreenUpdating = False
If UserNotes = "" Then
NoteEntryForm.Hide
Exit Sub
End If
Worksheets("Notes").ListObjects("Notes").ListRows.Add (1)
Worksheets("Notes").Range("Notes").Cells(1, 1) = Date
Worksheets("Notes").Range("Notes").Cells(1, 2) = UserNotes
Worksheets("Notes").Range("Notes").Cells(1, 2).WrapText = True
' Crap fix to get the wrap to work. I noticed that after I inserted another row the previous rows
' word wrap property would kick in. So I just add in and delete a row to force that behaviour.
Worksheets("Notes").ListObjects("Notes").ListRows.Add (1)
Worksheets("Notes").Range("Notes").Item(1).Delete
NotesInput.Text = vbNullString
NotesInput.SetFocus ' Retains focus on text entry box instead of command button.
NoteEntryForm.Hide
Application.ScreenUpdating = True
End Sub
A: I don't have enough rep to comment, but in the sub form_load for the helper you can add:
me.AutoCenter = True
Outside of that form, you can do it like this:
NoteEntryForm.Show
Forms("NoteEntryForm").AutoCenter = True
My Access forms get all confused when I go from my two extra monitors at work to my one extra monitor at home, and are sometimes lost in the corner. This AutoCenter has made it into the form properties of every one of my forms.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/2969516",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
}
|
Q: Proper method to timing how long a sorting algorithm takes (Using Java)
I'm testing sorting algorithms to see literally how long different sorts take. I want to weed out the erroneous times, so ideally I would like to start a timer, run the sort in a loop of say 100 times, stop the timer, then divide by 100 to get a pretty accurate measure.
The problem is if I were to loop the same array, it'll sort properly the first time, then each sort after, it'll keep sorting the already sorted array, which isn't what I want.
Maybe I'm missing an obvious solution, but is there any way I can make it keep sorting the same initial randomized array?
I thought about reassigning the newly sorted array back to the initial random array each time, but that would mess up my timer.
Thanks for any suggestions.
what i would like to do:
startTime = System.nanoTime();
for(int i=0; i<cntr; i++) {
sort array
}
endTime = System.nanoTime();
time = (endTime - startTime)/cntr;
A: You can make a copy before you start sorting, and then copy from that stored copy into the array being sorted in each iteration of the loop.
int[] toBeSorted = new int[10000];
// fill the array with data
int[] copied = new int[10000];
System.arraycopy(toBeSorted, 0, copied, 0, copied.length);
// prepare the timer, but do not start it
for (int i = 0; i != 100; i++) {
System.arraycopy(copied, 0, toBeSorted, 0, copied.length);
// Now the toBeSorted is in its initial state
// Start the timer
Arrays.sort(toBeSorted);
// Stop the timer before the next iteration
}
A: You can use the System.currentTimeMillis() method to get the current time then subtract when the method finishes executing.
long totalRuntime = 0;
for(int i = 0; i < 100; i++)
{
long startTime = System.currentTimeMillis();
sortArrays()
long endTime = System.currentTimeMillis();
totalRuntime += (endTime - startTime);
}
System.out.println("Algorithm X on average took "
+ totalRuntime/100 + " milliseconds");
If you want to do this X times just keep a counter for each algorithm and increment. Then you can divide by the total number of runs at the end and compare.
A: Generally you would stop and start the timer in between each run of the algorithm you're testing, adding up the individual times and then dividing by the number of runs. That way any "setup time" isn't included because the timer isn't running during the setup.
A: Go with something like this
new array equals random array,
start timer
sort new array
stop timer
add time to your list of times
repeat until necessary
As long as you copy the original array each time, you will never sort the original one
A: Put the sorting algorithm in a function and keep calling it with the same array again and again, passing the array to the function after cloning it using the clone method.
You can call the current time function and print it out every time you run the loop.
A: If you're willing to blow memory, then just make 100 (or however many) copies of the same array before you start timing. If you're not, then time sorting and copying together, and then time just copying to see approximately how much of your sorting + copying time was spent copying.
Also, sidenote: look into using a benchmarking framework like Caliper instead of doing your own "manual" benchmarking. It's easier, and they've solved a lot of problems that you might not even know are going on that could screw up your timings.
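Putting the advice in these answers together — restore the unsorted data before each run, and keep the timer running only around the sort itself — the pattern can be sketched as follows. (JavaScript rather than Java for brevity; the structure is the same, and the function and variable names are my own.)

```javascript
// Average the cost of a sort over many runs: restore the unsorted input
// before each run, with the copy kept outside the timed span so that only
// the sort itself is measured.
function timeSort(original, runs) {
  let totalNs = 0n;
  for (let i = 0; i < runs; i++) {
    const work = original.slice();            // restore the unsorted state
    const start = process.hrtime.bigint();
    work.sort((a, b) => a - b);               // only this line is timed
    totalNs += process.hrtime.bigint() - start;
  }
  return Number(totalNs / BigInt(runs));      // average nanoseconds per run
}

const input = Array.from({ length: 10000 }, () => Math.floor(Math.random() * 1e6));
console.log(`average sort time: ${timeSort(input, 100)} ns`);
```

As the last answer notes, a real benchmarking framework handles warm-up and other pitfalls that this sketch ignores.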
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/9561110",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Getting graphics object to draw with buffer strategy I've made a JFrame with a canvas on it and I want to draw on that canvas. At a later date the canvas will be updating many times a second so I am using a buffer strategy for this. Here is the code:
package mainPackage;
import java.awt.Canvas;
import java.awt.Color;
import java.awt.Graphics;
import java.awt.image.BufferStrategy;
import javax.swing.JFrame;
public class TickPainter {
//just some presets for a window.
public static JFrame makeWindow(String title, int width, int height) {
JFrame mainWindow = new JFrame();
mainWindow.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
mainWindow.setSize(width, height);
mainWindow.setVisible(true);
mainWindow.setLocationRelativeTo(null);
mainWindow.setTitle(title);
return mainWindow;
}
public static void main(String[] args) {
JFrame mainWindow = makeWindow("Practice", 800, 600);
Canvas mainCanvas = new Canvas();
mainWindow.add(mainCanvas);
mainCanvas.setSize(mainWindow.getWidth(), mainWindow.getHeight());
mainCanvas.setBackground(Color.white);
mainCanvas.createBufferStrategy(3);
BufferStrategy bufferStrat = mainCanvas.getBufferStrategy();
Graphics g = bufferStrat.getDrawGraphics();
g.setColor(Color.black);
g.fillRect(250, 250, 250, 250);
g.dispose();
bufferStrat.show();
}
}
The program does not draw the black rectangle as intended, I feel like I've missed something really obvious here and I just can't see it. At the moment the program just makes a blank white canvas. I feel like part of the issue is that the buffer is just passing the frame with the rectangle faster than I can see, but there is no frame to load after that so I don't know why it's doing this.
A: A BufferStrategy has a number of initial requirements which must be met before it can be rendered to. Also, because of the nature of how it works, you might need to repeat the paint phase a number of times before it's actually accepted by the hardware layer.
I recommend going through the JavaDocs and tutorial; they provide invaluable examples of how you're supposed to use a BufferStrategy.
The following example uses a Canvas as the base component and sets up a rendering loop within a custom Thread. It's very basic, but shows the basic concepts you'd need to implement...
import java.awt.Canvas;
import java.awt.Color;
import java.awt.Dimension;
import java.awt.Graphics;
import java.awt.image.BufferStrategy;
import java.util.concurrent.atomic.AtomicBoolean;
import javax.swing.JFrame;
import javax.swing.SwingUtilities;
public class Test {
public static void main(String[] args) {
new Test();
}
public Test() {
SwingUtilities.invokeLater(new Runnable() {
@Override
public void run() {
TestCanvas canvas = new TestCanvas();
JFrame frame = new JFrame();
frame.add(canvas);
frame.setTitle("Test");
frame.pack();
frame.setLocationRelativeTo(null);
frame.setVisible(true);
canvas.start();
}
});
}
public class TestCanvas extends Canvas {
private Thread thread;
private AtomicBoolean keepRendering = new AtomicBoolean(true);
@Override
public Dimension getPreferredSize() {
return new Dimension(200, 200);
}
public void stop() {
if (thread != null) {
keepRendering.set(false);
try {
thread.join();
} catch (InterruptedException ex) {
ex.printStackTrace();
}
}
}
public void start() {
if (thread != null) {
stop();
}
keepRendering.set(true);
thread = new Thread(new Runnable() {
@Override
public void run() {
createBufferStrategy(3);
do {
BufferStrategy bs = getBufferStrategy();
while (bs == null) {
System.out.println("get buffer");
bs = getBufferStrategy();
}
do {
// The following loop ensures that the contents of the drawing buffer
// are consistent in case the underlying surface was recreated
do {
// Get a new graphics context every time through the loop
// to make sure the strategy is validated
System.out.println("draw");
Graphics graphics = bs.getDrawGraphics();
// Render to graphics
// ...
graphics.setColor(Color.RED);
graphics.fillRect(0, 0, 100, 100);
// Dispose the graphics
graphics.dispose();
// Repeat the rendering if the drawing buffer contents
// were restored
} while (bs.contentsRestored());
System.out.println("show");
// Display the buffer
bs.show();
// Repeat the rendering if the drawing buffer was lost
} while (bs.contentsLost());
System.out.println("done");
try {
Thread.sleep(100);
} catch (InterruptedException ex) {
ex.printStackTrace();
}
} while (keepRendering.get());
}
});
thread.start();
}
}
}
Remember, the point of BufferStrategy is to give you full control over the painting process, so it works outside the normal painting process generally implemented by AWT and Swing.
"At a later date the canvas will be updating many times a second so I am using a buffer strategy for this" - Before going down the "direct to hardware" solution, I'd consider using a Swing Timer and the normal painting process to see how well it works
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/47377513",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Async Loop seems to partially parallel I'm trying to implement a function, which slices a file into chunks and then sends them to my backend one after another.
The function has to hash each file & validate if the hash is already known before starting the upload.
The following code is the code part, where my problematic function is called.
process: async (
fieldName,
file,
metadata,
load,
error,
progress,
abort,
transfer,
options,
) => {
// fieldName is the name of the input field - No direct relevance for us
// logger.log(`fieldName: ${fieldName}`);
// Usually Empty - Can be added with Metadata-Plugin
// logger.log(metadata);
const source = this.$axios.CancelToken.source();
const abortProcess = () => {
// This function is entered if the user has tapped the cancel button
source.cancel('Operation cancelled by user');
// Let FilePond know the request has been cancelled
abort();
};
let chunks = [];
const {
chunkForce,
chunkRetryDelays,
chunkServer,
chunkSize,
chunkTransferId,
chunkUploads,
} = options;
// Needed Parameters of file
const { name, size } = file;
if (chunkTransferId) {
/** Here we handle what happens, when Retry-Button is pressed */
logger.log(`Already defined: ${chunkTransferId}`);
return { abortProcess };
}
this.hashFile(file)
.then((hash) => {
logger.log(`File Hashed: ${hash}`);
if (hash.length === 0) {
error('Hash not computable');
}
return hash;
})
.then((hash) => {
logger.log(`Hash passed through: ${hash}`);
return this.requestTransferId(file, hash, source.token)
.then((transferId) => {
logger.log(`T-ID receieved: ${transferId}`);
return transferId;
})
.catch((err) => {
error(err);
});
})
.then((transferId) => {
transfer(transferId);
logger.log(`T-ID passed through: ${transferId}`);
// Split File into Chunks to prepare Upload
chunks = this.splitIntoChunks(file, chunkSize);
// Filter Chunks - Remove all those which have already been uploaded with success
const filteredChunks = chunks.filter(
(chunk) => chunk.status !== ChunkStatus.COMPLETE,
);
logger.log(filteredChunks);
return this.uploadChunks(
filteredChunks,
{ name, size, transferId },
progress,
error,
source.token,
).then(() => transferId);
})
.then((transferId) => {
// Now Everything should be uploaded -> Set Progress to 100% and make item appear finished
progress(true, size, size);
load(transferId);
logger.log(transferId);
})
.catch((err) => error(err));
return { abortProcess };
},
uploadChunks is where the problem starts.
async uploadChunks(chunks, options, progress, error, cancelToken) {
const { name, size, transferId } = options;
for (let index = 0; index < chunks.length; index += 1) {
let offset = 0;
const chunk = chunks[index];
chunk.status = ChunkStatus.PROCESSING;
// eslint-disable-next-line no-await-in-loop
await this.uploadChunk(chunk.chunk, options, offset)
.then(() => {
chunk.status = ChunkStatus.COMPLETE;
offset += chunk.chunk.size;
progress(true, offset, size);
logger.log(offset); // This is always chunk.chunk.size, instead of getting bigger
})
.catch((err) => {
chunk.status = ChunkStatus.ERROR;
error(err);
});
}
},
uploadChunk(fileChunk, options, offset) {
const { name, size, transferId } = options;
const apiURL = `${this.$config.api_url}/filepond/patch?id=${transferId}`;
return this.$axios.$patch(apiURL, fileChunk, {
headers: {
'content-type': 'application/offset+octet-stream',
'upload-name': name,
'upload-length': size,
'upload-offset': offset,
},
});
},
As you can see uploadChunks takes an array of chunks, some options, two functions (progress & error) and a cancelToken (which I currently don't use, since I'm still stuck at this problem)
Each chunk in the array has the form of:
{
status: 0, // Some Status indicating, if it's completed or not
chunk: // binary data
}
The Function uploadChunks iterates over the chunk array and should in theory upload one chunk after another and always increment offset after each upload and then call progress. After this it should start the next iteration of the loop, where offset would be bigger than in the call before.
The calls themselves get executed one after another, but every call has the same offset and progress does not get repeatedly called. Instead my progress-bar locks until everything is uploaded and then jumps to 100%, due to the load-call in the first function right at the end.
So the upload itself works fine in the correct order, but all the code after the await this.uploadChunk... doesn't get called after each chunk and blocks somehow.
A: You are declaring offset inside the loop body, so it is reset to 0 on every iteration and every upload is sent with the same offset. Move this line:
let offset = 0;
before the for statement so the offset accumulates across chunks.
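For illustration, here is a minimal self-contained sketch of the fixed loop, with uploadChunk mocked out (the real implementation would PATCH each chunk via axios as in the question):

```javascript
// Mocked chunks: each entry just carries a size, standing in for binary data.
const chunks = [{ size: 100 }, { size: 100 }, { size: 50 }];
const size = chunks.reduce((sum, c) => sum + c.size, 0);

// Stand-in for the real network call.
async function uploadChunk(chunk, offset) {
  return { uploaded: chunk.size, at: offset };
}

async function uploadChunks(chunks) {
  let offset = 0; // declared ONCE, before the loop, so it accumulates
  const progressLog = [];
  for (const chunk of chunks) {
    // eslint-disable-next-line no-await-in-loop
    await uploadChunk(chunk, offset);
    offset += chunk.size;
    progressLog.push(offset); // progress(true, offset, size) would be called here
  }
  return progressLog;
}

uploadChunks(chunks).then((log) => console.log(log)); // logs [ 100, 200, 250 ]
```

With the declaration hoisted out of the loop, each progress call now sees the running total instead of a freshly reset 0.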
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/67427015",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Return Array from Function in Swift So I'm a bit new to Swift and Objective-C as well and was wondering if someone could help me out a bit.
I'm used to creating a utils file where I have functions I use often in programming.
In this case I'm trying to call a function from another Swift file and return an array of data.
For example, in my mainViewController.swift I'm calling the function:
var Data = fbGraphCall()
In the Utils.swift file I have a function that I'm trying to get to return an array of the collected data.
func fbGraphCall() -> Array<String>{
var fbData: [String] = [""]
if (FBSDKAccessToken.currentAccessToken() != nil){
// get fb info
var userProfileRequestParams = [ "fields" : "id, name, email, about, age_range, address, gender, timezone"]
let userProfileRequest = FBSDKGraphRequest(graphPath: "me", parameters: userProfileRequestParams)
let graphConnection = FBSDKGraphRequestConnection()
graphConnection.addRequest(userProfileRequest, completionHandler: { (connection: FBSDKGraphRequestConnection!, result: AnyObject!, error: NSError!) -> Void in
if(error != nil) {
println(error)
} else {
// DEBUG
println(result)
let fbEmail = result.objectForKey("email") as! String
// DEBUG
println(fbEmail)
fbData.append("\(fbEmail)")
let fbID = result.objectForKey("id") as! String
if(fbEmail != "") {
PFUser.currentUser()?.username = fbEmail
PFUser.currentUser()?.saveEventually(nil)
}
println("Email: \(fbEmail)")
println("FBUserId: \(fbID)")
}
})
graphConnection.start()
}
println(fbData)
return fbData
}
I can confirm that I'm getting the fbEmail and fbID back from Facebook with my debug statements, but as I said I'm still new to returning data back.
Ideally I want an array back if it's more than one value, or the ability to get back data like Data.fbEmail and Data.fbID, or a dictionary like ["email" : "email@gmail.com", "id" : "1324134124zadfa"].
When I hit the return statement it's blank, so I'm not sure why the constants are not keeping values or passing values into my fbData array. I'm trying fbData.append(fbEmail), for example.
Any thoughts on what might be wrong?
A: The graphConnection.addRequest is an asynchronous function, and you are trying to synchronously return the array of strings. This won't work because graphConnection.addRequest runs in the background to avoid blocking the main thread, so your function returns before the completion handler has filled the array. Instead of returning the data directly, use a completion handler. Your function would then become this:
func fbGraphCall(completion: ([String]) -> Void, errorHandler errorHandler: ((NSError) -> Void)?) {
if (FBSDKAccessToken.currentAccessToken() != nil) {
// get fb info
var userProfileRequestParams = [ "fields" : "id, name, email, about, age_range, address, gender, timezone"]
let userProfileRequest = FBSDKGraphRequest(graphPath: "me", parameters: userProfileRequestParams)
let graphConnection = FBSDKGraphRequestConnection()
graphConnection.addRequest(userProfileRequest, completionHandler: { (connection: FBSDKGraphRequestConnection!, result: AnyObject!, error: NSError!) -> Void in
if(error != nil) {
println(error)
errorHandler?(error!)
} else {
var fbData = [String]() // Notice how I removed the empty string you were putting in here.
// DEBUG
println(result)
let fbEmail = result.objectForKey("email") as! String
// DEBUG
println(fbEmail)
fbData.append("\(fbEmail)")
let fbID = result.objectForKey("id") as! String
if(fbEmail != "") {
PFUser.currentUser()?.username = fbEmail
PFUser.currentUser()?.saveEventually(nil)
}
println("Email: \(fbEmail)")
println("FBUserId: \(fbID)")
completion(fbData)
}
})
graphConnection.start()
}
}
I added the completion handler and the error handler blocks that get executed according to what's needed.
Now at the call site you can do something like this:
fbGraphCall({ fbData in
    println(fbData) // fbData is the array of Strings retrieved
}, errorHandler: { error in
    println(error) // TODO: Error handling
})
// Optionally you can pass `nil` for the errorHandler block in case you don't want to do any error handling, but this is not recommended.
Edit:
In order to use the variables you would do something like this at the call site
fbGraphCall( { array in
dispatch_async(dispatch_get_main_queue(), { // Get the main queue because UI updates must always happen on the main queue.
self.fbIDLabel.text = array.first // array is the array we received from the function so make sure you check the bounds and use the right index to get the right values.
self.fbEmailLabel.text = array.last
})
}, errorHandler: {
println($0)
// TODO: Error handling
})
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/31691433",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: RobotFramework - Handle 2 Browser tabs at the same time and test them I have this problem where I need to test a functionality of my web app with 2 tabs open and check that if I update something on tab 1, tab 2 refreshes. I am trying to get this done using the Press Key keyword.
I am targeting the body and using the ASCII number for CTRL+T to open a new tab, but a new browser window opens rather than a new tab. I am using the latest version of Chrome.
I have also tried to use \\09 but that gives me the same result:
Press Key tag=body \\20
Then I try to go back to the window using the Select Window MAIN keyword, but that doesn't work.
QUESTION: How can I open 2 tabs at the same time and test them using Robot Framework with SeleniumLibrary?
A: I think your test would be just as valid with two windows as it would with one window and two tabs.
You can call the open browser keyword multiple times, giving each window its own unique alias. You can then switch between them with the switch browser keyword and the appropriate alias.
Example
*** Settings ***
Library SeleniumLibrary
Suite Teardown close all browsers
*** Variables ***
${browser} chrome
*** Test cases ***
Example using two windows
open browser http://www.example.com ${browser} alias=tab1
open browser http://www.w3c.org ${browser} alias=tab2
switch browser tab1
location should be http://www.example.com/
switch browser tab2
location should be https://www.w3.org/
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/51842127",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: How to create CosmosDB User Defined Function programmatically I know how to create CosmosDB databases and collection using ARM templates. I have a UDF (User Defined Function) that I would like to deploy using an ARM template as well but it doesn't seem to be supported.
Am I missing something? Is there a different way to programmatically deploy/maintain a UDF?
A: You could consider using the Cosmos DB SDK or REST API to deploy the UDF into your collection.
Sample code:
string udfId = "Tax";
var udfTax = new UserDefinedFunction
{
Id = udfId,
Body = "...your udf function body...",
};
Uri containerUri = UriFactory.CreateDocumentCollectionUri("myDatabase", "myContainer");
await client.CreateUserDefinedFunctionAsync(containerUri, udfTax);
A: Simple answer No, Stored procedures and User Defined Functions are not supported via Azure Resource Management Templates as of today.
A: This is now possible:
"type": "Microsoft.DocumentDB/databaseAccounts/sqlDatabases/containers/userDefinedFunctions",
"apiVersion": "2019-08-01",
https://learn.microsoft.com/en-us/azure/templates/microsoft.documentdb/2019-08-01/databaseaccounts/sqldatabases/containers/userdefinedfunctions
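For reference, a minimal sketch of such a resource definition following that schema (the account, database, container, and function names here are placeholders, and the function body is just an example):

```json
{
  "type": "Microsoft.DocumentDB/databaseAccounts/sqlDatabases/containers/userDefinedFunctions",
  "apiVersion": "2019-08-01",
  "name": "myAccount/myDatabase/myContainer/Tax",
  "dependsOn": [
    "[resourceId('Microsoft.DocumentDB/databaseAccounts/sqlDatabases/containers', 'myAccount', 'myDatabase', 'myContainer')]"
  ],
  "properties": {
    "resource": {
      "id": "Tax",
      "body": "function tax(income) { return income * 0.10; }"
    }
  }
}
```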
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/56158535",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Roslyn Refactor Find References to a method As part of a refactor project I am working on I need to replace calls to Obsolete methods with correct calls. This has worked for other calls but I have hit a problem with my current one.
I need to replace all calls to the following:
[Obsolete("use string.IsNullOrWhiteSpace instead of this extension")]
public static bool IsNullOrWhiteSpace(this string s)
{
return string.IsNullOrWhiteSpace(s);
}
with the simple string.IsNullOrWhiteSpace(string here) from the framework. When run it did work in that it fixed all the instances that it found. What I can't account for is the 200+ instances that it did not identify.
Here are two examples that it did not find, this one is from the same class.
public static string TrimToUpper(this string s)
{
return !IsNullOrWhiteSpace(s) ? s.Trim().ToUpperInvariant() : s;
}
I thought perhaps it was because the method was called directly, but that doesn't make sense to me as far as finding the reference goes. A second example is simply if (enterpriseCode.IsNullOrWhiteSpace()) in another class, in a project in the same solution. The first pass detected several other instances just like it, but not this one.
Here is the code I am using to find all the references to the method I wish to replace:
public sealed class ReferenceFinder
{
public ReferenceFinder(string solutionPath)
{
SolutionInfo = GetSolutionInfo(solutionPath).GetAwaiter().GetResult();
}
public ReferenceTree RetreiveSolutionReferences(string typeName, string methodName)
=> new ReferenceTree(typeName, methodName, GetReferenceLocationsByProject(GetReferenceSymbols(GetTypeSymbols(typeName), methodName)));
private IEnumerable<INamedTypeSymbol> GetTypeSymbols(string typeName)
=> SolutionInfo.ProjectInfo.Select(pi => pi.Compilation.GetTypeByMetadataName(typeName)).Where(x => x != null);
private IEnumerable<ReferencedSymbol> FindConstructorReferences(INamedTypeSymbol symbol)
=> symbol.Constructors.SelectMany(c => SymbolFinder.FindReferencesAsync(c, SolutionInfo.Solution).Result);
private IEnumerable<ReferencedSymbol> FindMethodReferences(INamedTypeSymbol symbol, string methodName)
=> symbol.GetMembers(methodName).SelectMany(m => SymbolFinder.FindReferencesAsync(m, SolutionInfo.Solution).Result);
private IEnumerable<ReferencedSymbol> GetReferenceSymbols(IEnumerable<INamedTypeSymbol> symbols, string methodName)
=> symbols.SelectMany(x => x.Name == methodName ? FindConstructorReferences(x) : FindMethodReferences(x, methodName));
private ILookup<Project, ReferenceLocation> GetReferenceLocationsByProject(IEnumerable<ReferencedSymbol> symbols)
=> symbols.SelectMany(x => x.Locations).Distinct().ToLookup(x => x.Document.Project);
private async Task<SolutionInfo> GetSolutionInfo(string path)
{
using (var workspace = MSBuildWorkspace.Create())
{
var solution = await workspace.OpenSolutionAsync(path);
var compilations = await Task.WhenAll(solution.Projects.Select(async x => (x, await x.GetCompilationAsync())).AsEnumerable());
return new SolutionInfo((solution, compilations));
}
}
public readonly SolutionInfo SolutionInfo;
}
I am passing RetreiveSolutionReferences the following: "DVWorkshop.StringExtensions" (the class and namespace of the method to replace) and "IsNullOrWhiteSpace" (the method to replace).
I even wrote a short method to try step through and see what references were being picked up.
public void GetReferencesForSpecificProject(string typeName, string methodName, string projectName)
{
var pi = SolutionInfo.ProjectInfo.First(x => x.Project.Name == projectName);
var symbol = pi.Compilation.GetTypeByMetadataName(typeName);
var members = symbol.GetMembers(methodName).ToList();
var rList = new List<IEnumerable<ReferenceLocation>>();
var cList = new List<IEnumerable<Location>>();
foreach (var m in members)
{
var referenceLocations = SymbolFinder.FindReferencesAsync(m, SolutionInfo.Solution).Result.SelectMany(r => r.Locations);
var callerLocations = SymbolFinder.FindCallersAsync(m, SolutionInfo.Solution).Result.SelectMany(r => r.Locations);
rList.Add(referenceLocations);
cList.Add(callerLocations);
}
}
I see several locations in the reference and caller collections, but not nearly enough, and none of them are for the project I'm passing in ("Application"), even though I can see the calls and Go To Definition takes me to my method.
What is causing me to miss references to my method?
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/49636079",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: Python scripts not running after working on a C project, and VS Code IntelliSense is in Partial Mode So I was just done editing my C project in VS Code, and I went back to work on my Python project in VS Code, but now Python scripts are not working or running anymore, or even showing anything in the output window of VS Code (as shown in the figure below). I keep pressing Run > Run Python File in Terminal but everything remains exactly the same, with no output.
I can see that IntelliSense mentions it's in Partial Mode, which was not the case before working on the C project.
I reset my settings.json file to restore everything as it was before, but it doesn't work. Even uninstalling and reinstalling VS Code was not helpful, and Python scripts are still not working.
I have the "Python" extension installed (the one made by Microsoft).
Any recommendation or advice would be highly appreciated.
A: Partial Mode is relatively rare. Based on what you describe, I think it may be caused by the following:
The project is currently loading. Once loading completes, you will start getting project-wide IntelliSense for it. In the meantime, VS Code's IntelliSense operates in partial mode. Partial mode tries its best to provide IntelliSense for any Python files you have open, but it is limited and not able to offer any cross-file IntelliSense features.
I think you could spend more time waiting for VS Code to finish loading the project. Of course, if it still doesn't work, you could reinstall the Python extension.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/71729250",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Sounds using Audio in my simple JavaScript game won't play The following is an excerpt from my little game (which all works apart from the sound; there are no errors in Chrome either, and console.log proves the code is run):
<!DOCTYPE html>
<html>
<head>
<title> Final Game Code + HS RA</title>
</head>
<body>
<script>
var myAudio = new Audio();
myAudio.source = "Sounds/Impact_1.mp3";
myAudio.volume = 1;
// a bunch of code for the game that all works goes here including a function that calls the below...
nanonautTouchedARobot = true;
myAudio.load();
myAudio.play(); // play the sound #############################################
// below line is just test code to prove progress on the console
console.log('OUCH!');
// a bit more code for the game that all works goes here...
</script>
</body>
</html>
A: myAudio.source = "Sounds/Impact_1.mp3";
This is incorrect. You want the src property:
myAudio.src = 'Sounds/Impact_1.mp3';
Additionally, you don't need .load() before .play() like that. And, ensure that you're calling .play() on a user action so that you don't run afoul of autoplay policies.
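A minimal sketch putting those points together (this assumes Sounds/Impact_1.mp3 is served relative to the page, and is guarded so it is a no-op outside a browser):

```javascript
const impactSrc = 'Sounds/Impact_1.mp3'; // assumed path from the question

function makeImpactSound(src) {
  if (typeof Audio === 'undefined') return null; // not running in a browser
  const audio = new Audio(src); // the src can be passed straight to the constructor
  audio.volume = 1;
  return audio;
}

function playImpact(audio) {
  if (!audio) return;
  // play() returns a Promise; catch rejections caused by autoplay policies
  audio.play().catch((err) => console.log('playback blocked:', err));
}

// Wire playback to a user gesture so autoplay policies are satisfied, e.g.:
// document.addEventListener('keydown', () => playImpact(makeImpactSound(impactSrc)));
```

Triggering play() from inside an input handler (keydown, click, etc.) is what satisfies Chrome's autoplay policy.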
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/61883537",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|