doc_23529600
|
struct Bar {
operator int() const {
return 0;
}
};
// std::result_of<Bar>::type value; ???
// std::result_of<Bar::operator ??? >::type value;
I could use:
std::is_convertible<Bar, int>::value
but is_convertible is also true for float, unsigned int, etc. I would like to have the exact type.
Edit: Since my question seems unclear, here is why I want to know the implicit conversion type. Think a step further, to template classes, where I do not know Bar at all...
template<typename T, typename Sfinae = void>
struct ImplicitType
{
static_assert(sizeof(T) != sizeof(T), "Unknown type.");
};
template<typename T>
struct ImplicitType<T,
typename std::enable_if<std::is_convertible<T, int>::value && std::is_class<T>::value>::type>
{
using type = int;
};
template<typename T>
struct ImplicitType<T,
typename std::enable_if<std::is_convertible<T, float>::value && std::is_class<T>::value>::type>
{
using type = float;
};
struct Foo {
operator float() const {
return 0.0f;
}
};
struct Bar {
operator int() const {
return 0;
}
};
ImplicitType<Foo> r; // <--- ambiguous template instantiation
ImplicitType<Bar> s; // <--- ambiguous template instantiation
For Foo I would like to get float. For Bar int.
But because I can define one or more implicit conversions for class, it gets tricky.
struct FooBar {
operator float() const {
return 0;
}
operator int() const {
return 0;
}
};
Not working live example.
So, all in all, is it not possible to get the right implicit conversion type of a class?
A: #include <iostream>
#include <typeinfo>
struct Bar {
operator int() const {
return 0;
}
operator double() const {
return 0.0;
}
struct Foo {
};
operator Foo() const {
return Foo();
}
};
int main() {
std::cout << typeid( decltype((int)Bar()) ).name();
std::cout << std::endl;
std::cout << typeid( decltype((double)Bar()) ).name();
std::cout << std::endl;
std::cout << typeid( decltype((Bar::Foo)Bar()) ).name();
std::cout << std::endl;
}
Because Bar::operator int() is a member function of class Bar, a this object is required to call it; that is why I provide a default-constructed Bar() for each cast.
The result (GCC's mangled type_info names) is:
i
d
N3Bar3FooE
| |
doc_23529601
|
Thank you.
A: Doing this is currently not possible (see https://github.com/Microsoft/vscode-python/issues/82).
Fortunately, you can achieve something similar using VS Code tasks, as discussed in that same issue:
{
    "label": "pylint: whole project",
    "type": "shell",
    "command": ".venv/bin/pylint --msg-template \"{path}:{line}:{column}:{category}:{symbol} - {msg}\" mycloud",
    "windows": {
        "command": ".venv/Scripts/pylint --msg-template \"{path}:{line}:{column}:{category}:{symbol} - {msg}\" mycloud"
    },
    "presentation": {
        "reveal": "never",
        "panel": "shared"
    },
    "problemMatcher": {
        "owner": "python",
        "fileLocation": [
            "relative",
            "${workspaceFolder}"
        ],
        "pattern": {
            "regexp": "^(.+):(\\d+):(\\d+):(\\w+):(.*)$",
            "file": 1,
            "line": 2,
            "column": 3,
            "severity": 4,
            "message": 5
        }
    }
}
A: There is also the Problems tab in the panel area, which lists all files with problems.
(You need PHP Intelephense or a similar extension for other languages.)
| |
doc_23529602
|
My code is in app.component.html and won't run.
The HTML code is:
<div>
<div >
<object ID="activexFirst" CLASSID="CLSID:xxxxx-xxx" width="300" height="200"></object>
</div>
<div>
<object ID="activeSencod" classid="CLSID:xxxxx-xxx" width="600" height="500"></object>
</div>
</div>
the app.component.ts code is:
@Component({
selector: 'app-root',
templateUrl: './app.component.html',
styleUrls: ['./app.component.css'],
encapsulation: ViewEncapsulation.None,
})
export class AppComponent implements AfterViewInit {
//todo..
}
But the same markup works fine when placed in index.html.
How should I fix it?
A: Angular templates seem to treat these elements differently from native HTML elements. The following workaround made it work.
the app.component.ts code is:
ngAfterViewInit(): void {
ActiveX.SetActiveXHtml(this.tActive1Div);
}
the ActiveX.ts is
public static SetActiveXHtml(activeXDiv: ElementRef): void {
let obj = document.getElementById('simplename') as any;
if (obj == null) {
obj = document.createElement('object');
obj.name = 'name';
obj.id = 'name';
obj.classid = 'CLSID:xxxxxxxxxxxxxxxxx';
}
const cloneObj = obj.cloneNode(true);
activeXDiv.nativeElement.appendChild(cloneObj);
}
The cloneNode call is essential!
| |
doc_23529603
|
I have tried these-
User::all()->toArray(); // returns an array, but gives all rows from the 'users' table; I don't know how to add filters if I only want some of them
User::select('id','>','5')->toArray(); //call to undefined method toArray()
What should I use then? I'm really confused!
One more thing: where exactly are these functions described in the docs? I find the API docs very confusing; they don't seem to point directly to the functions. Please help.
A: This worked for me-
Thought::where('id','>',$last_message_id)->get()->toArray();
Closely observe the toArray() at the end.
A: get() runs the query on the database and returns an Illuminate\Database\Eloquent\Collection, which has the method toArray(). Note also that filtering is done with where(), not select(). So you need to use this to get your array:
User::where('id', '>', 5)->get()->toArray();
But maybe you don't even need that array; the Collection class has some powerful methods you could use instead.
A: Why don't you also check out the original documentation?
*
*http://laravel.com/docs/queries#selects
You could also use the DB class instead of User::* in your code.
Take a look at this example:
$users = DB::table('users')
->where('votes', '>', 100)
//->orWhere('name', 'John')
->get();
and then:
$users->toArray(); // or, on a Collection, $users->all()
A: I think you have to use method chaining, and where() is your solution here:
User::where("item", "=", "value")->get();
| |
doc_23529604
|
I have a button on the accordion header that I want to trigger a click on whatever radial form they have picked.
It kind of looks like this:
Accordion Header ActionButton
radial option 1 (not selected)
radial option 2 (selected)
input field
input field
OriginalButton (that already works. I want to copy this functionality to the above button on the header)
WITH THE OTHER RADIAL BUTTON SELECTED
Accordion Header ActionButton
radial option 1 (selected)
input field
input field
input field
input field
OriginalButton (same button as above, but performs a different function based on the different inputs. I need the actionButton to now perform the function of this button)
radial option 2 (not selected)
Because the html references different files, I was trying to trigger the button click like so:
$side-bar.find('#actionButton').on('click', function(){
$side-bar.find('#accordionHeader').find('form').find('button').trigger('click');
});
but this does not work, is there anything like this I could use? Just using
$side-bar.find('#actionButton').on('click', function(){
$side-bar.find('#originalButton').click();
});
does not work.
Help is much appreciated!
A: I've made a fiddle showing what I believe you want to do.
Using jQuery's trigger method in the radio buttons' click handlers lets you invoke another element's event handlers.
$('input[type=radio]').on('click', function() {
$('#test_btn').trigger( "click" );
});
http://jsfiddle.net/Villike/P3V4M/
A: Try saving the #originalButton object into a variable and then using that instead of finding it again. Perhaps you're removing it from the side-bar at some point, and that's causing it to not be found?
A: I was thinking about it way too much. apparently, all I needed was
$side-bar.find('#actionButton').on('click', function(){
$side-bar.find('#originalButtonInReferencedHTMLFile').click();
});
I thought because it was being referenced by the side-bar.html file, I needed to .find().find() to go through all the references.
Turns out, if it is called by the side-bar at all, then the side-bar can find it itself without the extra .find()s or whatever!
| |
doc_23529605
|
"guardar.periodoLaboral.personal.vital{0}{1}.presentado"
I need to extract "0" and "1" (these values are variables in the string).
How can I extract them with a regex?
Update: I tried.
var a ="guardar.periodoLaboral.personal.vital{1}{0}.presentado"
var expresion=/([0-9]+)/ig;
var values=a.match(expresion);
console.log(values);
//This return 0 and 1 as I need
but I need only the values within "{" and "}", because the string "guardar.periodoLaboral.personal.vital{1}{0}.presentado10" returns 1,0,10.
I'm new to JavaScript regex.
A: You can use the JavaScript replace method with a regex that matches what you are looking for to strip it out, then assign the result to a variable.
var testString = "guardar.periodoLaboral.personal.vital{0}{1}.presentado";
var donish = testString.replace(/({.})/g,'');
console.log( donish );
This returns "guardar.periodoLaboral.personal.vital.presentado".
I use an online site to create and test my regexes, and the JavaScript replace method is documented on MDN.
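The answer above strips the placeholders rather than extracting them. To actually capture only the values inside the braces, which is what the question asks for, a pattern anchored to the braces with a capture group works; a minimal sketch:

```javascript
const a = "guardar.periodoLaboral.personal.vital{1}{0}.presentado10";
// Match digits only when enclosed in braces; capture group 1 holds the value.
const re = /\{(\d+)\}/g;
const values = [];
let m;
while ((m = re.exec(a)) !== null) {
  values.push(m[1]);
}
console.log(values.join(",")); // "1,0" -- the trailing 10 is not matched
```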
| |
doc_23529606
|
An exception occured in driver: SQLSTATE[HY000] [2003] Can't connect to MySQL
server on '127.0.0.1' (61)
And here are the various ways I have tried to correct this maddening problem:
*
*I verified that I can indeed connect on the command line to both localhost and 127.0.0.1 using the mysql command. I tried with the port both filled in and left blank, so I could see what an actual error looks like.
*I checked the socket while I was in there via SHOW variables LIKE 'socket' and saw it pointed correctly to my MAMP socket.
*I've oscillated between the MAMP default ports and the standard MySQL ports (3306) just in case it was a weird port thing.
*I made sure my /var/mysql and /tmp/mysql mysql.sock files were correctly symlinked to MAMP.
*I commented out the skip-networking lines in MAMP's configuration file.
*I toggled the "Allow Network Access" in every configuration possible
*I added the line, unix_socket: /Applications/MAMP/tmp/mysql/mysql.sock, to my config.yml file just in case my symlinking trickery failed me.
*I've done several rain dances and other tribal spells I read in a magazine trying to get this to connect.
I'm no stranger to development and MySQL but this has become a lost cause. Any help would be appreciated and rewarded with my unflinching respect for you.
A: Error code 2003 means "Can't connect to MySQL server". You can try the following methods.
Check your config file: can parameters.yml be accessed? If you are on Linux, try sudo chmod 777 /path/to/parameters.yml (for debugging only; tighten the permissions afterwards). The MySQL connection config parameters look like this:
parameters:
database_driver: pdo_mysql
database_host: 127.0.0.1 or localhost
database_port: null
database_name: yourdbname
database_user: youraccount
database_password: yourpassword
Try to use localhost instead of 127.0.0.1;
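The questioner mentioned adding unix_socket to config.yml. For completeness, a sketch of what that looks like with Doctrine DBAL in a Symfony-style config.yml (the socket path shown is MAMP's default and may differ on your machine):

```yaml
# app/config/config.yml
doctrine:
    dbal:
        driver: pdo_mysql
        host: localhost   # the MySQL driver only uses the socket when host is "localhost"
        unix_socket: /Applications/MAMP/tmp/mysql/mysql.sock
        dbname: "%database_name%"
        user: "%database_user%"
        password: "%database_password%"
```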
| |
doc_23529607
|
array:4 [▼
0 => "juego de tronos"
1 => "tagaryen"
2 => "house targaryen"
3 => "casa targaryen"
]
and I want get this result:
array:4 [▼
0 => "juegodetronos"
1 => "tagaryen"
2 => "housetargaryen"
3 => "casatargaryen"
]
I'm using this function, but it does not work: array_map('trim', $myarray)
A: trim() only removes whitespace from the beginning and the end. You probably want
array_map(function($a){
return str_replace(' ', '', $a);
}, $myarray);
A: Please try the code below; it works fine (I have checked it):
$array1 = array(
"0" => "juego de tronos",
"1" => "tagaryen",
"2" => "house targaryen",
"3" => "casa targaryen"
);
$newArray = array();
foreach ($array1 as $data){
$data = str_replace(" ","",$data);
$newArray[]=$data;
}
A: $arrayWithSpace = array(
"juego de tronos",
"tagaryen",
"house targaryen",
"casa targaryen"
);
$arrayWithoutSpace = array_map(function($value){
return str_replace(' ', '', $value);
}, $arrayWithSpace);
print_R($arrayWithoutSpace);
A: You can use preg_replace function...
$array = [
"juego de tronos",
"tagaryen",
"house targaryen",
"casa targaryen"
];
$result = preg_replace('/\s+/', '', $array);
and this will be the result:
array:4 [▼
0 => "juegodetronos"
1 => "tagaryen"
2 => "housetargaryen"
3 => "casatargaryen"
]
more info here: preg_replace
A: trim() only removes spaces from the ends. Instead, you can either loop and replace ' ' with '', or use the same array_map in the following way:
array_map(function($variable){
return str_replace(" ", "", $variable);
}, $myarray);
OR
$myarray = array(0 => "juego de tronos",1 => "tagaryen",2 => "house targaryen",3 => "casatargaryen");
$newarr = [];
foreach($myarray as $arr)
{
array_push($newarr,(str_replace(' ', '', $arr)));
}
echo "<pre>"; print_r($newarr); echo "</pre>";
| |
doc_23529608
|
However, in those questions I only used a simplified toy example.
When I move over to larger problems I still struggle to reach optimal solutions.
In short, I have tasks to schedule on a given set of machines, where the following applies:
*
*Each task has a random duration from 1-1000
*Around 20% of the tasks can consume a global resource which means that no other task using the same resource can be scheduled at the same time.
*Around 20% of the tasks can not be executed on all available machines.
Ex:
Task| Dur | Exe on | Resources
------+------+----------+----------
t1 | 318 | All |
t2 | 246 | All |
t3 | 797 | m4 m3 |
t4 | 238 | m2 m3 | r1 r2 r3
t5 | 251 | m1 |
t6 | 279 | All |
t7 | 987 | m2 m5 m1 | r1
t8 | 847 | All |
t9 | 426 | All |
t10 | 787 | All |
t11 | 6 | All |
t12 | 681 | All |
t13 | 465 | All |
t14 | 46 | All |
t15 | 3 | All | r1 r3 r2
t16 | 427 | All | r3 r2
t17 | 956 | All | r3
t18 | 657 | All |
t19 | 113 | All |
t20 | 251 | All | r3 r2
In this small example we have 20 tasks to be scheduled on 5 machines. t4, t7, and t15 use resource r1 and can therefore not be executed at the same time, even if they are scheduled on different machines.
t4, t15, t16, and t20 use r2, etc.
t3 can only run on m3 and m4, t4 only on m2 and m3 etc.
A complete model is given here:
:- use_module(library(clpfd)).
:- use_module(library(lists)).
s1(Ss, Es, Ms, Maxspan, Timeout ) :-
Ss = [S1,S2,S3,S4,S5,S6,S7,S8,S9,S10,S11,S12,S13,S14,S15,S16,S17,S18,S19,S20],
Es = [E1,E2,E3,E4,E5,E6,E7,E8,E9,E10,E11,E12,E13,E14,E15,E16,E17,E18,E19,E20],
Ms = [M1,M2,M3,M4,M5,M6,M7,M8,M9,M10,M11,M12,M13,M14,M15,M16,M17,M18,M19,M20],
Ds = [318, 246, 797, 238, 251, 279, 987, 847, 426, 787,6, 681,465,46, 3, 427,956,657,113,251],
Ds = [D1,D2,D3,D4,D5,D6,D7,D8,D9,D10,D11,D12,D13,D14,D15,D16,D17,D18,D19,D20],
domain(Ss, 1, 10000),
domain(Es, 1, 10000),
domain(Ms, 1, 5),
%task( StartTime, Duration, EndTime, ResourceCons, MachineId)
Tasks = [
task( S1, D1, E1, 1, M1),
task( S2, D2, E2, 1, M2),
task( S3, D3, E3, 1, M3),
task( S4, D4, E4, 1, M4),
task( S5, D5, E5, 1, M5),
task( S6, D6, E6, 1, M6),
task( S7, D7, E7, 1, M7),
task( S8, D8, E8, 1, M8),
task( S9, D9, E9, 1, M9),
task(S10, D10,E10, 1,M10),
task(S11, D11,E11, 1,M11),
task(S12, D12,E12, 1,M12),
task(S13, D13,E13, 1,M13),
task(S14, D14,E14, 1,M14),
task(S15, D15,E15, 1,M15),
task(S16, D16,E16, 1,M16),
task(S17, D17,E17, 1,M17),
task(S18, D18,E18, 1,M18),
task(S19, D19,E19, 1,M19),
task(S20, D20,E20, 1,M20)
],
%machine( Id, ResourceBound ),
Machines = [
machine( 1, 1),
machine( 2, 1),
machine( 3, 1),
machine( 4, 1),
machine( 5, 1)
],
%Set up constraints where task can only run on spesific machines:
%t3 can only run on m3 and m4
%t4 can only run on m2 and m3
%t5 can only run on m1
%t7 can only run on m1, m2 and m5
%all other can run on any machine
list_to_fdset( [3,4], T3RunOn ),
list_to_fdset( [2,3], T4RunOn ),
list_to_fdset( [1], T5RunOn ),
list_to_fdset( [1,2,5],T7RunOn ),
M3 in_set T3RunOn,
M4 in_set T4RunOn,
M5 in_set T5RunOn,
M7 in_set T7RunOn,
%Set up constraints of tasks using global resources:
%t4 use r1,r2,r3
%t7 use r1
%t15 use r1,r2,r3
%t16 use r2,r3
%t17 use r3
%t20 use r2,r3
%r1:
cumulative(
[
task( S4, D4, E4,1,4),
task( S7, D7, E7,1,7),
task( S15,D15,E15,1,15)
],
[limit(1)]),
%%r2:
cumulative(
[
task( S4, D4, E4,1,4),
task( S15,D15,E15,1,15),
task( S20,D20,E20,1,20)
],
[limit(1)]),
%r3:
cumulative(
[
task( S4, D4, E4,1,4),
task( S15,D15,E15,1,15),
task( S16,D16,E16,1,16),
task( S17,D17,E17,1,17),
task( S20,D20,E20,1,20)
],
[limit(1)]),
maximum( Maxspan, Es ),
cumulatives(Tasks, Machines, [bound(upper), task_intervals(true)] ),
%Plain order (M1,M2,...M20)
%MOrder=Ms,
%Order by task using many resources first, then task only runnon on some machines, then rest
MOrder=[M4, M15, M16, M20, M7, M17, M5, M3, M10, M1, M2, M6, M8, M9, M11, M12, M13, M14, M18, M19],
%Order by duration
%MOrder=[M15, M11, M14, M19, M4, M2, M20, M5, M6, M1, M9, M16, M13, M18, M12, M10, M3, M8, M17, M7],
%This order (discovered by mistake) gives the optimal solution in less than a second:
%MOrder=[M4, M15, M20, M16, M17, M5, M3, M7, M10, M1, M2, M6, M8, M9, M11, M12, M13, M14, M18, M19],
append([MOrder, Ss ], Vars),
labeling([ minimize(Maxspan), time_out( Timeout, _LabelingResult) ], Vars).
My best heuristic so far has been to order the MachineId variables as follows:
first tasks using many resources, then tasks only executable on some machines, then the rest.
| ?- s1(Ss, Es, Ms, Maxspan, 5000 ).
Ss = [1,319,1,1635,1635,798,565,1,848,1077,1552,1,1274,1558,1886,1,428,682,1739,1384],
Es = [319,565,798,1873,1886,1077,1552,848,1274,1864,1558,682,1739,1604,1889,428,1384,1339,1852,1635],
Ms = [2,2,3,2,1,3,2,4,4,3,2,5,4,2,1,1,1,5,4,1],
Maxspan = 1889 ?
Any suggestions of a better search heuristic and/or symmetry breaking that could behave better?
| |
doc_23529609
|
like controlling the exceptions that can be thrown out of a function.
Also, I was unable to modify the behaviour of terminate() using set_terminate().
Is modifying terminate() also implementation-specific in Visual C++? And if so, can anyone explain why Microsoft creates these compiler-specific behaviours?
A: What do you mean, you were unable to modify terminate?
Have you tried something like this?
// set_terminate example
#include <iostream>
#include <exception>
#include <cstdlib>
using namespace std;
void myterminate () {
cerr << "terminate handler called\n";
abort(); // forces abnormal termination
}
int main (void) {
set_terminate (myterminate);
throw 0; // unhandled exception: calls terminate handler
return 0;
}
Don't try to run from VS. Compile and exec from command line.
| |
doc_23529610
|
For example I want to store user rating by other users and some other data.
I don't want to develop my own server and API.
I mean I want any cloud-base platform for it.
What can I use for server-side data storage?
A: Firebase is the best tool for replacing the custom-server concept for small projects and new developers. It provides an online real-time database and storage. If you need any further details or help, let me know.
And if you need to process data in the cloud, you can use Cloud Functions and utilize Firebase as a server as well.
Firebase
A: You can use Firebase.
It's totally free for small user bases, and no web-related code is needed.
| |
doc_23529611
|
I have seen folks suggesting that I need to register for the android.location.PROVIDERS_CHANGED broadcast receiver in my AndroidManifest.xml file. I've done that -- but this broadcast receiver only gets called when I add and remove test providers in my test application. I don't receive it at all when toggling Location Services in the device settings.
Am I doing something wrong? Does anyone know how to reliably determine when the user toggles Location Services in the device settings? I'd like to be able to see these events in the background -- even if my app is not running.
I figure I could set up an repeating task that runs in the background and periodically checks to see if Location Services has been toggled, but it sounds gross and inefficient.
If it helps, I'm testing on an Moto G running Android 4.4.2. Everything I have done with geofences has worked fine until now.
EDIT: After doing more research I have found that the behaviour of the PROVIDERS_CHANGED broadcast is highly variable depending on phone version and model. My Nexus 5 running Android 5.1 seems to work fine actually - I am able to get the PROVIDERS_CHANGED broadcast very regularly. I also have a Moto G and Moto X phone running 4.4.x and they never produced the PROVIDERS_CHANGED broadcast for me. A Samsung Galaxy S3 on Android 4.2 would produce the broadcast for me but stop doing it after I used my Map Activity with test location providers.
Anyways, I have decided to stop pursuing this problem for now. I think Android is just being buggy.
A: It's hard to tell without seeing your code if you are doing anything wrong, but I just got a simple example working and tested by using the code from here as a guide, and adapting it to have the BroadcastReceiver in a library.
One thing to note: The app using the library will need to have the receiver in its AndroidManifest.xml.
See here and here, both contain information from @CommonsWare.
Adding quote here just in case the link goes bad:
Is there any way for a library project to independently register for a
receiver in it's own manifest file?
Not at the present time.
So, any app that uses your library will need to have the receiver added in its AndroidManifest.xml.
Note that this was tested on Android 4.4.4 on a Samsung S4.
I got this working using a BroadcastReceiver in a library project compiled to an aar.
Using a receiver element set up in the AndroidManifest.xml of the main app that links to the BroadcastReceiver in the library, I was able to receive the event when any change is made to the Location settings:
*
*Location toggle on
*Location toggle off
*Change Location Mode to High Accuracy
*Change Location Mode to Power Saving
*Change Location Mode to GPS only
Every time a change is made in the location settings, the BroadcastReceiver in the library is triggered, even if the app is not running.
I used a simple test where the BroadcastReceiver in the library shows a Toast message every time the location settings are changed.
Here is LocationProviderChangedReceiver.java in the library project:
import android.content.BroadcastReceiver;
import android.content.Context;
import android.content.Intent;
import android.location.LocationManager;
import android.util.Log;
import android.widget.Toast;
public class LocationProviderChangedReceiver extends BroadcastReceiver {
private final static String TAG = "LocationProviderChanged";
boolean isGpsEnabled;
boolean isNetworkEnabled;
@Override
public void onReceive(Context context, Intent intent) {
if (intent.getAction().matches("android.location.PROVIDERS_CHANGED"))
{
Log.i(TAG, "Location Providers changed");
LocationManager locationManager = (LocationManager) context.getSystemService(Context.LOCATION_SERVICE);
isGpsEnabled = locationManager.isProviderEnabled(LocationManager.GPS_PROVIDER);
isNetworkEnabled = locationManager.isProviderEnabled(LocationManager.NETWORK_PROVIDER);
Toast.makeText(context, "GPS Enabled: " + isGpsEnabled + " Network Location Enabled: " + isNetworkEnabled, Toast.LENGTH_LONG).show();
}
}
}
Then, in the main app, I put the compiled myLibrary.aar in the libs folder, and set up the build.gradle to compile the aar:
repositories {
    flatDir {
        dirs 'libs'
    }
}
dependencies {
compile fileTree(dir: 'libs', include: ['*.jar'])
compile "com.android.support:appcompat-v7:22.1.1"
compile "com.google.android.gms:play-services:7.3.0"
compile 'com.mylibrary.daniel:mylibrary:1.0@aar'
}
Set up the receiver in the AndroidManifest.xml of the main app:
<receiver
android:name="com.mylibrary.daniel.mylibrary.LocationProviderChangedReceiver"
android:exported="false" >
<intent-filter>
<action android:name="android.location.PROVIDERS_CHANGED" />
<category android:name="android.intent.category.DEFAULT" />
</intent-filter>
</receiver>
Then I installed the app and modified the location settings in a few different ways. The BroadcastReceiver in the library was triggered every time.
A: Actually, you can use location.MODE_CHANGED and register your receiver for that broadcast:
<receiver android:name=".Receivers.LocationModeReceiver">
<intent-filter>
<action android:name="android.location.MODE_CHANGED" />
</intent-filter>
</receiver>
| |
doc_23529612
|
I have the following string in my (JSON) file, and due to invalid JSON i need to replace it.
"duration": ,
I need to have
"duration": NULL,
I tried
str.replace(""duration":", "duration": NULL")
str.replace("""duration":"", """duration": NULL"")
But nothing worked. What is the correct way?
A: You can either wrap your search string in single quotes or escape the double quotes with \"
str.replace('"duration":', '"duration": NULL')
or
str.replace("\"duration\":", "\"duration\": NULL")
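One more caveat, assuming this is JavaScript: String.prototype.replace with a plain string pattern only replaces the first occurrence, and valid JSON uses lowercase null rather than NULL. A global regex handles repeated keys and makes the result parseable; a sketch:

```javascript
const raw = '{"duration": , "a": 1, "duration": , "b": 2}';
// \s* tolerates whitespace after the colon; the g flag replaces every occurrence.
const fixed = raw.replace(/"duration":\s*,/g, '"duration": null,');
console.log(fixed);
console.log(JSON.parse(fixed).duration); // null -- now valid JSON
```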
| |
doc_23529613
|
private void doneEDT() {
Runnable doDone =
new Runnable() {
public void run() {
done();
}
};
if (SwingUtilities.isEventDispatchThread()) {
doDone.run();
} else {
doSubmit.add(doDone);
}
}
The above doneEDT is called from SwingWorker constructor as below,
public SwingWorker() {
Callable<T> callable =
new Callable<T>() {
public T call() throws Exception {
setState(StateValue.STARTED);
return doInBackground();
}
};
future = new FutureTask<T>(callable) {
@Override
protected void done() {
doneEDT();
setState(StateValue.DONE);
}
};
state = StateValue.PENDING;
propertyChangeSupport = new SwingWorkerPropertyChangeSupport(this);
doProcess = null;
doNotifyProgressChange = null;
}
Question
Why is the done() method wrapped inside a Runnable in the doneEDT method? The done() method could be called directly without wrapping it in a Runnable, right? So is there any advantage to wrapping it in a Runnable?
That is the question that came to my mind.
A: You're conflating two different done() methods. One is from SwingWorker that you're intended to override. Nothing special with Runnable needs to happen in your implementation of done(), only your own code. (A robust implementation of done() will check for isCancelled, possibly call get() if you're not calling it elsewhere and handle any exceptions that might have been thrown by the worker, etc. But you don't need to wrap a Runnable to do any of that.)
The other method is FutureTask's. This gets confusing because SwingWorker itself uses a FutureTask to do the "real" work of handling concurrency. The FutureTask#done() is also meant to be overridden by clients, but in this case the client is SwingWorker, not you.
When you call execute(), SwingWorker's future (the private instance variable of FutureTask) gets kicked into gear, and here's what happens:
*
*future.run() runs in a worker thread.
*future.run() calls the Callable passed in as you quoted above, changing the state to STARTED (which sends a property event) and calling doInBackground. Your own worker code runs and returns a result.
*future.run() calls its own 'set' method to store the result and mark computation as complete. It also calls a private method finishCompletion.
*future.finishCompletion() wakes up any other threads that have called SwingWorker's get() (and therefore have been blocked this whole time). Those calls to get() can be returning the computation result to their callers at the same time as the rest of this list happens.
*future.finishCompletion() calls its own done(). Normally this is an empty method, but as you've quoted above, SwingWorker made an override. So this particular FutureTask done() does two more things: first, it calls SwingWorker#doneEDT(), which simply calls SwingWorker#done() on the event dispatch thread. That's the version of done() that you override in your subclass of SwingWorker. And secondly, it changes the state to DONE which sends another property change event.
*future's finishCompletion() returns to its 'set' method, which returns to run(), which does some housekeeping, and returns. At that point, the worker thread is freed up for other things (new jobs, sleep, a hobby, whatever).
Your own involvement would only be with SwingWorker's done() if you wanted to subclass it. Thanks to the juggling done by SwingWorker, your own override of done() is guaranteed to run on the event dispatch thread, so you can do things with Swing visual components safely. Be aware that done() also gets called if the SwingWorker is cancelled, even if cancel() was called before execute() -- in fact, done() is called before cancel() returns -- so check the status accordingly inside your override.
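To see that guarantee in action, here is a minimal standalone sketch (the class name DoneDemo is my own; it runs headless) verifying that done() executes on the event dispatch thread:

```java
import javax.swing.SwingUtilities;
import javax.swing.SwingWorker;
import java.util.concurrent.CountDownLatch;

public class DoneDemo {
    public static void main(String[] args) throws Exception {
        CountDownLatch latch = new CountDownLatch(1);
        final boolean[] doneOnEdt = new boolean[1];
        SwingWorker<Integer, Void> worker = new SwingWorker<Integer, Void>() {
            @Override
            protected Integer doInBackground() {
                // Runs on a worker thread, never the EDT.
                return 42;
            }
            @Override
            protected void done() {
                // SwingWorker guarantees this runs on the event dispatch thread.
                doneOnEdt[0] = SwingUtilities.isEventDispatchThread();
                latch.countDown();
            }
        };
        worker.execute();
        latch.await();
        System.out.println("done on EDT: " + doneOnEdt[0] + ", result: " + worker.get());
    }
}
```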
| |
doc_23529614
|
Is there a difference on how to do this between ExtJS 3.x and 4?
var combo = new Ext.form.ComboBox(config);
var selectedIndex = combo.selectedIndex; // TODO: Implement
if(selectedIndex > 2) {
// Do something
}
Bonus-points for how to add it as a property to the ComboBox-object.
A: In Ext 4.0.2 the same code would look like:
Ext.override(Ext.form.ComboBox, {
getSelectedIndex: function() {
var v = this.getValue();
var r = this.findRecord(this.valueField || this.displayField, v);
return(this.store.indexOf(r));
}
});
Jad, you're missing a closing parenthesis on your return statement... just thought you should know.
A: I think you'll have to use the combo's store for that. Combos have a private findRecord method that'll do a simple search over the store by property and value. You can see an example in the sourcecode itself (Combo.js line 1119).
1) Based on this you could find the selected index this way :
var v = combobox.getValue();
var record = combobox.findRecord(combobox.valueField || combobox.displayField, v);
var index = combobox.store.indexOf(record);
2) Or you could bind yourself to the "select" event which is fired with the combo, the record selected and its index as a parameter.
3) You could also access the view's getSelectedIndexes() but I doubt it's a good solution (in that I'm not sure it's available all the time)
Finally, if you want to extend the combobox object, I think this should work (if you go with the first solution):
Ext.override(Ext.form.ComboBox, {
getSelectedIndex: function() {
var v = this.getValue();
var r = this.findRecord(this.valueField || this.displayField, v);
return(this.store.indexOf(r));
}
});
A: If you have a combo where valueField is the id used by the combo's store, you can simply avoid the search:
var v = combobox.getValue();
var record = combobox.findRecord(combobox.valueField || combobox.displayField, v);
var index = combobox.store.indexOf(record);
using this:
var id = combobox.getValue();
var record = store_combobox.getById(id);
| |
doc_23529615
|
const express = require('express')
const app = express()
const port = 3000
app.get('/home', (req, res) => {
res.send('Hello World!')
})
app.get('/route1', (req, res) => {
var num = 0;
for(var i=0; i<1000000; i++) {
num = num+1;
console.log(num);
}
res.send('This is Route1 '+ num)
})
app.listen(port, () => console.log(`Example app listening on port ${port}!`))
I first call the endpoint /route1 and then immediately the endpoint /home. The /route1 has for loop and takes some time to finish and then /home runs and finishes. My question is while app was busy processing /route1, how was the request to /home handled, given node js is single threaded?
A: The incoming request will be queued in the nodejs event queue until nodejs gets a chance to process the next event (when your long running event handler is done).
Since nodejs is an event-driven system, it gets an event from the event queue, runs that event's callback until completion, then gets the next event, runs it to completion and so on. The internals of nodejs add things that are waiting to be run to the event queue so they are queued up ready for the next cycle of the event loop.
Depending upon the internals of how nodejs does networking, the incoming request might be queued in the OS for a bit and then later moved to the event queue until nodejs gets a chance to serve that event.
My question is while app was busy processing /route1, how was the request to /home handled, given node js is single threaded?
Keep in mind that node.js runs your Javascript as single threaded (though we do now have Worker Threads if you want), but it does use threads internally to manage things like file I/O and some other types of asynchronous operations. It does not need threads for networking, though. That is managed with actual asynchronous interfaces from the OS.
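The queueing described above is easy to observe in plain Node, without Express. This sketch blocks the event loop synchronously and shows that a timer callback that is already due still cannot run until the synchronous code finishes:

```javascript
const order = [];

// Due immediately, but it cannot run while the script is still executing.
setTimeout(() => order.push("timer"), 0);

// Busy-wait ~50ms synchronously, blocking the event loop.
const start = Date.now();
while (Date.now() - start < 50) {}
order.push("sync done");

// At this point only "sync done" has been recorded; the timer callback
// runs on the next event loop iteration, before the check (setImmediate) phase.
setImmediate(() => console.log(order.join(","))); // "sync done,timer,after" is
order.push("after-scheduling");                   // not printed -- see note below
```

Note that when the setImmediate callback fires, the timer callback has already run, so the printed order demonstrates that queued events are processed only once the blocking code returns.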
A: Node.js has an event loop, which allows it to perform non-blocking I/O operations. Each event loop iteration is called a tick, and the loop runs through several phases.
First is the timer phase; since there are no timers in your script, the event loop moves on to check for I/O events.
*
*When you hit route /route1, the request is placed in a FIFO queue and the event loop moves on to the polling phase.
*The polling phase processes pending I/O, which here is the request to /route1. The event loop checks whether any client request is placed in the event queue; if not, it waits indefinitely for incoming requests.
*Meanwhile the next request, for route /home, arrives in the FIFO queue.
FIFO means first in, first out: /route1 is therefore handled first, and /home runs after it completes.
Below you can see this via diagram.
A Node.js application runs on single thread and the event loop also runs on the same thread
Node.js internally uses the libuv library which is responsible for handling operating system related tasks, like asynchronous I/O based operation systems, networking, concurrency.
More info
A: Node has an internal thread pool from which a thread is assigned when a blocking (I/O, memory or network) request is sent. If not, then the request is processed and sent back as such. If the thread pool is full, the request waits in the queue. Refer to How, in general, does Node.js handle 10,000 concurrent requests? for more detailed answers.
| |
doc_23529616
|
How should I do that?
I have tried to access them in the following way:
data[10];
but it didn't work.
public JsonResult ShowEventsList()
{
using (EventSchedularEntities db = new EventSchedularEntities())
{
var EventsList = from evnt in db.Events
select new
{
id = evnt.EventID,
Subject = evnt.EventName,
Description = evnt.EventDesc,
StartTime = evnt.EventDateBegin,
EndTime = evnt.EventDateEnd,
Color = evnt.Importance,
};
List<object[]> LstEvents = new List<object[]>();
foreach (var ev in EventsList)
{
LstEvents.Add(new object[] { ev.id, ev.Subject, ev.Description,DateTime.Parse(ev.StartTime.ToString()).ToString("M/dd/yyyy hh:mm").Replace("-", "/"), DateTime.Parse(ev.EndTime.ToString()).ToString("M/dd/yyyy hh:mm").Replace("-", "/"), ev.Color });
}
return Json( LstEvents, JsonRequestBehavior.AllowGet);
}
}
$.ajax({
type: "POST",
url: "../EventScheduler/ShowEventsList",
datatype: "json",
success: function (result)
{
var data = JSON.parse(result);
alert(data);
}
| |
doc_23529617
|
class SubscribeLauncherCommand extends ContainerAwareCommand
{
protected static $defaultName = 'app:subscribe-launcher';
private $mailer;
protected $em;
public function __construct(EntityManagerInterface $em, \Swift_Mailer $mailer)
{
$this->mailer = $mailer;
$this->em = $em;
parent::__construct();
}
protected function execute(InputInterface $input, OutputInterface $output)
{
$news = $this->em->getRepository(News::class)->findBy(array(), array('date_created' => 'DESC'));
$subscribers = $this->em->getRepository(NewsSubscribe::class)->findBy(array('confirmed' => true));
$tags = $this->em->getRepository(Tag::class)->findAll();
$first_new_date = $news[0]->getDateCreated();
/** @var NewsSubscribe $subscribers */
/** @var \Swift_Message $message */
foreach ($subscribers as $subscriber) {
foreach ($news as $new)
{
if ($new->getDateCreated() < $first_new_date) {
$message = (new \Swift_Message('Test Email'))
->setFrom('send@example.com')
->setTo($subscriber->getEmail())
->setBody(
'test',
'text/html');
$first_new_date = $new->getDateCreated();
}
}
}
}
}
But it doesn't work. Can you please help?
A: So I take it subscribers have a collection of tags they are subscribed to and news is related to a tag.
If that is the case you need to add a couple of conditions in your loop, something like:
foreach ($subscribers as $subscriber) {
$subscribedTags = $subscriber->getSubscribedTags();
foreach ($news as $new)
{
if ($new->getDateCreated() < $first_new_date) {
$relatedTag = $new->getTag();
if(in_array($relatedTag, $subscribedTags)){ //Check if the user is subscribed to the particular tag of this news
...Send the email...
}
}
}
}
| |
doc_23529618
|
I had a look here and here but it is not really related to my question.
let's say we have two examples:
*
*Example 1:
public partial class Form1 : Form
{
private void button1_Click(object sender, EventArgs e)
{
string My_Variable;
.
// do stuff with My_Variable ...
}
}
Example 2:
public partial class Form1 : Form
{
string My_Variable;
private void button1_Click(object sender, EventArgs e)
{
.
// do stuff with My_Variable ...
}
}
In example 1, does firing the button1 event multiple times mean that My_Variable is declared and allocated memory multiple times?
Which example is the best practice to declare a variable and why?
Thank you
A: My_Variable in your two examples performs different functions.
In the first, it is in scope purely for the duration of the click, meaning if you never click, it is never created, and if you click once, it is created and then discarded.
In your second, it is part of the form class, and is available throughout any of the methods in your form.
You are comparing apples and pears.
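The same distinction exists in any language; here is a minimal sketch in Python (the class and method names are invented for illustration): an instance attribute outlives any single handler call, while a local variable exists only for the duration of the call.

```python
class Form:
    def __init__(self):
        self.instance_var = 0      # one per Form object, lives as long as it does

    def on_click(self):
        local_var = 0              # created fresh on every "click"
        local_var += 1
        self.instance_var += 1
        return local_var, self.instance_var

form = Form()
print(form.on_click())  # (1, 1)
print(form.on_click())  # (1, 2)  local resets, instance accumulates
```

Each call sees a brand-new local_var, while instance_var keeps its value across calls.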
A: The first declaration's variable has scope throughout the click event only, whereas the second declaration's variable has scope throughout the class.
It has nothing to do with best practice; it depends entirely on your requirement.
| |
doc_23529619
|
How can I do that?
export default class BaseballPlayerList extends React.Component {
constructor() {
super();
this.state = {
baseBallPlayers: [
{
name: "Barry Bonds",
seasons: [
{
year: 1994,
homeRuns: 37,
hitting: 0.294
},
{
year: 1996,
homeRuns: 40,
hitting: 0.294
}
]
}
]
};
this.addPlayer = this.addPlayer.bind(this);
}
addPlayer(e) {
e.preventDefault();
const newPlayer = {
baseBallPlayers: this.state.baseBallPlayers.name,
seasons: []
};
console.log(newPlayer);
this.setState(prevState => ({
baseBallPlayers: [...prevState.baseBallPlayers, newPlayer]
}));
}
render() {
return (
<div>
<div>
<ul>
{this.state.baseBallPlayers.map((player, idx) => (
<li key={idx}>
<PlayerSeasonInfo player={player} />
</li>
))}
</ul>
</div>
<input value={this.state.baseBallPlayers.name} />
<button onClick={this.addPlayer}>AddPlayer</button>
</div>
);
}
}
export default class PlayerSeasonInfo extends Component {
constructor(props) {
super(props);
this.player = this.props.player;
}
render() {
return (
<div>
{this.player && (
<div>
<span>{this.baseBallPlayers.name}</span>
<span>
<input placeholder="year" />
<input placeholder="homeRuns" />
<input placeholder="hitting" />
<button>AddInfo</button>
</span>
</div>
)}
</div>
);
}
}
A: Here you have a working example: https://codesandbox.io/s/nervous-yonath-9u2d6
The problem was where you were storing the new name and how you were updating the whole players state.
Hope the example helps!
| |
doc_23529620
|
Admin
User/viewer
But the problems is that I want to authenticate (sign up /sign in) for both users with facebook or google_oauth2 with same callbacks address without devise gem:
/auth/:provider/callback
On internet, github and youtube I have found articles and videos only for authenticate single model. But I want to authenticate (sign up /sign in) for two models.
Can anyone help?
Please give an idea to authenticate both model with some example without devise gem that an beginner can understand.
A: Generally it is not advised to authenticate two models. Only in really rare cases is this needed. (Think of Uber, which has drivers and clients booking the rides.)
If you want to have an admin and a normal user, the way to do it is to add a boolean column called admin to the user model. This way you only have one user model - most of the time the admin can do at least what the user can do.
If your user is an admin, you just set the admin boolean to true :)
| |
doc_23529621
|
A: It's not!
When an Android app is uninstalled, an intent is broadcast:
http://developer.android.com/reference/android/content/Intent.html#ACTION_PACKAGE_REMOVED
So every other app may be able to recognize that your app is being uninstalled, but not your app itself.
| |
doc_23529622
|
A | B | C | D
When I click on B, this query runs:
select name from user order by 'b'
but the result shows all records starting with a, c or d. I want to show only records starting with b.
Thanks for the help.
A:
but result showing all records
starting with a or c or d i want to
show records only starting with b
You should use WHERE in that case:
select name from user where name = 'b' order by name
If you want pattern matching, you can use the LIKE operator there too. Example:
select name from user where name like 'b%' order by name
That will select records starting with b. Following query on the other hand will select all rows where b is found anywhere in the column:
select name from user where name like '%b%' order by name
A:
i want to show records only starting with b
select name from user where name LIKE 'b%';
i am trying to sort MySQL data alphabeticaly
select name from user ORDER BY name;
i am trying to sort MySQL data in reverse alphabetic order
select name from user ORDER BY name desc;
A: You can use:
SELECT name FROM user WHERE name like 'b%' ORDER BY name
A: If you want to restrict the rows that are returned by a query, you need to use a WHERE clause, rather than an ORDER BY clause. Try
select name from user where name like 'b%'
A: You do not need to use a WHERE clause just to order the data alphabetically.
here is my code
SELECT * FROM tbl_name ORDER BY field_name
that's it.
It returns the data in alphabetical order, i.e. from A to Z.
:)
A: Wildcard characters are used with the LIKE clause to filter records.
If we want to search a string which is starts with B then code is like the following:
select * from tablename where colname like 'B%' order by columnname ;
If we want to search a string which is ends with B then code is like the following:
select * from tablename where colname like '%B' order by columnname ;
If we want to search a string which is contains B then code is like the following:
select * from tablename where colname like '%B%' order by columnname ;
If we want to search a string in which second character is B then code is like the following:
select * from tablename where colname like '_B%' order by columnname ;
If we want to search a string in which third character is B then code is like the following:
select * from tablename where colname like '__B%' order by columnname ;
Note : one underscore for one character.
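As a rough analogue outside SQL: the same wildcard semantics (% matching any run of characters, _ matching exactly one) map onto shell-style globs, which makes them easy to experiment with, e.g. via Python's fnmatch module. The sample names below are made up:

```python
import fnmatch

names = ["alice", "bob", "barry", "abba", "carol"]

# SQL LIKE 'b%'  ~ glob 'b*'   (starts with b)
print(sorted(fnmatch.filter(names, "b*")))   # ['barry', 'bob']

# SQL LIKE '%b%' ~ glob '*b*'  (contains b)
print(sorted(fnmatch.filter(names, "*b*")))  # ['abba', 'barry', 'bob']

# SQL LIKE '_b%' ~ glob '?b*'  (b as the second character)
print(sorted(fnmatch.filter(names, "?b*")))  # ['abba']
```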
A: I had the same challenge; after a little research I came up with this, and it gave me what I wanted:
SELECT * from TABLE ORDER BY name
A: I sort the data with this query and it works fine for me; please try:
select name from user order by name asc
Also try the query below to search records by first letter:
SELECT name FROM `user` WHERE `name` LIKE 'b%'
A: MySQL solution:
select Name from Employee order by Name ;
Order by will order the names from a to z.
| |
doc_23529623
|
I have the following conditions/queries.
*
*I have set the sector size to 512 bytes, and the block erase size for SPI flash is 4K. As a block needs to be erased before being written, do I need to keep track of whether a particular block is erased or not, or is it the file system that manages this?
*How can I verify whether the sector I am writing to is erased? What I am currently doing is erasing the complete block containing the sector I am going to write.
*How can I make sure the SPI flash block I am going to erase will not affect any sector containing useful data?
Thanks in anticipation,
Regards,
AK
A: The simplest solution is to define the "cluster" size as 4K, the same as the erase block size of your flash. That means each file, even if only 1 byte long, takes 4K, which is 8 consecutive sectors of 512 bytes each.
As soon as you need to reserve one more cluster, when the file grows above 4096 bytes, you pick a free cluster, chain it in the FAT, and write the next bytes.
For performance reasons and to increase the durability of the flash, you should avoid erasing a flash block when not needed. Reading is orders of magnitude faster than erasing. So, as you select a free cluster, you can start a loop to read each of its 8 sectors. As soon as you find even a single byte not equal to 0xFF, you abort the loop and call the flash erase for that block.
A further optimization is possible if the flash controller is able to perform the blank test directly. Such a test could be done in a few microseconds, while reading 8 sectors and looping to check each of the 4096 bytes is probably slower.
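That read-before-erase loop can be sketched as follows; read_sector and erase_block are hypothetical driver hooks, not a real flash API:

```python
# Erase a 4K block only if any of its eight 512-byte sectors
# is not already blank (all bytes 0xFF).
SECTOR_SIZE = 512
SECTORS_PER_BLOCK = 8  # 8 x 512 B sectors = one 4K erase block

def ensure_block_blank(read_sector, erase_block, block_no):
    first = block_no * SECTORS_PER_BLOCK
    for s in range(first, first + SECTORS_PER_BLOCK):
        if any(byte != 0xFF for byte in read_sector(s)):
            erase_block(block_no)  # erase only when actually needed
            return True            # an erase was performed
    return False                   # block was already blank
```

With a 4K block and 512-byte sectors this costs at most 8 reads, and skips the slow erase whenever the block is already blank.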
| |
doc_23529624
|
Here is the crash report
W/ActivityThread( 3835): handleWindowVisibility: no activity for token android.os.BinderProxy@d2a7401
D/AndroidRuntime( 3835): Shutting down VM
E/AndroidRuntime( 3835): FATAL EXCEPTION: main
E/AndroidRuntime( 3835): Process: com.asd.android, PID: 3835
E/AndroidRuntime( 3835): java.lang.NoSuchMethodError: No static method createProxyAuthIntent(Landroid/content/Context;Ljava/lang/String;Ljava/util/Collection;Ljava/lang/String;ZZLcom/facebook/login/DefaultAudience;Ljava/lang/String;Ljava/lang/String;)Landroid/content/Intent; in class Lcom/facebook/internal/NativeProtocol; or its super classes (declaration of 'com.facebook.internal.NativeProtocol' appears in /data/app/com.asd.android-yFArFXU1ndqk1qS4cpm6sA==/base.apk)
E/AndroidRuntime( 3835): at com.facebook.login.KatanaProxyLoginMethodHandler.tryAuthorize(KatanaProxyLoginMethodHandler.java:44)
E/AndroidRuntime( 3835): at com.facebook.login.LoginClient.tryCurrentHandler(LoginClient.java:264)
E/AndroidRuntime( 3835): at com.facebook.login.LoginClient.tryNextHandler(LoginClient.java:216)
E/AndroidRuntime( 3835): at com.facebook.login.LoginClient.authorize(LoginClient.java:121)
E/AndroidRuntime( 3835): at com.facebook.login.LoginClient.startOrContinueAuth(LoginClient.java:102)
E/AndroidRuntime( 3835): at com.facebook.login.LoginFragment.onResume(LoginFragment.java:160)
E/AndroidRuntime( 3835): at androidx.fragment.app.Fragment.performResume(Fragment.java:2649)
E/AndroidRuntime( 3835): at androidx.fragment.app.FragmentManagerImpl.moveToState(FragmentManagerImpl.java:922)
E/AndroidRuntime( 3835): at androidx.fragment.app.FragmentManagerImpl.moveFragmentToExpectedState(FragmentManagerImpl.java:1238)
E/AndroidRuntime( 3835): at androidx.fragment.app.FragmentManagerImpl.moveToState(FragmentManagerImpl.java:1303)
E/AndroidRuntime( 3835): at androidx.fragment.app.FragmentManagerImpl.dispatchStateChange(FragmentManagerImpl.java:2659)
E/AndroidRuntime( 3835): at androidx.fragment.app.FragmentManagerImpl.dispatchResume(FragmentManagerImpl.java:2625)
E/AndroidRuntime( 3835): at androidx.fragment.app.FragmentController.dispatchResume(FragmentController.java:268)
E/AndroidRuntime( 3835): at androidx.fragment.app.FragmentActivity.onResumeFragments(FragmentActivity.java:479)
E/AndroidRuntime( 3835): at androidx.fragment.app.FragmentActivity.onPostResume(FragmentActivity.java:468)
E/AndroidRuntime( 3835): at android.app.Activity.performResume(Activity.java:7317)
E/AndroidRuntime( 3835): at android.app.ActivityThread.performResumeActivity(ActivityThread.java:3776)
E/AndroidRuntime( 3835): at android.app.ActivityThread.handleResumeActivity(ActivityThread.java:3816)
E/AndroidRuntime( 3835): at android.app.servertransaction.ResumeActivityItem.execute(ResumeActivityItem.java:51)
E/AndroidRuntime( 3835): at android.app.servertransaction.TransactionExecutor.executeLifecycleState(TransactionExecutor.java:145)
E/AndroidRuntime( 3835): at android.app.servertransaction.TransactionExecutor.execute(TransactionExecutor.java:70)
E/AndroidRuntime( 3835): at android.app.ActivityThread$H.handleMessage(ActivityThread.java:1808)
E/AndroidRuntime( 3835): at android.os.Handler.dispatchMessage(Handler.java:106)
E/AndroidRuntime( 3835): at android.os.Looper.loop(Looper.java:193)
E/AndroidRuntime( 3835): at android.app.ActivityThread.main(ActivityThread.java:6669)
E/AndroidRuntime( 3835): at java.lang.reflect.Method.invoke(Native Method)
E/AndroidRuntime( 3835): at com.android.internal.os.RuntimeInit$MethodAndArgsCaller.run(RuntimeInit.java:493)
E/AndroidRuntime( 3835): at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:858)
E/CrashlyticsCore( 3835): Can't find SessionUser data for session ID 5FB67E1B0149-0001-7208-82D4D8267C33
I/Process ( 3835): Sending signal. PID: 3835 SIG: 9
| |
doc_23529625
|
What I wanted is to print the illegal html tag(s) as is and activate/perform(I don't know the right term) the html tags I have allowed.
Any suggestions?
A: This is the best solution I came up with.
First replace all the < and > with their HTML entities, then replace back the allowed tags.
<?php
$original_str = "<html><b>test</b><strong>teste</strong></html>";
$allowed_tags = array("b", "strong");
$sans_tags = str_replace(array("<", ">"), array("&lt;", "&gt;"), $original_str);
$regex = sprintf("~&lt;(/)?(%s)&gt;~", implode("|", $allowed_tags));
$with_allowed = preg_replace($regex, "<\\1\\2>", $sans_tags);
echo $with_allowed;
echo "\n";
Result:
guax@trantor:~$ php teste.php
&lt;html&gt;<b>test</b><strong>teste</strong>&lt;/html&gt;
I wonder if there's any solution for replacing all at once. But it works.
| |
doc_23529626
|
IF EXISTS (SELECT * FROM table WHERE id = @id)
BEGIN
UPDATE table
SET stock = stock + @stock
WHERE id = @id
END
ELSE
BEGIN
INSERT INTO [table] ([id], [name], [stock])
VALUES (@id, @name, @stock)
END
But this code isn't working and I am unable to find the root cause. Can someone please help me?
A: table is a reserved keyword, so I guess you have a trivial syntax error: Incorrect syntax near the keyword 'table'. Wrap it in [], as you already did for the INSERT statement:
IF EXISTS (
SELECT * FROM [table] WHERE id = @id)
BEGIN
UPDATE [table] SET stock = stock + @stock
WHERE id = @id
END
ELSE
BEGIN
INSERT INTO [table] ([id]
,[name]
,[stock])
VALUES
(
@id,@name,@stock
)
END
A: I do not see any error in your code; I tried to replicate the process and it is working fine for me. Can you tell me what error you are facing exactly?
The following is the code I tried to replicate your scenario:
CREATE TABLE stocks (
id INT
,NAME VARCHAR(100)
,stock BIGINT
)
CREATE PROCEDURE InsertStocks @id INT
,@name VARCHAR(100)
,@stock BIGINT
AS
BEGIN
IF EXISTS (
SELECT *
FROM stocks
WHERE id = @id
)
BEGIN
UPDATE stocks
SET stock = stock + @stock
WHERE id = @id
END
ELSE
BEGIN
INSERT INTO stocks (
[id]
,[name]
,[stock]
)
VALUES (
@id
,@name
,@stock
)
END
END
INSERT INTO stocks
VALUES (
1
,'abc'
,200
)
INSERT INTO stocks
VALUES (
2
,'abc'
,300
)
INSERT INTO stocks
VALUES (
3
,'abc'
,500
)
EXEC Insertstocks 1
,'abc'
,700
This is updated successfully in my case.
A: Your code and syntax are correct. Let's see a sample example:
if EXISTS(select * from dbo.tbName where Id=1)
BEGIN
print 1
END
ELSE
BEGIN
print 2
END
| |
doc_23529627
|
Specifically, how can we use the result and save it in another array?
Take into consideration that this is for iOS.
A: an example:
NSMutableDictionary * params = [NSMutableDictionary dictionaryWithObjectsAndKeys:
@"SELECT uid,name FROM user WHERE uid=4", @"query",
nil];
[facebook requestWithMethodName: @"fql.query"
andParams: params
andHttpMethod: @"POST"
andDelegate: self];
| |
doc_23529628
|
e.g.
3 -> 10
432 -> 1,000
241,345 -> 1,000,000
Is there a way to get it in a single O(1) line?
A simple way I can see is to use a for loop and increment the power of ten until n / (10 ^ i) < 1, but then that isn't O(1) and is O(log n) instead. (well I'm taking a guess it's log n as it involves a power!)
A: If you're looking for a string, you can use Math.log10 to find the right index into an array:
// Do more of these in reality, of course...
private static final String[] MESSAGES = { "1", "10", "100", "1,000", "10,000" };
public static final String roundUpToPowerOf10(int x) {
return MESSAGES[(int) Math.ceil(Math.log10(x))];
}
If you want it to return the integer with the right value, you can use use Math.pow:
public static final int roundUpToPowerOf10(int x) {
return (int) Math.pow(10, Math.ceil(Math.log10(x)));
}
A: Try
double input = ...
double output = Math.pow(10, Math.ceil(Math.log10(input)));
You can then cast your output to an integer. The number of operations is constant, so O(1) is guaranteed for a single input.
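The same ceil-of-log10 idea can be checked quickly in Python:

```python
import math

def round_up_to_power_of_10(x):
    # ceil(log10(x)) gives the exponent of the next power of ten
    return 10 ** math.ceil(math.log10(x))

print(round_up_to_power_of_10(3))        # 10
print(round_up_to_power_of_10(432))      # 1000
print(round_up_to_power_of_10(241_345))  # 1000000
```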
| |
doc_23529629
|
When the Simple Add button is clicked, why does the component render twice?
This causes problems when the state has nested data and arrays, because each render causes the event to be handled multiple times (see https://stackblitz.com/edit/react-ts-t7ynzi?file=App.tsx)
How can I prevent the re-rendering and duplicate processing of the onClick event?
interface SimpleFormProps {
data: number;
onAdd: () => void;
}
const SimpleForm = ({ data, onAdd }: SimpleFormProps) => {
return (
<form>
{data}
<button type="button" onClick={() => onAdd()}>
Simple Add
</button>
</form>
);
};
export default function App() {
const [simpleData, setSimpleData] = React.useState(10);
const handleSimpleAdd = () => {
setSimpleData((prev) => {
const newData = prev + 1;
// expect to be called once each click,
// but actually it is called twice each click
console.log('handleSimpleAdd');
return newData;
});
};
return (
<div>
<SimpleForm data={simpleData} onAdd={() => handleSimpleAdd()} />
</div>
);
}
A: Change this:
class ComplexFormData {
stuff: number[];
constructor(prev: ComplexFormData | undefined = undefined) {
if (prev) {
this.stuff = prev.stuff;
} else {
this.stuff = [];
}
}
}
to this:
class ComplexFormData {
stuff: number[];
constructor(prev: ComplexFormData | undefined = undefined) {
if (prev) {
this.stuff = [...prev.stuff];
} else {
this.stuff = [];
}
}
}
The problem is that you are modifying the existing reference instead of creating a new one which causes bugs like that
fixed example
| |
doc_23529630
|
Heres my code:
- (void)loginToMistar {
//Create POST request
NSMutableURLRequest *request = [[NSMutableURLRequest alloc] init];
//Create and send request
NSURL *url = [NSURL URLWithString:@"https://mistar.oakland.k12.mi.us/novi/StudentPortal/Home/Login"];
NSURLRequest *urlRequest = [NSURLRequest requestWithURL:url];
NSString *postString = [NSString stringWithFormat:@"Pin=%@&Password=%@", @"20005012", @"wildcats"];
NSData * postBody = [postString dataUsingEncoding:NSUTF8StringEncoding];
[request setHTTPBody:postBody];
[request addValue:@"application/x-www-form-urlencoded; charset=utf-8" forHTTPHeaderField:@"Content-Type"];
NSOperationQueue *queue = [[NSOperationQueue alloc] init];
[NSURLConnection sendAsynchronousRequest:urlRequest queue:queue completionHandler:^(NSURLResponse *response, NSData *data, NSError *error) {
// do whatever with the data...and errors
if ([data length] > 0 && error == nil) {
NSString *loggedInPage = [[NSString alloc] initWithData:data encoding:NSUTF8StringEncoding];
NSLog(loggedInPage);
}
else {
NSLog(@"error");
}
}];
}
It logs in, and returns the HTML content of the page saying my login failed. So it succeeds in passing a request, but it fails and gives me the Error with login page instead.
What's wrong with my request?
A: Since you are posting data to the server you must set the HTTP method on your NSMutableURLRequest object -
[request setHTTPMethod:@"POST"];
Note also that you build the body on the mutable request but then send urlRequest, which has no body; pass request to sendAsynchronousRequest: instead. The default HTTP method is "GET". Read more -
https://developer.apple.com/library/ios/documentation/cocoa/reference/foundation/Classes/NSMutableURLRequest_Class/Reference/Reference.html#//apple_ref/occ/instm/NSMutableURLRequest/setHTTPMethod:
A: If you use the convenience API of NSURLConnection, sendAsynchronousRequest:queue:completionHandler: and require authentication, you MUST pass the credentials within the URL in some appropriate way. This implies that the server supports this naive authentication method.
In order to have more control over the authentication process, you should use NSURLSession or NSURLConnection using delegates.
See also: sendAsynchronousRequest:queue:completionHandler:
and URL Loading System Programming Guide
| |
doc_23529631
|
{
CancellationTokenSource cts = new CancellationTokenSource();
ThreadPool.QueueUserWorkItem(o => DoWork(cts.Token, 100));
Thread.Sleep(500);
try
{
cts.Token.Register(CancelCallback3);
cts.Token.Register(CancelCallback2);
cts.Token.Register(CancelCallback1);
cts.Cancel(false);
}
catch (AggregateException ex)
{
foreach (Exception curEx in ex.InnerExceptions)
{
Trace.WriteLine(curEx.ToString());
}
}
Console.ReadKey();
}
private static void CancelCallback1()
{
Trace.WriteLine("CancelCallback1 was called");
throw new Exception("CancellCallback1 exception");
}
private static void CancelCallback2()
{
Trace.WriteLine("CancelCallback2 was called");
throw new Exception("CancellCallback2 exception");
}
private static void CancelCallback3()
{
Trace.WriteLine("CancelCallback3 was called");
}
private static void DoWork(CancellationToken cancellationToken, int maxLength)
{
int i = 0;
while (i < maxLength && !cancellationToken.IsCancellationRequested)
{
Trace.WriteLine(i++);
Thread.Sleep(100);
}
}
The output is:
0
1
2
3
4
CancelCallback1 was called
According to http://msdn.microsoft.com/en-us/library/dd321703.aspx I expected to get an AggregateException; it looks like the throwOnFirstException parameter doesn't make any sense here. What's wrong with my code?
A: You need to use the Task<> class to get an AggregateException. It is a substitute for ThreadPool.QueueUserWorkItem().
A: The problem was my lack of debugging experience in Visual Studio: my debugger settings were set to break on the first exception thrown.
FYI, CancellationTokenSource.Cancel(false) works fine with the ThreadPool as well as with Tasks.
| |
doc_23529632
|
When I know the value, I use the following command:
$RCLOCAL= "option rclocal '0'" or
$RCLOCAL= "option rclocal '1'"
sed -i "s/option rclocal ''/$RCLOCAL/g" test
| |
doc_23529633
|
@SuppressWarnings({ "rawtypes", "unchecked" })
@Override
public <T extends EssentialFilter> Class<T>[] filters() {
Class[] filters = { CSRFFilter.class };
return filters;
}
Which is fine in most cases. But I want to set up a Facebook Canvas page which points to our website. The thing is Facebook sends a POST request to our site, and it is blocked by the CSRF check, always returning "Invalid CSRF Token".
So I want to selectively disable CSRF check in some actions say www.ourwebsite.com/canvas
Is this feasible?
A: I created a blog post on how to do this, see here:
http://dominikdorn.com/2014/07/playframework-2-3-global-csrf-protection-disable-csrf-selectively/
2017-Update: Starting with PlayFramework 2.6, this is now included in the Framework itself: https://www.playframework.com/documentation/2.6.x/JavaCsrf#applying-a-global-csrf-filter
| |
doc_23529634
|
With what I have so far, the item rotates towards the cursor when I move the cursor down or right, but it does not work properly when moving up or left.
I have seen somewhere that atan assumes a coordinate system with +x right and +y up, whereas screen coordinates have +y pointing down. I am unsure of how to proceed from there.
I have inserted my code below and replaced the image with a white rectangle.
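For reference, this is how atan2 behaves when fed screen coordinates, where +y points down; a standalone Python check, independent of the jQuery code:

```python
import math

def screen_degrees(dx, dy):
    # dx, dy measured from the rotation centre; +y is DOWN on screen,
    # so positive angles run clockwise rather than counter-clockwise.
    return math.degrees(math.atan2(dy, dx))

print(screen_degrees(1, 0))    # 0.0    cursor to the right
print(screen_degrees(0, 1))    # 90.0   cursor below (not above!)
print(screen_degrees(-1, 0))   # 180.0  cursor to the left
print(screen_degrees(0, -1))   # -90.0  cursor above
```

So atan2 itself handles all four quadrants; only the direction of rotation is mirrored compared with the textbook convention.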
var click_degrees;
const $dial = $('#item');
const radius = $dial.outerWidth();
const center_x = $dial.offset().left + radius;
const center_y = $dial.offset().top + radius;
function get_degrees(mouse_x, mouse_y) {
const radians = Math.atan2(mouse_y - center_y, mouse_x - center_x);
const degrees = radians * (180 / Math.PI);
return degrees;
}
var dragItem = document.querySelector("#item");
var active = false;
var currentX;
var currentY;
var initialX;
var initialY;
var xOffset = 0;
var yOffset = 0;
container.addEventListener("mousedown", dragStart, false);
container.addEventListener("mouseup", dragEnd, false);
container.addEventListener("mousemove", drag, false);
function dragStart(e) {
click_degrees = get_degrees(e.pageX, e.pageY);
initialX = e.clientX - xOffset;
initialY = e.clientY - yOffset;
if (e.target === dragItem) {
active = true;
}
}
function dragEnd(e) {
initialX = currentX;
initialY = currentY;
active = false;
}
function drag(e) {
if (active) {
e.preventDefault();
const degrees = get_degrees(e.pageX, e.pageY) - click_degrees;
currentX = e.clientX - initialX;
currentY = e.clientY - initialY;
xOffset = currentX;
yOffset = currentY;
setTranslate(currentX, currentY, dragItem, degrees);
}
}
function setTranslate(xPos, yPos, el, deg) {
el.style.transform = "translate3d(" + xPos + "px, " + yPos + "px, 0) rotate(" + deg + "deg)";
}
#container {
width: 100%;
height: 1000px;
background-color: #333;
overflow: hidden;
touch-action: none;
}
#item {
touch-action: none;
user-select: none;
transform-origin: 0 50%;
width: 100px;
height: 50px;
background:#fff;
}
<!DOCTYPE html>
<html>
<head>
<meta name="viewport" content="width=device-width,
initial-scale=1.0,
user-scalable=no" />
<title>Drag/Drop/Bounce</title>
<script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script>
</head>
<body>
<div id="outerContainer">
<div id="container">
<div id="item"></div>
</div>
</div>
</body>
</html>
Please let me know if you would like for me to clarify anything or if I should add more information.
| |
doc_23529635
|
x = [1,2,3]
y = [4,5,6]
x = y
print x
print y
y.remove(4)
print x
print y
When I remove 4 from the list [4,5,6], both the x and y variables will point to the same [5,6] list in memory am I right?
If so, then how can I make x equal to a replica of the y list so that x and y both point to different places in memory yet those places both hold the same value? (As opposed to them both pointing to the same place in memory as seen above)
A: For lists in particular, you can make a shallow copy by taking a slice of the whole list:
x = y[:]
This isn't guaranteed to work for arbitrary sliceable objects (e.g., numpy array slices don't create a new array), so it can be useful to use the builtin copy module:
import copy
x = copy.copy(y)
can be expected to work for an arbitrary y.
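Note that every option above produces a shallow copy: the outer list is new, but nested objects are still shared. A quick sketch of the difference from copy.deepcopy:

```python
import copy

y = [[1, 2], [3, 4]]
shallow = copy.copy(y)   # new outer list, same inner lists
deep = copy.deepcopy(y)  # recursively copied, fully independent

y[0].append(99)

print(shallow[0])  # [1, 2, 99]  inner list is shared with y
print(deep[0])     # [1, 2]      deep copy is unaffected
```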
A: Just clone the list:
x = list(y)
A: Make a copy:
>>> x = [1,2,3]
>>> y = [4,5,6]
>>> x = y[:]
>>> y.remove(4)
>>> y
[5, 6]
>>> x
[4, 5, 6]
A: x = [1,2,3]
y = [4,5,6]
x = y[:]
print x
print y
y.remove(4)
print x
print y
So [:] makes a copy of a list.
| |
doc_23529636
|
class X: pass
def f(self): pass
x = X()
x.f = f.__get__(x)
What I want to know is where this behavior is specified in the reference. Here's the closest I've found:
PEP 590
Descriptor HowTo Guide
I'd like to know if this behavior is in fact specified in the language reference somewhere.
It seems like an important enough use case to be guaranteed by the documentation (i.e. it's not clear if what appears in a HowTo demonstrates a guaranteed language feature or makes use of an implementation detail, and I'd like to think that, in principle, all guaranteed functionality can be deduced from the spec without referring to PEPs).
A: You're probably looking for this bit:
object.__get__(self, instance, owner=None)
Called to get the attribute of the owner class (class attribute access) or of an instance of that class (instance attribute access). The optional owner argument is the owner class.
You're essentially calling function.__get__, whose rather simple implementation (in CPython anyway) is here; it basically calls PyMethod_New, which basically just binds a function with a self.
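A short check of what function.__get__ returns; types.MethodType is the documented constructor that produces the same kind of bound-method object:

```python
import types

class X:
    pass

def f(self):
    return type(self).__name__

x = X()

bound = f.__get__(x)                 # descriptor protocol, invoked by hand
also_bound = types.MethodType(f, x)  # documented way to build a bound method

print(bound())              # X
print(also_bound())         # X
print(bound.__self__ is x)  # True
print(bound.__func__ is f)  # True
```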
| |
doc_23529637
|
class Device < ActiveRecord::Base
has_many :prices
has_many :commercial_offers, :through => :prices
end
class CommercialOffer < ActiveRecord::Base
has_many :prices
has_many :devices, :through => :prices
end
class Price < ActiveRecord::Base
belongs_to :device
belongs_to :commercial_offer
end
commercial_offer/_form.html.erb
<% for device in Device.find(:all) %>
<div>
<%= check_box_tag "commercial_offer[device_ids][]", device.id, @commercial_offer.devices.include?(device) %>
<%= device.name %>
<%= form_for( @price) do |f| %>
<%= f.label :price %><br />
<%= f.text_field :price %>
I get "undefined model name" for <%= form_for(@price) do |f| %>
A: You can accept nested attributes on the Device model like so:
# app/models/device.rb
class Device < ActiveRecord::Base
has_many :prices
accepts_nested_attributes_for :prices
end
# app/controllers/devices_controller.rb
class DevicesController < ApplicationController
def index
@devices = Device.all
end
end
# app/views/devices/_form.html.erb
<% @devices.each do |device| %>
<div>
<%= check_box_tag "commercial_offer[device_ids][]", device.id, @commercial_offer.devices.include?(device) %>
<%= form_for device do |f| %>
<%= device.name %>
<%= f.fields_for :prices do |p| %>
<%= p.label :price %><br />
<%= p.text_field :price %>
<% end %>
<%= f.submit "Submit" %>
<% end %>
</div>
<% end %>
It's a pretty clean way to include children objects in forms.
| |
doc_23529638
|
For deployment, I want to know how to build a LoopBack 3 project so that I can use the build commands in my bitbucket.yml file.
I checked the documentation, but for LB3 there is nothing about building the project. I found this in the documentation: Preparing-for-deployment. But I am not sure how I can use this in the yml file.
For LoopBack 4 we can use @loopback/build, and it works fine there. But I couldn't find anything for LoopBack 3.
Is there any other way to build a LoopBack 3 project?
Thanks in advance!
A: I didn't find anything for creating a bundle of my LoopBack 3 app; you can't bundle LB3. You can only run the server.js file, and that's what I did using PM2. In AZURE_EXTENSION_COMMAND below you can see that I pulled the code from my branch and ran the server.js file from it.
I used the following in my bitbucket.yml:
pipelines:
  branches:
    master:
      - step:
          script:
            - npm install
            - npm run posttest
      - step:
          name: Deploy to master
          deployment: production
          script:
            - echo "Deploying to master"
            - pipe: microsoft/azure-vm-linux-script-deploy:1.0.1
              variables:
                AZURE_APP_ID: '<appid>'
                AZURE_PASSWORD: '<pass>'
                AZURE_TENANT_ID: '<tenantid>'
                AZURE_RESOURCE_GROUP: '<rg>'
                AZURE_VM_NAME: '<vm name>'
                AZURE_EXTENSION_COMMAND: 'cd <path to my folder> && git remote add origin <my repo> && git pull origin master && npm install -g npm && npm install && sudo -E pm2 start server/server.js'
And in my package.json I have used below script for auditing:
"scripts": {
"posttest": "npm run lint && npm audit --audit-level high"
}
And it is working fine.
I am not sure if this is the right method, but I found it useful.
Hope it can help someone as well.
Thanks!
A: You can't build a LoopBack 3 server; you can only run it.
To run a LoopBack server you simply use npm start, node . or even node server/server.
Your posttest script runs a linter and an audit, not the actual server.
What runs your server is not the script in package.json; it's the AZURE_EXTENSION_COMMAND part.
It runs pm2 start server/server.js, where pm2 is a process manager that runs your Node server.
Using pm2 is correct, and making a separate step for testing and linting is also correct; the problem is that you are confusing which part plays which role.
This resulted in a response to the wrong question.
| |
doc_23529639
|
Language: Objective-C
When the status bar's height is 40px and we run the code, an error like the following occurs:
Unable to simultaneously satisfy constraints.
Probably at least one of the constraints in the following list is one you don't want.
Try this:
(1) look at each constraint and try to figure out which you don't expect;
(2) find the code that added the unwanted constraint or constraints and fix it.
[
<NSLayoutConstraint:0x7fe903dd5940 UIInputSetContainerView:0x7fe903dd2e50.top == UITextEffectsWindow:0x7fe903e554b0.top + 20>,
<NSLayoutConstraint:0x7fe903d82c00 UIInputSetContainerView:0x7fe903dd2e50.top == UITextEffectsWindow:0x7fe903e554b0.top>
]
Will attempt to recover by breaking constraint
<NSLayoutConstraint:0x7fe903dd5940 UIInputSetContainerView:0x7fe903dd2e50.top == UITextEffectsWindow:0x7fe903e554b0.top + 20>
So what should I do? The window of the AppDelegate changed position.
normal situation
problem
| |
doc_23529640
|
date X
2012-10-02 2210
2012-10-02 2215
2012-10-03 410
2012-10-03 430
2012-10-03 535
2012-10-03 550
2012-10-04 555
2012-10-04 600
2012-10-04 605
2012-10-04 610
How do I aggregate/group on date and select only the last value of X in R?
date X
2012-10-02 2215
2012-10-03 550
2012-10-04 610
If I needed to sum X by date, I could use the aggregate function:
aggregate(x, by=list(x=date), FUN=sum)
But my requirement is to select only the last row from each group. How can I do this?
A: You can try
library(data.table)
setDT(df1)[,list(X=X[.N]) , date]
# date X
#1: 2012-10-02 2215
#2: 2012-10-03 550
#3: 2012-10-04 610
Or using base R
aggregate(X~date, df1,FUN=tail,1)
# date X
#1 2012-10-02 2215
#2 2012-10-03 550
#3 2012-10-04 610
A: Or using dplyr:
library(dplyr)
df %>%
group_by(date) %>%
slice(n()) # selects only the last row (nth row of n total) within each subgroup
Produces:
Source: local data frame [3 x 2]
Groups: date
date X
1 2012-10-02 2215
2 2012-10-03 550
3 2012-10-04 610
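For completeness, a base-R alternative without any packages, assuming the data frame is named df1 as in the first answer:

```r
# duplicated(..., fromLast = TRUE) flags every row that has a later row with
# the same date, so negating it keeps exactly the last row per group.
df1 <- data.frame(date = c("2012-10-02", "2012-10-02", "2012-10-03", "2012-10-03"),
                  X    = c(2210, 2215, 410, 550))
df1[!duplicated(df1$date, fromLast = TRUE), ]
#        date    X
# 2 2012-10-02 2215
# 4 2012-10-03  550
```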
| |
doc_23529641
|
I have 2 web servers on my home network. My main server is at 192.168.18.2 and has 4 virtual domains that have been working fine for years. My home network DNS server gives out 192.168.18.2 for queries of larkat.com, www.larkat.com and emailserver.larkat.com.
My email server, in addition to postfix and dovecot, runs Roundcube on an internal web server. The mail server is at 192.168.18.12. There is no SMTP or IMAP access to this server from or to the internet; delivery of email in and out is handled through other secure means.
Here is my reverse proxy config on the main server:
<VirtualHost *:80>
ServerName emailserver.larkat.com
ProxyRequests off
ProxyPreserveHost On
ProxyPass "/" "http://192.168.18.12:80/"
ProxyPassReverse "/" "http://192.168.18.12:80/"
</VirtualHost>
Instead of getting the email server, I am getting the web site at larkat.com.
All computers run Debian Linux.
What am I not doing correctly?
| |
doc_23529642
|
I have a list of datasets where each of those sets should have a row length of 5, one row for each month (Jan-May). It should look like this:
data.frame(name = rep("B", 5),
doc_month = c("2022.01", "2022.02", "2022.03", "2022.04", "2022.05"),
i_name = rep("Aa",5),
aggregation = rep("34", 5))
But some of my datasets don't have data for certain months, or are completely empty, and therefore have a shorter row length or no rows at all. Like this:
data.frame(name = "A",
doc_month = "2022.01",
i_name = "Aa",
aggregation = "34")
I would like to extend each dataset, even the empty ones, with the missing months, copy all the other information into the row, and put a 0 for aggregation.
I tried to use extend and complete from tidyr but couldn't make it work.
A: Use tidyr's complete, with purrr's reduce to bind several dataframes together.
I also tweaked aggregation = rep(34, 5) so the column is numeric.
library(tidyverse)
df1 <- data.frame(name = rep("B", 5),
doc_month = c("2022.01", "2022.02", "2022.03", "2022.04", "2022.05"),
i_name = rep("Aa",5),
aggregation = rep(34, 5))
df2 <- data.frame(name = "A",
doc_month = "2022.01",
i_name = "Aa",
aggregation = 34)
reduce(list(df1, df2, df1), bind_rows) |>
complete(doc_month, nesting(name, i_name), fill = list(aggregation = 0))
#> # A tibble: 15 × 4
#> doc_month name i_name aggregation
#> <chr> <chr> <chr> <dbl>
#> 1 2022.01 A Aa 34
#> 2 2022.01 B Aa 34
#> 3 2022.01 B Aa 34
#> 4 2022.02 A Aa 0
#> 5 2022.02 B Aa 34
#> 6 2022.02 B Aa 34
#> 7 2022.03 A Aa 0
#> 8 2022.03 B Aa 34
#> 9 2022.03 B Aa 34
#> 10 2022.04 A Aa 0
#> 11 2022.04 B Aa 34
#> 12 2022.04 B Aa 34
#> 13 2022.05 A Aa 0
#> 14 2022.05 B Aa 34
#> 15 2022.05 B Aa 34
Created on 2022-06-10 by the reprex package (v2.0.1)
A: You could create a skeleton dataset with the five months and then join it to each of your partial datasets.
library(dplyr)
library(tidyr)
data_A <- data.frame(name = "A",
doc_month = "2022.01",
i_name = "Aa",
aggregation = "34")
reference <- data.frame(doc_month = c("2022.01", "2022.02", "2022.03", "2022.04", "2022.05"))
data_A |>
full_join(reference, by = "doc_month") |>
mutate(aggregation = replace_na(aggregation, "0")) |>
fill(name, i_name)
Output:
#> name doc_month i_name aggregation
#> 1 A 2022.01 Aa 34
#> 2 A 2022.02 Aa 0
#> 3 A 2022.03 Aa 0
#> 4 A 2022.04 Aa 0
#> 5 A 2022.05 Aa 0
Created on 2022-06-10 by the reprex package (v2.0.1)
| |
doc_23529643
|
def func():
try:
result = calculate()
finally:
try:
cleanup()
except Exception:
pass
return result
There is a warning about Local variable 'result' might be referenced before assignment:
But I can't really see how that's possible. One of these must be true:
*
*calculate() raises an exception --> the return statement will never get reached, so result is not referenced again
*calculate() does not raise an exception --> result is successfully assigned, and the return statement returns that value
How would you ever get result referenced before assignment? Is there an implementation of calculate and cleanup which could demonstrate that happening?
A: This is a false positive of PyCharm's warning heuristics. As per the Python language specification, the code behaves as you describe, and result can only be reached when it has been set.
According to 8.4 in the Python documentation:
If the finally clause executes a return, break or continue statement, the saved exception is discarded:
>>> def f():
... try:
... 1/0
... finally:
... return 42
...
>>> f()
42
The Python interpreter will only discard the exception raised by calculate() if the finally block contains a return, break, or continue statement.
Since the finally block in your implementation contains none of those statements, the exception raised by calculate() is not discarded, so the return statement (and with it the reference to result) is never reached. The warning is therefore a false positive.
| |
doc_23529644
|
in my controller:
$state = false;
return redirect()->route('newPr')->with('errorMessageDuration', 'error', $state);
and in my view I try to get it like this:
<input id="PrId" type="text" value="{{ isset($state) ? $state : '' }}">
but somehow it does not work, it keeps being empty.
I want it to be saved permanently.
A: Use withErrors() with an associative array instead, like so:
return redirect()->route('newPr')->withErrors(compact('state'));
Then you can simply use the $errors variable in your view.
From the docs:
The $errors variable is always defined and can be safely used. The
$errors variable will be an instance of Illuminate\Support\MessageBag.
For more information on working with this object, check out its
documentation.
A: Try displaying it from the session:
<input id="PrId" type="text" value="{{ session('state', '') }}">
A: Use sessions.
$request->session()->flash('error', 'Some error message');
return redirect()->route('newPr');
And in the view:
@if (session('error'))
The error is {{ session('error') }}
@endif
A: The with method takes 2 parameters, the first being the key and the second the value; or, if the first parameter is an array, its keys and values become the session keys and values.
So in your case you can do ->with('state', $state) or ->with(compact('state')).
The with method flashes this value to the session, so to retrieve it you need to get it from the session like this: session('state')
You can pass any value in this manner.
A: You can't send a variable with redirect directly, but you can try the session.
Take a look at my old answer here: Trying to pass a variable to a view, but nothing is shown in the view - laravel
| |
doc_23529645
|
The problem is that I cannot pass parameters to the new page. For example, I crash when passing title. I am also looking for examples of sending multiple parameters, such as a map.
REQUEST: Could someone modify the _gotoThirdPage() method to send parameters to the SecondPage class?
//====================. Code below ===================
import 'package:flutter/material.dart';
void main() => runApp(new MyApp());
class MyApp extends StatelessWidget {
// This widget is the root of your application.
@override
Widget build(BuildContext context) {
return new MaterialApp(
title: 'Flutter Demo',
theme: new ThemeData(
primarySwatch: Colors.blue,
),
home: new MyHomePage(title: 'Flutter Demo Home Page'),
// Added ===
routes: <String, WidgetBuilder>{
SecondPage.routeName: (BuildContext context) => new SecondPage(title: "SecondPage"),
},
);
}
}
class MyHomePage extends StatefulWidget {
MyHomePage({Key key, this.title}) : super(key: key);
final String title;
@override
_MyHomePageState createState() => new _MyHomePageState();
}
class _MyHomePageState extends State<MyHomePage> {
int _counter = 0;
void _incrementCounter() {
setState(() {
_counter++;
});
}
@override
Widget build(BuildContext context) {
return new Scaffold(
appBar: new AppBar(
title: new Text(widget.title),
),
body: new Center(
child: new Column(
mainAxisAlignment: MainAxisAlignment.center,
children: <Widget>[
new RaisedButton(
onPressed: _gotoSecondPage,
child: new Text("Goto SecondPage- Normal"),
),
new RaisedButton(onPressed: _gotoThirdPage,
child: new Text("goto SecondPage = with PRB"),
),
new Text(
'You have pushed the button this many times:',
),
new Text(
'$_counter',
style: Theme.of(context).textTheme.display1,
),
],
),
),
floatingActionButton: new FloatingActionButton(
onPressed: _incrementCounter,
tooltip: 'Increment',
child: new Icon(Icons.add),
), // This trailing comma makes auto-formatting nicer for build methods.
);
}
void _gotoSecondPage() {
//=========================================================
// Transition - Normal Platform Specific
print('==== going to second page ===');
print( SecondPage.routeName);
Navigator.pushNamed(context, SecondPage.routeName);
}
void _gotoThirdPage() {
//=========================================================
// I believe this is where I would be adding a PageRouteBuilder
print('==== going to second page ===');
print( SecondPage.routeName);
//Navigator.pushNamed(context, SecondPage.routeName);
//=========================================================
//=== please put PageRouteBuilderCode here.
final pageRoute = new PageRouteBuilder(
pageBuilder: (BuildContext context, Animation animation,
Animation secondaryAnimation) {
// YOUR WIDGET CODE HERE
// I need to PASS title here...
// not sure how to do this.
// Also, is there a way to clean this code up?
return new SecondPage();
},
transitionsBuilder: (BuildContext context, Animation<double> animation,
Animation<double> secondaryAnimation, Widget child) {
return SlideTransition(
position: new Tween<Offset>(
begin: const Offset(1.0, 0.0),
end: Offset.zero,
).animate(animation),
child: new SlideTransition(
position: new Tween<Offset>(
begin: Offset.zero,
end: const Offset(1.0, 0.0),
).animate(secondaryAnimation),
child: child,
),
);
},
);
Navigator.of(context).push(pageRoute);
}
}
class SecondPage extends StatefulWidget {
SecondPage({Key key, this.title}) : super(key: key);
static const String routeName = "/SecondPage";
final String title;
@override
_SecondPageState createState() => new _SecondPageState();
}
/// // 1. After the page has been created, register it with the app routes
/// routes: <String, WidgetBuilder>{
/// SecondPage.routeName: (BuildContext context) => new SecondPage(title: "SecondPage"),
/// },
///
/// // 2. Then this could be used to navigate to the page.
/// Navigator.pushNamed(context, SecondPage.routeName);
///
class _SecondPageState extends State<SecondPage> {
@override
Widget build(BuildContext context) {
return new Scaffold(
appBar: new AppBar(
//======== HOW TO PASS widget.title ===========
title: new Text(widget.title),
//title: new Text('==== second page ==='),
),
body: new Container(),
floatingActionButton: new FloatingActionButton(
onPressed: _onFloatingActionButtonPressed,
tooltip: 'Add',
child: new Icon(Icons.add),
),
);
}
void _onFloatingActionButtonPressed() {
}
}
A: I am not quite sure what you are trying to do. The easiest way to pass parameters to a new page, for me, is using the constructor of the new page if I need to show, use, or pass them later on, although I don't think it is appropriate if you have a large number of parameters.
So in some cases I would use SharedPreferences. I think this is better for sending multiple parameters, such as a map.
By the way, for your navigation I advise you to use named routes. Something like:
MaterialApp(
// Start the app with the "/" named route. In our case, the app will start
// on the FirstScreen Widget
initialRoute: '/',
routes: {
// When we navigate to the "/" route, build the FirstScreen Widget
'/': (context) => FirstScreen(),
// When we navigate to the "/second" route, build the SecondScreen Widget
'/second': (context) => SecondScreen(),
},
);
And then:
Navigator.pushNamed(context, '/second');
Documentation.
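To illustrate the constructor approach inside the asker's _gotoThirdPage(), a minimal sketch; the title string below is made up, and SecondPage already accepts title in its constructor, so it can be passed directly in pageBuilder:

```dart
// Hedged sketch: pass the title through SecondPage's existing constructor.
final pageRoute = new PageRouteBuilder(
  pageBuilder: (BuildContext context, Animation<double> animation,
      Animation<double> secondaryAnimation) {
    // The constructor carries the parameter; add more fields (or a Map
    // field) to SecondPage for multiple values.
    return new SecondPage(title: "Second Page (via PageRouteBuilder)");
  },
  // ...keep the transitionsBuilder from the question unchanged...
);
Navigator.of(context).push(pageRoute);
```

Inside _SecondPageState, widget.title is then non-null and safe to use in the AppBar.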
| |
doc_23529646
|
Code in c# forming html:
.....
names.Add("<tr><td>");
names.Add("\u2022 " + collection.name);
names.Add("</td></tr>");
.....
I tried \u2022 with a little space left at the end, as in the code above,
but I am not able to get a space between the bullet and the text.
How do I get a space between the bullet and the text?
A: You probably need to use &nbsp; for adding space in HTML:
names.Add("&nbsp;" + collection.name);
A: Have you tried the &nbsp; HTML entity for spacing?
| |
doc_23529647
|
Rails version: 3.0.9
Since the update some of the existing Chinese strings in the database are no longer displayed correctly. This does not affect all strings, but only those that are part of a serialized hash. All other strings that are stored as plain strings still appear to be correct.
Example:
This is a serialized hash that is stored as a UTF8 string in the database:
broken = "--- !map:ActiveSupport::HashWithIndifferentAccess \ncheckbox: \"1\"\nchoice: \"Round Paper Clips \\xEF\\xBC\\x88\\xE5\\x9B\\x9E\\xE5\\xBD\\xA2\\xE9\\x92\\x88\\xEF\\xBC\\x89\\r\\n\"\ninfo: \"10\\xE7\\x9B\\x92\"\n"
In order to convert this string to a ruby hash, I deserialize it with YAML.load:
broken_hash = YAML.load(broken)
This returns a hash with garbled contents:
{"checkbox"=>"1", "choice"=>"Round Paper Clips ï¼\u0088å\u009B\u009Eå½¢é\u0092\u0088ï¼\u0089\r\n", "info"=>"10ç\u009B\u0092"}
The garbled stuff is supposed to be UTF8-encoded Chinese. broken_hash['info'].encoding tells me that ruby thinks this is #<Encoding:UTF-8>. I disagree.
Interestingly, all other strings that were not serialized before look fine, however. In the same record a different field contains Chinese characters that look just right---in the rails console, the psql console, and the browser. Every string---no matter if serialized hash or plain string---saved to the database since the update looks fine, too.
I tried to convert the garbled text from a possible wrong encoding (like GB2312 or ANSI) to UTF-8 despite ruby's claim that this was already UTF-8 and of course I failed. This is the code I used:
require 'iconv'
Iconv.conv('UTF-8', 'GB2312', broken_hash['info'])
This fails because ruby doesn't know what to do with illegal sequences in the string.
I really just want to run a script to fix all the old, presumably broken serialized hash strings and be done with it. Is there a way to convert these broken strings to something resembling Chinese again?
I just played with the encoded UTF-8 string in the raw string (called "broken" in the above example). This is the Chinese string that is encoded in the serialized string:
chinese = "\\xEF\\xBC\\x88\\xE5\\x9B\\x9E\\xE5\\xBD\\xA2\\xE9\\x92\\x88\\xEF\\xBC\\x89\\r\\n"
I noticed that it is easy to convert this to a real UTF-8 encoded string by unescaping it (removing the escape backslashes).
chinese_ok = "\xEF\xBC\x88\xE5\x9B\x9E\xE5\xBD\xA2\xE9\x92\x88\xEF\xBC\x89\r\n"
This returns a proper UTF-8-encoded Chinese string: "(回形针)\r\n"
The thing falls apart only when I use YAML.load(...) to convert the string to a ruby hash. Maybe I should process the raw string before it is fed to YAML.load. Just makes me wonder why this is so...
Interesting! This is likely due to the YAML engine "psych" that's used by default now in 1.9.3. I switched to the "syck" engine with YAML::ENGINE.yamler = 'syck' and the broken strings are correctly parsed.
A: From the 1.9.3 NEWS file:
* yaml
* The default YAML engine is now Psych. You may downgrade to syck by setting
YAML::ENGINE.yamler = 'syck'.
Apparently the Syck and Psych YAML engines treat non-ASCII strings in different and incompatible ways.
Given a Hash like you have:
h = {
"checkbox" => "1",
"choice" => "Round Paper Clips (回形针)\r\n",
"info" => "10盒"
}
Using the old Syck engine:
>> YAML::ENGINE.yamler = 'syck'
>> h.to_yaml
=> "--- \ncheckbox: "1"\nchoice: "Round Paper Clips \\xEF\\xBC\\x88\\xE5\\x9B\\x9E\\xE5\\xBD\\xA2\\xE9\\x92\\x88\\xEF\\xBC\\x89\\r\\n"\ninfo: "10\\xE7\\x9B\\x92"\n"
we get the ugly double-backslash format the you currently have in your database. Switching to Psych:
>> YAML::ENGINE.yamler = 'psych'
=> "psych"
>> h.to_yaml
=> "---\ncheckbox: '1'\nchoice: ! "Round Paper Clips (回形针)\\r\\n"\ninfo: 10盒\n"
The strings stay in normal UTF-8 format. If we manually screw up the encoding to be Latin-1:
>> Iconv.conv('UTF-8', 'ISO-8859-1', "\xEF\xBC\x88\xE5\x9B\x9E\xE5\xBD\xA2\xE9\x92\x88\xEF\xBC\x89")
=> "ï¼\u0088å\u009B\u009Eå½¢é\u0092\u0088ï¼\u0089"
then we get the sort of nonsense that you're seeing.
The YAML documentation is rather thin so I don't know if you can force Psych to understand the old Syck format. I think you have three options:
*
*Use the old unsupported and deprecated Syck engine, you'd need to YAML::ENGINE.yamler = 'syck' before you YAML anything.
*Load and decode all your YAML using Syck and then re-encode and save it using Psych.
*Stop using serialize in favor of manually serializing/deserializing using JSON (or some other stable, predictable, and portable text format) or use an association table so that you're not storing serialized data at all.
A: This seems to have been caused by a difference in the behaviour of the two available YAML engines "syck" and "psych".
To set the YAML engine to syck:
YAML::ENGINE.yamler = 'syck'
To set the YAML engine back to psych:
YAML::ENGINE.yamler = 'psych'
The "syck" engine processes the strings as expected and converts them to hashes with proper Chinese strings. When the "psych" engine is used (default in ruby 1.9.3), the conversion results in garbled strings.
Adding the above line (the first of the two) to config/application.rb fixes this problem. The "syck" engine is no longer maintained, so I should probably only use this workaround to buy me some time to make the strings acceptable for "psych".
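With the Psych engine (the only engine in modern Ruby, where Syck has been removed entirely), UTF-8 strings round-trip through YAML without escaping; a quick sanity check:

```ruby
require 'yaml'

h = { "choice" => "Round Paper Clips (回形针)", "info" => "10盒" }
dumped = YAML.dump(h)          # keeps the UTF-8 characters readable in the YAML
loaded = YAML.load(dumped)

puts loaded["info"]            # prints 10盒
puts loaded["info"].encoding   # prints UTF-8
```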
| |
doc_23529648
|
This is my readCreditCard method that asks for the credit card number and determines whether it is valid:
public void readCreditCard(CreditCard creditCard)
{
System.out.print("\nPlease enter your Credit Card Number: ");
String creditCardNum = keyboard.next();
while (creditCard.isCardValid(creditCardNum) == false)
{
System.out.println("Invalid credit card number" + creditCardNum);
System.out.println("Please try again.");
System.out.print("\nPlease enter your Credit Card Number: ");
creditCardNum = keyboard.next();
}
}
I want a method called printCreditCard that allows me to use the variable creditCardNum from the readCreditCard method so that I can use my other two methods in the CreditCard class, but I'm not sure how to go about this.
Here is the contents of the CreditCard class:
public boolean isCardValid(String creditCard)
{
boolean valid;
String singleDigitPrefix = creditCard.substring(0, 1);
String doubleDigitPrefix = creditCard.substring(0, 2);
if ((creditCard.length() == 13 || creditCard.length() == 16) && singleDigitPrefix.equals("4"))
{
valid = true;
}
else if (creditCard.length() == 16 && (doubleDigitPrefix.equals("51") ||
doubleDigitPrefix.equals("52") || doubleDigitPrefix.equals("53") ||
doubleDigitPrefix.equals("54") || doubleDigitPrefix.equals("55")))
{
valid = true;
}
else if (creditCard.length() == 15 && doubleDigitPrefix.equals("37"))
{
valid = true;
}
else
{
valid = false;
}
return valid;
}
public String getCardType(String creditCard)
{
String cardType = null;
String singleDigitPrefix = creditCard.substring(0, 1);
String doubleDigitPrefix = creditCard.substring(0, 2);
if ((creditCard.length() == 13 || creditCard.length() == 16) && singleDigitPrefix.equals("4"))
{
cardType = "Visa";
}
else if (creditCard.length() == 16 && (doubleDigitPrefix.equals("51") ||
doubleDigitPrefix.equals("52") || doubleDigitPrefix.equals("53") ||
doubleDigitPrefix.equals("54") || doubleDigitPrefix.equals("55")))
{
cardType = "MasterCard";
}
else if (creditCard.length() == 15 && doubleDigitPrefix.equals("37"))
{
cardType = "American Express";
}
else
{
}
return cardType;
}
public String maskCardNumber(String creditCard)
{
String formattedNum = null;
if (getCardType(creditCard).equals("Visa"))
{
if (creditCard.length() == 13)
{
formattedNum = "XXXXXXXXX" + creditCard.substring(8, 13);
}
else
{
formattedNum = "XXXXXXXXXXXX" + creditCard.substring(11, 16);
}
}
else if (getCardType(creditCard).equals("MasterCard"))
{
formattedNum = "XXXXXXXXXXXX" + creditCard.substring(11, 16);
}
else if (getCardType(creditCard).equals("American Express"))
{
formattedNum = "XXXXXXXXXXX" + creditCard.substring(10, 15);
}
else;
return formattedNum;
}
}
A: Create a bean with member variables such as creditCardNum, and have the method return this bean to whichever method needs it.
A: If I have understood your question correctly, I see two options.
Option 1: Return the credit card number if a single value is all you need.
public String readCreditCard(CreditCard creditCard)
{
System.out.print("\nPlease enter your Credit Card Number: ");
String creditCardNum = keyboard.next();
while (creditCard.isCardValid(creditCardNum) == false)
{
System.out.println("Invalid credit card number" + creditCardNum);
System.out.println("Please try again.");
System.out.print("\nPlease enter your Credit Card Number: ");
creditCardNum = keyboard.next();
}
return creditCardNum;
}
Option 2: If you want to pass or return multiple values from the method, passing a credit card object (bean/model) will solve your problem.
Here is the example.
public class CreditCardBean{
private String creditCardNumber;
private Date expirationDate;
public String getCreditCardNumber() {
return creditCardNumber;
}
public void setCreditCardNumber(String creditCardNumber) {
this.creditCardNumber = creditCardNumber;
}
public Date getExpirationDate() {
return expirationDate;
}
public void setExpirationDate(Date expirationDate) {
this.expirationDate = expirationDate;
}
}
public CreditCardBean readCreditCard(CreditCard creditCard)
{
CreditCardBean bean = new CreditCardBean();
System.out.print("\nPlease enter your Credit Card Number: ");
String creditCardNum = keyboard.next();
while (creditCard.isCardValid(creditCardNum) == false)
{
System.out.println("Invalid credit card number" + creditCardNum);
System.out.println("Please try again.");
System.out.print("\nPlease enter your Credit Card Number: ");
creditCardNum = keyboard.next();
}
// Set the number after validation succeeds, not inside the retry loop.
bean.setCreditCardNumber(creditCardNum);
// Set other details which you want to pass.
// bean.setExpirationDate(<setDate here>);
return bean;
}
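As a side note, the prefix-and-length rules used by isCardValid can be exercised standalone; this is a hedged sketch that mirrors (not replaces) that logic, and CardCheck is a made-up helper, not part of the asker's class:

```java
public class CardCheck {
    // Mirrors the rules above: Visa (13 or 16 digits, prefix 4),
    // MasterCard (16 digits, prefix 51-55), American Express (15 digits, prefix 37).
    static boolean isValid(String n) {
        if ((n.length() == 13 || n.length() == 16) && n.startsWith("4")) return true;
        if (n.length() == 16 && n.matches("5[1-5]\\d{14}")) return true;
        if (n.length() == 15 && n.startsWith("37")) return true;
        return false;
    }

    public static void main(String[] args) {
        System.out.println(isValid("4111111111111111")); // Visa, prints true
        System.out.println(isValid("1234"));             // prints false
    }
}
```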
| |
doc_23529649
|
Scenario:
I have a form which is part of a responsive site that sends both Desktop and Mobile form fills as conversions into Adwords.
The form is the same for Desktop and Mobile so there is only one "submit" button for the form, however, I want to know whether it was a Mobile form submit or a Desktop form submit, hence Google Analytics can only track one or the other, right? I use "m_form_submit" for mobile and "d_form_submit" for desktop.
So I guess my question is:
Is there a way I can track 2(or more) GA events on a singular element without having the user trigger both events at once?
Ideally: tell GA I want to serve this event for Mobile and that event for Desktop?
A: You tagged this question with "jQuery" so I assume you're using jQuery to listen for the form submit and then send the events to GA.
All you need to do to send only the event associated with the mobile or desktop version is a simple conditional check before sending:
$('#myform').one('submit', function(e) {
// Prevent form from submitting, and then submit once the GA hit succeeds.
e.preventDefault();
// Change this to whatever you're using to determine what "mobile" is.
var mobile = '(max-width: 600px)';
if (window.matchMedia && window.matchMedia(mobile).matches) {
// Send mobile event.
ga('send', {
hitType: 'event',
eventCategory: 'Form Indentifier',
eventAction: 'submit',
eventLabel: 'mobile label...',
hitCallback: afterHitSucceeds
});
}
else {
// Send desktop event.
ga('send', {
hitType: 'event',
eventCategory: 'Form Indentifier',
eventAction: 'submit',
eventLabel: 'desktop label...',
hitCallback: afterHitSucceeds
});
}
// Once the GA hit succeeds, submit the form.
function afterHitSucceeds() {
$('#myform').submit();
}
});
This function uses window.matchMedia to determine if a particular media query matches (allowing you to match the @media rule in your CSS file).
If the passed media matches, a mobile event will be sent, if the passed media doesn't match (or the browser doesn't support it) a desktop event will be sent. Note that pretty much all mobile browsers support matchMedia so falling back to the desktop event is a safe way to go.
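The mobile/desktop decision can also be isolated into a small helper so the send call appears only once; a hedged sketch (the label strings are placeholders):

```javascript
// Returns the event label to send, falling back to the desktop label when
// matchMedia is unavailable (e.g. very old browsers or non-browser contexts).
function pickLabel(mediaQuery) {
  var hasMatchMedia = typeof window !== 'undefined' && !!window.matchMedia;
  if (hasMatchMedia && window.matchMedia(mediaQuery).matches) {
    return 'mobile label...';
  }
  return 'desktop label...';
}

// Usage inside the submit handler:
// ga('send', { hitType: 'event', eventCategory: 'Form Indentifier',
//              eventAction: 'submit',
//              eventLabel: pickLabel('(max-width: 600px)'),
//              hitCallback: afterHitSucceeds });
```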
| |
doc_23529650
|
I am trying to search for a particular pattern across the tables,
but I am getting this error:
"Subquery returned more than 1 value. This is not permitted when the subquery follows =, !=, <, <= , >, >= or when the subquery is used as an expression "
DECLARE @SearchStr nvarchar(100)
SET @SearchStr = ''
--drop table #Results
CREATE TABLE #Results (ColumnName nvarchar(370), ColumnValue nvarchar(3630))
SET NOCOUNT ON
DECLARE @TableName nvarchar(256), @ColumnName nvarchar(128), @SearchStr2 nvarchar(110)
SET @TableName = ''
SET @SearchStr2 = QUOTENAME('%' + @SearchStr + '%','''')
WHILE @TableName IS NOT NULL
BEGIN
SET @ColumnName = ''
SET @TableName = 'RAP1'
WHILE (@TableName IS NOT NULL) AND (@ColumnName IS NOT NULL)
BEGIN
SET @ColumnName =
(
SELECT COLUMN_NAME
FROM INFORMATION_SCHEMA.COLUMNS
WHERE
TABLE_NAME = PARSENAME(@TableName, 1)
AND DATA_TYPE IN ('char', 'varchar', 'nchar', 'nvarchar', 'int', 'decimal')
)
IF @ColumnName IS NOT NULL
BEGIN
INSERT INTO #Results
EXEC
(
'SELECT ''' + @TableName + '.' + @ColumnName + ''', LEFT(' + @ColumnName + ', 3630) FROM ' + @TableName + ' (NOLOCK) ' +
' WHERE ' + @ColumnName + ' LIKE ' + @SearchStr2
)
END
END
END
SELECT ColumnName, ColumnValue FROM #Results
DROP TABLE #Results
What is wrong, and how do I get the result? Please help.
A: It's most likely because table RAP1 has more than one column with a matching data type, and you're trying to save the names of all those columns in a single variable (in SET @ColumnName).
If the following query returns more than 1 row then that is your problem:
DECLARE @TableName nvarchar(256)
SET @TableName = 'RAP1'
SELECT COLUMN_NAME
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = PARSENAME(@TableName, 1)
AND DATA_TYPE IN ('char', 'varchar', 'nchar', 'nvarchar', 'int', 'decimal')
I would change your script to look something like this:
DECLARE @SearchStr nvarchar(100)
SET @SearchStr = ''
--drop table #Results
CREATE TABLE #Results (ColumnName nvarchar(370), ColumnValue nvarchar(3630))
SET NOCOUNT ON
DECLARE @TableName nvarchar(256), @ColumnName nvarchar(128), @SearchStr2 nvarchar(110), @PrevColumnName nvarchar(128)
SET @TableName = ''
SET @SearchStr2 = QUOTENAME('%' + @SearchStr + '%','''')
WHILE @TableName IS NOT NULL
BEGIN
SET @ColumnName = ''
SET @PrevColumnName = ''
SET @TableName = 'RAP1'
IF @TableName IS NOT NULL
BEGIN
WHILE (@ColumnName IS NOT NULL)
BEGIN
SELECT TOP 1 @ColumnName = COLUMN_NAME
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = PARSENAME(@TableName, 1)
AND DATA_TYPE IN ('char', 'varchar', 'nchar', 'nvarchar', 'int', 'decimal')
AND COLUMN_NAME > COALESCE(@PrevColumnName, @ColumnName)
IF @ColumnName = @PrevColumnName
begin
SET @ColumnName = NULL
SET @TableName = NULL
end
SET @PrevColumnName = @ColumnName
IF @ColumnName IS NOT NULL
BEGIN
INSERT INTO #Results
EXEC
(
'SELECT ''' + @TableName + '.' + @ColumnName + ''', LEFT(' + @ColumnName + ', 3630) FROM ' + @TableName + ' (NOLOCK) ' +
' WHERE ' + @ColumnName + ' LIKE ' + @SearchStr2
)
END
END
END
END
SELECT ColumnName, ColumnValue FROM #Results
| |
doc_23529651
|
I have bundled all the JS and CSS in separate files using Webpack.
Simplified folder structure:
admin
-index.php
scheduler (react's part)
-bundled.js
-bundled.css
What I need to achieve is to include those bundled files so they display in index.php, which is outside React's directory (obviously). I was thinking to include them as follows:
index.php:
<!DOCTYPE html>
<html lang="en">
<head>
<title>React Big Scheduler Examples</title>
</head>
<body>
<div>
<div id="root"></div>
</div>
<link rel="stylesheet" type="text/css" href="../scheduler/bundled.css">
<script type="text/babel" src="../scheduler/bundled.js"></script>
</body>
</html>
bundled.js:
import React, {Component} from 'react'
import {render} from 'react-dom'
import {HashRouter as Router, Route} from 'react-router-dom'
import Basic from './Basic'
import Locale from './Locale'
etc imports....
render((
<Router>
<Route exact path="/" component={Basic}/>
<Route path="/readonly" component={Readonly}/>
<Route path="/locale" component={Locale}/>
<Route path="/views" component={Views}/>
etc routes....
</Router>
), document.getElementById('root'))
bundled.css
body {
background-color: grey;
}
etc styles....
As I have tested in Developer Tools, the paths are correct and no errors are shown in the console.
However, nothing is rendered into <div id="root"></div> in index.php.
Any help would be very appreciated!
| |
doc_23529652
|
from pycorenlp import StanfordCoreNLP
nlp = StanfordCoreNLP('http://localhost:9000')
def depparse(text):
parsed=""
output = nlp.annotate(text, properties={
'annotators': 'depparse',
'outputFormat': 'json'
})
for i in output["sentences"]:
for j in i["basicDependencies"]:
parsed=parsed+str(j["dep"]+'('+ j["governorGloss"]+' ')+str(j["dependentGloss"]+')'+' ')
return parsed
text='I shot an elephant in my sleep'
depparse(text)
This gives me output as:
'ROOT(ROOT shot) nsubj(shot I) det(elephant an) dobj(shot elephant) case(sleep in) nmod:poss(sleep my) nmod(shot sleep) '
To convert the relationships into a tree, I came across the Stack Overflow post Stanford NLP parse tree format. However, that deals with the parser's output in the "bracketed parse (tree)" format, so I am not sure how to achieve it here. I tried changing the output format as well, but it gives an error.
I also found Python - Generate a dictionary(tree) from a list of tuples and implemented
list_of_tuples = [('ROOT','ROOT', 'shot'),('nsubj','shot', 'I'),('det','elephant', 'an'),('dobj','shot', 'elephant'),('case','sleep', 'in'),('nmod:poss','sleep', 'my'),('nmod','shot', 'sleep')]
nodes={}
for i in list_of_tuples:
rel,parent,child=i
nodes[child]={'Name':child,'Relationship':rel}
forest=[]
for i in list_of_tuples:
rel,parent,child=i
node=nodes[child]
if parent=='ROOT':# this should be the Root Node
forest.append(node)
else:
parent=nodes[parent]
if not 'children' in parent:
parent['children']=[]
children=parent['children']
children.append(node)
print(forest)
I got the following output [{'Name': 'shot', 'Relationship': 'ROOT', 'children': [{'Name': 'I', 'Relationship': 'nsubj'}, {'Name': 'elephant', 'Relationship': 'dobj', 'children': [{'Name': 'an', 'Relationship': 'det'}]}, {'Name': 'sleep', 'Relationship': 'nmod', 'children': [{'Name': 'in', 'Relationship': 'case'}, {'Name': 'my', 'Relationship': 'nmod:poss'}]}]}]
A: A bit off-topic indeed (this is not really an answer to your original question, but to your last comment). Posting it as an answer because the code wouldn't really fit nicely into a comment. But by just changing your depparse function slightly, you can get it in the desired format:
def depparse(text):
parsed=""
output = nlp.annotate(text, properties={
'annotators': 'depparse',
'outputFormat': 'json'
})
for i in output['sentences']: # not sure if there can be multiple items here. If so, it just returns the first one currently.
return [tuple((dep['dep'], dep['governorGloss'], dep['dependentGloss'])) for dep in i['basicDependencies']]
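Putting the two pieces together: the (relation, governor, dependent) triples that the revised depparse returns can be fed straight into the question's forest-building loop. A self-contained sketch of that, using a hard-coded triple list in place of a live CoreNLP server:

```python
def build_forest(triples):
    """Turn (relation, governor, dependent) triples into a nested dict forest."""
    nodes = {child: {"Name": child, "Relationship": rel}
             for rel, parent, child in triples}
    forest = []
    for rel, parent, child in triples:
        if parent == "ROOT":
            forest.append(nodes[child])
        else:
            nodes[parent].setdefault("children", []).append(nodes[child])
    return forest

# Stand-in for depparse('I shot an elephant in my sleep')
triples = [("ROOT", "ROOT", "shot"), ("nsubj", "shot", "I"),
           ("det", "elephant", "an"), ("dobj", "shot", "elephant"),
           ("case", "sleep", "in"), ("nmod:poss", "sleep", "my"),
           ("nmod", "shot", "sleep")]
forest = build_forest(triples)
print(forest[0]["Name"])  # shot
```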
| |
doc_23529653
|
On my client side I'm using angularjs.
Here's the http request when I initiate the signalr connection:
> GET
> http://example.com/signalr/negotiate?clientProtocol=1.4&connectionData=%5B%7B%22name%22%3A%22main%22%7D%5D&_=1416702959615 HTTP/1.1 Host: mysite.net Connection: keep-alive Pragma: no-cache
> Cache-Control: no-cache Accept: text/plain, */*; q=0.01 Origin:
> http://lh:51408 User-Agent: Mozilla/5.0 (Windows NT 6.3; WOW64)
> AppleWebKit/537.36 (KHTML, like Gecko) Chrome/39.0.2171.65
> Safari/537.36 Content-Type: application/x-www-form-urlencoded;
> charset=UTF-8 Referer: http://localhost:51408/ Accept-Encoding: gzip,
> deflate, sdch Accept-Language: en-US,en;q=0.8 Cookie:
> ARRAffinity=9def17406de898acdc2839d0ec294473084bbc94a8f600c867975ede6f136080
And the response:
> HTTP/1.1 200 OK Cache-Control: no-cache Pragma: no-cache
> Transfer-Encoding: chunked Content-Type: application/json;
> charset=UTF-8 Expires: -1 Server: Microsoft-IIS/8.0
> X-Content-Type-Options: nosniff X-AspNet-Version: 4.0.30319
> X-Powered-By: ASP.NET Date: Sun, 23 Nov 2014 00:36:13 GMT
>
> 187
> {"Url":"/signalr","ConnectionToken":"6BKcLqjNPyOw4ptdPKg8jRi7xVlPMEgFUdzeJZso2bnXliwfY4WReQWHRpmB5YEZsbg14Au7AS5k5xS5/4qVheDxYoUkOjfFW0W8eAQsasjBaSQOifIilniU/L7XQ1+Y","ConnectionId":"f2fc7c47-c84f-49b8-a080-f91346dfbda7","KeepAliveTimeout":20.0,"DisconnectTimeout":30.0,"ConnectionTimeout":110.0,"TryWebSockets":true,"ProtocolVersion":"1.4","TransportConnectTimeout":5.0,"LongPollDelay":0.0}
> 0
However, in my javascript I'm getting the following error response when connecting:
No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin 'http://localhos:51408' is therefore not allowed access.
On my server side my startup method looks the following:
public void Configuration(IAppBuilder app)
{
System.Web.Mvc.AreaRegistration.RegisterAllAreas();
ConfigureOAuth(app);
GlobalConfiguration.Configure(WebApiConfig.Register);
app.UseCors(Microsoft.Owin.Cors.CorsOptions.AllowAll);
}
Shouldn't this make sure that cors is used in signalr too or am I missing something?
A: For those who had the same issue with Angular, SignalR 2.0 and Web API 2.2:
I was able to solve this problem by adding the CORS configuration to web.config and removing it from WebApiConfig.cs and Startup.cs.
The changes made to web.config under <system.webServer> are as below:
<httpProtocol>
<customHeaders>
<add name="Access-Control-Allow-Origin" value="http://localhost" />
<add name="Access-Control-Allow-Methods" value="*" />
<add name="Access-Control-Allow-Credentials" value="true" />
</customHeaders>
</httpProtocol>
A: You can look at this snippet from here https://github.com/louislewis2/AngularJSAuthentication/blob/master/AngularJSAuthentication.API/Startup.cs
and see if it helps you out.
public void Configuration(IAppBuilder app)
{
HttpConfiguration config = new HttpConfiguration();
ConfigureOAuth(app);
app.Map("/signalr", map =>
{
// Setup the CORS middleware to run before SignalR.
// By default this will allow all origins. You can
// configure the set of origins and/or http verbs by
// providing a cors options with a different policy.
map.UseCors(CorsOptions.AllowAll);
var hubConfiguration = new HubConfiguration
{
// You can enable JSONP by uncommenting line below.
// JSONP requests are insecure but some older browsers (and some
// versions of IE) require JSONP to work cross domain
//EnableJSONP = true
EnableDetailedErrors = true
};
// Run the SignalR pipeline. We're not using MapSignalR
// since this branch already runs under the "/signalr"
// path.
map.RunSignalR(hubConfiguration);
});
WebApiConfig.Register(config);
app.UseCors(Microsoft.Owin.Cors.CorsOptions.AllowAll);
app.UseWebApi(config);
Database.SetInitializer(new MigrateDatabaseToLatestVersion<AuthContext, AngularJSAuthentication.API.Migrations.Configuration>());
}
A: I have faced the CORS issue in ASP.NET MVC.
The issue is that the SignalR negotiate XHR request is issued with credentials = include, so the request always sends the user's cookies.
The server has to reply with the response header Access-Control-Allow-Credentials set to true.
The first approach (thanks to the help of other people) is to configure it in web.config:
<configuration>
<system.webServer>
<httpProtocol>
<customHeaders>
<add name= "Access-Control-Allow-Origin" value="http://yourdomain:port"/>
<add name="Access-Control-Allow-Credentials" value = "true"/>
</customHeaders>
</httpProtocol>
</system.webServer>
</configuration>
The second approach, which I fought to make work, is via code using OWIN CORS:
[assembly: OwinStartup(typeof(WebHub.HubStartUp), "Configuration")]
namespace WebHub
{
public class HubStartUp
{
public void Configuration(IAppBuilder app)
{
var corsPolicy = new CorsPolicy
{
AllowAnyMethod = true,
AllowAnyHeader = true,
SupportsCredentials = true,
};
corsPolicy.Origins.Add("http://yourdomain:port");
var corsOptions = new CorsOptions
{
PolicyProvider = new CorsPolicyProvider
{
PolicyResolver = context =>
{
context.Headers.Add("Access-Control-Allow-Credentials", new[] { "true" });
return Task.FromResult(corsPolicy);
}
}
};
app.Map("/signalR", map =>
{
map.UseCors(corsOptions);
var config = new HubConfiguration();
map.RunSignalR(config);
});
}
}
}
A: In ConfigureServices-method:
services.AddCors(options => options.AddPolicy("CorsPolicy", builder =>
{
builder.AllowAnyHeader()
.AllowAnyMethod()
.SetIsOriginAllowed((host) => true)
.AllowCredentials();
}));
In Configure-method:
app.UseCors("CorsPolicy");
app.UseSignalR(routes =>
{
routes.MapHub<General>("/hubs/general");
});
| |
doc_23529654
|
What I am trying to do is have a number of icons next to the segment, based on some info included in the segment data. My case involves having 3 different binary variables and including different icons depending on their values.
var chartData = [{
category: task.name,
segments: [
{
start: task.parallel ? lastStart : moment(latestEnd).format(string),
end: task.parallel ? moment(lastStart).add(time,'m').format(string) : moment(latestEnd).add(time,'m').format(string),
color: '#1C7DDB',
time: task.time,
indicator1: task.checkOne== 1 ? '../img/path_to_icon.svg' : '',
indicator2: task.checkTwo== 1 ? '../img/path_to_icon2.svg' : '',
indicator3: task.checkThree== 1 ? '../img/path_to_icon3.svg': ''
}
]
},
...
}]
So far this works OK when I set the customBullet to one of the variables.
However, I want to be able to have all 3 (or none) of the icons shown.
I think what I need to do is add the segment data first, then add the icons as three graphs to the Gantt chart with no visible line.
My current chart init code is below; I tried changing graph: {} to graphs: [] but that causes an error.
var chart = AmCharts.makeChart( "plannerChart", {
"type": "gantt",
"marginRight": 70,
"period": "DD",
"dataDateFormat": "YYYY-MM-DD HH:mm",
"columnWidth": 0.75,
"addClassNames": true,
"valueAxis": {
"type": "date",
"guides": [
{
"value": AmCharts.stringToDate( start, "YYYY-MM-DD HH:NN"),
"toValue": AmCharts.stringToDate( moment(start).add(timeWindow,'h').format('YYYY-MM-DD HH:mm'), "YYYY-MM-DD HH:NN"),
"lineAlpha": 0.2,
"lineColor": guideColor,
"lineThickness": 3,
"fillAlpha": 0.1,
"fillColor": guideColor,
"label": "Available time",
"inside": false
}
]
},
"brightnessStep": 7,
"graph": {
"fillAlphas": 1,
"lineAlpha": 1,
"bulletOffset": 25,
"bulletSize": 20,
"customBulletField": "indicator1",
"lineColor": "#0F238C",
"fillAlphas": 0.85,
"balloonText": "<b>Start</b>: [[start]]<br /><b>Finish</b>: [[end]]"
},
"rotate": true,
"categoryField": "category",
"segmentsField": "segments",
"colorField": "color",
"startDateField": "start",
"endDateField": "end",
"dataProvider": chartData,
"chartCursor": {
"cursorColor": "#0F238C",
"valueBalloonsEnabled": false,
"cursorAlpha": 0,
"valueLineAlpha": 0.5,
"valueLineBalloonEnabled": true,
"valueLineEnabled": true,
"zoomable": false,
"valueZoomable": false
},
} );
}
Any help appreciated!
M
A: This isn't possible in a Gantt chart as it only takes a single graph object, as you noticed when trying to use an array.
You can emulate a Gantt chart by making a multi-segmented floating bar chart, using the graph's openField property to simulate where the column starts. You can extend this to add your additional graph objects for your indicators and use a date-based valueAxis for your values (note that valueAxes require date objects or millisecond timestamps, so strings won't work) or use relative values like in the demo. Note that this is a little clunkier than the Gantt chart in that you have to define multiple graph objects to simulate the segments on the same category.
| |
doc_23529655
|
Now I wanted to actually "see" this so I constructed a small C# program that created three System.DateTime objects as so:
DateTime dtUtc = new System.DateTime(2014, 4, 29, 9, 10, 30, System.DateTimeKind.Utc);
DateTime dtLocal = new System.DateTime(2014, 4, 29, 9, 10, 30, System.DateTimeKind.Local);
DateTime dtU = new System.DateTime(2014, 4, 29, 9, 10, 30, System.DateTimeKind.Unspecified);
I then dumped the ticks property for each and, as expected, they were all equal. Finally, I applied .ToBinary()
long bitUtc = dtUtc.ToBinary();
long bitLocal = dtLocal.ToBinary();
long bitU = dtU.ToBinary();
These longs were all different, again as expected. HOWEVER, I then tried to "inspect" the upper two bits to see which state corresponded to what settings, and found that the upper two bits were set the same in all three. I used the following routine to return the bit status:
public static bool IsBitSet<T>(this T t, int pos) where T : struct, IConvertible
{
var value = t.ToInt64(CultureInfo.CurrentCulture);
return (value & (1 << pos)) != 0;
}
(I got this from another post on SO), and called it like this:
Boolean firstUtc = Class1.IsBitSet<long>(bitUtc, 63);
Boolean secondUtc = Class1.IsBitSet<long>(bitUtc, 62);
Boolean firstLocal = Class1.IsBitSet<long>(bitLocal, 63);
Boolean secondLocal = Class1.IsBitSet<long>(bitLocal, 62);
Boolean firstU = Class1.IsBitSet<long>(bitU, 63);
Boolean secondU = Class1.IsBitSet<long>(bitU, 62);
Again, the first and second bits were set the same in all three (first was true, second false). I don't understand this, as I THOUGHT these would all be different, corresponding to the different DateTimeKind values.
Finally, I did some more reading and found (or at least it was said in one source) that MS doesn't serialize the Kind information in .ToBinary(). OK, but then why are the outputs of the .ToBinary() method all different?
I would appreciate info from anyone who could point me in the direction of a resource that would help me understand where I've gone wrong.
A:
These longs were all different, again as expected. HOWEVER, I then tried to "inspect" the upper two bits to see which state corresponded to what settings, and found that the upper two bits were set the same in all three.
I really don't think that's the case - not with the results of ToBinary. Here's a short but complete program demonstrating the difference, using your source data, showing the results as hex (as if unsigned):
using System;
class Test
{
static void Main()
{
DateTime dtUtc = new System.DateTime(2014, 4, 29, 9, 10, 30, System.DateTimeKind.Utc);
DateTime dtLocal = new System.DateTime(2014, 4, 29, 9, 10, 30, System.DateTimeKind.Local);
DateTime dtU = new System.DateTime(2014, 4, 29, 9, 10, 30, System.DateTimeKind.Unspecified);
Console.WriteLine(dtUtc.ToBinary().ToString("X16"));
Console.WriteLine(dtLocal.ToBinary().ToString("X16"));
Console.WriteLine(dtU.ToBinary().ToString("X16"));
}
}
Output:
48D131A200924700
88D131999ECDDF00
08D131A200924700
The top two bits are respectively 01, 10 and 00. The other bits change for the local case too, as per Marcin's post - but the top two bits really do indicate the kind.
The IsBitSet method is broken because it's left-shifting an int literal rather than a long literal. That means the shift will be mod 32, rather than mod 64 as intended. Try this instead:
public static bool IsBitSet<T>(this T t, int pos) where T : struct, IConvertible
{
var value = t.ToInt64(CultureInfo.CurrentCulture);
return (value & (1L << pos)) != 0;
}
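The shift-count masking that causes this (a shift count applied to a 32-bit operand is taken mod 32) also exists in Java, so the pitfall can be shown in a self-contained form there. This Java demo is illustrative only and not part of the original C# code:

```java
public class ShiftDemo {
    public static void main(String[] args) {
        long value = 0x4000000000000000L; // only bit 62 is set

        // Wrong: 1 is an int, so the shift count 62 is taken mod 32, i.e. 1 << 30
        boolean wrong = (value & (1 << 62)) != 0;

        // Right: 1L is a long, so bit 62 is genuinely tested
        boolean right = (value & (1L << 62)) != 0;

        System.out.println(wrong + " " + right); // prints "false true"
    }
}
```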
Finally, I did some more reading and found (or at least it was said in one source) that MS doesn't serialize the Kind information in .ToBinary().
It's easy to demonstrate that's not true:
using System;
class Test
{
static void Main()
{
DateTime start = DateTime.UtcNow;
Show(DateTime.SpecifyKind(start, DateTimeKind.Utc));
Show(DateTime.SpecifyKind(start, DateTimeKind.Local));
Show(DateTime.SpecifyKind(start, DateTimeKind.Unspecified));
}
static void Show(DateTime dt)
{
Console.WriteLine(dt.Kind);
DateTime dt2 = DateTime.FromBinary(dt.ToBinary());
Console.WriteLine(dt2.Kind);
Console.WriteLine("===");
}
}
A: ToBinary() works differently for different DateTimeKind values. You can see it in the .NET source code:
public Int64 ToBinary() {
if (Kind == DateTimeKind.Local) {
// Local times need to be adjusted as you move from one time zone to another,
// just as they are when serializing in text. As such the format for local times
// changes to store the ticks of the UTC time, but with flags that look like a
// local date.
// To match serialization in text we need to be able to handle cases where
// the UTC value would be out of range. Unused parts of the ticks range are
// used for this, so that values just past max value are stored just past the
// end of the maximum range, and values just below minimum value are stored
// at the end of the ticks area, just below 2^62.
TimeSpan offset = TimeZoneInfo.GetLocalUtcOffset(this, TimeZoneInfoOptions.NoThrowOnInvalidTime);
Int64 ticks = Ticks;
Int64 storedTicks = ticks - offset.Ticks;
if (storedTicks < 0) {
storedTicks = TicksCeiling + storedTicks;
}
return storedTicks | (unchecked((Int64) LocalMask));
}
else {
return (Int64)dateData;
}
}
That's why you get different bits: local time is adjusted before being transformed into bits, so it no longer matches the UTC time.
| |
doc_23529656
|
I tried the approach from the Stack Overflow question "Find out what application (window) is in focus in Java",
but I got the following error:
Exception in thread "main" java.lang.UnsatisfiedLinkError: Unable to load library 'XLib': Native library (linux-x86-64/libXLib.so) not found in resource path ([file:/home/zzhou/workspace/home_prioritization_plus/bin/, file:/home/zzhou/Downloads/jna-4.1.0.jar, file:/home/zzhou/Downloads/jna-platform-4.1.0.jar])
at com.sun.jna.NativeLibrary.loadLibrary(NativeLibrary.java:271)
at com.sun.jna.NativeLibrary.getInstance(NativeLibrary.java:398)
at com.sun.jna.Library$Handler.<init>(Library.java:147)
at com.sun.jna.Native.loadLibrary(Native.java:412)
at com.sun.jna.Native.loadLibrary(Native.java:391)
at FunctionalityTest$XLib.<clinit>(FunctionalityTest.java:15)
at FunctionalityTest.main(FunctionalityTest.java:23)
The code is:
import com.sun.jna.Native;
import com.sun.jna.Platform;
import com.sun.jna.Pointer;
import com.sun.jna.platform.unix.X11;
import com.sun.jna.win32.StdCallLibrary;
public class FunctionalityTest {
static class Psapi {
static { Native.register("psapi"); }
public static native int GetModuleBaseNameW(Pointer hProcess, Pointer hmodule, char[] lpBaseName, int size);
}
public interface XLib extends StdCallLibrary {
XLib INSTANCE = (XLib) Native.loadLibrary("XLib", Psapi.class); // <-- PROBLEM
int XGetInputFocus(X11.Display display, X11.Window focus_return, Pointer revert_to_return);
}
public static void main(String args[]) {
if(Platform.isLinux()) { // Possibly most of the Unix systems will work here too, e.g. FreeBSD
final X11 x11 = X11.INSTANCE;
final XLib xlib= XLib.INSTANCE;
X11.Display display = x11.XOpenDisplay(null);
X11.Window window=new X11.Window();
xlib.XGetInputFocus(display, window,Pointer.NULL);
X11.XTextProperty name=new X11.XTextProperty();
x11.XGetWMName(display, window, name);
System.out.println(name.toString());
}
}
}
To import the JNA library, I downloaded jna and jna-platform from https://github.com/twall/jna and used Configure Build Path... in Eclipse to add the library. I did not install anything. Could that be the source of the problem?
Thanks for your help.
A: Afaik, even for JNA, you have to load the library in Java in order for JNA to find it. (tested for win32, not linux)
Try this just above Native.loadLibrary:
System.loadLibrary("XLib");
| |
doc_23529657
|
var do_this = function(params){console.log("executes automatically, but function is only declared")};
var delay = 50;
var timeoutID = setTimeout(do_this(), delay) //executes automatically
A: Functions such as setTimeout expect a callback: a function value that can be invoked the moment the timer expires. Writing do_this() invokes the function immediately, and only its return value (here undefined) is handed to setTimeout, so nothing useful is scheduled.
Thus, you should pass the function itself, or wrap the call in another function:
setTimeout(do_this, delay)
setTimeout(() => do_this(params), delay)
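A synchronous sketch of the difference between calling the function and passing it (the names here are illustrative):

```javascript
const calls = [];
const doThis = () => { calls.push("ran"); };

// Wrong: doThis() runs right now; only its return value (undefined) could be scheduled.
doThis();                               // calls is already ["ran"] here

// Right: hand setTimeout the function itself; nothing runs until the timer fires.
const timer = setTimeout(doThis, 50);
clearTimeout(timer);                    // cancel so the demo stays synchronous

console.log(calls.length); // 1 (only the immediate call ran)
```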
| |
doc_23529658
|
if (!in_array(trim($data[2]),$myArray))
A: You should use array_key_exists to check for a key in an associative array:
if (array_key_exists(trim($data[2]), $myArray))
You can also use array_diff (note that both arguments must be arrays):
if (array_diff(array(trim($data[2])), $myArray))
| |
doc_23529659
|
Example:
filename1.txt, filename2.txt, filename3.txt
filename1a.jpg,filename2a.jpg,filename3a.jpg
filename1b.jpg,filename2b.jpg,filename3b.jpg
filename1c.jpg,filename2c.jpg,filename3c.jpg
,filename2d.jpg,
The problem I am having is how to append to the file in the right format, as above.
for f in $(ls *.txt); do
csv_header="$f"
#get all jpgs in current txt file
array_jpgs=( $(get_jpgs "$f") )
for jpg in "${array_jpgs[@]}"; do
#printf "%s," "$csv_header" >> "$CSV_FILE"
done
done
I am using GNU bash, version 4.3.33
A: Just use paste to combine a bunch of temporary files. (Assuming the output of get_jpgs is one file name per line.)
for f in *.txt; do # Do not parse ls
{ printf '%s\n' "$f"
get_jpgs "$f"
} > "$f.tmp"
done
paste -d, *.tmp > "$CSV_FILE"
rm *.tmp
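A self-contained demonstration of how paste -d, lines the columns up; the file lists below are stand-ins invented for illustration, not the real get_jpgs output:

```shell
# Work in a scratch directory so no real files are touched
tmpdir=$(mktemp -d)
cd "$tmpdir"

# One column per .txt file: header first, then its jpgs (stand-in names)
printf '%s\n' filename1.txt filename1a.jpg filename1b.jpg > col1.tmp
printf '%s\n' filename2.txt filename2a.jpg filename2b.jpg filename2d.jpg > col2.tmp

# paste pads the shorter column with empty fields, giving the ragged CSV shape:
#   filename1.txt,filename2.txt
#   filename1a.jpg,filename2a.jpg
#   filename1b.jpg,filename2b.jpg
#   ,filename2d.jpg
paste -d, col1.tmp col2.tmp
```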
| |
doc_23529660
|
namespace SimpleLsp
{
class Program
{
static async Task Main(string[] args)
{
var server = await OmniSharp.Extensions.LanguageServer.Server.LanguageServer
.From(
options => options
.WithInput(Console.OpenStandardInput())
.WithOutput(Console.OpenStandardOutput())
).ConfigureAwait(false);
await server.WaitForExit;
}
}
}
and the TypeScript extension
import * as vscode from 'vscode';
import path = require('path');
import { workspace } from 'vscode';
import {
LanguageClient,
LanguageClientOptions,
ServerOptions,
TransportKind,
} from "vscode-languageclient/node";
let client: LanguageClient;
export function activate(context: vscode.ExtensionContext) {
console.log('Congratulations, your extension "simplelspextension" is now active!');
const extensionPath = vscode.extensions.getExtension("PauloAboimPinto.simplelspextension")?.extensionPath as string;
const libsFolder = path.join(extensionPath, "libs");
const dllPath = path.join(libsFolder, "simpleLsp.dll");
let serverOptions: ServerOptions = {
run: {
command: "dotnet",
args: [dllPath],
transport: TransportKind.pipe
},
debug: {
command: "dotnet",
args: [dllPath],
transport: TransportKind.pipe,
runtime: ""
}
};
let clientOptions: LanguageClientOptions = {
documentSelector: [
{
pattern: "**/*.xaml",
},
{
pattern: "**/*.axaml",
},
{
pattern: "**/*.csproj",
},
],
synchronize: {
// Notify the server about file changes to '.clientrc files contained in the workspace
fileEvents: workspace.createFileSystemWatcher('**/.axaml')
}
};
client = new LanguageClient(
"simpleLsp",
"Simple Language Server Protocol",
serverOptions,
clientOptions
);
let disposableLsp = client.start();
context.subscriptions.push(disposableLsp);
}
// this method is called when your extension is deactivated
export function deactivate() {
if (!client) {
return undefined;
}
return client.stop();
}
During deactivation I'm sending the stop command to the client, expecting the execution DLL to be terminated, but it's not.
When I close VS Code, I would expect simpleLsp.dll to be terminated, but it's not, and I have to terminate it manually.
What I'm missing or doing wrong here?
How I can terminate the execution of the DLL that contains the LSP?
thanks in advance
| |
doc_23529661
|
@font-face {
font-family: Antipasto;
src: url(Antipasto-ExtraBoldTrial.ttf);
}
h2 {
font-family: Antipasto;
}
p {
font-family: Antipasto;
}
div {
display: inline;
}
<div width="30%" style="float:left">
<video width="30%" height="30%" controls>
<source src="video1.mp4" type="video/mp4">
No video.
</video>
</div>
<div width="50%" style="float:left">
<h2>Title</h2>
<p>Sentence 1</p>
<p>Sentence 2.</p>
<p>Sentence 3</p>
</div>
It works in the preview there, but not on my local computer.
Many of these answers are not working on my laptop either; I guess it's a graphics issue with Chrome. Thanks to everybody for helping anyway.
A: To style more than one div, you have to give each div its own name.
For example, your first div determines the margin, so you would just call it div.
The second div tells that you want specific text to be centered, so you would call this one div2.
| |
doc_23529662
|
Dim strClpRtf As String = txtResultsAll.SelectedRtf
Clipboard.SetText(strClpRtf, TextDataFormat.Rtf)
--> Paste possible in WordPad, but not in Notepad.
Dim strClpTxt As String = txtResultsAll.SelectedText
Clipboard.SetText(strClpTxt)
--> Paste possible in WordPad AND Notepad, but without formatting.
However, if I copy the Content from WordPad by Ctrl+C it's pasteable in Notepad (of course, without formatting).
Is there a way to copy Rtf and plain text in VB.net?
A: Use txtResultsAll.Copy(); it copies the selected text and works in both cases.
| |
doc_23529663
|
In the MongoDB database, documents are saved according to the following structure:
{
"_id" : ObjectId("595a9fc4fe3f36402b7edf0e"),
"id" : "123",
"priceInfo" : [
{object1: value1}, {object2: value2}, {object3: value3}
]
}
In order to retrieve the "priceInfo"-Array of a Document with a specific id, I wrote the following code:
collection.find(eq("id", id)).first().projection(fields(include("priceInfo"), excludeId()));
I wrote this code according to the documentation, which you can find here:
http://mongodb.github.io/mongo-java-driver/3.4/javadoc/?com/mongodb/client/model/Projections.html
The problem is that my IDE won't accept this code.
It's giving me the following error indication:
I have no clue why this code doesn't work. At first the IDE suggested including several classes - which I did. But after that I still got an error indication, namely the one you see above.
What's wrong with the code? How can I retrieve the priceInfo array of a Document with ID id?
********************************UPDATE**********************************
As per request, here's the whole class:
package DatabaseAccess;
import Models.GasStation;
import com.mongodb.MongoClient;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.MongoDatabase;
import static com.mongodb.client.model.Filters.eq;
import static com.mongodb.client.model.Projections.excludeId;
import static com.mongodb.client.model.Projections.fields;
import static com.mongodb.client.model.Projections.include;
import com.mongodb.client.model.Updates;
import java.text.DateFormat;
import java.text.SimpleDateFormat;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Date;
import java.util.logging.Level;
import org.bson.Document;
public class databaseAccess {
private final String DB_HOST = "localhost";
private final int DB_PORT = 27017;
private final String DB_NAME = "db1";
private final String DB_COLLECTION = "prices";
private final MongoClient mongoClient;
private final MongoDatabase database;
private final MongoCollection<Document> collection;
public databaseAccess(){
mongoClient = new MongoClient(DB_HOST, DB_PORT);
database = mongoClient.getDatabase(DB_NAME);
collection = database.getCollection(DB_COLLECTION);
}
public String readFromDB(String id){
collection.find(eq("id", id)).first().projection(fields(include("priceInfo"), excludeId()));
return null;
}
}
A: You're operating on a chain of calls in your method.
Let's analyze each element in the chain:
MongoCollection:
FindIterable<TDocument> find() - Finds all documents in the collection.
The return type is FindIterable<TDocument>, and you're calling the next method in the chain on it:
FindIterable<TDocument>
Methods inherited from interface com.mongodb.async.client.MongoIterable:
batchCursor, first, forEach, into, map
Okay, on to MongoIterable:
MongoIterable<TResult>:
void first(SingleResultCallback callback) - Helper to return the first item in the iterator or null.
That means first(...) returns nothing. You're calling projection(...) on nothing; of course that is not applicable, so the compiler marks it as an error.
To call projection(Bson projection) you should have a FindIterable<T> instance, and MongoCollection.find() can provide you with one:
collection.find(eq("id", id)).projection(fields(include("priceInfo"), excludeId()));
| |
doc_23529664
|
import win32com.client
word = win32com.client.Dispatch("Word.Application")
word.Visible = True
document = word.Documents.Add()
document.VBProject.Name = "TEST"
wordModule = document.VBProject.VBComponents("ThisDocument") # WORKS
input()
You can then add VB code to wordModule.
I wanted to do the same using Go. There is an OLE binding for Go; the code is on GitHub -> https://github.com/go-ole/go-ole
It's a bit less user friendly but I managed to make it work, except that I'm not able to retrieve the default VBComponents.
The default code resides in "ThisDocument" and can be retrieved with the simple Python code document.VBProject.VBComponents("ThisDocument"), except that it doesn't work in Go...
You can see in the code below that I tried to get "ThisDocument" using multiple ways, without success. Each time, the error message is panic: Unknown name.
// +build windows
package main
import (
"fmt"
ole "github.com/go-ole/go-ole"
"github.com/go-ole/go-ole/oleutil"
)
func main() {
defer ole.CoUninitialize()
ole.CoInitialize(0)
unknown, _ := oleutil.CreateObject("Word.Application")
word, _ := unknown.QueryInterface(ole.IID_IDispatch)
oleutil.PutProperty(word, "Visible", true)
documents := oleutil.MustGetProperty(word, "Documents").ToIDispatch()
document := oleutil.MustCallMethod(documents, "Add").ToIDispatch()
vbproject := oleutil.MustGetProperty(document, "VBProject").ToIDispatch()
oleutil.PutProperty(vbproject, "Name", "TEST")
// oleutil.MustCallMethod(vbproject, "VBComponents", "ThisDocument").ToIDispatch() --> panic: Unknown name.
// oleutil.MustGetProperty(vbproject, "VBComponents", "ThisDocument").ToIDispatch() --> panic: Unknown name.
// vbcomponents := oleutil.MustGetProperty(vbproject, "VBComponents").ToIDispatch()
// oleutil.MustGetProperty(vbcomponents, "ThisDocument").ToIDispatch() --> panic: Unknown name.
var input string
fmt.Scanln(&input)
oleutil.PutProperty(document, "Saved", true)
oleutil.CallMethod(documents, "Close", false)
oleutil.CallMethod(word, "Quit")
word.Release()
}
Any ideas on why it doesn't work?
Thanks a lot.
A: Turns out "github.com/go-ole/go-ole" has a bug when using ForEach. VBComponets is a Collection, so you have to iterate as stated by Microsoft doc
Use the VBComponents collection to access, add, or remove components in a project. A component can be a form, module, or class. The VBComponents collection is a standard collection that can be used in a For...Each block.
This line -> https://github.com/go-ole/go-ole/blob/master/oleutil/oleutil.go#L106
should be replaced with
newEnum, err := disp.CallMethod("_NewEnum")
Now it works as intended.
| |
doc_23529665
|
My requirement is to ignore all rows for an FK if that FK has category M. The query should return only rows whose category is N and type is Z.
Also, FK is given as a list (in ('A', 'B')).
Basically, in the example below, I need the output to be only the row for PK 5.
PK FK Category Type
1 A L X
2 A M Y
3 A N Z
4 B L X
5 B N Z
6 C L X
7 C M Y
8 C N Z
Thanks
A: Why not try:
SELECT *
FROM tab_name
WHERE UPPER (category) = 'N'
AND UPPER (type) = 'Z'
AND UPPER (fk) IN ('A', 'B')
SQL Fiddle
WITH t
AS (SELECT 1 AS pk,
'A' AS fk,
'L' AS category,
'X' AS TYPE
FROM DUAL
UNION
SELECT 2 AS pk,
'A' AS fk,
'M' AS category,
'Y' AS TYPE
FROM DUAL
UNION
SELECT 3 AS pk,
'A' AS fk,
'N' AS category,
'Z' AS TYPE
FROM DUAL
UNION
SELECT 4 AS pk,
'B' AS fk,
'L' AS category,
'X' AS TYPE
FROM DUAL
UNION
SELECT 5 AS pk,
'B' AS fk,
'N' AS category,
'Z' AS TYPE
FROM DUAL
UNION
SELECT 6 AS pk,
'C' AS fk,
'L' AS category,
'X' AS TYPE
FROM DUAL
UNION
SELECT 7 AS pk,
'C' AS fk,
'M' AS category,
'Y' AS TYPE
FROM DUAL
UNION
SELECT 8 AS pk,
'C' AS fk,
'N' AS category,
'Z' AS TYPE
FROM DUAL)
(SELECT distinct *
FROM t
WHERE UPPER (category) = 'N'
AND UPPER (TYPE) = 'Z'
AND UPPER (fk) IN ('A', 'B'))
Output
╔════╦════╦══════════╦══════╗
║ PK ║ FK ║ CATEGORY ║ TYPE ║
╠════╬════╬══════════╬══════╣
║ 3 ║ A ║ N ║ Z ║
║ 5 ║ B ║ N ║ Z ║
╚════╩════╩══════════╩══════╝
A: The simplest, though maybe not the most efficient, approach is
SELECT *
FROM tab_name
WHERE FK not in (SELECT FK FROM tab_name WHERE category = 'M')
A: Try this.
SELECT *
FROM table_name
WHERE Category = 'N'
AND Type = 'Z'
AND FK IN ('A', 'B');
A: Try this
Select PK, FK, Category, Type
From tab_name
Where Category = Case when Category = 'M' Then 'N' else Category end
and Type = Case when Category = 'M' Then 'Z' else Type end
and FK in ('A', 'B')
A: You would seem to want:
select t.*
from t
where t.category = 'N' and t.type = 'Z' and
not exists (select 1
from t t2
where t2.fk = t.fk and t2.category = 'M'
);
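For anyone who wants to check the NOT EXISTS approach quickly, here is a small sketch using Python's built-in sqlite3 module (table and column names taken from the sample data above):

```python
import sqlite3

# Build the sample table from the question in an in-memory database.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE t (pk INTEGER, fk TEXT, category TEXT, type TEXT);
    INSERT INTO t VALUES
        (1,'A','L','X'), (2,'A','M','Y'), (3,'A','N','Z'),
        (4,'B','L','X'), (5,'B','N','Z'),
        (6,'C','L','X'), (7,'C','M','Y'), (8,'C','N','Z');
""")

# Keep category N / type Z rows whose FK has no category-M row at all.
rows = conn.execute("""
    SELECT t.* FROM t
    WHERE t.category = 'N' AND t.type = 'Z'
      AND NOT EXISTS (SELECT 1 FROM t t2
                      WHERE t2.fk = t.fk AND t2.category = 'M')
""").fetchall()
print(rows)  # [(5, 'B', 'N', 'Z')]
```

Only FK B survives, since FK A and FK C both have a category-M row, which matches the expected output of just PK 5.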
You can also use analytic functions:
select t.*
from (select t.*,
sum(case when category = 'M' then 1 else 0 end) over (partition by fk) as m_cnt
from t
) t
where category = 'N' and type = 'Z' and
m_cnt = 0;
| |
doc_23529666
|
<style>
#canvas:-webkit-full-screen {
height:100%;
width: 100%;
}
</style>
<button onclick="fullScreen()">Full Screen</button>
<canvas id="canvas"></canvas>
<script>
function fullScreen() {
var c = document.getElementById("canvas");
c.webkitRequestFullScreen();
}
</script>
This code makes the canvas full screen, but only the height is full screen.
A: Try using position: fixed on the canvas element:
position:fixed;
left:0;
top:0;
width:100%;
height:100%;
You might need to set html and body with width/height: 100% as well.
If you want to avoid the zoom this will give you, you need to set the width and height attributes of the canvas element directly (use window.innerHeight and window.innerWidth).
A: I do full screen canvas this way:
More details can be found here.
var canvas = document.getElementById("mycanvas");
var ctx = canvas.getContext("2d");
function refreshCanvas() {//refresh size canvas function
document.body.style.width = window.innerWidth+'px';
document.body.style.height = window.innerHeight+'px';
canvas.setAttribute('width', window.innerWidth+'px');
canvas.setAttribute('height', window.innerHeight+'px');
}
window.addEventListener('resize', refreshCanvas); //refresh when resize window
window.addEventListener('load', refreshCanvas); //refresh when load window
body {
margin: 0px;
}
#mycanvas {
background-image: url('https://gmer.io/resource/community/backgroundCanvas.png');
}
<canvas id="mycanvas"></canvas>
| |
doc_23529667
|
I need to read a file from a blob using the rest api.
I want to use the eTag to cache it locally and then only read the file
if the etag has changed.
If the Etag is not the correct way to check if data has changed than please advice?
What I have noticed is that I can read the file, but the ETag is null all the time.
What am I doing wrong?
private const string EtagKey = "myEtag";
private readonly string xmlFilename
= Path.Combine(Environment.GetFolderPath(Environment.SpecialFolder.LocalApplicationData),
"MyFile.xml");
//get etag from cache if there
public string CachedEtagVersion => Preferences.Get(EtagKey, string.Empty);
private async Task<string> GetFileFromCloud()
{
using (var httpClient = new HttpClient())
{
if (!string.IsNullOrEmpty(CachedEtagVersion))
{
httpClient.DefaultRequestHeaders.Add("If-None-Match", CachedEtagVersion);
}
var response = await httpClient.GetAsync("https://mybloburl/XmlFiles/Myfile.xml");
string xmlFileContent;
if (response.StatusCode == HttpStatusCode.NotModified)
{
xmlFileContent = ReadFromCache();
}
else
{
xmlFileContent = await response.Content.ReadAsStringAsync();
}
// response.Headers.ETag is always null. Why?
UpdateLocalCache(response.Headers.ETag, xmlFileContent);
return xmlFileContent;
}
}
private void UpdateLocalCache(EntityTagHeaderValue eTag, string xmlFileContent)
{
if (eTag == null || CachedEtagVersion == eTag.Tag) return;
//cache the etag it on the device using xam essentials
Preferences.Set(EtagKey, eTag.Tag);
File.WriteAllText(xmlFilename, xmlFileContent);
}
private string ReadFromCache()
{
if (!File.Exists(xmlFilename)) return string.Empty;
return File.ReadAllText(xmlFilename);
}
A: According to this:
https://www.michaelcrump.net/azure-tips-and-tricks88/
the ETag is for optimistic concurrency, not for tracking the last time a file changed.
Use the Timestamp property for that.
| |
doc_23529668
|
Right now, in my composer.json file, I have a repository path so that the project knows to always use my current version.
"repositories": {
"local": {
"type": "path",
"url": "../../packages/my-account/my-package"
}
}
The path is in the require path of the main project as it contains files that are extended in the project.
"require": {
"php": "^7.1.3",
"fideloper/proxy": "^4.0",
"laravel/framework": "5.8.*",
"laravel/tinker": "^1.0",
"nothingworks/blade-svg": "^0.3.0",
"my-account/my-package": "^0.3.8@dev",
...
I used to just have both projects open, update the package, and push it to Packagist, then wait while the other project downloaded the update. It seemed like such a time waster, especially since I'd often only find an issue after upgrading and running the unit tests.
Instead, I have a soft link on my system (macOS Mojave) from my vendor folder to my account folder, so that in PhpStorm I can open the files I need and make changes to both projects at the same time. I basically only ever have to commit when everything is working as expected. This has been a massive time saver; however, it has a drawback...
Every time I commit to the primary project, I need to remove the repositories chunk from my composer.json to push it to staging or production, since there is no repositories-dev or anything of that nature.
Is there a better workflow? Using two composer.jsons? Or some PhpStorm magic that can solve this, so I'm not stripping out that chunk of code on each commit?
A: You can use https://github.com/franzliedke/studio which solves your exact problem. It saves you from modifying the composer.json file to add a symlink to your package, as it does the exact same thing on the fly, via a Composer plugin.
| |
doc_23529669
|
Sub Delete_External ()
'
' Delete_External Macro
Dim LastRow As Long
Dim i As Long
LastRow = Range("K1000").End(xlUp).Row
For i = LastRow To 1 Step -1
If Range("K" & i) = "External" Then
Range("K" & i).EntireRow.Delete
End If
Next
End Sub
A: Qualify your objects with a worksheet.
Sub Delete_External ()
'
' Delete_External Macro
Dim ws as Worksheet: Set ws = ThisWorkbook.Sheets("Sheet1")
Dim LastRow As Long
Dim i As Long
LastRow = ws.Range("K1000").End(xlUp).Row
For i = LastRow To 1 Step -1
If ws.Range("K" & i) = "External" Then
ws.Range("K" & i).EntireRow.Delete
End If
Next
End Sub
I would avoid deleting rows inside your loop, as this can become time-consuming depending on the data set and the number of criteria matches. Consider this alternative, which loops through your range and builds a collection (Union) of the cells that match your criteria. Once the loop is complete, delete the Union all at once.
This version has also been updated to a more common last-row finder and removes the backwards loop, since this method does not require it!
Sub Delete_External()
'
' Delete_External Macro
Dim ws As Worksheet: Set ws = ThisWorkbook.Sheets("Sheet1")
Dim LR As Long, i As Long
Dim DeleteMe As Range
LR = ws.Range("K" & ws.Rows.Count).End(xlUp).Row
For i = 2 To LR
If ws.Range("K" & i) = "External" Then
If Not DeleteMe Is Nothing Then
Set DeleteMe = Union(DeleteMe, ws.Range("K" & i))
Else
Set DeleteMe = ws.Range("K" & i)
End If
End If
Next i
If Not DeleteMe Is Nothing Then
DeleteMe.EntireRow.Delete
End If
End Sub
| |
doc_23529670
|
void *p = &i;
int *x = static_cast<int*>(p);
int *y = reinterpret_cast<int*>(p);
Which cast should be used to convert from void* to int*, and why?
A: static_cast provided that you know (by design of your program) that the thing pointed to really is an int.
static_cast is designed to reverse any implicit conversion. You converted to void* implicitly, therefore you can (and should) convert back with static_cast if you know that you really are just reversing an earlier conversion.
With that assumption, nothing is being reinterpreted - void is an incomplete type, meaning that it has no values, so at no point are you interpreting either a stored int value "as void" or a stored "void value" as int. void* is just an ugly way of saying, "I don't know the type, but I'm going to pass the pointer on to someone else who does".
reinterpret_cast if you've omitted details that mean you might actually be reading memory using a type other than the type it was written with, and be aware that your code will have limited portability.
By the way, there are not very many good reasons for using a void* pointer in this way in C++. C-style callback interfaces can often be replaced with either a template function (for anything that resembles the standard function qsort) or a virtual interface (for anything that resembles a registered listener). If your C++ code is using some C API then of course you don't have much choice.
A: In current C++, you can't use reinterpret_cast like in that code. For a conversion of void* to int* you can only use static_cast (or the equivalent C-style cast).
For a conversion between different function type pointers or between different object type pointers you need to use reinterpret_cast.
In C++0x, reinterpret_cast<int*>(p) will be equivalent to static_cast<int*>(p). It's probably incorporated in one of the next WPs.
It's a misconception that reinterpret_cast<T*>(p) would interpret the bits of p as if they were representing a T*. In that case it will read the value of p using p's type, and that value is then converted to a T*. An actual type-pun that directly reads the bits of p using the representation of type T* only happens when you cast to a reference type, as in reinterpret_cast<T*&>(p).
As far as I know, all current compilers allow to reinterpret_cast from void* and behave equivalent to the corresponding static_cast, even though it is not allowed in current C++03. The amount of code broken when it's rejected will be no fun, so there is no motivation for them to forbid it.
A: When should static_cast, dynamic_cast, const_cast and reinterpret_cast be used? gives some good details.
A: From the semantics of your problem, I'd go with reinterpret, because that's what you actually do.
| |
doc_23529671
|
Right now I use user controls for the content but I would like to change that to be more generic.
A: You can:
1) Place a UserControl inside a TabItem:
<TabControl>
<TabItem>
<local:MyUserControl/>
</TabItem>
</TabControl>
2) Inherit from TabItem rather than UserControl:
public class MyTabItem : TabItem { ... }
<TabControl>
<local:MyTabItem/>
</TabControl>
| |
doc_23529672
|
My thought was to create an external window that takes up only a portion of the screen, thereby (hopefully) leaving the "video mirror" content still visible. So I'm trying this admittedly dicey approach, and every time it "blows up" to be fullscreen (which I can see because I've set its background color to red.)
CGRect externalScreenFrame = CGRectMake(0, 0, 200, 200);
UIWindow* externalWindow = [[UIWindow alloc] initWithFrame:externalScreenFrame];
externalWindow.autoresizingMask = UIViewAutoresizingNone;
externalWindow.backgroundColor = [UIColor redColor];
externalWindow.screen = externalScreen;
externalWindow.hidden = NO;
Am I doing something wrong? Or is this just flat-out impossible?
| |
doc_23529673
|
If the value in textbox1 (which is numeric) is not found I set the focus to textbox2 so I can load the record by name.
This part works for textbox1
Dim rs As DAO.Recordset
Private Sub Form_Load()
Set rs = Me.RecordsetClone
End Sub
Private Sub textbox1_AfterUpdate()
If (Me.textbox1 & vbNullString) = vbNullString Then Exit Sub
rs.FindFirst "[UPC]=" & Me.textbox1
If rs.NoMatch Then
Me.textbox2.SetFocus
Else
Me.Recordset.Bookmark = rs.Bookmark
End If
End Sub
Private Sub Form_Close()
rs.Close
End Sub
This is not working
Private Sub textbox2_AfterUpdate()
rs.FindFirst "[ITEMNAME] Like '*" & Me.textbox2 & "*'"
If rs.NoMatch Then
Me.pmItem.SetFocus
Else
Me.Recordset.Bookmark = rs.Bookmark
End If
End Sub
| |
doc_23529674
|
if ( ary[3] == 0xFF && ary[4] == 0xFF && ary[5] == 0xFF && ary[6] == 0xFF ... )
{
// do something
}
I can obviously make my own function to do it like memcmp, but I don't want to reinvent the wheel if I don't have to.
A: I suppose a completely generic function would look something like this:
#include <string.h>
#include <stdbool.h>
#include <stdint.h>
bool check_array (const void* array,
                  size_t array_n, /* number of elements */
                  const void* value,
                  size_t type_size)
{
    const uint8_t* begin = array;
    const uint8_t* end = begin + array_n * type_size;
    const uint8_t* byte_val = value;
    const uint8_t* byte_ptr;
    for(byte_ptr = begin; byte_ptr < end; byte_ptr += type_size)
    {
        if( memcmp(byte_ptr,
                   byte_val,
                   type_size) != 0 )
        {
            return false;
        }
    }
    return true;
}
int main()
{
    int array [] = { ... };
    int value = 0x12345678;
    bool result = check_array (array,
                               sizeof(array) / sizeof(array[0]),
                               &value,
                               sizeof(int));
}
A: From your answer I see you need to compare bytes; in that case string.h supports memchr, strchr and their derivatives, depending on how exactly you want to use it.
If you wanted to implement it yourself and performance is an issue, I'd suggest inlining a block of assembly doing rep scasb.
| |
doc_23529675
|
my models.py:
class postManager(models.Manager):
def repost(self, author, parent_obj):
if parent_obj.parent:
og_parent = parent_obj.parent
else:
og_parent = parent_obj
obj = self.model(
parent = og_parent,
author = author,
content = og_parent.content,
image = og_parent.image,
)
obj.save()
return obj
class post(models.Model):
parent = models.ForeignKey("self", on_delete=models.CASCADE, blank=True, null=True)
title = models.CharField(max_length=100)
image = models.ImageField(upload_to='post_pics', null=True, blank=True)
video = models.FileField(upload_to='post_videos', null=True, blank=True)
content = models.TextField()
likes = models.ManyToManyField(User, related_name='likes', blank=True)
date_posted = models.DateTimeField(default=timezone.now)
author = models.ForeignKey(User, on_delete=models.CASCADE)
objects = postManager()
def __str__(self):
return self.title
my views.py:
@login_required
def post_list(request):
count_filter = Q(likes=request.user)
like_case = Count('likes', filter=count_filter, output_field=BooleanField())
posts = post.objects.annotate(is_liked=like_case).all().order_by('-date_posted')
return render(request, 'blog/home.html', {'posts': posts, })
class PostDeleteView(LoginRequiredMixin, UserPassesTestMixin, DeleteView):
model = post
fields = ['content', 'image', 'video']
success_url = '/'
def test_func(self):
post = self.get_object()
if self.request.user == post.author:
return True
return False
my urls.py:
path('post/<int:pk>/delete/', PostDeleteView.as_view(), name='post-delete'),
I'm using the parent field in the model (a foreign key to "self"), so the delete(self) override approach is not working for me.
A: This is normal behaviour because the instances of post only store a path to the file, like a string, not the actual file. The actual file is stored on a filesystem.
To delete the actual file upon deletion of an instance of post, you have some choices:
1. Override the delete function
e.g.
def delete(self, using=None, keep_parents=False):
self.image.delete()
super().delete()
2. Add a post delete signal
Although there is nothing inherently wrong with using signals, you should consider two points before using them:
*
*A signal can be sent multiple times for a single action, so you have to handle any code that might raise errors if run more than once on a single instance (e.g. attempting to delete a non-existent file)
try:
os.remove(filename)
except OSError:
pass
*Because a signal is not within the expected flow of code, if it has a bug, it can be tricky to trace it down for less experienced developers
3. Use a package such as django-cleanup
A: For me, post_delete was deleting the image but, strangely, the object was not deleted. After using the pre_delete signal the issue was solved. If anyone knows the reason for this behavior, please explain. Thank you.
from django.db.models.signals import pre_save, pre_delete
from django.dispatch.dispatcher import receiver
@receiver(pre_delete, sender=Item)
def image_delete_handler(sender, instance, *args, **kwargs):
if instance.image and instance.image.url:
instance.image.delete()
A: You can try this:
import os

def delete(self, using=None):
    os.unlink(self.image.path)
    super(post, self).delete()
A: You could use post_delete signal.
In your models.py:
from django.db.models.signals import post_delete
from django.dispatch import receiver
@receiver(post_delete, sender=post)
def photo_post_delete_handler(sender, **kwargs):
photo = kwargs['instance']
if photo.image:
storage, path = photo.image.storage, photo.image.path
storage.delete(path)
A: Use "django-cleanup" package to automatically remove uploaded files for FileField, ImageField and subclasses when they are deleted and updated. And setting "django-cleanup" is easy.
First, install "django-cleanup":
pip install django-cleanup
Then, set it to the bottom of "INSTALLED_APPS" in "settings.py". That's it:
INSTALLED_APPS = (
...,
'django_cleanup.apps.CleanupConfig',
)
This is the github link for "django-cleanup"
This is the pypi link for "django-cleanup"
| |
doc_23529676
|
A
1
2
3
INF
The column "A" is defined as numeric, but it was read from R, so INF is R's representation of infinity.
How do I obtain the max of A using SQLite SQL? I tried
select
max(a)
from
table
where a != INF
and
select
max(a)
from
table
where a != "INF"
As you can see, I'm a noob at SQLite.
A: You can use something like 9e999 for Inf, e.g.,
select
max(a)
from
table
where a != 9e999
A: Extending the answer by Matthew Plourde:
How about
select
max(a)
from
table
where a < 9e999
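Both variants can be checked with Python's built-in sqlite3 module; this sketch assumes the infinite value really is stored as a floating-point infinity:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (a REAL)")
conn.executemany("INSERT INTO t VALUES (?)",
                 [(1,), (2,), (3,), (float("inf"),)])

# The literal 9e999 overflows to +Inf in SQLite, so `a < 9e999`
# excludes the Inf row (Inf < Inf is false).
result = conn.execute("SELECT max(a) FROM t WHERE a < 9e999").fetchone()[0]
print(result)  # 3.0
```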
| |
doc_23529677
|
there are 2 projects ( project1 and project2 )
I have dev's and ops users. Devs to only be created on dev servers in projects that the user is assigned and ops's to be created in all projects and envs.
I'd like for all users to be defined in the same user definition file.
my user definitions:
- username:        # login name
  profile:         # dev / ops
  projects:        # project1 / project2 / all
  key:             # "ssh-rsa ..." public key
  OSgroups: ""     # which OS groups the user is a member of
  OSpass: ""       # hashed OS password
my user create playbook:
- name: Create users
  become: yes
  user:
    name={{ item.username }}
    shell={{ item.shell }}
    groups={{ item.groups }}
    createhome=yes
    password={{ item.OSpass }}
  ## now the problem part
  with_items:
    - "{{ users }}"
  when: "{{ defaults_for_env.environment }} == {{ item.profile }}"
  ##
------------------------------------------------------------
## environment defaults
---
defaults_for_env:
  - environment: "dev"
When just running the user-create playbook, users are created, so the commands work.
What I'd like is for the playbook to:
for hosts in the inventory group [development], create devs assigned to the inventory group [project1] plus all users of type ops;
and for hosts in the inventory group [prod], create only users of type ops.
I can't get my head around the loops, inventory and so on.
Hope my question makes sense?
A: One possible solution to your current requirement.
Inventory
---
all:
  children:
    dev:
      hosts:
        devhost1:
        devhost2:
    prod:
      hosts:
        prodhost1:
        prodhost2:
group_vars/all.yaml
---
#....
default_users:
  - name: opsuser1
    shell: /bin/bash
    groups:
      - group1
      - group2
    createhome: true
    password: S0S3cr3t
  - name: opsuser2
    shell: /bin/sh
    groups:
      - wheel
      - docker
      - users
    createhome: false
    password: n0ts0S3cr3t
users: "{{ default_users + (specific_users | default([])) }}"
group_vars/dev.yml
---
#....
specific_users:
  - name: devuser1
    shell: /bin/bash
    groups:
      - groupa
      - groupb
    createhome: true
    password: v3rYS3cr3t
  - name: devuser2
    shell: /bin/sh
    groups:
      - titi
      - toto
      - tata
    createhome: false
    password: U1trAS3cr3t
Your playbook
- hosts: all
  become: true
  tasks:
    - name: Create users
      user:
        name: "{{ item.name }}"
        shell: "{{ item.shell }}"
        groups: "{{ item.groups }}"
        createhome: "{{ item.createhome | bool }}"
        password: "{{ item.password | password_hash('sha512', 'S3cretS4lt') }}"
      loop: "{{ users | flatten(levels=1) }}"
The playbook will go over all your hosts. By default, it reads the values in the all group, where you have the definition of default_users (i.e. ops) plus the calculation of the users list as default_users + specific_users.
For machines in the prod group, specific_users is undefined and will default to an empty list.
For machines in the dev group, specific_users will be added to the default ones.
The loop is then made over users, which will have the correct values for each machine depending on its situation.
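The list-merging expression in group_vars/all.yaml behaves like this Python sketch (names copied from the files above; the Jinja default([]) filter corresponds to the `or []` fallback):

```python
default_users = [{"name": "opsuser1"}, {"name": "opsuser2"}]  # ops, defined for all hosts
specific_users = [{"name": "devuser1"}]                       # only defined for dev hosts

def effective_users(default_users, specific_users=None):
    # Mirrors: users: "{{ default_users + (specific_users | default([])) }}"
    return default_users + (specific_users or [])

print([u["name"] for u in effective_users(default_users, specific_users)])  # dev hosts
print([u["name"] for u in effective_users(default_users)])                  # prod hosts
```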
| |
doc_23529678
|
I have successfully uploaded an image to the server, but I can't get my console.log percent to show in my console.
if (ques_title == "" || ques_content == "") {
console.log("fill up all the fills.")
} else {
// if got image , then upload image , then start to get data and insert into articles
if (imgCheck != null) {
var file_data = $('#fileuploadInput').prop('files')[0];
var form = new FormData();
form.append("key", "60c61b2e38bc472d82b55d61ec29defc");
form.append("image", file_data, file_data.name);
var settings = {
"url": "https://api.imgbb.com/1/upload",
"method": "POST",
"async": false,
"timeout": 0,
"processData": false,
"mimeType": "multipart/form-data",
"contentType": false,
"data": form,
"success": function (result) {
console.log(result);
}
};
ajax = new XMLHttpRequest();
ajax.onreadystatechange = function () {
if (ajax.status) {
if (ajax.status == 200 && (ajax.readyState == 4)) {
//To do tasks if any, when upload is completed
console.log("Upload done")
}
}
}
ajax.upload.addEventListener("progress", function (event) {
var percent = (event.loaded / event.total) * 100;
//*percent* variable can be used for modifying the length of your progress bar.
console.log(percent);
});
ajax.open("POST", 'https://api.imgbb.com/1/upload', true);
ajax.send(form);
My console only shows me "Upload done", but the upload.addEventListener doesn't fire at all.
Please guide me and tell me what I've done wrong in this code.
| |
doc_23529679
|
NSMutableDictionary *testLocal = [[NSMutableDictionary alloc] init];
[testLocal setObject:@"Test" forKey:@"title"];
[testLocal setObject:@"test notification" forKey:@"body"];
[testLocal setObject:@"1" forKey:@"repeat"];
[testLocal setObject:@"26.04.2011 - 12:53" forKey:@"start"];
NSMutableDictionary *dict = [[NSMutableDictionary alloc] initWithDictionary:[saver read]];
[[dict objectForKey:@"content"] addObject:testLocal]; //Crashes here! (SIGABRT)
The method [saver read] returns this:
{
content = (
{
body = "test notification";
repeat = 1;
start = "26.04.2011 - 13.06";
title = Test;
}
);
}
So I don't see the error because the dict I write to is mutable and the key "content" is an array.
Thanks in advance.
mavrick3.
[saver read]:
- (NSDictionary *)read {
return [[NSDictionary alloc] initWithContentsOfFile:[self filePath]];
}
A: Try checking out what class the object returned by [dict objectForKey:@"content"] is. Then things will be much clearer to you. I suspect it is not returning an NSMutableArray instance but something else, most likely NSArray which doesn't respond to method addObject:
A: From the Apple documentation for objectForKey:
The value associated with aKey, or nil if no value is associated with aKey.
So your code could be like below
if([dict objectForKey:@"content"] != nil && [[dict objectForKey:@"content"] isKindOfClass:[NSMutableArray class]] )
{
[[dict objectForKey:@"content"] addObject:testLocal];
}
else
{
[dict setObject: testLocal forKey: @"content"];
}
A: Is the array to which you want to add a dictionary mutable?
| |
doc_23529680
|
I am trying to hide an element by sliding it to the left/right, and show an element by sliding in from the left or right:
$(oldBox).hide('slide',{direction:'right'},500);
$(newBox).show('slide',{direction:'right'},500);
What actually happens is that the old element hides without sliding, and the new element shows without sliding.
Relevant section(s) of HTML (the same code repeats several times, just has changes in id's and names):
<div class="x" id="waistBox">
<div class="measureCol measureColLeft">
<span class="measureHeader">Waist</span>
<div class="measureDescription">
Surround your waist with the tape measure at the height
you are most comfortable wearing your pants. Adjust the
tape measure to your desired snugness, as your purchase
waist will match this measurement exactly (e.g. 34.75
inches).
</div>
<div class="measureValue">
<input type="text" name="waist" value = "1" id="waistInput" class="editMeasureInfo"/>
<i>inches</i>
</div>
</div>
<div class="measureCol measureColRight">
<div class="measureVideoContainer">
<img src="../../images/Measure_Video.PNG" class="measureVideo" />
</div>
</div>
</div>
JQuery function:
function change_measurement_display(oldDisp, newDisp) {
var oldBox = '#' + oldDisp + 'Box';
var newBox = '#' + newDisp + 'Box';
$(oldBox).hide('slide', {
direction : 'right'
}, 500);
$(newBox).show('slide', {
direction : 'right'
}, 500);
}
A: You want to use 'effect' instead of 'hide':
$(oldBox).effect('slide', { direction: 'right', mode: 'hide' }, 500);
$(newBox).effect('slide', { direction: 'right', mode: 'show' }, 500);
| |
doc_23529681
|
I started on a project and found code:
<input (keyup)="someFunc($event)" [formControl]="someControl">
Because I am refactoring all the forms, my first thought was to remove (keyup) and use someControl.valueChanges.subscribe(...), but now I have doubts, because if I do so I just add more code to the component (e.g. importing Subscription and implementing OnDestroy to unsubscribe, plus the someControl.valueChanges.subscribe call itself). So compared to keyup, valueChanges.subscribe adds much more code.
Are there any benefits to it that I do not realise?
P.S. I know that with keyup it fires when the user releases a key, while valueChanges.subscribe fires when the user presses a key. But in my case, such a difference doesn't matter.
A: If you use valueChanges you can use the pipe operator as well and manipulate the input value. For example:
this.formControl.valueChanges
.pipe(
distinctUntilChanged(),
filter(v => v.length > 3),
tap(console.log))
.subscribe();
A: I would use valueChanges over keyup, as you are less likely to miss anything: it listens for a change to the form control's value rather than for an event. So however the value is changed, whether programmatically or via the DOM, the change will be noticed.
This will need to be handled with an unsubscribe, though; as it is not an HTTP request, the subscription will not end on its own when you are done. I would try something like:
public subscriptions = new Subscription();
this.subscriptions.add(this.formControl.valueChanges
.pipe(distinctUntilChanged()) // makes sure the value has actually changed.
.subscribe(data => console.log(data)));
public ngOnDestroy(): void
{
this.subscriptions.unsubscribe();
// ensure when component is destroyed the subscription is also and not left open.
}
An example of unsubscribing can be found here.
A: I agree with Tim's comment to your post: "keyUp" is not fired when you e.g. copy-paste some text into your form, while valueChanges AFAIK should also be fired in this case
A: As I mentioned in the comment, valueChanges will react to copy/pasted values or values that are inserted automatically by the browser (autofilled forms). Also, key events might not be the right choice on mobile devices, where other events might lead to values being changed in the form.
In my opinion, KeyListeners should only be used if there is no good alternative. They might even affect the performance of your app.
| |
doc_23529682
|
I am aware of the Access Advisor tab for finding linked resources; is there any other function apart from that?
Thank you
A: You can use aws iam list-entities-for-policy --policy-arn <arn-of-policy>
For example:
*Open CloudShell or your configured AWS CLI
*Run: aws iam list-entities-for-policy --policy-arn arn:aws:iam::123456789:policy/service-role/AwsCodePipelineServiceRole-central-1-foo
| |
doc_23529683
|
ERROR: Service 'app' failed to build: COPY failed: stat /var/lib/docker/tmp/docker-builder367230859/entrypoint.sh: no such file or directory
I tried to change Mongo to PostgreSQL, but the error continues.
My files are below. Thanks in advance.
This is my docker-compose.yml:
version: '3'
services:
web:
image: nginx
restart: always
# volumes:
# - ${APPLICATION}:/var/www/html
# - ${NGINX_HOST_LOG_PATH}:/var/log/nginx
# - ${NGINX_SITES_PATH}:/etc/nginx/conf.d
ports:
- "80:80"
- "443:443"
networks:
- web
mongo:
image: mongo
environment:
MONGO_INITDB_ROOT_USERNAME: admin
MONGO_INITDB_ROOT_PASSWORD: password
ports:
- "27017:27017"
# volumes:
# - data:/data/db
networks:
- mongo
app:
build: .
volumes:
- .:/mm_api
ports:
- 3000:3000
depends_on:
- mongo
networks:
web:
driver: bridge
mongo:
driver: bridge
This is my Dockerfile:
FROM ruby:2.7.0
RUN apt-get update -qq && apt-get install -y nodejs
RUN mkdir /mm_api
WORKDIR /mm_api
COPY Gemfile /mm_api/Gemfile
COPY Gemfile.lock /mm_api/Gemfile.lock
RUN bundle install
COPY . /mm_api
COPY entrypoint.sh /usr/bin/
RUN chmod +x /usr/bin/entrypoint.sh
ENTRYPOINT ["entrypoint.sh"]
EXPOSE 3000
CMD ["bundle", "exec", "puma", "-C", "config/puma.rb"]
#CMD ["rails", "server", "-b", "0.0.0.0"]
This is my entrypoint.sh:
#!/bin/bash
set -e
rm -f /mm_api/tmp/pids/server.pid
exec "$@"
A: I had a similar issue when working on a Rails 6 application using Docker.
When I run docker-compose build, I get the error:
Step 10/16 : COPY Gemfile Gemfile.lock ./
ERROR: Service 'app' failed to build : COPY failed: stat /var/lib/docker/tmp/docker-builder408411426/Gemfile.lock: no such file or directory
Here's how I fixed it:
The issue was that the Gemfile.lock file was missing from my project directory. I had deleted it when I was having some issues with my gem dependencies.
All I had to do was to run the command below to install the necessary gems and then re-create the Gemfile.lock:
bundle install
And then this time when I ran the command docker-compose build everything worked fine again.
So whenever you encounter this issue, check that the file is present in your directory and, most importantly, that the path you specified to the file is correct.
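As a quick sanity check, a sketch like this (COPY source names taken from the Dockerfile above) lists anything missing from the build context before you run docker-compose build:

```shell
# Run from the build context; prints any COPY source that is missing.
# (Demonstrated here in a temp dir with Gemfile.lock deliberately absent.)
cd "$(mktemp -d)"
touch Gemfile entrypoint.sh
for f in Gemfile Gemfile.lock entrypoint.sh; do
  [ -f "$f" ] || echo "missing: $f"
done
# prints: missing: Gemfile.lock
```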
That's all.
I hope this helps
| |
doc_23529684
|
>>> help('os')
I get greeted with
Help on module os:
NAME
os - OS routines for NT or Posix depending on what system we're on.
MODULE REFERENCE
https://docs.python.org/3.8/library/os
The following documentation is automatically generated from the Python
source files. It may be incomplete, incorrect or include features that
are considered implementation detail and may vary between Python
implementations. When in doubt, consult the module reference at the
location listed above.
. . .
and so forth. It is quite simple to find the module reference just by looking at this, but say I was collecting the references of 100 different modules: it would take quite some time and would be very repetitive.
How can I parse each help() output for the link to the module's documentation? It would involve finding a value such as https:// or .org or .com.
A: I'd argue that you actually don't need to do any parsing, since as far as I know, all the standard library Python modules have documentation accessible at the URL https://docs.python.org/<version>/library/<modulename>. It would be far more efficient to construct the URL according to that pattern compared to parsing the help text.
That being said, if you really do want to parse the help text, the re.search function should be useful. You can write a regular expression to match the URL of a Python standard library documentation page and presumably the first match should be the result you want.
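A minimal sketch of that regex approach (using `pydoc.render_doc` to capture the help text as a string is my own choice; there are other ways to capture it):

```python
import pydoc
import re

def doc_url(module_name):
    """Return the first URL found in a module's rendered help text, or None."""
    # pydoc.render_doc returns the same text help() prints; pydoc.plain
    # strips the backspace-based bold formatting from it.
    text = pydoc.plain(pydoc.render_doc(module_name))
    match = re.search(r"https?://\S+", text)
    return match.group(0) if match else None
```

For a standard-library module, `doc_url("os")` should return the docs.python.org module reference shown in the help output.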
| |
doc_23529685
|
USE muziekdatabase
GO
CREATE FUNCTION fnSpecNivAantal
(
@Niveau as char(1)
)
RETURNS char(1)
AS
BEGIN
DECLARE @Aantal AS int
IF @Niveau = 'A'
SET @Aantal = (SELECT COUNT(*) FROM STUK WHERE niveaucode = 'A')
ELSE IF @Niveau = 'B'
SET @Aantal = (SELECT COUNT(*) FROM STUK WHERE niveaucode = 'B')
ELSE IF @Niveau = 'C'
SET @Aantal = (SELECT COUNT(*) FROM STUK WHERE niveaucode = 'C')
RETURN @Aantal
END
USE muziekdatabase
GO
ALTER FUNCTION fnHoogsteNummer
(
@EersteNummer as numeric,
@TweedeNummer as numeric
)
RETURNS numeric
AS
BEGIN
DECLARE @HoogsteNummer as VARCHAR(MAX)
IF MAX(@Eerstenummer) > MAX(@TweedeNummer)
SET @HoogsteNummer = @EersteNummer
ELSE IF
MAX(@Tweedenummer) > MAX(@Eerstenummer)
SET @HoogsteNummer = @TweedeNummer
ELSE IF
@EersteNummer = @TweedeNummer
SET @HoogsteNummer = 'Nummers zijn gelijk'
ELSE
SET @HoogsteNummer = 'Er is iets fout gegaan'
RETURN @HoogsteNummer
END
Now they pretty much work like they should, but there's one thing that's not quite right. When I pass a value into my function, the outcome is a whole list with the same answer - like 10 rows with just a 3, where it should only be 1 row with the number 3. I know I can use DISTINCT, but I think something is wrong with the function. I tried to use CASE / WHEN but that's not working either.
A: CREATE FUNCTION fnSpecNivAantal
(
@Niveau as char(1)
)
RETURNS INT --<-- since you are returning count use INT variable not char
AS
BEGIN
DECLARE @Aantal AS int;
IF (@Niveau IN ('A' , 'B', 'C'))
BEGIN
SELECT @Aantal = COUNT(*)
FROM STUK WHERE niveaucode = @Niveau
END
RETURN @Aantal
END
And sorry cannot make any sense of your second function.
A: Changing these functions to inline table-valued functions will offer some performance benefits. They are a slightly different way to think about functions but well worth the effort. One habit you need to get into is defining the size and precision of your variables and parameters. Do NOT get lazy and do things like numeric. You should be precise and define the size. SQL Server does not automatically adjust the scale and precision to meet the values it finds. On the contrary, it uses a default size which may or may not fit your data.
Your first function can be greatly simplified to this.
CREATE FUNCTION fnSpecNivAantal
(
@Niveau as char(1)
)RETURNS TABLE AS RETURN
SELECT Niveau = COUNT(*)
FROM STUK
WHERE niveaucode = @Niveau
AND niveaucode IN ('A', 'B', 'C')
The second one is a little different. You defined your function as returning a numeric (no scale or precision), but your code can return either a numeric or a string literal. This doesn't work because all parts of a function MUST return the same datatype. Not a huge deal; you just have to realize that your return datatype here is a varchar and not a number. This function could be converted to an inline table-valued function like this.
ALTER FUNCTION fnHoogsteNummer
(
@EersteNummer as numeric, --need to define scale and precision
@TweedeNummer as numeric --need to define scale and precision
)
RETURNS TABLE AS RETURN
SELECT CASE WHEN @Eerstenummer > @TweedeNummer THEN convert(varchar(25), @EersteNummer)
            WHEN @TweedeNummer > @EersteNummer THEN convert(varchar(25), @TweedeNummer)
            WHEN @TweedeNummer = @EersteNummer THEN 'Nummers zijn gelijk'
            ELSE 'Er is iets fout gegaan'
       END AS HoogsteNummer -- an inline TVF's columns must be named
Please note that both of these functions are a single statement. This is what makes them an inline table valued function. If you start defining variables and have multiple statements it becomes a multi-statement table valued function and the performance can be even worse than a scalar function.
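For completeness, a usage sketch for the inline versions. Table-valued functions are selected FROM rather than called like scalars; the dbo schema prefix and the sample values here are assumptions:

```sql
-- Hypothetical calls, assuming the functions live in dbo:
SELECT Niveau FROM dbo.fnSpecNivAantal('A');
SELECT * FROM dbo.fnHoogsteNummer(3, 7);

-- Or applied per row with CROSS APPLY:
SELECT s.niveaucode, f.Niveau
FROM (VALUES ('A'), ('B'), ('C')) AS s(niveaucode)
CROSS APPLY dbo.fnSpecNivAantal(s.niveaucode) AS f;
```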
| |
doc_23529686
|
How can I automatically remove the blurred images from this database?
I read about using the Fourier transform to detect blurred images: first I need to transform my images into the Fourier domain, and then by applying some threshold I should be able to identify the blurred ones. Could anybody give me some sample code in MATLAB for this? I don't know how to determine the threshold. Is there any way to determine this threshold?
A: This task is really not so simple. If you remove all the images that don't contain high frequencies, you will end up removing many images that contain smooth scenes even though they are not blurred.
There is no 100% in computer vision. The best thing for you (in my opinion) is to make human-aided software: your software should suggest the images that it thinks should be removed, but the final call must be made by a human being.
| |
doc_23529687
|
Above is the image of the code. My objective is to have 4 columns showing the prices for a bill cycle, and 4 columns showing the average of all bill cycles one year before the initial bill cycle.
The first inner select works by itself and adds the 4 columns to the right in the picture below. The problem starts when I want to add the second inner select with the 4 columns for the yearly average. Initially I didn't have the "select * from", but I couldn't find a better way.
The base of the problem is that I can't implement 2 WHERE clauses on the same column.
Below is a snapshot of what I get from running only the first inner select (screenshot not reproduced here).
I have tried UNION, JOIN, CROSS JOIN... but none seem to work. One person suggested a window function, but I haven't worked with that yet.
The error appears in a screenshot that is not reproduced here.
RAW CODE TEXT IS BELOW:
VARIABLE START_DATE CHAR
EXEC :START_DATE :=''
VARIABLE END_DATE CHAR
EXEC :END_DATE :=''
select * from
(SELECT
"REPORTS"."WEK_MTD_BILL_SUM_HIST_T"."BILL_CYCLE",
"REPORTS"."WEK_MTD_BILL_SUM_HIST_T"."MTD_TOT" AS NET_TOTAL,
SUM(
SUM(
CASE WHEN "REPORTS"."WEK_MTD_BILL_SUM_HIST_T"."BLI_TYPE" = 'Charges'
THEN "WEK_MTD_BILL_SUM_HIST_T"."MTD_TOT" END)) OVER() AS TOTAL_CHARGES,
SUM(
SUM(
CASE WHEN "REPORTS"."WEK_MTD_BILL_SUM_HIST_T"."BLI_TYPE" = 'Credits'
THEN "WEK_MTD_BILL_SUM_HIST_T"."MTD_TOT" END)) OVER() AS TOTAL_CREDIT,
SUM(
SUM(
CASE WHEN "REPORTS"."WEK_MTD_BILL_SUM_HIST_T"."BLI_TYPE" = 'Previous Weekly Billing Net Total'
THEN "WEK_MTD_BILL_SUM_HIST_T"."MTD_TOT" END)) OVER() AS PRV_INVOICED,
SUM(
SUM(
CASE WHEN "REPORTS"."WEK_MTD_BILL_SUM_HIST_T"."BLI_TYPE" = 'Total Due / Receivable'
THEN "WEK_MTD_BILL_SUM_HIST_T"."MTD_TOT"
ELSE 0 END)) OVER() AS Net_Due_Total
FROM "REPORTS"."WEK_MTD_BILL_SUM_HIST_T"),
(SELECT
"REPORTS"."WEK_MTD_BILL_SUM_HIST_T"."BILL_CYCLE",
"REPORTS"."WEK_MTD_BILL_SUM_HIST_T"."MTD_TOT" AS NET_TOTAL,
(SUM(
SUM(
CASE WHEN "REPORTS"."WEK_MTD_BILL_SUM_HIST_T"."BLI_TYPE" = 'Total Due / Receivable' AND ( "REPORTS"."WEK_MTD_BILL_SUM_HIST_T"."BILL_CYCLE" BETWEEN '1-JAN-2022 4' AND :END_DATE)
THEN "WEK_MTD_BILL_SUM_HIST_T"."MTD_TOT" ELSE 0 END)) OVER()/COUNT(DISTINCT(BILL_CYCLE))) AS X
FROM
"REPORTS"."WEK_MTD_BILL_SUM_HIST_T"
WHERE
("REPORTS"."WEK_MTD_BILL_SUM_HIST_T"."BILL_CYCLE" BETWEEN :START_DATE AND :END_DATE )
)
GROUP BY
"REPORTS"."WEK_MTD_BILL_SUM_HIST_T"."MTD_TOT" ,
"REPORTS"."WEK_MTD_BILL_SUM_HIST_T"."BILL_CYCLE"
;
| |
doc_23529688
|
Example of current code:
def func(col1, col2, col3):
    if "string1" in col2:
        return col3
    elif "string2" in col2:
        return col1
df.apply(lambda x: func(x["column1"],x["column2"],x["column3"]), axis = 1)
In the example above I have simplified the function. The actual function also includes another elif which takes a column containing lists in each cell and iterates through each list, performing some string operations to remove certain substrings. That is why I originally chose to use a lambda, as it was easiest to write as a function.
The error just states not enough memory to create array.
All solutions and suggestions welcome.
| |
doc_23529689
|
<div class="content-inner">
<div =" " =" " =" " =" " =" " =" " =" " =" " =" " =" " =" " =" " =" ">
</div>
</div>
A: Most DOM implementations (and possibly the spec, I haven't checked) are very tolerant about the data that they let you add. It wouldn't surprise me in the slightest if the DOM allows you to create an attribute with no name, or a name that consists of a zero-length string. I would suggest that Firebug is faithfully displaying the junk that your application has created in the DOM.
| |
doc_23529690
|
My basic setup is that I have an activity which is essentially a splashscreen. It's on this screen I would like to show the progress. I have a separate DbAdapter.java file where a DatabaseHelper class extends SQLiteOpenHelper, where I override onUpgrade (the upgrade part is working fine).
I've tried a few different places to implement the progress dialog, but I don't seem to find the right spot. I tried passing context from my splashscreen activity to onUpgrade, but when onUpgrade runs it seems to be getting the context from my ContentProvider instead.
Does anyone have a good example of how to display a progress dialog when upgrading a database?
A: You need to implement an AsyncTask. Example:
class YourAsyncTask extends AsyncTask<Void, Void, Void> {
private ProgressDialog progressDialog;
@Override
protected void onPreExecute() {
    // show your dialog here; ProgressDialog.show needs the Activity's
    // context (YourActivity is a placeholder for your enclosing Activity)
    progressDialog = ProgressDialog.show(YourActivity.this, "title", "message", true, false);
}
@Override
protected Void doInBackground(Void... params) {
//update your DB - it will run in a different thread
return null;
}
@Override
protected void onPostExecute(Void result) {
//hide your dialog here
progressDialog.dismiss();
}
}
Then you just have to call
new YourAsyncTask().execute();
You can read more about AsyncTask here: http://developer.android.com/reference/android/os/AsyncTask.html
A: ProgressDialog myProgressDialog = null;
public void DownloadFiles() {
myProgressDialog = ProgressDialog.show(this, "Please wait !",
"Updating...", true);
new Thread() {
public void run() {
try {
//Your upgrade method !
YourUpdateFunction();
} catch (Exception e) {
Log.v(TAG, "Error");
}
// dismiss() must run on the UI thread
runOnUiThread(new Runnable() {
    public void run() { myProgressDialog.dismiss(); }
});
}
}.start();
}
| |
doc_23529691
|
I need to get certain values from a postgresql table using a function and passing parameters to it. Here's a simplified table and some values:
CREATE TABLE testFoo (
id text NOT NULL,
"type" text,
value1 float,
value2 float,
value3 float
)
WITH (
OIDS=FALSE
);
INSERT INTO testFoo (id, "type", value1, value2, value3)
VALUES (1, 'testValue1', '0.11', '0.22', '0.33');
INSERT INTO testFoo (id, "type", value1, value2, value3)
VALUES (1, 'testValue2', '0.00', '0.00', '0.00');
I need to be able to fetch the values based on the 'type' column content.
So for instance if the type = 'testValue2' I need to get to content of 'reading3' column... if it is 'testValue1' then I need 'reading2' out...
Here's a function I came up:
DROP FUNCTION getvalues(_values text[])
CREATE OR REPLACE FUNCTION getvalues(_values text[])
RETURNS TABLE (id text, type text, value float) AS
$BODY$
BEGIN
EXECUTE 'SELECT t.id, t.type,
CASE
WHEN t.type = '|| _values[1] ||' THEN '|| _values[2] ||'
END AS value
FROM testFoo t';
END;
$BODY$
LANGUAGE plpgsql;
SELECT * FROM getvalues(ARRAY['testValue2','"reading3"']);
However it gives me the following error:
ERROR: column "testvalue2" does not exist
LINE 3: WHEN t.type = testValue2 THEN "reading3"
^
QUERY: SELECT t.id, t.type,
CASE
WHEN t.type = testValue2 THEN "reading3"
END AS value
FROM testFoo t
CONTEXT: PL/pgSQL function getvalues(text[]) line 3 at EXECUTE
********** Error **********
ERROR: column "testvalue2" does not exist
I've tried format() and loads of other options unsuccessfully... Could you kindly look in this?
Thanks.
A: You should use the format() function with the appropriate format specifiers, in particular %I for identifiers (column name, in this case) and %L for string literals that need quoting. Also, you need to return the actual data from the function.
CREATE OR REPLACE FUNCTION getvalues(_values text[])
RETURNS TABLE (id text, type text, value float) AS
$BODY$
BEGIN
RETURN QUERY EXECUTE format('SELECT id, "type",
CASE WHEN "type" = %L THEN %I
END AS value
FROM testFoo', _values[1], _values[2]);
END;
$BODY$
LANGUAGE plpgsql;
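A usage sketch for the corrected function. Note the sample table defines value1..value3, so the column-name argument here uses value3 rather than the reading3 from the question:

```sql
-- %L quotes 'testValue2' as a literal, %I quotes value3 as an identifier:
SELECT * FROM getvalues(ARRAY['testValue2', 'value3']);
-- which expands inside the function to roughly:
--   SELECT id, "type",
--          CASE WHEN "type" = 'testValue2' THEN value3 END AS value
--   FROM testFoo
```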
| |
doc_23529692
|
vector<double> x_points;
Then, I need to get the values of minimum and maximum x coordinates inside that vector. So I used *max_element and *min_element by including <algorithm>;
double max_in=*max_element(x_points.begin(),x_points.end());
double min_in=*min_element(x_points.begin(),x_points.end());
then, when I print the values, using
cout<<" min-max-In "<<min_in<<" "<<max_in<<" ";
...it shows only the integer part. I need the whole value, including the decimal part. So how can I get the real values? I need them for further processing.
Thank you, any help is appreciated.
A: You print the values with the default precision, which is 6 digits. Exactly what you get!
Try adding a cout.precision(10); to get more digits.
A: Any reason why you don't use std::set rather than vector? Then they would naturally be ordered. That said, it would destroy the original order and any duplicates (unless you use std::multiset), if that's important to you.
Alternatively, you could work out the max/min as you read the values in, as in the following example. A bit of brief profiling showed it really didn't add any appreciable processing time.
vector<double> xcoords;
double d, dMax, dMin;
bool firstLine(true);
ifstream inFile( argv[1] );
while ( inFile.good() )
{
inFile >> d;
if ( ! inFile.good() || inFile.eof() ) break;
xcoords.push_back(d);
if (firstLine)
{
dMax = d;
dMin = d;
firstLine = false;
}
else
{
dMax = max( dMax, d );
dMin = min( dMin, d );
}
}
cout << dMax << " " << dMin << " " << xcoords.size() << endl;
| |
doc_23529693
|
How do I get a DSC script resource to wait until the code has completed before moving on?
(The code is invoke-expression "path\file.exe")
Details:
I am using PowerShell version 5 and am trying to get DSC set up to handle our SQL Server installations.
My manager has asked me to use the out-of-the-box DSC components, i.e. no downloading of custom modules which may help.
I have built up the config file that handles the base server build - everything is good.
The script resource that installs sql server is good.
It executes, and waits until it has installed completely, before moving on.
When I get up to the script resource that installs the sql server cumulative update, I have issues.
The executable gets called and it starts installing (it should take 10-15 minutes), but the dsc configuration doesn't wait until it has installed, and moves on after a second.
This means that the DependsOn for future steps, gets called, before the installation is complete.
How can I make the script resource wait until it has finished?
A: Have you tried the keyword "DependsOn", like this?
Script MyNewSvc
{
GetScript = {
$SvcName = 'MyNewSvc'
$Results = @{}
$Results['svc'] = Get-Service $SvcName
$Results
}
SetScript = {
$SvcName = 'MyNewSvc'
setup.exe /param
while((Get-Service $SvcName).Status -ne "Running"){ Start-Sleep 10 }
}
TestScript = {
$SvcName = 'MyNewSvc'
$SvcLog = 'c:\svc.log'
If (condition) { #like a a running svc or a log file
$True
}
Else {
$False
}
}
}
WindowsFeature Feature
{
Name = "Web-Server"
Ensure = "Present"
DependsOn = "[Script]MyNewSvc"
}
A: Invoke-Expression doesn't seem to wait until the process has finished - try this in a generic PowerShell console and you'll see the command returns before you close notepad:
Invoke-Expression -Command "notepad.exe";
You can use Start-Process instead:
Start-Process -FilePath "notepad.exe" -Wait -NoNewWindow;
And if you want to check the exit code you can do this:
$process = Start-Process -FilePath "notepad.exe" -Wait -NoNewWindow -PassThru;
$exitcode = $process.ExitCode;
if( $exitcode -ne 0 )
{
# handle errors here
}
Finally, to use command line arguments:
$process = Start-Process -FilePath "setup.exe" -ArgumentList @("/param1", "/param2") -Wait -PassThru;
$exitcode = $process.ExitCode;
| |
doc_23529694
|
1
2 3
4 5 6
7 8 9 10
I wish to store them in a 2D array, how would I achieve it?
I have the following code so far for the read method
private read(fileName){
def count = 0
def fname = new File(fileName)
if (!fname.exists())
println "File Not Found"
else{
def input = []
def inc = 0
fname.eachLine {line->
def arr = line.split(" ")
def list = []
for (i in 1..arr.length-1) {
list.add(arr[i].toInteger())
}
input.add(list)//not sure if this is correct
inc++
}
input.each {
print it
//not sure how to reference the list
}
}
}
I am able to print the lists but I am not sure how to use the list of lists in the program (for performing other operations on it). Can anyone please help me out here?
A: In the input.each, all you need to do is iterate again over each item in the row. If it were a collection of unknown depth, then you'd need a recursive method.
Made a small change and removed the inc, since it is not needed (at least in the snippet):
fname = """1
2 3
4 5 6
7 8 9 10"""
def input = []
fname.eachLine { line->
def array = line.split(" ")
def list = []
for (item in array) {
list.add item.toInteger()
}
input.add list
}
input.each { line ->
print "items in line: "
for (item in line) {
print "$item "
}
println ""
}
Prints:
items in line: 1
items in line: 2 3
items in line: 4 5 6
items in line: 7 8 9 10
That is plain simple iteration. You can use @Tim's suggestion to make it more idiomatic in Groovy :-)
| |
doc_23529695
|
When we count the number of ids created during a time period, we want to see the number of unique "groups" of connected_ids during a period. In other words, we wouldn't want to count both the husband and wife pair, we would only want to count one since they are truly one lead.
We want to be able to create a view that only has the "first" id based on the "created_at" date and then contains additional columns at the end for "connected_lead_id_1", "connected_lead_id_2", "connected_lead_id_3", etc.
We want to add in additional logic so that we take the "first" id's source, unless that is null, then take the "second" connected_lead_id's source unless that is null and so on. Finally, we want to take the earliest on_boarded_date from the connected_lead_id group.
id | created_at | connected_lead_id | on_boarded_date | source |
2 | 9/24/15 23:00 | 8 | |
4 | 9/25/15 23:00 | 7 | |event
7 | 9/26/15 23:00 | 4 | |
8 | 9/26/15 23:00 | 2 | |referral
11 | 9/26/15 23:00 | 336 | 7/1/17 |online
142 | 4/27/16 23:00 | 336 | |
336 | 7/4/16 23:00 | 11 | 9/20/18 |referral
End Goal:
id | created_at | on_boarded_date | source |
2 | 9/24/15 23:00 | | referral |
4 | 9/25/15 23:00 | | event |
11 | 9/26/15 23:00 | 7/1/17 | online |
Ideally, we would also have i number of extra columns at the end to show each connected_lead_id that is attached to the base id.
Thanks for the help!
A: demo:db<>fiddle
Main idea - sketch:
* Looping through the ordered set: get all ids that haven't been seen before in any connected_lead_id (cli). These are your starting points for the recursion.
  The problem is your number 142, which hasn't been seen before but is in the same group as 11 because of its cli. So it would be better to get the clis of the unseen ids. With these values it's much simpler to calculate the ids of the groups later in the recursion part. Because of the loop, a function/stored procedure is necessary.
* The recursion part: the first step is to get the ids of the starting clis, calculating the first referring id by using the created_at timestamp. After that a simple tree recursion over the clis can be done.
1. The function:
CREATE OR REPLACE FUNCTION filter_groups() RETURNS int[] AS $$
DECLARE
_seen_values int[];
_new_values int[];
_temprow record;
BEGIN
FOR _temprow IN
-- 1:
SELECT array_agg(id ORDER BY created_at) as ids, connected_lead_id FROM groups GROUP BY connected_lead_id ORDER BY MIN(created_at)
LOOP
-- 2:
IF array_length(_seen_values, 1) IS NULL
OR (_temprow.ids || _temprow.connected_lead_id) && _seen_values = FALSE THEN
_new_values := _new_values || _temprow.connected_lead_id;
END IF;
_seen_values := _seen_values || _temprow.ids;
_seen_values := _seen_values || _temprow.connected_lead_id;
END LOOP;
RETURN _new_values;
END;
$$ LANGUAGE plpgsql;
* Grouping all ids that refer to the same cli.
* Loop through the id arrays. If no element of the array was seen before, add the referred cli to the output variable (_new_values). In both cases add the ids and the cli to the variable which stores all yet-seen ids (_seen_values).
* Give out the clis.
The result so far is {8, 7, 336} (which is equivalent to the ids {2,4,11,142}!)
2. The recursion:
-- 1:
WITH RECURSIVE start_points AS (
SELECT unnest(filter_groups()) as ids
),
filtered_groups AS (
-- 3:
SELECT DISTINCT
1 as depth, -- 3
first_value(id) OVER w as id, -- 4
ARRAY[(MIN(id) OVER w)] as visited, -- 5
MIN(created_at) OVER w as created_at,
connected_lead_id,
MIN(on_boarded_date) OVER w as on_boarded_date, -- 6
first_value(source) OVER w as source
FROM groups
WHERE connected_lead_id IN (SELECT ids FROM start_points)
-- 2:
WINDOW w AS (PARTITION BY connected_lead_id ORDER BY created_at)
UNION
SELECT
fg.depth + 1,
fg.id,
array_append(fg.visited, g.id), -- 8
LEAST(fg.created_at, g.created_at),
g.connected_lead_id,
LEAST(fg.on_boarded_date, g.on_boarded_date), -- 9
COALESCE(fg.source, g.source) -- 10
FROM groups g
JOIN filtered_groups fg
-- 7
ON fg.connected_lead_id = g.id AND NOT (g.id = ANY(visited))
)
SELECT DISTINCT ON (id) -- 11
id, created_at,on_boarded_date, source
FROM filtered_groups
ORDER BY id, depth DESC;
1. The WITH part gives out the results from the function. unnest() expands the id array into one row per id.
2. Creating a window: the window function groups all values by their clis and orders the window by the created_at timestamp. In your example all values are in their own window except 11 and 142, which are grouped.
3. This is a helper variable to get the latest rows later on.
4. first_value() gives the first value of the ordered window frame. Assuming 142 had a smaller created_at timestamp, the result would have been 142. But it's 11 nevertheless.
5. A variable is needed to record which ids have been visited so far. Without this information an infinite loop would be created: 2-8-2-8-2-8-2-8-...
6. The minimum date of the window is taken (same thing here: if 142 had a smaller date than 11, that would be the result).
Now the starting query of the recursion is calculated. The following describes the recursion part:
7. Joining the table (the original function results) against the previous recursion result. The second condition stops the infinite loop I mentioned above.
8. Appending the currently visited id to the visited variable.
9. If the current on_boarded_date is earlier, it is taken.
10. COALESCE gives the first NOT NULL value, so the first NOT NULL source is saved throughout the whole recursion.
After the recursion, which gives a result containing all recursion steps, we want to filter out only the deepest visit of every starting id.
11. DISTINCT ON (id) gives out the row with the first occurrence of an id. To get the last one, the whole set is ordered descendingly by the depth variable.
A: Ok the best I can come up with at the moment is to first build maximal groups of related IDs, and then join back to your table of leads to get the rest of the data (See this SQL Fiddle for the setup, full queries and results).
To get the maximal groups you can use a recursive common table expression to first grow the groups, followed by a query to filter the CTE results down to just the maximal groups:
with recursive cte(grp) as (
select case when l.connected_lead_id is null then array[l.id]
else array[l.id, l.connected_lead_id]
end from leads l
union all
select grp || l.id
from leads l
join cte
on l.connected_lead_id = any(grp)
and not l.id = any(grp)
)
select * from cte c1
The CTE above outputs several similar groups as well as intermediary groups. The query predicate below prunes out the non maximal groups, and limits results to just one permutation of each possible group:
where not exists (select 1 from cte c2
where c1.grp && c2.grp
and ((not c1.grp @> c2.grp)
or (c2.grp < c1.grp
and c1.grp @> c2.grp
and c1.grp <@ c2.grp)));
Results:
| grp |
|------------|
| 2,8 |
| 4,7 |
| 14 |
| 11,336,142 |
| 12,13 |
Next join the final query above back to your leads table and use window functions to get the remaining column values, along with the distinct operator to prune it down to the final result set:
with recursive cte(grp) as (
...
)
select distinct
first_value(l.id) over (partition by grp order by l.created_at) id
, first_value(l.created_at) over (partition by grp order by l.created_at) create_at
, first_value(l.on_boarded_date) over (partition by grp order by l.created_at) on_boarded_date
, first_value(l.source) over (partition by grp
order by case when l.source is null then 2 else 1 end
, l.created_at) source
, grp CONNECTED_IDS
from cte c1
join leads l
on l.id = any(grp)
where not exists (select 1 from cte c2
where c1.grp && c2.grp
and ((not c1.grp @> c2.grp)
or (c2.grp < c1.grp
and c1.grp @> c2.grp
and c1.grp <@ c2.grp)));
Results:
| id | create_at | on_boarded_date | source | connected_ids |
|----|----------------------|-----------------|----------|---------------|
| 2 | 2015-09-24T23:00:00Z | (null) | referral | 2,8 |
| 4 | 2015-09-25T23:00:00Z | (null) | event | 4,7 |
| 11 | 2015-09-26T23:00:00Z | 2017-07-01 | online | 11,336,142 |
| 12 | 2015-09-26T23:00:00Z | 2017-07-01 | event | 12,13 |
| 14 | 2015-09-26T23:00:00Z | (null) | (null) | 14 |
| |
doc_23529696
|
A: The Visualizer class can not be used for this. Use the AudioRecord class instead. I learned from this project:
https://github.com/sommukhopadhyay/FFTBasedSpectrumAnalyzer
The project has missing resources, so it can not be run. But it is still possible to learn from the code. Read a buffer from the AudioRecord and then do a FFT-transformation of that buffer with the FFT-classes available in the git-project I mentioned. Then you will get the frequencies and you can visualize them.
| |
doc_23529697
|
Messages   | Follow
-----------|------------------------
- id       | - id
- message  | - mem1 (logged-in user)
- userid   | - mem2 (followed user)
- created  |
$display = "";
$sql = mysql_query("
    SELECT * FROM messages AS me
    JOIN follow AS fl
    ON me.userid = fl.mem2
    WHERE fl.mem1 = $id /* logged-in user */
    ORDER BY me.created
    DESC LIMIT 10
") or die(mysql_error());
while ($row = mysql_fetch_array($sql)) {
    $msgid   = $row["id"];
    $message = $row["message"];
    $userid  = $row["userid"];
    $created = $row["created"];
    $display .= "$userid : $message<br />$created";
}
There are no duplicates in the database, only in the retrieved results. Thanks for the input!
Edited: Display Code
A: If you are sure that there are no duplicates and the problem is in the query (you can check that by executing it from your database's interface), you can try two things:
1. Use the follow table as the leading one:
SELECT messages.*
FROM follow
JOIN messages ON follow.mem2=messages.userid
WHERE follow.mem1=$id
ORDER BY messages.created DESC
LIMIT 0,10;
2. Use a subquery:
SELECT *
FROM messages
WHERE userid IN(
SELECT DISTINCT mem2
FROM follow
WHERE mem1=$id
)
ORDER BY created DESC
LIMIT 0,10;
A: There's really only a few things that could cause the records to duplicate - try breaking down the query into basic components to see if there are more than one record:
SELECT * FROM follow WHERE mem1 = [id];
SELECT * FROM messages WHERE userid = [mem2 from previous result];
If either of the previous statements return more than one record, than the problem lies there. Other than that, I'd look at the PHP code to see if you're doing something there.
As for the query itself, I have a few recommendations:
* Place the table with the filter first - the sooner you can narrow the results, the better.
* Specify a field list instead of using '*' - this will be a tiny bit more efficient, and clarify what you're after. Also, it will give 'DISTINCT' a fighting chance to work...
Here's an example:
SELECT DISTINCT me.id, me.message, me.userid, me.created
FROM follow AS fl
INNER JOIN messages AS me ON me.userid = fl.mem2
WHERE fl.mem1 = :logged_in_user
ORDER BY me.created
DESC LIMIT 10
A: You're getting "double" results, most likely because the query returns something different than you expect.
If I understand your table-structure correctly; you have a one-to-many relation from messages to followers.
In your query, however, you fetch combinations of messages and followers. Each line will consist of a unique combination of message<>follower.
In short; when a single message has two followers, you'll get two rows in the result with the same message; but a different follower entry.
If you want to show each message once and then list all followers per message, you can use group-by functions (e.g. group_concat) with a GROUP BY on the message entries. The other possibility is to fetch the followers in a separate query once you've retrieved the message row, and then print the results from that query as the followers for that message.
If you're simply trying to get the number of followers, you can group by the UID of your message table and add a count on the UID or user ID of the follower table. (Do note that with group-by, select * shouldn't be used, but separate columns can be.)
| |
doc_23529698
|
#include <limits> "limits" file not found
for example.
I have been trying different build settings, e.g. C++ Standard Library with no luck.
Renaming the implementation files from .cpp to .mm did not work either. Is there a workaround to solve this issue?
A: Try using #include <limits.h> instead of #include <limits>
| |
doc_23529699
|
I am trying to figure out how to save this per object:
Here is an example
Object
attributes:
Name:
Fingerprint ("01010101010101010101010110") etc...
This is used to try to match with a master print
Any suggestions?
A: You'd have to convert that to/from something Core Data understands, and save the converted value. There are a couple of possibilities, both of which involve getting the actual bits via CFBitVectorGetBits. Once you have that, you can
* Save them in an NSData using something like +dataWithBytes:length:, and put that in a binary-type attribute on a managed object. Or...
* Depending on the number of bytes you're using, save them in an NSNumber using something like +numberWithLong: (or whatever is long enough for the number of bits). Then put that in one of Core Data's integer types - again, choosing whatever size fits your bits.
You can make the conversion either by using custom accessor methods on your NSManagedObject subclass, or by using the transformable Core Data attribute type and a value transformer class. For the latter you'd subclass NSValueTransformer and implement your conversions there (Apple provides a couple of examples of this).
Depending on what you're actually doing, you might want to consider using NSIndexSet instead of CFBitVectorRef. If nothing else, it conforms to NSCoding-- which means you can use a transformable attribute but rely on Core Data's default value transformer instead of writing your own.
You might also find it a lot simpler to just use one of the integer types and rely on bitwise operators to determine if a bit is set. Then you don't need to do anything special with Core Data, you just choose the appropriately-sized integer type.
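That bitwise route can be sketched in plain C (the 64-flag limit and the helper names here are my own illustration, not from Core Data):

```c
#include <stdbool.h>
#include <stdint.h>

// Store up to 64 boolean flags in a single integer value
// (e.g. a Core Data 64-bit integer attribute) instead of a CFBitVectorRef.
static inline uint64_t set_bit(uint64_t bits, unsigned i) {
    return bits | (UINT64_C(1) << i);
}

static inline uint64_t clear_bit(uint64_t bits, unsigned i) {
    return bits & ~(UINT64_C(1) << i);
}

static inline bool test_bit(uint64_t bits, unsigned i) {
    return (bits >> i) & 1u;
}
```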
A: Why are you not just storing NSData? It is way, way easier to store binary data inside NSData than inside CFBitvectorRef.
If you're trying to store a hash / fingerprint of something, I assume you're creating a SHA-256 hash with CC_SHA256_Init, _Update and _Final. Those will give you a so-called digest which is a fingerprint of the data you pass into the CC_SHA256_Update.
// Create the context:
CC_SHA256_CTX shaContext;
CC_SHA256_Init(&shaContext);
// For each value:
CC_SHA256_Update(&shaContext, &v, sizeof(v));
// Get the fingerprint / digest:
unsigned char digest[CC_SHA256_DIGEST_LENGTH];
CC_SHA256_Final(digest, &shaContext);
NSData *fingerprint = [NSData dataWithBytes:digest length:sizeof(digest)];
Then you can store that fingerprint into a Core Data attribute that's Binary Data.
Depending on the type of v, you might have to change the call to CC_SHA256_Update(). If you do this on an NSObject, you need to call it for each instance variable that you're interested in (that should be part of the fingerprint), e.g. if you have
@property (nonatomic) int32_t count;
@property (nonatomic, copy) NSString *name;
you'd do
int32_t v = self.count
CC_SHA256_Update(&shaContext, &v, sizeof(v));
NSData *data = [self.name dataUsingEncoding:NSUTF8StringEncoding];
CC_SHA256_Update(&shaContext, [data bytes], (CC_LONG)[data length]);
|