doc_23538800
I have created a form, "Calculator", that gets two values from the user along with a mathematical operator (addition, subtraction, or multiplication):
example:
My form:
value_1 = 8
Value_2 = 6
Math_op = +
Now I define a calculator function in views which gets the values from the form, does some math, and generates a result like this:
Answer = 14
Now I want to pass this result to a new webpage to show the output of the calculator.
My confusion is how to define a URL for this case, and how to define a method which takes the value from the template and renders it to another form template.
The code I have used is given below: UPDATED CODE
Forms.py
from django import forms
class NameForm(forms.Form):
    first_value = forms.CharField(label='First Value', max_length=100)
    Second_value = forms.CharField(label='Second Value', max_length=100)
    operator = forms.CharField(label='Operator', max_length=100)
Views.py
from django.http import HttpResponseRedirect
from django.shortcuts import render
from django import forms
from .forms import NameForm
def get_name(request):
    if request.method == 'POST':
        form = NameForm(request.POST)
        if form.is_valid():
            value_1 = form.cleaned_data['first_value']
            value_2 = form.cleaned_data['Second_value']
            ope = form.cleaned_data['operator']
            if ope == "+":
                ans = int(value_1) + int(value_2)
            elif ope == "-":
                ans = int(value_1) - int(value_2)
            elif ope == "*":
                ans = int(value_1) * int(value_2)
            else:
                print "Values are not correct"
            return HttpResponseRedirect('/ans_page/')
    # if a GET (or any other method) we'll create a blank form
    else:
        form = NameForm()
    return render(request, 'PEP/name.html', {'form': form})

def ans_page(request):
    ans = request.session['ans']
    return render(request, 'PEP/ans.html', {'ans': ans})
Urls:
from django.conf.urls import url
from django.contrib import admin
from . import views
urlpatterns = [
    url(r'^$', views.get_name, name='get_name'),
    url(r'^/ans_page$', views.ans_page, name='ans_page'),
]
templates:
<form action="/ans_page/" method="post">
{% csrf_token %}
{{ form }}
<input type="submit" value="Submit" />
</form>
Template 2: ans.html
{% if ans %}
{{ ans }}
{% endif %}
Error:
KeyError at /ans_page/
'ans'
Request Method: POST
Request URL: http://127.0.0.1:8000/ans_page/
Django Version: 1.10
Exception Type: KeyError
Exception Value:
'ans'
Exception Location: /home/jai/Desktop/Django_form/local/lib/python2.7/site-packages/django/contrib/sessions/backends/base.py in __getitem__, line 57
Python Executable: /home/jai/Desktop/Django_form/bin/python
Python Version: 2.7.12
Python Path:
['/home/jai/Desktop/Django_form/src',
'/home/jai/Desktop/Django_form/lib/python2.7',
'/home/jai/Desktop/Django_form/lib/python2.7/plat-x86_64-linux-gnu',
'/home/jai/Desktop/Django_form/lib/python2.7/lib-tk',
'/home/jai/Desktop/Django_form/lib/python2.7/lib-old',
'/home/jai/Desktop/Django_form/lib/python2.7/lib-dynload',
'/usr/lib/python2.7',
'/usr/lib/python2.7/plat-x86_64-linux-gnu',
'/usr/lib/python2.7/lib-tk',
'/home/jai/Desktop/Django_form/local/lib/python2.7/site-packages',
'/home/jai/Desktop/Django_form/lib/python2.7/site-packages']
Server time: Tue, 25 Jul 2017 08:44:15 +0000
A: You have mostly done all the parts; you just need another view function and a URL to display the result.
So, in urls.py:
urlpatterns = [
    url(r'^$', views.get_name, name='get_name'),
    url(r'^ans_page/$', views.ans_page, name='ans_page'),  # add this
]
create a view
def ans_page(request):
    ans = request.session['ans']
    return render(request, 'PEP/ans.html', {'ans': ans})
and in ans.html
{% if ans %}
{{ ans }}
{% endif %}
and in the get_name view, store the result in the session before redirecting:
request.session['ans'] = ans
return HttpResponseRedirect('/ans_page/')
and change the form to post back to the same view:
<form action="" method="post">
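For clarity, the arithmetic itself (independent of Django) behaves like this; a minimal sketch assuming the inputs arrive as strings, as form.cleaned_data returns them for CharFields:

```python
def compute(value_1, value_2, ope):
    """Mirror of the arithmetic in get_name: string inputs, integer math."""
    if ope == "+":
        return int(value_1) + int(value_2)
    elif ope == "-":
        return int(value_1) - int(value_2)
    elif ope == "*":
        return int(value_1) * int(value_2)
    # unknown operator: raise instead of silently redirecting
    raise ValueError("Values are not correct")
```

In get_name you would then set request.session['ans'] to this result before returning the redirect, which is exactly the line the question was missing.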
doc_23538801
The data received is like this:
[
    {
        id: 1,
        nickname: "nickname",
        Users: [{
            name: "username"
        }]
    }
]
The datatable requires two props: columns and data.
I have typed the data like this:
type DataRow = {
    id: number;
    nickname: string;
    Users: string;
};
and column prop:
const columns: TableColumn<DataRow>[] = [
    {
        name: "ID",
        selector: (row) => row.id,
    },
    {
        name: "nickname",
        selector: (row) => row.nickname,
    },
    {
        name: "name of the user",
        selector: (row) => row.Users.name, // ERROR: Property 'name' does not exist on type 'string'
    },
];
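For reference, a sketch of a row type that matches the nested shape of the data (the `TableColumn` type below is a simplified stand-in for the library's type, not the real import; the error goes away once `Users` is typed as an array of objects):

```typescript
// Users is an array of objects in the received data, so type it that way.
type User = { name: string };

type DataRow = {
  id: number;
  nickname: string;
  Users: User[];
};

// Simplified stand-in for react-data-table-component's TableColumn type.
type TableColumn<T> = {
  name: string;
  selector: (row: T) => string | number;
};

const columns: TableColumn<DataRow>[] = [
  { name: "ID", selector: (row) => row.id },
  { name: "nickname", selector: (row) => row.nickname },
  // Index into the array before reading .name.
  { name: "name of the user", selector: (row) => row.Users[0]?.name ?? "" },
];
```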
doc_23538802
/**
 * Negate a rational number r
 *
 * @return a new rational number that is the negation of this number, -r
 */
public Rational negate()
{
    // CHANGE THE RETURN TO SOMETHING APPROPRIATE
    return new Rational((-1 * numerator), denominator);
}

/**
 * Invert a rational number r
 *
 * @return a new rational number that is 1/r.
 */
public Rational invert()
{
    // CHANGE THE RETURN TO SOMETHING APPROPRIATE
    if (numerator == 0) {
        throw new ZeroDenominatorException();
    }
    return new Rational(denominator, numerator);
}
A: Assuming your ZeroDenominatorException is a checked exception, you need to add throws ZeroDenominatorException to your invert method signature (and to any other method that throws a checked exception):
public Rational invert() throws ZeroDenominatorException
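Putting it together, a minimal sketch of the surrounding class (the fields, constructor, and getters here are assumptions filled in for illustration; ZeroDenominatorException is assumed to be a checked exception as in the answer):

```java
// Assumed checked exception, as described in the answer above.
class ZeroDenominatorException extends Exception {
}

class Rational {
    private final int numerator;
    private final int denominator;

    Rational(int numerator, int denominator) {
        this.numerator = numerator;
        this.denominator = denominator;
    }

    int getNumerator() { return numerator; }
    int getDenominator() { return denominator; }

    public Rational negate() {
        // Negation cannot produce a zero denominator, so no throws clause is needed.
        return new Rational(-1 * numerator, denominator);
    }

    public Rational invert() throws ZeroDenominatorException {
        // 1/r swaps numerator and denominator; a zero numerator would
        // become a zero denominator, hence the checked exception.
        if (numerator == 0) {
            throw new ZeroDenominatorException();
        }
        return new Rational(denominator, numerator);
    }
}
```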
doc_23538803
REM returns current date and time as var datestamp in YYYYMMDD_hhmm format
REM current date as variable date in YYYYMMDD format
REM current time as variable time in hhmm format
for /f "tokens=2,3,4 delims=/ " %%a in ('DATE /T') do set date=%%c%%a%%b
for /f "tokens=1,2 delims=: " %%a in ('TIME /T') do set hr=%%a
for /f "tokens=1,2 delims=: " %%a in ('TIME /T') do set mi=%%b
for /f "tokens=1,2,3 delims=: " %%a in ('TIME /T') do set ampm=%%c
set time=%hr%%mi%
IF %ampm% EQU PM (
    REM in case of 12 PM we don't need to perform any actions
    IF %hr% EQU 12 GOTO end
    REM in case of another hour 1-11 PM we need to reformat it to 24-hour format by adding 12 hours
    set /A hour=1%hr%+12-100
    set time=%hour%%mi%
) ELSE (
    REM only in case of 12 AM we need to reformat it to 24-hour format by subtracting 12 hours (00 hours)
    IF %hr% EQU 12 (
        set /A temp=1%hr%-12-100
        set hour=0%temp%
        set time=%hour%%mi%
    )
)
GOTO end
:end
set datestamp=%date%_%time%
This is just the script which generates the date-time in 24-hour format (maybe there is a better way to do that). However, it does not work: I had to run it three times before it showed complete results:
D:\utils\scripts>getDateStamp.cmd & echo %datestamp%
%datestamp%
D:\utils\scripts>getDateStamp.cmd & echo %datestamp%
20150609_55
D:\utils\scripts>getDateStamp.cmd & echo %datestamp%
20150609_1855
What might be the problem here?
Thanks!
doc_23538804
The error I encountered was java.io.IOException: Could not rename file.
I figured out from here that it was because the driver was run by my user while the executor processes were run by root, and those root-owned processes did not have permission to write files to the user folder.
My temporary solution was to save into the C:\ folder, as suggested here.
However, I'm wondering if there's a way to configure pyspark to run the executors as the user as well, so that I can write to the user folder.
doc_23538805
While troubleshooting the issue, I created a separate solution using the instructions found in this article. The new solution works great, so I pointed the client of the new solution to my original WebApi project and it also works great. This has led me to believe there is something different between Angular clients, but I haven't found the significant difference.
Could someone review the code below and point out the difference between the client that works and the client that fails?
This is the Angular code within the application that works:
App configuration
app.config(['$httpProvider', function ($httpProvider) {
    $httpProvider.defaults.headers.post['Content-Type'] = 'application/x-www-form-urlencoded;charset=utf-8';

    // Override $http service's default transformRequest
    $httpProvider.defaults.transformRequest = [function (data) {
        // Converts an object to x-www-form-urlencoded serialization.
    }];
}]);
In my factory
var deferred = $q.defer();
$http({
    method: 'POST',
    url: logoutUrl,
    headers: getHeaders(),
}).then(function (data, status, headers, cfg) {
    accessToken = null;
    deferred.resolve({ status: 'success' });
}, function (err, status) {
    console.log(err);
    deferred.reject(status);
});
return deferred.promise;

function getHeaders() {
    if (accessToken) {
        return { "Authorization": "Bearer " + accessToken };
    }
}
Here's the raw HTTP request that goes through. The OPTIONS preflight check passes, so the POST goes through.
OPTIONS http://localhost:56508/api/Account/Logout HTTP/1.1
Host: localhost:56508
Connection: keep-alive
Access-Control-Request-Method: POST
Origin: http://localhost:42458
User-Agent: Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/61.0.3163.100 Safari/537.36
Access-Control-Request-Headers: authorization
Accept: */*
DNT: 1
Referer: http://localhost:42458/
Accept-Encoding: gzip, deflate, br
Accept-Language: en-US,en;q=0.8
POST http://localhost:56508/api/Account/Logout HTTP/1.1
Host: localhost:56508
Connection: keep-alive
Content-Length: 0
Accept: application/json, text/plain, */*
Origin: http://localhost:42458
Authorization: Bearer fNKcX_jCNhQuIhorMw-QPyezReQJd5ehoVhqegUmSQZxdbHp-T6L4RPehSl6ihSpNbYH58uhELwNHV0bEAyRj0G7bmFv4O5GIqBDyiOIB-YBfez5zHRNHbMe_iTdtBwdgOdbLh5PNSNIOi4ffU6H-py4oko0rMkLSP_hFSl2TGcbJwuJkrmHmahmLyeyQ-OO8KI4Kc-WoTaIVw3dJ5LDxbdSFF9aWoaCVGDfbP1tcp-aTMmydqZLnkX5DAGQPDawsiuXuWuwvUDPz6f4K5F78r4D8ldl8cCSO0uniSv3mIZYbgDuzwfjIrV_lN5pFYHg38f-7RcwvE-TARr0wJ1dNcM9XnNTJRXxsJRpJP-LAG369smEnRk2Z2D7Gds0KCAK7zKcwRyJh8u7_YCLcMVJR97mhJk-n4zEfnD3yxa0VsiNHkHiAAjtXF78ASWBW2LXXYlvn0Mu0tcltZ7Qur9-TslHjUhD1BmNkQzwTkQte3kbMk7HsCDZCWaxr-j_8ApD
User-Agent: Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/61.0.3163.100 Safari/537.36
DNT: 1
Referer: http://localhost:42458/
Accept-Encoding: gzip, deflate, br
Accept-Language: en-US,en;q=0.8
This is the Angular code within the application that fails:
In my factory
var deferred = $q.defer();
$http({
    method: 'POST',
    url: logoutUrl,
    headers: {
        'Authorization': 'Bearer ' + accessToken,
        'Content-Type': 'application/x-www-form-urlencoded;charset=utf-8'
    },
}).then(function (data, status, headers, cfg) {
    accessToken = null;
    deferred.resolve({ status: 'success' });
}, function (err, status) {
    console.log(err);
    deferred.reject(status);
});
return deferred.promise;
Here's the raw HTTP request that goes through. The OPTIONS preflight check fails here, so the POST doesn't go through.
OPTIONS http://localhost:56508/api/Account/Logout HTTP/1.1
Host: localhost:56508
Connection: keep-alive
Access-Control-Request-Method: POST
Origin: http://localhost:45743
User-Agent: Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/61.0.3163.100 Safari/537.36
Access-Control-Request-Headers: authorization
Accept: */*
DNT: 1
Referer: http://localhost:45743/
Accept-Encoding: gzip, deflate, br
Accept-Language: en-US,en;q=0.8
A: I learned that the problem was not with my $http request or with CORS. The problem occurred because I called an asynchronous function as though it was synchronous, and the request was canceled before it had a chance to finish.
Here's an example of the code that was poorly written. The Logout function encapsulates the $http request.
authService.logout();
$window.location.href = location.origin;
Here's an example of the code as it should have been written. This code works as expected.
authService.logout().then(function success(data) {
    $window.location.href = location.origin;
}, function failure(data) {
    $window.location.href = location.origin;
});
Not waiting for the promise to resolve caused $http to execute the error callback, but the error object was unavailable. Therefore, I thought the preflight had failed, when in fact the POST was prematurely canceled by the client.
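The fix above generalizes beyond Angular; here is a plain-promise sketch of the same mistake and correction (function names are illustrative, and setTimeout stands in for the network round trip):

```javascript
// Stands in for the $http POST; resolves once the "server" responds.
function logout() {
  return new Promise(function (resolve) {
    setTimeout(function () { resolve("logged out"); }, 10);
  });
}

// Buggy version: navigation is not sequenced after logout(), so the page
// can unload while the request is still in flight, aborting it:
//   logout();
//   window.location.href = location.origin;

// Fixed version: navigate only once the promise settles, success or failure.
function logoutThenRedirect(navigate) {
  return logout().then(
    function (result) { navigate(); return result; },
    function () { navigate(); }
  );
}
```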
doc_23538806
In C++, there is a function
int doubleToInt(double d)
{
    return (int)(d >= 0.0 ? (d + 0.1) : (d - 0.1));
}
The same function I migrate to C# as (Note that, in C++, sizeof(int) is 2 bytes. So I am using short as return type)
private static short doubleToInt(double d)
{
    return (short)(d >= 0.0 ? (d + 0.1) : (d - 0.1));
}
After this conversion, I am doing some operation and generating a binary file. The binary file generated in C# is different when compared to that of C++. Even if I compare the values while debugging (before writing to file), I am getting different answers.
Now I need to explain my client why it is different.
Can someone give me inputs on why it is different?
What I know is, the temporaries generated in C++ while doing floating point arithmetic operations are of higher precision.
Are there any other points? So that I can defend this by saying, "The way C++ handles floating point is different from C#."
Or can I modify the C# program to match the C++ output? Is that possible? Also, I can't modify the C++ legacy code; I need to get the same results in C#. Is that possible?
A: The facts that:
* this function returns different output in C++ versus C# given normal program input, and
* this function returns identical output in C++ versus C# given controlled identical input
suggest:
* the normal program inputs to this function are different in C++ versus C#.
Regarding the latter, in a comment the OP states “I also created a sample test application in C++ and C# and hard coded the input. By hard coding the input to doubleToInt function, I am getting same results.” This suggests that, given identical inputs, the C++ and C# versions of the function return identical outputs. We would deduce from this that the cause of different outputs is different inputs.
The OP also states ”While debugging, to compare the results, if I see the output of C++ and C#, it is different for the same set of values.“ However, this is inconclusive, because debuggers and print statements used for debugging often do not print the complete, exact value of floating-point objects. Quite often, they round to six significant digits. For example, a simple std::cout << x displays both 10000.875 and 10000.9375 as “10000.9”, but they are different numbers and would yield different outputs in doubleToInt.
In conclusion, the problem may be that earlier work in the program, before doubleToInt is called, experiences floating-point rounding or other errors and passes different values to doubleToInt in the C++ and C# versions. To test for this, print the exact inputs to doubleToInt and see if they differ in the two versions.
Printing the inputs exactly might be done with:
* the %a format, if your implementation supports it. (This is a C feature for printing floating-point values in hexadecimal floating-point notation. Some C++ libraries support it when printf is used.)
* setting the precision very high and printing, as with std::cout.precision(100). Some C++ implementations may still not print the exact value (which is a quality issue), but they should print enough digits to distinguish the exact value from neighboring double values.
* printing the bytes of the representation of the value (by converting a pointer to the floating-point object to a pointer to unsigned char and printing the individual char objects).
Based on the code presented, the problem is unlikely to be floating-point issues in doubleToInt. The language definitions permit some slack in floating-point evaluation, so it is theoretically possible that d+.1 is evaluated with excess precision, instead of normal double precision, and then converted to int or short. However, this would result in different results only in very rare cases, where d+.1 evaluated in double precision rounds up to an integer but d+.1 evaluated in excess precision remains just below the integer. This requires that about 38 bits (53 bits in the double significand minus 16 bits in the integer portion plus one bit for rounding) have specific values, so we would expect it to occur only about 1 in 275 billion times by chance (assuming a uniform distribution is a suitable model).
In fact, the adding of .1 suggests to me that somebody was trying to correct for floating-point errors in a result they expected to be an integer. If somebody had a “natural” value they were trying to convert to an integer, the usual way to do it would be to round to the nearest value (as with std::round) or, sometimes, to truncate. Adding .1 suggests they were trying to calculate something they expected to be an integer but were getting results like 3.999 or 4.001, due to floating-point errors, so they “corrected” it by adding .1 and truncating. Thus, I suspect floating-point errors exist earlier in the program. Perhaps they are exacerbated in C#.
A: You're trying to round numbers here, using the default rounding. C++ didn't mandate the direction of rounding, and it's probably different from C# given the different results.
A: Your functions would technically produce different results should the double values exceed the range of short/int on a given platform.
Both of the functions have possibilities to lose data as you're truncating (losing precision) from a double to an int or short. Assuming you're targeting a MS environment sizeof(double) == 8, sizeof(int) == 4, and sizeof(short) == 2; this is true for both C++ and C# in a Windows environment (endian-ness and bit-ness (32/64) are irrelevant in these sizes in an MS build).
You also need to give more information as to what's happening AFTER the functions are called to generate the binary output. Technically speaking 'binary' file output is only of unsigned character output (i.e. sizeof() == 1); meaning how you 'write' the output of your functions to the file can also severely affect your files in both C++ and C# with regards to outputting numeric types (double/int/short).
Are you using an fopen call in C++ with a specific formatted output to the file, or are you using std::fstream (or something else)? How are you writing the data to the file in C# as well? Are you doing something like file.Write(doubleToInt(d)) (assuming you're using a System.IO.StreamWriter) or are you using a System.IO.FileStream and converting the doubleToInt output to a byte[] then calling file.Write(dtiByteArr)?
All of that being said my best guess based on the information given would be that your C# function is returning a short instead of an int causing issues when the values passed in to the function are greater than short.MaxValue.
A: I think your problem is related to how the data (short) is written/read to/from the binary file. You need to consider Big-Endian/Small-Endian, so the data file is consistent no matter what platform the code is in.
Check the System.BitConverter class. The BitConverter.IsLittleEndian field can help with the conversion. The code should be something similar to the following:
short value = 12348;
byte[] bytes = BitConverter.GetBytes(value);
Console.WriteLine(BitConverter.ToString(bytes));
if (BitConverter.IsLittleEndian)
Array.Reverse(bytes);
Console.WriteLine(BitConverter.ToString(bytes)); // write to your file
A: I haven't completely looked into it, so I may be wrong, but it could have something to do with what is mentioned here,
in a thread about the difference between Float, Decimal and Double.
As they say: double, which you are using, is in C# a floating binary point type (10001.10010110011).
Maybe double in C++ is more like decimal in C#, a floating decimal point type (12345.65789).
And if you compare a floating binary point type with a floating decimal point type, they won't give the same result.
doc_23538807
$news = new News();
$news->title = 'hello world';
$news->user = $user_id;
$news->urlcc = DB::raw('crc32("'.$args['newsShortUrlInput'].'")');
$news->save();
$news->refresh();
Here the attribute $news->urlcc comes from user input after applying the MySQL function crc32().
Regarding SQL injection, the above code is not safe.
So, my question is how to bind the parameters in DB::raw() with a Laravel model, something like below:
$news->urlcc = DB::raw('crc32(:newsShortUrlInput)', ['newsShortUrlInput' => $args['newsShortUrlInput']]);
Thanks,
A: I found one solution; I'm not sure it is the right or perfect one.
In the News model class, override the attribute mutator, as below:
public function setUrlcrcAttribute($shortUrl)
{
    $this->attributes['urlcrc'] = $this->getConnection()->select('select crc32(?) as urlcrc', [$shortUrl])[0]->urlcrc;
}
In your service class, create a new model like below:
$news = new News();
$news->title = 'hello world';
$news->user = $user_id;
$news->urlcrc = $args['newsShortUrlInput']; // Laravel model will try to build the real attribute urlcrc
$news->save();
$news->refresh();
It works for me, but I'm not sure whether this is the perfect solution.
doc_23538808
var zip = new JSZip();
zip.file("Hello.txt", "Hello World\n");
/*create a folder*/
var img = zip.folder("images");
/*create a file in images folder*/
img.file("Hello1.txt", "Hello111 World\n");
/* generate the zip file */
var content = zip.generateAsync({type:"blob"});
This is the code I tried to upload the zip file but got no response.
var fd = new FormData();
fd.append('fileZip', content);
$.ajax({
    data: fd,
    url: '/plan-upload/1',
    type: 'POST',
    processData: false,
    contentType: false,
    success: function (response) {
        alert("success"); /*$('#text-us').modal('toggle');*/
    }
});
Now, can we upload this generated zip file to server? If yes, how? If no, why?
A: When generating the zip with
var content = zip.generateAsync({type:"blob"});
the content variable holds a Promise, not the Blob itself, so the backend doesn't receive it as a file. Wait for the promise to resolve, wrap the resulting Blob in a File, and send that:
zip.generateAsync({type: "blob"}).then(function (blob) {
    var file = new File([blob], "name.zip");
    var fd = new FormData();
    fd.append('fileZip', file);
    $.ajax({
        data: fd,
        url: '/plan-upload/1',
        type: 'POST',
        processData: false,
        contentType: false,
        success: function (response) {
            alert("success"); /*$('#text-us').modal('toggle');*/
        }
    });
});
doc_23538809
set_include_path(get_include_path().PATH_SEPARATOR."/path/to/program/root");
However, I can't get it to the right directory and the error messages aren't helpful enough, so I'm lost. My user area directory (shared host) looks like this:
/home/linweb09/b/example.com-1050560306/user # <-- this is my root
I was trying to re-organise my program so that all of the model files go in this directory:
{root}/program_name/library/
The web-root folder is this:
{root}/htdocs/
So I changed all of my includes to look like this:
set_include_path(get_include_path().PATH_SEPARATOR
."/home/linweb09/b/example.com-1050360506/user");
require_once "/program_name/library/some_module.php";
The index has to load this include to work, to authenticate the user:
/htdocs/admin/authenticate.php
# index.php
set_include_path(get_include_path().PATH_SEPARATOR
."/home/linweb09/b/example.com-1050360506/user");
require_once "/htdocs/admin/authenticate.php";
And I run into these errors:
[warn] mod_fcgid: stderr: PHP Fatal error:
require_once() [function.require]: Failed opening required '/htdocs/admin/authenticate.php'
(include_path='.:/usr/share/pear:/usr/share/php:/home/linweb09/b/example.com-1050360506/user')
in /home/linweb09/b/example.com-1050360506/user/htdocs/admin/index.php on line 3
[warn] mod_fcgid: stderr: PHP Warning:
require_once(/htdocs/admin/authenticate.php) [function.require-once]: failed to open stream:
No such file or directory in
/home/linweb09/b/example.com-1050360506/user/htdocs/admin/index.php
on line 3
But it is in the correct location and the error says no such file or directory, so I'm not sure how I'm supposed to configure this.
I also tried these and I got the same errors:
set_include_path(get_include_path().PATH_SEPARATOR."/user");
set_include_path(get_include_path().PATH_SEPARATOR."/");
A: Try removing the leading slash. /htdocs/admin/authenticate.php implies that you are working from root /. If you want it to search the include path, you need to use relative paths:
require_once "program_name/library/some_module.php";
require_once "htdocs/admin/authenticate.php";
Incidentally, you only need to set the include path once per script: you don't need the set_include_path() line before every include, just once at the top of the script.
doc_23538810
Are these concepts supported in Entity Framework? I could not find them in EF 5, but was hoping 6.x would sort this out.
Does anybody know if this is possible, and does anyone have a working example?
A: I'm not an Oracle guy, but...
Optimistic concurrency in EF is implemented via concurrency tokens.
A concurrency token is just a field which satisfies two conditions:
1) its value is server-generated (for example, rowversion in MS SQL, or field, populated from sequences in any RDBMS, which supports sequences);
2) it is mapped to a property, which is configured with IsConcurrencyToken method in fluent API (or corresponding attribute with data annonations).
The thing which makes a concurrency token different from a calculated field/property is that the EF LINQ provider generates an additional WHERE clause when updating database entries.
So, I can't see anything that prevents using optimistic concurrency tokens with Oracle, Firebird, MySQL, etc. It's not an MSSQL-only feature.
doc_23538811
select t2.Source, coalesce(t1."This Week",0) "This Week"
from sellers t2 left outer join
(select Source,min("Week") as Week, sum(Sales) "This Week"
from salesdata
where Week = date_trunc('week', now())::date - 1
group by Source, Week) t1
on t1.Source = t2.Source
Current Result:
Source This Week
Judith 18
Thedia 64
Alfonso 0
Michael 15
Jordan 0
Desired Result:
Source This Week YTD
Judith 18 100
Thedia 64 150
Alfonso 0 258
Michael 15 487
Jordan 0 78
A: Assuming that the field week is of type date in your table:
SELECT source, week, "This Week", "YTD"
FROM (
    SELECT source, week, coalesce(sum(sales), 0) AS "This Week"
    FROM salesdata
    WHERE week = date_trunc('week', now())::date - 1
    GROUP BY source, week) sw
JOIN (
    SELECT source, coalesce(sum(sales), 0) AS "YTD"
    FROM salesdata
    WHERE date_trunc('year', week) = date_trunc('year', now())
    GROUP BY source) sy USING (source);
Note that you do not need the sellers table, all information can come from the salesdata table.
doc_23538812
The numbers which can be added together are predefined in a table, to make things easier.
Current approach: shuffle the table using a small algorithm, add the first X values together; if they don't add up to 8, start over (including shuffling again) until the first X values add up to 8.
My code does work; there are just two problems: it takes a long time to process (obviously), and it can cause a stack overflow error if I don't add a cooldown.
The code can be dirty, it's not for live production. Also, I'm only an intermediate Lua developer at best...
function sleep(a) -- random sleep function I found
    local sec = tonumber(os.clock() + a);
    while (os.clock() < sec) do
    end
end

function shuffle(tbl) -- random shuffle function I found
    for i = #tbl, 2, -1 do
        math.randomseed(os.time())
        math.random(); math.random(); math.random(); math.random();
        local j = math.random(i)
        tbl[i], tbl[j] = tbl[j], tbl[i]
    end
    return tbl
end

local times = {
    0.5,
    1.0,
    1.5,
    2.0,
    2.5,
    3.0,
    3.5,
    4.0
}

local timeunits = {} -- refer to line 49, I did not want to do it like that...

function nnumbersto8(amount)
    local sum = 0
    local numbs = {}
    times = shuffle(times) -- reshuffle the set
    for i = 1, amount, 1 do -- add first x values together
        sum = sum + times[i]
        numbs[i] = times[i]
    end
    if sum ~= 8 then sleep(0.1) nnumbersto8(amount) return end -- if they are not 8, repeat process with cooldown to avoid stack overflow
    --return numbs -- This doesn't work for some reason, nothing gets returned outside the function
    timeunits = numbs
end

nnumbersto8(5) -- manually run it for now
print(unpack(timeunits))
There must be a simpler way, right?
Thanks in advance, any help is appreciated!
A: Here is a method that will work for large numbers of elements, and will pick a random solution with theoretically even likelihood for each.
function solution_node(value, count, remainder)
    local node = {}
    node.value = value
    node.count = count
    node.remainder = remainder
    return node
end

function choose_solutions(node1, node2)
    if node1 == nil then
        return node2
    elseif node2 == nil then
        return node1
    else
        -- Make a random choice of which solution to pick.
        if node1.count < math.random(node1.count + node2.count) then
            node2.count = node1.count + node2.count
            return node2
        else
            node1.count = node1.count + node2.count
            return node1
        end
    end
end

function decode_solution(node)
    if node == nil then
        return nil
    end
    answer = {}
    while node.value ~= nil do
        table.insert(answer, node.value)
        -- This causes the solution to be randomly shuffled.
        local i = math.random(#answer)
        answer[#answer], answer[i] = answer[i], answer[#answer]
        node = node.remainder
    end
    return answer
end

function random_sum(tbl, count, target)
    local choices = {}
    -- Normally arrays are not 0-based in Lua but this is very convenient.
    for j = 0, count do
        choices[j] = {}
    end
    -- Make sure that the empty set is there.
    choices[0][0.0] = solution_node(nil, 1, nil)
    for i = 1, #tbl do
        for j = count, 1, -1 do
            for this_sum, node in pairs(choices[j-1]) do
                local next_sum = this_sum + tbl[i]
                local next_node = solution_node(tbl[i], node.count, node)
                -- Try adding this value in to a solution.
                if next_sum <= target then
                    choices[j][next_sum] = choose_solutions(next_node, choices[j][next_sum])
                end
            end
        end
    end
    return decode_solution(choices[count][target])
end

local times = {
    0.2,
    0.3,
    0.5,
    1.0,
    1.2,
    1.3,
    1.5,
    2.0,
    2.5,
    3.0,
    3.5,
    4.0
}

math.randomseed(os.time())

local result = random_sum(times, 5, 8.0)
print("answer")
for k, v in pairs(result) do print(v) end
Sorry for my code. I haven't coded in Lua for a few years.
A: There are no solutions for amounts of 1 or 2, nor for amounts greater than 5, so the function only accepts 3, 4 and 5.
Here we make a shallow copy of the times table, then pick a random index from the copy and search for a solution, removing values we use as we go.
local times = {
    0.5,
    1.0,
    1.5,
    2.0,
    2.5,
    3.0,
    3.5,
    4.0
}
function nNumbersTo8(amount)
    if amount < 3 or amount > 5 then
        return {}
    end
    local sum = 0
    local numbers = {}
    local set = {table.unpack(times)}
    for i = 1, amount - 1, 1 do
        local index = math.random(#set)
        local value = set[index]
        if not (8 < (sum + value)) then
            sum = sum + value
            table.insert(numbers, value)
            table.remove(set, index)
        else
            break
        end
    end
    local remainder = 8 - sum
    for _, v in ipairs(set) do
        if v == remainder then
            sum = sum + v
            table.insert(numbers, v)
            break
        end
    end
    if #numbers == amount then
        return numbers
    else
        return nNumbersTo8(amount)
    end
end

for i = 1, 100 do
    print(table.unpack(nNumbersTo8(5)))
end
Example response:
1.5 0.5 3 2 1
3 0.5 1.5 1 2
2 3 1.5 0.5 1
3 2 1.5 1 0.5
0.5 1 2 3 1.5
A: This is the subset sum problem with an extra restriction on the number of elements you are allowed to choose.
The solution is to use Dynamic Programming similar to regular Subset Sum, but add an extra variable that indicates how many items you have used.
This should go something along the lines of:
Failing stop clauses:
DP[-1][x][n] = false, for all x,n>0 // out of elements
DP[i][-1][n] = false, for all i,n>0 // exceeded X items
DP[i][x][n] = false n < 0 // Passed the sum limit. This is an optimization only if all elements are non negative.
Successful stop clause:
DP[i][0][0] = true for all i >= 0
Recursive formula:
DP[i][x][n] = DP[i-1][x][n] OR DP[i-1][x-1][n-item[i]] // Watch for n<item[i] case here.
              ^ did not take the item
                               ^ used the item
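A sketch of that recurrence in iterative form, written in Python rather than Lua so the bookkeeping is explicit (the function and variable names are illustrative, not from the answer):

```python
def pick_subset(items, count, target):
    """Return a list of `count` values from `items` summing to `target`,
    or None if no such subset exists (each element used at most once)."""
    # state: (items_used, remaining_sum) -> one set of chosen values reaching it
    states = {(0, target): []}
    for item in items:
        # snapshot the states so each item extends a state at most once (0/1 choice)
        for (used, remaining), chosen in list(states.items()):
            if used >= count:
                continue
            nxt = (used + 1, round(remaining - item, 10))
            # prune sums that overshoot; keep the first witness for each state
            if nxt[1] >= 0 and nxt not in states:
                states[nxt] = chosen + [item]
    return states.get((count, 0))
```

Unlike the randomized answers above, this always terminates in one pass and reports impossibility (None) instead of looping forever.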
doc_23538813
Below is my Component class:
@Component
class Link {
    @Autowired
    private RandomClass rcobj;

    public void getFiveInstancesOfRandomClass() {
        // here I want to create five new instances of RandomClass,
        // but auto-wiring gives me only one
    }
}
Config class:
@Configuration
class ApplicationConfig {
    @Bean
    public Link link() { return new Link(); }

    @Bean
    @Scope("prototype")
    public RandomClass randomClass() { return new RandomClass(); }
}
I looked at a few examples, which mostly use XML-based configuration. One of the solutions is invoking the ApplicationContext directly, but I want to solve this with a lookup method.
A: To inject prototype to singleton via java config I'm using following technique:
Singleton class:
public abstract class Single
{
abstract Proto newInstance();
public void useBean()
{
System.out.println( newInstance() );
}
}
Prototype class:
public class Proto
{
}
Context:
public class Context
{
@Bean
public Single single()
{
return new Single() {
@Override
Proto newInstance()
{
return proto();
}
};
}
@Bean
@Scope("prototype")
public Proto proto()
{
return new Proto();
}
}
Class for testing:
public static void main( String[] args )
{
ApplicationContext context = new AnnotationConfigApplicationContext( Context.class );
Single single = context.getBean( Single.class );
single.useBean();
single.useBean();
}
From output we can see that each call used different object:
test.Proto@b51256
test.Proto@1906517
p.s.
I totally agree with you, we should not bind beans with applicationContext. It creates additional coupling and I believe this is not a good practice.
doc_23538814
I read the Programmatically check if PHP is installed using Python thread. If PHP is installed, that's fine; if not, I need to get the PHP source from php.net and install it automatically on the server, with only the IMAP and MySQL extensions.
I've tried the following code to check whether PHP is installed:
import subprocess as sp

def hasPHP():
    try:
        sp.check_call(['php', '-v'])
        return True
    except (OSError, sp.CalledProcessError):
        return False

if not hasPHP():
    # then install PHP. How do I install PHP?
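For the install step, one possible sketch is below. The download URL pattern, the pinned version number, and the configure flags are assumptions for illustration (check php.net for current values), and building --with-imap additionally requires the IMAP c-client library on the build host:

```python
import shutil
import subprocess

def build_install_commands(prefix="/usr/local/php"):
    """Return the shell steps to fetch the PHP source and build it with
    only the IMAP and MySQL (mysqli) extensions enabled."""
    version = "7.4.33"  # hypothetical pinned version -- pick a current one
    tarball = "php-%s.tar.gz" % version
    return [
        ["wget", "https://www.php.net/distributions/%s" % tarball],
        ["tar", "xzf", tarball],
        ["sh", "-c",
         "cd php-%s && ./configure --prefix=%s --with-imap --with-mysqli"
         " && make && make install" % (version, prefix)],
    ]

def ensure_php():
    if shutil.which("php"):  # cheaper than spawning `php -v`
        return True          # already installed
    for cmd in build_install_commands():
        subprocess.check_call(cmd)
    return True
```

On a managed server it is usually simpler to call the system package manager (e.g. apt-get install php php-imap php-mysql) than to build from source.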
doc_23538815
#include <msp430.h>
#include <delays.h>

#define CMD 0
#define DATA 1

#define LCD_OUT P2OUT
#define LCD_DIR P2DIR
#define D4 BIT4
#define D5 BIT5
#define D6 BIT6
#define D7 BIT7
#define RS BIT2
#define EN BIT3

// Function to pulse EN pin after data is written to accept it
void pulseEN(void)
{
    LCD_OUT |= EN;
    delay_us(100);
    LCD_OUT &= ~EN;
    delay_us(100);
}

// Function to write data/command to LCD
void lcd_write(int value, int mode)
{
    if (mode == CMD)
        LCD_OUT &= ~RS;  // Set RS -> LOW for Command mode
    else
        LCD_OUT |= RS;   // Set RS -> HIGH for Data mode

    // Data has to be sent to LCD in 2 lots of 4 bits, as it is running in 4 bit mode
    LCD_OUT = ((LCD_OUT & 0x0F) | (value & 0xF0));        // Write top 4 bits
    pulseEN();
    delay_us(10);

    LCD_OUT = ((LCD_OUT & 0x0F) | ((value << 4) & 0xF0)); // Write lower 4 bits
    pulseEN();
    delay_us(10);
}

// Function to print a string on LCD
void lcd_print(char *s)
{
    while (*s)
    {
        lcd_write(*s, DATA);
        s++;
    }
    delay_ms(10);
}

// Function to move cursor to desired position on LCD
void lcd_setCursor(int row, int col)
{
    const int row_offsets[] = { 0x00, 0x40 };
    lcd_write(0x80 | (col + row_offsets[row]), CMD);
    delay_ms(1);
}

// Function to clear the LCD
void lcd_clr()
{
    lcd_write(0x01, CMD); // Clear screen
    delay_ms(5);
}

// Initialize LCD
void lcd_init()
{
    P2SEL &= ~(0x60);
    LCD_DIR |= (D4 + D5 + D6 + D7 + RS + EN);
    LCD_OUT &= ~(D4 + D5 + D6 + D7 + RS + EN);

    delay_ms(15);         // Wait for power up (15 ms)
    lcd_write(0x33, CMD); // Initialization Sequence 1
    delay_ms(5);
    lcd_write(0x32, CMD); // Initialization Sequence 2
    delay_ms(1);

    // All subsequent commands take 40 us to execute, except clear & cursor
    // return (1.64 ms); may as well set the lot to 5 ms just to be safe

    lcd_write(0x28, CMD); // 4 bit mode, 2 line
    delay_ms(5);

    lcd_write(0x0C, CMD); // Display ON, Cursor OFF, Blink OFF
    delay_ms(5);

    lcd_write(0x01, CMD); // Clear screen
    delay_ms(5);

    lcd_write(0x06, CMD); // Auto Increment Cursor
    delay_ms(5);
}
In my main code, I read an AtoD value, but I am not too sure how to display it. The AtoD works fine, I can use debug mode in Crossworks and see the AtoD value is what I am expecting. I can display text on the LCD with something like this:
lcd_setCursor(0,0);
lcd_print("AtoD Test");
lcd_setCursor(1,0);
lcd_print("AtoD OK");
and that is fine. It will display characters I have written. But how do I get it to display a value I am reading from somewhere?
I am a complete novice at this, and the code for setting up the LCD was taken from HERE and edited slightly to fit my MSP and setup. Essentially, I just want to do something like
lcd_print("ADC_Result");
but when doing this, it just literally prints those words, and when trying
lcd_print("some text %*",ADC_Result);
I get an error saying too many arguments to lcd_print. I assume this means in my setup, I need to change something to the way lcd_print works, but not sure how to do this. Can someone please let me know how to edit this so I can do something along the lines of
lcd_print ("Some text %c", some_data);
and have it print something?
A: The lcd_print() function you are using from the borrowed code only takes one parameter, a pointer to a null-terminated string. See line 51 of that code. But you want to output to the LCD using a printf() like style, which is a variadic function that takes a format string and any number of variables. This is the "error saying too many arguments" problem from the compiler.
To fix this problem, there are two general solutions: re-write lcd_print() so you can use it like printf(), or perform your string conversion before calling lcd_print().
Here is one solution that uses sprintf() function:
// Create a buffer for sprintf to work with, use largest size you can afford
char sbuffer[BUFSIZE];
// Convert result to a string
sprintf(sbuffer, "ADC_Result = %d", some_data);
// Output string to LCD
lcd_print(sbuffer);
doc_23538816
doc_23538817
file = new File(Environment.getExternalStoragePublicDirectory(Environment.DIRECTORY_DCIM)+ "/" + UUID.randomUUID(), toString()+ ".jpg");
ImageReader.OnImageAvailableListener readerListener = new ImageReader.OnImageAvailableListener() {
@Override
public void onImageAvailable(ImageReader imageReader) {
Image image = null;
try {
image = reader.acquireLatestImage();
ByteBuffer buffer = image.getPlanes() [0].getBuffer();
byte[] bytes = new byte[buffer.capacity()];
buffer.get(bytes);
save(bytes);
} catch (IOException e)
{
e.printStackTrace();
}
finally {
{
if (image != null)
image.close();
}
}
}
This is the code I have to create the file and save to the location, I have tried other solutions but they throw out errors.
File folder = new File(Environment.getExternalStorageDirectory() + "/CustomFolder");
File file;
if (!folder.exists()) {
boolean success = folder.mkdir();
if (success){
file = new File(folder.getPath() + "/" + UUID.randomUUID(), toString()+ ".jpg");
}else {
Toast.makeText(FacialDetection.this, "Failed to save file to folder", Toast.LENGTH_SHORT).show();
}
}else{
file = new File(folder.getPath() + "/" + UUID.randomUUID(), toString()+ ".jpg");
}
ImageReader.OnImageAvailableListener readerListener = new ImageReader.OnImageAvailableListener() {
@Override
public void onImageAvailable(ImageReader imageReader) {
Image image = null;
try {
image = reader.acquireLatestImage();
ByteBuffer buffer = image.getPlanes() [0].getBuffer();
byte[] bytes = new byte[buffer.capacity()];
buffer.get(bytes);
save(bytes);
} catch (IOException e)
{
e.printStackTrace();
}
finally {
{
if (image != null)
image.close();
}
}
}
private void save(byte[] bytes) throws IOException {
OutputStream outputStream = null;
try {
outputStream = new FileOutputStream(***file***);
outputStream.write(bytes);
}finally {
if (outputStream != null)
outputStream.close();
}
}
};
reader.setOnImageAvailableListener(readerListener, mBackgroundHandler);
final CameraCaptureSession.CaptureCallback captureListener = new CameraCaptureSession.CaptureCallback() {
@Override
public void onCaptureCompleted(@NonNull CameraCaptureSession session, @NonNull CaptureRequest request, @NonNull TotalCaptureResult result) {
super.onCaptureCompleted(session, request, result);
Toast.makeText(FacialDetection.this, "Saved " + ***file***, Toast.LENGTH_SHORT).show();
createCameraPreview();
}
};
Above is the updated code; the parts marked in bold italics (***file***) are what is throwing errors.
A: You're calling:
Environment.getExternalStoragePublicDirectory(Environment.DIRECTORY_DCIM)
I would speculate that .getExternalStoragePublicDirectory() means get directory on SD card. Is there a getInternal... method?
A: I will advise you to create your own custom folder location this way:
File folder = new File(Environment.getExternalStorageDirectory() + "/CustomFolder");
File file;
if (!folder.exists()) {
boolean success = folder.mkdir();
if (success){
file = new File(folder.getPath() + "/" + UUID.randomUUID(), toString()+ ".jpg");
}else {
//Some message
}
}else {
file = new File(folder.getPath() + "/" + UUID.randomUUID(), toString()+ ".jpg");
}
//The rest of your code...
A: //Create folder !exist
String folderPath = Environment.getExternalStorageDirectory() +"/myFoldername";
File folder = new File(folderPath);
if (!folder.exists()) {
File wallpaperDirectory = new File(folderPath);
wallpaperDirectory.mkdirs();
}
//create a new file
newFile = new File(folderPath, newPhoto.getName());
if (newFile != null) {
// save image here
Uri relativePath = Uri.fromFile(newFile);
Intent intent = new Intent(MediaStore.ACTION_IMAGE_CAPTURE);
intent.putExtra(MediaStore.EXTRA_OUTPUT, relativePath);
startActivityForResult(intent, CAMERA_REQUEST);
}
doc_23538818
I didn't find any statistics so I am wondering if anybody can give me some information about this?
Another point that comes to my mind is doing live migration not only for maintenance purposes but for saving energy, by migrating virtual machines and powering off physical hosts. I found many papers discussing this topic, so I think it's a big topic in current research.
Does somebody know if there is any cloud provider already performing something like this?
A: How often do scheduled infrastructure maintenance events happen?
Infrastructure maintenance events don't have a set interval between occurrences, but generally happen once every couple of months.
How do I know if an instance will be undergoing an infrastructure maintenance event?
Shortly before a maintenance event, Compute Engine changes a special attribute in a virtual machine's metadata server before any attempts to live migrate or terminate and restart the virtual machine as part of a pending infrastructure maintenance event. The maintenance-event attribute will be updated before and after an event, allowing you to detect when these events are imminent. You can use this information to help automate any scripts or commands you want to run before and/or after a maintenance event. For more information, see the Transparent maintenance notice documentation.
SOURCE: Compute Engine Frequently Asked Questions.
doc_23538819
I want to communicate a flag between three applications (on the same device), always in the background.
A: If it is just a flag and both your applications are under the same teamid, you can use the pasteboard functionality to share data.
If the data is to be secure you can apply some encryption on this data and share the data.
You can use the following two links to share the data.
Just try to keep the name of the pasteboard the same.
http://www.enharmonichq.com/sharing-data-locally-between-ios-apps/
Paste from unique PasteBoard (app-specific pasteboard)
doc_23538820
Entity:
import {
Entity,
PrimaryGeneratedColumn,
Column,
OneToMany,
BaseEntity,
AfterLoad,
} from "typeorm";
import { OtherEntity } from "./OtherEntity";
// ColumnNumericTransformer
export class ColumnNumericTransformer {
to(data: number): number {
return data;
}
from(data: string): number {
return parseFloat(data);
}
}
@Entity()
export class EntityExample extends BaseEntity {
@PrimaryGeneratedColumn("uuid")
id: string;
@Column()
name: string;
@Column("numeric", {
precision: 7,
scale: 2,
transformer: new ColumnNumericTransformer(),
default: 0,
})
GGR: number;
@Column("numeric", {
precision: 7,
scale: 2,
transformer: new ColumnNumericTransformer(),
default: 0,
})
bets: number;
@Column("numeric", {
precision: 7,
scale: 2,
transformer: new ColumnNumericTransformer(),
default: 0,
})
wins: number;
@Column({ default: 0 })
fee_percentage: number;
@Column("numeric", {
precision: 7,
scale: 2,
transformer: new ColumnNumericTransformer(),
default: 0,
})
fee_amount: number;
@OneToMany((type) => OtherEntity, (some) => some.data)
relations: OtherEntity[];
@AfterLoad()
setComputed() {
this.GGR = this.bets - this.wins;
this.fee_amount = this.GGR * (this.fee_percentage/100);
}
}
Factory:
update: async(attrs: Partial<EntityExample> = {}) => {
let entityExample = await getRepository(EntityExample).findOne(attrs.id);
entityExample = {...entityExample, ...attrs};
let updated_entity_example;
try {
updated_entity_example = await getRepository(EntityExample).save(entityExample);
} catch (e) {
throw new Error("Couln't save Entity Example")
}
return updated_entity_example
}
TypeORM error:
Property 'setComputed' is optional in type '{ id: string; name: string; GGR: number; bets: number; wins: number; fee_percentage: number; fee_amount: number; setComputed?: () => void; hasId?: () => boolean; save?: (options?: SaveOptions) => Promise<...>; remove?: (options?: RemoveOptions) => Promise<...>; softRemove?: (options?: SaveOptions...' but required in type 'EntityExample'
So, the question is how can I solve it? Should I pass setComputed to the factory every time? Or is there a better way? Maybe it worth to use different methods? Which ones are better?
A: I think the issue here is that Partial means properties can be undefined: if you do {...{ foo: 'bar'}, ...{ foo: undefined}} you will get { foo: undefined }. Because of this, TypeScript keeps the possibly-undefined type option. Since the function properties probably aren't going to be passed in on attrs, you may be able to omit anything with a function signature and have it work, so something like:
update: async(attrs: Partial<OmitOnType<EntityExample, (...args: any) => any>> = {}) => {
Edit: TypeScript's built-in Omit only accepts keys (string, number, or symbol), not value types. To omit by value type you can build a helper that maps matching keys to never and picks the rest:
type OmitOnType<T, U> = Pick<
  T,
  { [P in keyof T]: T[P] extends U ? never : P }[keyof T]
>;
doc_23538821
I have a linked server to an Oracle server that has several tables and views containing multiple columns with the same name. The values within the columns contain the same data.
SELECT
SERVER_NAME,
SERVER_NAME,
SERVER_NAME,
IP_ADDRESS,
IP_ADDRESS,
IP_ADDRESS,
IP_ADDRESS,
PHY_LOCATION,
PHY_LOCATION,
PHY_LOCATION,
OS_VERSION,
OS_VERSION
FROM SVR_TABLE
Is there a query I can execute that will give me the distinct column names from a table? I'm trying to avoid going into each table and manually putting the distinct columns into a table.
SELECT
SERVER_NAME,
IP_ADDRESS,
PHY_LOCATION,
OS_VERSION
INTO [SQL_SVR_DB].[dbo].[SVR_TABLE]
FROM [ORACLE_SVR]..[SCHEMA_NAME].[SVR_TABLE]
A: Based on the second query that you have provided I think what you're looking for is Aliases.
Wikipedia defines SQL Aliases as:
"An alias is a feature of SQL that is supported by most, if not all,
relational database management systems (RDBMSs). Aliases provide
database administrators, as well as other database users, with the
ability to reduce the amount of code required for a query, and to make
queries simpler to understand. In addition, aliasing can be used as an
obfuscation technique to protect the real names of database fields."
You can read more about it here : https://en.wikipedia.org/wiki/Alias_%28SQL%29
The query you are looking for will need to be written as:
SELECT SERVER_NAME,
IP_ADDRESS,
PHY_LOCATION,
OS_VERSION
FROM [SQL_SVR_DB].[dbo].[SVR_TABLE] sqlTable
INNER JOIN [ORACLE_SVR].[SCHEMA_NAME].[SVR_TABLE] oraTable
ON sqlTable.ColumnName = oraTable.ColumnName
doc_23538822
I found this post on how to add consumers in multithreading, but it's not what I need, because I don't need a lock... And I need it without a class or anything like that, just with my code. Please help :)
private int occupiedBufferCount = 0;
private int occupiedBufferCount2 = 0;
int i = 0;
private void Producer()
{
int how_much_numbers = Convert.ToInt32(textBox3.Text);
using (StreamWriter writer = new StreamWriter("random_skaiciai.txt"))
{
for (i = 0; i < how_much_numbers; i++)
{
Monitor.Enter(this);
if ((occupiedBufferCount == 1) || (occupiedBufferCount2 == 1))
{
Monitor.Wait(this);
}
++occupiedBufferCount;
buffer = i;
Random rnd = new Random();
numbers = i;
//numbers = rnd.Next(nuo, iki);
writer.WriteLine(numbers + "");
prm = false;
fib = false;
Monitor.Pulse(this);
Monitor.Exit(this);
if (isCanceled == true)
break;
}
writer.Close();
Set_p(kiek);
}
}
private void Consumer1()
{
int how_much_numbers = Convert.ToInt32(textBox3.Text);
using (StreamWriter writer = new StreamWriter("Primary_numbers.txt"))
{
while (i < how_much_numbers)
{
Monitor.Enter(this);
if ((occupiedBufferCount == 0))
{
Monitor.Wait(this);
}
--occupiedBufferCount;
if (numbers != 0)
if (prime_num(numbers) == true)
{
writer.WriteLine(numbers + "");
}
prm = true;
Monitor.Pulse(this);
Monitor.Exit(this);
if (isCanceled == true)
break;
}
writer.Close();
}
}
private void Consumer2()
{
int how_much_numbers = Convert.ToInt32(textBox3.Text);
using (StreamWriter writer = new StreamWriter("fibon_Numbers.txt"))
{
while (i < how_much_numbers)
{
Monitor.Enter(this);
if ((occupiedBufferCount2 == 0))
{
Monitor.Wait(this);
}
--occupiedBufferCount2;
if (numbers != 0)
if (isfibonaci(numbers) == true)
writer.WriteLine(numbers + "");
Monitor.Pulse(this);
Monitor.Exit(this);
if (isCanceled == true)
break;
}
writer.Close();
}
}
doc_23538823
but I am not sure how to do it; I'm pretty new to coding.
This is my code so far, I am not sure what I am doing wrong. Thanks in advance.
from bs4 import BeautifulSoup
import requests
r = requests.get('https://www.coindesk.com/price/bitcoin')
r_content = r.content
soup = BeautifulSoup(r_content, 'lxml')
p_value = soup.find('span', {'class': "currency-price", "data-value": True})['data-value']
print(p_value)
This is the result:
Traceback (most recent call last): File
"C:/Users/aidan/PycharmProjects/scraping/Scraper.py", line 8, in
p_value = soup.find('span', {'class': "currency-price", "data-value": True})['data-value'] TypeError: 'NoneType' object is not
subscriptable
A: Content is dynamically sourced from an API call returning JSON. You can request a list of currencies or a single currency. With requests, JavaScript doesn't run, so this content is never added to the DOM, and the various DOM changes that produce the HTML you see in the browser don't occur.
import requests
r = requests.get('https://production.api.coindesk.com/v1/currency/ticker?currencies=BTC').json()
print(r)
price = r['data']['currency']['BTC']['quotes']['USD']['price']
print(price)
r = requests.get('https://production.api.coindesk.com/v1/currency/ticker?currencies=ADA,BCH,BSV,BTC,BTG,DASH,DCR,DOGE,EOS,ETC,ETH,IOTA,LSK,LTC,NEO,QTUM,TRX,XEM,XLM,XMR,XRP,ZEC').json()
print(r)
A: The problem here is that the soup.find() call is not returning a value (that is, there is no span with the attributes you have defined on the page), so when you try to get data-value there is no dictionary to look it up in.
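As a side note on that error: you can guard against find() returning None before subscripting. A minimal sketch using an inline HTML snippet (the markup here is illustrative, not the live CoinDesk page):

```python
from bs4 import BeautifulSoup

# a span shaped like the one the question is looking for
html = '<span class="currency-price" data-value="11375.67">$11,375</span>'
soup = BeautifulSoup(html, 'html.parser')

tag = soup.find('span', {'class': 'currency-price', 'data-value': True})
# only subscript the result if find() actually matched something
p_value = tag['data-value'] if tag is not None else None
print(p_value)
```

If the span is missing, p_value is simply None instead of raising TypeError: 'NoneType' object is not subscriptable.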
A: Your website doesn't hold the data in the HTML, so you can't scrape it that way, but it uses an endpoint that you could call directly:
data = requests.get('https://production.api.coindesk.com/v1/currency/ticker?currencies=BTC').json()
p_value = data['data']['currency']['BTC']['quotes']['USD']['price']
print(p_value)
# output: 11375.678380772
the price changes all the time, so your output may be different
doc_23538824
JAVASCRIPT
function chekon()
{
document.addEventListener('DOMContentLoaded', upme);
window.addEventListener('resize', upme);
function upme()
{
var rome = document.getElementById("out-cmnt");
var rect = rome.getBoundingClientRect();
// console.log(rect.top, rect.right, rect.bottom, rect.left);
var poss = rect.top + window.scrollY;
var koss = rect.bottom + window.scrollY; var loss = koss - poss;
var isMobile = !(navigator.userAgentData.mobile);
// event listeners
// window.addEventListener('resize', relod, false);
// function relod() { if(isMobile) { location.reload(); } }
window.addEventListener('scroll', doso, false);
window.addEventListener('resize', doso, false);
function doso()
{
lopp = document.getElementById("Web_1920__1");
hope = lopp.clientHeight;
const meme = document.body.scrollHeight;
const keke = hope/meme;
const scsc = window.scrollY;
var scmx = (document.documentElement.scrollHeight - document.documentElement.clientHeight);
console.log("meme scroll-height = ", meme); console.log("scsc scroll-y = ", scsc);
console.log("scmx max-scroll-y = ", scmx);
var innr = window.innerHeight; console.log("innr inner-height = ", innr);
var scbb = scmx - scsc; var finn = scsc * keke; var nunn = scbb * keke;
if (window.matchMedia("(min-width: 765px)").matches)
{
var finn = scsc * keke * 1.087;
var nunn = scbb * keke * 1.087;
}
var noss = poss - innr + loss;
if(scsc > noss && window.matchMedia("(min-width: 765px)").matches && isMobile)
{
var xoxo = nunn;
document.getElementById("out-cmnt").style.top = "auto";
document.getElementById("out-cmnt").style.bottom = xoxo + "px";
}
if(scsc < noss)
{
document.getElementById("out-cmnt").style.top = "7074px";
}
if(nunn < 100 && isMobile)
{
document.getElementById("last-dab").style.visibility = "hidden";
}
if(nunn > 100 && isMobile)
{
document.getElementById("last-dab").style.visibility = "visible";
}
} }
}
chekon();
The functions upme() and doso() are nested inside chekon(). upme() has 2 event listeners and doso() also has 2, with resize in common. I checked that if doso()'s resize listener isn't applied, upme()'s resize listener has no effect on doso(), even though doso() is defined inside upme(). I thought maybe there was overlapping, but it seems fine to me. Is there something messed up in my code that is responsible for the window resize action? The "Web_1920__1" element is for getting the total height of the page. The "out-cmnt" element flickers and shows up at the wrong place after I stop resizing the browser window. When I start scrolling again, the element should return to its intended position, but no, it stays at the wrong position. Only a reload fixes the problem for now. The funny thing is, I can't reproduce the wrong position even when I resize back down to the same window size that showed the problem before. So I think Chrome is showing the problem at random window resizes. Is it a browser bug or mine? Please help me out.
You can ignore the variables and calculations. Just pay attention to the structure, functions, and event listeners, looking for any wrong declarations. Please help me understand the problem.
doc_23538825
MatrixXf matA(2, 2);
matA << 1, 2, 3, 4;
MatrixXf matB(4, 4);
matB << matA, matA/10, matA/10, matA;
std::cout << matB << std::endl;
what i want to achieve:
SparseMatrix<double> matA(2, 2);
matA.coeffRef(0, 0) = 1;
matA.coeffRef(1, 1) = 1;
SparseMatrix<double> matB(4, 4);
matB << matA, matA/10, matA/10, matA;
std::cout << matB << std::endl;
then i get a matrix like this:
1 0 0.1 0
0 1 0 0.1
0.1 0 1 0
0 0.1 0 0.1
but it doesn't work for a sparse matrix.
So does Eigen have a built-in initializer like this? Or do I need to write it myself, and if so, how?
A: You cannot have such an initializer because of the storage format. From the manual Sparse matrix manipulations > Block operations:
However, for performance reasons, writing to a sub-sparse-matrix is much more limited, and currently only contiguous sets of columns (resp. rows) of a column-major (resp. row-major) SparseMatrix are writable. Moreover, this information has to be known at compile-time, leaving out methods such as block(...) and corner*(...).
The only option you have is to convert everything to dense matrices, use the comma initializer and convert back to sparse.
#include <iostream>
#include <Eigen/Sparse>
using namespace Eigen;
typedef SparseMatrix<double> SparseMatrixXd;
int main()
{
SparseMatrixXd matA(2, 2);
matA.coeffRef(0, 0) = 1;
matA.coeffRef(1, 1) = 1;
SparseMatrixXd matB(4, 4);
MatrixXd matC(4,4);
matC <<
MatrixXd(matA),
MatrixXd(matA)/10,
MatrixXd(matA)/10,
MatrixXd(matA);
matB = matC.sparseView();
std::cout << matB << std::endl;
}
Alternatively you can use the unsupported Kronecker product module for this exact example.
#include <iostream>
#include <Eigen/Sparse>
#include <unsupported/Eigen/KroneckerProduct>
using namespace Eigen;
typedef SparseMatrix<double> SparseMatrixXd;
int main()
{
SparseMatrixXd matA(2, 2);
matA.coeffRef(0, 0) = 1;
matA.coeffRef(1, 1) = 1;
SparseMatrixXd matB(4, 4);
matB =
kroneckerProduct( (MatrixXd(2,2) << 1,0,0,1).finished(), matA ) +
kroneckerProduct( (MatrixXd(2,2) << 0,1,1,0).finished(), matA/10);
std::cout << matB << std::endl;
}
doc_23538826
Current code
import com.foo.FooDataObject;
public class Foo extends FooBase {
public Foo(final FooDataObject fooDataObject) {
super(fooDataObject.getFoo());
}
}
public abstract class FooBase implements Serializable {
private final String foo;
public FooBase(final String foo) {
this.foo = foo;
}
public String getFoo() {
return foo;
}
}
Change (the class of the constructor parameter changed):
import com.foo.wherever.FooDO;
public class Foo extends FooBase {
public Foo(final FooDO fooDO) {
super(fooDO.getFoo());
}
}
A: Yes, it should be. You are simply changing the type of the parameter, but still using the same way of instance construction - via constructor with arguments.
See more of the available ways for Jackson deserialization (if you use this framework) here.
doc_23538827
doc_23538828
A: The @types/office-js version, generated off of https://github.com/DefinitelyTyped/DefinitelyTyped/blob/master/types/office-js/index.d.ts, is the one true source of the d.ts information.
What about it are you finding incorrect?
For the new Office 2016 wave of APIs for Excel, Word, and OneNote, the JavaScript is machine-generated, as is the d.ts -- so it should be accurate. For the 2013 "Common" APIs and for Outlook APIs (i.e., anything under the Office namespace), it is hand-written, but I do remember Outlook updating that file fairly recently, and fixing some of the earlier omissions.
If there are still any, let us know, and I can redirect to the right folks.
A: So, it's not that the APIs are different. In some of the methods I noticed a few parameters were skipped and it still worked, which led me to think the APIs might be different (in future versions).
For example, in Office.body both of these seem to work:
getAsync(coercionType, options, callback)
getAsync(coercionType, callback)
doc_23538829
I've tried the solution provided here (Set selected value of typeahead) which is to use $("#id").typeahead('setQuery', query); to clear the field. However, when I do that I get an error in my console: Uncaught Error: missing source.
Is there a way around this? The code to clear the field is a line above the bottom, rest provided for context.
//===========================
//Typeahead.js
//===========================
var tagApi = $("#addtags").tagsManager();
var addtags = new Bloodhound({
datumTokenizer: function(d) { return Bloodhound.tokenizers.whitespace(d['tag']); },
queryTokenizer: Bloodhound.tokenizers.whitespace,
dupChecker:true,
// remote: '/tags/tags/search.json?q=%QUERY',
prefetch: {url: '/tags/tags/search.json?q='}
});
addtags.initialize();
$('#addtags').typeahead({
hint:true,
highlight:true,
autoselect:false,
dupChecker:true,
},
{
displayKey: 'tag',
source: addtags.ttAdapter(),
}).on('typeahead:selected', onTagSelect);
function onTagSelect($e, datum) {
tagApi.tagsManager("pushTag", datum['tag']);
//This is where I'm trying to clear the input:
$('#addtags').typeahead('setQuery', '');
};
A: I've created a jsfiddle to show how to clear the selected suggestion:
http://jsfiddle.net/Fresh/ecq0c0ph/
Try searching for a movie and then select a suggestion; the suggestion will be used to populate some fields and then the query input control will be cleared.
In a nutshell, to clear the suggestion when it is selected, create an event handler for Typeahead's selected event, and then use its API to set the value to an empty string e.g.
.bind("typeahead:selected", function () {
$('.typeahead').typeahead('val', '');
});
doc_23538830
import csv

all_items = []
with open('stockes.csv') as product:
    stockes = csv.reader(product)
    for row in stockes:
        all_items.append(row)
print(all_items[1][2])
all_items([1],[2]).append(a)
Sorry, but the a is a placeholder for my actual variable; this is also my first question.
A: The list is:
[['bar code', 'names', 'stock', 'prise'], ['13245627', 'plain brackets', '3', '599'], ['12345670', '100 mm bolts', '7', '550'], ['56756777', 'L-shaped brackets', '8', '230']]
a = 1 + 2
import csv

all_items = []
with open('stockes.csv') as product:
    stockes = csv.reader(product)
    for row in stockes:
        all_items.append(row)
print(all_items[1][2])
all_items([1][2]).append(a)
sorry I hope this is better
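For reference, list indexing and appending use square brackets and a method call on the inner list; the parenthesized forms in the question (all_items([1],[2])) call the list like a function, which raises TypeError. A small sketch with the same rows:

```python
# rows copied from the list shown above
rows = [
    ['bar code', 'names', 'stock', 'prise'],
    ['13245627', 'plain brackets', '3', '599'],
    ['12345670', '100 mm bolts', '7', '550'],
    ['56756777', 'L-shaped brackets', '8', '230'],
]

a = 1 + 2

print(rows[1][2])       # read one cell: row 1, column 2 -> '3'
rows[1][2] = str(a)     # overwrite that cell
rows[1].append(str(a))  # or append a new cell to the end of row 1
```

CSV readers yield strings, so the computed number is converted with str() before being stored back into the row.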
doc_23538831
I don't know which transaction manager to use. Both have advantages and disadvantages.

* Atomikos supports global transactions, which I don't need, and logs some information about transactions to the file system, which I want to avoid:
public void setEnableLogging(boolean enableLogging)
Specifies if disk logging should be enabled or not. Defaults to true.
It is useful for JUnit testing, or to profile code without seeing the
transaction manager's activity as a hot spot but this should never be
disabled on production or data integrity cannot be guaranteed.
The advantage is that it uses just one transaction manager.

* When using DataSourceTransactionManager I need one per data source:
@Bean
@Primary
DataSourceTransactionManager transactionManager1() {
DataSourceTransactionManager transactionManager = new DataSourceTransactionManager();
transactionManager.setDataSource(dataSource1());
return transactionManager;
}
@Bean
DataSourceTransactionManager transactionManager2() {
DataSourceTransactionManager transactionManager = new DataSourceTransactionManager();
transactionManager.setDataSource(dataSource2());
return transactionManager;
}
This is a problem because I need to specify the name of the transaction manager in the annotation:
@Transactional("transactionManager1")
public void test() {
}
but I don't know it, because at runtime the application can switch which database to use.
Are there other options, or am I missing something in these two transaction managers?
A: You should solve this as option 2, using one DataSourceTransactionManager per data source. You will need to keep track of the transaction manager for each data source.
Additionally, if you need to be able to roll back transactions on both databases, you will have to set up a ChainedTransactionManager over both.
doc_23538832
String input = "12-30-2017";
DateFormat inputFormatter = new SimpleDateFormat("yyyy-MM-dd");
Date date = inputFormatter.parse(input);
With my debug cursor at date, it gives me 12-06-2019. It seems it is adding the months and making a valid date.
I need an invalid-date error to be thrown here. How do I do that?
A: Try inputFormatter.setLenient(false);
A: Your formatting pattern must match your input string.
Use modern java.time classes rather than the troublesome legacy date-time classes.
DateTimeFormatter f = DateTimeFormatter.ofPattern( "MM-dd-uuuu" ) ;
LocalDate ld = LocalDate.parse( "12-30-2017" , f ) ;
Whenever possible, use standard ISO 8601 formats for exchanging date-time strings.
doc_23538833
My original guide for the code came from: https://gist.github.com/benmarwick/6127413
Neither this code (linked above) nor my code (below) gives the desired results at this point. When my code executed successfully (under previous versions of the packages), it provided n-grams that involved a specific key word, along with an ordered list of words according to their distance from the key word within the set of n-grams.
There are two specific problems:
* One tm feature that generates an error each time (and may be causing the second problem) is PlainTextDocument. That line of code is:
eventdocs <- tm_map(eventdocs, PlainTextDocument)
The next line of code is:
eventdtm <- DocumentTermMatrix(eventdocs)
When trying to create the document-term matrix (eventdtm), the code gives the error:
Error in simple_triplet_matrix(i, j, v, nrow = length(terms), ncol = length(corpus), :
'i, j' invalid
I have updated everything, including java, and still this error is arising.
I commented out the PlainTextDocument line, since the text I am using is already in .txt format and I found some posts saying this step was not necessary. When I do this, the document-term matrix is formed (or seems to be formed accurately). But I would like to resolve this error, because I previously encountered problems when that line did not execute.
*But, regardless of this, there seems to be a problem in the formation of the n-grams. The first block is the most suspect to me. I am not sure the NGramTokenizer is doing what it should.
That code is:
span <- 4
span1 <- 1 + span * 2
ngramTokenizer <- function(x) NGramTokenizer(x, Weka_control(min = span1, max = span1))
dtmevents <- TermDocumentMatrix(eventdocs, control = list(tokenize = ngramTokenizer))
#find ngrams that have the key word of interest
word <- "keyword"
subset_ngrams <- dtmevents$dimnames$Terms[grep(word, dtmevents$dimnames$Terms)]
subset_ngrams <- subset_ngrams[sapply(subset_ngrams, function(i) {
tmp <- unlist(strsplit(i, split=" "))
tmp <- tmp[length(tmp) - span]
tmp} == word)]
allwords <- paste(subset_ngrams, collapse = " ")
uniques <- unique(unlist(strsplit(allwords, split=" ")))
The uniques set of words is just the key word of interest, with all of the other high-frequency collocates removed (at this point, I know the code is not working). Any help or leads would be appreciated. It took a long time to get things working originally. Then, with the updates, I'm out of action. Thank you.
A: It's a tm package version issue.
You need to install version 0.6-2.
Solutions:
*
*Code - faster:
require(devtools)
install_version("tm", version = "0.6-2", repos = "http://cran.r-project.org")
*If that doesn't work, download the package and install it manually.
| |
doc_23538834
|
My already configured and working repository has a gitlab origin.
I opened the repository in GitAhead, and whenever I try to fetch, pull or push I get this error:
Unable to {whatever I'm doing} from 'origin' - unexpected HTTP status code: 404
If I use the linux terminal, I can fetch, pull and push the same repository normally.
I don't know if the problem is GitAhead not asking for credentials.
*
*I've configured my username and email on general and repository config.
*I've tried to add and remove a gitlab account (using personal access token)
Any ideas?
# cat ./.git/config
[core]
repositoryformatversion = 0
filemode = true
bare = false
logallrefupdates = true
[remote "origin"]
url = https://gitlab.com/c****c/a****3.git
fetch = +refs/heads/*:refs/remotes/origin/*
[branch "develop"]
remote = origin
merge = refs/heads/develop
[branch "master"]
remote = origin
merge = refs/heads/master
[user]
name = Jorge
email = j****z@c****c.com
I've asked this same question on GitAhead's github 2 months ago but it got no attention.
This problem is not present on my Windows machine, nor on a teammate's Linux machine, so it must be a misconfiguration or something similar. It's strange because using git commands in the terminal works perfectly.
| |
doc_23538835
|
Instead of such:
A: It has been possible since the release of the new Android Wear Watch Face API. To do that, in our class that extends CanvasWatchFaceService, in onCreate() we call:
setWatchFaceStyle(new WatchFaceStyle.Builder(AnalogWatchFaceService.this)
.setCardPeekMode(WatchFaceStyle.PEEK_MODE_SHORT)
...
.build());
And setCardPeekMode(WatchFaceStyle.PEEK_MODE_SHORT) is the key there. It can also show a bigger peek card: WatchFaceStyle.PEEK_MODE_VARIABLE.
| |
doc_23538836
|
DF <- structure(list(site = c("A", "A", "A", "A", "A", "A", "A", "A",
"A", "A", "A", "A", "B", "B", "B", "B", "B", "B", "B", "B", "B",
"B", "B", "B", "C", "C", "C", "C", "C", "C", "C", "C", "C", "C",
"C", "C", "D", "D", "D", "D", "D", "D", "D", "D", "D", "D", "D",
"D", "E", "E", "E", "E", "E", "E", "E", "E", "E", "E", "E", "E"
), month = c(1L, 2L, 3L, 4L, 5L, 6L, 7L, 8L, 9L, 10L, 11L, 12L,
1L, 2L, 3L, 4L, 5L, 6L, 7L, 8L, 9L, 10L, 11L, 12L, 1L, 2L, 3L,
4L, 5L, 6L, 7L, 8L, 9L, 10L, 11L, 12L, 1L, 2L, 3L, 4L, 5L, 6L,
7L, 8L, 9L, 10L, 11L, 12L, 1L, 2L, 3L, 4L, 5L, 6L, 7L, 8L, 9L,
10L, 11L, 12L), SWC = c(0.140086734851409, 0.137745990685859,
0.146660019201229, 0.275950971628449, 0.298260250896057, 0.26870029739777,
0.227566661823465, 0.197824137311287, 0.195409734063355, 0.229745648248465,
0.226546607074933, 0.158508782420749, 0.0809095246636771, 0.0804010923965351,
0.0845644708882278, 0.136702248824284, 0.121883242349049, 0.108167424836601,
0.0970784232538687, 0.0860934461299105, 0.0910916878172589, 0.10747642248062,
0.102700195758564, 0.0811833903700756, 0.115733715437788, 0.0631616319005478,
0.0631265153446416, 0.171535848109378, 0.18694684173028, 0.142807562821677,
0.145926108701425, 0.154393702185792, 0.171436382382201, 0.188897212829005,
0.186402403754978, 0.165098945598251, 0.0713685071127924, 0.0436531172429078,
0.0624862109235555, 0.127141665482761, 0.134542260869565, 0.124414092512545,
0.100807230998223, 0.0765214392215714, 0.0798724029741452, 0.103098854664915,
0.116568256944444, 0.1105108739241, 0.108650005144474, 0.0976296689160692,
0.105006219572287, 0.122777662914972, 0.102765292125318, 0.0851933017211099,
0.0566760862577016, 0.056282148272957, 0.0718264626865672, 0.0909327257326783,
0.10461694624978, 0.103895834299474), LE = c(0.946565193060996,
1.56650528637219, 4.45382423672104, 11.1985050677478, 29.1379975402081,
74.6488855786053, 95.5801950803702, 77.4708126488623, 39.6136552461462,
8.01576749720725, 2.6466216369622, 1.1745554117357, 2.13167679680568,
2.98141098231535, 5.69653566874706, 13.8293309632019, 26.1687157009092,
42.2656041113131, 53.2110193016699, 43.9856386693244, 28.6722592758158,
10.8442703054334, 3.66463523523524, 1.97253929316558, 3.12430388517694,
5.48701577036533, 6.95032997537497, 13.7237856698465, 32.2938743570904,
48.7715488919351, 55.6860655989469, 46.0893482005537, 31.0498645245332,
14.7640819905213, 8.98244641061733, 3.43372937817259, 3.69254598741488,
5.03158318364554, 7.73098511689167, 22.4154276282377, 47.1483358705029,
66.5623394397102, 77.0140731034483, 67.5194151592607, 44.6530820096047,
20.4357590631479, 6.56212672069986, 2.8940355016033, 1.69359970210389,
2.61391569682152, 6.3433679665485, 14.9249216033009, 41.1275149717784,
82.0861782662955, 100.382874115357, 81.3020338687151, 54.7966365463682,
21.3347745116999, 5.36748695652174, 1.70811548886738)), .Names = c("site",
"month", "SWC", "LE"), row.names = c(NA, -60L), class = "data.frame")
When I plot it, this is the default plot:
library(ggplot2)
ggplot(data = DF, aes(x = SWC, y = LE)) +
geom_smooth(method = "lm", se=FALSE, color="black", formula = y ~ x) +
geom_point() + facet_wrap(~site)
However, I would like to manually control the position of the facets. For instance, I would like the blank space to go after facet B, so facets C, D and E are on the bottom row.
Is there any way to do that? What would be the best approach?
EDIT: I am voting to reopen this question because the suggested answers no longer seem to work. For example, names(g$grobs) returns NULL.
A: The function multiplot is pretty useful here.
multiplot <- function(..., plotlist=NULL, file, cols=1, layout=NULL) {
library(grid)
# Make a list from the ... arguments and plotlist
plots <- c(list(...), plotlist)
numPlots = length(plots)
# If layout is NULL, then use 'cols' to determine layout
if (is.null(layout)) {
# Make the panel
# ncol: Number of columns of plots
# nrow: Number of rows needed, calculated from # of cols
layout <- matrix(seq(1, cols * ceiling(numPlots/cols)),
ncol = cols, nrow = ceiling(numPlots/cols))
}
if (numPlots==1) {
print(plots[[1]])
} else {
# Set up the page
grid.newpage()
pushViewport(viewport(layout = grid.layout(nrow(layout), ncol(layout))))
# Make each plot, in the correct location
for (i in 1:numPlots) {
# Get the i,j matrix positions of the regions that contain this subplot
matchidx <- as.data.frame(which(layout == i, arr.ind = TRUE))
print(plots[[i]], vp = viewport(layout.pos.row = matchidx$row,
layout.pos.col = matchidx$col))
}
}
}
Try this
A<-ggplot(data = subset(DF,site=="A"), aes(x = SWC, y = LE)) +
geom_smooth(method = "lm", se=FALSE, color="black", formula = y ~ x) +
geom_point()
B<-ggplot(data = subset(DF,site=="B"), aes(x = SWC, y = LE)) +
geom_smooth(method = "lm", se=FALSE, color="black", formula = y ~ x) +
geom_point()
C<-ggplot(data = subset(DF,site=="C"), aes(x = SWC, y = LE)) +
geom_smooth(method = "lm", se=FALSE, color="black", formula = y ~ x) +
geom_point()
D<-ggplot(data = subset(DF,site=="D"), aes(x = SWC, y = LE)) +
geom_smooth(method = "lm", se=FALSE, color="black", formula = y ~ x) +
geom_point()
E<-ggplot(data = subset(DF,site=="E"), aes(x = SWC, y = LE)) +
geom_smooth(method = "lm", se=FALSE, color="black", formula = y ~ x) +
geom_point()
Blank<-ggplot()
multiplot(Blank, A, B, C, D, E, cols=3)
A: Try this (for this particular scenario):
library(ggplot2)
ggplot(data = DF, aes(x = SWC, y = LE)) +
geom_smooth(method = "lm", se=FALSE, color="black", formula = y ~ x) +
geom_point() + facet_wrap(~factor(site,levels=c("C","D","E","A","B")), as.table=FALSE)
A: I ended up finding a more general answer to the problem, using the patchwork package.
Here's one reproducible example:
library(dplyr)
library(ggplot2)
library(patchwork)
df <- data.frame(group=c(1,1,2,2,3),
name=c('a','b','c','d','e'),
x=c(1,2,3,4,5),
y=c(2,3,4,5,6))
p1 <-
df %>%
filter(name %in% c("a", "b", "c")) %>%
ggplot(aes(x, y)) +
geom_point() +
xlab(NULL) +
facet_wrap(~ name, nrow = 1)
p2 <-
df %>%
filter(name %in% c("d", "e")) %>%
ggplot(aes(x, y)) +
geom_point() +
facet_wrap(~ name, nrow = 1)
p1 + p2 &
plot_layout(design = "AAAAAAA
#######
#BBBBB#",
heights = c(3, 0.3, 3))
More details at: https://github.com/thomasp85/patchwork/issues/205#issuecomment-867999897
| |
doc_23538837
|
public interface one // throws an error
{
}
I am getting the error "Public type one must be defined in its own file".
Please clarify why this happens.
A: Because your Java file containing one most probably is not named "one.java".
A: Any public class or interface must be declared in a separate file, having the name of that interface or class.
If you have multiple top level classes/interfaces in the same file, only one of them can be public.
A: Create a new file called 'one.java'. Place the declaration of your interface in there.
Every public class, interface etc. needs to be in its own file.
| |
doc_23538838
|
I have the following variables count = 865 and total = 1060.
When I compute count/total using the variables, I get 1.2254335260115607.
But the expected output of 865/1060 is 0.8160377358490566.
Can someone please help me understand why I am seeing this behavior, and how to correct it to get the expected result of 0.8160377358490566? Thank you.
A: Judging by your result, it seems your variables are assigned the wrong way round and you are accidentally dividing total by count.
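To make the diagnosis concrete, a minimal sketch (assuming Python, which matches the reported float formatting) reproduces both values:

```python
count = 865
total = 1060

# Dividing the wrong way round reproduces the reported value:
print(total / count)   # 1.2254335260115607

# The intended fraction of count over total:
print(count / total)   # 0.8160377358490566
```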
| |
doc_23538839
|
Edit: Tried looking into the AppBarConfiguration, but that only seems to affect whether or not the back arrow shows up
A: Ended up figuring out how to do it. According to the android documentation you have to add an OnDestinationChangedListener to the nav controller and then you can do a switch on all the different destinations that you need to change the constant UI elements within.
navController = Navigation.findNavController(this, R.id.nav_host_fragment);
final Toolbar toolbar = findViewById(R.id.toolbar);
navController.addOnDestinationChangedListener(new NavController.OnDestinationChangedListener() {
@Override
public void onDestinationChanged(@NonNull NavController controller, @NonNull NavDestination destination, @Nullable Bundle arguments) {
int id = destination.getId();
switch (id) {
case R.id.mainFragment:
toolbar.setVisibility(View.GONE);
break;
default:
toolbar.setVisibility(View.VISIBLE);
break;
}
}
});
| |
doc_23538840
|
Proxies__CG__\Foo\InvoiceBundle\Entity\Invoice
instead of
Foo\InvoiceBundle\Entity\Invoice
Here is my code:
class ProperProperty extends \ReflectionProperty{
public function __construct(){
parent::__construct();
}
private function getGetterName($propertyName){
$ret = "get" . ucfirst($propertyName);
return $ret;
}
public function getDoctrineValue($class, $object){
$propertyName = $this->getName();
$getterName = $this->getGetterName($propertyName);
$reflectionMethod = new \ReflectionMethod($class, $getterName);
$ret = $reflectionMethod->invoke($object);
return $ret;
}
}
I read that proxy classes are kind of lazy-loaded objects; is there any way to force this load?
Thanks :D
A: If someone is interested in how to fill your empty proxy:
protected function loadProxy($object){
$class = get_class($object);
if (strpos($class, "Proxies") === false)
return;
$methodName = "__load";
$reflectionMethod = new \ReflectionMethod($class, $methodName);
$ret = $reflectionMethod->invoke($object);
}
| |
doc_23538841
|
Here the redirect_url is set by stripping off all parameters from the request url:
def CreateOAuthFlow(self):
"""Create OAuth2.0 flow controller
This controller can be used to perform all parts of the OAuth 2.0 dance
including exchanging an Authorization code.
Args:
request: HTTP request to create OAuth2.0 flow for
Returns:
OAuth2.0 Flow instance suitable for performing OAuth2.0.
"""
flow = flow_from_clientsecrets('client_secrets.json', scope='')
# Dynamically set the redirect_uri based on the request URL. This is extremely
# convenient for debugging to an alternative host without manually setting the
# redirect URI.
flow.redirect_uri = self.request.url.split('?', 1)[0].rsplit('/', 1)[0]
return flow
When the application is called from the Google Drive UI (a GET request to the application's root URL with the GET parameters code and state), the application checks that it is authorized to make requests to Google Drive. In the event that access has been revoked, it tries to re-authorize itself using the following code, I believe:
creds = self.GetCodeCredentials()
if not creds:
return self.RedirectAuth()
where RedirectAuth() is defined as:
def RedirectAuth(self):
"""Redirect a handler to an authorization page.
Used when a handler fails to fetch credentials suitable for making Drive API
requests. The request is redirected to an OAuth 2.0 authorization approval
page and on approval, are returned to application.
Args:
handler: webapp.RequestHandler to redirect.
"""
flow = self.CreateOAuthFlow()
# Manually add the required scopes. Since this redirect does not originate
# from the Google Drive UI, which automatically sets the scopes that are
# listed in the API Console.
flow.scope = ALL_SCOPES
# Create the redirect URI by performing step 1 of the OAuth 2.0 web server
# flow.
uri = flow.step1_get_authorize_url(flow.redirect_uri)
# Perform the redirect.
self.redirect(uri)
My problem is that when I revoke access to the application from my Google Dashboard and try to open it via the Google Drive UI, it redirects me to the authorization page and then redirects back to the application after I authorize it, but it manages to retain state (the GET parameters that were passed from the Drive UI). I think this is inconsistent with what the code describes, and I was wondering if there is an explanation for this behaviour.
A hosted version of the DrEdit app can be found here: http://idning-gdrive-test.appspot.com/
A: In the case of starting the app from the Drive UI, that code path is never touched. The redirect to the authorization endpoint is initiated directly from Drive. In other words, the path is:
Drive -> auth -> DrEdit
By the time it gets to the app the user has already made their decision. The state is passed through in the state query parameter.
To see the code path you're referring to in action, revoke access again. But instead of starting from Drive, just try loading the app directly. You might need to delete cookies for the app too. Anyway, in that case when the app loads it'll detect that the user isn't authorized and redirect to the auth endpoint:
DrEdit -> auth -> DrEdit
Hope that helps.
| |
doc_23538842
|
Select *
from Table A
where RecordID NOT IN (@objectvariable)
I have created a variable called @objectvariable of type Object to hold a list of RecordIDs, set up a SQL task with the result set option "full result set", and mapped the result set variable.
The SQL task executes successfully and populates the list.
When I use this variable in a data flow task and construct the T-SQL script I get an error
The data type of variable "User::ProcessedData" is not supported in an expression.
Reading the variable "User::ProcessedData" failed with error code 0xC00470D0.
In a nutshell I want to build T-SQL like
Select *
from Table A
where RecordID NOT IN (100,102,103)
in a data flow ADO source task, using a variable to supply the 'IN' values. Is this possible?
Many thanks
A: The easiest solution is to produce the comma separated string value within your original data source (using one of the many value concatenation methods for each RDBMS vendor you can find on StackOverflow and other sites). Then just stick that into a String variable and build your dynamic query.
Having said that, you can also use a Script Task to convert your Recordset object into a comma-separated string. Here's a C# script that does that for you (see below code for usage notes):
#region Namespaces
using System;
using System.Data;
using Microsoft.SqlServer.Dts.Runtime;
using System.Windows.Forms;
using System.Data.OleDb;
using System.Linq;
#endregion
namespace ST_75a10d235ce24be89bab80890dca9be9
{
/// <summary>
/// ScriptMain is the entry point class of the script. Do not change the name, attributes,
/// or parent of this class.
/// </summary>
[Microsoft.SqlServer.Dts.Tasks.ScriptTask.SSISScriptTaskEntryPointAttribute]
public partial class ScriptMain : Microsoft.SqlServer.Dts.Tasks.ScriptTask.VSTARTScriptObjectModelBase
{
public void Main()
{
OleDbDataAdapter da = new OleDbDataAdapter();
DataTable dt = new DataTable();
da.Fill(dt, Dts.Variables["User::ContractSet"].Value);
Dts.Variables["User::ContractValues"].Value = String.Join(",", dt.AsEnumerable().Select(r => r.Field<string>("contract")).ToArray());
Dts.TaskResult = (int)ScriptResults.Success;
}
#region ScriptResults declaration
/// <summary>
/// This enum provides a convenient shorthand within the scope of this class for setting the
/// result of the script.
///
/// This code was generated automatically.
/// </summary>
enum ScriptResults
{
Success = Microsoft.SqlServer.Dts.Runtime.DTSExecResult.Success,
Failure = Microsoft.SqlServer.Dts.Runtime.DTSExecResult.Failure
};
#endregion
}
}
*
*Create two variables, one Object (User::ContractSet in the example) and one String (User::ContractValues) in your package.
*Set the User::ContractSet as a ReadVariable and User::ContractValues as a ReadWriteVariable on your Script Task.
*Note you'll have to set the fieldname you're concatenating from your Recordset in the LINQ select query (it's "contract" in the example.)
*This script requires references to the System.Data.DataSetExtensions and System.Linq assemblies.
Place this task after your data flow populating User::ContractSet and then User::ContractValues will have a comma-separated list you can use to build a dynamic query.
| |
doc_23538843
|
var panel = new Array(3).fill(new Array(3).fill('*'));
Result is
[
['*', '*', '*'],
['*', '*', '*'],
['*', '*', '*']
]
After that I need to replace middle string:
panel[1][1] = '#';
And the result is
[
['*', '#', '*'],
['*', '#', '*'],
['*', '#', '*']
]
So it seems that the fill function fills the array with the same object by reference.
Is it possible to force fill to populate the array with distinct object instances?
If not, what is, in your opinion, the best approach for creating such a two-dimensional array?
Update
The algorithm is supposed to create a plane of asterisks; many of those asterisks will then be replaced with other symbols, producing some meaningful result. After that it will be printed.
In C it could be done using matrix of chars and changing symbols at any indices.
But since we cannot simply change a single character in a JavaScript string, I'm trying to find a good analogue.
Currently I create the two-dimensional array using two loops, change the symbols I need, and then join the result into a single string using the map and join functions. But is that the best solution?
Thank you.
A: Not sure about the efficiency of this method, but converting the array to a string with JSON.stringify() and then parsing it back with JSON.parse() makes the objects lose their original references, so it works without any pain.
var panel = new Array(3).fill(new Array(3).fill('*'));
var copy = JSON.parse(JSON.stringify(panel));
copy[1][1] = '#';
//you can assign the value of variable 'copy' back to 'panel' if required.
document.write(JSON.stringify(copy)); //for demo
You can simplify the above logic into a single line of code without the extra variable, this way:
var panel = JSON.parse(JSON.stringify(new Array(3).fill(new Array(3).fill('*'))));
panel[1][1] = '#';
document.write(JSON.stringify(panel)); //for demo
A: This is kind of a dirty fix, but I'll give it anyway:
var panel = new Array(3).fill(new Array(3).fill('*'));
panel[1] = [panel[1][0],"#",panel[1][2]];
console.log(panel);
/*
Result:
[
['*', '*', '*'],
['*', '#', '*'],
['*', '*', '*']
]
*/
| |
doc_23538844
|
I am following the Ruby Bits course from Code School.
It adds a library called active_support to Ruby,
but this method is not working for me.
I think the function is deprecated,
but I am not sure.
require 'active_support/all'
{1 => 2}.diff(1 => 2) # => {}
{1 => 2}.diff(1 => 3) # => {1 => 2}
{}.diff(1 => 2) # => {1 => 2}
{1 => 2, 3 => 4}.diff(1 => 2) # => {3 => 4}
fernando@fernando:~/ruby$ ruby tweets.rb
tweets.rb:2:in `<main>': undefined method `diff' for {1=>2}:Hash (NoMethodError)
fernando@fernando:~/ruby$ irb
irb(main):001:0> require 'active_support/all'
=> true
irb(main):002:0> {1 => 2}.diff(1 => 2) # => {}
NoMethodError: undefined method `diff' for {1=>2}:Hash
from (irb):2
from /usr/bin/irb:12:in `<main>'
irb(main):003:0> {1 => 2}.diff(1 => 3) # => {1 => 2}
NoMethodError: undefined method `diff' for {1=>2}:Hash
from (irb):3
from /usr/bin/irb:12:in `<main>'
irb(main):004:0> {}.diff(1 => 2) # => {1 => 2}
NoMethodError: undefined method `diff' for {}:Hash
from (irb):4
from /usr/bin/irb:12:in `<main>'
irb(main):005:0> {1 => 2, 3 => 4}.diff(1 => 2) # => {3 => 4}
NoMethodError: undefined method `diff' for {1=>2, 3=>4}:Hash
from (irb):5
from /usr/bin/irb:12:in `<main>'
A: The Rails team deprecated Hash#diff in ActiveSupport in favor of MiniTest#diff. See https://github.com/rails/rails/pull/8142.
They tend to deprecate things often (another reason why testing is so important).
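If the old behaviour is still needed, a small re-implementation can be sketched (a monkey-patch of my own mirroring the removed Hash#diff semantics, not an official replacement):

```ruby
# Reinstate a diff that returns the entries differing between two hashes:
# pairs of self whose value differs in other, plus pairs of other whose
# key is absent from self.
class Hash
  def diff(other)
    dup.
      delete_if { |k, v| other[k] == v }.
      merge!(other.dup.delete_if { |k, _| key?(k) })
  end
end

p({1 => 2}.diff(1 => 2))         # => {}
p({1 => 2}.diff(1 => 3))         # => {1=>2}
p({}.diff(1 => 2))               # => {1=>2}
p({1 => 2, 3 => 4}.diff(1 => 2)) # => {3=>4}
```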
| |
doc_23538845
|
CREATE TABLE table1
(
CreatedAt DAteTimeOffset NULL
);
How can I insert 500 rows into that table in a while loop, with each date 5 seconds apart? I want my result to look like this:
2018-10-08 05:00:00.0000000 +00:00
2018-10-08 05:00:05.0000000 +00:00
2018-10-08 05:00:10.0000000 +00:00
2018-10-08 05:00:15.0000000 +00:00
2018-10-08 05:00:20.0000000 +00:00
2018-10-08 05:00:25.0000000 +00:00
2018-10-08 05:00:30.0000000 +00:00
2018-10-08 05:00:35.0000000 +00:00
2018-10-08 05:00:40.0000000 +00:00
2018-10-08 05:00:45.0000000 +00:00
2018-10-08 05:00:50.0000000 +00:00
2018-10-08 05:00:55.0000000 +00:00
2018-10-08 05:01:00.0000000 +00:00
2018-10-08 05:01:05.0000000 +00:00
2018-10-08 05:01:10.0000000 +00:00
2018-10-08 05:01:15.0000000 +00:00
and so on ...
I have a while loop here, but I don't know how to insert consecutive rows with values every 5 seconds.
DECLARE
@i int = 0
WHILE @i < 500
BEGIN
INSERT INTO table1
(CreatedAt)
VALUES
(?)
END
A: Try to use a set-based approach. It's usually much faster:
WITH N AS --generate 500 rows (1..500)
(
SELECT ROW_NUMBER() OVER (ORDER BY (SELECT 1)) N
FROM (VALUES (1),(2),(3),(4),(5)) A(A)
CROSS JOIN (VALUES (1),(2),(3),(4),(5),(6),(7),(8),(9),(10)) B(B)
CROSS JOIN (VALUES (1),(2),(3),(4),(5),(6),(7),(8),(9),(10)) C(C)
)
INSERT INTO table1 SELECT DATEADD(SECOND, (N-1)*5, SYSDATETIME()) FROM N
If you really, really need a loop (discouraged), you can use following:
DECLARE @i int = 0;
DECLARE @d DAteTimeOffset = SYSDATETIME();
WHILE @i<500
BEGIN
INSERT table1 VALUES (@d);
SET @d = DATEADD(second, 5, @d);
SET @i += 1;
END
A: I think Zohar means this solution
declare @t table (CreatedAt datetime)
insert into @t
select top 500
dateadd(second, (row_number() over (order by @@spid ) - 1) * 5, sysdatetime())
from sys.objects
--cross join sys.objects a cross join sys.objects b
select * from @t
| |
doc_23538846
|
Right now, if I manually launch AFTER powering up my device, the application launches fine.
However, if I try and define it as a launcher with the following intent filter in my manifest
<intent-filter>
<action android:name="android.intent.action.MAIN" />
<category android:name="android.intent.category.DEFAULT" />
<category android:name="android.intent.category.LAUNCHER" />
<category android:name="android.intent.category.HOME" />
</intent-filter>
it crashes when the device boots.
I don't seem to be getting any useful log info since the application crashes on startup.
Is the issue most likely initialization-related, e.g. getting nulls in my onResume handler?
If so, how can I confirm and fix?
This is the contents of my onCreate method
public void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
mViewFlipper = (ViewFlipper) findViewById(R.id.viewFlipper1);
next_in = AnimationUtils.loadAnimation(this, R.anim.transition_next_in);
next_out = AnimationUtils.loadAnimation(this, R.anim.transition_next_out);
previous_in = AnimationUtils.loadAnimation(this, R.anim.transition_previous_in);
previous_out = AnimationUtils.loadAnimation(this, R.anim.transition_previous_out);
minutes = (NumberPicker) findViewById(R.id.picker_minutes);
minutes.setMaxValue(60);
seconds = (NumberPicker) findViewById(R.id.picker_seconds);
seconds.setMaxValue(60);
chk_phone = (CheckBox) findViewById(R.id.check_phone);
chk_phone.setOnCheckedChangeListener(checkboxListener);
chk_sms = (CheckBox) findViewById(R.id.check_sms);
chk_sms.setOnCheckedChangeListener(checkboxListener);
chk_weather = (CheckBox) findViewById(R.id.check_weather);
chk_weather.setOnCheckedChangeListener(checkboxListener);
// 1. Get a reference to the UsbManager (there's only one, so you don't
// instantiate it)
mUsbManager = (UsbManager) getSystemService(USB_SERVICE);
// 2. Create the Connection object
connection = new UsbConnection12(this, mUsbManager);
// 3. Instantiate the WroxAccessory
mAccessory = new WroxAccessory(this);
//*/
}
| |
doc_23538847
|
Here is my sample code:
// All the jPanels
JFrame frame = new JFrame();
frame.setLayout(new MigLayout());
JPanel jp1 = new JPanel();
jp1.setLayout(new MigLayout());
jp1.setBorder(BorderFactory.createLineBorder(Color.DARK_GRAY, 1));
JPanel jp2 = new JPanel();
jp2.setLayout(new MigLayout());
jp2.setBorder(BorderFactory.createLineBorder(Color.DARK_GRAY, 1));
JPanel jp3 = new JPanel();
jp3.setLayout(new MigLayout());
jp3.setBorder(BorderFactory.createLineBorder(Color.DARK_GRAY, 1));
JPanel jp4 = new JPanel();
jp4.setLayout(new MigLayout());
jp4.setBorder(BorderFactory.createLineBorder(Color.DARK_GRAY, 1));
JPanel jp5 = new JPanel();
jp5.setLayout(new MigLayout());
jp5.setBorder(BorderFactory.createLineBorder(Color.DARK_GRAY, 1));
JPanel jp6 = new JPanel();
jp6.setLayout(new MigLayout());
jp6.setBorder(BorderFactory.createLineBorder(Color.DARK_GRAY, 1));
JPanel jp7 = new JPanel();
jp7.setLayout(new MigLayout());
jp7.setBorder(BorderFactory.createLineBorder(Color.DARK_GRAY, 1));
JPanel bigPanel1 = new JPanel();
bigPanel1.setLayout(new MigLayout());
bigPanel1.setBorder(BorderFactory.createLineBorder(Color.DARK_GRAY, 1));
JPanel bigPanel2 = new JPanel();
bigPanel2.setLayout(new MigLayout());
bigPanel2.setBorder(BorderFactory.createLineBorder(Color.DARK_GRAY, 1));
//All the labels to be added to JPanel jp1
JLabel label1 = new JLabel();
label1.setText("LABEL1");
JLabel label2 = new JLabel();
label2.setText("LABEL2");
JLabel label3 = new JLabel();
label3.setText("LABEL3");
JLabel label4 = new JLabel();
label4.setText("LABEL4");
jp1.add(label1);
jp1.add(label2);
jp1.add(label3);
jp1.add(label4,"wrap");
bigPanel1.add(jp2);
bigPanel1.add(jp6);
bigPanel1.add(jp3,"grow,wrap");
bigPanel2.add(jp4);
bigPanel2.add(jp7);
bigPanel2.add(jp5,"grow,wrap");
frame.getContentPane().add(jp1,"dock north, wrap");
frame.getContentPane().add(bigPanel1,"span,grow,wrap");
frame.getContentPane().add(bigPanel2,"span,grow,wrap");
frame.pack();
frame.setVisible(true);
Which results in this output (see the linked GUI output screenshot).
What I want to achieve is to be able to add labels to the first JPanel (jp1) without affecting the width of the remaining JPanels.
Additionally, I want the several JPanels inside each bigPanel to occupy its full width, e.g. jp2, jp6 and jp3 filling bigPanel1.
How should I do this? Thanks in advance.
A: I have never used MigLayout, and personally don't see the need for it when this can be done with the default Java layout managers.
I used a combination of FlowLayout and GridBagLayout to achieve this, along with gc.fill = GridBagConstraints.NONE and gc.anchor = GridBagConstraints.WEST for the panels we don't want to fill the content pane's width. I also updated the code, as per your comment, to stop the JPanel/JFrame from growing larger than the given max width when more JLabels are added; this was done using a JScrollPane:
import java.awt.Color;
import java.awt.Dimension;
import java.awt.GridBagConstraints;
import java.awt.GridBagLayout;
import javax.swing.JFrame;
import javax.swing.JLabel;
import javax.swing.JPanel;
import javax.swing.JScrollPane;
import javax.swing.SwingUtilities;
import javax.swing.border.LineBorder;
public class Test {
public Test() {
JFrame frame = new JFrame();
frame.setDefaultCloseOperation(JFrame.DISPOSE_ON_CLOSE);
frame.setLayout(new GridBagLayout());
final JPanel labelPanel = new JPanel();
labelPanel.setBorder(new LineBorder(Color.black));
for (int i = 0; i < 5; i++) {
labelPanel.add(new JLabel("Label" + (i + 1)));
}
final int maxWidth = 200;
final JScrollPane jsp = new JScrollPane(labelPanel) {
@Override
public Dimension getPreferredSize() {
//set the height by checking whether we exceed the wanted width; if so a horizontal scrollbar will appear and we must incorporate its height, or the labels won't be shown nicely
return new Dimension(maxWidth, labelPanel.getPreferredSize().width < maxWidth ? (labelPanel.getPreferredSize().height + 5) : ((labelPanel.getPreferredSize().height + getHorizontalScrollBar().getPreferredSize().height) + 5));
}
};
JPanel otherPanel = new JPanel();
otherPanel.add(new JLabel("label"));
otherPanel.setBorder(new LineBorder(Color.black));
JPanel otherPanel2 = new JPanel();
otherPanel2.add(new JLabel("label 1"));
otherPanel2.add(new JLabel("label 2"));
otherPanel2.setBorder(new LineBorder(Color.black));
GridBagConstraints gc = new GridBagConstraints();
gc.fill = GridBagConstraints.BOTH;
gc.weightx = 1.0;
gc.weighty = 1.0;
gc.gridx = 0;
gc.gridy = 0;
frame.add(jsp, gc);
gc.fill = GridBagConstraints.NONE;
gc.anchor = GridBagConstraints.WEST;
gc.gridy = 1;
frame.add(otherPanel, gc);
gc.anchor = GridBagConstraints.WEST;
gc.gridy = 2;
frame.add(otherPanel2, gc);
frame.pack();
frame.setVisible(true);
frame.revalidate();
frame.repaint();
}
public static void main(String[] args) {
//Create Swing components on EDT
SwingUtilities.invokeLater(new Runnable() {
@Override
public void run() {
new Test();
}
});
}
}
A: I have used BorderLayout and FlowLayout to manage the layouts. The frame has two JPanels, and one of them contains two more JPanels. All the internal panels use FlowLayout to align the JLabels. To arrange these panels on the JFrame I have used BorderLayout.
import java.awt.BorderLayout;
import java.awt.Color;
import java.awt.FlowLayout;
import java.awt.GridBagLayout;
import javax.swing.JFrame;
import javax.swing.JLabel;
import javax.swing.JPanel;
import javax.swing.SwingUtilities;
import javax.swing.border.LineBorder;
public class LayoutTest {
public LayoutTest() {
JFrame frame = new JFrame();
frame.setDefaultCloseOperation(JFrame.DISPOSE_ON_CLOSE);
frame.setLayout(new GridBagLayout());
JPanel motherPanel = new JPanel(new BorderLayout());
JPanel topPanel = new JPanel(new BorderLayout());
JPanel bottomPanel = new JPanel(new FlowLayout(FlowLayout.LEFT));
motherPanel.add(topPanel, BorderLayout.NORTH);
motherPanel.add(bottomPanel, BorderLayout.CENTER);
JPanel topUpperPanel = new JPanel(new FlowLayout(FlowLayout.LEFT));
JPanel topBottomPanel = new JPanel(new FlowLayout(FlowLayout.LEFT));
topUpperPanel.setBorder(new LineBorder(Color.BLACK));
topBottomPanel.setBorder(new LineBorder(Color.BLACK));
bottomPanel.setBorder(new LineBorder(Color.BLACK));
topPanel.add(topUpperPanel, BorderLayout.PAGE_START);
topPanel.add(topBottomPanel, BorderLayout.CENTER);
for(int i = 0; i < 3; i++) {
JLabel label = new JLabel("Label-" + String.valueOf(i));
label.setBorder(new LineBorder(Color.BLACK));
topUpperPanel.add(label);
}
for(int i = 0; i < 2; i++) {
JLabel label = new JLabel("Label-" + String.valueOf(i));
label.setBorder(new LineBorder(Color.BLACK));
topBottomPanel.add(label);
}
for(int i = 0; i < 5; i++) {
JLabel label = new JLabel("Label-" + String.valueOf(i));
label.setBorder(new LineBorder(Color.BLACK));
bottomPanel.add(label);
}
frame.add(motherPanel);
frame.setTitle("Layout Manager");
frame.pack();
frame.setVisible(true);
}
public static void main(String[] args) {
SwingUtilities.invokeLater(new Runnable() {
@Override
public void run() {
new LayoutTest();
}
});
}
}
P.S.: I would suggest separating the panels so that there is no interference with the remaining JPanels.
Here are two example ajax calls I have:
function checkOrders() {
$.ajax({
type: "POST" ,
url:"/service/index.php" ,
data: {
q: "checkOrders"
} ,
complete: function(result) {
// note here the JSON.parse() clause
var x = JSON.parse(result.responseText);
if (x['unhandled_status']>0) {
noty({
text: '<center>There are currently <b>'+x['unhandled_status']+'</b> unhandled Orders.',
type: "information",
layout: "topRight",
modal: false ,
timeout: 5000
});
}
} ,
xhrFields: {
withCredentials: true
}
});
}
Note in the above example I have to JSON.parse() the responseText from my PHP page in order to deal with it as an object. It somehow sees the overall PHP response as an object, and I have to pull the responseText from that object and JSON.parse() it, in order to use it.
Now here is another ajax call that I have, whereby the returned response, I can use directly as a json response - meaning, somehow the PHP page does not return a full "object" but ONLY returns the json and my ajax call somehow already knows it is JSON and I do not need to JSON.parse() it:
function getUnfiledOrders() {
$.ajax({
type: "POST" ,
url:"/service/index.php" ,
data: {
queryType: "getUnfiledOrders"
} ,
success: function(result) {
if (result['total_records'] >0) {
noty({
text: result['response'],
type: "error",
modal: false,
dismissQueue: true,
layout: "topRight",
theme: 'defaultTheme'
});
}
} ,
xhrFields: {
withCredentials: true
}
});
}
In this case, I do not need to JSON.parse() the responseText in order to treat the response as a JSON object.
Both PHP response scripts look something like this:
header('content-type:application/json');
$array = array("total_records"=>3,"response"=>"SUCCESS");
echo json_encode($array);
Can anybody clue me in on this non-uniformity?
EDIT:
I realized that I had two different callbacks in each of the above ajax calls. One was on complete and the other was on success.
When I switched them both to success the returned response from my ajax request was handled uniformly.
So I guess my question now is:
*
*Why is there a non-uniformity between these two callbacks?
*Which is better to use?
A: I just thoroughly reviewed the docs on $.ajax() as covered here.
Anyway, the short version is that complete and success receive different arguments in their callback functions.
complete returns this:
complete
Type: Function( jqXHR jqXHR, String textStatus )
success returns this:
success
Type: Function( Anything data, String textStatus, jqXHR jqXHR )
When I have complete, the first parameter returned is an object so that is why I have to use:
complete: function(data) {
var x = JSON.parse(data.responseText);
}
As I am having to fetch the responseText in the object that is returned - and it has a string value (not necessarily formatted as JSON).
The success callback receives, as its first argument, whatever is passed through from PHP. In this case it is a json_encoded string that jQuery has already parsed into an object (thanks to the application/json content type), so you can work with it as soon as it is returned, with no further manipulation.
Ta-dah!
In addition, note that
.success() only gets called if the server responds with a success status code (2xx); .complete() will always get called, regardless of status code
A: My previous response does not apply to your problem, although it is something you (and everyone) might need to be aware of.
Regarding your point about complete vs success -- yes, the returned values are in different order. Still, you don't need to parse the text on complete. The jqXHR response has a responseJSON object that you can use directly.
http://api.jquery.com/jQuery.ajax/#jqXHR
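To illustrate that point, here is a minimal sketch (mine, not from the answers above) of a helper that prefers the jqXHR's responseJSON property and falls back to parsing responseText, so a complete callback can treat both cases uniformly. The jqXHR objects are mocked as plain objects for illustration.

```javascript
// Prefer the already-parsed responseJSON; fall back to parsing the raw text.
function extractJson(jqXHR) {
    // jQuery populates responseJSON when the response was recognized as JSON
    // (e.g. because of the application/json content type header).
    if (jqXHR.responseJSON !== undefined) {
        return jqXHR.responseJSON;
    }
    // Otherwise parse the raw text ourselves.
    return JSON.parse(jqXHR.responseText);
}

// Simulated jqXHR objects, standing in for what complete() receives:
var parsed = extractJson({ responseJSON: { total_records: 3 } });
var fallback = extractJson({ responseText: '{"total_records":3}' });
console.log(parsed.total_records, fallback.total_records);
```

Either way, the callback ends up with a real object rather than a JSON string.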
Previous Response
Are you calling checkOrders from a form, using a form button?
I have found that jQuery expects the data to be form-encoded in these cases, regardless of you telling it you expect JSON.
jquery ajax returns success when directly executed, but returns error when attached to button, even though server response is 200 OK
Not sure if this is by design or a bug.
If this is your case, try changing form tag to div, or putting the button outside of the form.
A: In your case I would just use $.getJSON like the following:
$.getJSON( "api.php", { user: "John", password: "1234" } )
.done(function( json ) {
console.log( "JSON object: " + json.friends[ 3 ].name );
})
.fail(function( jqxhr, textStatus, error ) {
console.log( "Sorry, the request failed.");
});
This way you can make sure that the answer is always a js object.
Hope this helped,
Sebastian
Part of the procedure.
PROCEDURE backup(p_target_schemas IN VARCHAR2,
p_filename IN VARCHAR2)
AS
l_handler NUMBER;
BEGIN
l_handler := DBMS_DATAPUMP.open (operation => 'EXPORT', job_mode => 'SCHEMA', job_name => 'EXM', version => '11.2');
DBMS_DATAPUMP.add_file (handle =>l_handler, filename => p_filename, directory =>'DATA_PUMP_DIR');
DBMS_DATAPUMP.metadata_filter(handle => l_handler, name => 'SCHEMA_EXPR', value => 'IN(' || p_target_schemas || ')');
DBMS_DATAPUMP.start_job (l_handler);
...
...
...
Is there something missing to export the CTX_PREFERENCES and that index?
Dim i As Integer
Dim ii As Integer
i = 2
ii = 2
For i = 2 To a
For ii = 2 To a
If Worksheets("Sheet1").Cells(i, 6) = Worksheets("Sheet2").Cells(ii, 6) Then
If Worksheets("Sheet1").Cells(i, 5) = Worksheets("Sheet2").Cells(ii, 5) Then
Worksheets("Sheet1").Cells(i, 18) = Worksheets("Sheet2").Cells(ii, 18)
Worksheets("Sheet1").Cells(i, 19) = Worksheets("Sheet2").Cells(ii, 19)
Worksheets("Sheet1").Cells(i, 20) = Worksheets("Sheet2").Cells(ii, 20)
Worksheets("Sheet1").Cells(i, 21) = Worksheets("Sheet2").Cells(ii, 21)
End If
End If
Next ii
Next i
A: Instead of the four lines, use this one:
Worksheets("Sheet2").Range(Worksheets("Sheet2").Cells(ii, 18),Worksheets("Sheet2").Cells(ii, 21)).copy Worksheets("Sheet1").Range(Worksheets("Sheet1").Cells(i, 18),Worksheets("Sheet1").Cells(i, 21))
or you could put them in a With block:
With Worksheets("Sheet2")
.Range(.Cells(ii,18),.cells(ii,21)).copy Worksheets("Sheet1").Range(Worksheets("Sheet1").Cells(i, 18),Worksheets("Sheet1").Cells(i, 21))
End With
Or to make it even shorter; Declare the worksheets as variables:
Dim ows as worksheet
Dim tws as worksheet
Set ows = Worksheets("Sheet2")
Set tws = Worksheets("Sheet1")
With ows
.Range(.Cells(ii,18),.cells(ii,21)).copy tws.Range(tws.Cells(i, 18),tws.Cells(i, 21))
End With
The point is to allow students to get code on their phones and see what a method would return. A teacher would write a simple programming method that returns some integers. The students need to be able to get that method into the application. They can see the code and enter what they would expect the method to return. Then they need to be able to run the method so as to be able to see what the method returns...
Basically I want to be able to unit test some code that isn't on my blackberry yet...
I saw an advert for an app which does something similar... How can I do this:
.net for BlackBerry (C and VB.NET)
If you are learning ASP.NET, C# or Visual Basic.NET, then this application allows you type .NET code onto
your BlackBerry®, and view the output in seconds. It is an excellent way to try out snippets of .NET code
without having to fire up Visual Studio on your PC.
Code written with this application is stored on a remote server, and thus can be recalled from other devices,
so it can be shared with other users of this app.
A: I think you need to put your application binaries on a server so that your students are able to download them OTA (over the air).
(http://blackberrystorm.wikidot.com/setup-blackberry-ota-applications)
then, you can use Global Events to communicate between your app and other apps.
(http://www.blackberry.com/knowledgecenterpublic/livelink.exe/fetch/2000/348583/800332/800620/What_Is_-_Global_Events_and_Global_Event_Listeners.html?nodeid=800527&vernum=0)
But accessing methods from another app on the device in real time... I don't think that is possible.
A: The BlackBerry JVM doesn't support reflection; however, if the applications all extend a class that is also defined in the base application, and you can derive the fully qualified name of that class in the downloaded software, you can do something similar:
abstract public class MyDemoObject {
abstract public void myDemoMethod();
}
Then:
Class clazz = Class.forName("org.some.sample.ClassName");
MyDemoObject obj = (MyDemoObject) clazz.newInstance();
obj.myDemoMethod();
In the downloaded app:
package org.some.sample
public class ClassName extends MyDemoObject {
public void myDemoMethod() {
System.out.println("package org.some.sample.myDemoMethod()");
}
}
Kind of clunky but it does work. myDemoMethod will be running in the context of the base application, which is not a bad lesson to learn about programming for BlackBerry OS.
def updatefig(*args):
text_component.set_text(newText())
image_component.set_array(newArrayData())
contour_component.set_array(newArrayData())
return [text_component,image_component,contour_component]
This code doesn't raise an exception, but neither does it update the contour lines. I wonder if this is just a matter of my not knowing the right setter method or if there is more to it. Can anyone tell me if this is possible?
Thanks,
Eli
A: I didn't fully understand your code (also because it is not complete) and would rather comment on your question than answer it... (but I don't have enough reputation to be able to do that!)
Anyway... I think the problem might be related to the contour itself: contour() returns not an Artist, but a QuadContourSet instance! Do you think this might be the problem? I had something similar with ArtistAnimation...
If that is the point, you have to "punch the QuadContourSet until it behaves like an Artist"... I could solve my problem with all the information in the link!
Good luck!
I read that the problem could be that I didn't add the assembly in the config file, but I did, and it is still not fixed...
If someone has a little time, please help me fix the problem.
here is the code I call to add a new Player object
private void btnInsert_Click(object sender, EventArgs e)
{
Player playerData = new Player();
SetPlayerInfo(playerData);
using (ISession session = SessionFactory.OpenSession)
{
using (ITransaction transaction = session.BeginTransaction())
{
try
{
session.Save(playerData); // here it spits
transaction.Commit();
GetPlayerInfo();
}
catch (Exception ex)
{
transaction.Rollback();
throw ex;
}
}
}
}
private void GetPlayerInfo()
{
using (ISession session = SessionFactory.OpenSession)
{
IQuery query = session.CreateQuery("FROM Player");
IList<Player> pInfos = query.List<Player>();
dgvDisplay.DataSource = pInfos;
}
}
private void SetPlayerInfo(Player playerData)
{
playerData.PlayerName = tbxName.Text;
playerData.PlayerAge = Convert.ToInt32(tbxAge.Text);
playerData.DOJ = Convert.ToDateTime(dtpDOJ.Text);
playerData.BelongsTo = cmbBelongsTo.SelectedItem.ToString();
}
here is the mapping Player.hbm.xml code
<hibernate-mapping xmlns="urn:nhibernate-mapping-2.2" auto-import="true">
<class name="NHibernateExperiment.Player, NHibernateExperiment" lazy="true">
<id name="PlayerId">
<generator class="native"/>
</id>
<property name="PlayerName" column ="PlayerName"/>
<property name="PlayerAge" column ="PlayerAge"/>
<property name="DOJ" column="DOJ"/>
<property name="BelongsTo" column="BelongsTo"/>
</class>
</hibernate-mapping>
here is the App.config code
<configuration>
<configSections>
<section name="hibernate-configuration" type="NHibernate.Cfg.ConfigurationSectionHandler, NHibernate" />
</configSections>
<hibernate-configuration xmlns="urn:nhibernate-configuration-2.2">
<session-factory>
<property name="connection.provider">NHibernate.Connection.DriverConnectionProvider</property>
<property name="connection.driver_class">NHibernate.Driver.SqlClientDriver</property>
<property name="connection.connection_string">Server=GRITCAN;database=testDB;Integrated Security=SSPI;</property>
<property name="dialect">NHibernate.Dialect.MsSql2008Dialect</property>
<property name="show_sql">true</property>
<property name='proxyfactory.factory_class'>NHibernate.ByteCode.LinFu.ProxyFactoryFactory, NHibernate.ByteCode.LinFu</property>
</session-factory>
</hibernate-configuration>
</configuration>
here is the StackTrace
at NHibernateExperiment.Form1.btnInsert_Click(Object sender, EventArgs e) in E:\projects\tests\NHibernate\NHibernateExperiment\NHibernateExperiment\Form1.cs:line 72
at System.Windows.Forms.Control.OnClick(EventArgs e)
at System.Windows.Forms.Button.OnClick(EventArgs e)
at System.Windows.Forms.Button.OnMouseUp(MouseEventArgs mevent)
at System.Windows.Forms.Control.WmMouseUp(Message& m, MouseButtons button, Int32 clicks)
at System.Windows.Forms.Control.WndProc(Message& m)
at System.Windows.Forms.ButtonBase.WndProc(Message& m)
at System.Windows.Forms.Button.WndProc(Message& m)
at System.Windows.Forms.Control.ControlNativeWindow.OnMessage(Message& m)
at System.Windows.Forms.Control.ControlNativeWindow.WndProc(Message& m)
at System.Windows.Forms.NativeWindow.DebuggableCallback(IntPtr hWnd, Int32 msg, IntPtr wparam, IntPtr lparam)
at System.Windows.Forms.UnsafeNativeMethods.DispatchMessageW(MSG& msg)
at System.Windows.Forms.Application.ComponentManager.System.Windows.Forms.UnsafeNativeMethods.IMsoComponentManager.FPushMessageLoop(IntPtr dwComponentID, Int32 reason, Int32 pvLoopData) at System.Windows.Forms.Application.ThreadContext.RunMessageLoopInner(Int32 reason, ApplicationContext context)
at System.Windows.Forms.Application.ThreadContext.RunMessageLoop(Int32 reason, ApplicationContext context)
at System.Windows.Forms.Application.Run(Form mainForm)
at NHibernateExperiment.Program.Main() in E:\projects\tests\NHibernate\NHibernateExperiment\NHibernateExperiment\Program.cs:line 16 at System.AppDomain._nExecuteAssembly(RuntimeAssembly assembly, String[] args)
at System.AppDomain.ExecuteAssembly(String assemblyFile, Evidence assemblySecurity, String[] args) at Microsoft.VisualStudio.HostingProcess.HostProc.RunUsersAssembly()
at System.Threading.ThreadHelper.ThreadStart_Context(Object state)
at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean ignoreSyncCtx)
at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state)
at System.Threading.ThreadHelper.ThreadStart()
I've added the following 2 references to the project
NHibernate.dll and NHibernate.ByteCode.LinFu.dll
Thanks a lot for your help!
Thank you!
The Build Action for the .hbm.xml file was Content.
As you suggested, I changed it to Embedded Resource and everything works fine :)
A: Did you make the mapping file an embedded resource?
A: Check if the XML mapping is marked as an embedded resource. I'd also recommend using the Fluent NHibernate library: it frees you from writing huge amounts of XML mappings, leaving only .NET classes.
A: Working solution:
Form1.cs
using NHibernate;
using NHibernate.Cfg;
using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Drawing;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using System.Windows.Forms;
namespace NHibernateTutorialPart1
{
public partial class Form1 : Form
{
private Configuration myConfiguration;
private ISessionFactory mySessionFactory;
private ISession mySession;
public Form1()
{
InitializeComponent();
}
private void Form1_Load(object sender, EventArgs e)
{
myConfiguration = new Configuration();
myConfiguration.Configure("hibernate_mysql.cfg.xml");
mySessionFactory = myConfiguration.BuildSessionFactory();
mySession = mySessionFactory.OpenSession();
using (mySession.BeginTransaction())
{
Contact lbContact=new Contact{FirstName="Nisha", LastName="Shrestha",ID=0};
mySession.Save(lbContact);
mySession.Transaction.Commit();
}
}
}
}
**hibernate_mysql.cfg.xml**
<?xml version="1.0" encoding="utf-8" ?>
<hibernate-configuration xmlns="urn:nhibernate-configuration-2.2">
<session-factory>
<property name="connection.provider">NHibernate.Connection.DriverConnectionProvider</property>
<property name="dialect">NHibernate.Dialect.MySQLDialect</property>
<property name="connection.driver_class">NHibernate.Driver.MySqlDataDriver</property>
<property name="connection.connection_string">
User Id=root;
Server=localhost;
Password=Password1;
Database=nhibernatecontacts;
</property>
<!--This is good for Debugging but not otherwise-->
<property name="show_sql">true</property>
<!--<property name="proxyfactory.factory_class">NHibernate.ByteCode.LinFu.ProxyFactoryFactory, NHibernate.ByteCode.LinFu</property>-->
<mapping assembly="NHibernateTutorialPart1"/>
</session-factory>
</hibernate-configuration>
**contact.hbm.xml**
<?xml version="1.0" encoding="utf-8" ?>
<hibernate-mapping xmlns="urn:nhibernate-mapping-2.2"
assembly="NHibernateTutorialPart1"
namespace="NHibernateTutorialPart1">
<class name="Contact" table="contact">
<id name="ID" column="ID">
<generator class="identity" />
</id>
<property name="FirstName" />
<property name="LastName" />
</class>
</hibernate-mapping>
**Contact class**
namespace NHibernateTutorialPart1
{
public class Contact
{
public virtual int ID { get; set; }
public virtual string FirstName { get; set; }
public virtual string LastName { get; set; }
}
}
**App.config**
<?xml version="1.0" encoding="utf-8" ?>
<configuration>
<configSections>
<section name="hibernate-configuration" type="NHibernate.Cfg.ConfigurationSectionHandler,NHibernate"/>
</configSections>
</configuration>
You need to add references for Iesi.Collections and NHibernate.
You need to mark contact.hbm.xml and hibernate_mysql.cfg.xml as embedded resources.
I used MySQL Workbench, where I created a new schema and a new table with ID, first name and last name columns.
The cause of the "No persister for" exception was that I forgot to give the table name in contact.hbm.xml.
var1 var2 var3 var4 var5
23 1 0 0 0
23 0 0 0 1
43 0 0 0 1
43 0 1 1 0
I need to check the values of var2, var3, var4 and var5, and change the binary values so that, for rows with duplicate values in var1, all other variables end up with the same values. When deciding which duplicate is to be changed, priority is given to var2.
So I need to have my final dataframe as follows:
var1 var2 var3 var4 var5
23 1 0 0 0
23 1 0 0 0
43 0 1 1 1
43 0 1 1 1
Any suggestions?
Thank you
A: I think this was not well explained, since my answer got two downvotes :) I hope you will forgive me for that, as this is my first attempt to automate a set of rules that I have been applying in Excel.
I will explain in other words.
Basically, I have a list of transactions and var1 is a transaction ID. The other variables are decisions that I am taking on each transaction: var2 - reject; var3 - correct; var4 - accept; var5 - accept and "do something else". The same transaction must always get the same decision. It happens that for some transactions the decisions were taken separately, which is why they differ. My goal is to adjust the decisions for rows with the same transaction ID so that they match.
Regarding decisions, reject (var2) has priority. If one row is rejected, the other must also be rejected. The priority of var2 comes from here.
If var2 = 1, the others are 0.
Regarding the other variables: they may have several ones at once, e.g. var3 = 1, var4 = 1, var5 = 1, but in that case var2 = 0 (always). What matters is that transactions with the same ID end up with the same decisions.
Hope it helps.
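The rules above can be sketched as a single function (a minimal sketch of my intent, not part of the original post; column names are as in the question):

```python
import pandas as pd

def align_decisions(df):
    # For each transaction ID, give var2 (reject) priority: if any duplicate
    # row is rejected, every row for that ID is rejected and the remaining
    # decisions are zeroed; otherwise propagate the maximum of each decision.
    out = df.copy()
    out["var2"] = out.groupby("var1")["var2"].transform("max")
    for col in ["var3", "var4", "var5"]:
        out[col] = out.groupby("var1")[col].transform("max")
        out.loc[out["var2"] == 1, col] = 0
    return out

df = pd.DataFrame({"var1": [23, 23, 43, 43],
                   "var2": [1, 0, 0, 0],
                   "var3": [0, 0, 0, 1],
                   "var4": [0, 0, 0, 1],
                   "var5": [0, 1, 1, 0]})
result = align_decisions(df)
print(result)
```

On the sample data this reproduces the desired output: both rows for ID 23 become 1/0/0/0 and both rows for ID 43 become 0/1/1/1.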
A: If I have understood your logic:
import pandas as pd
df = pd.DataFrame({'var1': [23, 23, 43, 43],
'var2': [1, 0, 0, 0],
'var3': [0, 0, 0, 1],
'var4': [0, 0, 0, 1],
'var5': [0, 1, 1, 0]})
print(df)
df['var2'] = df.groupby(['var1'])['var2'].transform('max')
f = 1 - df['var2']
df['var3'], df['var4'],df['var5'] =[f]*3
print(df)
output:
var1 var2 var3 var4 var5
23 1 0 0 0
23 1 0 0 0
43 0 1 1 1
43 0 1 1 1
A: I think I did it. Maybe it's too long but it works :)
Thank you Frenchy once again for the "groupby" suggestion!
import pandas as pd
import numpy as np
df = pd.DataFrame({'var1': [23, 23, 43, 43, 53],
'var2': [1, 0, 0, 0,1],
'var3': [0, 0, 0, 1, 0],
'var4': [0, 0, 0, 1, 0],
'var5': [0, 1, 1, 0, 0]})
print(df)
df['Dup'] = np.where(df['var1'].duplicated(keep=False), 'dup', np.nan)
df['var2'] = np.where(df['Dup']=="dup", df.groupby(['var1'])['var2'].transform('max'), df['var2'])
df['var3'] = np.where((df['Dup']=="dup") & (df['var2']==1), 0 , df['var3'])
df['var4'] = np.where((df['Dup']=="dup") & (df['var2']==1), 0 , df['var4'])
df['var5'] = np.where((df['Dup']=="dup") & (df['var2']==1), 0 , df['var5'])
df['others_dup'] = np.where((df['Dup']=='dup') & (df['var2']==0), 1, np.nan)
df['var3']=np.where(df['others_dup']==1, df.groupby(['var1'])['var3'].transform('max'), df['var3'])
df['var4']=np.where(df['others_dup']==1, df.groupby(['var1'])['var4'].transform('max'), df['var4'])
df['var5']=np.where(df['others_dup']==1, df.groupby(['var1'])['var5'].transform('max'), df['var5'])
print(df)
I am using a POP3 server.
The solution should preferably be in VB.NET or C#, and preferably with as few third-party libraries as possible.
Thanks in advance.
I can either get the fancybox to open with nothing by calling the fancybox directly, or I can populate the gridview without showing the fancybox.
This is the code I have that just populates the gridview at the moment, as this is where I need to start from.
Any and all help appreciated.
The method
Public Sub GetEmailContacts()
Session("RoleCode") = 27
Dim dt As DataTable
Dim dtToBind As DataTable = New DataTable()
dtToBind.Columns.Add("Contact Type", Type.GetType("System.String"))
dtToBind.Columns.Add("First Name", Type.GetType("System.String"))
dtToBind.Columns.Add("Last Name", Type.GetType("System.String"))
dtToBind.Columns.Add("Email Address", Type.GetType("System.String"))
dt = GetValues()
For Each dr As DataRow In dt.Rows
dtToBind.Rows.Add(dr(0).ToString(), dr(6).ToString(), dr(7).ToString(), dr(9).ToString())
Next
For Each dr As DataRow In dtToBind.Rows
Dim toButtonField = New ButtonField() With {
.ButtonType = ButtonType.Button,
.Text = "To: ",
.CommandName = "DoSomething"
}
Dim ccbuttonField = New ButtonField() With {
.ButtonType = ButtonType.Button,
.Text = "Cc: ",
.CommandName = "DoSomething"
}
gvContactList.Columns.Add(toButtonField)
gvContactList.Columns.Add(ccbuttonField)
Exit For
Next
gvContactList.DataSource = dtToBind
gvContactList.DataBind()
bttnTo.Attributes.Add("OnClientClick", "#emailAddress")
End Sub
The LinkButton :
<asp:LinkButton runat="server" cssclass="fancybox" ID="bttnTo" OnClick="getEmailContacts"><span style='font-size: 20px; color: darkgreen'><i id="toEmail" class="fa fa-users sameRow margin10"></i></span></asp:LinkButton>
this is whats in the JS File too
$(document).ready(function () {
$(".fancybox").fancybox({
parent: "form:first" // jQuery selector
});
});
a standard link
<a href="#emailAddresses" class="fancybox"><span style='font-size: 20px; color: darkgreen'><i id="toEmail" class="fa fa-users sameRow margin10"></i></span></a>
How would it be best to call this function?
A: Here, update your JavaScript:
$(document).ready(function () {
fancybox = $(".fancybox").fancybox({
parent: "form:first" // jQuery selector
});
if (gridLoaded) {
fancybox.click();
}
});
You must set gridLoaded to false on the first page load; then, when you are done loading the grid, set gridLoaded to true.
Is there a way to do the nodePath.scope.rename while excluding nodes of type ImportDeclaration?
A: Get Binding solution
Instead of renaming everything in the global scope, we can only rename the bindings, which excludes the original node by scoping to its parent object:
const binding = matchingNodePath.scope.getBinding(currentName);
binding.referencePaths.forEach((referencePath) => {
referencePath.scope.rename(
currentName,
hijackedNamespace,
referencePath.parent
);
});
Node replace solution
I could not find any way to exclude import declarations while doing a scope.rename. So what I ended up doing is cloning the nodes first:
// Get all import declarations.
const importDeclarationNodePaths = (nodePath.get('body') as NodePath[]).filter((node) =>
node.isImportDeclaration()
) as NodePath<ImportDeclaration>[];
// Make a copy of the node so we can restore it later after we rename all matching variables.
const originalImportDeclarationNodes = importDeclarationNodePaths.map((nodePath) => {
return cloneNode(nodePath.node, true);
}) as ImportDeclaration[];
Then we can rename and restore back the original specifiers
nodePath.scope.rename('originalName', 'newName');
// Because of the global scope rename, we need to set back original import declarations.
for (const importPosition in importDeclarationNodePaths) {
const hijackedImportDeclarationNodePath = importDeclarationNodePaths[importPosition];
const originalImportDeclarationNode = originalImportDeclarationNodes[importPosition];
// Replace the specifiers of the hijacked import declaration with its original value.
hijackedImportDeclarationNodePath.replaceWith(
importDeclaration(
originalImportDeclarationNode.specifiers,
hijackedImportDeclarationNodePath.node.source
)
);
}
Used cURL to pull page into $html, which succeeds fine.
Used Firebug to get exact XPATH to the table needed.
Code follows:
$dom = new DOMDocument($html);
$dom->loadHTML($html);
$xpath = new DOMXpath($dom);
$summary = $xpath->evaluate('/html/body/table[5]/tbody/tr/td[3]/table/tbody/tr[8]/td/table');
echo "Summary Length: " . $summary->length;
When executed, $summary->length is always zero. It doesn't pull that table node.
Any ideas?
A: Firefox is liable to insert "virtual" tbody elements into tables that don't have them; do those elements exist in the original file?
A: Just remove "/tbody". From the XPath you got from Firefox:
.//*[@id='data']/tbody/tr[1]/td[2]/span
create this:
.//*[@id='data']/tr[1]/td[2]/span
Aloe
Below is the function I've come up with: however, the results I've gotten seem to be really weird, being a combination of "echo" and a mess of single quotes. Does everything look correct in there? I'm new to PHP, so I'm really sorry if it's an obvious mistake I'm missing.
function makeTextInputField($name)
{
echo '<label for = "<?php $name ?>"> <?php ucfirst($name) ?> </label><input type = "text" name = "<?php $name?>"></input>';
}
A: You should not use any more tag inside the php
function makeTextInputField($name)
{
echo '<label for = "'.$name.'">'.ucfirst($name).'</label><input type = "text" name = "'.$name.'" />';
}
A: Your problem is that inside PHP code you're opening new PHP tags, which actually are not required. Try this function and see if it's working for you:
function makeTextInputField($name)
{
echo sprintf('<label for="%s">%s</label> <input type="text" name="%s"></input>', $name, ucfirst($name), $name);
}
A: Working Demo
Because you can insert line breaks in strings in PHP, you can make your function more readable by using variables inside it:
<?php
function makeTextInputField($name) {
$text = ucfirst($name);
echo "
<label for='{$name}'>{$text}</label>
<input type='text' name='{$name}' />
";
}
?>
And whenver you want to use it:
<h1>Welcome</h1>
<?php makeTextInputField('email'); ?>
OUTPUT
<h1>Welcome</h1>
<label for='email'>Email</label>
<input type='text' name='email' />
A: function makeTextInputField($name)
{
echo '<label for = "'.$name.'"> '.ucfirst($name).'</label><input type = "text" name = "'.$name.'"></input>';
}
That should work.
You are already in PHP, so there is no need for the <?php tags. Concatenate strings together with a . (dot).
A: Try with sprintf.
function textInput($name)
{
$html = '<label for="%1$s">%2$s</label><input type="text" name="%1$s"/>';
echo sprintf($html, $name, ucfirst($name));
}
A: <?php
class DeInput
{
protected $_format = '<div>
<label for="%s">%s</label>
<input class="formfield" type="text" name="%s" value="%s">
</div>';
public function render($content,$getFullyQualifiedName,$getValue,$getLabel)
{
$name = htmlentities($getFullyQualifiedName);
$label = htmlentities($getLabel);
$value = htmlentities($getValue);
$markup = sprintf($this->_format, $name, $label, $name, $value);
return $markup;
}
}
A: Putting PHP code inside quotation marks is somewhat bad practice, so the dot (.) concatenation operator can be used to combine strings instead.
Here is my example:
function makeTextInputField($name) {
echo '<label for="'. $name .'">'.ucfirst($name).'</label>';
echo '<input type="text" name="'.$name .' />';
}
A: Use return instead of echo, and it will be easier to manipulate the result.
You can also split element generation into different functions for more flexibility:
function createLabel($for,$labelText){
return '<label for = "'.$for.'"> '.ucfirst($labelText).'</label>';
}
function createTextInput($name,$value,$id){
return '<input type = "text" name = "'.$name.'" id="'.$id.'">'.$value.'</input>';
}
function myTextInput($name,$value,$labelText){
$id = 'my_input_'.$name;
return createLabel($id,$labelText).createTextInput($name,$value,$id);
}
echo myTextInput('email','','Type you email');
The first description says :
The template function async runs the function f asynchronously
(potentially in a separate thread which may be part of a thread
pool) and returns a std::future that will eventually hold the
result of that function call.
. [cppreference link]: std::async
What is the thread pool cppreference.com is talking about ?
I read the standard draft N4713 (C++ 17) and there is no mention of a possible thread pool usage.
I also know that there is no thread pool in the standard C++ as of now.
A: Purely hypothetical. cppreference is trying to tell you that the standard allows execution of the task in a thread pool (as opposed to launching a new thread to execute it). And although the standard may not explicitly allow it, there is nothing that would prohibit it either.
I am not aware of any implementation which would use a thread pool for std::async.
A: cppreference and the C++ standard are in fact at odds about this. cppreference says this (emphasis and strikethrough mine):
The template function async runs the function f asynchronously (potentially optionally in a separate thread which may be part of a thread pool).
Whereas the C++ standard says this:
If launch::async is set in policy, [std::async] calls [the function f] as if in a new thread of execution ...
And these are clearly two different things.
Only Windows' implementation of std::async uses a thread pool AFAIK, while gcc and clang start a new thread for every invocation of std::async (when launch::async is set in policy), and thus follow the standard.
More analysis here: https://stackoverflow.com/a/50898570/5743288
| |
doc_23538861
|
I am having filters like color, manufacturer etc. in the sidebar of my site.
The problem is:
A) Only a few attribute values are displayed (instead of all). For example, the "color" attribute has different color values like black, red, blue, yellow etc., but I am only able to see the "black" value in the sidebar of my site. Why is it not showing me the other color attribute values?
And I am pretty sure that every product has some different color value.
Please kindly tell me, what could be the issue?
Awaiting a response!
Edit: I am getting the attribute values from an 'Import Feed'
| |
doc_23538862
|
1. Does access_log in Apache 2.4 track requests resulting from file clicks in directory listing?
2. Is there a way to filter access logs to show these requests?
Let me broaden the question: under what circumstances does the access-log not show a path to the requested file?
| |
doc_23538863
|
CREATE PROCEDURE insertData
(
@ID int,
@Name varchar(50),
@Address varchar(50),
@bit BIT OUTPUT
)
as
begin
declare @oldName as varchar(45)
declare @oldAddress as varchar(45)
set @oldName=(select EmployeeName from Employee where EmployeeName=@Name)
set @oldAddress=(select Address from Employee where Address=@Address)
if(@oldName <> @Name | @oldAddress <> @Address)
insert into Employee(EmpID,EmployeeName,Address)values(@ID,@Name,@Address)
SET @bit = 1
END
But this is giving me an error when I save it: Incorrect syntax near <.
A: There are several things wrong here
if(@oldName <> @Name | @oldAddress <> @Address)
Won't work - maybe try
if @oldName <> @Name OR @oldAddress <> @Address
Of course, this will never be true because of the way the two queries above (which could and should have just been one query assigning both variables) make sure that the variables are always equal.
I.e.:
set @oldName=(select EmployeeName from Employee where EmployeeName=@Name)
What can @oldName be, if not equal to @Name? (Okay, it could be NULL, but then <> is the wrong operator to use if NULL is what you're checking for)
I think that what you wanted to write here was:
select @oldName=EmployeeName,@oldAddress = Address from Employee where EmpID = @ID
A: You should use OR and not |
You can also do this instead of querying and checking each value separately, this will insert new row if name and/or address do not match for given empid
IF NOT EXISTS (
select * from Employee
where EmpID = @ID AND EmployeeName = @Name AND Address = @Address)
insert into Employee(EmpID,EmployeeName,Address)values(@ID,@Name,@Address)
SET @bit = 1
END
A: You can use != instead of <> and you can use OR instead of |.
| |
doc_23538864
|
A: Not by the looks of things at the moment, because Xamarin is dependent on the cross-platform SQLite Engine which is not supported yet by EF. See here: Xamarin.Forms support #4269
A: Short answer: not possible.
Reason: you should not expose connection strings, queries, or authentication on the client side. Remember that, unlike a Web API or MVC app, a mobile app runs entirely on the client side, so it is exposing your database.
It is like using Entity Framework in the browser: if attackers open the app, they will have the credentials and server details, and they may query your database.
| |
doc_23538865
|
000000 = 0.0218354767535
000001 = 0.0218265654136
000002 = 0.0218184623573
000003 = 0.021811165579
000004 = 0.0218046731276
000005 = 0.0217989831063
000006 = 0.0217940936718
000007 = 0.0217900030345
000008 = 0.0217867094577
000009 = 0.0217842112574
000010 = 0.021782506802
I want to get the last three digits of each decimal number in the given list. The following is the output I need in this example:
535
136
573
579
276
063
718
345
577
574
802
Any suggestions for a possible solution?
A: Assuming the given list is a list of strings, you can use this code:
l1 = ["000000 = 0.0218354767535",
"000001 = 0.0218265654136",
"000002 = 0.0218184623573",
"000003 = 0.021811165579",
"000004 = 0.0218046731276",
"000005 = 0.0217989831063",
"000006 = 0.0217940936718",
"000007 = 0.0217900030345",
"000008 = 0.0217867094577",
"000009 = 0.0217842112574",
"000010 = 0.021782506802"]
for each_item in l1:
print each_item[-3:]
Modified Answer after receiving clarification
result_list = [] # This would store the end result with int
# The below list is the input list with floats
l1 = [0.0218354767535,
0.0218265654136,
0.0218184623573,
0.021811165579,
0.0218046731276,
0.0217989831063,
0.0217940936718,
0.0217900030345,
0.0217867094577,
0.0217842112574,
0.021782506802]
#For each float in the list, find the decimal part, convert to string, extract last 3, convert to int and append to result list
for each_item in l1:
result_list.append(int(str(each_item-int(each_item))[-3:]))
print result_list
Output
[535, 136, 573, 579, 276, 63, 718, 345, 577, 574, 802]
A: For each line you can index only the last three elements like this:
with open('filename') as infile:
for line in infile:
print line.rstrip()[-3:]
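The same slicing idea works on whole lines, and keeping the results as strings preserves leading zeros such as "063" that an int conversion would drop. A small Python 3 sketch with a shortened sample list (the list name is hypothetical):

```python
# Shortened sample of the input lines (assumed to be plain strings).
lines = [
    "000000 = 0.0218354767535",
    "000005 = 0.0217989831063",
    "000010 = 0.021782506802",
]

# Slice the last three characters of each stripped line.
last_three = [line.rstrip()[-3:] for line in lines]
print(last_three)  # ['535', '063', '802']
```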
| |
doc_23538866
|
/TYPE/BOOKING/IBAN/NL12BANK0003456789/BIC/BANKNL2A/NAME/Mr. A. Someguy/CODE/Codenumber 12345678/REF/NOTPROVIDED/LINE/ABCD EFG 234567890 1234 ETC
/TYPE/BOOKING/IBAN/NL34BANK000123456/BIC/BANKNL2U/NAME/Mr. A. Dinges/CODE/98765432/REF/NOTPROVIDED
And I want to look up individual elements in these strings without having to write unreadable code with many CHARINDEX-es and SUBSTRINGs. So I found the STRING_SPLIT function.
select contract, [value]
from SCHEMA.PAYMENTS
CROSS APPLY STRING_SPLIT(paymentrow, '/')
It's great, but now I get these values in rows:
bookingnumber value
12-3-56789012-3
12-3-56789012-3 TYPE
12-3-56789012-3 BOOKING
12-3-56789012-3 IBAN
12-3-56789012-3 NL12BANK0003456789
12-3-56789012-3 BIC
12-3-56789012-3 BANKNL2A
12-3-56789012-3 NAME
12-3-56789012-3 Mr. A. Someguy
12-3-56789012-3 CODE
12-3-56789012-3 Codenumber 12345678
12-3-56789012-3 REF
12-3-56789012-3 NOTPROVIDED
12-3-56789012-3 LINE
12-3-56789012-3 ABCD EFG 234567890 1234 ETC
98-7-65431234-0
98-7-65431234-0 TYPE
98-7-65431234-0 BOOKING
98-7-65431234-0 IBAN
98-7-65431234-0 NL34BANK000123456
98-7-65431234-0 BIC
98-7-65431234-0 BANKNL2U
98-7-65431234-0 NAME
98-7-65431234-0 Mr. A. Dinges
98-7-65431234-0 CODE
98-7-65431234-0 98765432
98-7-65431234-0 REF
98-7-65431234-0 NOTPROVIDED
So as a final step, I would like to pivot the value column so that the elements in the original string appear as neat columns, like this:
bookingnumber type IBAN BIC name code ref
12-3-56789012-3 BOOKING NL12BANK0003456789 BANKNL2A Mr. A. Someguy Codenumber 12345678 NOTPROVIDED
98-7-65431234-0 BOOKING NL34BANK0001234567 BANKNL2U Mr. A. Dinges 98765432 NOTPROVIDED
I've been tinkering with PIVOT, but either it doesn't work, or it doesn't give the right results.
Help would be greatly appreciated.
A: IF the columns are not dynamic, perhaps a little XML
Example
Declare @YourTable table (Bookingnumber varchar(50),paymentrow varchar(max) )
Insert Into @YourTable values
('12-3-56789012-3','/TYPE/BOOKING/IBAN/NL12BANK0003456789/BIC/BANKNL2A/NAME/Mr. A. Someguy/CODE/Codenumber 12345678/REF/NOTPROVIDED/LINE/ABCD EFG 234567890 1234 ETC')
,('98-7-65431234-0','/TYPE/BOOKING/IBAN/NL34BANK000123456/BIC/BANKNL2U/NAME/Mr. A. Dinges/CODE/98765432/REF/NOTPROVIDED')
Select A.Bookingnumber
,B.*
From @YourTable A
Cross Apply (
Select Type = xDim.value('/x[3]','varchar(max)')
,IBan = xDim.value('/x[5]','varchar(max)')
,BIC = xDim.value('/x[7]','varchar(max)')
,Name = xDim.value('/x[9]','varchar(max)')
,Code = xDim.value('/x[11]','varchar(max)')
,Ref = xDim.value('/x[13]','varchar(max)')
From ( values (cast('<x>' + replace(paymentrow,'/','</x><x>')+'</x>' as xml))) as A(xDim)
) B
Returns
A: Your own attempt calling STRING_SPLIT() proves that you are using v2016+. So you can call JSON to your rescue:
(Credits to John Cappelletti for the mockup)
Declare @YourTable table (Bookingnumber varchar(50),paymentrow varchar(max) )
Insert Into @YourTable values
('12-3-56789012-3','/TYPE/BOOKING/IBAN/NL12BANK0003456789/BIC/BANKNL2A/NAME/Mr. A. Someguy/CODE/Codenumber 12345678/REF/NOTPROVIDED/LINE/ABCD EFG 234567890 1234 ETC')
,('98-7-65431234-0','/TYPE/BOOKING/IBAN/NL34BANK000123456/BIC/BANKNL2U/NAME/Mr. A. Dinges/CODE/98765432/REF/NOTPROVIDED');
SELECT *
FROM @YourTable t
CROSS APPLY OPENJSON(CONCAT('[{',REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(
t.paymentrow,'/TYPE/','"Type":"')
,'/IBAN/','","IBAN":"')
,'/BIC/','","BIC":"')
,'/NAME/','","Name":"')
,'/CODE/','","Code":"')
,'/REF/','","Ref":"')
,'/LINE/','","Line":"'),'"}]'))
WITH([Type] NVARCHAR(100)
,[IBAN] NVARCHAR(100)
,[BIC] NVARCHAR(100)
,[Name] NVARCHAR(250)
,[Code] NVARCHAR(100)
,[Ref] NVARCHAR(100)
,[Line] NVARCHAR(500)) A;
We use some replacements to transform your string to such a JSON
[{"Type":"BOOKING",
"IBAN":"NL12BANK0003456789",
"BIC":"BANKNL2A",
"Name":"Mr. A. Someguy",
"Code":"Codenumber 12345678",
"Ref":"NOTPROVIDED",
"Line":"ABCD EFG 234567890 1234 ETC"}]
The WITH-clause will do the pivoting implicitly.
UPDATE If this is position-safe
With a fixed input pattern, this is easier
SELECT *
FROM @YourTable t
CROSS APPLY OPENJSON(CONCAT('[["',REPLACE(STUFF(t.paymentrow,1,1,''),'/','","'),'"]]'))
WITH([Type] NVARCHAR(100) '$[1]'
,[IBAN] NVARCHAR(100) '$[3]'
,[BIC] NVARCHAR(100) '$[5]'
,[Name] NVARCHAR(250) '$[7]'
,[Code] NVARCHAR(100) '$[9]'
,[Ref] NVARCHAR(100) '$[11]'
,[Line] NVARCHAR(500) '$[13]') A;
The intermediate JSON-array looks like this:
["TYPE","BOOKING","IBAN","NL12BANK0003456789","BIC","BANKNL2A","NAME","Mr. A. Someguy","CODE","Codenumber 12345678","REF","NOTPROVIDED","LINE","ABCD EFG 234567890 1234 ETC"]
A: SELECT contract
, x.value('/M[3]', 'nvarchar(max)') as IBAN
, x.value('/M[5]', 'nvarchar(max)') as BIC
, x.value('/M[7]', 'nvarchar(max)') as NAME
FROM (
SELECT contract
, CAST('<M>' + REPLACE(paymentrow, '/', '</M><M>') + '</M>' AS xml) AS x
FROM payments
) sub
Example at dbfiddle.co.uk
| |
doc_23538867
|
I am using Angularjs 1.2.22
Given this CSS :
.ng-enter {
animation: bounceInUp 2s;
}
.ng-leave {
animation: bounceOutUp 2s;
}
And this route :
var app = angular.module('app', ['ngRoute', 'ngAnimate']);
app.config(['$routeProvider',
function ($routeProvider) {
$routeProvider.
when('/horodateur/currentuser', {
templateUrl: 'partials/horodateur/currentuser.html',
controller: 'CurrentUserController'
}).
when('/horodateur/keypad', {
templateUrl: 'partials/horodateur/keypad.html',
controller: 'KeypadController'
}).
otherwise({
redirectTo: '/horodateur/currentuser'
});
}]);
The first time the partial view is shown, I can see the entering animation.
A button on that first partial view gives control back to its controller to load another view:
app.controller('CurrentUserController',
function ($scope, $location) {
$scope.showKeypad = function () {
$location.path('/horodateur/keypad');
}
});
At this point, I can see the leaving animation of the first partial view.
Then, the second partial view is rendered, but this time, no entering animation.
If I hit the browser's back button to return to the first partial view, I can still see the leaving animation, but I cannot see the entering animation of the other partial view.
It must be something I don't understand with partial views loading or rendering ...
Anybody knows what I am missing ?
A: I solved my problem. I just don't understand why it works.
To see the entering animation after the leaving animation of the previous view, I had to increase the animation-duration of the entering animation.
With both at 500ms it just doesn't work. I had to keep the leaving animation at a lower duration (500ms) but increase the entering animation to 1500ms or higher to be able to see it.
| |
doc_23538868
|
I've set the grid control's data source to a LINQ to Entity Framework join query like this:
var x = from B in db.B
        join E in db.E on B.E_ID equals E.E_ID
        select B;
gridcontrol.DataSource = x.ToList();
in xaml code I've set grid control columns as this:
first column FieldName = ID
second column FieldName = E.Name
Since the second column is bound to the E table, the grid control loads awfully slowly.
Please give me a solution to increase the speed.
A: With many thanks to Ivan Stoev.
As he said, I changed my query and it worked; now it takes just 8 seconds to load the user control.
The query:
db.B.Include("E").ToList()
| |
doc_23538869
|
Query<Diagram> q=ofy.query(Diagram.class).filter("datePublished !=", "").order("-likes").limit(18);
A: When applying an inequality filter in the GAE datastore there are some restrictions.
You can read more here: https://developers.google.com/appengine/docs/java/datastore/queries
In this case, to have an inequality on datePublished you must order on that same field primarily before you can order on another.
So assuming the datePublished field is indexed:
Query<Diagram> q=ofy.query(Diagram.class).filter("datePublished !=", "").order("datePublished").order("-likes").limit(18);
Assuming this isn't a migration concern, you may want to consider denormalising this data when you store it, for example setting a 'noDatePublished' boolean.
| |
doc_23538870
|
and I want to set it up so that it is:
- waiting in the S3 sleep state
- woken up by network traffic via a wake-on-match pattern (an ARP request)
but I have some trouble with the network setting.
As you can see, I use an 'Intel 1000ET quad port' NIC, and only one port has WOL capability.
So I set 'wake on match pattern' so that the NAS wakes up only on an ARP request for '192.168.1.100'.
But I want the NAS to respond not only to requests for '192.168.1.100' but also to ARP requests for '192.168.1.101'.
So I need port 1 to respond to ARP requests targeting two different IPs.
How can I manually override the WOL bitmap option?
https://msdn.microsoft.com/en-us/library/windows/hardware/ff543710(v=vs.85).aspx
I read the documentation about the NDIS driver.
It says an NDIS driver can program the NIC with multiple 'WOL bitmap patterns',
so what I want is possible, right?
A: I could not find any way to set it from a user-mode app.
Now I use a modified 'NDIS filter driver sample',
inserting code that generates an 'OID_PM_PATTERN_ADD' request.
| |
doc_23538871
|
I want to create it as part of an automated deployment so either ARM or through C#. The Data Connection source is an EventHub and needs to include the properties specifying the table, consumer group, mapping name and data format.
I have tried creating a resource manually and exporting the template, but it doesn't work. I have also looked through the Microsoft online documentation and cannot find a working example.
This is all I have found: example
A: Please take a look at this good example which shows how to create control plane resources (cluster, database, data connection) using ARM templates, and using a data plane python API for the data plane resources (table, mapping).
In addition, for C# please see docs here and following C# example for how to create an event hub data connection:
var dataConnection = managementClient.DataConnections.CreateOrUpdate(resourceGroup, clusterName, databaseName, dataConnectionName,
new EventHubDataConnection(eventHubResourceId, consumerGroup, location: location));
A: I've actually just finished building and pushing an Azure Sample that does this (see the deployment template and script in the repo).
Unfortunately, as I elected not to use the Azure CLI (and stick with pure Azure PowerShell), I wasn't able to fully automate this, but you can at least see the approach I took.
I've filed feedback with the product group here on UserVoice
| |
doc_23538872
|
I'd then like to return that processed JSON, but the console is telling me it's undefined. I didn't declare the variable, so it should be treated as global. Why am I still having this issue?
The code won't work with the snippet because the API is only local, but processedData essentially looks like this: {'A': '123'}
function hitAPI() {
var getData = $.get('http://127.0.0.1:5000/myroute');
getData.done(function() {
processedData = (JSON.parse(getData['responseJSON']['data']));
});
return processedData;
};
var x = hitAPI()
console.log(x);
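The undefined result comes from timing: $.get is asynchronous, so hitAPI returns before the done() callback has assigned processedData. One sketch of the usual fix is to return the promise chain itself (fakeGet below is a hypothetical stand-in for the jQuery call, so the snippet runs without a server):

```javascript
// Hypothetical stand-in for $.get: any function that returns a promise
// resolving to the raw response object.
function fakeGet() {
  return Promise.resolve({ data: '{"A": "123"}' });
}

function hitAPI() {
  // Return the promise chain instead of relying on a shared variable.
  return fakeGet().then(resp => JSON.parse(resp.data));
}

// The caller consumes the result asynchronously too.
hitAPI().then(x => console.log(x.A)); // logs "123"
```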
| |
doc_23538873
|
The first file looks like this:
>A1
NNNNNNNNNN
NNNNNNNNNN
>B2
ACGTNNNNNN
NNNGTGTNNN
NNNNNNNNNN
>B3
GGGGGGGGGG
NNNTTTTTTT
NNNNCTGNNN
And the file with strings looks like this:
Name1
Name1
Name2
Name2
Name3
Name4
So finally I would like to find lines containing '>' and replace '>' with '>string' from second file to get this output:
>Name1 A1
NNNNNNNNNN
NNNNNNNNNN
>Name1 B2
ACGTNNNNNN
NNNGTGTNNN
NNNNNNNNNN
>Name2 B3
GGGGGGGGGG
NNNTTTTTTT
NNNNCTGNNN
A: If you have GNU sed:
sed '/^>/R file_with_strings' first_file | sed '/^>/{N;s/>\(.*\)\n\(.*\)/>\2 \1/;}'
A: This might work for you (GNU sed):
sed -E '1{x;s/^/cat file2/e;x}
/^>/{G;s/^>(\S+)\n(\S+)/>\2 \1/;P;s/^[^\n]*\n//;h;d}' file1
On the first line slurp the second file into hold space.
If a line begins >, append the hold space and using pattern matching and back references build the required header line from the first line of the hold space.
Print the result (the first line of the pattern space), remove the first line, replace the hold space, and delete the current line.
Repeat.
| |
doc_23538874
|
http://imgur.com/FGrGi
The HTML:
<div id="phocagallery" class="pg-category-view" style="width:800px;margin: auto;">
<div class="pg-category-view-desc">Pictures of the Roskilde Family</div>
<div id="pg-icons"></div>
<div style="clear:both"></div>
<div class="phocagallery-box-file" style="height:158px; width:120px;">
<div class="phocagallery-box-file" style="height:158px; width:120px;">
<div class="phocagallery-box-file" style="height:158px; width:120px;">
<div class="phocagallery-box-file" style="height:158px; width:120px;">
<div class="phocagallery-box-file" style="height:158px; width:120px;">
If I remove the clear:both div, the white space disappears. I looked in my CSS with Firebug but I can't for the life of me figure out why it is giving me this white space. I used the Yahoo CSS reset.
EDIT: CSS
div id="phocagallery" class="pg-category-view":
body, div, dl, dt, dd, ul, ol, li, h1, h2, h3, h4, h5, h6, pre, code, form, fieldset, legend, input, button, textarea, blockquote, th, td, p
{
margin: 0;
padding: 0;
}
body {
color: #000000;
font-family: Verdana,Arial,Helvetica,sans-serif;
font-size: 75%;
line-height: 1.3;
}
Anybody have a clue?
A: You must have a sidebar on the left or the right with a floating element in it (or which is floating itself).
The clear:both causes the element to be under that floating element.
See the problem here: http://jsfiddle.net/9Razw/
One solution is to set overflow: hidden on #phocagallery or a parent of the clear:both element.
A: The class pg-category-view-desc probably gives the div float:left or float:right; it also has a fixed height, and the clear makes that height take effect.
A: Use it
.clear
{
height:0px;
clear:both;
}
| |
doc_23538875
|
If I began to animate the ellipse, but then decided to stop the animation, is there any way I could restart the animation from it's current point? I haven't been able to find a way to determine the ellipse's current point along a path after stopping the animation.
Here is my animation code (all standard):
e.attr({ rx: 15, ry: 5 }).animateAlong(p, 15000, true, function () {
e.attr({ rx: 10, ry: 10 });
clicked = false;
});
A: How about you define the last points outside using onAnimation?
var lastX = 15, lastY = 5;
e.onAnimation(function() {
lastX = e.attr('rx');
lastY = e.attr('ry');
});
e.attr({ rx: lastX, ry: lastY }).animateAlong(p, 15000, true, function () {
e.attr({ rx: 10, ry: 10 });
clicked = false;
});
So if you run this code several times, inside a function or whatever, your ellipse will start not at 15, 5 but at the last point in the animation (before it was interrupted).
Is this what you meant?
| |
doc_23538876
|
myButton.setTranslateX(10);
myButton.setTranslateY(-10);
Those methods works inside
public void start(Stage primaryStage) throws Exception {}
From what I understand, start is the method in Application that runs the JavaFX application. Since all myButton objects will have the same structure, I tried to put the following method in the Main.java file:
public void createMyButton(double X, double Y, String label, String image_path) throws Exception {
this.setTranslateX(X);
this.setTranslateY(Y);
this.setText(label);
//TO DO this.setButtonImage(src=image_path);
}
However, I understand that the methods inside createMyButton come from another class (from Node, I think). And (of course) I get the error
Cannot resolve method 'setTranslateX' in 'Main'
since the compiler is looking for those methods in my program, not in the JavaFX SDK. How can I call other classes' methods in my own methods? I tried with
public void createMyButton(bla bla) throws Exception extends Node
public void createMyButton(bla bla) throws Exception extends Application
but I think I'm completely off track. I also tried to make my own class which inherits methods from another class, but that's a little bit outside my current knowledge, and I was wondering if there is an easier/more direct way to do it.
A: I'm not a JavaFX person, but I think the issue is that you're calling this.setTranslateX(X); in a method where this is not a Button (I think it's a Main object, perhaps; I'd need to see more code to be sure).
Try this:
public Button createMyButton(double X, double Y, String label, String image_path) throws Exception {
    Button button = new Button(); // not sure how you're initialising your buttons normally
    button.setTranslateX(X);
    button.setTranslateY(Y);
    button.setText(label);
    // for the image, JavaFX's API would be something like:
    // button.setGraphic(new ImageView(new Image(image_path)));
    return button;
}
Then, elsewhere when you want to create a button, you'd call the method instead:
Button button = createMyButton(10, 20, "My Button", "images/button.png");
| |
doc_23538877
|
When I run this example in the RGUI it's very slow, but eventually it opens a web page in a browser.
When I run it in R Studio it's very slow, eventually the command finishes but nothing appears in the "view" tab, and R Studio keeps using more and more memory until I force kill it.
library(leaflet)
leaflet() %>% addTiles() %>% addMarkers(lng=174.768, lat=-36.852, popup="R")
I've tried a few things with my path, and installing leaflet from github, but there's not much to troubleshoot.
> sessionInfo()
R version 4.0.0 (2020-04-24)
Platform: x86_64-w64-mingw32/x64 (64-bit)
Running under: Windows 10 x64 (build 15063)
Matrix products: default
locale:
[1] LC_COLLATE=English_United States.1252 LC_CTYPE=English_United States.1252
[3] LC_MONETARY=English_United States.1252 LC_NUMERIC=C
[5] LC_TIME=English_United States.1252
attached base packages:
[1] stats graphics grDevices utils datasets methods base
loaded via a namespace (and not attached):
[1] compiler_4.0.0 tools_4.0.0
Edit 2020-05-09:
I reverted to R 3.6.2 and am no longer having the issue. Fortunately I could change it in R Studio's options so I can switch back to 4.0 if that becomes necessary.
I'm not adding this as an answer because it's a workaround.
A: The RStudio viewer problem has been addressed.
An RStudio developer says:
This is a known issue with R 4.0.0 (exactly) that has been fixed by the R team in R 4.0.2.
see full post here:
https://community.rstudio.com/t/rstudio-viewer-session-viewhtml-not-found/75592
| |
doc_23538878
|
$fullname_pattern = "/[a-zA-Z]+/";
$fullname = $_POST['fullname'];
$email = $_POST['email'];
$email_pattern = "/^[a-zA-Z]+([a-zA-Z0-9]+)?@[a-zA-Z]{3,50}\.(com|net|org)/";
$password = $_POST['password'];
$password_pattern = "/(.){6,12}/";
if(preg_match($fullname_pattern,$fullname)) &&
(preg_match($email_pattern,$email)) &&
(preg_match($password_pattern,$password))
{
header("Location: home.php");
}
else
header("Location: registration.php");
Don't know what to do!
A: The extra ")" after the first preg_match closes the if condition too early. Wrap all three preg_match calls in a single pair of parentheses, like this:
if(preg_match($fullname_pattern,$fullname)
&& preg_match($email_pattern,$email)
&& preg_match($password_pattern,$password)
) {
| |
doc_23538879
|
May I know how I can do this in Python?
Thanks a lot!
A: from random import randint
from pickle import dump, load
from os.path import isfile
if isfile('state.bin'):
with open('state.bin', 'rb') as fh:
state = load(fh)
else:
state = {'counter' : 0, 'iterations' : 1}
if state['counter'] == 0 and state['iterations'] == 100:
print('a string*')
else:
if randint(0, 100) < 3 and state['counter'] < 3:
print('a string*')
state['counter'] += 1
state['iterations'] += 1
with open('state.bin', 'wb') as fh:
dump(state, fh)
Now run this script 100 times, and statistically this should only print the string 1-3% of the time. It also keeps track of how many iterations you've done and how many times it has printed, and acts accordingly so the count stays between 1 and 3.
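The quoted 1-3% rate follows from randint being inclusive on both ends; a quick check (not part of the original answer):

```python
# randint(0, 100) draws from 101 equally likely values (0..100 inclusive),
# and exactly three of them (0, 1, 2) satisfy the "< 3" condition.
favourable = sum(1 for v in range(0, 101) if v < 3)
total = 101
probability = favourable / total
print(f"{probability:.2%}")  # 2.97%
```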
| |
doc_23538880
|
02-12 10:15:34.625: E/AndroidRuntime(1018): FATAL EXCEPTION: AsyncTask #1
02-12 10:15:34.625: E/AndroidRuntime(1018): java.lang.RuntimeException: An error occured while executing doInBackground()
    at android.os.AsyncTask$3.done(AsyncTask.java:200)
    at java.util.concurrent.FutureTask$Sync.innerSetException(FutureTask.java:273)
    at java.util.concurrent.FutureTask.setException(FutureTask.java:124)
    at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:307)
    at java.util.concurrent.FutureTask.run(FutureTask.java:137)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1068)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:561)
    at java.lang.Thread.run(Thread.java:1096)
Caused by: java.lang.NullPointerException
    at se.gibk.gibk.MainActivity$PostTask.GetList(MainActivity.java:88)
    at se.gibk.gibk.MainActivity$PostTask.doInBackground(MainActivity.java:143)
    at se.gibk.gibk.MainActivity$PostTask.doInBackground(MainActivity.java:1)
    at android.os.AsyncTask$2.call(AsyncTask.java:185)
    at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:305)
My code is :
public class MainActivity extends Activity {
private static final String TAG = "Debugging";
private ListView list;
ArrayList<String> links;
Sermons sermons;
protected void onCreate(Bundle savedInstanceState) {
Log.d(TAG, "MainActivity - OnCreate()");
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
new PostTask(this).execute("http://gibk.se/sample-page/predikningar/?podcast");
Log.d(TAG, "execute");
}
public class PostTask extends AsyncTask<String, Void, ArrayList<Sermons>> {
private Context context;
public PostTask(Context context) {
this.context = context;
}
private ArrayList<Sermons> GetList(String url) {
Log.d(TAG, "GetList");
ArrayList<Sermons> results = new ArrayList<Sermons>();
try {
URL urls = new URL(url);
XmlPullParserFactory factory = XmlPullParserFactory.newInstance();
factory.setNamespaceAware(false);
XmlPullParser xmlParser = factory.newPullParser();
xmlParser.setInput(this.getInputStream(urls), "UTF_8");
boolean insideItem = false;
int eventType = xmlParser.getEventType();
while (eventType != XmlPullParser.END_DOCUMENT) {
if (eventType == XmlPullParser.START_TAG) {
if (xmlParser.getName().equalsIgnoreCase("item")) {
sermons = new Sermons();
insideItem = true;
} else if (xmlParser.getName().equalsIgnoreCase("title")) {
if (insideItem)
sermons.setSermon(xmlParser.nextText());
} else if (xmlParser.getName().equalsIgnoreCase("itunes:author")) {
if (insideItem)
sermons.setPreacher(xmlParser.nextText());
} else if (xmlParser.getName().equalsIgnoreCase("guid")) {
if (insideItem)
links.add(xmlParser.nextText());
}
results.add(sermons);
}
else if(eventType==XmlPullParser.END_TAG && xmlParser.getName().equalsIgnoreCase("item")){
insideItem=false;
}
eventType = xmlParser.next();
}
}
catch (MalformedURLException e) {
e.printStackTrace();
}
catch (XmlPullParserException e) {
e.printStackTrace();
}
catch (IOException e) {
e.printStackTrace();
}
return results;
}
private InputStream getInputStream(URL url) {
try {
return url.openConnection().getInputStream();
} catch (IOException e) {
return null;
}
}
@Override
protected ArrayList<Sermons> doInBackground(String... params) {
Log.d(TAG, "doInBackGround");
String url = params[0];
ArrayList<Sermons> sermon = this.GetList(url);
return sermon;
}
@Override
protected void onPostExecute(ArrayList<Sermons> result) {
super.onPostExecute(result);
Log.d(TAG, "onPostExecute");
links = new ArrayList<String>();
list = (ListView) findViewById(R.id.list);
list.setAdapter(new Adapter(MainActivity.this, (ArrayList<Sermons>) result));
list.setOnItemClickListener(new OnItemClickListener() {
@Override
public void onItemClick(AdapterView<?> arg0, View v, int position,
long arg3) {
Uri uri = Uri.parse(links.get(position));
String url = uri.toString();
Intent intent = new Intent(Intent.ACTION_VIEW);
intent.setData(Uri.parse(url));
startActivity(intent);
}
});
}
}
}
A: Not sure if its related but:
"title".equalsIgnoreCase(xmlParser.getName())
is better than:
xmlParser.getName().equalsIgnoreCase("title")
To reduce chances for NPE.
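The operand-order point can be seen in isolation; a small sketch (the class and variable names are hypothetical, not from the original code):

```java
public class NullSafeCompare {
    public static void main(String[] args) {
        String name = null; // stands in for xmlParser.getName() returning null

        // Constant-first: the String literal is never null, so this cannot
        // throw a NullPointerException; it simply evaluates to false.
        System.out.println("title".equalsIgnoreCase(name)); // false

        // name.equalsIgnoreCase("title") would throw a NullPointerException here.
    }
}
```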
Another issue:
private InputStream getInputStream(URL url) {
try {
return url.openConnection().getInputStream();
} catch (IOException e) {
return null;
}
}
Shouldn't return null on exception, but throw exception instead.
| |
doc_23538881
|
I've got the following file on two different Ubuntu machines.
HTTP/1.1 100 Continue
HTTP/1.1 200 OK
Date: Tue, 14 Mar 2222 15:10:28 GMT
Cache-Control: no-cache, no-store, max-age=0, must-revalidate
Pragma: no-cache
Expires: Thu, 01 Jan 1970 00:00:00 GMT
Set-Cookie: JSESSIONID=soekfifnowmds3278xks;Path=/
Content-Type: application/json; charset=UTF-8
Pragma: no-cache
Content-Length: 743
Server:
{
"testName": "Test\r",
"tagString": "",
"description": "TestTest\r",
"status": "READY",
"ignoreSampleCount": 0,
"targetHosts": "\r\n\r",
"useRampUp": true,
"rampUpType": "THREAD",
"threshold": "D",
"scriptName": "test.py",
"duration": 60000,
"runCount": 0,
"agentCount": 1,
"vuserPerAgent": 1,
"processes": 1,
"rampUpInitCount": 0,
"rampUpInitSleepTime": 0,
"rampUpStep": 1,
"rampUpIncrementInterval": 1000,
"threads": 1,
"progressMessage": "",
"testComment": "",
"scriptRevision": 400,
"region": "NONE",
"samplingInterval": 1,
"param": "",
"createdDate": "Jun 27, 2222 3:10:28 PM",
"lastModifiedDate": "Jun 27, 2347 3:10:28 PM",
"id": 21
}
It's in JSON format, but I'm not using jq to parse it because the file contains other details (like response headers, since I'm querying this data from an API) which aren't JSON, and jq will produce an error if I even try to parse the file with it. Hence, I'm using grep.
Now, I need to get the id out of this data (just the numeric part, in this case 21). Perhaps there's a better way to do this, but till now I was using
cat File | grep '"id": [0-9]*' | grep -o [0-9]*
And it gives me the right answer. However, for some reason, the behaviour was inconsistent. As I mentioned in the beginning, I've got the exact same JSON data on two Ubuntu machines. But when I run the same command to fetch the id on one of the machines, the above command doesn't work! I get no result, as though grep could find nothing.
The problem I found was in the second grep command. On the machine where the above command was not working, I replaced it with grep -o [0-9]. and it worked fine. But I know that the moment the id runs into more than 2 digits, it'll stop working. Yet the * fails only on that one system, while on the other it works for any number of digits flawlessly!
Any suggestions would be greatly appreciated! If we can't figure out why it's behaving inconsistently, perhaps you could provide me with another grep that fetches the same thing.
A: You can use GNU grep with its Perl-compatible regular expression capabilities enabled by the -P flag, and print only the matching part of the line with the -o flag.
grep -oP '"id": \K[0-9]+' file
21
where the \K escape sequence stands for
\K: This sequence resets the starting point of the reported match. Any previously matched characters are not included in the final matched sequence.
RegEx Demo
You can remove the -i flag if you are using it in the cURL command — that flag is what includes all the header information shown above. Without it, the output is plain JSON, which you can feed to jq:
curl ... | jq '.id'
21
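If -P turns out to be unavailable on one of the machines (it is a GNU extension, and some grep builds ship without PCRE support), plain POSIX sed does the same job. Incidentally, the unquoted [0-9]* in the original pipeline is subject to shell filename expansion, which could explain the machine-to-machine inconsistency if the working directories differ. A small sketch, with a made-up sample file standing in for the real response:

```shell
# Sample fragment: the real file mixes HTTP headers with the JSON body,
# which is why jq rejects it and grep/sed are used instead.
printf '%s\n' 'Content-Length: 743' '"scriptRevision": 400,' '"id": 21' > /tmp/resp.txt

# POSIX sed, no PCRE required; quoting the pattern also avoids the shell
# expanding an unquoted [0-9]* as a filename glob.
sed -n 's/.*"id": *\([0-9][0-9]*\).*/\1/p' /tmp/resp.txt
```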
| |
doc_23538882
|
Now this method works perfectly on some PCs that I have, and when I run the application it gives the same checksum hashes. The problem is that on some other PCs I always get the auto-update message because I'm getting different MD5 hashes. After the application starts the auto-update and re-downloads the file from my server, I get a new exe file with totally different hashes. I'm really confused about what could possibly cause this. Is it a problem with the C# downloading method? Is it a problem with the md5 function itself? Any inputs would be appreciated...
EDIT: a method to calculate md5 checksum
public static string GetMD5HashFromFile(string filename)
{
using (var md5 = new MD5CryptoServiceProvider())
{
var buffer = md5.ComputeHash(File.ReadAllBytes(filename));
var sb = new StringBuilder();
for (int i = 0; i < buffer.Length; i++)
{
sb.Append(buffer[i].ToString("x2"));
}
return sb.ToString();
}
}
| |
doc_23538883
|
The moment I remove the line COURSE_CONTENT = models.TextField(default='What will be taught?') in models.py, the view function works fine, which suggests there is nothing wrong with views.py.
This is the error
https://db.tt/TWKu3N90
More detailed error
https://db.tt/a8jbyM89
Any help will be greatly appreciated.
| |
doc_23538884
|
@Component
public class MyCachingObj extends HibernateMapper{
@PostConstruct
public void loadAll() {
...
getCurrentSession().getQuery(); //this method is in HibernateMapper
...
}
}
@Repository
public class HibernateMapper {
@Autowired
private SessionFactory _session;
public Session getCurrentSession() {
return _session;
}
}
What I have read so far is that @PostConstruct doesn't guarantee that Spring is done with all beans. Then how can I make sure that Spring is done processing and I have a session ready to be picked up?
EDIT
I ended up creating
@Component
@Transactional
@Repository
public class ApplicationStartup implements ApplicationListener<ContextRefreshedEvent>{..}
which had autowired sessions and autowired another @Component class. However, I noticed that this onApplicationEvent method was being called twice. I ended up putting in a boolean variable to load the underlying data only once.
Now a few questions:
*
*Why is onApplicationEvent being called twice?
*My cached object is annotated with @Component. Is it safe to assume that as long as that object is used via an @Autowired instance, it will remain a singleton and hold the data I loaded at startup?
| |
doc_23538885
| ||
doc_23538886
|
@Query("UPDATE packs SET is_opened = 1 WHERE pack_id IN (:packId)")
fun unlockPack(packId: Int)
I'm calling it from the Repository:
override fun unlockOnePackById(packID: Int) {
db.packDaoAccess().unlockPack(packID)
}
which is called from the presenter, and then the result goes to the Activity. When I use SELECT or other SQL queries, I have an Observable result:
@Query("SELECT * FROM packs")
fun allFullPacks(): Observable<List<AnimalPackFull>>
But as far as I know, UPDATE returns nothing. How can I detect whether my UPDATE query completed correctly? I need it to show a popup to the user.
A: If you tweak your query to return an Int, like so:
@Query("UPDATE packs SET is_opened = 1 WHERE pack_id IN (:packId)")
fun unlockPack(packId: Int) : Int
It should return you the number of rows affected by your update query.
Same goes for delete queries
| |
doc_23538887
|
In their Colab version, I made the changes below to add some transformation techniques.
*
*First change to the __getitem__ method of class PennFudanDataset(torch.utils.data.Dataset)
if self.transforms is not None:
img = self.transforms(img)
target = T.ToTensor()(target)
return img, target
In actual documentation it is
if self.transforms is not None:
img, target = self.transforms(img, target)
Secondly, at the get_transform(train)function.
def get_transform(train):
if train:
transformed = T.Compose([
T.ToTensor(),
T.GaussianBlur(kernel_size=5, sigma=(0.1, 2.0)),
T.ColorJitter(brightness=[0.1, 0.2], contrast=[0.1, 0.2], saturation=[0, 0.2], hue=[0,0.5])
])
return transformed
else:
return T.ToTensor()
**In the documentation it is-**
def get_transform(train):
transforms = []
transforms.append(T.ToTensor())
if train:
transforms.append(T.RandomHorizontalFlip(0.5))
return T.Compose(transforms)
While implementing the code, I'm getting the below error. I'm not able to see what I'm doing wrong.
TypeError: Caught TypeError in DataLoader worker process 0.
Original Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/_utils/worker.py", line 198, in _worker_loop
data = fetcher.fetch(index)
File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/_utils/fetch.py", line 44, in <listcomp>
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/dataset.py", line 272, in __getitem__
return self.dataset[self.indices[idx]]
File "<ipython-input-41-94e93ff7a132>", line 72, in __getitem__
target = T.ToTensor()(target)
File "/usr/local/lib/python3.6/dist-packages/torchvision/transforms/transforms.py", line 104, in __call__
return F.to_tensor(pic)
File "/usr/local/lib/python3.6/dist-packages/torchvision/transforms/functional.py", line 64, in to_tensor
raise TypeError('pic should be PIL Image or ndarray. Got {}'.format(type(pic)))
TypeError: pic should be PIL Image or ndarray. Got <class 'dict'>
A: I believe the Pytorch transforms only work on images (PIL images or np arrays in this case) and not labels (which are dicts according to the trace). As such, I don't think you need to "tensorify" the labels as in this line target = T.ToTensor()(target) in the __getitem__ function.
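One way to see why the tutorial's original version works: the T it imports is not torchvision.transforms but the tutorial's own references/detection/transforms.py, whose Compose passes the image and the target dict through every transform together, while torchvision's T.ToTensor only accepts images — hence the TypeError on the dict. A torch-free sketch of that paired pattern (the hflip toy transform and the list-of-rows "image" are made up purely for illustration):

```python
# A paired Compose in the style of the detection tutorial's helpers:
# every transform receives and returns BOTH the image and the target
# dict, so box coordinates can be kept in sync with image changes.
class Compose:
    def __init__(self, transforms):
        self.transforms = transforms

    def __call__(self, image, target):
        for t in self.transforms:
            image, target = t(image, target)
        return image, target

# Hypothetical toy transform: horizontally flips the "image" (here just
# a list of pixel rows) and mirrors the target's boxes accordingly.
def hflip(image, target):
    width = len(image[0])
    image = [row[::-1] for row in image]
    target = dict(target)
    target['boxes'] = [(width - x2, y1, width - x1, y2)
                       for (x1, y1, x2, y2) in target['boxes']]
    return image, target

pipeline = Compose([hflip])
img, tgt = pipeline([[1, 2, 3], [4, 5, 6]], {'boxes': [(0, 0, 1, 1)]})
print(img, tgt)  # image rows reversed, box mirrored to (2, 0, 3, 1)
```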
| |
doc_23538888
|
If I add this line, it gets mixed up with my main output:
echo '<link rel="icon" href="/favicon.ico" type="image/x-icon">';
the output I'm getting
<link rel="icon" href="/favicon.ico" type="image/x-icon">hello world
my PHP code
<?php
header('Content-Type: application/json');
class Main {
function mFun() {
echo "hello world";
}
}
$main = new Main();
echo '<link rel="icon" href="/favicon.ico" type="image/x-icon">';
$main -> mFun();
?>
UPDATE
Why is no one noticing header('Content-Type: application/json');?
| |
doc_23538889
|
Category Date Value
A 01/01/2015 4
A 02/01/2015 1
B 01/01/2015 6
B 02/01/2015 7
Table 1 above has the values for each category organized by month.
Table 2
Category Date Value
A 03/01/2015 10
C 03/01/2015 66
D 03/01/2015 9
Suppose table 2 comes in, which has the values for each category in March, 2015.
Table 3
Category Date Value
A 01/01/2015 4
A 02/01/2015 1
A 03/01/2015 10
B 01/01/2015 6
B 02/01/2015 7
B 03/01/2015 0
C 01/01/2015 0
C 02/01/2015 0
C 03/01/2015 66
D 01/01/2015 0
D 02/01/2015 0
D 03/01/2015 9
I want to "outer-join" the two tables "vertically" in Python:
If Table2 has a category that Table1 doesn't have, then it adds that category to Table3 and assigns a value of 0 for 01/01/2015 and 02/01/2015. Likewise, a category that is in Table1 but not in Table2 will also be added to Table3, with a value of 0 assigned for 03/01/2015. For categories both tables share, the rows are simply stacked vertically with the values from Table1 and Table2.
Any advice or help will be greatly appreciated.. I've been thinking about this all day and still can't find an efficient way to do this.
Thanks so much!
A: I would do this using Pandas as follows (I'll call your tables df1 and df2):
First prepare the list of dates and categories for the final table using set together with concat to select only the unique values from your original tables:
import itertools

import numpy as np
import pandas as pd

dates = set(pd.concat([df1.Date, df2.Date]))
cats = set(pd.concat([df1.Category, df2.Category]))
Then we create the landing table by iterating through these sets (that's where itertools.product is a neat trick although note that you have to cast it as a list to feed it into the dataframe constructor):
df3 = pd.DataFrame(list(itertools.product(cats,dates)),columns = ['Category','Date'])
df3
Out[88]:
Category Date
0 D 01/01/2015
1 D 03/01/2015
2 D 02/01/2015
3 C 01/01/2015
4 C 03/01/2015
5 C 02/01/2015
6 A 01/01/2015
7 A 03/01/2015
8 A 02/01/2015
9 B 01/01/2015
10 B 03/01/2015
11 B 02/01/2015
Finally we bring in the values using merge (setting how='left'), using np.fmax to consolidate the two sets (you have to use fmax instead of max so that the NaNs are ignored):
df3['Value'] = np.fmax(pd.merge(df3,df1,how='left')['Value'],pd.merge(df3,df2,how='left')['Value'])
df3
Out[127]:
Category Date Value
0 D 01/01/2015 NaN
1 D 03/01/2015 9
2 D 02/01/2015 NaN
3 C 01/01/2015 NaN
4 C 03/01/2015 66
5 C 02/01/2015 NaN
6 A 01/01/2015 4
7 A 03/01/2015 10
8 A 02/01/2015 1
9 B 01/01/2015 6
10 B 03/01/2015 NaN
11 B 02/01/2015 7
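The NaN entries in the result above can be turned into the zeros shown in Table 3 with a final fillna(0). A self-contained sketch of the whole approach (same product-merge-fmax idea, with the categories and dates sorted for a stable row order):

```python
import itertools

import numpy as np
import pandas as pd

df1 = pd.DataFrame({'Category': ['A', 'A', 'B', 'B'],
                    'Date': ['01/01/2015', '02/01/2015', '01/01/2015', '02/01/2015'],
                    'Value': [4, 1, 6, 7]})
df2 = pd.DataFrame({'Category': ['A', 'C', 'D'],
                    'Date': ['03/01/2015', '03/01/2015', '03/01/2015'],
                    'Value': [10, 66, 9]})

# Build every (Category, Date) pair seen in either table.
dates = sorted(set(df1.Date) | set(df2.Date))
cats = sorted(set(df1.Category) | set(df2.Category))
df3 = pd.DataFrame(list(itertools.product(cats, dates)),
                   columns=['Category', 'Date'])

# Pull values in from both tables, consolidate with fmax (ignores NaN
# when only one side has a value), then replace the remaining gaps with 0.
df3['Value'] = np.fmax(pd.merge(df3, df1, how='left')['Value'],
                       pd.merge(df3, df2, how='left')['Value'])
df3['Value'] = df3['Value'].fillna(0).astype(int)
print(df3)
```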
| |
doc_23538890
|
(function($) {
$(document).ready( function() {
var theContent = $(".transaction-results p").last();
console.log(theContent.html());
theContent.html().replace(/Total:/, 'Total without shipping:');
});
})(jQuery);
Any thoughts?
Thank you!
A: The string was replaced, but you didn't reassign the string to the html of the element. Use return:
theContent.html(function(i,h){
return h.replace(/Total:/, 'Total without shipping:');
});
JS Fiddle demo (kindly contributed by diEcho).
References:
*
*html().
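The underlying point is that String.prototype.replace never mutates the string it is called on — it returns a new string, which is why the result has to be passed back into .html(...). A quick standalone illustration:

```javascript
// replace() returns a new string; the receiver is left untouched.
const original = "Total: 42";
const replaced = original.replace(/Total:/, 'Total without shipping:');

console.log(original); // "Total: 42" — unchanged
console.log(replaced); // "Total without shipping: 42"
```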
A: (function($) {
$(document).ready( function() {
var theContent = $(".transaction-results p").last();
console.log(theContent.html());
theContent.html(theContent.html().replace('Total:', 'Total without shipping:'));
});
})(jQuery);
Why did you use the regex /Total:/ and not 'Total:' like a normal string?
-The solution from @David Thomas works.
A: You have an extra : in the string to search for, and you also need to assign the result back to the html of theContent
Live Demo
$(document).ready( function() {
var theContent = $(".transaction-results p").last();
console.log(theContent.html());
theContent.html(theContent.html().replace(/Total/, 'Total without shipping:'));
});
| |
doc_23538891
|
TEST1:
I created a python script test1.py in the folder testenv with only:
print('Hello World')
Then I created the environment, installed pyinstaller and created the executable
D:\testenv> python -m venv venv_test
...
D:\testenv\venv_test\Scripts>activate.bat
...
(venv_test) D:\testenv>pip install pyinstaller
(venv_test) D:\testenv>pyinstaller --clean -F test1.py
And it creates my test1.exe of about 6 Mb
TEST 2: I modified test1.py as follows:
import pandas as pd
print('Hello World')
I installed pandas in the environment and created the new executable:
(venv_test) D:\testenv>pip install pandas
(venv_test) D:\testenv>pyinstaller --clean -F test1.py
And it creates my test1.exe, which is now 230 MB!!!
if I run the command
(venv_test) D:\testenv>python -V
Python 3.5.2 :: Anaconda custom (64-bit)
when I am running pyinstaller I get some messages I do not understand, for example:
INFO: site: retargeting to fake-dir 'c:\\users\\username\\appdata\\local\\continuum\\anaconda3\\lib\\site-packages\\PyInstaller\\fake-modules'
Also I am getting messages about matplotlib and other modules that have nothing to do with my code, for example:
INFO: Matplotlib backend "pdf": added
INFO: Matplotlib backend "pgf": added
INFO: Matplotlib backend "ps": added
INFO: Matplotlib backend "svg": added
I know there are some related questions:
Reducing size of pyinstaller exe, size of executable using pyinstaller and numpy
but I could not solve the problem and I am afraid I am doing something wrong with respect to anaconda.
So my questions are:
what am I doing wrong? can I reduce the size of my executable?
A: I accepted the answer above, but I'm posting here what I did step by step for complete beginners like me who easily get lost.
Before I begin I post my complete test1.py example script with all the modules I actually need. My apologies if it is a bit more complex than the original question but maybe this can help someone.
test1.py looks like this:
import matplotlib
matplotlib.use('Agg')
import matplotlib.pyplot as plt
import matplotlib.image as image
import numpy as np
import os.path
import pandas as pd
import re
from matplotlib.ticker import AutoMinorLocator
from netCDF4 import Dataset
from time import time
from scipy.spatial import distance
from simpledbf import Dbf5
from sys import argv
print('Hello World')
I added matplotlib.use('Agg') (as my actual code is creating figures)
Generating a PNG with matplotlib when DISPLAY is undefined
1) Install a new version of python independently from anaconda.
downloaded python from:
https://www.python.org/downloads/
installed selecting 'add python to path' and deselecting install launcher for all users (I don't have admin rights)
check that I am using the same version from CMD, just writing python I get:
Python 3.6.4 (v3.6.4:d48eceb, Dec 19 2017,
06:04:45) [MSC v.1900 32 bit (Intel)] on win32
Type "help", "copyright", "credits" or "license" for more information.
2) Create and activate the environment, from CMD
D:\> mkdir py36envtest
...
D:\py36envtest>python -m venv venv_py36
...
D:\py36envtest\venv_py36\Scripts>activate.bat
3) Install in the environment all the modules needed in the script
Making sure they are compatible to the python version with the command:
(from Matplotlib not recognized as a module when importing in Python)
(venv_py36) D:\py36envtest> python -m pip install nameofmodule
NB: in my case I also had to add the option --proxy https://00.000.000.00:0000
for the example I used development version of py installer:
(venv_py36) D:\py36envtest> python -m pip install https://github.com/pyinstaller/pyinstaller/archive/develop.tar.gz
and the modules: pandas, matplolib, simpledbf, scipy, netCDF4. At the end my environment looks like this.
(venv_py36) D:\py36envtest> pip freeze
altgraph==0.15
cycler==0.10.0
future==0.16.0
macholib==1.9
matplotlib==2.1.2
netCDF4==1.3.1
numpy==1.14.0
pandas==0.22.0
pefile==2017.11.5
PyInstaller==3.4.dev0+5f9190544
pyparsing==2.2.0
pypiwin32==220
python-dateutil==2.6.1
pytz==2017.3
scipy==1.0.0
simpledbf==0.2.6
six==1.11.0
style==1.1.0
update==0.0.1
4) Create/modify the .spec file (when you run pyinstaller it creates a .spec file, you can rename).
Initially I got a lot of ImportError: DLL load failed (especially for scipy) and missing module error which I solved thanks to these posts:
What is the recommended way to persist (pickle) custom sklearn pipelines?
and the comment to this answer:
Pyinstaller with scipy.signal ImportError: DLL load failed
My inputtest1.spec finally looks like this:
# -*- mode: python -*-
options = [ ('v', None, 'OPTION')]
block_cipher = None
a = Analysis(['test1.py'],
pathex=['D:\\py36envtest', 'D:\\py36envtest\\venv_py36\\Lib\\site-packages\\scipy\\extra-dll' ],
binaries=[],
datas=[],
hiddenimports=['scipy._lib.messagestream',
'pandas._libs.tslibs.timedeltas'],
hookspath=[],
runtime_hooks=[],
excludes=[],
win_no_prefer_redirects=False,
win_private_assemblies=False,
cipher=block_cipher)
pyz = PYZ(a.pure, a.zipped_data,
cipher=block_cipher)
exe = EXE(pyz,
a.scripts,
a.binaries,
a.zipfiles,
a.datas,
name='test1',
debug=False,
strip=False,
upx=True,
runtime_tmpdir=None,
console=True )
5) Finally make the executable with the command
(venv_py36) D:\py36envtest>pyinstaller -F --clean inputtest1.spec
my test1.exe is 47.6 Mb, the .exe of the same script created from an anaconda virtual environment is 229 Mb.
I am happy (and if there are more suggestions they are welcome)
A: The problem is that you should not be using a virtual environment and especially not anaconda. Please download default python 32 bit and use only necessary modules. Then follow the steps provided in the links, this should definitely fix it.
Although you created a virtual env, are you sure your spec file is not linking to old Anaconda entries?
If all this fails, then submit a bug as this is very strange.
A: I had a similar problem and found a solution.
I used Windows terminal preview. This program allows creation of various virtual environments like Windows Power Shell (btw. Linux Ubuntu too. Also, worth noting: you can have many terminals in this program installed and, even, open a few at once. Very cool stuff).
Inside Windows Power Shell in Windows terminal preview I installed all the necessary libraries (like numpy, pandas,re, etc.), then I opened the path to my file and tried to use this command:
pyinstaller --onefile -w 'filename.py'
...but the output exe didn't work. For some reason, the console said that one library was missing (one which I had installed earlier). I found the solution by mimicking the auto-py-to-exe library. The command used by this GUI is:
pyinstaller --noconfirm --onedir --console "C:/Users/something/filename.py"
And this one works well. I reduced the size of my output exe program from 911 MB to 82.9 MB!!!
BTW. 911MB was the size of output made by auto-py-to-exe.
I wonder how it is possible that no one has yet created a compressor that reads the code, checks which libraries are part of it, and then puts only those inside the bundle. In my case, auto-py-to-exe probably loaded all the libraries that I had ever installed. That would explain the size of the compressed folder.
Some suggest using https://virtualenv.pypa.io/en/stable/ but in my opinion, this library is very difficult, at least for me.
| |
doc_23538892
|
Can Microsoft.Build.BuildEngine be used to build multiple solutions at once (i.e. through multithreading)?
I know that it has some STAThread requirement, although I've never quite understood what that means for my programs.
Edit: Let me clarify. I know that MSBuild can do multithreaded build of projects in a solution. The question I have is whether Microsoft.Build.BuildEngine.Engine can be used by different threads to build different solutions/projects.
I tried creating separate Engines in different threads, and it didn't work. When I tried using one Engine, in one thread, it worked.
Can the Microsoft.Build.BuildEngine .NET libraries build more than one project/solution at a time?
A: It depends on what you're doing. Yes you can do multiple parallel project builds (in VS go to Tools -> Options -> Projects and Solutions -> Build and Run). If you are running MSBuild then you can use the /m[:number] command. As for BuildEngine that has been deprecated in favor of Microsoft.Build.Construction/Microsoft.Build.Evaluation/Microsoft.Build.Execution, which I believe the BuildParameters.MaxNodeCount is what sets the number of threads to use. Although I would personally recommend just running msbuild itself or use the IDE, it's much simpler.
| |
doc_23538893
|
The goal is to show the n lines (rows) in the data preceding the penalty, to find a pattern for why he got the penalty.
So here is the data:
d = {'balance_id': [70775,70775 ,70775,70775,70775], 'amount': [2500, 2450,-500,500,2700]
,'description':['payment_for_job_order_080ecd','payment_for_job_order_180eca'
,'penalty_for_being_absent_at_job','balance_correction','payment_for_job_order_120ecq']}
df1 = pd.DataFrame(data=d)
df1
balance_id amount description
0 70775 2500 payment_for_job_order_080ecd
1 70775 2450 payment_for_job_order_180eca
2 70775 -500 penalty_for_being_absent_at_job
3 70775 500 balance_correction
4 70775 2700 payment_for_job_order_120ecq
I try this:
df1.loc[df1['description']=='balance_correction'].iloc[:-2]
and get nothing. Also, using shift doesn't help. If we need 2 rows to show, the result should be
balance_id amount description
1 70775 2450 payment_for_job_order_180eca
2 70775 -500 penalty_for_being_absent_at_job
What can solve the issue?
A: If the index on your data frame is sequential (0, 1, 2, 3, ...), you can filter by the index:
idx = df1.loc[df1['description'] == 'balance_correction'].index
df1.loc[(idx - 2).append(idx - 1)]
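Run against the question's frame, that index arithmetic picks out rows 1 and 2. A self-contained sketch (with n hard-coded to 2, as in the example):

```python
import pandas as pd

d = {'balance_id': [70775, 70775, 70775, 70775, 70775],
     'amount': [2500, 2450, -500, 500, 2700],
     'description': ['payment_for_job_order_080ecd',
                     'payment_for_job_order_180eca',
                     'penalty_for_being_absent_at_job',
                     'balance_correction',
                     'payment_for_job_order_120ecq']}
df1 = pd.DataFrame(data=d)

# Find where the correction sits, then select the two preceding rows
# by subtracting from its (sequential) index.
idx = df1.loc[df1['description'] == 'balance_correction'].index
preceding = df1.loc[(idx - 2).append(idx - 1)]
print(preceding)
```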
| |
doc_23538894
|
Private Sub textbox1_AfterUpdate()
If IsNumeric(textbox1.Value) = False Then
Me!textbox1.Undo
MsgBox "only numbers are allowed"
Exit Sub
End If
Exit Sub
using BeforeUpdate event:
Private Sub textbox1_BeforeUpdate(Cancel As Integer)
If IsNumeric(textbox1.Value) = False Then
MsgBox "only numbers are allowed"
Me!textbox1.Undo
Cancel = True
Exit Sub
End If
Exit Sub
My current code does not execute at all. I have also tried it in the textbox1_BeforeUpdate event. Please see code.
New Code:
Public Function IsValidKeyAscii(ByVal keyAscii As Integer, ByVal value As String) As Boolean
    IsValidKeyAscii = (keyAscii = vbKeyDot And InStr(1, value, Chr$(vbKeyDot)) = 0) Or (keyAscii >= vbKey0 And keyAscii <= vbKey9)
End Function
Private Sub textbox1_KeyDown(KeyCode As Integer, Shift As Integer)
If Not IsValidKeyAscii(KeyCode, textbox1.value) Then KeyCode = 0
End Sub
A: You shouldn't be using VBA for this task at all.
Just set the field format property to General number. That's the built-in way to ensure users can only enter numbers in a field.
A: Write a validator function (could be in its own KeyInputValidator class or module), so you can reuse this logic everywhere you need it, instead of copy/pasting it for every numeric textbox you need:
Option Explicit
Private Const vbKeyDot As Integer = 46
'@Description("returns true if specified keyAscii is a number, or if it's a dot and value doesn't already contain one")
Public Function IsValidKeyAscii(ByVal keyAscii As Integer, ByVal value As String) As Boolean
IsValidKeyAscii = (keyAscii = vbKeyDot And InStr(1, value, Chr$(vbKeyDot)) = 0) Or (keyAscii >= vbKey0 And keyAscii <= vbKey9)
End Function
Then use it in the textboxes' KeyPress event handler (assuming this is a MSForms textbox control) to determine whether or not to accept the input - since the event provides a MSForms.ReturnInteger object, that object's Value property can be set to 0 to "swallow" a keypress:
Private Sub TextBox1_KeyPress(ByVal keyAscii As MSForms.ReturnInteger)
If Not IsValidKeyAscii(keyAscii.Value, TextBox1.value) Then keyAscii.Value = 0
End Sub
That way you don't need to undo any inputs, or pop any annoying warning or message boxes: the value in the field is guaranteed to be a valid numeric value!
EDIT the above event handler signature is for a MSForms control. Looks like Access uses a different interface:
Private Sub TextBox1_KeyDown(KeyCode As Integer, Shift As Integer)
Here the KeyCode is passed ByRef, so you can alter it directly. In other words, this becomes the logic:
If Not IsValidKeyAscii(KeyCode, TextBox1.value) Then KeyCode = 0
A: You can try using the lost focus event:
Private Sub TextBox1_LostFocus()
Dim blnNumber As Boolean
Dim strNumber As String
strNumber = TextBox1.Value
blnNumber = IsNumeric(strNumber)
If Not blnNumber Then
Me!TextBox1.Undo
MsgBox "only numbers are allowed"
Else
'And, if you want to force a decimal.
If InStr(strNumber, ".") < 1 Then
Me!TextBox1.Undo
MsgBox "only doubles are allowed"
End If
End If
End Sub
Also, check the Textbox1 element that you have listed in Access. Is its name TextBox1, or something else?
For example, in excel it is represented like the following: =EMBED("Forms.TextBox.1","") even though the name that the code references is TextBox1.
| |
doc_23538895
|
I'm using this currently:
var textRange = new TextRange(TextInput.Document.ContentStart, TextInput.Document.ContentEnd);
string[] lines = textRange.Text.Split('\n');
A: RichTextBox hosts a FlowDocument, which does not have a Lines property. What you are doing seems like a good solution. You may want to use IndexOf instead of Split.
You can also add an extension method like the article suggests:
public static long Lines(this string s)
{
long count = 1;
int position = 0;
while ((position = s.IndexOf('\n', position)) != -1)
{
count++;
position++; // Skip this occurance!
}
return count;
}
A: I know I'm very late to the party, but I came up with another reliable and reusable solution using RTF parsing.
Idea
In RTF, every paragraph ends with a \par. So e.g. if you enter this text
Lorem ipsum
Foo
Bar
in a RichTextBox, it will internally be stored as (very very simplified)
\par
Lorem ipsum\par
Foo\par
Bar\par
therefore, it is a quite reliable method to simply count the occurrences of those \par commands. Note though that there is always 1 more \par than there are actual lines.
Usage
Thanks to extension methods, my proposed solution can simply be used like this:
int lines = myRichTextBox.GetLineCount();
where myRichTextBox is an instance of the RichTexBox class.
Code
public static class RichTextBoxExtensions
{
/// <summary>
/// Gets the content of the <see cref="RichTextBox"/> as the actual RTF.
/// </summary>
public static string GetAsRTF(this RichTextBox richTextBox)
{
using (MemoryStream memoryStream = new MemoryStream())
{
TextRange textRange = new TextRange(richTextBox.Document.ContentStart, richTextBox.Document.ContentEnd);
textRange.Save(memoryStream, DataFormats.Rtf);
memoryStream.Seek(0, SeekOrigin.Begin);
using (StreamReader streamReader = new StreamReader(memoryStream))
{
return streamReader.ReadToEnd();
}
}
}
/// <summary>
/// Gets the content of the <see cref="RichTextBox"/> as plain text only.
/// </summary>
public static string GetAsText(this RichTextBox richTextBox)
{
return new TextRange(richTextBox.Document.ContentStart, richTextBox.Document.ContentEnd).Text;
}
/// <summary>
/// Gets the number of lines in the <see cref="RichTextBox"/>.
/// </summary>
public static int GetLineCount(this RichTextBox richTextBox)
{
// Idea: Every paragraph in a RichTextBox ends with a \par.
// Special handling for empty RichTextBoxes, because while there is
// a \par, there is no line in the strict sense yet.
if (String.IsNullOrWhiteSpace(richTextBox.GetAsText()))
{
return 0;
}
// Simply count the occurrences of \par to get the number of lines.
// Subtract 1 from the actual count because the first \par is not
// actually a line for reasons explained above.
return Regex.Matches(richTextBox.GetAsRTF(), Regex.Escape(@"\par")).Count - 1;
}
}
A: int lines = MainTbox.Document.Blocks.Count;
it is as simple is this.
| |
doc_23538896
|
{
"id":"2461",
"name":"GEORGIA INSTITUTE OF <leo_highlight style=border-bottom: 2px solid rgb(255, 255, 150); background-c",
"logo":"",
"address":null,
"city":null,
"state":null,
"campus_uri":"{{PATH}}2461\/"
},
....
....
When I do strip_tags on this one, the whole JSON string gets truncated at the name field above. The JSON string looks like this:
{"id":"2461","name":"GEORGIA INSTITUTE OF
All below this line is gone. This is a huge JSON. But its getting truncated here.
Any ideas on what to do? I need to strip out all HTML tags.
Update:
Adding more details...
This JSON string comes from encoding the query results array. So I get an array from the MySQL query, encode it with json_encode, and then try strip_tags on it.
A: $array = json_decode($json, true);
array_walk_recursive($array, function (&$val) { $val = strip_tags($val); });
$json = json_encode($array);
As simple as that: decode it, walk through it, and encode it again.
A: Strip out the tags after you have decoded the JSON object. You might do this in a lazy fashion (i.e. when needed) rather than go through every item and strip tags on all fields.
| |
doc_23538897
|
Is it something I can do in the table itself?
Please help
Thank you.
A: If I have understood you correctly, tables can be overridden to do what you need:
Table t = new Table() {
@Override
protected String formatPropertyValue(Object rowId, Object colId,
Property property) {
Object v = property.getValue();
if (v instanceof Date) {
Date dateValue = (Date) v;
return "Formatted date: " + (1900 + dateValue.getYear())
+ "-" + dateValue.getMonth() + "-"
+ dateValue.getDate();
}
return super.formatPropertyValue(rowId, colId, property);
}
};
example from vaadin forum
| |
doc_23538898
|
Can anyone help ?
parameters:
- name: parameter1
type: object
default:
IN:
Test1:
folderPath: abc/myFolder1
Test2:
folderPath: abc/myFolder2
Test3:
folderPath: abc/myFolder3
US:
Test4:
folderPath: xyz/myFolder4
CA:
Test5:
folderPath: lmn/myFolder5
| |
doc_23538899
|
So I have done all the steps according to the documentation for the plugin:
I have created a folder auth with the controllers and views that come with the BitAuth library, and called the module controller and method, yet this gives me a 404 error.
I noticed one difference between the test module and my BitAuth module. For instance, when I call
<?php Modules::run('module/controller/method'); ?>
this in the test module and then visit localhost/home/index, the page shows the home page view and renders the view of the module. However, when I call this in my BitAuth module and visit localhost, I get redirected to localhost/bithauth_controller/bitauth_method, and that throws a 404 error.
My folder structure:
->application
-->controllers
-->views
-->models
--->modules/auth/controllers
--->modules/auth/views/examples
--->modules/auth/models
My home controller that maps to home url: localhost
class Home extends MX_Controller {
public function index()
{
$this->load->view('home');
}
}
and its view file:
<html>
<head>
<title>Shop: index</title>
</head>
<body>
<?php Modules::run('auth/Example/index'); ?>
</body>
</html>
Now in the auth folder i have bithauth controller:
class Example extends MX_Controller
{
/**
* Example::__construct()
*
*/
public function __construct()
{
parent::__construct();
$this->load->library('bitauth');
$this->load->helper('form');
$this->load->helper('url');
$this->load->library('form_validation');
$this->form_validation->set_error_delimiters('<div class="error">', '</div>');
}
public function index()
{
if( ! $this->bitauth->logged_in())
{
$this->session->set_userdata('redir', current_url());
redirect('example/login');
}
$this->load->view('example/users', array('bitauth' => $this->bitauth, 'users' => $this->bitauth->get_users()));
}
}
And the view in auth/views: a simple form
<body>
<?php
echo '<table border="0" cellspacing="0" cellpadding="0" id="table">';
echo '<caption>BitAuth Example: Users</caption>';
echo '<tr><th width="1">ID</th><th>Username</th><th>Full Name</th><th>Actions</th></tr>';
if( ! empty($users))
{
foreach($users as $_user)
{
$actions = '';
if($bitauth->has_role('admin'))
{
$actions = anchor('example/edit_user/'.$_user->user_id, 'Edit User');
if( ! $_user->active)
{
$actions .= '<br/>'.anchor('example/activate/'.$_user->activation_code, 'Activate User');
}
}
echo '<tr>'.
'<td>'.$_user->user_id.'</td>'.
'<td>'.$_user->username.'</td>'.
'<td>'.$_user->fullname.'</td>'.
'<td>'.$actions.'</td>'.
'</tr>';
}
}
echo '</table>';
echo '<div id="bottom">';
echo anchor('example/logout', 'Logout', 'style="float: right;"');
echo anchor('example/groups', 'View Groups');
if($bitauth->is_admin())
{
echo '<br/>'.anchor('example/add_user', 'Add User');
}
echo '</div>';
?>
</body>
I noticed that the redirect was happening due to the if() statement in the index() method. I've solved that now; however, I don't get the view of my module displayed, and I don't get any errors.
A:
i dont get the view of my module displayed and i dont get any errors.
try this:
<?php echo Modules::run('module/controller/method'); ?>
but better to use this way (Codeigniter Output class) :
<?php $this->output->append_output( Modules::run('module/controller/method') ); ?>
i hope this helps
A: So if any one of you is here, looking for a solution when using a REQUEST_METHOD (GET/POST/DELETE/PUT) on your route, and facing the 404 error, here's the solution:
Apparently, CodeIgniter's HMVC plugin forgot to handle the different types of routing requests, unlike its native version. When you define a route for a specific type of request, it fails to load the associated module. So, I went ahead and tweaked the module class. If you've maintained your code structure as specified by others above, you need to go to application/third_party/MX/Modules.php and add the following at line 234:
$method = $_SERVER['REQUEST_METHOD'];
$val = is_array($val) ?(empty($val[$method]) ?'' :$val[$method]) :$val;
|