Q: Firebase Query Array Angular Using "in" I am facing a problem right now, and I really don't understand where my mistake is.
I am trying to query with the @angular/fire library.
The following is my query code:
return this.firestore.collection('Transaksi', ref => ref
.where('travel', '==', travel)
.where('waktu', 'in', [1, 5]) // this line is the problem
.where('tanggal', '==', tanggal)
.where('dariKota', '==', dari)
.where('menujuKota', '==', menuju))
.valueChanges()
But I only get the documents where waktu == 5.
If I use:
return this.firestore.collection('Transaksi', ref => ref
.where('travel', '==', travel)
.where('waktu', '==', 1)
.where('tanggal', '==', tanggal)
.where('dariKota', '==', dari)
.where('menujuKota', '==', menuju))
.valueChanges()
I get the correct values when I use waktu == 1 or waktu == 5 (as normal).
But I didn't get anything when I used waktu in [1, 5].
When I tried waktu in [5, 1], I did get results.
Is there something I am missing?
Or is 'in' still not supported in Angular?
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/62032381",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: In regex, is it possible to accurately extract a JSON scalar value? I wish to extract a scalar value from JSON.
*I know JSON uses double quotes.
*I know the datatype of the scalar: string, number, date, or boolean.
*I know the scalar will be on the first level, i.e., not an attribute of an embedded object.
{ "want": "string" } => "string"
{ "want": 123 } => 123
{ "not": { "want": "wrong" }, "want": "right" } => "right
{ "nothing": 0 } => null / not found
I do not know how to handle the opening/closing quotes, nor do I know how to handle embedded objects.
Is this possible?
This is the best I have come up with so far:
// match `want` attribute
(?:"want"\s*:\s*)
// string, number, boolean or null
(((?:")([^"]*)(?:"))|([-0-9][.eE0-9]*)|true|false|null)
// followed by comma or right bracket
(?:\s*(,|}))
It's good because it:
*can be run in Postgres
*grabs strings
*grabs numbers
*grabs booleans and null
It's bad because it:
*does not ensure want is a first-level attribute
*cannot handle a string value with a quote (") inside
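The pattern above can be exercised quickly outside Postgres too. A minimal Python sketch (`extract_want` is a hypothetical helper; it uses the question's prefix-group style rather than a lookbehind, since Python's `re` does not support variable-length lookbehinds):

```python
import re

# The question's pattern: match the `want` key, capture a string, number,
# boolean, or null, and require a following comma or closing brace.
PATTERN = re.compile(
    r'(?:"want"\s*:\s*)'                          # the `want` attribute
    r'("[^"]*"|[-0-9][.eE0-9]*|true|false|null)'  # scalar value
    r'(?=\s*[,}])'                                # followed by , or }
)

def extract_want(doc):
    m = PATTERN.search(doc)
    return m.group(1) if m else None

print(extract_want('{ "want": "string" }'))  # '"string"' (quotes included)
print(extract_want('{ "want": 123 }'))       # '123'
# The first-level weakness in action: the nested "want" wins.
print(extract_want('{ "not": { "want": "wrong" }, "want": "right" }'))  # '"wrong"'
print(extract_want('{ "nothing": 0 }'))      # None
```

The third call demonstrates the first-level limitation from the cons list: the regex happily matches the `want` inside the embedded object.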
A: This expression will get you 50% of the way there:
(?<=:\s*)(".*?"(?<!\\")|\-?(0|[1-9]\d*)(\.\d+)?([eE][+-]?\d+)?)(?=\s*})
Or, when written as a multi-line regex:
(?x:
(?<=:\s*) # After : + space
(
".*?"(?<!\\") # String in double quotes
| # -or-
\-? # Optional leading -ve
(0|[1-9]\d*) # Number
(\.\d+)? # Optional fraction
([eE][+-]?\d+)? # Optional exponent
)
(?=\s*}) # space + }
)
This will not match your nested object example ({ "not": { "want" ...) or rather, it will match, but on the wrong thing. Also, your final example ({ "nothing": 0 } => null / not found) is difficult because 0 is a valid number. To work around this problem, I would just check the result in procedural code and replace a result of 0 with null.
The nested objects problem is a whole different ball game though. It's getting into the realm of lexical analysis rather than simple tokenizing. At that point, you might as well just use a JSON library because you'd be writing a full JSON parser anyway. Fortunately, JSON is a simple enough grammar that it wouldn't be that expensive to use a third party library - certainly no more than doing it yourself.
I think the short answer is: from a simple { "name" : <value> } object, yes, but from anything more complicated, no.
For info on the JSON syntax, see http://www.json.org/.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/11047424",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Simple Ray Tracing with Lambertian Shading, Confusion I didn't see another post with a problem similar to mine, so hopefully this is not redundant.
I've been reading a book on the fundamentals of computer graphics (third edition) and I've been implementing a basic ray tracing program based on the principles I've learned from it. I had little trouble implementing parallel and perspective projection but after moving onto Lambertian and Blinn-Phong Shading I've run into a snag that I'm having trouble figuring out on my own.
I believe my problem is related to how I am calculating the ray-sphere intersection point and the vectors to the camera/light. I attached a picture that is output when I run simply perspective projection with no shading.
Perspective Output
However, when I attempt the same scene with Lambertian shading the spheres disappear.
Blank Output
While trying to debug this myself I noticed that if I negate the x, y, z coordinates calculated as the hit point, the spheres appear again. And I believe the light is coming from the opposite direction I expect.
Lambertian, negated hitPoint
I am calculating the hit point by adding the product of the projected direction vector and the t value, calculated by the ray-sphere intersection formula, to the origin (where my "camera" is, 0,0,0) or just e + td.
The vector from the hit point to the light, l, I am setting to the light's position minus the hit point's position.
v, the vector from the hit point to the camera, I am getting by simply negating the projected view vector.
And the surface normal I am getting from the hit point minus the sphere's position.
All of which I believe is correct. However, while stepping through the part that calculates the surface normal, I notice something I think is odd. When subtracting the sphere's position from the hit point's position to get the vector from the sphere's center to the hit point, I believe I should expect a vector where all of the values lie within the range (-r, r); but that is not happening.
This is an example from stepping through my code:
Calculated hit point: (-0.9971, 0.1255, -7.8284)
Sphere center: (0, 0, 8) (radius is 1)
After subtracting, I get a vector whose z value is -15.8284. This seems wrong to me, but I do not know what is causing it. Would a z value of -15.8284 not imply that the sphere center and the hit position are ~16 units away from each other along the z axis? These two numbers are within 1 of each other in absolute-value terms, which is what leads me to think my problem has something to do with this.
Here's the main ray-tracing loop:
auto origin = Position3f(0, 0, 0);
for (int i = 0; i < numPixX; i++)
{
for (int j = 0; j < numPixY; j++)
{
for (SceneSurface* object : objects)
{
float imgPlane_u = left + (right - left) * (i + 0.5f) / numPixX;
float imgPlane_v = bottom + (top - bottom) * (j + 0.5f) / numPixY;
Vector3f direction = (w.negated() * focal_length) + (u * imgPlane_u) + (v * imgPlane_v);
Ray viewingRay(origin, eye, direction);
RayTestResult testResult = object->TestViewRay(viewingRay);
if (testResult.m_bRayHit)
{
Position3f hitPoint = (origin + (direction) * testResult.m_fDist);//.negated();
Vector3f light_direction = (light - hitPoint).toVector().normalized();
Vector3f view_direction = direction.negated().normalized();
Vector3f surface_normal = object->GetNormalAt(hitPoint);
image[j][i] = object->color * intensity * fmax(0, surface_normal * light_direction);
}
}
}
}
GetNormalAt is simply:
Vector3f Sphere::GetNormalAt(Position3f &surface)
{
return (surface - position).toVector().normalized();
}
My spheres are positioned at (0, 0, 8) and (-1.5, -1, 6) with radius 1.0f.
My light is at (-3, -3, 0) with an intensity of 1.0f.
I ignore any intersection where t is not greater than 0 so I do not believe that is causing this problem.
I think I may be making some kind of mistake when it comes to keeping positions and vectors in the same coordinate system (same transform?), but I'm still learning and admittedly don't understand that very well. If the view direction is always in the -w direction, why do we position scene objects in the positive w direction?
Any help or wisdom is greatly appreciated. I'm teaching this all to myself so far and I'm pleased with how much I've taken in, but something in my gut tells me this is a relatively simple mistake.
Just in case it is of any use, here's the TestViewRay function:
RayTestResult Sphere::TestViewRay(Ray &viewRay)
{
RayTestResult result;
result.m_bRayHit = false;
Position3f &c = position;
float r = radius;
Vector3f &d = viewRay.getDirection();
Position3f &e = viewRay.getPosition();
float part = d*(e - c);
Position3f part2 = (e - c);
float part3 = d * d;
float discriminant = ((part*part) - (part3)*((part2*part2) - (r * r)));
if (discriminant > 0)
{
float t_add = ((d) * (part2)+sqrt(discriminant)) / (part3);
float t_sub = ((d) * (part2)-sqrt(discriminant)) / (part3);
float t = fmin(t_add, t_sub);
if (t > 0)
{
result.m_iNumberOfSolutions = 2;
result.m_bRayHit = true;
result.m_fDist = t;
}
}
else if (discriminant == 0)
{
float t_add = ((d)* (part2)+sqrt(discriminant)) / (part3);
float t_sub = ((d)* (part2)-sqrt(discriminant)) / (part3);
float t = fmin(t_add, t_sub);
if (t > 0)
{
result.m_iNumberOfSolutions = 1;
result.m_bRayHit = true;
result.m_fDist = t;
}
}
return result;
}
EDIT:
I'm happy to report I figured out my problem.
Upon sitting down with my sister to look at this I noticed in my ray-sphere hit detection I had this:
float t_add = ((d) * (part2)+sqrt(discriminant)) / (part3);
Which is incorrect. d should be negative. It should be:
float t_add = ((neg_d * (e_min_c)) + sqrt(discriminant)) / (part2);
(I renamed a couple of variables.) Previously I had a zeroed vector so I could express -d as (zero_vector - d), and I had removed that because I implemented a member function to negate any given vector; but I forgot to go back and call it on d. After fixing that and moving my spheres into the negative z plane, my Lambertian and Blinn-Phong shading implementations work correctly.
Lambertian + Blinn-Phong
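For reference, the fix described in the edit amounts to the standard quadratic form of the ray-sphere test. Here is a minimal sketch (in Python rather than the question's C++; `ray_sphere_t` is a hypothetical helper), which also shows why a sphere placed at positive z is never hit by a ray looking down -z:

```python
import math

def ray_sphere_t(e, d, c, r):
    """Smallest positive t with |e + t*d - c|^2 = r^2, or None on a miss."""
    ec = [ei - ci for ei, ci in zip(e, c)]         # e - c
    a = sum(di * di for di in d)                   # d . d
    b = 2.0 * sum(di * x for di, x in zip(d, ec))  # 2 d . (e - c)
    k = sum(x * x for x in ec) - r * r             # |e - c|^2 - r^2
    disc = b * b - 4.0 * a * k
    if disc < 0:
        return None                                # ray misses the sphere
    root = math.sqrt(disc)
    hits = [t for t in ((-b - root) / (2 * a), (-b + root) / (2 * a)) if t > 0]
    return min(hits) if hits else None             # None if sphere is behind the eye

# Eye at the origin looking down -z: a sphere at z = -8 is hit at t = 7 ...
print(ray_sphere_t((0, 0, 0), (0, 0, -1), (0, 0, -8), 1.0))  # 7.0
# ... while the same sphere at z = +8 yields only negative roots, i.e. no hit.
print(ray_sphere_t((0, 0, 0), (0, 0, -1), (0, 0, 8), 1.0))   # None
```

With the sign of the d·(e - c) term flipped, as in the original code, the roots for the z = +8 sphere come out positive, which is why the buggy version "saw" spheres behind the expected view direction.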
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/50057825",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: The type returned from the numeric index must be a subtype of the type returned from the string index In http://www.typescriptlang.org/Handbook#interfaces-array-types,
what does it mean that
"with the restriction that the type returned from the numeric index must be a subtype of the type returned from the string index."
Can someone give an example?
A: Just for reference, there are two types of index signatures, string and numeric.
String index signature:
[index: string]: SomeType
This says that when I access a property of this object by a string index, the property will have the type SomeType.
Numeric index signature:
[index: number]: SomeOtherType
This says that when I access a property of this object by a numeric index, the property will have the type SomeOtherType.
To be clear, accessing a property by string index is like this:
a["something"]
And by numeric index:
a[123]
You can define both a string index signature and a numeric index signature, but the type for the numeric index must either be the same as the string index, or it must be a subclass of the type returned by the string index.
So, this is OK:
interface SomeInterface {
[index: string]: Fruit;
[index: number]: Fruit;
}
Because both index signatures have the same type Fruit. But you can also do this:
interface SomeInterface {
[index: string]: Fruit;
[index: number]: Apple;
}
As long as Apple is a subclass of Fruit.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/33626047",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: Bootstrap making span width should not exceed image size I have a requirement to show an image and text together in the same column, where they both share the same width. By default, the text will exceed the image's width if the text is long enough. I managed to solve the issue by giving a max-width style to the span with some JavaScript code. I wonder, is there a way to solve this issue without some JavaScript black magic?
<link href="https://cdn.jsdelivr.net/npm/bootstrap@4.6.0/dist/css/bootstrap.min.css" rel="stylesheet" />
<script src="https://code.jquery.com/jquery-3.5.1.slim.min.js"></script>
<script src="https://cdn.jsdelivr.net/npm/bootstrap@4.6.0/dist/js/bootstrap.bundle.min.js"></script>
<div class="d-flex flex-column align-items-center">
<!-- LONG ASS TEXT START -->
<span class="flex-grow-1 flex-shrink-1 text-break"> Lorem ipsum dolor sit amet, consectetur adipiscing elit. Nam a nisi aliquet, tempus felis non, sollicitudin erat. Phasellus dapibus lobortis magna, at maximus dui tincidunt sit amet. Suspendisse fringilla gravida velit, id pharetra libero semper in. Proin in lacus dui. In ultricies suscipit tortor, vel pharetra arcu efficitur non. Donec sed quam tellus. Maecenas lacinia vehicula mauris, eget facilisis metus elementum ut. Aliquam ullamcorper nisl non nunc fermentum pellentesque. Sed at pretium ligula. Quisque sit amet metus porta, scelerisque ligula nec, vulputate tortor. Curabitur blandit, tellus efficitur sollicitudin imperdiet, risus quam molestie risus, a mattis orci mauris bibendum velit. Curabitur volutpat porttitor enim ut fringilla. Nunc sit amet odio bibendum, tempus nulla vulputate, cursus nisl. Praesent tincidunt turpis ac risus ultrices mollis. Phasellus ipsum arcu, pharetra ut lectus luctus, sollicitudin pretium purus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris sagittis maximus ex, in dapibus tellus rhoncus at. Etiam id vestibulum arcu. Aenean sed hendrerit dui, et vehicula mi. Donec posuere sapien ac bibendum lobortis. Phasellus dignissim tempus ante, id tincidunt dolor consequat ut. Morbi aliquam porttitor condimentum. Aenean auctor libero erat, pretium facilisis lacus pharetra id. Etiam ipsum enim, faucibus eu felis id, pharetra blandit lectus. Integer ut metus ac odio bibendum egestas. Suspendisse potenti. Aenean ut feugiat mi. Vestibulum commodo massa dolor, ut tincidunt diam posuere at. Ut quis vehicula nulla, in imperdiet erat. Duis sed eleifend magna. Donec vulputate, lectus vel convallis aliquam, odio justo semper odio, a vulputate ante mauris non nisl. Sed sollicitudin tortor sit amet porta vehicula. Fusce sodales lorem eu dolor sodales, eu scelerisque nisl imperdiet. Duis hendrerit ligula sed tellus finibus dignissim. 
Proin vitae nibh dictum lorem accumsan finibus sed quis est. In est metus, euismod aliquet posuere a, laoreet ut urna. Nullam nec mauris a metus blandit lobortis. Aenean sed volutpat dolor. Curabitur tristique ligula magna, non elementum orci suscipit rutrum. In efficitur tortor ut enim commodo, sed consequat lacus varius. Morbi lectus ante, suscipit nec metus egestas, iaculis semper nisi. Nam a faucibus arcu, sit amet blandit nulla. Ut leo risus, suscipit id rhoncus et, posuere vitae diam. Cras et porttitor nibh, vel ornare lacus. Etiam nec lectus nunc. Maecenas scelerisque purus sed ex gravida volutpat. Nam vehicula tortor diam, et eleifend orci maximus accumsan. Donec gravida turpis ut interdum rutrum. Phasellus non lobortis quam. Sed quis laoreet lectus. Curabitur eu faucibus massa. Fusce feugiat arcu nec ante ultricies vulputate. Mauris gravida finibus nisl. Fusce suscipit blandit erat nec gravida. Vivamus auctor dapibus augue, sit amet porttitor diam tempor eu. Vestibulum ante ipsum primis in faucibus orci luctus et ultrices posuere cubilia curae; Quisque condimentum eleifend posuere. Aliquam porttitor odio a tincidunt varius. Proin id semper orci. Suspendisse potenti. Aliquam fringilla laoreet cursus. Vestibulum dictum urna in sem elementum, sed lacinia eros tincidunt. </span>
<!-- LONG ASS TEXT END -->
<!-- IMAGE START -->
<img src="https://www.englishclub.com/images/esl-exams/toeic_office.jpg" class="img-fluid" alt="..."></div>
https://jsfiddle.net/p315wutg/
|
{
"language": "la",
"url": "https://stackoverflow.com/questions/66294875",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: OS X design decisions. Terminate the app on last window close? Unlike Windows, GNOME, and most other GUIs, OS X application programs do not all terminate when the main window (or all the windows) of that application are closed.
For example, fire up Firefox, Safari, Word, or most document-based apps. Either click the red dot in the corner or type Cmd+W to close the window. You can see that the menu of that program is still active, and the program is still running. With OS X newbies, sometimes you will see dozens of these windowless zombies running, and they wonder why their computer is getting slower.
With some document-based programs, there is some sense to not terminating the application if it has no windows. For example, with Safari or Word, you can still type Cmd+N and get a new document window for whatever that application was designed to do: browse the web (Safari) or type a new document (Word).
Apple is mixed with their design philosophy on this. Some close on the last window closed and some do not. Third party apps are even more mixed.
There are other apps that do close when their red close button is clicked. System Preferences, Dictionary, the Mac App Store, iPhoto and Calculator do terminate when the sole or last window is closed. iCal, Address Book, iTunes, DVD Player do not terminate.
What I find particularly annoying is the applications that do not have a logical "New Document" or "Open" function yet they do not terminate when the document window is closed. Example: fire up iTunes or Address Book and terminate the main window. There sits a zombie with no window and no function other than manually selecting "Quit".
It is easy to close the application after the last window closes. Cocoa even gives you notification of that event. Just add this to your application delegate:
- (BOOL)applicationShouldTerminateAfterLastWindowClosed:(NSApplication *)sender
{
return YES;
}
My question is this: Is there any reason I should NOT terminate my application after the last window closes? Why is this so variable on OS X software? Unless the app has a "new" or "open" or some other clearly understood reason to not terminate with no window open, the failure to terminate seems like a bug to me.
A: In general, never close a document-based application when the last window closes. The user will expect to be able to open a new document without relaunching the application, and it will confuse them if they can't.
For non-document based applications, you need to consider a few things:
*How long does it take for my application to open? If it takes more than a second, you should probably not quit.
*Does my application need a window to be useful? If your application can do work without windows, you should not quit.
iTunes doesn't quit because, as Anne mentioned, you don't need a window to play music (question 2). It is also not based on Cocoa, so it is much more difficult to close after the last window, especially since it allows you to open windows for specific playlists so there are an indefinite number of possible windows to be open.
In my opinion, Address Book does not need to stay open. This could be a left-over design decision from older versions of OS X, or it could be that someone at Apple just thought it was better to leave it open (maybe so you can add a contact?). Both iTunes and Address Book provide access to their main interfaces through the Window menu, as well as a keyboard shortcut (Option+Command+1 for iTunes, Command+0 for Address Book).
A: Per Apple's Human Interface Guidelines (a guide for Mac developers):
In most cases, applications that are not document-based should quit when the main window is closed. For example, System Preferences quits if the user closes the window. If an application continues to perform some function when the main window is closed, however, it may be appropriate to leave it running when the main window is closed. For example, iTunes continues to play when the user closes the main window.
A: The main iTunes window can be reopened from the 'Window' menu. Mail.app has similar behavior. I can't think of any applications that close when the last window is closed, and as such I don't think there's a good reason that your app should behave that way (in fact, I'm surprised it's not in Apple's user experience guidelines; it would really bother me!).
One reason why you'd want to close e.g. the iTunes main window but keep the app open is to be able to use the app as sort of a server for third party scripts/applications. I rarely use the main iTunes interface, but instead control my music with a third party app. I also write AppleScripts for other apps that I launch instead of interacting with that app's interface.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/5480114",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "29"
}
|
Q: EF 6 - How to correctly perform parallel queries When creating a report I have to execute 3 queries that involve separate entities of the same context. Because they are quite heavy, I decided to use .ToListAsync() in order to have them run in parallel, but, to my surprise, I get an exception out of it...
What is the correct way to perform queries in parallel using EF 6? Should I manually start new Tasks?
Edit 1
The code is basically
using(var MyCtx = new MyCtx())
{
var r1 = MyCtx.E1.Where(bla bla bla).ToListAsync();
var r2 = MyCtx.E2.Where(ble ble ble).ToListAsync();
var r3 = MyCtx.E3.Where(ble ble ble).ToListAsync();
Task.WhenAll(r1,r2,r3);
DoSomething(r1.Result, r2.Result, r3.Result);
}
A: As a matter of interest, when using EF Core with Oracle, multiple parallel operations like the post here using a single DB context work without issue (despite Microsoft's documentation). The limitation is in the Microsoft.EntityFrameworkCore.SqlServer.dll driver, and is not a generalized EF issue. The corresponding Oracle.EntityFrameworkCore.dll driver doesn't have this limitation.
A: The problem is this:
EF doesn't support processing multiple requests through the same DbContext object. If your second asynchronous request on the same DbContext instance starts before the first request finishes (and that's the whole point), you'll get an error message that your request is processing against an open DataReader.
Source: https://visualstudiomagazine.com/articles/2014/04/01/async-processing.aspx
You will need to modify your code to something like this:
async Task<List<E1Entity>> GetE1Data()
{
using(var MyCtx = new MyCtx())
{
return await MyCtx.E1.Where(bla bla bla).ToListAsync();
}
}
async Task<List<E2Entity>> GetE2Data()
{
using(var MyCtx = new MyCtx())
{
return await MyCtx.E2.Where(bla bla bla).ToListAsync();
}
}
async Task DoSomething()
{
var t1 = GetE1Data();
var t2 = GetE2Data();
await Task.WhenAll(t1,t2);
DoSomething(t1.Result, t2.Result);
}
A: Check out https://learn.microsoft.com/en-us/dotnet/framework/data/adonet/sql/enabling-multiple-active-result-sets
From the documentation:
Statement interleaving of SELECT and BULK INSERT statements is allowed. However, data manipulation language (DML) and data definition language (DDL) statements execute atomically.
Then your above code works and you get the performance benefits for reading data.
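For reference, MARS is enabled through the connection string rather than in code. A hypothetical SQL Server example (the server and database names are placeholders):

```
Server=.;Database=ReportDb;Trusted_Connection=True;MultipleActiveResultSets=True
```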
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/41749896",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "27"
}
|
Q: Displaying data from web api into tables with ajax I have a web API at the URL "http://jendela.data.kemdikbud.go.id/api/index.php/ccarisanggar/searchGet", and I intend to display it with jQuery AJAX and insert it into a table. I've been trying to find a tutorial, but it's hard to find one.
A: With jQuery you can do it this way:
$.ajax({
url: 'http://jendela.data.kemdikbud.go.id/api/index.php/ccarisanggar/searchGet',
type: 'GET',
success: function (response) {
// code to append into your table
},
error: function (jqXHR, textStatus, errorThrown) {
}
});
A: I can't show you the whole code snippet. Anyway, I hope this helps.
<table id="my_table" border='1'>
<tr>
<th>Column 1</th>
<th>Column 2</th>
<th>Column 3</th>
</tr>
</table>
<script>
var response = [{
"column_1":"90",
"column_2":"Abc",
"column_3":"50"
},
{
"column_1":"68",
"column_2":"Cde",
"column_3":"90"
}];
$(function() {
$.each(response, function(i, item) {
$('<tr>').append(
$('<td>').text(item.column_1),
$('<td>').text(item.column_2),
$('<td>').text(item.column_3)
).appendTo('#my_table');
});
});
</script>
A: As mentioned, I believe you are using CodeIgniter as your PHP framework.
To accomplish your task you need to follow the steps below:
1.) In a view file, e.g. myview.php, add this:
<div id="mydata"></div>
<script>
$.ajax({
type: "GET",
url: "http://jendela.data.kemdikbud.go.id/api/index.php/ccarisanggar/searchGet",
beforeSend: function(){
$("#mydata").html('<span style="color:green;tex-align:center;">Connecting....</span>');
},
success: function(data){
if(data!="")
{
$("#mydata").html(data);
}else{
$("#mydata").html('<span style="color:red;tex-align:center;">No data found !</span>');
}
}
});
</script>
2.) To save the data in the database, either create an event handler such as a button click, or try using the setInterval function.
<button id="mybt" onclick="save_to_db()">Save to DB</button>
<script>
function save_to_db(){
//code to format data to insert into the table
$.ajax({
type:"POST",
url:"/mycontroller/insert_function" //
data:"data_to_insert",
success:function(data){
if(data=="ok"){
console.log("inserted successfully");
}
}
})
}
</script>
A: In your HTML tag
<table id="data">
</table>
In your script tag
var url="http://jendela.data.kemdikbud.go.id/api/index.php/ccarisanggar/searchGet";
$.ajax({
type: "GET",
url: url,
cache: false,
// data: obj_data,
success: function(res){
console.log("data",res);
// if you want to remove some field, delete it from the array below
var fields = ["sanggar_id","kode_pengelolaan","nama","alamat_jalan","desa_kelurahan","kecamatan","kabupaten_kota","propinsi","lintang","bujur","tahun_berdiri","luas_tanah"];
var html='';
html+=`<thead>
<tr>`;
$.each(fields,function(key,val){
html+=`<th class="${val}">${val}</th>`;
})
html+=`</tr>
</thead>
<tbody>`;
$.each(res.data,function(key,row){
html+=`<tr>`;
$.each(fields,function(i,field){
html+=`<td>${row[field]}</td>`;
})
html+=`</tr>`;
})
html+=`</tbody>`;
$("#data").html(html);
},
});
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/45275638",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: "Manifest merger failed with multiple errors, see logs" error in android studio I'm new to kotlin, I'm practicing the basics. Anytime I run a simple code it refuses it run and I get a
Execution failed for task ':app:processDebugAndroidTestManifest'.
> Manifest merger failed with multiple errors, see logs
error.
I don't know what the cause is. Please note I am literally just running a print("Hello") call.
Here's my manifest.xml
<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
package="com.example.hellokotlin">
<application
android:allowBackup="true"
android:icon="@mipmap/ic_launcher"
android:label="@string/app_name"
android:roundIcon="@mipmap/ic_launcher_round"
android:supportsRtl="true"
android:theme="@style/Theme.HelloKotlin">
<activity
android:name=".MainActivity"
android:exported="true">
<intent-filter>
<action android:name="android.intent.action.MAIN" />
<category android:name="android.intent.category.LAUNCHER" />
</intent-filter>
</activity>
</application>
</manifest>
And here's my build.gradle:
plugins {
id 'com.android.application'
id 'kotlin-android'
}
android {
compileSdk 31
defaultConfig {
applicationId "com.example.hellokotlin"
minSdk 21
targetSdk 31
versionCode 1
versionName "1.0"
testInstrumentationRunner "androidx.test.runner.AndroidJUnitRunner"
}
buildTypes {
release {
minifyEnabled false
proguardFiles getDefaultProguardFile('proguard-android-optimize.txt'), 'proguard-rules.pro'
}
}
compileOptions {
sourceCompatibility JavaVersion.VERSION_1_8
targetCompatibility JavaVersion.VERSION_1_8
}
kotlinOptions {
jvmTarget = '1.8'
}
}
dependencies {
implementation 'androidx.core:core-ktx:1.3.2'
implementation 'androidx.appcompat:appcompat:1.2.0'
implementation 'com.google.android.material:material:1.3.0'
implementation 'androidx.constraintlayout:constraintlayout:2.0.4'
testImplementation 'junit:junit:4.+'
androidTestImplementation 'androidx.test.ext:junit:1.1.2'
androidTestImplementation 'androidx.test.espresso:espresso-core:3.3.0'
}
I get this from the build output (as logcat is empty)
Executing tasks: [:app:generateDebugSources, :app:createMockableJar, :app:generateDebugAndroidTestSources, :app:compileDebugUnitTestSources, :app:compileDebugAndroidTestSources, :app:compileDebugSources] in project C:\Users\user\AndroidStudioProjects\HelloKotlin
> Task :app:preBuild UP-TO-DATE
> Task :app:preDebugBuild UP-TO-DATE
> Task :app:compileDebugAidl NO-SOURCE
> Task :app:compileDebugRenderscript NO-SOURCE
> Task :app:compileLintChecks UP-TO-DATE
> Task :app:generateDebugBuildConfig UP-TO-DATE
> Task :app:generateDebugSources UP-TO-DATE
> Task :app:createMockableJar UP-TO-DATE
> Task :app:preDebugAndroidTestBuild SKIPPED
> Task :app:compileDebugAndroidTestAidl NO-SOURCE
> Task :app:processDebugAndroidTestManifest FAILED
C:\Users\user\AndroidStudioProjects\HelloKotlin\app\build\intermediates\tmp\manifest\androidTest\debug\tempFile1ProcessTestManifest1728079930965623012.xml Error:
android:exported needs to be explicitly specified for <activity>. Apps targeting Android 12 and higher are required to specify an explicit value for `android:exported` when the corresponding component has an intent filter defined. See https://developer.android.com/guide/topics/manifest/activity-element#exported for details.
C:\Users\user\AndroidStudioProjects\HelloKotlin\app\build\intermediates\tmp\manifest\androidTest\debug\tempFile1ProcessTestManifest1728079930965623012.xml Error:
android:exported needs to be explicitly specified for <activity>. Apps targeting Android 12 and higher are required to specify an explicit value for `android:exported` when the corresponding component has an intent filter defined. See https://developer.android.com/guide/topics/manifest/activity-element#exported for details.
C:\Users\user\AndroidStudioProjects\HelloKotlin\app\build\intermediates\tmp\manifest\androidTest\debug\tempFile1ProcessTestManifest1728079930965623012.xml Error:
android:exported needs to be explicitly specified for <activity>. Apps targeting Android 12 and higher are required to specify an explicit value for `android:exported` when the corresponding component has an intent filter defined. See https://developer.android.com/guide/topics/manifest/activity-element#exported for details.
See http://g.co/androidstudio/manifest-merger for more information about the manifest merger.
> Task :app:checkDebugAarMetadata UP-TO-DATE
> Task :app:generateDebugResValues UP-TO-DATE
> Task :app:generateDebugResources UP-TO-DATE
> Task :app:mergeDebugResources UP-TO-DATE
> Task :app:createDebugCompatibleScreenManifests UP-TO-DATE
> Task :app:extractDeepLinksDebug UP-TO-DATE
> Task :app:processDebugMainManifest UP-TO-DATE
> Task :app:processDebugManifest UP-TO-DATE
> Task :app:processDebugManifestForPackage UP-TO-DATE
> Task :app:processDebugResources UP-TO-DATE
> Task :app:compileDebugKotlin
> Task :app:javaPreCompileDebug UP-TO-DATE
> Task :app:compileDebugJavaWithJavac UP-TO-DATE
> Task :app:preDebugUnitTestBuild UP-TO-DATE
> Task :app:javaPreCompileDebugUnitTest UP-TO-DATE
> Task :app:processDebugJavaRes NO-SOURCE
> Task :app:processDebugUnitTestJavaRes NO-SOURCE
> Task :app:checkDebugAndroidTestAarMetadata UP-TO-DATE
> Task :app:generateDebugAndroidTestResValues UP-TO-DATE
> Task :app:javaPreCompileDebugAndroidTest UP-TO-DATE
> Task :app:compileDebugSources UP-TO-DATE
> Task :app:bundleDebugClasses
> Task :app:compileDebugUnitTestKotlin
w: C:\Users\user\AndroidStudioProjects\HelloKotlin\app\src\test\java\com\example\hellokotlin\ExampleUnitTest.kt: (15, 25): This expression will be resolved to Int in further releases. Please add explicit convention call
> Task :app:compileDebugUnitTestJavaWithJavac NO-SOURCE
> Task :app:compileDebugUnitTestSources UP-TO-DATE
FAILURE: Build failed with an exception.
* What went wrong:
Execution failed for task ':app:processDebugAndroidTestManifest'.
> Manifest merger failed with multiple errors, see logs
* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug option to get more log output. Run with --scan to get full insights.
* Get more help at https://help.gradle.org
Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/7.0.2/userguide/command_line_interface.html#sec:command_line_warnings
BUILD FAILED in 13s
20 actionable tasks: 4 executed, 16 up-to-date
Thanks for your help in advance.
UPDATE:
I created a new project, a no-activity project, and it seems to be going fine, unlike before when I was using a project that has an activity. BUT, I have noticed that this is a common issue, so I would love it if you could explain it to me for future reference. Thanks for your help.
A: Just update your androidx.test.ext:junit dependency to 1.1.3 or later. This should solve your problem.
androidTestImplementation "androidx.test.ext:junit:1.1.3"
A: It might be due to deprecated Gradle features being used that are incompatible with Gradle 8.0.
Try running the Gradle build with the command line argument --warning-mode=all.
It will give a detailed description of exactly which deprecated features are used, along with links to documentation regarding the fix.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/70075760",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: How is the pentaho community server installed on linux? Where can I find the pentaho community files, how is it installed, and how do I access the user console?
A: Pentaho files are found at:
https://sourceforge.net/projects/pentaho/files
The server is in the 'Business Intelligence Server' folder.
Download the zip file of the latest version and unzip it. Open a terminal at the location of the unzipped directory, then cd into the directory. Now run ./start-pentaho.sh. It is a Java application, so make sure you have Java installed.
The server is now running, and the 'user console' can be accessed through a web browser at localhost:8080 by default. It was very slow for me, just fyi. The default login is 'admin' and 'password'.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/45425314",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Apache Spark - how to see how many nodes are being used during a job run? I am using Scala Spark 2.4 and want to know the usage of the queue.
How to display how many nodes from a big cluster (100+ nodes) are being utilized for any particular job that is running?
Thanks.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/66546465",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Swift Error: exc_bad_instruction (code=exc_i386_invop subcode=0x0) I've looked on the site for the answer to my problem, but I could not find a solution that would work for me. I am getting the error every time I run my application on the iPhone 5 simulator, but I do not get the error on iPhone 6 and up. Why is this? It will crash when I press a button, or it will crash as soon as I go to the third screen.
func pickQuestion () {
if (Questions.count > 0 ) && (numberWrong < 4 ){
***qNum = Int(arc4random()) % Questions.count***
qLabel.text = Questions[qNum].Question
answerNum = Questions[qNum].Answer
for i in 0..<buttons.count {
buttons[i].setTitle(Questions[qNum].Answers[i], for: UIControlState.normal)
}
Questions.remove(at: qNum)
}
else {
actioncall()
    }
}
func actioncall () {
let storyBoard: UIStoryboard = UIStoryboard(name: "Main", bundle: nil)
let vc: GameOverViewController = storyBoard.instantiateViewController(withIdentifier: "Game Over") as! GameOverViewController
self.present(vc, animated: true, completion: nil)
}
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/40186462",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: iterator got by applying begin() on getter method doesn't allow access to first elements of pointed list I have a pointer to an object, myObject, with a getter method:
vector<string> getList();
When I create my iterator to run through the list returned by getList(), like this:
vector<string>::const_iterator it = myObject->getList().begin();
and I display it:
cout << *it << endl;
It displays me nothing (certainly an empty string).
It's similar for it+1, it+2, it+3, it+4, but from it+5 it displays the right element
Whereas, when I rewrite the code like this:
vector<string> myList = myObject.getList();
vector<string>::const_iterator it = myList.begin();
cout << *it << endl;
Everything works.
Could you help me to understand this problem please?
Thank you.
A: When you do this:
vector<string>::const_iterator it = myObject->getList().begin();
At the end of the line, the iterator it is invalid, because the vector<string> returned by getList() is a temporary value.
However, when you store the vector<string> in a local variable, with
vector<string> myList = myObject.getList();
Then, the iterator remains valid, as long as myList is not destroyed.
A: In the first case the code has undefined behaviour, because the temporary object of type std::vector<std::string> returned by myObject.getList() is destroyed at the end of the statement. So the iterator will be invalid.
Apart from the valid code in the second example you could also write
const vector<string> &myList = myObject.getList();
vector<string>::const_iterator it = myList.begin();
cout << *it << endl;
that is you could use a constant reference to the temporary object.
A: The problem is that you are returning a copy of the vector; it goes out of scope and out of existence, so the iterator becomes invalid.
What you want to do is return a const reference to your list.
const vector<string>& getList();
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/23311165",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Run other code while awaiting This is probably a stupid question, but my current project is stuck until I figure it out.
Say I have this function.
async function AtoB() {
try{
await ACheck();
let quantity = (ABalance - (1 / ABuy));
quantity = parseFloat(quantity.toFixed(4));
exchange.createMarketBuyOrder('A/B', quantity)
trade();
} catch(err) {
console.log(err);
}
}
The function is awaiting another function called ACheck() that looks like this:
async function ACheck() {
while(ABalance/ABuy < 10) {
}
return true;
}
My issue is that ABalance and ABuy are set in another function that gets their values using CCXT. When AtoB() runs, it stops a setTimeout I have in place that calls the price function every 5 seconds. This effectively means that the function will never run, since the values for ABalance and ABuy never update.
Is there any way to run the prices function at the same 5 seconds intervals while awaiting ACheck()?
Thanks in advance
A: You are not allowed to block the main thread if you want to run other things async.
Just have your setTimeout run like it does now; when it fires, calculate ABalance/ABuy. If ABalance/ABuy < 10, do nothing and wait for the next setTimeout to fire. If not, do the rest.
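As a concrete sketch of that idea, here is a hypothetical non-blocking replacement for ACheck(): instead of a busy-wait while loop, poll the condition on a timer and resolve a Promise once it holds. The function name, interval, and timeout below are illustrative, not from the original code:

```javascript
// Poll `condition` every `intervalMs` without blocking the event loop,
// so other timers (like the 5-second price updater) keep firing.
function waitFor(condition, intervalMs = 100, timeoutMs = 60000) {
  return new Promise((resolve, reject) => {
    const start = Date.now();
    const timer = setInterval(() => {
      if (condition()) {
        clearInterval(timer);
        resolve(true);
      } else if (Date.now() - start > timeoutMs) {
        clearInterval(timer);
        reject(new Error("waitFor timed out"));
      }
    }, intervalMs);
  });
}

// Hypothetical usage inside AtoB():
//   await waitFor(() => ABalance / ABuy >= 10);
```

Because waitFor yields back to the event loop between checks, the setTimeout that refreshes ABalance and ABuy keeps running, so the condition can actually become true.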
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/49310133",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Magento get salesrule id from product_id (or entity_id) We are using Magento Enterprise 1.10 and I need a way to get the salesrule_id for each product on category pages as well as product detail pages. This is pretty easy when you're using the cart.
Basically all I want to do is something like this --> Pseudo query
SELECT rule_id FROM salesrule WHERE entity_id = <entity_id>
Do you know of any way to do this outside of the cart? Is there a method I can call with a product ID that returns a rule_id if it exists for this product?
Any suggestion would greatly help me.
A: A product does not have sales rules attached to it, and there is no built-in method to load rules for a given product. On the cart page it is easy because the rule gets applied to the quote: on each item you can read applied_rule_ids, which gives you the IDs of any matched rules currently applied to the quote item, and a quote item is not the same thing as a product object. You could, for example, have multiple sales rules that apply to the same item, so applied_rule_ids would give you all the applied rules, not just one.
You could try and load a collection of sales rules and add a filter on the product_ids column and then pass in the entity id of the product. This post here explores this with some code examples and may be of some help.
A: Try to use Mage_CatalogRule_Model_Resource_Rule::getRulesFromProduct().
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/13595296",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Query a specific time-range and alert at specific time of the day I need to run a rule at 2 am, querying logs from 0 to 2 am, and alert if matches are found.
So far all the rules I created are frequency rules, but I don't know how to achieve the specific time range for the query, and a specific time for the alert, can someone please help?
(I guess the ANY type could let me add my time range as part of the filter....but then how can I run the rule at 2 am every day?)
A: In UTC:
filter:
range:
"@timestamp":
gte: "now/d+0h"
lt: "now/d+2h"
A: now uses the server's time.
filter:
- range:
"@timestamp":
"from": "now-2h"
"to": "now"
A: If you want your alert to be effective for specific hours only, you can create an enhancement that drops the alert if the current time doesn't match your needs.
check
https://elastalert.readthedocs.io/en/latest/recipes/adding_enhancements.html
regards
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/37855146",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: How can I handle multipart/form-data POST requests in my java servlet? I'm having a very hard time dealing with multipart/form-data requests with my java application server. From what I have found out, the servlet 3.0 specification provides methods such as HttpServletRequest.getParts(), which would be ideal for processing the form data uploaded to my servlet.
However, this method is part of the 3.0 servlet specification, and my application server (Tomcat 6) does not support this yet. Even with a valid 3.0 web.xml file and the java EE 6 libs, I get the following exception when trying to call getParts():
java.lang.NoSuchMethodError: javax.servlet.http.HttpServletRequest.getParts()Ljava/util/Collection;
Switching application servers is not really a feasible option for this project. Are there any third-party libraries available for processing multipart/form-data within java servlets?
A: Tomcat 6 does not and will not support Servlet Specification 3.0. You should attempt doing this on Tomcat 7, but I'm not really sure whether this functionality is present in the beta release that is currently available. The functionality is expected to be present in the production release though.
You could continue using Apache Commons FileUpload like posted in the other answer, or you could use Glassfish (depending on the current phase and type of your project).
A: Check out Apache Commons Fileupload. It gives you a programmatic API to parse a multipart request, and iterate through the parts of it individually.
I've used it in the past for straightforward multipart processing and it does the job fine without being overly complicated.
A: When we use the POST method the data is encoded, so we have to use ServletFileUpload to get the request data; using FileItemIterator we can get all the form data.
I already answered this at How to process a form sent Google Web Toolkit in a servlet
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/3510788",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
}
|
Q: Why is the instantiation of this singleton class giving different values Why do these two statements give different outcomes...
*
*print(Database().id == Database().id) (which gives False)
*but it gives True this way
d1 = Database()
d2 = Database()
print(d1.id == d2.id)
here's the entire code:
import random
class Database:
initialized = False
def __init__(self):
self.id = random.randint(1, 101)
_instance = None
def __new__(cls, *args, **kwargs):
if not cls._instance:
cls._instance = super(Database, cls).__new__(cls, *args, **kwargs)
return cls._instance
print(Database().id == Database().id)
# False
d1 = Database()
d2 = Database()
print(d1.id == d2.id)
# True
A: After __new__() returns, Python always calls __init__() if the returned instance is of the correct class. This means every Database() call is still going through __init__() and setting a new random value on id.
This id is an instance variable that you have defined; it does not affect the result of the built-in id(), which is why
print(Database() is Database())
will return True even though the id check returned False. The instance is the same; you just changed the id value in between, so the comparison evaluates to False.
In the second case, because the instance is the same, both of them get the value generated by __init__() when you assign d2. You can confirm this by adding a print(d1.id) before and after d2 is assigned.
You need to make use of the initialized boolean to correctly skip the code in __init__()
import random
class Database:
initialized = False
def __init__(self):
if not self.initialized:
self.id = random.randint(1, 101)
self.initialized = True
_instance = None
def __new__(cls, *args, **kwargs):
if not cls._instance:
cls._instance = super(Database, cls).__new__(cls, *args, **kwargs)
return cls._instance
Take a look at issue with singleton python call two times __init__ if you want to see other implementations of a singleton class or alternative approaches that avoid singleton entirely.
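To make the interaction concrete, here is a minimal self-contained variant of the guarded singleton (the per-instance _initialized flag is my own naming choice, not from the original code) showing that id now stays stable across calls:

```python
import random

class Database:
    _instance = None

    def __new__(cls):
        if cls._instance is None:
            cls._instance = super().__new__(cls)
            cls._instance._initialized = False
        return cls._instance

    def __init__(self):
        # __init__ still runs on every Database() call; the guard
        # makes the body run only once.
        if not self._initialized:
            self.id = random.randint(1, 101)
            self._initialized = True

# Both expressions now agree, because id is assigned exactly once.
print(Database().id == Database().id)  # True
print(Database() is Database())        # True: same instance either way
```

Without the guard, the first print would be False even though the instance is the same, because the second Database() call re-randomizes id between the two reads.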
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/70543564",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Why do I get this error? 'NoneType' object has no attribute 'shape' in opencv I'm working on real-time clothing detection, so I borrowed the code from GitHub: https://github.com/rajkbharali/Real-time-clothes-detection
but the last line, (H, W) = frame.shape[:2], gives the error above.
Where should I fix it?
from time import sleep
import cv2 as cv
import argparse
import sys
import numpy as np
import os.path
from glob import glob
import imutils
from imutils.video import WebcamVideoStream
from imutils.video import FPS
from google.colab import drive
drive.mount('/content/drive')
%cd /content/drive/My Drive/experiment/Yolo_mark-master/x64/Release/
Labels = []
classesFile1 = "data/obj.names";
with open(classesFile1, 'rt') as f:
Labels = f.read().rstrip('\n').split('\n')
np.random.seed(42)
COLORS = np.random.randint(0, 255, size=(len(Labels), 3), dtype="uint8")
weightsPath = "obj_4000.weights"
configPath = "obj.cfg"
net1 = cv.dnn.readNetFromDarknet(configPath, weightsPath)
net1.setPreferableBackend(cv.dnn.DNN_BACKEND_OPENCV)
net1.setPreferableTarget(cv.dnn.DNN_TARGET_CPU)
image = WebcamVideoStream(src=0).start()
fps = FPS().start()
#'/home/raj/Documents/yolov3-Helmet-Detection-master/safety.mp4'
#while fps._numFrames<100:
while True:
#for fn in glob('images/*.jpg'):
frame = image.read()
#frame = imutils.resize(frame,width=500)
(H, W) = frame.shape[:2]
A: The reason behind your error is that the frame is None (null). Sometimes the first frame captured from the webcam is None, mainly because
(1) the webcam is not ready yet (it can take an extra second or two to get ready),
or (2) the operating system does not allow your code to access the webcam.
In the first case, before you do anything on the frame you need to check whether the frame is valid or not :
while True:
frame = image.read()
if frame is not None: # add this line
(H, W) = frame.shape[:2]
In the other case, you need to check the camera settings in your operating system.
Also, for capturing webcam frames there is another method based on the VideoCapture class in OpenCV that might be easier to debug.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/64045677",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-1"
}
|
Q: Implementing channels in haskell -- Tackling the awkward squad
In the paper Tackling the awkward squad, Simon Peyton Jones has provided a "possible implementation" of a Channel.
type Channel a = (MVar (Stream a) , -- Read end
MVar (Stream a) ) -- Write end (the hole)
type Stream a = MVar (Item a)
data Item a = MkItem a (Stream a)
Now, he implements a function putChan :: Channel a -> a -> IO () like this
putChan (read, write) val
= do { new_hole <- newEmptyVar ;
old_hole <- takeMVar write ;
putMVar write new_hole ;
putMVar old_hole (MkItem val new_hole) }
The function above takes a MVar out of write, then puts an empty MVar into it.
Then it writes to the old_hole it extracted from write.
The question is, why does it write to old_hole? It has been taken out from write and its scope is limited to the current block only, then what difference does it make?
A:
The question is, why does it write to old_hole? It has been taken out from write and its scope is limited to the current block only, then what difference does it make?
Not quite. old_hole is "in scope" on the read side. You have to look at newChan for the full picture:
newChan = do {
read <- newEmptyMVar ;
write <- newEmptyMVar ;
hole <- newEmptyMVar ;
putMVar read hole ;
putMVar write hole ;
return (read,write) }
So right after calling newChan the "old_hole" from putChan is the same MVar as hole in newChan. And as channel operations progress, that old_hole is always somewhere nested in the MVars of read.
I found the design of linked list-style channels truly difficult to wrap my head around at first. The illustration from that paper does a decent job of showing the structure, but the basic idea is that readers "peel off" a layer of MVar to reveal a value, while writers insert values "at the bottom" of the pile of MVars, maintaining a pointer to the bottom-most one.
By the way this is the design used in Control.Concurrent.Chan
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/27850363",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
}
|
Q: How to use a homescreen widget button to shutdown a service? I have a button whose WidgetProvider kicks off a service with a
PendingIntent. That works just fine. How do I similarly attach an
event handler to the button so that when it is pressed a second time,
it shuts the service down? Is there an appropriate pattern to follow
for something like this?
Thanks.
A: Use a getBroadcast() PendingIntent, where the BroadcastReceiver calls stopService().
Or, use a getService() PendingIntent, where you send a command to your service that has the service call stopSelf().
Or, switch the service to an IntentService, so it shuts down automatically, if that is a better service implementation for your scenario.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/4543833",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: override error android studio I'm doing an Android development course using Android Studio, but I'm getting an @Override error in one of the examples that I tried to copy.
Here the code where I get the error
Handler handler = new Handler(){
@Override //here I get an error, that is not overriding a class
public void handlerMessage(Message msg) {
TextView JeroensText = (TextView) findViewById(R.id.JText);
JeroensText.setText("Lekker bezig!");
}
};
Does anybody know how to solve it?
A: There's no handlerMessage(Message message) method on android.os.Handler class, you should override handleMessage(Message message) method (without the 'r')
A: In both cases:
Handler handler = new Handler(new Handler.Callback () {
@Override
public boolean handleMessage(Message msg) {
TextView JeroensText = (TextView) findViewById(R.id.JText);
JeroensText.setText("Lekker bezig!");
return false;
}
});
and
Handler handler = new Handler () {
@Override
public void handleMessage(Message msg) {
TextView JeroensText = (TextView) findViewById(R.id.JText);
JeroensText.setText("Lekker bezig!");
}
};
As you can see, there is a handleMessage() method, not handlerMessage().
Hope it helps.
A: It's just a simple miss: remove the extra 'r' from your handlerMessage method.
There is no handlerMessage(Message) method on Handler. Use either
with Callback
Handler handler = new Handler(new Handler.Callback () {
@Override
public boolean handleMessage(Message msg) {
// handle message here
return true;
}
});
or
Handler handler = new Handler () {
@Override
public void handleMessage(Message msg) {
// handle message here
}
};
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/34753850",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: How to show images in dropdownlist options? I need to have a dropdownlist whose options contains text followed by a small image.
Suppose I have a dropdownlist of fruits. I want to show the options as :
Option 1 : small Image of mango then text Mango
Option 2 : small Image of orange then text Orange
.....................
Is it possible to implement in asp.net 2.0?
If yes then please help with sample code.
I don't want to use JQuery.
A: You can do that server side by just placing <asp:Image ImageUrl="some.gif" /> tags in your ASP.NET code. The browser will show them when the page is loaded.
A: You don't need to use JQuery, use CSS background-image property for every <option>.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/1720360",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: How can I upload videos into my youtube account without oAuth consent screen using YT data API v3 I want to upload local videos into my youtube account through my web app without oAuth consent screen. I am using node js as server and react js as frontend.
A: You need to understand the difference between private and public data. Public data is data that is not owned by any user and is available publicly.
Methods like Search List for example access public data available on YouTube.
Private data is data that is owned by a user. In order to upload to your YouTube account you need permission to upload to that account.
The Videos.insert method as you can see by the documentation requires that the user have consented to your application accessing their YouTube account. They must have consented to your access with one of the following scopes.
So the answer to your question is that you can't. You need to consent to authorization before your application will be allowed to upload to YouTube.
upload public videos
Please remember that all videos uploaded via the videos.insert endpoint from unverified API projects created after 28 July 2020 will be restricted to private viewing mode. To lift this restriction, each API project must undergo an audit to verify compliance with the Terms of Service. Please see the API Revision History for more details.
Your application will need to go through the application verification process before any of the videos you upload will be public.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/69416856",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: How to count the number of columns in Excel table using VBA? Dim housetypes, elements, NumRowsElements, NumColumnsHousetypes
elements = Range("c10:c19")
housetypes = Range("d9:n9")
NumRowsElements = UBound(elements)
NumColumnsHousetypes = UBound(housetypes)
MsgBox ("NumRowsElements is " & NumRowsElements & " NumColumnsHousetypes is " & NumColumnsHousetypes)
I'm not sure why the above code will show me the correct number of row headers (C10 to C19) but not return the correct number of column headers (D9 to N9)?
A: Both elements and housetypes are 2D arrays, not 1D. The first dimension corresponds to the rows, and the second corresponds to columns.
When using Ubound, if dimension (2nd parameter) is omitted, 1 is assumed.
NumRowsElements = UBound(elements, 1)
NumColumnsHousetypes = UBound(housetypes, 2)
will return the results you want.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/69978058",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Mongo Bulk Insert Spring I am working on understanding Mongo's bulk insert method. I want to insert records that have a unique index, with duplicates simply skipped over. I understand I will need an unordered bulk insert, but am having issues with the implementation. I already have logic implemented with .saveAll by mapping through and building an object. I am failing to connect the dots on where to explicitly insert into the bulk operation. Can I not pass my object, or do I need to map each key to data? In the examples I have read, they use small objects and specify each key.
List<Employee> employees =
people.getEmployees().stream()
.filter(criteria)
.map(
e -> {
return PersonObject.builder()
.firstName(person.getFirstName())
.lastName(person.getLastName())
.gender(person.getGender())
.salary(employee.getSalary())
.weight(person.getWeight())
.height(person.getHeight())
.school(Schools.SEC)
.family(familyOrgins.getFamily())
.hairColor(familyOrgins.getHairColor())
.eyeColor(familyOrgins.getEyeColor())
.city(city)
.state(state)
.country(country)
.build();
})
.collect(Collectors.toList());
result = employeesRepository.saveAll(employees);
I have tried inserting after the list of objects has been created, like below:
.build();
})
.collect(Collectors.toList());
BulkWriteResult mongoResult = bulkWrite(itemLocations, false);
I want to insert using a bulk operation and, if the last name is the same, skip that person's insertion.
A: Here is a very small program that performs an Ordered bulkWrite using Spring. You can change to unordered by changing the enum value on BulkOperations.BulkMode. This program will insert 3 records, then delete one, then update one. Because of the filter predicates in the later bulk commands I had to go with an ordered update. If you are only inserting you could easily get away with a faster unordered bulk command.
Main.java
package com.example.accessingdatamongodb;
import com.mongodb.bulk.BulkWriteResult;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.CommandLineRunner;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.data.mongodb.core.BulkOperations;
import org.springframework.data.mongodb.core.MongoOperations;
import org.springframework.data.mongodb.core.query.Criteria;
import org.springframework.data.mongodb.core.query.Query;
import org.springframework.data.mongodb.core.query.Update;
@SpringBootApplication
public class Main implements CommandLineRunner {
@Autowired
private MongoOperations mongoTemplate;
public static void main(String[] args) {
SpringApplication.run(Main.class, args);
}
@Override
public void run(String... args) throws Exception {
// CLEAR THE COLLECTION
mongoTemplate.remove(new Query(), Customer.class);
BulkOperations bulkOps = mongoTemplate.bulkOps(BulkOperations.BulkMode.ORDERED, Customer.class);
for (Integer i = 0; i < 3; i++)
{
Customer customer = new Customer();
customer.firstName = "Barry";
customer.id = i.toString();
bulkOps.insert(customer);
}
bulkOps.remove(new Query().addCriteria(new Criteria("i").is(1)));
bulkOps.updateOne(new Query().addCriteria(new Criteria("i").is(2)), new Update().set("name", "Barry"));
BulkWriteResult results = bulkOps.execute();
}
}
For completeness here are the other files in the project...
application.properties
spring.data.mongodb.uri=mongodb://localhost:27017/springtestBulkWrite
spring.data.mongodb.database=springtestBulkWrite
pom.xml
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<parent>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-parent</artifactId>
<version>2.6.4</version>
<relativePath/> <!-- lookup parent from repository -->
</parent>
<groupId>com.example</groupId>
<artifactId>accessing-data-mongodb</artifactId>
<version>0.0.1-SNAPSHOT</version>
<name>accessing-data-mongodb</name>
<description>Demo project for Spring Boot</description>
<properties>
<java.version>1.8</java.version>
</properties>
<dependencies>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-data-mongodb</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-test</artifactId>
<scope>test</scope>
<exclusions>
<exclusion>
<groupId>org.junit.vintage</groupId>
<artifactId>junit-vintage-engine</artifactId>
</exclusion>
</exclusions>
</dependency>
<dependency>
<groupId>org.mongodb</groupId>
<artifactId>mongodb-driver-sync</artifactId>
<version>4.5.0</version>
</dependency>
<dependency>
<groupId>org.mongodb</groupId>
<artifactId>mongodb-driver-core</artifactId>
<version>4.5.0</version>
</dependency>
<dependency>
<groupId>org.mongodb</groupId>
<artifactId>bson</artifactId>
<version>4.5.0</version>
</dependency>
</dependencies>
<build>
<plugins>
<plugin>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-maven-plugin</artifactId>
</plugin>
</plugins>
</build>
</project>
SimpleMongoConfig.java
import com.mongodb.client.MongoClient;
import com.mongodb.ConnectionString;
import com.mongodb.MongoClientSettings;
import com.mongodb.client.MongoClients;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.mongodb.core.MongoTemplate;
@Configuration
public class SimpleMongoConfig {
@Value("${spring.data.mongodb.database}")
private String databaseName;
@Value("${spring.data.mongodb.uri}")
private String uri;
@Bean
public MongoClient mongo() {
ConnectionString connectionString = new ConnectionString(uri);
MongoClientSettings mongoClientSettings = MongoClientSettings.builder()
.applyConnectionString(connectionString)
.build();
return MongoClients.create(mongoClientSettings);
}
@Bean
public MongoTemplate mongoTemplate() throws Exception {
return new MongoTemplate(mongo(), databaseName);
}
}
CustomerRepository.java (not really used in this trivial example)
package com.example.accessingdatamongodb;
import java.util.List;
import org.springframework.data.mongodb.repository.MongoRepository;
public interface CustomerRepository extends MongoRepository<Customer, String> {
public Customer findByFirstName(String firstName);
public List<Customer> findByLastName(String lastName);
public void deleteByLastName(String lastName);
}
Customer.java
package com.example.accessingdatamongodb;
import org.springframework.data.annotation.Id;
public class Customer {
@Id
public String id;
public String firstName;
public String lastName;
public Customer() {}
public Customer(String firstName, String lastName) {
this.firstName = firstName;
this.lastName = lastName;
}
@Override
public String toString() {
return String.format(
"Customer[id=%s, firstName='%s', lastName='%s']",
id, firstName, lastName);
}
}
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/71593554",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Mod Rewrite Clean URL's How can I transform:
*
*http://example.com/about.php
*http://example.com/hello/deep.php
*http://example.com/hello/world/deeper.php
Dynamically, into URLs like:
*
*http://example.com/about/
*http://example.com/hello/deep/
*http://example.com/hello/world/deeper/
A: see http://gist.github.com/22877
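Since the linked gist may disappear, here is a hedged sketch of the kind of `.htaccess` rules commonly used for this (assumptions: Apache with mod_rewrite enabled, rules placed in the document root; this is not taken from the gist itself):

```apache
RewriteEngine On

# Externally 301-redirect direct requests for /path.php to /path/
RewriteCond %{THE_REQUEST} \s/+(.+?)\.php[\s?]
RewriteRule ^ /%1/ [R=301,L]

# Internally serve /path/ from /path.php when that file exists
RewriteCond %{DOCUMENT_ROOT}/$1.php -f
RewriteRule ^(.+?)/$ /$1.php [L]
```

The two halves work together: the redirect keeps old `.php` links pointing at the clean URL, while the internal rewrite maps the clean URL back to the real file without exposing it.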
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/1982825",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: When classifying images to folders Error Comes Failed to get device properties, error code: 30 Keras examples
data_226000_227000.h5_286.png
D:/Icorve Project/Learn Neural Networks/Looking Straight/data_226000_227000.h5_286.png
(531, 413)
(531, 413, 3)
(180, 140, 1)
[INFO] loading network...
2020-01-07 09:24:19.310174: E tensorflow/core/grappler/clusters/utils.cc:87] Failed to get device properties, error code: 30
2020-01-07 09:26:35.773432: E tensorflow/core/grappler/clusters/utils.cc:87] Failed to get device properties, error code: 30
[INFO] classifying image...
examples
data_226000_227000.h5_288.png
D:/Icorve Project/Learn Neural Networks/Looking Straight/data_226000_227000.h5_288.png
(531, 413)
(531, 413, 3)
(180, 140, 1)
[INFO] loading network...
2020-01-07 09:30:36.892513: E tensorflow/core/grappler/clusters/utils.cc:87] Failed to get device properties, error code: 30
2020-01-07 09:32:51.847629: E tensorflow/core/grappler/clusters/utils.cc:87] Failed to get device properties, error code: 30
[INFO] classifying image...
Some images are classified correctly. What is this error?
A: If you are using two GPU's let's say Intel GPU and NVIDIA GPU, then please choose graphic driver to NVIDIA GPU by opening NVIDIA control Panel.
In case if you are using only one driver then please update NVIDIA GPU graphic driver.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/59622030",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Matplotlib: How to plot with only NAN values? How can I plot a timeseries with timestamps on the x-axis and "data" that contains only NAN on the y-axis using matplotlib?
The goal is to get an empty plot with timestamps on the x-axis and an empty y-axis. Why do I want to have an empty plot? Well, I am creating plots in a loop for every day of the year and it can happen that the timeseries has data gaps (no, interpolation of data gaps makes no sense).
TIMESTAMP
2019-10-21 12:00:00 NaN
2019-10-21 12:01:00 NaN
2019-10-21 12:02:00 NaN
2019-10-21 12:03:00 NaN
2019-10-21 12:04:00 NaN
..
2019-10-22 11:55:00 NaN
2019-10-22 11:56:00 NaN
2019-10-22 11:57:00 NaN
2019-10-22 11:58:00 NaN
2019-10-22 11:59:00 NaN
Freq: T, Name: df_plot.prec, Length: 1440, dtype: float64
DatetimeIndex(['2019-10-21 12:00:00', '2019-10-21 12:01:00',
'2019-10-21 12:02:00', '2019-10-21 12:03:00',
'2019-10-21 12:04:00', '2019-10-21 12:05:00',
'2019-10-21 12:06:00', '2019-10-21 12:07:00',
'2019-10-21 12:08:00', '2019-10-21 12:09:00',
...
'2019-10-22 11:50:00', '2019-10-22 11:51:00',
'2019-10-22 11:52:00', '2019-10-22 11:53:00',
'2019-10-22 11:54:00', '2019-10-22 11:55:00',
'2019-10-22 11:56:00', '2019-10-22 11:57:00',
'2019-10-22 11:58:00', '2019-10-22 11:59:00'],
dtype='datetime64[ns]', name='TIMESTAMP', length=1440, freq='T')
plt.plot(df_plot.index, df_plot.prec)
results in:
ValueError: view limit minimum -0.001 is less than 1 and is an invalid Matplotlib date value. This often happens if you pass a non-datetime value to an axis that has datetime units
timeseries is of type datetime64
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/60952177",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: How to get reproducible result when running Keras with Tensorflow backend Every time I run an LSTM network with Keras in a Jupyter notebook, I get a different result. I have googled a lot and tried some different solutions, but none of them work. Here are some solutions I tried:
*
*set numpy random seed
random_seed=2017
from numpy.random import seed
seed(random_seed)
*set tensorflow random seed
from tensorflow import set_random_seed
set_random_seed(random_seed)
*set build-in random seed
import random
random.seed(random_seed)
*set PYTHONHASHSEED
import os
os.environ['PYTHONHASHSEED'] = '0'
*add PYTHONHASHSEED in jupyter notebook kernel.json
{
"language": "python",
"display_name": "Python 3",
"env": {"PYTHONHASHSEED": "0"},
"argv": [
"python",
"-m",
"ipykernel_launcher",
"-f",
"{connection_file}"
]
}
and the version of my env is:
Keras: 2.0.6
Tensorflow: 1.2.1
CPU or GPU: CPU
and this is my code:
model = Sequential()
model.add(LSTM(16, input_shape=(time_steps,nb_features), return_sequences=True))
model.add(LSTM(16, input_shape=(time_steps,nb_features), return_sequences=False))
model.add(Dense(8,activation='relu'))
model.add(Dense(1,activation='linear'))
model.compile(loss='mse',optimizer='adam')
A: The seed is definitely missing from your model definition. A detailed documentation can be found here: https://keras.io/initializers/.
In essence your layers use random variables as their basis for their parameters. Therefore you get different outputs every time.
One example:
model.add(Dense(1, activation='linear',
kernel_initializer=keras.initializers.RandomNormal(seed=1337),
        bias_initializer=keras.initializers.Constant(value=0.1)))
Keras themselves have a section about getting reproducible results in their FAQ section: (https://keras.io/getting-started/faq/#how-can-i-obtain-reproducible-results-using-keras-during-development). They have the following code snippet to produce reproducible results:
import numpy as np
import tensorflow as tf
import random as rn
# The below is necessary in Python 3.2.3 onwards to
# have reproducible behavior for certain hash-based operations.
# See these references for further details:
# https://docs.python.org/3.4/using/cmdline.html#envvar-PYTHONHASHSEED
# https://github.com/fchollet/keras/issues/2280#issuecomment-306959926
import os
os.environ['PYTHONHASHSEED'] = '0'
# The below is necessary for starting Numpy generated random numbers
# in a well-defined initial state.
np.random.seed(42)
# The below is necessary for starting core Python generated random numbers
# in a well-defined state.
rn.seed(12345)
# Force TensorFlow to use single thread.
# Multiple threads are a potential source of
# non-reproducible results.
# For further details, see: https://stackoverflow.com/questions/42022950/which-seeds-have-to-be-set-where-to-realize-100-reproducibility-of-training-res
session_conf = tf.ConfigProto(intra_op_parallelism_threads=1, inter_op_parallelism_threads=1)
from keras import backend as K
# The below tf.set_random_seed() will make random number generation
# in the TensorFlow backend have a well-defined initial state.
# For further details, see: https://www.tensorflow.org/api_docs/python/tf/set_random_seed
tf.set_random_seed(1234)
sess = tf.Session(graph=tf.get_default_graph(), config=session_conf)
K.set_session(sess)
A: Keras + Tensorflow.
Step 1, disable GPU.
import os
os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"
os.environ["CUDA_VISIBLE_DEVICES"] = ""
Step 2, seed those libraries which are included in your code, say "tensorflow, numpy, random".
import tensorflow as tf
import numpy as np
import random as rn
sd = 1 # Here sd means seed.
np.random.seed(sd)
rn.seed(sd)
os.environ['PYTHONHASHSEED']=str(sd)
from keras import backend as K
config = tf.ConfigProto(intra_op_parallelism_threads=1,inter_op_parallelism_threads=1)
tf.set_random_seed(sd)
sess = tf.Session(graph=tf.get_default_graph(), config=config)
K.set_session(sess)
Make sure these two pieces of code are included at the start of your code, then the result will be reproducible.
A: I resolved this issue by adding os.environ['TF_DETERMINISTIC_OPS'] = '1'
Here an example:
import os
os.environ['TF_DETERMINISTIC_OPS'] = '1'
#rest of the code
#TensorFlow version 2.3.1
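The core idea behind all of the snippets above can be demonstrated without TensorFlow at all: a pseudo-random generator re-seeded with the same value always replays the same sequence. A minimal stdlib-only sketch (`draw` is a hypothetical helper, not part of Keras):

```python
import random

def draw(seed):
    # Re-seed the generator, then draw: identical seeds give identical sequences
    random.seed(seed)
    return [random.random() for _ in range(3)]

assert draw(42) == draw(42)  # same seed -> same results on every run
assert draw(42) != draw(43)  # different seed -> different results
```

The Keras recipes above apply the same principle to every source of randomness in the stack (Python, NumPy, TensorFlow, hashing, threading) at once.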
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/45230448",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "19"
}
|
Q: Tensorflow keras layers.Multiply() TypeError: 'NoneType' object is not subscriptable I'm a beginner with Keras. I'm trying to convert the following R script into Python:
library(keras)
# we fit j=0 with feature matrix dat.X, claims dat$C1 and volumes dat$C0
dat.Y <- as.matrix(dat$C1 / sqrt(dat$C0)) # observed responses
dat.W <- as.matrix(sqrt(dat$C0)) # volumes used as offsets
q <- 20 # number of hidden neurons
# definition of neural network
features <- layer_input(shape = c(ncol(dat.X)))
net <- features %>%
  layer_dense(units = q, activation = 'tanh') %>%
  layer_dense(units = 1, activation = k_exp)
volumes <- layer_input(shape = c(1))
offset <- volumes %>%
  layer_dense(units = 1, activation = 'linear', use_bias = FALSE, trainable = FALSE,
              weights = list(array(1, dim = c(1, 1))))
merged <- list(net, offset) %>%
  layer_multiply()
Below is my Python script:
input_shape = SelectedMatrix_Tr.shape[1]
net = Sequential ([
Dense(20, activation="tanh", input_shape=(input_shape,), name="hidden_layer_1"),
Dense(1, activation=tf.keras.activations.exponential, name="output_layer")
])
offset = Sequential([
Dense(
units=1,
activation = 'linear',
input_shape=(None,1),
#input_dim=1,
use_bias = True,
trainable = False,
kernel_initializer=tf.constant_initializer(1.)
)
])
multiplied = tf.keras.layers.Multiply()([net,offset])
The error arrives at the last line of Multiply. Below the error message:
TypeError Traceback (most recent call
last) in
----> 1 multiplied = tf.keras.layers.Multiply()([model,offset])
~\AppData\Roaming\Python\Python37\site-packages\tensorflow\python\keras\engine\base_layer.py
in call(self, *args, **kwargs) 1021 with
ops.name_scope_v2(name_scope): 1022 if not self.built:
-> 1023 self._maybe_build(inputs) 1024 1025 if self._autocast:
~\AppData\Roaming\Python\Python37\site-packages\tensorflow\python\keras\engine\base_layer.py
in _maybe_build(self, inputs) 2623 # operations. 2624
with tf_utils.maybe_init_scope(self):
-> 2625 self.build(input_shapes) # pylint:disable=not-callable 2626 # We must set also ensure
that the layer is marked as built, and the build 2627 # shape
is stored since user defined build functions may not be calling
~\AppData\Roaming\Python\Python37\site-packages\tensorflow\python\keras\utils\tf_utils.py
in wrapper(instance, input_shape)
268 if input_shape is not None:
269 input_shape = convert_shapes(input_shape, to_tuples=True)
--> 270 output_shape = fn(instance, input_shape)
271 # Return shapes from fn as TensorShapes.
272 if output_shape is not None:
~\AppData\Roaming\Python\Python37\site-packages\tensorflow\python\keras\layers\merge.py
in build(self, input_shape)
86 def build(self, input_shape):
87 # Used purely for shape validation.
---> 88 if not isinstance(input_shape[0], tuple):
89 raise ValueError('A merge layer should be called on a list of inputs.')
90 if len(input_shape) < 2:
TypeError: 'NoneType' object is not subscriptable
Any suggestions ?
Thanks !
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/67992740",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Using One Model inside another model in Django I am developing an E-commerce application with Django.
So what I was thinking is to keep the categories of the Product in a separate model and list them in another model using the choices option of a CharField.
So Here is the code for this
This is the model for getting the Categories from the user
class ProjektCat(models.Model):
id = models.AutoField(primary_key=True)
Option_Name = models.CharField(max_length=50)
Option_Number = models.IntegerField()
Number_Visits = models.IntegerField(default=0)
def __str__(self):
return f'{self.Option_Name}'
and here is the code to list those categories as a dropdown in the CharField
class Software_And_Service(models.Model):
id = models.AutoField(primary_key=True)
Product_Name = models.CharField(max_length=100, default='')
projectKats = ProjektCat.objects.all()
choice = []
for i in projectKats:
option = (i.Option_Number, i.Option_Name)
choice.append(option)
Cateogary = models.CharField(
max_length=256, choices=choice)
Price = models.IntegerField(default=0)
Description = models.TextField(default='', max_length=5000)
pub_date = models.DateField(auto_now_add=True, blank=True, null=True)
image = models.URLField(default='')
linkToDownload = models.URLField(default='')
def __str__(self):
return f'Projekt : {self.Product_Name}'
But it's showing me an error that there is no such table app_name.projektcat.
Is there any solution for this?
A: That's not how you do this. First, correctly define the projectKats field, i.e.:
# You can set max_length as per your choice
projectKats = models.CharField(max_length=50)
You need to do this logic in django forms rather than django models.
So this is how you can do it.
forms.py
from django import forms
from .models import ProjektCat, Software_And_Service
def get_category_choices():
    return [(name, name) for name in ProjektCat.objects.values_list('Option_Name', flat=True).distinct()]
class SoftwareAndServiceForm(forms.ModelForm):
projectKats = forms.ChoiceField(choices=get_category_choices)
class Meta:
model = Software_And_Service
fields = [
'projectKats',
# whatever fields you want
]
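The list-of-pairs shape that the form's `ChoiceField` expects can be illustrated without Django at all (a sketch; `build_choices` is a hypothetical helper, not part of the answer's code):

```python
def build_choices(names):
    # Django choices are (value, label) pairs; dedupe while keeping order
    return [(n, n) for n in dict.fromkeys(names)]

assert build_choices(['Web', 'ML', 'Web']) == [('Web', 'Web'), ('ML', 'ML')]
```

Building this list inside the form (rather than at model-class definition time, as in the question) matters because the form re-evaluates it on each instantiation, picking up newly added categories.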
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/68411280",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Can a '.exe' file made through python run on another device with python language not installed? See, I made a python program with about 7 libraries being used and I converted it into a '.exe' file. So, I wondered if I could run this same file on another device with python not installed.
Is it possible?
Please let me know. Thanks.
A: Depends on how you did it. Pyinstaller is a common one that allows you to do that.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/65235193",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Compiling the "at" command for my mips toolchain While cross-compiling with the following command:
./configure CC=/data/designs/pweth-s-t1501/trunk/software/embedded/ar9341/linux/build/gcc-4.3.3/build_mips/staging_dir/usr/bin/mips-linux-uclibc-gcc --host=mips-linux
During the course of the compilation I am getting this kind of an error.
checking Trying to compile a trivial ANSI C program... configure: error: Could not compile and run even a trivial ANSI C program - check CC.
My question is: doesn't the configure script know that it is being cross-compiled, so that it is natural that the compiled test binary will not run on my build machine?
Can anybody please tell me a workaround for this problem?
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/17895621",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: SqlDataReader returns last row I am developing an online examination system, but I'm having difficulties trying to read questions from the database to be displayed on an aspx page on load. Please help me out... I only get the last row displayed, not all the rows!
I know that the values get overridden, but how do I solve it?
SqlDataReader reader = cmd.ExecuteReader();
while (reader.Read())
{
LabelRadio1.Questions = reader["questionTitle"].ToString();
LabelRadio1.Answers = reader["Answer1"].ToString();
LabelRadio1.Answers = reader["Answer2"].ToString();
LabelRadio1.Answers = reader["Answer3"].ToString();
LabelRadio1.Answers = reader["Answer4"].ToString();
LabelRadio1.Answers = reader["Answer5"].ToString();
}
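The underlying problem is language-agnostic: assigning to the same target on every loop iteration keeps only the last row, so the values need to be accumulated into a collection instead. A sketch of the broken versus fixed pattern (Python used for illustration; the same applies to the C# above):

```python
# Broken pattern: each iteration overwrites the same variable,
# so only the last row survives the loop.
rows_broken = None
for row in [{'q': 'Q1'}, {'q': 'Q2'}, {'q': 'Q3'}]:
    rows_broken = row['q']        # overwrites on every iteration

# Fixed pattern: accumulate each row into a collection.
rows_fixed = []
for row in [{'q': 'Q1'}, {'q': 'Q2'}, {'q': 'Q3'}]:
    rows_fixed.append(row['q'])

assert rows_broken == 'Q3'
assert rows_fixed == ['Q1', 'Q2', 'Q3']
```

In the C# code this means appending each question (and its answers) to a list or binding a data source per row, rather than reassigning `LabelRadio1` inside the `while (reader.Read())` loop.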
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/36678967",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Add padding based on partial sum I have four given variables:
*
*group size
*total of groups
*partial sum
*1-D tensor
and I want to add zeros when the sum within a group reaches the partial sum. For example:
groupsize = 4
totalgroups = 3
partialsum = 15
d1tensor = torch.tensor([ 3, 12, 5, 5, 5, 4, 11])
The expected result is:
[ 3, 12, 0, 0, 5, 5, 5, 0, 4, 11, 0, 0]
I have no clue how can I achieve that in pure pytorch. In python it would be something like this:
target = [0]*(groupsize*totalgroups)
cursor = 0
current_count = 0
d1tensor = [ 3, 12, 5, 5, 5, 4, 11]
for idx, ele in enumerate(target):
subgroup_start = (idx//groupsize) *groupsize
subgroup_end = subgroup_start + groupsize
if sum(target[subgroup_start:subgroup_end]) < partialsum:
target[idx] = d1tensor[cursor]
cursor +=1
Can anyone help me with that? I have already googled it but couldn't find anything.
A: Some logic, Numpy and list comprehensions are sufficient here.
I will break it down step by step, you can make it slimmer and prettier afterwards:
import numpy as np
my_val = 15
block_size = 4
total_groups = 3
d1 = [3, 12, 5, 5, 5, 4, 11]
d2 = np.cumsum(d1)
d3 = d2 % my_val == 0 #find where sum of elements is 15 or multiple
split_points= [i+1 for i, x in enumerate(d3) if x] # find index where cumsum == my_val
#### Option 1
split_array = np.split(d1, split_points, axis=0)
padded_arrays = [np.pad(array, (0, block_size - len(array)), mode='constant') for array in split_array] #pad arrays
padded_d1 = np.concatenate(padded_arrays[:total_groups]) #put them together, discard extra group if present
#### Option 2
split_points = [el for el in split_points if el <len(d1)] #make sure we are not splitting on the last element of d1
split_array = np.split(d1, split_points, axis=0)
padded_arrays = [np.pad(array, (0, block_size - len(array)), mode='constant') for array in split_array] #pad arrays
padded_d1 = np.concatenate(padded_arrays)
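For comparison, the same grouping can also be written in dependency-free Python. This is only an illustrative sketch (the `pad_groups` helper is hypothetical, and it is not the vectorized PyTorch solution the question asks for):

```python
def pad_groups(values, group_size, total_groups, partial_sum):
    # Walk the flat list, filling each group until adding the next value
    # would exceed partial_sum (or the group is full), then zero-pad it.
    result, cursor = [], 0
    for _ in range(total_groups):
        group = []
        while (cursor < len(values) and len(group) < group_size
               and sum(group) + values[cursor] <= partial_sum):
            group.append(values[cursor])
            cursor += 1
        result.extend(group + [0] * (group_size - len(group)))
    return result

print(pad_groups([3, 12, 5, 5, 5, 4, 11], 4, 3, 15))
# -> [3, 12, 0, 0, 5, 5, 5, 0, 4, 11, 0, 0]
```

It reproduces the expected output from the question and can serve as a reference when checking the NumPy version above.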
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/72477827",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Core plot. Labels overlapped Hi, I have a Core Plot graphic. By default it looks fine, but if I resize it, the labels on the x axis are overlapped.
I have dates on the x axis; how can I change days to weeks? Please see the screenshot.
Is there some event which happens on resize?
A: The default axis labeling policy puts the ticks a constant distance apart. You might try the CPTAxisLabelingPolicyAutomatic labeling policy. This policy will automatically adjust the tick spacing as the plot range changes.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/6770693",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Should bower_components be included into the project? I'm working on an ASP.NET MVC 5 project that was initially using Nuget for static content like Bootstrap, jQuery etc. I have now switched to bower as it is the way to go and is also integrated with visual studio.
I noticed that when installing bower packages, they are not automatically included into the project. So I have left them out for now but is this a good idea? Should bower packages be included or not? It doesn't make any difference to access because in my BundleConfig.cs file I'm still able to link these files to aliases as before.
A: bower will download the entire package using Git.
Your package must be publically available at a Git endpoint (e.g., GitHub). Remember to push your Git tags!
This means that bower will actually download the entire repository/release for you to use in your ASP project. I've tested this with bootstrap and jQuery. These repositories can contain many many files so including them into the .csproj file will clutter your project file and may not be ideal.
This can be compared to using nuget packages where we are only referencing external packages in the packages.config file but we do not include the dll and other assemblies into the project. This leads me to think that bower_components should indeed not be included in the project; this makes no difference to usage as you can still reference the packages and use the files.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/44023808",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: How can I shift column values forward if the previous value is in a list of excluded values? I have some TSQL code that produces a de-normalized flat file from nicely organized relational tables. The code completes quickly, and the data isn't overwhelming, so chances are any suggestions would help. I don't have to worry much about performance because this process is only intended to be run 1 time a month. I have some wiggle room in that respect.
The source data, for example's sake, is laid out like this: One person (table 1) can have many incidents (table 2). Each incident can have many codes tied to it (table 3). Each code has an ordered sequence. So after flattening this out, one row in the extract file may look like this:
Name IncidentId Code1 Code2 Code3 Code4
Sue Ellen Crandell 1991 abc1 def1 xyz0 888
These de-normalized ordered code columns could potentially go out to over 50. The problem is, that there is a new requirement that if one of the ordered code columns has a value that's in the list of exclusions, then the following ordered code column values should be shifted forward one position. This means that if def1 was in the exclusion list, the row should look like this:
Name IncidentId Code1 Code2 Code3 Code4
Sue Ellen Crandell 1991 abc1 xyz0 888 <empty string>
Before I fetch additional relational data and export the results to the file, I use dynamic T-SQL to de-normalize these ordered code values into a temp table. Due to not wanting to mess with the dynamic T-SQL, and probable limitations with being able to use conditionals to shift the columns during that part of the process, I'm thinking that the easiest place to put the exclusion list evaluation would be after the ordered code values make it into the temp table.
If I have a temp table that looks like the first data row above, how can I
*
*Check each column value.
*Remove the value if it is included in the exclusion list
*Conditionally shift values forward when encountering a value in an exclusion list, as shown in the row examples above?
The exclusion list is just a handful of static values that I can either dump into a temp table or use with an IN operator. I'm guessing that a CTE might be needed, but the recursion logic isn't clear to me.
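The per-row shifting logic itself is straightforward; before turning to the set-based T-SQL, here is a plain-Python sketch of steps 1–3 (illustrative only; `shift_codes` is a hypothetical helper):

```python
def shift_codes(codes, excluded, empty=''):
    # Keep non-excluded codes in their original order,
    # then pad the tail with empty strings to the original width.
    kept = [c for c in codes if c not in excluded]
    return kept + [empty] * (len(codes) - len(kept))

row = ['abc1', 'def1', 'xyz0', '888']
assert shift_codes(row, {'def1'}) == ['abc1', 'xyz0', '888', '']
```

Filtering-then-padding is exactly what the unpivot/renumber/pivot approach in the answer does within the database.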
A: First create a CTE that unpivots the table so that each code is on a separate row:
with cte(Name, IncidentId, CodeName, Code)
as(
select Name, IncidentId, CodeName, Code
from Incident i
unpivot(Code for CodeName in (Code1, Code2, Code3, Code4)) unpvt
)
Now you do an outer join on the CTE to itself, filtering out the excluded codes. This gives you one row for each Name-Incident-Code tuple, but you have null values in the rows where the code was excluded (you need the null rows to maintain the proper count of codes).
Select t1.Name, t1.IncidentId, isnull(t2.Code, '') Code,
ROW_NUMBER() over(partition by t1.Name, t1.IncidentId order by isnull(t2.CodeName, 'zzz')) CodeNumber
from cte t1
left outer join cte t2 on t1.Name = t2.Name and
t1.IncidentId = t2.IncidentId and
t1.Code = t2.Code and
not exists(select 1 from Exclude e where e.Code = t2.Code)
The ROW_NUMBER() here will create the new CodeNumber. The order by isnull(t2.CodeName, 'zzz') pushes the null rows to the end so that the rows that have valid codes get numbered first (because "zzz" is greater than "Code-whatever-").
Now you just need to pivot the previous query back so that the codes become columns again:
select Name, IncidentId, [1] Code1, [2] Code2, [3] as Code3, [4] as Code4
from
(
Select t1.Name, t1.IncidentId, isnull(t2.Code, '') Code, ROW_NUMBER() over(partition by t1.Name, t1.IncidentId order by isnull(t2.CodeName, 'zzz')) CodeNumber
from cte t1
left outer join cte t2 on t1.Name = t2.Name and t1.IncidentId = t2.IncidentId and t1.Code = t2.Code and not exists(select 1 from Exclude e where e.Code = t2.Code)
) x
pivot(max(Code) for CodeNumber in ([1], [2], [3], [4])
) as pvt
SQL Fiddle
Update
There are a couple of problems with the code above. First, when I create the CodeNumber with ROW_NUMBER(), I am sorting by CodeName. This breaks down after 9 code columns because they no longer sort correctly (they get sorted alphabetically instead of numerically). So I need to pull the code number out in the CTE so I can use it to sort by later:
with cte(Name, IncidentId, CodeName, CodeNumber, Code)
as(
select Name, IncidentId, CodeName, convert(int, SUBSTRING(CodeName, 5, len(CodeName))), Code
from Incident i
unpivot(Code for CodeName in (Code1, Code2, Code3, Code4, Code5, Code6, Code7, Code8, Code9, Code10)) unpvt
)
Now the rest of the query looks like this:
select Name, IncidentId, [1] Code1, [2] Code2, [3] as Code3, [4] as Code4, [5] as Code5, [6] as Code6, [7] as Code7, [8] as Code8, [9] as Code9, [10] as Code10
from
(
Select t1.Name, t1.IncidentId, isnull(t2.Code, '') Code, ROW_NUMBER() over(partition by t1.Name, t1.IncidentId order by isnull(t2.CodeNumber, 999)) NewCodeNumber
from cte t1
left outer join cte t2 on t1.Name = t2.Name and t1.IncidentId = t2.IncidentId and t1.Code = t2.Code and not exists(select 1 from Exclude e where e.Code = t2.Code)
) x
pivot(max(Code) for NewCodeNumber in ([1], [2], [3], [4], [5], [6], [7], [8], [9], [10])
) as pvt
Note that since I now have a column called CodeNumber in the CTE, I am calling the newly generated number "NewCodeNumber". Also, I am ordering by t2.CodeNumber instead of t1.Code.
Updated SQL Fiddle.
Update
Regarding the question in your comment, you're essentially asking about unpivoting multiple columns, which is not as straightforward as unpivoting a single column. One way to accomplish it is to unpivot the code and the codedate separately:
with cteCode(Name, IncidentId, CodeName, CodeNumber, Code)
as(
select Name, IncidentId, CodeName, convert(int, SUBSTRING(CodeName, 5, len(CodeName))), Code
from Incident i
unpivot(Code for CodeName in (Code1, Code2, Code3, Code4, Code5, Code6, Code7, Code8, Code9, Code10)) unpvt
), cteCodeDate(Name, IncidentId, CodeName, CodeNumber, CodeDate)
as(
select Name, IncidentId, CodeName, convert(int, SUBSTRING(CodeName, 9, len(CodeName))), CodeDate
from Incident i
unpivot(CodeDate for CodeName in (CodeDate1, CodeDate2, CodeDate3, CodeDate4, CodeDate5, CodeDate6, CodeDate7, CodeDate8, CodeDate9, CodeDate10)) unpvt
)
and then join them back together:
Select t1.Name, t1.IncidentId, isnull(t2.Code, '') Code, ROW_NUMBER() over(partition by t1.Name, t1.IncidentId order by isnull(t2.CodeNumber, 999)) NewCodeNumber, t3.CodeDate
from cteCode t1
join cteCodeDate t3 on t3.Name = t1.Name and t3.IncidentId = t1.IncidentId and t3.CodeNumber = t1.CodeNumber
left outer join cteCode t2 on t1.Name = t2.Name and t1.IncidentId = t2.IncidentId and t1.Code = t2.Code and not exists(select 1 from Exclude e where e.Code = t2.Code)
Pivoting multiple columns isn't as easy as a single column either, so I went a different route to get the final result:
select Name, IncidentId,
MAX(case when newCodeNumber = 1 then Code end) Code1,
MAX(case when newCodeNumber = 1 then CodeDate end) CodeDate1,
MAX(case when newCodeNumber = 2 then Code end) Code2,
MAX(case when newCodeNumber = 2 then CodeDate end) CodeDate2,
MAX(case when newCodeNumber = 3 then Code end) Code3,
MAX(case when newCodeNumber = 3 then CodeDate end) CodeDate3,
MAX(case when newCodeNumber = 4 then Code end) Code4,
MAX(case when newCodeNumber = 4 then CodeDate end) CodeDate4,
MAX(case when newCodeNumber = 5 then Code end) Code5,
MAX(case when newCodeNumber = 5 then CodeDate end) CodeDate5,
MAX(case when newCodeNumber = 6 then Code end) Code6,
MAX(case when newCodeNumber = 6 then CodeDate end) CodeDate6,
MAX(case when newCodeNumber = 7 then Code end) Code7,
MAX(case when newCodeNumber = 7 then CodeDate end) CodeDate7,
MAX(case when newCodeNumber = 8 then Code end) Code8,
MAX(case when newCodeNumber = 8 then CodeDate end) CodeDate8,
MAX(case when newCodeNumber = 9 then Code end) Code9,
MAX(case when newCodeNumber = 9 then CodeDate end) CodeDate9,
MAX(case when newCodeNumber = 10 then Code end) Code10,
MAX(case when newCodeNumber = 10 then CodeDate end) CodeDate10
from
(
Select t1.Name, t1.IncidentId, isnull(t2.Code, '') Code, ROW_NUMBER() over(partition by t1.Name, t1.IncidentId order by isnull(t2.CodeNumber, 999)) NewCodeNumber, t3.CodeDate
from cteCode t1
join cteCodeDate t3 on t3.Name = t1.Name and t3.IncidentId = t1.IncidentId and t3.CodeNumber = t1.CodeNumber
left outer join cteCode t2 on t1.Name = t2.Name and t1.IncidentId = t2.IncidentId and t1.Code = t2.Code and not exists(select 1 from Exclude e where e.Code = t2.Code)
) x
group by Name, IncidentId
SQL Fiddle
A: This is too long for a comment.
This is best handled with dynamic SQL. Moving things over from column to column to handle exclusions is cumbersome, at best. It would end up being some variant on:
if code1 is not excluded then code1
else if code2 is not excluded then code2
else if code3 is not excluded then code3
else code4 is not excluded then code4 as code1
if code1 is not excluded
if code2 is not excluded then code2
else if code3 is not excluded then code3
else if code4 is not excluded then code4
and so on, and so on and so on
Instead, you probably have a place where you can add something like this to the dynamic SQL:
where not exists (select 1 from ExcludedCodes ec where ec.code <> the.code)
And you will eliminate them before the pivot.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/24040050",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Dynamic dictionary feature in NMT V3 I have heard that it should be possible to add a dynamic dictionary, holding a set of terms or a word list, when you train your model. This should be possible in the Translator Hub today.
How can I do this using V3 or NMT at portal.customtranslator.azure.ai in a Custom Translator project?
/Mats
A: At this time the dictionary feature is not available in the NMT custom translator but it is a feature we are working on and will release in a future update.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/50948081",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Wordpress/Woocommerce Sessions Good afternoon, I have built a website for my client using WordPress/WooCommerce. The site works great, but one problem is that the basket/session doesn't clear after the order is finished. It looks to me that WooCommerce doesn't even have the feature as standard. Working with the WooCommerce standard files, what's the best way to kill a session after the checkout process is complete?
Is there a way around this?
A: Your cart should be clearing after checkout so something else may be wrong.
You could create a function with empty_cart() in it that is triggered when payment is complete "just in case".
See: http://docs.woothemes.com/wc-apidocs/class-WC_Cart.html
add_filter( 'woocommerce_payment_complete_order_status', 'pg_woocommerce_payment_complete_order_status', 10, 2 );
function pg_woocommerce_payment_complete_order_status ( $order_status, $order_id ) {
    global $woocommerce;
    $woocommerce->cart->empty_cart();
    return $order_status; // a filter callback must return its value
}
A: You can user this hook :
function custom_empty_cart( $order_id, $data ) {
// delete current cart item here
}
add_action( 'woocommerce_checkout_order_processed', 'custom_empty_cart', 11, 2 );
Or you can check the WooCommerce hook list from here.
A: The cart should empty as this feature is standard, empty_cart();
Are you using any cache-plugins? In that case you need to exclude the cart page
http://docs.woothemes.com/document/configuring-caching-plugins/
It could also be a problem with the payment-gateway, make sure you have updated to the latest version.
A: I found a similar question, but it clears the cart on the click of a button:
http://wordpress.org/support/topic/add-an-empty-cart-button-to-cart-page seems to have worked for some users.
Here is the code; it should go into the functions.php file:
add_action('init', 'woocommerce_clear_cart_url');
function woocommerce_clear_cart_url() {
global $woocommerce;
if( isset($_REQUEST['clear-cart']) ) {
$woocommerce->cart->empty_cart();
}
}
this is the code for the button
<input type="submit" class="button" name="clear-cart" value="<?php _e('Empty Cart', 'woocommerce'); ?>" />
I'm guessing you don't need a button, so on the next page, where you want the cart to be empty, you can explicitly call the function _e('Empty Cart', 'woocommerce');
For example, if you would like to clear the cart on a post page whose title is Payment Success:
if(is_single('Payment Success'))
_e('Empty Cart', 'woocommerce');
or, if its slug is payment-success:
is_single('payment-success');
_e('Empty Cart', 'woocommerce');
Similarly, to check whether it's the homepage you can use
is_front_page();
or, if it's a WordPress page, you can use this:
if(is_page( 'Payment Success' ))
_e('Empty Cart', 'woocommerce');
if(is_page( 'payment-success' ))
_e('Empty Cart', 'woocommerce');
You can include the above code in the head section of your WordPress site.
Hope this helps!
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/24265809",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-2"
}
|
Q: SQLite3 - One cell has different type than other cells in column I have a SQLite3 db. I use SQLite Browser and Python to work with this db. I have a column description, which is a text column in the table. I changed the text in one row using SQLite Browser; since then, I'm getting an error in my Python script.
UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 36: ordinal not in range(128)
I've tried to catch this exception and print the row, and I realised that instead of text, there is a buffer object (this is just one cell; the other cells in the row are OK):
<read-write buffer ptr 0x047684B0, size 4307 at 0x04768490>
This is the only cell with this object; the cells I didn't modify contain unicode strings.
How to solve this issue? Is there some way to convert buffer to string in the database?
EDIT: python code
with engine.connect() as connection:
for row in connection.execute('SELECT * FROM final_table'):
try:
if any(x in ' '.join(unicode(y) for y in row[:2]+row[3:4]+row[5:13]+row[14:]) for x in ('mm445','xxs', '998s54a')):
print row[:2]+row[3:4]+row[5:13]+row[14:]
for x in row:
print type(x)
except:
print 'exc'
print row[:2]+row[3:4]+row[5:13]+row[14:]
A: You can use unicode() to convert a byte string from some encoding to Unicode, but unless you specify the encoding, Python assumes it's ASCII.
SQLite does not use ASCII but UTF-8, so you have to tell Python about it:
...join(unicode(y, 'UTF-8') for y in ...)
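As a quick illustration of why the default fails, shown here with Python 3's bytes.decode, which is the modern equivalent of the Python 2 unicode() call above:

```python
# UTF-8 bytes for "café" -- the 0xc3 byte from the traceback is a typical
# UTF-8 lead byte, which the ASCII codec cannot handle.
raw = b"caf\xc3\xa9"

# Decoding as ASCII raises the same kind of error the question shows.
try:
    raw.decode("ascii")
except UnicodeDecodeError as exc:
    print("ascii failed:", exc.reason)

# Decoding as UTF-8 (what SQLite actually stores) works fine.
text = raw.decode("utf-8")
print(text)  # café
```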
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/38302563",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: How to separate text using substring I was wondering how I can separate a column containing the following:
BURGER, Petrus (CHV 494081)
Into 3 columns:
FirstName, LastName, ID
A: SELECT
a[2] AS FirstName,
a[1] AS LastName,
a[3] AS ID
FROM (
SELECT regexp_matches(column_name, '(.+), (.+) \((.+)\)')
FROM table_name
) t(a)
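The same pattern can be checked outside the database; here is a small Python sketch of the capture groups the regexp_matches call relies on:

```python
import re

# Same pattern as the SQL answer: "LastName, FirstName (ID)"
pattern = re.compile(r"(.+), (.+) \((.+)\)")

m = pattern.match("BURGER, Petrus (CHV 494081)")
last_name, first_name, id_ = m.groups()
print(first_name, last_name, id_)  # Petrus BURGER CHV 494081
```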
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/35516102",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Firebase data structure: Attending school on date I am currently looking into my first application using Firebase as the backend.
I have 2 models, School and User. Each user can sign up for a date to attend the school, so I also need a Date.
A SQL table would look like this:
schools: id, name
users: id, name, email
schools_users: id, school_id, user_id, date
What would be the proper way of designing this data structure in Firebase?
A: Since you don't specify any requirements, I suggest starting with the most naive mapping at first:
root
schools
1: "name of school1"
2: "name of school2"
users:
1: { "name": "Maeh", "email": "2523229@stackoverflow.com" }
2: { "name": "Frank", "email": "209103@stackoverflow.com" }
schools_users:
1_1: "20141031"
1_2: "20130102"
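The schools_users node above emulates a join table with composite child keys. A rough sketch of how such keys might be built and split client-side (the helper names are illustrative, not part of any Firebase API):

```python
def signup_key(school_id, user_id):
    """Build the composite child key used under schools_users."""
    return f"{school_id}_{user_id}"

def split_key(key):
    """Recover the school and user ids from a composite key."""
    school_id, user_id = key.split("_", 1)
    return school_id, user_id

# Mirrors the mapping shown above: key -> attendance date.
schools_users = {signup_key(1, 1): "20141031", signup_key(1, 2): "20130102"}
print(schools_users)  # {'1_1': '20141031', '1_2': '20130102'}
```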
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/26699191",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-2"
}
|
Q: JQuery stops working I have the following jQuery on my page, and there is a form on it. After I select a name from the autocomplete function, it disables the mask and datepicker functions.
Is there anything wrong with the code?
$().ready(function () {
$("#name").autocomplete({
source: "names.php",
width: 260,
matchContains: true,
select: function (event, ui) {
$('#name').val(ui.item.nome);
complete_fields();
}
});
$(".data").mask('99/99/9999', {
placeholder: " "
});
$(".data").datepicker({
dateFormat: 'dd/mm/yy',
nextText: 'Next',
prevText: 'Previous'
});
});
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/21685524",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Cannot install a j-text-utils-0.3.3.jar - no main manifest attribute I'm trying to install j-text-utils to make a table. When I click on the jar file, nothing happens, so I tried making a .bat file and running it from cmd. However, this shows up:
no main manifest attribute, in j-text-utils-0.3.3.jar
I searched around, and the problems were mainly with pom.xml and MANIFEST.MF.
I dragged the file into Eclipse, and it did have both a pom.xml and a MANIFEST.MF.
Here is what the pom.xml looks like:
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>dnl.utils</groupId>
<artifactId>j-text-utils</artifactId>
<packaging>jar</packaging>
<version>0.3.3</version>
<name>Java Text Utilities</name>
<url>http://code.google.com/p/j-text-utils</url>
<dependencies>
<dependency>
<groupId>commons-lang</groupId>
<artifactId>commons-lang</artifactId>
<version>2.4</version>
</dependency>
<dependency>
<groupId>net.sf.opencsv</groupId>
<artifactId>opencsv</artifactId>
<version>2.3</version>
<scope>provided</scope>
</dependency>
<dependency>
<groupId>com.google.guava</groupId>
<artifactId>guava</artifactId>
<version>14.0.1</version>
<scope>provided</scope>
</dependency>
<dependency>
<groupId>junit</groupId>
<artifactId>junit</artifactId>
<version>4.7</version>
<scope>test</scope>
</dependency>
</dependencies>
<build>
<extensions>
<extension>
<groupId>org.jvnet.wagon-svn</groupId>
<artifactId>wagon-svn</artifactId>
<version>1.9</version>
</extension>
</extensions>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-compiler-plugin</artifactId>
<configuration>
<source>1.7</source>
<target>1.7</target>
</configuration>
</plugin>
</plugins>
</build>
<distributionManagement>
<repository>
<uniqueVersion>false</uniqueVersion>
<id>googlecode</id>
<url>svn:https://d-maven.googlecode.com/svn/trunk/repo/</url>
</repository>
</distributionManagement>
</project>
Here is what the MANIFEST.MF looks like:
Manifest-Version: 1.0
Built-By: danielo
Build-Jdk: 1.7.0_45
Created-By: Apache Maven 3.1.0
Archiver-Version: Plexus Archiver
How may I fix this problem please? :(
I used the download link from Google Code: https://code.google.com/archive/p/j-text-utils/downloads
A: This .jar file is meant to be used as a library, not as an executable application.
To access the functionalities, you should write your own application in Java and import the needed classes contained in the j-text-utils-0.3.3.jar file.
To be able to compile your Java code, the j-text-utils-0.3.3.jar needs to be available on the classpath.
See this question for more info : What is a classpath and how do I set it?
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/56696438",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Julia immutable struct with mutable list Here's some broken code:
struct NumberedList
index::Int64
values::Vector{Int64}
NumberedList(i) = new(i, Int64[])
end
function set_values!(list::NumberedList, new_values::Vector{Int64})
list.values = new_values
end
# ---
mylist = NumberedList(1)
set_values!(mylist, [1, 2, 3])
I don't want to do either of these:
*
*I could declare the NumberedList as a mutable struct (making everything mutable)
*I could replace set_values! with something like this (copy all the values):
function set_values!(list::NumberedList, new_values::Vector{Int64})
for i in new_values
push!(list.values, i)
end
end
Instead, I'd like to keep index immutable but allow values to be reassigned.
Other notes:
*
*Thread from the Discourse: "Mutable field in immutable type," but this doesn't apply to vectors in this setting.
A: It depends on what exactly you want to mutate.
*
*If you really want to reassign the field values::Vector{Int64} itself, then you have to use a mutable struct. There's no way around that, because the actual data of the struct changes when you reassign that field.
*If you use an immutable struct with a values::Vector{Int64} field, it means that you cannot change which array is contained, but the array itself is mutable and can change its elements (which are not stored in the struct). In this case, you really do have to copy values to it from external arrays, like your example code (though I would point out that your code did not reset the array to an empty one). I personally think this would be cleaner:
function set_values!(list::NumberedList, new_values::Vector{Int64})
empty!(list.values) # reset list.values to Int64[]
append!(list.values, new_values)
end
*The thread you linked talks about using Base.Ref. Base.Ref is pretty much THE way to make a field of an immutable struct indirectly act like a mutable field. It works like this: the field cannot change which RefValue{Vector{Int64}} instance is contained, but the instance itself is mutable and can change its reference (again, not stored in the struct) to any Int64 array. You have to use indexing values[] to get to the array, though:
struct NumberedList
index::Int64
values::Ref{Vector{Int64}}
    NumberedList(i) = new(i, Ref(Int64[]))
end
function set_values!(list::NumberedList, new_values::Vector{Int64})
list.values[] = new_values # "reassign" different array to Ref
end
# ---
mylist = NumberedList(1)
set_values!(mylist, [1, 2, 3])
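The same "immutable container, mutable cell" pattern exists in other languages. Here is a rough Python analog, purely for illustration, using a frozen dataclass whose list field can be mutated in place but not rebound:

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class NumberedList:
    index: int
    values: list = field(default_factory=list)

def set_values(lst, new_values):
    # The field itself cannot be reassigned (the dataclass is frozen),
    # but the list object it points to can be mutated in place.
    lst.values.clear()
    lst.values.extend(new_values)

mylist = NumberedList(1)
set_values(mylist, [1, 2, 3])
print(mylist.values)  # [1, 2, 3]
```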
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/69156101",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: call API in R to evaluate each row in data frame I am using the Rosette API and it works great. However, it seems to only work when passing one variable at a time.
As per below, variable "h" is a sentence I want to get sentiment analysis on.
library(rosette)
var.name <- "rosette_api_key"
var.value <- "api key"
args = list(var.value)
names(args) = var.name
do.call(Sys.setenv, args)
h = "This is a cool site"
x = ros_sentiment(paste0(h, collapse=""))
$document
$document$label
[1] "pos"
$document$confidence
[1] 0.6999912
$entities
list()
When I change "h" to point to a column in a data.frame, it won't accept it.
Equally, when I change "h" to multiple sentences like
h = c("I love my house", "This is a sad day", "My son is cute")
it scores my "h" as one single document, but I want it to return 3 different scores, one for each sentence.
What am I doing wrong? Please advise.
Thanks so much
Ped
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/40196173",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Highlight very simple code tag in wordpress? On a self-hosted WordPress blog, I'm trying to highlight this code, [xml], inline.
If I put this :
[code language="ps" light="true"][xml][/code]
It outputs as 1
Which html source is :
<div class="line number1 index0 alt2">
<code class="powershell plain">1</code>
</div>
How can I tell the syntax highlighting engine to ignore the [xml] tag and render it as-is?
Is it possible to highlight some words in a sentence (like the ``code`` construct on SO)?
[edit] I also tried:
[code language="ps" light="true"][xml][/code]
But it outputs [xml]
A: Use this plugin to highlight code snippets
https://wordpress.org/plugins/crayon-syntax-highlighter/
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/24079113",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Tensorflow back_prop ValueError: setting an array element with a sequence I'm getting the error ValueError: setting an array element with a sequence when doing back_prop in TensorFlow. I'm using the large IMDB dataset and GloVe 50d pre-trained vectors. I have tried everything: converting the multi-dimensional list into an np.array, converting the individual lists into np.arrays, and also doing the reshape operation x = x.reshape((batch,time_steps,embedding))
on x, but that gave me ValueError: total size of new array must be unchanged. I think something is wrong with my input, but I don't know what. You can run this code on your PC by downloading the IMDB dataset and the 50d GloVe vectors. Please help!
Traceback (most recent call last):
File "nlp.py", line 109, in <module>
sess.run(minimize_loss,feed_dict={X : x, Y : y})
File "/home/indy/tensorflow/local/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 372, in run
run_metadata_ptr)
File "/home/indy/tensorflow/local/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 619, in _run
np_val = np.array(subfeed_val, dtype=subfeed_dtype)
ValueError: setting an array element with a sequence.
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import tensorflow as tf
import numpy as np
import math
import os
from nltk.tokenize import TweetTokenizer
batch = 500
start = 0
end = batch - 1
learning_rate = 0.2
num_classes = 8
path = "/home/indy/Downloads/aclImdb/train/pos"
time_steps = 250
embedding = 50
def get_embedding():
gfile_path = os.path.join("/home/indy/Downloads/glove.6B", "glove.6B.50d.txt")
f = open(gfile_path,'r')
embeddings = {}
for line in f:
sp_value = line.split()
word = sp_value[0]
embedding = [float(value) for value in sp_value[1:]]
embeddings[word] = embedding
return embeddings
ebd = get_embedding()
def get_y(file_name):
y_value = file_name.split('_')
y_value = y_value[1].split('.')
return y_value[0]
def get_x(path,file_name):
file_path = os.path.join(path,file_name)
x_value = open(file_path,'r')
for line in x_value:
x_value = line.replace("<br /><br />","")
x_value = x_value.lower()
tokeniz = TweetTokenizer()
x_value = tokeniz.tokenize(x_value)
padding = 250 - len(x_value)
if padding > 0:
p_value = ['pad' for i in range(padding)]
x_value = np.concatenate((x_value,p_value))
x_value = [ebd['value'] for value in x_value]
return x_value
def batch_f(path):
directory = os.listdir(path)
y = [get_y(directory[i]) for i in range(len(directory))]
x = [get_x(path,directory[i]) for i in range(len(directory))]
return x,y
X = tf.placeholder(tf.float32, [batch,time_steps,embedding])
Y = tf.placeholder(tf.int32, [batch])
def build_nlp_model(x, _units, lstm_layers,num_classes):
x = tf.transpose(x, [1, 0, 2])
x = tf.reshape(x, [-1, embedding])
x = tf.split(0, time_steps, x)
lstm = tf.nn.rnn_cell.LSTMCell(num_units = _units, state_is_tuple = True)
multi_lstm = tf.nn.rnn_cell.MultiRNNCell([lstm] * lstm_layers, state_is_tuple = True)
outputs , state = tf.nn.rnn(multi_lstm,x, dtype = tf.float32)
weights = tf.Variable(tf.random_normal([_units,num_classes]))
biases = tf.Variable(tf.random_normal([num_classes]))
logits = tf.matmul(outputs[-1], weights) + biases
return logits
logits = build_nlp_model(X,400,4,num_classes)
c_loss = tf.nn.sparse_softmax_cross_entropy_with_logits(logits,Y)
loss = tf.reduce_mean(c_loss)
decayed_learning_rate = tf.train.exponential_decay(learning_rate,0,10000,0.9)
optimizer= tf.train.AdamOptimizer(decayed_learning_rate)
minimize_loss = optimizer.minimize(loss)
correct_predict = tf.nn.in_top_k(logits, Y, 1)
accuracy = tf.reduce_mean(tf.cast(correct_predict, tf.float32))
init = tf.initialize_all_variables()
with tf.Session() as sess:
sess.run(init)
for i in range(25):
x, y = batch_f(path)
sess.run(minimize_loss,feed_dict={X : x, Y : y})
accu = sess.run(accuracy,feed_dict = {X: x, Y: y})
cost = sess.run(loss,feed_dict = {X: x,Y: y})
start = end
end = (start + batch)
print ("Minibatch Loss = " + "{:.6f}".format(cost) + ", Training Accuracy= " + "{:.5f}".format(accu))
EDIT: This is the other error that I get when I run the code:
(500, 250, 50)
(500,)
Traceback (most recent call last):
File "nlp.py", line 115, in <module>
accu = sess.run(accuracy,feed_dict = {X: x, Y: y})
File "/home/indy/tensorflow/local/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 372, in run
run_metadata_ptr)
File "/home/indy/tensorflow/local/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 636, in _run
feed_dict_string, options, run_metadata)
File "/home/indy/tensorflow/local/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 708, in _do_run
target_list, options, run_metadata)
File "/home/indy/tensorflow/local/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 728, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors.InvalidArgumentError: targets[0] is out of range
[[Node: InTopK = InTopK[T=DT_INT32, k=1, _device="/job:localhost/replica:0/task:0/cpu:0"](add, _recv_Placeholder_1_0)]]
Caused by op u'InTopK', defined at:
File "nlp.py", line 102, in <module>
correct_predict = tf.nn.in_top_k(logits, Y, 1)
File "/home/indy/tensorflow/local/lib/python2.7/site-packages/tensorflow/python/ops/gen_nn_ops.py", line 890, in in_top_k
targets=targets, k=k, name=name)
File "/home/indy/tensorflow/local/lib/python2.7/site-packages/tensorflow/python/ops/op_def_library.py", line 704, in apply_op
op_def=op_def)
File "/home/indy/tensorflow/local/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 2260, in create_op
original_op=self._default_original_op, op_def=op_def)
File "/home/indy/tensorflow/local/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 1230, in __init__
self._traceback = _extract_stack()
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/38838456",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: UnauthorizedAccessException in visual studio design view I am having a problem with the design view of Visual Studio: I get an error when I load a Telerik reference, specifically Telerik controls for Windows 8. When I add that reference I get the exception; when I remove it, the exception goes away.
All this happens on a blank XAML page with only a grid in it. The project runs fine and shows whatever UI I have in the XAML, but it just won't show in the design view.
This is the stack trace:
System.UnauthorizedAccessException
Access is denied. (Exception from HRESULT: 0x80070005 (E_ACCESSDENIED))
at Microsoft.VisualStudio.Silverlight.AssemblyMetadataHelper.IMetaDataDispenserEx.OpenScope(String szScope, Int32 dwOpenFlags, Guid& riid)
at Microsoft.VisualStudio.Silverlight.AssemblyMetadataHelper.OpenScope(IMetaDataDispenserEx dispenser, String path)
at Microsoft.Expression.Utility.AssemblyHelper.TryGetAttributeValueAsString(String path, String attributeName, String& attributeValue, Boolean& attributeExists)
at Microsoft.Expression.Utility.AssemblyHelper.IsDesignAssembly(String path, Version extensibilityAssemblyVersion)
at Microsoft.Expression.DesignSurface.Assemblies.ProjectAssemblyResolver.FindBestMetadataAssembly(IEnumerable`1 possiblePaths)
at Microsoft.Expression.DesignSurface.Assemblies.ProjectAssemblyResolver.FindMetadataAssembly(String assemblyPath, String designerExtension)
at Microsoft.Expression.DesignSurface.Assemblies.ProjectAssemblyResolver.EnsureDesignMetadataLoaded(Boolean showErrors)
at Microsoft.Expression.DesignSurface.Assemblies.ProjectAssemblyResolver.AssemblyCollection_EnumerationChanged(Object sender, EnumerationChangedEventArgs`1 e)
at System.EventHandler`1.Invoke(Object sender, TEventArgs e)
at Microsoft.Expression.Utility.Collections.NotifyingCollectionBase`1.EventInvoker(EnumerationChangedEventArgs`1 eventArguments)
at Microsoft.Expression.Utility.Events.SuspendingEventManager`1.ForwardEvents()
at Microsoft.Expression.Utility.Events.Suspender.SuspendDisposer.Dispose(Boolean disposing)
at Microsoft.Expression.Utility.Events.Suspender.SuspendDisposer.Dispose()
at Microsoft.Expression.DesignSurface.Assemblies.AssemblyCollection.AssemblyService_AssembliesUpdated(Object sender, EventArgs`1 e)
at Microsoft.Expression.DesignSurface.Assemblies.AssemblyService.OnAssembliesUpdated(IEnumerable`1 assemblyInformation)
at Microsoft.Expression.DesignSurface.Assemblies.AssemblyService.WindowsRuntimeService_IterationContextChanged(Object sender, WindowsRuntimeContextChangedEventArgs e)
at System.EventHandler`1.Invoke(Object sender, TEventArgs e)
at Microsoft.Expression.Utility.WindowsRuntimeService.SetIterationContext(String[] paths)
at Microsoft.Expression.Utility.WindowsRuntimeService.SynchronizeDesignerContext()
at Microsoft.Expression.Utility.WindowsRuntimeService.ProcessShadowCopyResults(IEnumerable`1 results)
at Microsoft.Expression.DesignSurface.Assemblies.AssemblyService.FlushShadowCopyUpdateQueue()
at Microsoft.Expression.DesignSurface.Assemblies.ProjectAssemblyResolver.Project_ReferencesChanged(Object sender, HostItemChangesEventArgs`1 e)
at System.EventHandler`1.Invoke(Object sender, TEventArgs e)
at Microsoft.Expression.DesignHost.Isolation.Remoting.LocalEvent`3.<>c__DisplayClass1b.<Invoke>b__1a()
at Microsoft.Expression.DesignHost.Isolation.Remoting.STAMarshaler.Call.InvokeWorker()
at Microsoft.Expression.DesignHost.Isolation.Remoting.STAMarshaler.Call.Invoke(Boolean waitingInExternalCall)
at Microsoft.Expression.DesignHost.Isolation.Remoting.STAMarshaler.InvokeCall(Call call)
at Microsoft.Expression.DesignHost.Isolation.Remoting.STAMarshaler.DirectInvoke(Boolean inbound, Action action, Int32 sourceApartmentId, Int32 targetApartmentId, Int32 originId, WaitHandle aborted)
at Microsoft.Expression.DesignHost.Isolation.Remoting.STAMarshaler.DirectInvokeInbound(Action action, Int32 targetApartmentId)
at Microsoft.Expression.DesignHost.Isolation.Remoting.STAMarshaler.MarshalIn(Action action, Int32 targetApartmentId, CancellationToken cancelToken, CallSynchronizationMode syncMode, CallModality callModality, String methodName, String filePath, Int32 lineNumber)
at Microsoft.Expression.DesignHost.Isolation.Remoting.ThreadMarshaler.MarshalIn(IRemoteObject targetObject, Action action, CallSynchronizationMode syncMode, CallModality callModality, ApartmentState apartmentState, String memberName, String filePath, Int32 lineNumber)
at Microsoft.Expression.DesignHost.Isolation.Remoting.LocalEvent`3.Invoke(Object sender, TEvent args)
at Microsoft.Expression.DesignHost.Isolation.Remoting.LocalHostProject.<>c__DisplayClass9b.<Microsoft.Expression.DesignHost.Isolation.Remoting.IRemoteHostProjectCallback.OnReferencesChanged>b__9a()
at Microsoft.Expression.DesignHost.Isolation.Remoting.STAMarshaler.Call.InvokeWorker()
at Microsoft.Expression.DesignHost.Isolation.Remoting.STAMarshaler.Call.Invoke(Boolean waitingInExternalCall)
at Microsoft.Expression.DesignHost.Isolation.Remoting.STAMarshaler.InvokeCall(Call call)
at Microsoft.Expression.DesignHost.Isolation.Remoting.STAMarshaler.ProcessQueue(CallQueue queue)
at Microsoft.Expression.DesignHost.Isolation.Remoting.STAMarshaler.ProcessInboundQueue(Int32 identity)
at Microsoft.Expression.DesignHost.Isolation.Remoting.STAMarshaler.ProcessMessage(Int32 msg, IntPtr wParam, IntPtr lParam, Boolean elevatedQuery, Boolean& handled)
at Microsoft.Expression.DesignHost.Isolation.Remoting.STAMarshaler.OnWindowMessage(IntPtr hwnd, Int32 msg, IntPtr wParam, IntPtr lParam, Boolean& handled)
at Microsoft.Expression.DesignHost.Isolation.Remoting.MessageOnlyHwndWrapper.WndProc(IntPtr hwnd, Int32 msg, IntPtr wParam, IntPtr lParam)
at MS.Win32.UnsafeNativeMethods.DispatchMessage(MSG& msg)
at System.Windows.Threading.Dispatcher.PushFrameImpl(DispatcherFrame frame)
at System.Windows.Threading.Dispatcher.PushFrame(DispatcherFrame frame)
at System.Windows.Threading.Dispatcher.Run()
at System.Windows.Application.RunDispatcher(Object ignore)
at System.Windows.Application.RunInternal(Window window)
at System.Windows.Application.Run(Window window)
at Microsoft.Expression.DesignHost.Isolation.DesignerProcess.RunApplication()
at Microsoft.Expression.DesignHost.Isolation.DesignerProcess.DesignProcessViewProvider.AppContainerDesignerProcessRun(String[] activationContextArgs)
at Microsoft.Expression.DesignHost.Isolation.DesignerProcess.DesignProcessViewProvider.<>c__DisplayClass7.<applicationView_Activated>b__6()
at System.Threading.ThreadHelper.ThreadStart_Context(Object state)
at System.Threading.ExecutionContext.RunInternal(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean preserveSyncCtx)
at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean preserveSyncCtx)
at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state)
at System.Threading.ThreadHelper.ThreadStart()
and my XAML:
<Page
xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
xmlns:local="using:t3"
xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
xmlns:Input="using:Telerik.UI.Xaml.Controls.Input"
x:Class="t3.MainPage"
mc:Ignorable="d">
<Grid Background="{ThemeResource ApplicationPageBackgroundThemeBrush}">
<Button Content="Button" HorizontalAlignment="Left" Margin="517,180,0,0" VerticalAlignment="Top"/>
</Grid>
</Page>
The error really does not give me any information about what the problem is. Any ideas?
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/21145837",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Restore On Premises SSAS Database to Azure I've set up an SSAS server on Azure and configured it to connect to an Azure storage account for backups.
Using the Azure Storage Explorer, I've also uploaded a backed-up SSAS database to this same storage account, and I'm trying to restore this backed up db to my SSAS server in Azure.
I've tried to do this using Powershell with
restore-asdatabase -restorefile "mySSASbackup.abf " -name "DBName" -server "asazure://southcentralus.asazure.windows.net/MySSASServer”
But then I get restore-asdatabase : This feature is not supported in AS Azure.
Trying via SSMS: I can connect to the server successfully, but when I try to restore a DB and click on the browse button I get an invalid UNC path notification.
Can anyone help please?
A: You may restore the .abf files into an Azure Analysis Services instance by uploading the file into the blob storage associated with the AS instance:
https://azure.microsoft.com/en-us/blog/backup-and-restore-your-azure-analysis-services-models/
https://www.neowin.net/news/azure-analysis-services-update-brings-backup-and-restore-new-pricing-options-and-more
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/45162018",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: How to correctly implement a multi-window application in C#? I have a C# Winforms application which supports multiple windows, similar to a browser.
At the moment, when the user launches a new window, I just find the process name and launch the process again. This seems like a bit of a hack, and it comes with the extra problem that the new window has to do a lot of redundant application initialization work.
I tried this:
var form = new MainForm();
form.Show()
But this has the hidden caveat that closing the main window (which looks like any other window) results in closing all windows.
How can I make my program behave like a browser, where each window is independent but we don't launch a new process every time?
(Note: I think Chrome is an exception in that it launches a new process for each window!)
A: To open a new window:
Thread thread = new Thread(() => Application.Run(new MainForm()))
{
IsBackground = false
};
thread.Start();
Obviously, this implies that if you access shared state from multiple threads, you need to properly synchronize that access.
A: You can use MDI containers to support multiple windows. Your MainForm will contain all the other forms, and the other forms can work independently.
Set your main form's IsMdiContainer property to true. When creating a new form, set its MdiParent to your main form.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/24164909",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-1"
}
|
Q: How to delete completed kubernetes pod? I have a bunch of pods in kubernetes which are completed (successfully or unsuccessfully) and I'd like to clean up the output of kubectl get pods. Here's what I see when I run kubectl get pods:
NAME READY STATUS RESTARTS AGE
intent-insights-aws-org-73-ingest-391c9384 0/1 ImagePullBackOff 0 8d
intent-postgres-f6dfcddcc-5qwl7 1/1 Running 0 23h
redis-scheduler-dev-master-0 1/1 Running 0 10h
redis-scheduler-dev-metrics-85b45bbcc7-ch24g 1/1 Running 0 6d
redis-scheduler-dev-slave-74c7cbb557-dmvfg 1/1 Running 0 10h
redis-scheduler-dev-slave-74c7cbb557-jhqwx 1/1 Running 0 5d
scheduler-5f48b845b6-d5p4s 2/2 Running 0 36m
snapshot-169-5af87b54 0/1 Completed 0 20m
snapshot-169-8705f77c 0/1 Completed 0 1h
snapshot-169-be6f4774 0/1 Completed 0 1h
snapshot-169-ce9a8946 0/1 Completed 0 1h
snapshot-169-d3099b06 0/1 ImagePullBackOff 0 24m
snapshot-204-50714c88 0/1 Completed 0 21m
snapshot-204-7c86df5a 0/1 Completed 0 1h
snapshot-204-87f35e36 0/1 ImagePullBackOff 0 26m
snapshot-204-b3a4c292 0/1 Completed 0 1h
snapshot-204-c3d90db6 0/1 Completed 0 1h
snapshot-245-3c9a7226 0/1 ImagePullBackOff 0 28m
snapshot-245-45a907a0 0/1 Completed 0 21m
snapshot-245-71911b06 0/1 Completed 0 1h
snapshot-245-a8f5dd5e 0/1 Completed 0 1h
snapshot-245-b9132236 0/1 Completed 0 1h
snapshot-76-1e515338 0/1 Completed 0 22m
snapshot-76-4a7d9a30 0/1 Completed 0 1h
snapshot-76-9e168c9e 0/1 Completed 0 1h
snapshot-76-ae510372 0/1 Completed 0 1h
snapshot-76-f166eb18 0/1 ImagePullBackOff 0 30m
train-169-65f88cec 0/1 Error 0 20m
train-169-9c92f72a 0/1 Error 0 1h
train-169-c935fc84 0/1 Error 0 1h
train-169-d9593f80 0/1 Error 0 1h
train-204-70729e42 0/1 Error 0 20m
train-204-9203be3e 0/1 Error 0 1h
train-204-d3f2337c 0/1 Error 0 1h
train-204-e41a3e88 0/1 Error 0 1h
train-245-7b65d1f2 0/1 Error 0 19m
train-245-a7510d5a 0/1 Error 0 1h
train-245-debf763e 0/1 Error 0 1h
train-245-eec1908e 0/1 Error 0 1h
train-76-86381784 0/1 Completed 0 19m
train-76-b1fdc202 0/1 Error 0 1h
train-76-e972af06 0/1 Error 0 1h
train-76-f993c8d8 0/1 Completed 0 1h
webserver-7fc9c69f4d-mnrjj 2/2 Running 0 36m
worker-6997bf76bd-kvjx4 2/2 Running 0 25m
worker-6997bf76bd-prxbg 2/2 Running 0 36m
and I'd like to get rid of the pods like train-204-d3f2337c. How can I do that?
A: As previous answers mentioned you can use the command:
kubectl delete pod --field-selector=status.phase=={{phase}}
to delete pods in a certain "phase". What's still missing is a quick summary of which phases exist; the valid values for a pod phase are:
Pending, Running, Succeeded, Failed, Unknown
And in this specific case to delete "error" pods:
kubectl delete pod --field-selector=status.phase==Failed
A: If these pods were created by a CronJob, you can use spec.failedJobsHistoryLimit and spec.successfulJobsHistoryLimit
Example:
apiVersion: batch/v1
kind: CronJob
metadata:
name: my-cron-job
spec:
schedule: "*/10 * * * *"
failedJobsHistoryLimit: 1
successfulJobsHistoryLimit: 3
jobTemplate:
spec:
template:
...
A: Here's a one-liner which will delete all pods that aren't in the Running or Pending state (note that if a pod name has Running or Pending in it, it will never get deleted by this one-liner):
kubectl get pods --no-headers=true |grep -v "Running" | grep -v "Pending" | sed -E 's/([a-z0-9-]+).*/\1/g' | xargs kubectl delete pod
Here's an explanation:
*
*get all pods without any of the headers
*filter out pods which are Running
*filter out pods which are Pending
*pull out the name of the pod using a sed regex
*use xargs to delete each of the pods by name
Note that this doesn't account for all pod states. For example, if a pod is in the ContainerCreating state, this one-liner will delete that pod too.
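To see what the text-processing steps do in isolation, you can feed the same pipeline a couple of sample lines of kubectl get pods output (no cluster needed; the pod names are taken from the listing above):

```shell
printf '%s\n' \
  'snapshot-169-5af87b54    0/1   Completed   0   20m' \
  'redis-scheduler-dev-master-0   1/1   Running   0   10h' |
  grep -v "Running" | grep -v "Pending" |
  sed -E 's/([a-z0-9-]+).*/\1/g'
```

This prints only snapshot-169-5af87b54, which is exactly the name that would be handed to xargs kubectl delete pod.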
A: You can do it in two ways.
$ kubectl delete pod $(kubectl get pods | grep Completed | awk '{print $1}')
or
$ kubectl get pods | grep Completed | awk '{print $1}' | xargs kubectl delete pod
Both solutions will do the job.
A: You can do this a bit more easily now.
You can list all completed pods by:
kubectl get pod --field-selector=status.phase==Succeeded
delete all completed pods by:
kubectl delete pod --field-selector=status.phase==Succeeded
and delete all errored pods by:
kubectl delete pod --field-selector=status.phase==Failed
A: If you would like to delete the pods that are not Running, it can be done with one command. First, to list them:
kubectl get pods --field-selector=status.phase!=Running
And the corresponding delete command:
kubectl delete pods --field-selector=status.phase!=Running
A: Here you go:
kubectl get pods --all-namespaces |grep -i completed|awk '{print "kubectl delete pod "$2" -n "$1}'|bash
you can replace completed with CrashLoopBackOff or any other state...
A: I think pjincz handled your question well regarding deleting the completed pods manually.
However, I popped in here to introduce a new feature of Kubernetes, which may remove finished pods automatically on your behalf. You should just define a time to live to auto clean up finished Jobs like below:
apiVersion: batch/v1
kind: Job
metadata:
name: remove-after-ttl
spec:
ttlSecondsAfterFinished: 86400
template:
...
A: Here is a single command to delete all pods that are terminated, Completed, in Error, etc.
kubectl delete pods --field-selector status.phase=Failed -A --ignore-not-found=true
If you are using preemptible GKE nodes, you often see those pods hanging around.
Here is an automated solution I set up to clean up: https://stackoverflow.com/a/72872547/4185100
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/55072235",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "52"
}
|
Q: Is there a framework / API I could use to export iPhone-SDK's ABRecordRef instances to vCard? I'm looking at source code from Funambol, but the dependencies are so huge that I'm rethinking using it, not to mention the code is based on OC++. Can anyone help me out on this?
Thanks.
A: No, afraid not.
You can either try to find a library / sample code in C/C++/ObjC that will generate a vCard from provided information, or attempt to roll your own.
You can find more information about vCard including the specs here; http://www.imc.org/pdi/
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/484738",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Config file location and binaries and build systems like autoconf Most build systems, like autoconf/automake, allow the user to specify a target directory to install the various files needed to run a program. Usually this includes binaries, configuration files, auxilliary scripts, etc.
At the same time, many executables often need to read from a configuration file in order to allow a user to modify runtime settings.
Ultimately, a program (let's say, a compiled C or C++ program) needs to know where to look to read in a configuration file. A lot of times I will just hardcode the path as something like /etc/MYPROGAM/myprog.conf, which of course is not a great idea.
But in the autoconf world, the user might specify an install prefix, meaning that the C/C++ code needs to somehow be aware of this.
One solution would be to specify a C header file with a .in prefix, which simply is used to define the location of the config file, like:
const char* config_file_path = "@CONFIG_FILE_PATH@"; // `CONFIG_FILE_PATH` is defined in `configure.ac`.
This file would be named something like constants.h.in and it would have to be process by the configure.ac file to output an actual header file, which could then be included by whatever .c or .cpp files need it.
Is that the usual way this sort of thing is handled? It seems a bit cumbersome, so I wonder if there is a better solution.
A: There are basically two choices for how to handle this.
One choice is to do what you've mentioned -- compile the relevant paths into the resulting executable or library. Here it's worth noting that if files are installed in different sub-parts of the prefix, then each such thing needs its own compile-time path. That's because the user might specify --prefix separately from --bindir, separately from --libexecdir, etc. Another wrinkle here is that if there are multiple installed programs that refer to each other, then this process probably should take into account the program name transform (see docs on --program-transform-name and friends).
That's all if you want full generality of course.
The other approach is to have the program be relocatable at runtime. Many GNU projects (at least gdb and gcc) take this approach. The idea here is for the program to attempt to locate its data in the filesystem at runtime. In the projects I'm most familiar with, this is done with the libiberty function make_relative_prefix; but I'm sure there are other ways.
This approach is often touted as being nicer because it allows the program's install tree to be tared up and delivered to users; but in the days of distros it seems to me that it isn't as useful as it once was. I think the primary drawback of this approach is that it makes it very hard, if not impossible, to support both relocation and the full suite of configure install-time options.
Which one you pick depends, I think, on what your users want.
Also, to answer the above comment: I think changing the prefix between configure and build time is not really supported, though it may work with some packages. Instead, the usual way to handle this is either to require the choice at configure time, or to support the somewhat more limited DESTDIR feature.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/26853947",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: Create a .txt file if doesn't exist, and if it does append a new line I would like to create a .txt file and write to it, and if the file already exists I just want to append some more lines:
string path = @"E:\AppServ\Example.txt";
if (!File.Exists(path))
{
File.Create(path);
TextWriter tw = new StreamWriter(path);
tw.WriteLine("The very first line!");
tw.Close();
}
else if (File.Exists(path))
{
TextWriter tw = new StreamWriter(path);
tw.WriteLine("The next line!");
tw.Close();
}
But the first line seems to always get overwritten... how can I avoid writing on the same line (I'm using this in a loop)?
I know it's a pretty simple thing, but I never used the WriteLine method before. I'm totally new to C#.
A: string path = @"E:\AppServ\Example.txt";
File.AppendAllLines(path, new [] { "The very first line!" });
See also File.AppendAllText(). AppendAllLines will add a newline to each line without having to put it there yourself.
Both methods will create the file if it doesn't exist so you don't have to.
*
*File.AppendAllText
*File.AppendAllLines
A: You just want to open the file in "append" mode.
http://msdn.microsoft.com/en-us/library/3zc0w663.aspx
A: You can just use File.AppendAllText() Method this will solve your problem.
This method will take care of File Creation if not available, opening and closing the file.
var outputPath = @"E:\Example.txt";
var data = "Example Data";
File.AppendAllText(outputPath, data);
A: string path=@"E:\AppServ\Example.txt";
if(!File.Exists(path))
{
File.Create(path).Dispose();
using( TextWriter tw = new StreamWriter(path))
{
tw.WriteLine("The very first line!");
}
}
else if (File.Exists(path))
{
using(TextWriter tw = new StreamWriter(path))
{
tw.WriteLine("The next line!");
}
}
A: When you construct a StreamWriter this way, it overwrites the text that was there before. You can use the append parameter of the constructor like so:
TextWriter t = new StreamWriter(path, true);
A: else if (File.Exists(path))
{
using (StreamWriter w = File.AppendText(path))
{
w.WriteLine("The next line!");
w.Close();
}
}
A: Try this.
string path = @"E:\AppServ\Example.txt";
if (!File.Exists(path))
{
using (var txtFile = File.AppendText(path))
{
txtFile.WriteLine("The very first line!");
}
}
else if (File.Exists(path))
{
using (var txtFile = File.AppendText(path))
{
txtFile.WriteLine("The next line!");
}
}
A: You don't actually have to check whether the file exists, as StreamWriter will do that for you. If you open it in append mode, the file will be created if it does not exist; then you will always append and never overwrite. So your initial check is redundant.
TextWriter tw = new StreamWriter(path, true);
tw.WriteLine("The next line!");
tw.Close();
A: You could use a FileStream. This does all the work for you.
http://www.csharp-examples.net/filestream-open-file/
A: Use the correct constructor:
else if (File.Exists(path))
{
using(var sw = new StreamWriter(path, true))
{
sw.WriteLine("The next line!");
}
}
A: File.AppendAllText adds a string to a file. It also creates a text file if the file does not exist. If you don't need to read content, it's very efficient. The use case is logging.
File.AppendAllText("C:\\log.txt", "hello world\n");
A: From microsoft documentation, you can create file if not exist and append to it in a single call
File.AppendAllText Method (String, String)
.NET Framework (current version)
Opens a file, appends the specified string to the file, and then closes the file. If the file does not exist, this method creates a file, writes the specified string to the file, then closes the file.
Namespace: System.IO
Assembly: mscorlib (in mscorlib.dll)
Syntax
public static void AppendAllText(
string path,
string contents
)
Parameters
path
Type: System.String
The file to append the specified string to.
contents
Type: System.String
The string to append to the file.
A: using(var tw = new StreamWriter(path, File.Exists(path)))
{
tw.WriteLine(message);
}
A: .NET Core Console App:
public static string RootDir() => Path.GetFullPath(Path.Combine(AppContext.BaseDirectory, @"..\..\..\"));
string _OutputPath = RootDir() + "\\Output\\" + "MyFile.txt";
if (!File.Exists(_OutputPath))
File.Create(_OutputPath).Dispose();
using (TextWriter _StreamWriter = new StreamWriter(_OutputPath))
{
_StreamWriter.WriteLine(strOriginalText);
}
A: Please note that the AppendAllLines and AppendAllText methods only create the file, not the directory path. So if you are trying to create a file in "C:\Folder", ensure that this directory exists first.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/9907682",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "182"
}
|
Q: C#: Read and modify settings in another application's app.config file I have a number of applications running which communicate with each other but none of these applications have their own user interface. I have a system console application which acts as a user interface for the system (i.e. the set of applications which all talk to each other).
I would like to be able to use the system console to read and modify the configuration of each of the non-gui apps.
Each app has an app.config file created using the Visual Studio Settings GUI. The settings are all in application scope, which results in an app.config file which looks a bit like this:
<?xml version="1.0" encoding="utf-8" ?>
<configuration>
<configSections>
<sectionGroup name="applicationSettings" type="System.Configuration.ApplicationSettingsGroup, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" >
<section name="ExternalConfigReceiver.Properties.Settings" type="System.Configuration.ClientSettingsSection, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" requirePermission="false" />
</sectionGroup>
</configSections>
<applicationSettings>
<ExternalConfigReceiver.Properties.Settings>
<setting name="Conf1" serializeAs="String">
<value>3</value>
</setting>
<setting name="Conf2" serializeAs="String">
<value>4</value>
</setting>
</ExternalConfigReceiver.Properties.Settings>
</applicationSettings>
I have tried using the following code to read the configuration settings:
System.Configuration.ExeConfigurationFileMap fileMap = new System.Configuration.ExeConfigurationFileMap();
fileMap.ExeConfigFilename = "PATH_TO_THE_FOLDER\\app.config";
System.Configuration.Configuration config = System.Configuration.ConfigurationManager.OpenExeConfiguration(fileMap, System.Configuration.ConfigurationUserLevel.None);
someVariable = config.AppSettings.Settings["Conf1"];
someVariable2 = config.AppSettings.Settings["Conf2"];
However on closer inspection of the config.AppSettings object, I find that it contains no settings.
What am I doing wrong? Am I using the wrong method to read the config file? Is this method best for a different sort of config file?
A: It's possible to treat the config file as plain XML and then use XPath to change values:
using (TransactionScope transactionScope = new TransactionScope())
{
XmlDocument configFile = new XmlDocument();
configFile.Load("PathToConfigFile");
XPathNavigator fileNavigator = configFile.CreateNavigator();
// User recursive function to get to the correct node and set the value
WriteValueToConfigFile(fileNavigator, pathToValue, newValue);
configFile.Save("PathToConfigFile");
// Commit transaction
transactionScope.Complete();
}
private void WriteValueToConfigFile(XPathNavigator fileNavigator, string remainingPath, string newValue)
{
string[] splittedXPath = remainingPath.Split(new[] { '/' }, 2);
if (splittedXPath.Length == 0 || String.IsNullOrEmpty(remainingPath))
{
throw new Exception("Path incorrect.");
}
string xPathPart = splittedXPath[0];
XPathNavigator nodeNavigator = fileNavigator.SelectSingleNode(xPathPart);
if (splittedXPath.Length > 1)
{
// Recursion
WriteValueToConfigFile(nodeNavigator, splittedXPath[1], newValue);
}
else
{
nodeNavigator.SetValue(newValue ?? String.Empty);
}
}
Possible path to Conf1:
"configuration/applicationSettings/ExternalConfigReceiver.Properties.Settings/setting[name=\"Conf1\"]/value"
A: As far as I know, you can't use System.Configuration.Configuration to access the config files of other applications.
They are XML files, though, so you can use the XML namespace to interact with them.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/13931171",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: Intro class C# for loop issue I'm working on the following two problems for my intro level C# course. I've completed problem 3, but am having trouble with problem 4. The issue is that the total is not coming out correct, as it is not adding the first combo value entered and I'm not quite sure where I went wrong. I would appreciate any help you guys can provide with this. Please keep in mind this is an intro level course, so it needs to be simple for loops, if then statements, do while statements, etc. Here is the code I have so far:
class Program
{
static void Main(string[] args)
{
Console.WriteLine("Enter number of customers: ");
var numCust = Convert.ToInt32(Console.ReadLine());
int lunchCombo = 0;
decimal total = 0;
Console.WriteLine("Enter lunch combo purchased");
lunchCombo = Convert.ToInt32(Console.ReadLine());
for ( int i = 1; i < numCust; i++ )
switch (lunchCombo)
{
case 1:
Console.WriteLine("Enter lunch combo purchased");
lunchCombo = Convert.ToInt32(Console.ReadLine());
total = total + 4.25M;
break;
case 2:
Console.WriteLine("Enter lunch combo purchased");
lunchCombo = Convert.ToInt32(Console.ReadLine());
total = total + 5.75M;
break;
case 3:
Console.WriteLine("Enter lunch combo purchased");
lunchCombo = Convert.ToInt32(Console.ReadLine());
total = total + 5.25M;
break;
case 4:
Console.WriteLine("Enter lunch combo purchased");
lunchCombo = Convert.ToInt32(Console.ReadLine());
total = total + 3.75M;
break;
default:
Console.WriteLine("Invalid input");
break;
}
Console.WriteLine("Your total is {0}", total);
Console.ReadKey();
}
}
*A restaurant has 4 lunch combos for customers to choose:
Combo 1: Fried chicken with slaw [price: 4.25]
Combo 2: roast beef with mashed potato [price: 5.75]
Combo 3: Fish and chips [price:5.25]
Combo 4: soup and salad [price: 3.75]
Write a program to ask which lunch combo the customer orders. Use a switch statement to determine and display the amount of money the customer needs to pay. Display “Invalid input” if the customer ordered something not on the menu. Do not use any if…else statements.
*Expand program 3 to calculate the total amount due from a group of customers. The program first asks for the number of customers in the group. Then it uses a loop to take the orders one by one. If a customer orders something not on the menu, ignore that order and move on to the next customer. Use the number of customer in the group to determine how many times the loop will execute. Do not ask the user to enter a special value such as -1 to stop the loop. Calculate and display the total amount of money the group needs to pay.
A: You need to ask the customer what combo they want outside the switch statement. I'll just use pseudo-code, so I'm not directly doing your homework for you:
var total = 0;
var numCust = "How Many Customers?"
for (int i = 0; i < numCust; i++){
var combo = "What Combo do you want?"
switch (combo){
case 1:
total += 4.25;
break;
case 2:
total += 5.25;
break;
case 4:
total += 5.75;
break;
}
}
write("The total is: " + total);
A: You will need to add the existing ordering logic inside another loop on the number of customers you read at the beginning of your code. Here's the logic but you should write the code. I don't think it would help you learn anything if someone here would write the code for you.
loop (numCust) {
read order number;
loop (lunchCombo) {
add to total;
}
}
A: You need to have one instance of each of the following two lines at the beginning of the for loop and before the switch statement:
Console.WriteLine("Enter lunch combo purchased");
lunchCombo = Convert.ToInt32(Console.ReadLine());
Then you switch on which lunchCombo the user chose.
Plus, your for loop should either loop from i = 0 to i < numCust or from i = 1 to i <= numCust. You're leaving off a customer with the way you have it.
A: It would be easier to start your for loop at i = 0 instead of i = 1. Then inside the for loop, ask for the user's input for the lunch combo outside of the switch statement
for (int i = 0; i < numCust; i++)
{
Console.WriteLine("What is this customer's order?");
lunchCombo = Convert.ToInt32(Console.ReadLine());
switch (lunchCombo)
{
case 1:
total = total + 4.25M;
break;
case 2:
total = total + 5.75M;
break;
case 3:
total = total + 5.25M;
break;
case 4:
total = total + 3.75M;
break;
default:
Console.WriteLine("Invalid input");
break;
}
}
A: Think about the effects of starting at 1 and where you're stopping the counter for i.
That is to say, you're counting from 1 up to n-1, which means you're looping one time fewer than you intended. So, if numCustomers is 4, the loop results in:
i starts at | 0 | 1 | 2 | 3 | 4 |
---------------------------------
iterations  | 4 | 3 | 2 | 1 | 0 |
So I'm not getting through the loop enough times if I start at 1. You'll need to make one of two changes: change i to start at 0, OR change the comparison to be <=. Either will work.
And don't worry, this is a common error!
(Credit to Quantic for the comment answer!)
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/39669077",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: php regex error preg_match(): Unknown modifier '[' I am very new to PHP regex, but I have learned a bit and tried the code below, and I am getting this error: PHP Warning: preg_match(): Unknown modifier '[' in /home/3ZZyLt/prog.php on line 4
Here is the whole code with output: https://ideone.com/fTIyUK
Same code from above link:
<?php
$email = "paulw Paul Walker paulw 2014-12-28 07:18:09 paul@comp.com 2014-12-28 07:18:09 2014-12-28 07:18:09"; // Invalid email address
$regex = "/[a-z] [A_Z] [A-Z] [A-Z] (\d{4})-(\d{2})-(\d{2}) (\d{2}):(\d{2}):(\d{2}) /[-0-9a-zA-Z.+_]+@[-0-9a-zA-Z.+_]+\.[a-zA-Z]{2,4} (\d{4})-(\d{2})-(\d{2}) (\d{2}):(\d{2}):(\d{2}) (\d{4})-(\d{2})-(\d{2}) (\d{2}):(\d{2}):(\d{2})/";
if ( preg_match( $regex, $email ) ) {
echo $email . " is a valid email. We can accept it.";
} else {
echo $email . " is an invalid email. Please try again.";
}
?>
Please tell me what needs to be changed?
Updated:
Input: string(a-z) string(a-zA-Z) string(a-zA-Z) string(a-z) date(yyyy-mm-dd hh:mm:ss) emailid date(yyyy-mm-dd hh:mm:ss) date(yyyy-mm-dd hh:mm:ss)
8 Parameters:
userid (string)
first name(string)
last name(string)
userid(string)
date
email
date
date
A: $regex = "/[a-z] [A_Z] [A-Z] [A-Z] (\d{4})-(\d{2})-(\d{2}) (\d{2}):(\d{2}):(\d{2}) /[-0-9a-zA-Z.+_]+@[-0-9a-zA-Z.+_]+\.[a-zA-Z]{2,4} (\d{4})-(\d{2})-(\d{2}) (\d{2}):(\d{2}):(\d{2}) (\d{4})-(\d{2})-(\d{2}) (\d{2}):(\d{2}):(\d{2})/";
If you look at this regex closely, you have an unescaped / in the middle of the pattern, but you're also using / as the regex delimiter.
Use this regex instead with an alternative regex delimiter:
$regex = "~[a-z]+ [A-Z]+ [A-Z]+ [A-Z]+ (\d{4})-(\d{2})-(\d{2}) (\d{2}):(\d{2}):(\d{2}) [-\w.+]+@[-\w.+]+\.[a-zA-Z]{2,4} (\d{4})-(\d{2})-(\d{2}) (\d{2}):(\d{2}):(\d{2}) (\d{4})-(\d{2})-(\d{2}) (\d{2}):(\d{2}):(\d{2})~i";
RegEx Demo
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/30511756",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-2"
}
|
Q: How to force field type to range in Grocery Crud? I need a range input to get an integer between 100 and 500 in Grocery Crud. Currently it's a normal text input field, and field_type() method doesn't have the range option. Is there a solution to add Bootstrap range instead?
A: For a simple approach, you can use an input of type number in the form:
<input type="number" name="grocery" id="grocery" min="100" max="500" />
Another way is to use JavaScript:
<script>
function integerInRange(value, min, max) {
if(value < min || value > max)
{
document.getElementById("grocery").value = "500";
alert("min = 100 and max = 500");
}
}
</script>
and in your view
<input type="text" class="form-control" name="grocery" id="grocery" onkeyup="integerInRange(this.value, 100, 500)" />
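The clamping logic above can also be written as a small pure function, which is easier to test than the inline handler. This is a hedged sketch; the function name is made up:

```javascript
// Clamp a numeric input into [min, max]; non-numeric input falls back to min.
function clampToRange(value, min, max) {
  const n = Number(value);
  if (Number.isNaN(n)) return min;
  return Math.min(Math.max(n, min), max);
}

console.log(clampToRange("750", 100, 500)); // 500
```

The keyup handler would then just assign the returned value back into the input field.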
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/59682387",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: initializing scope using ng-init is not binding in directive Hi, I am trying to bind a value to a directive using '@' which is defined in the ng-init event, but it doesn't come through to my directive and it returns undefined. When does this property actually resolve: in compile or link? Also, why is it always returning undefined? What am I doing wrong here?
<!DOCTYPE html>
<html ng-app="app">
<head>
<script data-require="angular.js@*" data-semver="1.3.0-beta.5" src="https://code.angularjs.org/1.3.0-beta.5/angular.js"></script>
<script src="script.js"></script>
</head>
<body ng-controller="demoController" ng-init="init()">
<h1>Confused directives!!!</h1>
<my-user userName="{{user.name}}"></my-user>
<br>
{{user.name}}
</body>
</html>
(function(){
'use strict';
var app = angular.module('app', []);
app.controller('demoController', ['$scope', demoController])
function demoController($scope){
console.log('Im in the controller');
$scope.init = function (){
$scope.user = {
name: 'Argo'
}
console.log('Im in init method of controller')
}
}
app.directive('myUser', myUser);
function myUser(){
return {
restrict: 'AE',
replace: true,
scope: {
userName: '@'
},
template: '<div><b>{{userName}} is defined from init method </b></div>',
link: function(scope, ele, attr){
console.log('Im in link method of directive');
console.log('this is loaded' + scope.userName);
attr.$observe('userName', function(newValue){
console.log(newValue);
})
}
}
}
})();
http://plnkr.co/edit/vAKMNub8HRcphiaMMauR?p=preview
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/25861293",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Heroku Error R14 (Memory quota exceeded) running Node app The description gives me a pretty clear idea of what the error depends on. But I am not sure how to fix it. Do I have to pay for Heroku, or can the issue be solved on the free version?
The script is very simple.
I am collecting a lot of 3rd party API's, into one collected feed with pagination added.
So like this
const api = [];

Promise.all(
  endpoints.map((endpoint) =>
    fetch(endpoint)
      .then(res => res.json())
      .then(e => {
        // .... something
        return e;
      })
      .catch(err => console.log(err))
  )
).then(result => api.push(result));

app.get("/api", (req, res) => {
  let { size, page } = req.query;
  page = page * 20;
  size = page + 20;
  res.json(api.slice(page, size));
});
When the server is up and running locally, it works fine every time.
When I deploy it to Heroku, I get Error R14 most of the time. Not all the time! If I refresh the page 4-6 times, it will receive the data.
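As an aside, the pagination arithmetic in the /api handler can be checked in isolation with a small pure function. This is a hedged sketch with made-up names, mimicking the slice-based logic of the handler:

```javascript
// Mimics the slice-based pagination used by the /api route:
// page N returns items [N*size, N*size + size).
function paginate(items, page, size = 20) {
  const start = Number(page) * size;
  return items.slice(start, start + size);
}

const items = Array.from({ length: 50 }, (_, i) => i);
console.log(paginate(items, 1)[0], paginate(items, 1).length); // 20 20
console.log(paginate(items, 2).length); // 10 (only 50 items total)
```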
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/64941135",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: Extract part of sql query (numbers) using regex I need to extract part of an SQL query (saved in a file): the numbers in the IN section. The SQL query also has other numbers, but not in the IN section. What pattern should I use?
The pattern '(\d+)' extracts more numbers than I need (from other parts of the SQL query).
" IN ('7',
'9',
'11',
'13',
'14',
'24')"
A: You can use
(?:\G(?!^)',\s*'|\bIN\s*\(')\K\d+
(?<=\bIN\s*\([^()]*)\d+
See regex demo #1 and regex demo #2.
Regex #1 (compliant with Boost, PCRE, Onigmo regex libraries):
*
*(?:\G(?!^)',\s*'|\bIN\s*\(') - the end of the previous match followed by ',, then zero or more whitespaces and a '; or the whole word IN followed by the (' substring
*\K - match reset operator that discards the currently matched text
*\d+ - one or more digits
Regex #2 (compliant with JavaScript ECMAScript 2018+, .NET, PyPi regex):
*
*(?<=\bIN\s*\([^()]*) - a location that is immediately preceded with
*
*\bIN - whole word IN
*\s* - zero or more whitespaces
*\( - a ( char
*[^()]* - zero or more chars other than ( and )
*\d+ - one or more digits.
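If your regex engine lacks \K or variable-length lookbehind (for example, Python's built-in re module), the same result can be had in two steps: first capture the IN (...) fragment, then extract the digits from that fragment alone. A sketch with a made-up query:

```python
import re

# Hypothetical query text: the numbers inside IN (...) span several lines,
# and 42 elsewhere in the query must NOT be picked up.
sql = """SELECT id FROM t
WHERE col IN ('7',
 '9',
 '11',
 '24') AND other = 42"""

# Step 1: isolate the parenthesised list after IN;
# step 2: pull the digits out of just that fragment.
m = re.search(r"\bIN\s*\(([^()]*)\)", sql, re.IGNORECASE)
numbers = re.findall(r"\d+", m.group(1)) if m else []
print(numbers)  # ['7', '9', '11', '24']
```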
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/65306542",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: How to collect a Verbose stream from a parent-child powershell script from .NET? I have: C#.NET using the PowerShell() object, calling one powershell script, which calls another.
C#->Powershell(parent)->Powershell(child). [capture verbose: parent works, child does not].
My code CORRECTLY grabs/streams Verbose from the first (parent) powershell script. However, I have not been able to get the Verbose stream from the (child) powershell script. Note: if I call hello1.ps directly, it works (because it's not the child).
(I've read numerous articles, searches, and other Stack Overflow answers. Mostly they talk about how to capture it once, or recommend $VerbosePreference.)
Solutions? Thanks!
public void Run()
{
var ps = PowerShell.Create();
ps.AddScript("`$verbosepreference='continue'"); // <-- tried this, no effect.
ps.AddScript("set-location \"" + dir + "\"");
ps.AddScript(System.IO.File.ReadAllText("./hello-caller.ps1"));
ps.AddParameter("name", "A");
ps.AddParameter("where", "B");
var output = new PSDataCollection<PSObject>();
output.DataAdded += Verbose_DataAdded;
var results = ps.BeginInvoke<PSObject, PSObject>(null, output, null, AsyncInvoke, null);
}
void Verbose_DataAdded(object sender, DataAddedEventArgs e)
{
var source = ((PSDataCollection<PSObject>) sender);
var msg = source[e.Index].ToString();
Console.WriteLine(msg);
}
--- hello-caller.ps1 ---
Param(
[string]$name,
[string]$where
)
./hello.ps1 $name $where
--- hello.ps1 ---
Param(
[string]$name,
[string]$where
)
$i = 0
while ($i -le 30) {
Write-Verbose "Hello " $name " from " $where
Start-Sleep -s 1
$i++
}
return true
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/20695102",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Android Studio 4.1.1 won't launch on MacOS Big Sur I had an outdated Android Studio app and after updating to the latest version the app won't launch.
Clicking to open the app shows the launch screen, which then says "No Android SDK found". Pressing next and continuing to update shows "Nothing to do, SDK is up to date", and after that - nothing.
Tried:
*
*completely remove Android Studio and all its files using terminal commands
*uninstall and reinstall Android Studio from scratch
No window, nothing. Only the Android studio button on the status bar and that's it. I can't even close the app without a force quit.
What shows on the status bar after the finish:
The only message on launch (after clicking next shows "nothing to do, sdk is up to date"):
After that, just nothing, no window, seeing my Desktop and that's all.
A: I had the same problem but solved it by deleting configuration files from previous Android Studio installations as described here
(Section: Studio doesn't start after upgrade).
A: I did a fresh install of Big Sur yesterday on my Catalina machine. Today I updated my Android Studio 3.5.3 installation there to latest Android 4.1.1 and had no issues with running and building with Android Studio 4.1.1 for my Android native and React Native apps.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/64843168",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: Rails: Clarifying purpose of precompile:assets I'm trying to understand what exactly precompiling:assets does, because I've realized that for my last project my CSS would never update when I pushed my app to heroku unless I typed bundle exec rake assets:precompile, but this only started happening towards the end, so I believe I probably added something to the config file.
I'm currently trying to understand caching, which made me think about precompile:assets. Is precompile:assets similar to caching by pre-loading the assets to the web server so that those assets aren't loaded directly from the Rails stack? This is for performance purposes right?
A: You can find everything you need to know in the Asset Pipeline Rails Guide.
A: Caching is a related, but separate topic.
The purpose of compiling assets includes the combining and minimizing of assets, e.g. javascript that is all on 1 line with 1 letter variables, as opposed to the originals which are used in development mode and let you debug there using the original source code.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/13435983",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Workbook.save - "the following features cannot be saved in macro-free workbooks..." My sub opens up a .xlsm file in a new Excel instance, makes changes to the file, and saves the changes.
However, despite the fact that the file being opened is .xlsm, upon workbook.save I get the alert message "the following features cannot be saved in macro-free workbooks..."
This doesn't make sense to me because the file is .xlsm. Any ideas?
Edit: I believe I found the source of the problem. Although the variable Path contains "C:\file.xlsm", wkb.Path is empty (see the Debug.Print calls below). Why is wkb.Path empty?
Set XL = New Excel.Application
XL.Visible = False
For Each Path In XLSMPaths
Set wkb = XL.Workbooks.Add(Path)
Debug.Print Path ' "C:\file.xlsm"
Debug.Print wkb.path ' ""
Debug.print wkb.name ' "file"
wkb.Save '<- alert message "The following features cannot be saved in macro-free workbooks..."
Next Path
A: By default, the .Save method will save the file as .xls or .xlsx (dependent on your version of Excel), so you need to force the macro-enabled format with .SaveAs:
wkb.SaveAs Filename:=MyNewPathFilename, FileFormat:=xlOpenXMLWorkbookMacroEnabled
Obviously, change the variable 'MyNewPathFilename' to whatever you want to save the file as; you probably want to take Path, check that it ends in .xlsm, and then pass it into this variable.
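As a side note on the edit in the question: the empty wkb.Path is the telltale clue. Workbooks.Add(Path) uses the file only as a template for a brand-new, never-saved workbook, which is why saving it prompts for a format. Opening the file instead avoids the problem entirely (a sketch, untested):

```vba
For Each Path In XLSMPaths
    ' Open the existing file rather than creating a new workbook from it as a template
    Set wkb = XL.Workbooks.Open(Path)
    Debug.Print wkb.Path   ' now non-empty
    wkb.Save               ' saves in the file's own .xlsm format
Next Path
```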
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/31602750",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: Play Framework Return Partial View I am using Play Framework and, using AJAX, would like to return a partial view to the calling script to render. I come from the world of ASP.NET MVC so this is a very simple concept there, but I don't see a place for it in Play.
An example of what I would like to do:
Main.html
<html>
<head><title>Test</title></head>
<body>
<h1>Here's my list</h1>
<input type="text" id="new-entry" /><button id="add-new-entry">Add</button>
<ul id="item-list">
#{list items, as:'item'}
<li>#{anitemtemplate item}</li>
#{/list}
</ul>
<script>
$(function() {
$("#add-new-entry").click(function() {
var action = #{jsAction @add(':name') /};
var title = $("#new-entry").val();
$.post(action(title), null, function(data) {
var newData = $(document.createElement("li")).html(data);
$("#item-list").append(newData);
});
});
});
</script>
</body>
</html>
anitemtemplate.html
${item.title} <em>by ${item.author}</em>
Me.java
public static void add(String title) {
//add the item
return render("anitemtemplate", newitem); //how to do this??
}
A: It should be:
public static void add(String title) {
//add the item
render("anitemtemplate.html", item);
}
or
public static void add(String title) {
//add the item
renderArgs.put("item", newitem);
render("anitemtemplate.html");
}
And if anitemtemplate.html is in /app/views/tags/, as the tag usage in your main.html supposes, you should use render("tags/anitemtemplate.html"); the first argument of render is the path of the template relative to the /app/views/ directory.
And BTW,
this
#{list items, as:'item'}
<li>#{anitemtemplate item}</li>
#{/list}
should be
#{list items, as:'item'}
<li>#{anitemtemplate item /}</li>
#{/list}
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/9519445",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: How many timers can I create in C on Windows? I am working with a table (struct) that stores entries of received messages. I need to monitor a specific operation for each entry periodically, which is why I thought of creating a timer for each created entry to monitor how long this entry has been in the table, and based on the duration, I need to execute some commands and restart the timer.
I am new to timers and callbacks in C, but I know that each timer leads to a new thread. My question is: how many timers can I create on a standard board?
I know a different method would be to add a new member for the timer operation and iterate over the table for this to work in non-critical events with one timer, but I am trying to avoid this as the table entries can affect the whole program. If it is not possible to create many timers (+1000), what would be the other ideal way of doing this?
A: If the entries in the table should be "expired" (and removed) at certain times, then here is a possible solution, using only a single timer:
First of all use a priority queue where the "priority" is the expiration time. You can of course use any other table-like structure, but keep it sorted on the expiration time (it simplify things later).
Then have your single recurring timer, which is triggered ten, twenty or maybe more times per second.
When the timer is triggered, you get the current time, and simply remove all elements in the table whose expiration time have passed (comparing its expiration time with the current time). If the table is sorted on expiration date, it's a very simple loop where you compare the time from the "top" element, and if expired then "pop" the top element.
You can use the same or similar techniques for just about any timing-related issue.
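The single-timer scheme above is small enough to sketch directly. The following is an illustrative sketch (the entry fields, the fixed capacity, and a flat sorted array in place of a real heap are all simplifications):

```c
#include <string.h>
#include <time.h>

typedef struct { time_t expires_at; int id; } Entry;      /* assumed fields */
typedef struct { Entry items[1024]; size_t count; } Table;

/* Insert while keeping the array sorted ascending on expires_at. */
void table_insert(Table *t, Entry e) {
    size_t i = t->count++;
    while (i > 0 && t->items[i - 1].expires_at > e.expires_at) {
        t->items[i] = t->items[i - 1];
        i--;
    }
    t->items[i] = e;
}

/* Called from the single recurring timer: pop every entry whose time has passed. */
size_t table_expire(Table *t, time_t now) {
    size_t removed = 0;
    while (t->count > 0 && t->items[0].expires_at <= now) {
        /* run the per-entry commands for t->items[0] here, then drop it */
        memmove(&t->items[0], &t->items[1], (t->count - 1) * sizeof(Entry));
        t->count--;
        removed++;
    }
    return removed;
}
```

Because the array stays sorted, the timer callback only ever inspects the front element, so a tick is cheap when nothing has expired.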
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/51061992",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-2"
}
|
Q: Android mixed language text - BidiFormatter on String with RTL and LTR text I have a ListView with custom View where I have a TextView :
<TextView
android:id="@+id/textViewItemTitle"
android:layout_width="0dp"
android:layout_weight="1"
android:layout_height="wrap_content"
android:gravity="right|center_horizontal"
android:text="title" />
This TextView contains Hebrew text.
BidiFormatter bidi = BidiFormatter.getInstance();
if(!bidi.isRtl(event)){
event = bidi.unicodeWrap(event);
}
holder.title.setText(String.format("%s %s %s", bidi.unicodeWrap(item.getStartTimeNoDate().trim()), event,
bidi.unicodeWrap(item.getDuration().trim())));
Where the first argument is time hh:mm:ss, second (event) is a Hebrew String and third like the first.
The problem: sometimes the event String contains mixed text in Hebrew and English, like abc-אבג, and then all the text behaves as if the gravity were left (and not right, as I defined in the TextView), i.e. it is indented to the left.
How to solve that?
A: The accepted answer will do the job when the text is in a TextView. This is a more general answer, applicable both to the basic/happy scenario and to other, more complicated use cases.
There are situations when mixed-language text is to be used someplace other than inside a TextView. For instance, the text may be passed in a share Intent to Gmail or WhatsApp and so on. In such cases, you must use a combination of the following classes:
*
*BidiFormatter
*TextDirectionHeuristics
As quoted in the documentation, these are ...
Utility class[es] for formatting text for display in a potentially opposite-directionality context without garbling. The directionality of the context is set at formatter creation and the directionality of the text can be either estimated or passed in when known.
So for example, say you have a String that has a combination of English & Arabic, and you need the text to be
*
*right-to-left (RTL).
*always right-aligned, even if the sentence begins with English.
*English & Arabic words in the correct sequence and without garbling.
then you could achieve this using the unicodeWrap() method as follows:
String mixedLanguageText = ... // mixed-language text
if(BidiFormatter.getInstance().isRtlContext()){
Locale rtlLocale = ... // RTL locale
mixedLanguageText = BidiFormatter.getInstance(rtlLocale).unicodeWrap(mixedLanguageText, TextDirectionHeuristics.ANYRTL_LTR);
}
This would treat the string as RTL and align it to the right if even one RTL-language character was in the string, and fall back to LTR otherwise. If you want the string to be RTL even if it is completely in, say, English (an LTR language), then you could use TextDirectionHeuristics.RTL instead of TextDirectionHeuristics.ANYRTL_LTR.
This is the proper way of handling mixed-direction text in the absence of a TextView. Interestingly, as the documentation states,
Also notice that these direction heuristics correspond to the same types of constants provided in the View class for setTextDirection(), such as TEXT_DIRECTION_RTL.
Update:
I just found the Bidi class in Java which seems to do something similar. Look it up!
Further references:
1. Write text file mix between arabic and english.
2. Unicode Bidirectional Algorithm.
A: Try adding to your TextView:
android:textDirection="anyRtl"
For more reading:
http://developer.android.com/reference/android/view/View.html#attr_android:textDirection
A: I had the same issue and I am targeting API 16
My solution was very simple: I added "\u200F" to the beginning of the String
String mixedLanguageText = ... // mixed-language text
newText = "\u200F" + mixedLanguageText;
"\u200F" = Unicode Character 'RIGHT-TO-LEFT MARK' (U+200F)
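To see why the RLM prefix works without any Android machinery, here is a minimal framework-free sketch (the class and method names are mine):

```java
// The Unicode bidi algorithm takes the paragraph direction from the first
// strong directional character; U+200F is an invisible strong RTL character.
public class BidiPrefix {
    public static final char RLM = '\u200F'; // RIGHT-TO-LEFT MARK

    public static String forceRtl(String mixed) {
        return RLM + mixed;
    }
}
```

Because the prefixed mark is the first strong character, the whole paragraph is laid out right-to-left even when the visible text starts with Latin letters.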
A: The following code snippet demonstrates how to use unicodeWrap():
String mySuggestion = "15 Bay Street, Laurel, CA";
BidiFormatter myBidiFormatter = BidiFormatter.getInstance();
// The "did_you_mean" localized string resource includes
// a "%s" placeholder for the suggestion.
String.format(getString(R.string.did_you_mean),
myBidiFormatter.unicodeWrap(mySuggestion));
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/20473657",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10"
}
|
Q: bootstrap 3 navbar active li background I have a Bootstrap navbar in this fiddle, and its dropdown is made visible on mouse hover like this:
$('.navbar .dropdown').hover(function () {
$(this).find('.dropdown-menu').first().stop(true, true).slideDown(300);
}, function () {
$(this).find('.dropdown-menu').first().stop(true, true).slideUp(180)
});
But how do I keep the parent li background active while the mouse moves down into the dropdown menu, similar to the version without sliding?
A: You can simply add the open class on hover and remove it on mouse leave.
See below example,
$(document).ready(function() {
$('.navbar .dropdown').hover(function() {
$(this).addClass('open');
},
function() {
$(this).removeClass('open');
});
});
<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
<script src="https://getbootstrap.com/dist/js/bootstrap.min.js"></script>
<link href="https://getbootstrap.com/dist/css/bootstrap.min.css" rel="stylesheet"/>
<nav class="navbar navbar-default">
<div class="container-fluid">
<!-- Brand and toggle get grouped for better mobile display -->
<div class="navbar-header">
<button type="button" class="navbar-toggle collapsed" data-toggle="collapse" data-target="#bs-example-navbar-collapse-1" aria-expanded="false">
<span class="sr-only">Toggle navigation</span>
<span class="icon-bar"></span>
<span class="icon-bar"></span>
<span class="icon-bar"></span>
</button>
<a class="navbar-brand" href="#">Brand</a>
</div>
<!-- Collect the nav links, forms, and other content for toggling -->
<div class="collapse navbar-collapse" id="bs-example-navbar-collapse-1">
<ul class="nav navbar-nav">
<li class="dropdown">
<a href="#" class="dropdown-toggle" data-toggle="dropdown" role="button" aria-haspopup="true" aria-expanded="false">Dropdown <span class="caret"></span></a>
<ul class="dropdown-menu">
<li><a href="#">Action</a></li>
<li><a href="#">Another action</a></li>
<li><a href="#">Something else here</a></li>
<li role="separator" class="divider"></li>
<li><a href="#">Separated link</a></li>
<li role="separator" class="divider"></li>
<li><a href="#">One more separated link</a></li>
</ul>
</li>
</ul>
</div>
<!-- /.navbar-collapse -->
</div>
<!-- /.container-fluid -->
</nav>
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/37583836",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Handling STX-ETX frame with Netty I have to implement the following TCP/IP protocol with Netty:
Message structure:
The messages are embedded in an STX-ETX frame:
STX MESSAGE ETX
0x02 7b20224d... 0x03
Escaping of STX and ETX within the message is not necessary, since it is in JSON format.
The escape sequences look as follows:
JSON.stringify({"a": "\x02\x03\x10"}) → "{\"a\":\"\u0002\u0003\u0010\"}"
Here is more info about STX, ETX control codes.
The length of the message can vary, and it will be in JSON format, something like:
\0x02{"messageID": "Heartbeat"}\0x03
My idea was to combine a custom frame delimiter with StringEncoder/StringDecoder.
For the custom frame delimiter, use 0x03 as the delimiter and skip the first byte (0x02).
So I created the following FrameDelimiterDecoder:
@Slf4j
public class FrameDelimiterDecoder extends DelimiterBasedFrameDecoder {
public FrameDelimiterDecoder(int maxFrameLength, ByteBuf delimiter) {
super(maxFrameLength, delimiter);
}
@Override
protected Object decode(ChannelHandlerContext ctx, ByteBuf buffer) throws Exception {
ByteBuf buffFrame = null;
Object frame = super.decode(ctx, buffer);
if (frame instanceof ByteBuf) {
buffFrame = (ByteBuf) frame;
} else {
log.info("frame: {}", frame);
}
if (buffFrame != null) {
buffFrame.writeBytes(buffer.skipBytes(1));
} else {
log.warn("buffer is <null>");
}
return buffFrame;
}
@Override
public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
log.error(cause.getMessage(), cause);
}
}
And use it for initialisation:
@Slf4j
@Component
@RequiredArgsConstructor
public class QrReaderChannelInitializer extends ChannelInitializer<SocketChannel> {
private final StringEncoder stringEncoder = new StringEncoder();
private final StringDecoder stringDecoder = new StringDecoder();
private final QrReaderProcessingHandler readerServerHandler;
private final NettyProperties nettyProperties;
@Override
protected void initChannel(SocketChannel socketChannel) {
ChannelPipeline pipeline = socketChannel.pipeline();
pipeline.addLast(new FrameDelimiterDecoder(1024 * 1024, Unpooled.wrappedBuffer(FrameConstant.ETX)));
if (nettyProperties.isEnableTimeout()) {
pipeline.addLast(new ReadTimeoutHandler(nettyProperties.getClientTimeout()));
}
pipeline.addLast(stringDecoder);
pipeline.addLast(stringEncoder);
pipeline.addLast(readerServerHandler);
}
}
However, it always fails with:
c.s.netty.init.FrameDelimiterDecoder : java.lang.IndexOutOfBoundsException: readerIndex(28) + length(1) exceeds writerIndex(28): PooledUnsafeDirectByteBuf(ridx: 28, widx: 28, cap: 1024)
io.netty.handler.codec.DecoderException: java.lang.IndexOutOfBoundsException: readerIndex(28) + length(1) exceeds writerIndex(28): PooledUnsafeDirectByteBuf(ridx: 28, widx: 28, cap: 1024)
at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:471)
I could not understand what is missing there.
How do I process an STX-ETX frame for request/response with Netty?
A: After some trial and error, I finally solved this issue.
I rethought the FrameDelimiterDecoder code and found a way to do it with an array of bytes, converting to a ByteBuf at the end. I believe it could also be done on the buffer directly, or by using ByteBuffer from the NIO package and then converting.
The simplest for me was:
@Slf4j
public class FrameDelimiterDecoder extends DelimiterBasedFrameDecoder {
public FrameDelimiterDecoder(int maxFrameLength, ByteBuf delimiter) {
super(maxFrameLength, delimiter);
}
@Override
protected Object decode(ChannelHandlerContext ctx, ByteBuf buffer) {
boolean inMessage = false;
int size = buffer.readableBytes();
ByteBuffer byteBuffer = ByteBuffer.allocate(size);
buffer.readBytes(byteBuffer);
byte[] byteArray = new byte[size - 2];
byte[] data = byteBuffer.array();
int index = 0;
for (byte b : data) {
if (b == FrameConstant.START_OF_TEXT) {
if (!inMessage) {
inMessage = true;
} else {
log.warn("Unexpected STX received!");
}
} else if (b == FrameConstant.END_OF_TEXT) {
if (inMessage) {
inMessage = false;
} else {
log.warn("Unexpected ETX received!");
}
} else {
if (inMessage) {
byteArray[index] = b;
index += 1;
}
}
}
return Unpooled.wrappedBuffer(byteArray);
}
@Override
public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
if (cause instanceof InterruptedException) {
log.warn("interrupted exception occurred");
Thread.currentThread().interrupt();
} else {
log.error("FrameDelimiterEncoder exception occurred:", cause);
}
}
}
Where FrameConstant looks like:
@UtilityClass
public class FrameConstant {
public final int START_OF_TEXT = 0x02;
public final int END_OF_TEXT = 0x03;
public final int MAX_FRAME_LENGTH = 1024 * 1024;
}
Then initialize it:
@Slf4j
@Component
@RequiredArgsConstructor
public class QrReaderChannelInitializer extends ChannelInitializer<SocketChannel> {
private final StringEncoder stringEncoder = new StringEncoder();
private final StringDecoder stringDecoder = new StringDecoder();
private final QrReaderProcessingHandler readerServerHandler;
private final NettyProperties nettyProperties;
@Override
protected void initChannel(SocketChannel socketChannel) {
ChannelPipeline pipeline = socketChannel.pipeline();
// Add the delimiter first
pipeline.addLast(getDelimiterDecoder());
if (nettyProperties.isEnableTimeout()) {
pipeline.addLast(new ReadTimeoutHandler(nettyProperties.getClientTimeout()));
}
pipeline.addLast(stringDecoder);
pipeline.addLast(stringEncoder);
pipeline.addLast(readerServerHandler);
}
private FrameDelimiterDecoder getDelimiterDecoder() {
ByteBuf delimiter = Unpooled.wrappedBuffer(new byte[]{FrameConstant.END_OF_TEXT});
return new FrameDelimiterDecoder(FrameConstant.MAX_FRAME_LENGTH, delimiter);
}
}
And some modification for handler:
@Slf4j
@Component
@RequiredArgsConstructor
@ChannelHandler.Sharable
public class QrReaderProcessingHandler extends ChannelInboundHandlerAdapter {
private final PermissionService permissionService;
private final EntranceService entranceService;
private final Gson gson;
@Override
public void channelRead(ChannelHandlerContext ctx, Object msg) {
String remoteAddress = ctx.channel().remoteAddress().toString();
String stringMsg = (String) msg;
if (log.isDebugEnabled()) {
log.debug("CLIENT_IP: {}", remoteAddress);
log.debug("CLIENT_REQUEST: {}", stringMsg);
}
if (HEARTBEAT.containsName(stringMsg)) {
HeartbeatResponse heartbeatResponse = buildHeartbeatResponse();
sendResponse(ctx, heartbeatResponse);
}
}
private <T> void sendResponse(ChannelHandlerContext ctx, T response) {
ctx.writeAndFlush(formatResponse(response));
}
private <T> String formatResponse(T response) {
String realResponse = String.format("%s%s%s",
(char) FrameConstant.START_OF_TEXT,
gson.toJson(response),
(char) FrameConstant.END_OF_TEXT);
log.debug("response: {}", realResponse);
return realResponse;
}
And finally, it sends a correctly formed response back.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/64090053",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Deploy small change to google app engine I want to deploy a change in my app.yaml file to google app engine. Is there a simple way to do this without redeploying the whole app? Is there a way of changing the app.yaml file on the google cloud directly? Or just deploying one file from my Windows directory?
My app is working fine in the virtual environment but I'm having some issues on the google cloud platform. The whole deploying process takes quite a while so I'm looking for a faster way to make a change and test it.
A: You can use appcfg.py update app.yaml from AppEngine Python SDK:
https://cloud.google.com/appengine/docs/standard/python/tools/appcfg-arguments#update
Use the files argument to upload one or more YAML files that define
modules. No other types of YAML files can appear in the command line.
Only the specified modules will be updated.
A: You can try using gcloud app deploy inside the directory where your application is located in order to upload the file you need.
Specifying no files with the command deploys only the app.yaml file of a given service.
This command will only upload to the cloud the files where there are changes, so if you have only modified the app.yaml file, it should not take too much time for the upload. However, as that is the configuration file of your application, it might need to be re-deployed completely, as the changes made in that file might affect the behaviour of the whole app. That is the reason why it might be taking longer than expected.
On the other side, you may want to know that if you are using App Engine Flexible environment, the deployment will always be slower than in a Standard environment, as resources have to be deployed before launching the application itself.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/47679360",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: How can a SQLAlchemy query use MySQL's REGEXP operator? How can this SQL query:
SELECT * from table where field REGEXP 'apple|banana|pear';
be written using SQLAlchemy?
My base query looks like this:
query = session.query(TableFruitsDTO)
A: The SQLAlchemy docs describe how to use the MySQL REGEXP operator. Since there is no built-in function, use .op() to create the function:
query = session.query(TableFruitsDTO).filter(
TableFruitsDTO.field.op('regexp')(r'apple|banana|pear')
)
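A runnable sketch of the .op('regexp') approach (SQLAlchemy 1.4+ declarative style; the model below is a stand-in for TableFruitsDTO):

```python
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class TableFruitsDTO(Base):  # stand-in model for illustration
    __tablename__ = "cbfruits"
    id = Column(Integer, primary_key=True)
    field = Column(String)

engine = create_engine("sqlite://")  # any engine works for inspecting the SQL
query = Session(engine).query(TableFruitsDTO).filter(
    TableFruitsDTO.field.op("regexp")(r"apple|banana|pear")
)
print(str(query))  # the WHERE clause renders as: cbfruits.field regexp ?
```

On SQLAlchemy 1.4 and later there is also a built-in regexp_match() column operator that renders the dialect-appropriate construct (REGEXP on MySQL).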
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/27228635",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: toggle menu with javascript One of the themes I am using on WordPress uses a toggle menu if the parent menu has a child menu under it.
This is the current js code
var navItemDropdown = $('#nav li .dropdown');
navItemDropdown.each(function(){
thisDropdown = $(this);
$(this).parent().prepend('<span class="sub-nav-toggle"></span>');
});
$('body').on('click','.sub-nav-toggle',function(event){
$(this).parents('li').toggleClass('active');
thisDropdown = $(this).parents('li').find('.dropdown');
thisDropdown.slideToggle('fast');
return false;
});
It adds <span class="sub-nav-toggle"></span> if there's a child menu.
The markup for a menu item that has a child menu then becomes like this:
<li class="menu-item menu-item-type-custom menu-item-object-custom menu-item-has-children menu-item-22"><span class="sub-nav-toggle"></span><a href="#">About Us</a>
<section class="dropdown"><ul>
<li class="menu-item menu-item-type-post_type menu-item-object-page menu-item-26"><a href="page_id=6">Who we are</a></li>
<li class="menu-item menu-item-type-post_type menu-item-object-page menu-item-25"><a href="page_id=7">Our Vision</a></li>
</ul></section>
</li>
What I would like to achieve is instead of injecting the <span class="sub-nav-toggle"></span>, I would like to insert class="sub-nav-toggle" inside the a href tag. Which means <a> tag will become like this
<a class="sub-nav-toggle" href="#">About Us</a>
There's a way to add a class to the menu through the WordPress Menu screen, but it only adds the class to the <li> tag instead of the <a> tag, so that approach doesn't work.
Please help me out. Thank you.
A: I think you need to change this:
$(this).parent().prepend('<span class="sub-nav-toggle"></span>');
to this:
$(this).prev().addClass('sub-nav-toggle');
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/19783538",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: how can we make ListHeaderComponent as sticky header in SectionList? Description
I am using a horizontal tab bar inside ListHeaderComponent and I want to make ListHeaderComponent a sticky header in SectionList. We can make a sticky header in FlatList with stickyHeaderIndices={[0]}, but it does not work in SectionList.
React Native version:
React Native Environment Info:
System:
OS: macOS 10.14.6
CPU: (12) x64 Intel(R) Core(TM) i7-8850H CPU @ 2.60GHz
Memory: 38.35 MB / 16.00 GB
Shell: 5.3 - /bin/zsh
Binaries:
Node: 12.15.0 - /usr/local/bin/node
Yarn: 1.21.1 - /usr/local/bin/yarn
npm: 6.13.4 - /usr/local/bin/npm
Watchman: 4.9.0 - /usr/local/bin/watchman
SDKs:
iOS SDK:
Platforms: iOS 12.4, macOS 10.14, tvOS 12.4, watchOS 5.3
Android SDK:
API Levels: 28, 29
Build Tools: 28.0.3, 29.0.1, 29.0.2
System Images: android-26 | Google Play Intel x86 Atom, android-27 | Google APIs Intel x86 Atom
IDEs:
Android Studio: 3.5 AI-191.8026.42.35.5791312
Xcode: 10.3/10G8 - /usr/bin/xcodebuild
npmPackages:
react: 16.8.6 => 16.8.6
react-native: 0.59.8 => 0.59.8
npmGlobalPackages:
rename-horizon: 1.1.0
react-native-cli: 2.0.1
Steps To Reproduce
Provide a detailed list of steps that reproduce the issue.
*
*make section list and add ListHeaderComponent props
*add stickyHeaderIndices={[0]} in section list
it does not make ListHeaderComponent a sticky header
Expected Results
It should make ListHeaderComponent a sticky header, or there should be an alternative way to make it sticky.
A: Use something like this.
<View style={{ flex: 1 }}>
{renderStickyListHeaderComponent()}
<SectionList
{...sectionListprops}
ref={sectionListRef}
sections={sectionData}
stickySectionHeadersEnabled={false}
/>
</View>
A: To make the top or any other header sticky for section list, provide a custom scroll component:
const CustomScrollComponent = (props) => <ScrollView {...props} stickyHeaderHiddenOnScroll={true} stickyHeaderIndices={[0]} />
then in your section list:
<SectionList
// other props
renderScrollComponent={CustomScrollComponent}
/>
A: you can use renderSectionHeader with stickySectionHeadersEnabled set to true
render() {
return (
<SectionList
sections={sections}
renderSectionHeader={({section: {title}}) => (<YourHeader />)}
stickySectionHeadersEnabled
/>
)
}
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/63215635",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: (HTML5) Can't load my image For some reason, despite using the right syntax (to my knowledge) and having the image file in the same folder as my HTML document, when I load the document, the image I programmed to load does not appear on the page. What could be the issue here?
Here, my HTML file is clearly in the same folder as my image:
https://gyazo.com/ac35ed711716d9e8c5b34123f80d71d1
And here is my code (this is for q3.html)
<!DOCTYPE html>
<html>
<head>
<title>Question Three</title>
</head>
<body>
<p>
<h1>Dominos Pizza order form</h1>
<img src=“dominos.png” alt=“Dominos logo” width=“100” height=“50”>
</p>
</body>
</html>
I am running this on a Macbook Pro, using TextEdit as my editing tool. Here is what my page looks like after I open the document:
https://gyazo.com/0a527e90897d082b1f722c3293ad34d0
A: Notice the “ characters in your <img>. They are curly (typographic) quotes, not the straight quotes HTML expects, so replace “ and ” with ".
Thus the new HTML would be as follows:
<!DOCTYPE html>
<html>
<head>
<title>Question Three</title>
</head>
<body>
<p>
<h1>Dominos Pizza order form</h1>
<img src="dominos.png" alt="Dominos logo" width="100" height="50">
</p>
</body>
</html>
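Incidentally, TextEdit's smart-quote substitution is the usual source of these characters. A quick sketch (the helper name is mine) for straightening pasted quotes:

```python
# Map typographic quotes to the straight ASCII quotes HTML expects.
SMART_QUOTES = {"\u201c": '"', "\u201d": '"', "\u2018": "'", "\u2019": "'"}

def straighten(text: str) -> str:
    for smart, plain in SMART_QUOTES.items():
        text = text.replace(smart, plain)
    return text

print(straighten("<img src=\u201cdominos.png\u201d>"))  # <img src="dominos.png">
```

You can also disable the substitution at the source in TextEdit (Edit > Substitutions > Smart Quotes), or use a code editor instead.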
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/34953390",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: First selected item value from checkbox collection My html code is as below.
<table id="rooms">
<tbody>
<%
var rooms = Model.AvailableRooms.ToList();
%>
<tr>
<%
foreach (var r in rooms)
{
%><td style="text-align: center">
<input type="checkbox"
value="<%: r.Id %>-<%: r.Name %>-<%: Model.Provider.Key %>" />
</td>
<td style="text-align: center">
<%: r.Name %>
</td>
<%
}
%>
</tr>
</tbody>
</table>
From that, I need to get the first selected item's value. How could I do that?
A: Use jQuery selectors to get it:
$("input[type='checkbox']:checked").first().val()
or
$("input[type='checkbox']:checked:first").val()
JSFIDDLE
A: $('input[type=checkbox]:checked').first()
A: $('#rooms input:checked:first').val()
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/18191891",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-1"
}
|
Q: Put json data from jquery into Thymeleaf I am using the Google Books API for ISBN lookups and I am able to get data. Now I want to attach that data to a form, but I am not able to do that.
This is my jQuery code:
$(document).ready(function(){
$('#submitCode').click(function(){
var x;
var isbn = $('#isbn').val();
var xmlhttp = new XMLHttpRequest();
var url = "https://www.googleapis.com/books/v1/volumes?q=isbn:" + isbn;
xmlhttp.onreadystatechange = function() {
/*<![CDATA[*/
if (xmlhttp.readyState == 4 && xmlhttp.status == 200) {
x = JSON.parse(xmlhttp.responseText);
callback(x);
}/*]]>*/
};
xmlhttp.open("GET", url, true);
xmlhttp.send();
function callback(x) {
alert(JSON.stringify(x));
console.log(x);
};
});
});
My HTML code
<div class="input-field col s6 m6">
<input id="author" name="author" th:field="*{author}" type="text" class="validate" />
<label for="author">Author</label>
</div>
<div class="input-field col s6 m6">
<input id="title" name="title" th:field="*{title}" type="text" class="validate" />
<label for="title">Title</label>
</div>
How should I put the author and title data into the form in Thymeleaf?
A: If you've been able to retrieve the data already, you should only need to update the DOM using $('#id').val(value). I did a bit of digging, and it looks like your API returns the title and authors like this, hence the use of json.items[0].volumeInfo in the new callback code.
{
"items": [{
"volumeInfo": {
"title": "Example Title",
"subtitle": "Example Subtitle",
"authors": [ "Example Author" ]
}
}]
}
$(document).ready(function() {
$('#submitCode').click(function() {
var x;
var isbn = $('#isbn').val();
var xmlhttp = new XMLHttpRequest();
var url = "https://www.googleapis.com/books/v1/volumes?q=isbn:" + isbn;
xmlhttp.onreadystatechange = function() {
/*<![CDATA[*/
if (xmlhttp.readyState == 4 && xmlhttp.status == 200) {
x = JSON.parse(xmlhttp.responseText);
callback(x);
} /*]]>*/
};
xmlhttp.open("GET", url, true);
xmlhttp.send();
function callback (json) {
var book = json.items[0].volumeInfo
$('#author').val(book.authors.join('; '))
$('#title').val(book.title)
};
});
});
<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
<label><input id="isbn" type="text" value="1904764827"/>ISBN</label>
<div class="input-field col s6 m6">
<input id="author" name="author" th:field="*{author}" type="text" class="validate" />
<label for="author">Author</label>
</div>
<div class="input-field col s6 m6">
<input id="title" name="title" th:field="*{title}" type="text" class="validate" />
<label for="title">Title</label>
</div>
<button id="submitCode">Submit</button>
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/41081978",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Uneven Results Doing A Sort / Orderby on Amazon SimpleDB I've been getting really uneven results trying to request a numerically sorted list of records from Amazon SimpleDB.
I am zero padding my numbers to get them selected lexicographically, but still no luck. These two queries are giving the same result, for example:
select * from cbcallers where calls_completed is not null order by calls_completed desc
select * from cbcallers where calls_completed is not null order by calls_completed asc
However, I am getting the correct results using Amazon's query language:
['calls_completed'starts-with ''] sort 'calls_completed' desc
And last week, I was getting different (unordered) results from this query on the same dataset. Anyone have any idea what's up? Is my query jacked?
The dataset looks like this:
Sdb-Item-Name, calls_completed, name, icon
8uda23sd7, 0000002, john smith, /myimgicon.jpg
8uda5asd3, 0000015, john smarts, /myimgicon2.jpg
8udassad8, 0000550, john smoogie, /myimgicon3.jpg
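For reference, the zero-padding scheme itself should sort correctly: SimpleDB compares attribute values as strings, so padding makes lexicographic order agree with numeric order. A quick sanity check:

```python
# calls_completed is stored as a 7-digit zero-padded string, as in the data above.
def pad(n: int, width: int = 7) -> str:
    return str(n).zfill(width)

values = [550, 2, 15]
padded = sorted(pad(v) for v in values)
print(padded)  # ['0000002', '0000015', '0000550']
```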
A: Your query looks completely correct. I loaded your data and used your queries verbatim and got just what you would expect.
Ascending:
select * from cbcallers where calls_completed is not null order by calls_completed asc
[
Item 8uda23sd7
icon: myimgicon.jpg
name: john smith
calls_completed: 0000002,
Item 8uda5asd3
icon: myimgicon2.jpg
name: john smarts
calls_completed: 0000015,
Item 8udassad8
icon: myimgicon3.jpg
name: john smoogie
calls_completed: 0000550]
Descending:
select * from cbcallers where calls_completed is not null order by calls_completed desc
[
Item 8udassad8
icon: myimgicon3.jpg
name: john smoogie
calls_completed: 0000550,
Item 8uda5asd3
icon: myimgicon2.jpg
name: john smarts
calls_completed: 0000015,
Item 8uda23sd7
icon: myimgicon.jpg
name: john smith
calls_completed: 0000002]
Maybe it is an issue with your SimpleDB client, which one are you using, do you know if it is using the latest SimpleDB API version ("2009-04-15")?
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/2249263",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: Handling user registrations Using only Django without the need to be redirected to other template This is my view
class UserRegistrationView(FormView):
template_name = "register/register_form.html"
form_class = None
extra_fields_form_page_slug = ""
email_template = "register/account_activation_email.html"
def get_form(self, form_class=None):
form = super().get_form(form_class)
add_extra_fields(
form.fields, form.helper.layout.fields, self.extra_fields_form_page_slug
)
return form
def get_extra_form(self):
return FormPage.objects.filter(slug=self.extra_fields_form_page_slug).first()
def form_valid(self, form):
user = form.save(commit=False)
user.is_active = False
user.save()
email = form.cleaned_data.get("email")
email_template = "register/account_activation_email.html"
send_confirmation_email(
self, "Activate Your Account", user, email, email_template
)
return render(self.request, "register/after_submission.html")
This is working fine registration-wise, but not in other respects, because I have the registration form as a pop-up window in the header, so it's available throughout the whole website. My problems are:
*
*if the user has a successful registration, they are redirected to the specified template "after_submission"; what I want is to stay on the same page and display a pop-up with some message
*if the user has an unsuccessful registration, they are redirected to the main template "register_form.html" with the errors displayed there; what I want is to stay on the same page and display the error messages on the pop-up form if the user opens it again
Is this achievable using only Django, or must I throw some JS in there? And if JS is the answer, can you give me a quick insight into how to do it?
A: You can redirect to the same page in your CBV :
from django.http import HttpResponseRedirect
return HttpResponseRedirect(self.request.path_info)
As stated in the comment, your solution requires Ajax and JS. You can however redirect to the same page, but that makes a new request and refreshes the whole page, which might not be ideal for you.
There is a tutorial to work with Django and Ajax Ajax / Django Tutorial
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/68226188",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Data return is undefined vue.js? I have a problem: I don't understand why I can't recover my recette.
My route in Node is OK and returns the right result, but in Vue my code doesn't work and my res is undefined.
What I am trying to do is to filter my recipes by retrieving only the recipes which have recipe category 1. I made a route in Node which works and returns exactly what I want, but on the Vue side I have a problem.
NODE.JS
router.get("/recette_light", (req, res) => {
db.cat_recette
.findOne({
where: { id: req.body.id },
include: { all: true },
})
.then((cat_recette) => {
if (cat_recette) {
res.status(200).json({
cat_recette: cat_recette,
});
} else {
res.json("il n'y a pas de cat_recettes");
}
})
.catch(err => {
res.json(err);
});
});
VUE.JS
<div>
<navbar_user />
<mylight :recette="recette" :user="user" />
<myfooter />
</div>
</template>
<script>
import navbar_user from "../components/navbar_user";
import mylight from "../components/light";
import myfooter from "../components/myfooter";
export default {
name: "",
data() {
return {
recette: "",
user: "",
};
},
components: {
navbar_user,
mylight,
myfooter,
},
created: function() {
this.axios
.get("http://localhost:3000/recette/rec_recette/:1")
.then((res) => {
(this.cat_recette.recette = res.data.recette),
this.axios
.get(
"http://localhost:3000/user/rec_user/" +
localStorage.getItem("email")
)
.then((res) => {
this.user = res.data.user;
});
});
},
};
Thank you for your help, I'm a novice.
A: On the frontend, you are making an HTTP request with the GET method, which has no body. On the backend, req.body.id will be undefined because there is no request body in the first place.
So you have several options:
First: use a POST request on the front end
axios({
method: 'POST',
url:"http://localhost:3000/recette/rec_recette",
headers: {},
data: {
id: 'votre_id_ici', // This is the body part
}
});
The backend code to handle the post request:
(Use async/await to make the code cleaner)
router.post('/recette_light', async (req, res) => {
try {
// Assuming you are searching for your recette using MongoDB doc.id
const cat_recette = await db.cat_recette.findById(req.body.id);
// If there are no matching docs.
if (!cat_recette) {
return res.json("il n'y a pas de cat_recettes");
}
// Otherwise send the data to the frontend
res.status(200).json({ cat_recette: cat_recette, });
} catch (err) {
console.log(err);
res.status(500).json({ msg: 'Server Error', });
}
});
Second: use the GET method still but with URL parameters
axios.get("http://localhost:3000/recette/rec_recette/votre_id_ici")
The backend code to handle it:
// Note the /:id at the end
router.get('/recette_light/:id', async (req, res) => {
try {
// Assuming you are searching for your recette using MongoDB doc. id
// Note the req.params.id here not req.body.id
const cat_recette = await db.cat_recette.findById(req.params.id);
// If there are no matching docs.
if (!cat_recette) {
return res.json("il n'y a pas de cat_recettes");
}
res.status(200).json({ cat_recette: cat_recette, });
} catch (err) {
console.log(err);
res.status(500).json({ msg: 'Server Error', });
}
});
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/65675218",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-1"
}
|
Q: iOS project can't run when AIR SDK upgraded to 3.6 That is very simple flash iOS project with FB 4.7, AIR SDK 3.6, and running on iPad2(5.1.1).
When I debug with the default AIR SDK 3.4 of FB 4.7, the project runs OK. But when I upgrade the AIR SDK to 3.6 according to the Adobe help, the FB 4.7 debugger can't connect to the running process, and the "hello world" doesn't show either.
Any suggestions?
A: I found out what was wrong.
The key is step 4 in the Adobe AIR SDK Upgrade Help:
Copy the contents from the aot folder (AIRSDK_back_up\lib\aot) of the AIR SDK backup to the aot folder of the newly created AIR SDK (AIRSDK\lib\aot).
Don't copy all contents from the aot folder, just copy strip from lib\aot\bin to the aot bin folder of the newly created AIR SDK 3.6 (AIRSDK\lib\aot\bin).
Then I restarted FB 4.7, and the "hello world!" showed again.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/15356152",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: How to add some known objects to ace editors syntax checker? we're using the ACE editor to write javascript code that's interpreted on the server side. So the server has a JavaScript interface and can execute submitted code to accomplish some task from the outside.
The server implements some new objects that are not known by ACE. So ACE shows a warning if one of these unknown objects is used in code.
What is the correct way to tell ACE that there are some new objects, variables and functions? I already took a look into worker-javascript.js, but I DON'T want to reimplement all that stuff (updating ACE would get even harder then). Is there any interface I can use?
A: Ace uses jshint, which has an option to set a list of global variables.
Ace supports a changeOptions call on the worker to modify the default options it passes to jshint, but doesn't have a way to pass a list of globals.
You can add it by changing line at https://github.com/ajaxorg/ace/blob/v1.1.8/lib/ace/mode/javascript_worker.js#L130
to lint(value, this.options, this.options.globals);
and from your code calling
editor.session.$worker.call("changeOptions", [{
globals: {foo: false, bar: false...},
undef: true, // enable warnings on undefined variables
// other jshint options go here check jshint site for more info
}]);
the change to worker.js#L130 is simple enough and should be accepted if you make a pull request to ace
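For reference, the same two options expressed as a standalone jshint config file (.jshintrc) look like this (foo and bar are placeholder global names):

```json
{
  "undef": true,
  "globals": {
    "foo": false,
    "bar": false
  }
}
```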
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/28434455",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: junit and ant issue. Cannot start test When I am running a junit test from ant I always get:
D:\metrike>ant test
Buildfile: build.xml
init:
compile:
test:
[junit] Running jmt.test.TestCodeBase
[junit] Testsuite: jmt.test.TestCodeBase
[junit] Tests run: 1, Failures: 1, Errors: 0, Time elapsed: 0,046 sec
[junit] Tests run: 1, Failures: 1, Errors: 0, Time elapsed: 0,046 sec
[junit]
[junit] Testcase: warning(junit.framework.TestSuite$1): FAILED
[junit] No tests found in jmt.test.TestCodeBase
[junit] junit.framework.AssertionFailedError: No tests found in jmt.test.TestCodeBase
[junit]
[junit]
[junit] Test jmt.test.TestCodeBase FAILED
This is the ant file:
<target name="test" depends="compile">
<mkdir dir="target/test-results"/>
<junit haltonfailure="no" printsummary="on">
<classpath >
<pathelement location="target/classes"/>
<pathelement location="Libraries/junit3.8.1/junit.jar"/>
</classpath>
<formatter type="brief" usefile="false"/>
<formatter type="xml" />
<batchtest todir="target/test-results" >
<fileset dir="target/classes" includes="**/TestCodeBase.class"/>
</batchtest>
</junit>
</target>
But when I manually run the test, junit test works:
D:\metrike>cd target
D:\metrike\target>cd classes
D:\metrike\target\classes>java jmt.test.TestCodeBase
fatsource.jar eclapsed : 2297 ms
over all : 2297 ms
contains 3073 classes and 3700 referred classes, 35968 referred methods, 22351 referred fields
Memory usage: 21326 KB
Post gc-memory usage: 19506 KB
contains 3073 classes and 3700 referred classes, 35968 referred methods, 22351 referred fields
Can someone please tell me what I am doing wrong? I have been trying to fix this for a whole day but I cannot find the solution.
A: 1) Does jmt.test.TestCodeBase extend TestCase (junit.framework.TestCase)?
If not, it will need to in order to be picked up by the junit ant task.
2) Is the class written as a junit TestCase, or is it just called from the main method? See this link for an example of writing simple tests in Junit3 style. For Junit4, just add @Test above the methods.
3) Are the test methods in Junit3 style (every method starts with test) or Junit4 style (every method has a @Test above it)?
If Junit3, you should be good to go. If Junit4, you need to include the Junit4 test library in your ant classpath, rather than using junit3.8.1.
A: It seems your test class is not actually a JUnit test class. When you run it manually, you are not running it as a test, but as a regular Java application. The class has a main method, right? To run as a JUnit 3 (which you seem to be using) test, the class needs to extend TestCase and have one or more public void methods whose names start with 'test'. For testing, I would try running the class as a JUnit test in the IDE.
A: Please post sample code of jmt.test.TestCodeBase, esp. class definition and one of the test methods.
I looks like you use public static int main() instead of JUnit convention public void testFoo() methods. If you're using JUnit4, your test methods should have the @Test annotation.
JUnit tests usually cannot be run just with java <Testclass>
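As an aside, JUnit 3's discovery rule (public void methods whose names start with test) can be illustrated with a small reflection sketch that has no JUnit dependency (class and method names here are made up for illustration):

```java
import java.lang.reflect.Method;
import java.util.ArrayList;
import java.util.List;

// Toy class: only testFoo and testBar match JUnit 3's naming convention.
class MyTests {
    public void testFoo() {}
    public void testBar() {}
    public void helper() {}                  // not discovered: name doesn't start with "test"
    public static void main(String[] a) {}   // running main is not the same as running tests
}

class Discovery {
    // Mimics the rule: public void no-arg methods named test*.
    static List<String> discover(Class<?> c) {
        List<String> found = new ArrayList<>();
        for (Method m : c.getDeclaredMethods()) {
            if (m.getName().startsWith("test")
                    && m.getParameterCount() == 0
                    && m.getReturnType() == void.class) {
                found.add(m.getName());
            }
        }
        return found;
    }
}
```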
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/3803380",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: Prevent new WordPress users from logging in until manually "activated"? I'm developing a plugin for WordPress which has 3 groups of users. I need to disable some users and prevent them from logging in. What I mean isn't preventing them from accessing the backend; I want to prevent them from logging in at all. For example, when they try to log in they should see a message like "this account isn't active yet".
thank you guys.
A: After some searching and looking at similar problems, I solved it like this:
First add a user meta field for the user status so we can check whether the user is active; then we can disable or enable users.
add_filter( 'authenticate', 'chk_active_user',100,2);
function chk_active_user($user, $username)
{
    // $user may be null or a WP_Error if an earlier check already failed
    if (!($user instanceof WP_User)) {
        return $user;
    }
    $user_sts = get_user_meta($user->ID, "user_active_status", true);
    if ($user_sts === "no") {
        return new WP_Error('disabled_account', 'this account is disabled');
    }
    return $user;
}
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/43223557",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
}
|
Q: How can I have a test unit in the same source file? This question is Ruby related.
Suppose I want to have the test unit for my class in the same file as its definition. Is it possible to do so? For example, if I'd pass a "--test" argument when I run the file, I'd want it to run the test unit. Otherwise, execute normally.
Imagine a file like this:
require "test/unit"
class MyClass
end
class MyTestUnit < Test::Unit::TestCase
# test MyClass here
end
if $0 == __FILE__
if ARGV.include?("--test")
# run unit test
else
# run normally
end
end
What code should I have in the #run unit test section?
A: This can be achieved with a module:
#! /usr/bin/env ruby
module Modulino
def modulino_function
return 0
end
end
if ARGV[0] == "-test"
require 'test/unit'
class ModulinoTest < Test::Unit::TestCase
include Modulino
def test_modulino_function
assert_equal(0, modulino_function)
end
end
else
puts "running"
end
or without module, actually:
#! /usr/bin/env ruby
def my_function
return 0
end
if ARGV[0] == "-test"
require 'test/unit'
class MyTest < Test::Unit::TestCase
def test_my_function
assert_equal(0, my_function)
end
end
else
  puts "running rc=#{my_function}"
end
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/4013587",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: SettingWithCopyWarning while assigning value_counts() I am trying to assign certain values to a column in a dataframe within a Python function.
df = df[(df['date'] <= max_date) & (df['date'] > min_date) | (df['x_date'] <= max_date) & (df['x_date'] > min_date)]
numbers = len(df)
df["patients"] = value_counts.patients
df["numbers"] = numbers
However, I get this error:
SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
df["patients"] = value_counts.patients
file.py:52: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
How can I get rid of this warning?
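A: The warning appears because a chained-indexing result like df[mask] may be a view of the original frame. The usual fix, sketched below on a toy frame (dates are made up, and the patients column is left out because value_counts isn't shown in the question), is to take an explicit copy before assigning new columns:

```python
import pandas as pd

# Toy frame standing in for the question's data
df = pd.DataFrame({
    "date":   pd.to_datetime(["2021-01-01", "2021-06-01", "2022-01-01"]),
    "x_date": pd.to_datetime(["2021-02-01", "2021-07-01", "2022-02-01"]),
})
min_date, max_date = pd.Timestamp("2021-01-15"), pd.Timestamp("2021-12-31")

mask = ((df["date"] <= max_date) & (df["date"] > min_date)) | \
       ((df["x_date"] <= max_date) & (df["x_date"] > min_date))
df = df[mask].copy()      # explicit copy: df is now its own frame, not a view

df["numbers"] = len(df)   # assigns without SettingWithCopyWarning
```

Alternatively, if you really do want to write back into the original frame, use `df.loc[mask, "numbers"] = ...` as the warning itself suggests.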
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/72802477",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: How to access htdocs folder when xampp can't open in mac Big Sur? I just found that I couldn't open Xampp after upgrading to Big Sur.
My question is: How do I access htdocs folder?
I'm afraid that it will override the htdocs folder if I install the latest version of Xampp.
A: Did you check the hidden folder named ".bitnami" in your home folder? If not, try to find the "xampp" folder inside ".bitnami/stackman/machines" and copy it to another folder to back up the current XAMPP data.
After installing/reinstalling XAMPP, just put the folder back in the same place, ".bitnami/stackman/machines".
Steps:
*
*Open Finder and make hidden files visible (cmd + shift + .)
*Go to folder /Users/USERNAME/.bitnami/stackman/machines and backup/copy complete xampp folder to a safe place
*Delete everything in folder /Users/USERNAME/.bitnami/stackman
*Download from https://sourceforge.net/projects/xamp...
*Install newest version of XAMPP
*Run XAMPP once for all folders to be created
*Quit XAMPP
*Rename new folder /Users/USERNAME/.bitnami/stackman/machines/xampp to /Users/USERNAME/.bitnami/stackman/machines/xampp_original
*Copy saved folder xampp to /Users/USERNAME/.bitnami/stackman/machines
*Run XAMPP
PS: If you have another Mac, it is maybe a good idea to test this there with a spare XAMPP installation first!
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/67989730",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Why don't I need to specify "typename" before a dependent type in C++20? This bit of code compiled in C++20 (using gcc 10.1) without using the typename keyword before the dependent type std::vector<T>::iterator. Why does it compile?
#include <vector>
template<typename T>
std::vector<T>::iterator // Why does this not require "typename" before it?
f() { return {}; }
int main() {
auto fptr = &f<int>;
}
code playground
A: One of the new features in C++20 is Down with typename.
In C++17, you had to provide the typename keyword in nearly all† dependent contexts to disambiguate a type from a value. But in C++20, this rule is relaxed a lot. In all contexts where you need to have a type, the typename keyword is no longer mandatory.
One such context is the return type of a function in class scope, as in your example. Others include the type in a member declaration, the type on the right-hand side of a using declaration, the parameter declaration of a lambda, the type you're passing to static_cast, etc. See the paper for the full list.
† Nearly all because base-specifiers and mem-initializer-ids were always excluded, as in:
template <typename T> struct X : T::type { }; // always ok
This is okay because, well, that needs to be a type. The paper simply extends this logic (well, it has to be a type, so let's just assume it's a type) to a lot more places that have to be types.
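A sketch of two of the newly covered contexts (the typename keywords are kept so the snippet also compiles as C++17; each one marked below may be omitted in C++20 per P0634):

```cpp
#include <vector>

// Return type of a function at namespace scope:
// this `typename` may be omitted in C++20.
template <typename T>
typename std::vector<T>::iterator f(std::vector<T>& v) {
    return v.begin();
}

// Right-hand side of an alias declaration:
// this `typename` may also be omitted in C++20.
template <typename T>
struct Holder {
    using iter = typename std::vector<T>::iterator;
};
```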
A: From the reference: since C++20, in contexts where the dependent name can unambiguously only be a type name, the typename keyword is no longer needed. In particular:
A qualified name that is used as a declaration specifier in the (top-level) decl-specifier-seq of:
a simple declaration or function definition at namespace scope
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/61990971",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "74"
}
|
Q: Log sqlite selected values I have a select query like this:
String innerSelectQuery = "SELECT * FROM " + TABLE_NAME_EVENT_TYPE_MASTER + " WHERE EventTypeKey = '" + cursor.getInt(2) + "'";
Cursor innerCursor = db.rawQuery(innerSelectQuery, null);
if (innerCursor.moveToFirst()) {
userEvent.setEventTypeKey(innerCursor.getString(1));
Log.e("tag", "EventTypeKey " + innerCursor.getString(1));
innerCursor.close();
}
I was able to log the values using innerCursor.getString(1). Is there a way to log and see the returned values in a table format, like SQL?
A: To show the full output of your cursor, use the following:
Log.v("Cursor Object", DatabaseUtils.dumpCursorToString(cursor))
A: Log.d("TAG", "Your Message : " + innerCursor.getString(1));
I hope it's helpful to you!
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/45031692",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Algorithm for finding end of a list (SAP GUI) I'm writing a script that adds elements to a list in a SAP GUI screen. Now, it seems that when using SAP GUI, nothing "exists" unless it is actually on screen, so the first step involves finding the end of the list.
I accomplished this by scrolling through each element and checking if it was blank.
Do While Not blank
If session.findById("wnd[1]/usr/tblSAPLCZDITCTRL_4010/ctxtMAPL-MATNR[2,0]").Text = "" Then blank = True
session.findById("wnd[1]/usr/tblSAPLCZDITCTRL_4010").verticalScrollbar.Position = i
i = i + 1
Loop
However, for very large existing lists, this takes a long time. I'm trying to figure out a way to find the end more quickly. Some truths/limitations I know:
*
*I'm assuming I have no knowledge of the list length.
*I cannot command the verticalScrollbar.position too far beyond the end of
the list. For ex. if the list contains 62 elements, .verticalScrollbar.Position = 100 will not work.
*In the case of the above example, SAP does NOT throw an error. Nothing happens at all, and then next line of code executes.
*All references to elements are with respect to their position on the screen. Ex, if I scroll down 5 positions, the 6th element of the overall list would actually indexed as 1.
*On the other hand, verticalScrollbar.Position is absolute
I'm thinking of doing the following (in very rough pseudocode):
i = 0
do while scrolled = true
scrolled = false
a = GUIlist[0]
verticalScrollbar.Position = i + 1000
b = GUIlist[0]
'check to see the first element shown has changed
if a <> b then
scrolled = true
i = i + 1000
end if
loop
do while scrolled = true
scrolled = false
a = GUIlist[0]
verticalScrollbar.Position = i + 500
b = GUIlist[0]
if a <> b then
scrolled = true
i = i + 500
end if
loop
...and so on until I'm iterating i by one.
Is there a generally accepted better way of doing this kind of 'search'?
Any input is appreciated.
Thanks
A: My suggestion:
session.findById("wnd[0]").sendVKey 83
myPosition = session.findById("wnd[1]/usr/tblSAPLCZDITCTRL_4010").verticalScrollbar.Position
do
if session.findById("wnd[1]/usr/tblSAPLCZDITCTRL_4010/ctxtMAPL-MATNR[2,0]").Text = "" then exit do
myPosition = myPosition + 1
session.findById("wnd[1]/usr/tblSAPLCZDITCTRL_4010").verticalScrollbar.Position = myPosition
loop
msgbox myPosition
Regards,
ScriptMan
A: Just to go to the end
max_scrollbar = session.findById("wnd[1]/usr/tblSAPLCZDITCTRL_4010").verticalScrollbar.Maximum ' Get the maximum scrollbar value
session.findById("wnd[1]/usr/tblSAPLCZDITCTRL_4010").verticalScrollbar.Position = max_scrollbar ' Go to the end
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/40112096",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Memory leak in background process on Heroku I was wondering if anyone can advise me on how to track down a memory leak / issue on a background process on Heroku.
I have one dyno worker running with a delayed_job queue, processing all sorts of different processes. From time to time, I'm getting a sudden jump in the memory consumed. Subsequent jobs then exceed the memory limit and fail, and all Hell breaks loose.
The weird thing is I can't see that the jump in memory is connected to any particular job. Here's the sort of log I see:
Aug 15 07:13:25 vemmleads heroku/worker.1: source=heroku.10054113.worker.1.4589e3f4-8208-483a-a927-67c4c1cbee46 measure=load_avg_1m val=0.00
Aug 15 07:13:25 vemmleads heroku/worker.1: source=heroku.10054113.worker.1.4589e3f4-8208-483a-a927-67c4c1cbee46 measure=load_avg_5m val=0.01
Aug 15 07:13:25 vemmleads heroku/worker.1: source=heroku.10054113.worker.1.4589e3f4-8208-483a-a927-67c4c1cbee46 measure=load_avg_15m val=0.01
Aug 15 07:13:25 vemmleads heroku/worker.1: source=heroku.10054113.worker.1.4589e3f4-8208-483a-a927-67c4c1cbee46 measure=memory_total val=133.12 units=MB
Aug 15 07:13:25 vemmleads heroku/worker.1: source=heroku.10054113.worker.1.4589e3f4-8208-483a-a927-67c4c1cbee46 measure=memory_rss val=132.23 units=MB
Aug 15 07:13:25 vemmleads heroku/worker.1: source=heroku.10054113.worker.1.4589e3f4-8208-483a-a927-67c4c1cbee46 measure=memory_cache val=0.88 units=MB
Aug 15 07:13:25 vemmleads heroku/worker.1: source=heroku.10054113.worker.1.4589e3f4-8208-483a-a927-67c4c1cbee46 measure=memory_swap val=0.01 units=MB
Aug 15 07:13:25 vemmleads heroku/worker.1: source=heroku.10054113.worker.1.4589e3f4-8208-483a-a927-67c4c1cbee46 measure=memory_pgpgin val=0 units=pages
Aug 15 07:13:25 vemmleads heroku/worker.1: source=heroku.10054113.worker.1.4589e3f4-8208-483a-a927-67c4c1cbee46 measure=memory_pgpgout val=45325 units=pages
Aug 15 07:13:25 vemmleads heroku/worker.1: source=heroku.10054113.worker.1.4589e3f4-8208-483a-a927-67c4c1cbee46 measure=diskmbytes val=0 units=MB
Aug 15 07:13:31 vemmleads heroku/web.1: source=heroku.10054113.web.1.bf5d3fae-2b1b-4e1d-a974-01d9fa4644db measure=load_avg_1m val=0.15
Aug 15 07:13:31 vemmleads heroku/web.1: source=heroku.10054113.web.1.bf5d3fae-2b1b-4e1d-a974-01d9fa4644db measure=load_avg_5m val=0.07
Aug 15 07:13:31 vemmleads heroku/web.1: source=heroku.10054113.web.1.bf5d3fae-2b1b-4e1d-a974-01d9fa4644db measure=load_avg_15m val=0.17
Aug 15 07:13:31 vemmleads heroku/web.1: source=heroku.10054113.web.1.bf5d3fae-2b1b-4e1d-a974-01d9fa4644db measure=memory_total val=110.88 units=MB
Aug 15 07:13:31 vemmleads heroku/web.1: source=heroku.10054113.web.1.bf5d3fae-2b1b-4e1d-a974-01d9fa4644db measure=memory_rss val=108.92 units=MB
Aug 15 07:13:31 vemmleads heroku/web.1: source=heroku.10054113.web.1.bf5d3fae-2b1b-4e1d-a974-01d9fa4644db measure=memory_cache val=1.94 units=MB
Aug 15 07:13:31 vemmleads heroku/web.1: source=heroku.10054113.web.1.bf5d3fae-2b1b-4e1d-a974-01d9fa4644db measure=memory_swap val=0.01 units=MB
Aug 15 07:13:31 vemmleads heroku/web.1: source=heroku.10054113.web.1.bf5d3fae-2b1b-4e1d-a974-01d9fa4644db measure=memory_pgpgin val=2908160 units=pages
Aug 15 07:13:31 vemmleads heroku/web.1: source=heroku.10054113.web.1.bf5d3fae-2b1b-4e1d-a974-01d9fa4644db measure=memory_pgpgout val=42227 units=pages
Aug 15 07:13:31 vemmleads heroku/web.1: source=heroku.10054113.web.1.bf5d3fae-2b1b-4e1d-a974-01d9fa4644db measure=diskmbytes val=0 units=MB
Aug 15 07:13:35 vemmleads app/heroku-postgres: source=HEROKU_POSTGRESQL_CHARCOAL measure.current_transaction=1008211 measure.db_size=482260088bytes measure.tables=39 measure.active-connections=6 measure.waiting-connections=0 measure.index-cache-hit-rate=0.99996 measure.table-cache-hit-rate=1
Aug 15 07:13:45 vemmleads heroku/run.2472: source=heroku.10054113.run.2472.e811164e-4413-4dcf-8560-1f998f2c2b4e measure=load_avg_1m val=0.00
Aug 15 07:13:45 vemmleads heroku/run.2472: source=heroku.10054113.run.2472.e811164e-4413-4dcf-8560-1f998f2c2b4e measure=load_avg_5m val=0.00
Aug 15 07:13:45 vemmleads heroku/run.2472: source=heroku.10054113.run.2472.e811164e-4413-4dcf-8560-1f998f2c2b4e measure=load_avg_15m val=0.14
Aug 15 07:13:45 vemmleads heroku/run.2472: source=heroku.10054113.run.2472.e811164e-4413-4dcf-8560-1f998f2c2b4e measure=memory_total val=108.00 units=MB
Aug 15 07:13:45 vemmleads heroku/run.2472: source=heroku.10054113.run.2472.e811164e-4413-4dcf-8560-1f998f2c2b4e measure=memory_rss val=107.85 units=MB
Aug 15 07:13:45 vemmleads heroku/run.2472: source=heroku.10054113.run.2472.e811164e-4413-4dcf-8560-1f998f2c2b4e measure=memory_cache val=0.15 units=MB
Aug 15 07:13:45 vemmleads heroku/run.2472: source=heroku.10054113.run.2472.e811164e-4413-4dcf-8560-1f998f2c2b4e measure=memory_swap val=0.01 units=MB
Aug 15 07:13:45 vemmleads heroku/run.2472: source=heroku.10054113.run.2472.e811164e-4413-4dcf-8560-1f998f2c2b4e measure=memory_pgpgin val=0 units=pages
Aug 15 07:13:45 vemmleads heroku/run.2472: source=heroku.10054113.run.2472.e811164e-4413-4dcf-8560-1f998f2c2b4e measure=memory_pgpgout val=33609 units=pages
Aug 15 07:13:45 vemmleads heroku/run.2472: source=heroku.10054113.run.2472.e811164e-4413-4dcf-8560-1f998f2c2b4e measure=diskmbytes val=0 units=MB
Aug 15 07:13:46 vemmleads heroku/worker.1: source=heroku.10054113.worker.1.4589e3f4-8208-483a-a927-67c4c1cbee46 measure=load_avg_1m val=0.30
Aug 15 07:13:46 vemmleads heroku/worker.1: source=heroku.10054113.worker.1.4589e3f4-8208-483a-a927-67c4c1cbee46 measure=load_avg_5m val=0.07
Aug 15 07:13:46 vemmleads heroku/worker.1: source=heroku.10054113.worker.1.4589e3f4-8208-483a-a927-67c4c1cbee46 measure=load_avg_15m val=0.04
Aug 15 07:13:46 vemmleads heroku/worker.1: source=heroku.10054113.worker.1.4589e3f4-8208-483a-a927-67c4c1cbee46 measure=memory_total val=511.80 units=MB
Aug 15 07:13:46 vemmleads heroku/worker.1: source=heroku.10054113.worker.1.4589e3f4-8208-483a-a927-67c4c1cbee46 measure=memory_rss val=511.78 units=MB
Aug 15 07:13:46 vemmleads heroku/worker.1: source=heroku.10054113.worker.1.4589e3f4-8208-483a-a927-67c4c1cbee46 measure=memory_cache val=0.00 units=MB
Aug 15 07:13:46 vemmleads heroku/worker.1: source=heroku.10054113.worker.1.4589e3f4-8208-483a-a927-67c4c1cbee46 measure=memory_swap val=0.02 units=MB
Aug 15 07:13:46 vemmleads heroku/worker.1: source=heroku.10054113.worker.1.4589e3f4-8208-483a-a927-67c4c1cbee46 measure=memory_pgpgin val=27303936 units=pages
Aug 15 07:13:46 vemmleads heroku/worker.1: source=heroku.10054113.worker.1.4589e3f4-8208-483a-a927-67c4c1cbee46 measure=memory_pgpgout val=154826 units=pages
Aug 15 07:13:46 vemmleads heroku/worker.1: source=heroku.10054113.worker.1.4589e3f4-8208-483a-a927-67c4c1cbee46 measure=diskmbytes val=0 units=MB
The memory usage of worker.1 jumps from 133.12 to 511.80 for no apparent reason. It doesn't look like any jobs were processed during that time, so it's hard to understand where that giant chomp of memory comes from. Some way later in the log, worker.1 hits the memory limit and fails.
I have NewRelic Pro running. It doesn't help at all - in fact it doesn't even create alerts for the repeated memory errors. The above Heroku logs give me no more information.
Any ideas or pointers about what to investigate next would be appreciated.
Thanks
Simon
A: There isn't enough information here to pinpoint what's going on.
The most common cause of memory leaks in Rails applications (especially in asynchronous background jobs) is a failure to iterate through large database collections incrementally. For example, loading all User records with a statement like User.all
For example, if you have a background job that is going through every User record in the database, you should use User.find_each() or User.find_in_batches() to process these records in chunks (the default is 1000 for ActiveRecord).
This limits the working set of objects loaded into memory while still processing all of the records.
You should look for un-bounded database lookups that could be loading huge numbers of objects.
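A plain-Ruby sketch of the batching pattern (RECORDS and fetch_batch are stand-ins for the users table and a LIMIT/OFFSET query; the batch size is illustrative):

```ruby
# Only batch_size records are "loaded" at any one time,
# which keeps the working set of objects bounded.
RECORDS = (1..2500).to_a

def fetch_batch(offset, limit)
  RECORDS[offset, limit] || []   # stand-in for SELECT ... LIMIT limit OFFSET offset
end

def each_in_batches(batch_size: 1000)
  offset = 0
  loop do
    batch = fetch_batch(offset, batch_size)
    break if batch.empty?
    batch.each { |rec| yield rec }
    offset += batch.size
  end
end
```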
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/18255254",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: SQL Dynamic ASC and DESC I have the following SQL statement where the order by clause is passed dynamically.
How can I pass 'order by asc and desc' dynamically to SQL?
SELECT table1.prod_id,table2.prod_name from table1 left outer join table2
ON table1.prod1 = table2.prod_id
ORDER BY CASE WHEN :odb = 1 THEN prod_id END
I would like to pass order by asc or desc to the above SQL dynamically;
how can I do this?
A: You can do solutions like @TonyAndrews by manipulating numeric or data values. For VARCHAR2 an alternative to dynamic SQL could be to have two expressions:
order by
case when :sorting='ASC' then col1 end ASC,
case when :sorting='DESC' then col1 end DESC
When :sorting has the value 'ASC' the result of that ORDER BY becomes like if it had been:
order by
col1 ASC,
NULL DESC
When :sorting has the value 'DESC' the result of that ORDER BY becomes like if it had been:
order by
NULL ASC,
col1 DESC
One downside to this method: in cases where the optimizer could otherwise skip a SORT operation because an index already delivers the data in the desired order, that optimization will not happen with the CASE method. It mandates a sorting operation no matter what.
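The two-expression trick can be checked with a minimal sqlite3 sketch in Python (the table and column names are made up for the demo; the same CASE logic applies in Oracle):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (col1 TEXT)")
conn.executemany("INSERT INTO t VALUES (?)", [("b",), ("a",), ("c",)])

# Exactly the pattern from the answer: one CASE feeds the ASC key,
# the other feeds the DESC key; the inactive one is NULL for every row.
query = """
    SELECT col1 FROM t
    ORDER BY
      CASE WHEN :sorting = 'ASC'  THEN col1 END ASC,
      CASE WHEN :sorting = 'DESC' THEN col1 END DESC
"""
asc_rows  = [r[0] for r in conn.execute(query, {"sorting": "ASC"})]
desc_rows = [r[0] for r in conn.execute(query, {"sorting": "DESC"})]
print(asc_rows, desc_rows)  # ['a', 'b', 'c'] ['c', 'b', 'a']
```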
A: If the column you are sorting by is numeric then you could do this:
order by case when :dir='ASC' then numcol ELSE -numcol END
For a date column you could do:
order by case when :dir='ASC' then (datecol - date '1901-01-01')
else (date '4000-12-31' - datecol) end
I can't think of a sensible way for a VARCHAR2 column, other than using dynamic SQL to construct the query (which would work for any data type of course).
A: This will work for all data types:
SELECT *
FROM (SELECT table.*
,ROW_NUMBER() OVER (ORDER BY prod_id) sort_asc
,ROW_NUMBER() OVER (ORDER BY prod_id DESC) sort_desc
FROM table)
ORDER BY CASE WHEN :odb = 1 THEN sort_asc ELSE sort_desc END
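A sqlite3 sketch of the same window-function approach (ROW_NUMBER needs SQLite ≥ 3.25; the table and bind-variable names follow the answer above and are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (prod_id INTEGER)")
conn.executemany("INSERT INTO products VALUES (?)", [(2,), (1,), (3,)])

# Precompute both rankings, then let the bind variable pick one.
query = """
    SELECT prod_id FROM (
        SELECT prod_id,
               ROW_NUMBER() OVER (ORDER BY prod_id)      AS sort_asc,
               ROW_NUMBER() OVER (ORDER BY prod_id DESC) AS sort_desc
        FROM products
    )
    ORDER BY CASE WHEN :odb = 1 THEN sort_asc ELSE sort_desc END
"""
print([r[0] for r in conn.execute(query, {"odb": 1})])  # [1, 2, 3]
print([r[0] for r in conn.execute(query, {"odb": 0})])  # [3, 2, 1]
```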
A: Unfortunately, you cannot do it dynamically, because Oracle builds the execution plan based on these ASC and DESC keywords.
What you can do is changing the ORDER BY clause to this:
SELECT * table
ORDER BY :order_param
And pass :odb value to :order_param if you want to order it by ASC (bigger :odb values would be least in order) or 1/:odb if you want to order by DESC (bigger :odb values would produce smaller 1/:odb values and would appear on top of the result).
And, as always, you can generate the query dynamically in a stored procedure:
IF ... THEN
EXECUTE IMMEDIATE 'SELECT * table ORDER BY CASE WHEN :odb = 1 THEN 1 END ASC';
ELSE
EXECUTE IMMEDIATE 'SELECT * table ORDER BY CASE WHEN :odb = 1 THEN 1 END DESC';
END IF;
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/27015623",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: paypal chained payment ipn not returning notification I am using the following code for PayPal chained payment
require_once ("paypalplatform.php");
$amount=100;
$amt1=($amount * 10 )/100;
$amt2=$amount-$amt1;
$actionType = "PAY";
$cancelUrl = "http://test.com/test";
$returnUrl = "http://test.com/test";
$currencyCode = "USD";
$receiverEmailArray = array(
'a***********_per@gmail.com',
'a***********_biz@gmail.com'
);
$receiverAmountArray = array(
$amt1,
$amt2
);
$receiverPrimaryArray = array();
$receiverInvoiceIdArray = array(
'1',
'2'
);
$senderEmail = "a************_per@gmail.com";
$feesPayer = "";
$ipnNotificationUrl = "http://test.com/paypal/buynow.php";
$memo = "";
$pin = "agalameex";
$preapprovalKey = "";
$reverseAllParallelPaymentsOnError = "";
$trackingId = generateTrackingID();
$resArray = CallPay ($actionType, $cancelUrl, $returnUrl, $currencyCode, $receiverEmailArray,
$receiverAmountArray, $receiverPrimaryArray, $receiverInvoiceIdArray,
$feesPayer, $ipnNotificationUrl, $memo, $pin, $preapprovalKey,
$reverseAllParallelPaymentsOnError, $senderEmail, $trackingId
);
$ack = strtoupper($resArray["responseEnvelope.ack"]);
if($ack=="SUCCESS")
{
if ("" == $preapprovalKey)
{
// redirect for web approval flow
$cmd = "cmd=_ap-payment&paykey=" . urldecode($resArray["payKey"]);
RedirectToPayPal ( $cmd );
}
else
{
// payKey is the key that you can use to identify the result from this Pay call
$payKey = urldecode($resArray["payKey"]);
// paymentExecStatus is the status of the payment
$paymentExecStatus = urldecode($resArray["paymentExecStatus"]);
}
}
else
{
//Display a user friendly Error on the page using any of the following error information returned by PayPal
//TODO - There can be more than 1 error, so check for "error(1).errorId", then "error(2).errorId", and so on until you find no more errors.
$ErrorCode = urldecode($resArray["error(0).errorId"]);
$ErrorMsg = urldecode($resArray["error(0).message"]);
$ErrorDomain = urldecode($resArray["error(0).domain"]);
$ErrorSeverity = urldecode($resArray["error(0).severity"]);
$ErrorCategory = urldecode($resArray["error(0).category"]);
echo "Preapproval API call failed. ";
echo "Detailed Error Message: " . $ErrorMsg;
echo "Error Code: " . $ErrorCode;
echo "Error Severity: " . $ErrorSeverity;
echo "Error Domain: " . $ErrorDomain;
echo "Error Category: " . $ErrorCategory;
}
In the above code everything works fine except the return notification ($ipnNotificationUrl): I am not getting any notification at the $ipnNotificationUrl when a payment is done. Can anybody help me with this?
A: Check your IPN history in PayPal. If it shows anything other than 200 response code you know something is wrong with your IPN listener.
You can check your web server logs to see exactly what error is happening when the script is hit.
Alternatively, you could set up a simple HTML form with hidden fields that match what you'd expect to get from PayPal. Then you can submit it in a browser and see the result on screen.
Keep in mind that testing this way will result in an INVALID response back from PayPal because the IPN data did not come from their server, but you can adjust for that accordingly for testing purposes, get your issues fixed, and then you'll be good to go.
A: Debugging tips: check your IPN configuration in the PayPal backend, and replace your listener with a simple script that just records when it's called and with what input.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/14237809",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Does python allow calling of an instance variable name from an instance method? I want to know if there is a way in Python to get the name an instance is bound to. For example, if I define a class
>>>class A(object):
... def get_instance_name(self):
... return # The name of the instance variable
>>>obj = A()
>>>obj.get_instance_name()
obj
>>>blah = A()
>>>blah.get_instance_name()
blah
A: Raise an exception. Not only is it the appropriate way to signal an error, it's also more useful for debugging. The traceback includes the line which did the method call but also additional lines, line numbers, function names, etc. which are more useful for debugging than just a variable name. Example:
class A:
def do(self, x):
if x < 0:
raise ValueError("Negative x")
def wrong(a, x):
a.do(-x)
wrong(A(), 1)
This gives a traceback similar to this, if the exception isn't caught:
Traceback (most recent call last):
File "...", line 1, in <module>
wrong(A(), 1)
File "...", line 7, in wrong
a.do(-x)
File "...", line 4, in do
raise ValueError("Negative x")
ValueError: Negative x
You can also use the traceback module to get this information programmatically, even without an exception (print_stack and friends).
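For instance, traceback.extract_stack() returns FrameSummary objects carrying the filename, line number and function name of every active frame, with no exception in flight (a small illustrative sketch):

```python
import traceback

def inner():
    # No exception needed: extract_stack() inspects the live call stack.
    return [frame.name for frame in traceback.extract_stack()]

def outer():
    return inner()

names = outer()
print(names[-2:])  # ['outer', 'inner']
```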
A: globals() returns a dictionary that represents the namespace of the module (the namespace is not this dictionary; the dictionary only represents it):
class A(object):
def get_instance_name(self):
for name,ob in globals().iteritems():
if ob is self:
return name
obj = A()
print obj.get_instance_name()
blah = A()
print blah.get_instance_name()
tu = (obj,blah)
print [x.get_instance_name() for x in tu]
result
obj
blah
['obj', 'blah']
.
EDIT
Taking account of the remarks, I wrote this new code:
class A(object):
def rondo(self,nameinst,namespace,li,s,seen):
for namea,a in namespace.iteritems():
if a is self:
li.append(nameinst+s+namea)
if namea=='__builtins__':
#this condition prevents the execution to go
# in the following section elif, so that self
# isn't searched among the cascading attributes
# of the builtin objects and the attributes.
# This is to avoid to explore all the big tree
# of builtin objects and their cascading attributes.
# It supposes that every builtin object has not
# received the instance, of which the names are
# searched, as a new attribute. This makes sense.
for bn,b in __builtins__.__dict__.iteritems():
if b is self:
li.append(nameinst+'-'+bn)
elif hasattr(a,'__dict__') \
and not any(n+s+namea in seen for n in seen)\
and not any(n+s+namea in li for n in li):
seen.append(nameinst+s+namea)
self.rondo(nameinst+s+namea,a.__dict__,li,'.',seen)
else:
seen.append(nameinst+s+namea)
def get_instance_name(self):
li = []
seen = []
self.rondo('',globals(),li,'',seen)
return li if li else None
With the following
bumbum = A()
blah = A()
print "bumbum's names:\n",bumbum.get_instance_name()
print "\nmap(lambda y:y.get_instance_name(), (bumbum,blah) :\n",map(lambda y:y.get_instance_name(), (bumbum,blah))
print "\n[y.get_instance_name() for y in (bumbum,blah)] :\n",[y.get_instance_name() for y in (bumbum,blah)]
the result is
bumbum's names:
['bumbum']
map(lambda y:y.get_instance_name(), (bumbum,blah) :
[['bumbum'], ['blah']]
[y.get_instance_name() for y in (bumbum,blah)] :
[['bumbum', 'y'], ['blah', 'y']]
The second list comprehension shows that the function get_instance_name() must be used with care: in the list comprehension, the identifier y is assigned in turn to every element of (bumbum, blah), and the function then finds it as a name of the instance!
.
Now, a more complex situation:
ahah = A() # ahah : first name for this instance
class B(object):
pass
bobo = B()
bobo.x = ahah # bobo.x : second name for ahah
jupiter = bobo.x # jupiter : third name for ahah
class C(object):
def __init__(self):
self.azerty = jupiter # fourth name for ahah
ccc = C()
kkk = ccc.azerty # kkk : fifth name for ahah
bobo.x.inxnum = 1005
bobo.x.inxwhat = kkk # bobo.x.inxwhat : fifth name for ahah
# Since bobo.x is instance ahah, this instruction also
# creates attribute inxwhat in ahah instance's __dict__ .
# Consequently, instance ahah having already 5 names,
# this instruction adds 5 additional names, each one
# ending with .inxwhat
# By the way, this kkk being ahah itself, it results that ahah
# is the value of its own attribute inxwhat.
print ahah.get_instance_name()
result
['bobo.x', 'bobo.x.inxwhat',
'ahah', 'ahah.inxwhat',
'jupiter', 'jupiter.inxwhat',
'kkk', 'kkk.inxwhat',
'ccc.azerty', 'ccc.azerty.inxwhat']
I agree that this solution is a little heavy, and that if a coder thinks he needs such a heavy function, the algorithm is probably not optimal. But I find it interesting to see that this is possible in Python, even though it doesn't seem obvious.
I say heavy, not hacky; by the way, I don't find it hacky.
A: No, you can't. Objects can have any number of names, so the question doesn't even make sense. Consider:
a1 = a2 = a3 = A()
What is the name of the instance of A()?
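The point is easy to demonstrate: names are just bindings to the object, and the object knows nothing about any of them:

```python
class A:
    pass

a1 = a2 = a3 = A()        # one object, three names
print(a1 is a2 is a3)     # True

b = a1                    # a fourth name can appear at any moment...
del a1                    # ...and a name can vanish independently
print(b is a2)            # True -- the object itself never changed
```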
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/17620589",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Import module with same package as existing one I have the following directory structure:
/some/dir
┣ mainmodule
┃ ┣ __init__.py
┃ ┗ module.py
┗ submodules
┣ __init__.py
┗ module
┣ __init__.py
┣ submodule_1.py
┣ ...
┗ submodule_n.py
Both /some/dir/mainmodule and /some/dir/submodules are not on Python's library path. Being located in the directory /some/dir/mainmodule, I want to import all modules (module.submodule_1, ..., module.submodule_n) in the directory /some/dir/submodules.
I tried the following, but I always get ImportError: No module named submodule_1:
>>> import sys
>>> sys.path.append("/some/dir/submodules")
>>> import module.submodule_1
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: No module named submodule_1
>>>
The problem seems to be that module.py in /some/dir/mainmodule has the same name as the top-level package of the modules in /some/dir/submodules. Renaming module.py or the package solves the issue, but since this is widely used legacy code I'm working on, I don't know whether there are undocumented references to these names. So I'm looking for a way to solve this without renaming any files.
A: Use the following line, so that the new path is searched before the directory containing the shadowing module.py:
sys.path.insert(0, '/some/dir/submodules')
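Why insert(0, ...) rather than append(): Python resolves `import module` against sys.path in order, so the directory holding the `module` package must come before any directory holding a shadowing module.py. A self-contained sketch of the effect (the temporary paths stand in for the question's layout):

```python
import os
import sys
import tempfile

root = tempfile.mkdtemp()
maindir = os.path.join(root, "mainmodule")
subdir = os.path.join(root, "submodules")
os.makedirs(maindir)
os.makedirs(os.path.join(subdir, "module"))

# A plain module.py that would shadow the package...
with open(os.path.join(maindir, "module.py"), "w") as f:
    f.write("WHO = 'plain module.py'\n")
# ...and the package with a submodule inside it.
open(os.path.join(subdir, "module", "__init__.py"), "w").close()
with open(os.path.join(subdir, "module", "submodule_1.py"), "w") as f:
    f.write("WHO = 'package submodule'\n")

sys.path.append(maindir)      # simulates the script's own directory
sys.path.insert(0, subdir)    # searched first, so the package wins

import module.submodule_1
print(module.submodule_1.WHO)  # package submodule
```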
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/58991987",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Overloading (c)begin/(c)end I tried to overload the (c)begin/(c)end functions for a class so as to be able to use the C++11 range-based for loop.
It works in most cases, but there is one I can't understand or solve:
for (auto const& point : fProjectData->getPoints()){ ... }
This line returns error:
Error C2662: 'MyCollection<T>::begin' : cannot convert 'this' pointer from 'const MyCollection' to 'MyCollection<T> &'
because fProjectData is a pointer to const. If I make it non-const, it works. I don't understand why, considering that cbegin() and cend() are implemented exactly like begin() and end().
Here are my functions, defined in MyCollection's header file:
/// \returns the begin iterator
typename std::list<T>::iterator begin() {
return objects.begin();
}
/// \returns the begin const iterator
typename std::list<T>::const_iterator cbegin() const {
return objects.cbegin();
}
/// \returns the end iterator
typename std::list<T>::iterator end() {
return objects.end();
}
/// \returns the end const iterator
typename std::list<T>::const_iterator cend() const {
return objects.cend();
}
Any ideas?
A: A range-based for loop (for a class-type range) looks up for begin and end functions. cbegin and cend are not considered at all:
§ 6.5.4 [stmt.ranged]/p1 *:
[...]
*
*if _RangeT is a class type, the unqualified-ids begin and end are looked up in the scope of class _RangeT as if by class member access lookup (3.4.5), and if either (or both) finds at least one declaration, begin-expr and end-expr are __range.begin() and __range.end(), respectively;
*otherwise, begin-expr and end-expr are begin(__range) and end(__range), respectively, where begin and end are looked up in the associated namespaces (3.4.2). [ Note: Ordinary unqualified lookup (3.4.1) is not performed. — end note ]
For a const-qualified range the related member functions must be const-qualified as well (or should be callable with a const-qualified instance if the latter option is in use). You'd need to introduce additional overloads:
typename std::list<T>::iterator begin() {
return objects.begin();
}
typename std::list<T>::const_iterator begin() const {
// ~~~~^
return objects.begin();
}
typename std::list<T>::const_iterator cbegin() const {
return begin();
}
typename std::list<T>::iterator end() {
return objects.end();
}
typename std::list<T>::const_iterator end() const {
// ~~~~^
return objects.end();
}
typename std::list<T>::const_iterator cend() const {
return end();
}
DEMO
* the wording comes from C++14, but the differences are unrelated to the problem as it is stated
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/31581880",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: Alter hive table add or drop column I have an ORC table in Hive and I want to drop a column from it:
ALTER TABLE table_name drop col_name;
but I am getting the following exception
Error occurred executing hive query: OK FAILED: ParseException line 1:35 mismatched input 'user_id1' expecting PARTITION near 'drop' in drop partition statement
Can anyone help me or suggest a way to do this? Note: I am using Hive 0.14.
A: There is also a "dumb" way of achieving the end goal: create a new table without the unwanted column(s). Hive's regex column matching makes this rather easy.
Here is what I would do:
-- move the old table out of the way
ALTER TABLE table RENAME TO table_to_dump;
-- make the new table without the columns to be deleted
CREATE TABLE table AS
SELECT `(col_to_remove_1|col_to_remove_2)?+.+`
FROM table_to_dump;
-- dump the table
DROP TABLE table_to_dump;
If the table in question is not too big, this should work just fine.
A: suppose you have an external table viz. organization.employee as: (not including TBLPROPERTIES)
hive> show create table organization.employee;
OK
CREATE EXTERNAL TABLE `organization.employee`(
`employee_id` bigint,
`employee_name` string,
`updated_by` string,
`updated_date` timestamp)
ROW FORMAT SERDE
'org.apache.hadoop.hive.ql.io.orc.OrcSerde'
STORED AS INPUTFORMAT
'org.apache.hadoop.hive.ql.io.orc.OrcInputFormat'
OUTPUTFORMAT
'org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat'
LOCATION
'hdfs://getnamenode/apps/hive/warehouse/organization.db/employee'
You want to remove updated_by, updated_date columns from the table. Follow these steps:
create a temp table replica of organization.employee as:
hive> create table organization.employee_temp as select * from organization.employee;
drop the main table organization.employee.
hive> drop table organization.employee;
remove the underlying data from HDFS (need to come out of hive shell)
[nameet@ip-80-108-1-111 myfile]$ hadoop fs -rm hdfs://getnamenode/apps/hive/warehouse/organization.db/employee/*
create the table with removed columns as required:
hive> CREATE EXTERNAL TABLE `organization.employee`(
`employee_id` bigint,
`employee_name` string)
ROW FORMAT SERDE
'org.apache.hadoop.hive.ql.io.orc.OrcSerde'
STORED AS INPUTFORMAT
'org.apache.hadoop.hive.ql.io.orc.OrcInputFormat'
OUTPUTFORMAT
'org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat'
LOCATION
'hdfs://getnamenode/apps/hive/warehouse/organization.db/employee'
insert the original records back into original table.
hive> insert into organization.employee
select employee_id, employee_name from organization.employee_temp;
finally drop the temp table created
hive> drop table organization.employee_temp;
A: You cannot drop a column directly from a table using the command ALTER TABLE table_name drop col_name;
The only way to drop a column is to use the REPLACE COLUMNS command. Say I have a table emp with columns id, name and dept, and I want to drop the id column. In the REPLACE COLUMNS clause you list all the columns you want to keep. The command below drops the id column from the emp table:
ALTER TABLE emp REPLACE COLUMNS( name string, dept string);
A: ALTER TABLE emp REPLACE COLUMNS( name string, dept string);
The above statement can only change the schema of a table, not the data.
A solution to this problem is to copy the data into a new table:
INSERT INTO <New Table> SELECT <selective columns> FROM <Old Table>
A: ALTER TABLE is not yet supported for non-native tables; i.e. what you get with CREATE TABLE when a STORED BY clause is specified.
check this https://cwiki.apache.org/confluence/display/Hive/StorageHandlers
A: After a lot of mistakes, and in addition to the explanations above, I would add these simpler answers.
Case 1: Add new column named new_column
ALTER TABLE schema.table_name
ADD COLUMNS (new_column INT COMMENT 'new number column');
Case 2: Rename a column new_column to no_of_days
ALTER TABLE schema.table_name
CHANGE new_column no_of_days INT;
Note that when renaming, the datatype must be stated again and should stay the same as before, INT in the example above.
A: Even below query is working for me.
Alter table tbl_name drop col_name
A: For an external table it's simple and easy.
Just drop the table, edit the CREATE TABLE schema, and finally create the table again with the new schema.
example table: aparup_test.tbl_schema_change, and we will drop the column id
steps:-
------------- show create table to fetch schema ------------------
spark.sql("""
show create table aparup_test.tbl_schema_change
""").show(100,False)
o/p:
CREATE EXTERNAL TABLE aparup_test.tbl_schema_change(name STRING, time_details TIMESTAMP, id BIGINT)
ROW FORMAT SERDE 'org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe'
WITH SERDEPROPERTIES (
'serialization.format' = '1'
)
STORED AS
INPUTFORMAT 'org.apache.hadoop.hive.ql.io.parquet.MapredParquetInputFormat'
OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.parquet.MapredParquetOutputFormat'
LOCATION 'gs://aparup_test/tbl_schema_change'
TBLPROPERTIES (
'parquet.compress' = 'snappy'
)
------------- drop table --------------------------------
spark.sql("""
drop table aparup_test.tbl_schema_change
""").show(100,False)
------------- edit create table schema by dropping column "id"------------------
spark.sql("""
CREATE EXTERNAL TABLE aparup_test.tbl_schema_change(name STRING, time_details TIMESTAMP)
ROW FORMAT SERDE 'org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe'
WITH SERDEPROPERTIES (
'serialization.format' = '1'
)
STORED AS
INPUTFORMAT 'org.apache.hadoop.hive.ql.io.parquet.MapredParquetInputFormat'
OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.parquet.MapredParquetOutputFormat'
LOCATION 'gs://aparup_test/tbl_schema_change'
TBLPROPERTIES (
'parquet.compress' = 'snappy'
)
""")
------------- sync up table schema with parquet files ------------------
spark.sql("""
msck repair table aparup_test.tbl_schema_change
""").show(100,False)
==================== DONE =====================================
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/34198114",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "34"
}
|